Dataset schema (one row per changed file in a PR; column: type, observed range):

repo_name: string (6 distinct values)
pr_number: int64 (512 to 78.9k)
pr_title: string (3 to 144 chars)
pr_description: string (0 to 30.3k chars)
author: string (2 to 21 chars)
date_created: unknown (rows contain quoted ISO 8601 timestamps)
date_merged: unknown (rows contain quoted ISO 8601 timestamps)
previous_commit: string (40 chars, commit SHA)
pr_commit: string (40 chars, commit SHA)
query: string (17 to 30.4k chars)
filepath: string (9 to 210 chars)
before_content: string (0 to 112M chars)
after_content: string (0 to 112M chars)
label: int64 (-1 to 1)
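For illustration, a hypothetical C# record mirroring one row of this schema — the field names follow the columns above, the types are a best guess from the listed ranges, and nothing here is an official client for the dataset:

```csharp
// Hypothetical row type for the schema above; names and types are assumptions
// drawn only from the listed columns and the values visible in the rows.
public sealed record PrEditRow(
    string RepoName,         // repo_name: one of 6 repositories, e.g. "dotnet/runtime"
    long PrNumber,           // pr_number
    string PrTitle,          // pr_title
    string PrDescription,    // pr_description
    string Author,           // author
    string DateCreated,      // date_created: quoted ISO 8601 timestamp in the rows
    string DateMerged,       // date_merged: quoted ISO 8601 timestamp in the rows
    string PreviousCommit,   // previous_commit: 40-char SHA before the PR
    string PrCommit,         // pr_commit: 40-char SHA of the PR commit
    string Query,            // query: appears to be pr_title + ". " + pr_description
    string Filepath,         // filepath of the changed file within the repo
    string BeforeContent,    // before_content: file text at previous_commit
    string AfterContent,     // after_content: file text at pr_commit
    int Label);              // label: -1 or 1 in the rows shown
```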
dotnet/runtime
65,899
add missing GC.SuppressFinalize(this) to FileStream.DisposeAsync
I was unable to repro #65835 locally for a few hours, but I believe it's caused by the lack of `GC.SuppressFinalize(this)` in `FileStream.DisposeAsync`; my recent changes from #64997 have just exposed the problem by adding a finalizer to `FileStream` itself.

Explanation: the default `Stream.DisposeAsync` calls `Dispose`:

https://github.com/dotnet/runtime/blob/95f7f7a026d74e0720d0dfdf2de799933b832df2/src/libraries/System.Private.CoreLib/src/System/IO/Stream.cs#L176-L180

which calls `Close`, which calls `GC.SuppressFinalize(this)`:

https://github.com/dotnet/runtime/blob/95f7f7a026d74e0720d0dfdf2de799933b832df2/src/libraries/System.Private.CoreLib/src/System/IO/Stream.cs#L158-L167

`FileStream` was overriding `DisposeAsync` but not calling `GC.SuppressFinalize(this)`:

https://github.com/dotnet/runtime/blob/95f7f7a026d74e0720d0dfdf2de799933b832df2/src/libraries/System.Private.CoreLib/src/System/IO/FileStream.cs#L500

In #65835 we can see that the buffer was actually written to disk twice. I suspect (I was not able to repro it) that it's caused by the flush performed by the finalizer. I am not 100% sure, because the finally block sets the write position to 0:

https://github.com/dotnet/runtime/blob/95f7f7a026d74e0720d0dfdf2de799933b832df2/src/libraries/System.Private.CoreLib/src/System/IO/Strategies/BufferedFileStreamStrategy.cs#L115-L121

so after `DisposeAsync` the flushing should in theory see that there is nothing to flush. Moreover, it should also observe that the handle was already closed:

https://github.com/dotnet/runtime/blob/95f7f7a026d74e0720d0dfdf2de799933b832df2/src/libraries/System.Private.CoreLib/src/System/IO/Strategies/BufferedFileStreamStrategy.cs#L125-L130

I tried really hard to write a failing unit test, but failed: all custom types deriving from `FileStream` call `BaseDisposeAsync`:

https://github.com/dotnet/runtime/blob/95f7f7a026d74e0720d0dfdf2de799933b832df2/src/libraries/System.Private.CoreLib/src/System/IO/FileStream.cs#L579

which does call the base implementation and is bug-free.
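To make the pattern concrete, here is a minimal, hypothetical C# sketch of the fix described above — a stream that overrides `DisposeAsync` with its own async cleanup and therefore must call `GC.SuppressFinalize(this)` itself. The class name and its cleanup logic are illustrative, not the actual `FileStream` source:

```csharp
using System;
using System.IO;
using System.Threading.Tasks;

// Illustrative only: mirrors the suppression pattern the PR adds,
// not the real FileStream implementation.
public class JournalStream : MemoryStream
{
    // Safety-net finalizer, analogous to the one #64997 added to FileStream.
    ~JournalStream() => Console.Error.WriteLine("finalizer ran: async dispose did not suppress it");

    public override async ValueTask DisposeAsync()
    {
        // Custom async cleanup replaces the default Stream.DisposeAsync,
        // which would have gone through Dispose() -> Close() and suppressed
        // finalization for us.
        await FlushAsync().ConfigureAwait(false);
        Dispose(disposing: true);

        // The missing call the PR adds: without it, the finalizer above
        // still runs later and may redo work (e.g. a second flush).
        GC.SuppressFinalize(this);
    }
}
```

After `await using var s = new JournalStream();`, the finalizer never runs — matching the guarantee the synchronous `Dispose` path already provided.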
adamsitnik
"2022-02-25T17:17:25Z"
"2022-03-01T13:15:47Z"
097d9ea3c1584eb8745bd0a72ebf9cd3a31f1618
3cbed4b71800709a8121e50e964ae5b02bd80b94
add missing GC.SuppressFinalize(this) to FileStream.DisposeAsync. I was unable to repro #65835 locally for a few hours, but I believe it's caused by the lack of `GC.SuppressFinalize(this)` in `FileStream.DisposeAsync`; my recent changes from #64997 have just exposed the problem by adding a finalizer to `FileStream` itself.

Explanation: the default `Stream.DisposeAsync` calls `Dispose`:

https://github.com/dotnet/runtime/blob/95f7f7a026d74e0720d0dfdf2de799933b832df2/src/libraries/System.Private.CoreLib/src/System/IO/Stream.cs#L176-L180

which calls `Close`, which calls `GC.SuppressFinalize(this)`:

https://github.com/dotnet/runtime/blob/95f7f7a026d74e0720d0dfdf2de799933b832df2/src/libraries/System.Private.CoreLib/src/System/IO/Stream.cs#L158-L167

`FileStream` was overriding `DisposeAsync` but not calling `GC.SuppressFinalize(this)`:

https://github.com/dotnet/runtime/blob/95f7f7a026d74e0720d0dfdf2de799933b832df2/src/libraries/System.Private.CoreLib/src/System/IO/FileStream.cs#L500

In #65835 we can see that the buffer was actually written to disk twice. I suspect (I was not able to repro it) that it's caused by the flush performed by the finalizer. I am not 100% sure, because the finally block sets the write position to 0:

https://github.com/dotnet/runtime/blob/95f7f7a026d74e0720d0dfdf2de799933b832df2/src/libraries/System.Private.CoreLib/src/System/IO/Strategies/BufferedFileStreamStrategy.cs#L115-L121

so after `DisposeAsync` the flushing should in theory see that there is nothing to flush. Moreover, it should also observe that the handle was already closed:

https://github.com/dotnet/runtime/blob/95f7f7a026d74e0720d0dfdf2de799933b832df2/src/libraries/System.Private.CoreLib/src/System/IO/Strategies/BufferedFileStreamStrategy.cs#L125-L130

I tried really hard to write a failing unit test, but failed: all custom types deriving from `FileStream` call `BaseDisposeAsync`:

https://github.com/dotnet/runtime/blob/95f7f7a026d74e0720d0dfdf2de799933b832df2/src/libraries/System.Private.CoreLib/src/System/IO/FileStream.cs#L579

which does call the base implementation and is bug-free.
./src/tests/JIT/HardwareIntrinsics/General/NotSupported/Vector256BooleanGetElementMaxValue.cs
// Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. /****************************************************************************** * This file is auto-generated from a template file by the GenerateTests.csx * * script in tests\src\JIT\HardwareIntrinsics\General\Shared. In order to make * * changes, please update the corresponding template and run according to the * * directions listed in the file. * ******************************************************************************/ using System; using System.Runtime.CompilerServices; using System.Runtime.InteropServices; using System.Runtime.Intrinsics; namespace JIT.HardwareIntrinsics.General { public static partial class Program { private static void Vector256BooleanGetElementMaxValue() { bool succeeded = false; try { bool result = default(Vector256<bool>).GetElement(int.MaxValue); } catch (NotSupportedException) { succeeded = true; } if (!succeeded) { TestLibrary.TestFramework.LogInformation($"Vector256BooleanGetElementMaxValue: RunNotSupportedScenario failed to throw NotSupportedException."); TestLibrary.TestFramework.LogInformation(string.Empty); throw new Exception("One or more scenarios did not complete as expected."); } } } }
// Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. /****************************************************************************** * This file is auto-generated from a template file by the GenerateTests.csx * * script in tests\src\JIT\HardwareIntrinsics\General\Shared. In order to make * * changes, please update the corresponding template and run according to the * * directions listed in the file. * ******************************************************************************/ using System; using System.Runtime.CompilerServices; using System.Runtime.InteropServices; using System.Runtime.Intrinsics; namespace JIT.HardwareIntrinsics.General { public static partial class Program { private static void Vector256BooleanGetElementMaxValue() { bool succeeded = false; try { bool result = default(Vector256<bool>).GetElement(int.MaxValue); } catch (NotSupportedException) { succeeded = true; } if (!succeeded) { TestLibrary.TestFramework.LogInformation($"Vector256BooleanGetElementMaxValue: RunNotSupportedScenario failed to throw NotSupportedException."); TestLibrary.TestFramework.LogInformation(string.Empty); throw new Exception("One or more scenarios did not complete as expected."); } } } }
-1
dotnet/runtime
65,899
add missing GC.SuppressFinalize(this) to FileStream.DisposeAsync
I was unable to repro #65835 locally for a few hours, but I believe it's caused by the lack of `GC.SuppressFinalize(this)` in `FileStream.DisposeAsync`; my recent changes from #64997 have just exposed the problem by adding a finalizer to `FileStream` itself.

Explanation: the default `Stream.DisposeAsync` calls `Dispose`:

https://github.com/dotnet/runtime/blob/95f7f7a026d74e0720d0dfdf2de799933b832df2/src/libraries/System.Private.CoreLib/src/System/IO/Stream.cs#L176-L180

which calls `Close`, which calls `GC.SuppressFinalize(this)`:

https://github.com/dotnet/runtime/blob/95f7f7a026d74e0720d0dfdf2de799933b832df2/src/libraries/System.Private.CoreLib/src/System/IO/Stream.cs#L158-L167

`FileStream` was overriding `DisposeAsync` but not calling `GC.SuppressFinalize(this)`:

https://github.com/dotnet/runtime/blob/95f7f7a026d74e0720d0dfdf2de799933b832df2/src/libraries/System.Private.CoreLib/src/System/IO/FileStream.cs#L500

In #65835 we can see that the buffer was actually written to disk twice. I suspect (I was not able to repro it) that it's caused by the flush performed by the finalizer. I am not 100% sure, because the finally block sets the write position to 0:

https://github.com/dotnet/runtime/blob/95f7f7a026d74e0720d0dfdf2de799933b832df2/src/libraries/System.Private.CoreLib/src/System/IO/Strategies/BufferedFileStreamStrategy.cs#L115-L121

so after `DisposeAsync` the flushing should in theory see that there is nothing to flush. Moreover, it should also observe that the handle was already closed:

https://github.com/dotnet/runtime/blob/95f7f7a026d74e0720d0dfdf2de799933b832df2/src/libraries/System.Private.CoreLib/src/System/IO/Strategies/BufferedFileStreamStrategy.cs#L125-L130

I tried really hard to write a failing unit test, but failed: all custom types deriving from `FileStream` call `BaseDisposeAsync`:

https://github.com/dotnet/runtime/blob/95f7f7a026d74e0720d0dfdf2de799933b832df2/src/libraries/System.Private.CoreLib/src/System/IO/FileStream.cs#L579

which does call the base implementation and is bug-free.
adamsitnik
"2022-02-25T17:17:25Z"
"2022-03-01T13:15:47Z"
097d9ea3c1584eb8745bd0a72ebf9cd3a31f1618
3cbed4b71800709a8121e50e964ae5b02bd80b94
add missing GC.SuppressFinalize(this) to FileStream.DisposeAsync. I was unable to repro #65835 locally for a few hours, but I believe it's caused by the lack of `GC.SuppressFinalize(this)` in `FileStream.DisposeAsync`; my recent changes from #64997 have just exposed the problem by adding a finalizer to `FileStream` itself.

Explanation: the default `Stream.DisposeAsync` calls `Dispose`:

https://github.com/dotnet/runtime/blob/95f7f7a026d74e0720d0dfdf2de799933b832df2/src/libraries/System.Private.CoreLib/src/System/IO/Stream.cs#L176-L180

which calls `Close`, which calls `GC.SuppressFinalize(this)`:

https://github.com/dotnet/runtime/blob/95f7f7a026d74e0720d0dfdf2de799933b832df2/src/libraries/System.Private.CoreLib/src/System/IO/Stream.cs#L158-L167

`FileStream` was overriding `DisposeAsync` but not calling `GC.SuppressFinalize(this)`:

https://github.com/dotnet/runtime/blob/95f7f7a026d74e0720d0dfdf2de799933b832df2/src/libraries/System.Private.CoreLib/src/System/IO/FileStream.cs#L500

In #65835 we can see that the buffer was actually written to disk twice. I suspect (I was not able to repro it) that it's caused by the flush performed by the finalizer. I am not 100% sure, because the finally block sets the write position to 0:

https://github.com/dotnet/runtime/blob/95f7f7a026d74e0720d0dfdf2de799933b832df2/src/libraries/System.Private.CoreLib/src/System/IO/Strategies/BufferedFileStreamStrategy.cs#L115-L121

so after `DisposeAsync` the flushing should in theory see that there is nothing to flush. Moreover, it should also observe that the handle was already closed:

https://github.com/dotnet/runtime/blob/95f7f7a026d74e0720d0dfdf2de799933b832df2/src/libraries/System.Private.CoreLib/src/System/IO/Strategies/BufferedFileStreamStrategy.cs#L125-L130

I tried really hard to write a failing unit test, but failed: all custom types deriving from `FileStream` call `BaseDisposeAsync`:

https://github.com/dotnet/runtime/blob/95f7f7a026d74e0720d0dfdf2de799933b832df2/src/libraries/System.Private.CoreLib/src/System/IO/FileStream.cs#L579

which does call the base implementation and is bug-free.
./src/mono/mono/utils/mono-signal-handler.h
/** * \file * Handle signal handler differences across platforms * * Copyright (C) 2013 Xamarin Inc * Licensed under the MIT license. See LICENSE file in the project root for full license information. */ #ifndef __MONO_SIGNAL_HANDLER_H__ #define __MONO_SIGNAL_HANDLER_H__ #include "config.h" #include <glib.h> /* * When a signal is delivered to a thread on a Krait Android device * that's in the middle of skipping over an "IT" block, such as this * one: * * 0x40184ef0 <dlfree+1308>: ldr r1, [r3, #0] * 0x40184ef2 <dlfree+1310>: add.w r5, r12, r2, lsl #3 * 0x40184ef6 <dlfree+1314>: lsls.w r2, r0, r2 * 0x40184efa <dlfree+1318>: tst r2, r1 * ### this is the IT instruction * 0x40184efc <dlfree+1320>: itt eq * 0x40184efe <dlfree+1322>: orreq r2, r1 * ### signal arrives here * 0x40184f00 <dlfree+1324>: streq r2, [r3, #0] * 0x40184f02 <dlfree+1326>: beq.n 0x40184f1a <dlfree+1350> * 0x40184f04 <dlfree+1328>: ldr r2, [r5, #8] * 0x40184f06 <dlfree+1330>: ldr r3, [r3, #16] * * then the first few (at most four, one would assume) instructions of * the signal handler (!) might be skipped. They happen to be the * push of the frame pointer and return address, so once the signal * handler has done its work, it returns into a SIGSEGV. */ #if defined (TARGET_ARM) && defined (HAVE_ARMV7) && defined (TARGET_ANDROID) #define KRAIT_IT_BUG_WORKAROUND 1 #endif #ifdef KRAIT_IT_BUG_WORKAROUND #define MONO_SIGNAL_HANDLER_FUNC(access, name, arglist) \ static void __krait_ ## name arglist; \ __attribute__ ((__naked__)) access void \ name arglist \ { \ asm volatile ( \ "mov r0, r0\n\t" \ "mov r0, r0\n\t" \ "mov r0, r0\n\t" \ "mov r0, r0\n\t" \ "b __krait_" # name \ "\n\t"); \ } \ static __attribute__ ((__used__)) void __krait_ ## name arglist #endif /* Don't use this */ #ifndef MONO_SIGNAL_HANDLER_FUNC #define MONO_SIGNAL_HANDLER_FUNC(access, name, arglist) access void name arglist #endif /* * Macros to work around signal handler differences on various platforms. * * To declare a signal handler function: * void MONO_SIG_HANDLER_SIGNATURE (handler_func) * To define a signal handler function: * MONO_SIG_HANDLER_FUNC(access, name) * To call another signal handler function: * handler_func (MONO_SIG_HANDLER_PARAMS); * To obtain the signal number: * int signo = MONO_SIG_HANDLER_GET_SIGNO (); * To obtain the signal context: * MONO_SIG_HANDLER_GET_CONTEXT (). * This will define a variable name 'ctx'. */ #ifdef HOST_WIN32 #include <windows.h> #define MONO_SIG_HANDLER_INFO_TYPE MonoWindowsSigHandlerInfo typedef struct { /* Set to FALSE to indicate chained signal handler needs run. * With vectored exceptions Windows does that for us by returning * EXCEPTION_CONTINUE_SEARCH from handler */ gboolean handled; EXCEPTION_POINTERS* ep; } MonoWindowsSigHandlerInfo; /* seh_vectored_exception_handler () passes in a CONTEXT* */ #elif defined(HOST_WASM) #define MONO_SIG_HANDLER_INFO_TYPE int #else /* sigaction */ #define MONO_SIG_HANDLER_INFO_TYPE siginfo_t #endif #define MONO_SIG_HANDLER_SIGNATURE(ftn) ftn (int _dummy, MONO_SIG_HANDLER_INFO_TYPE *_info, void *context) #define MONO_SIG_HANDLER_FUNC(access, ftn) MONO_SIGNAL_HANDLER_FUNC (access, ftn, (int _dummy, MONO_SIG_HANDLER_INFO_TYPE *_info, void *context)) #define MONO_SIG_HANDLER_PARAMS _dummy, _info, context #define MONO_SIG_HANDLER_GET_SIGNO() (_dummy) #define MONO_SIG_HANDLER_GET_INFO() (_info) #define MONO_SIG_HANDLER_GET_CONTEXT void *ctx = context; void mono_load_signames (void); const char * mono_get_signame (int signo); #endif // __MONO_SIGNAL_HANDLER_H__
/** * \file * Handle signal handler differences across platforms * * Copyright (C) 2013 Xamarin Inc * Licensed under the MIT license. See LICENSE file in the project root for full license information. */ #ifndef __MONO_SIGNAL_HANDLER_H__ #define __MONO_SIGNAL_HANDLER_H__ #include "config.h" #include <glib.h> /* * When a signal is delivered to a thread on a Krait Android device * that's in the middle of skipping over an "IT" block, such as this * one: * * 0x40184ef0 <dlfree+1308>: ldr r1, [r3, #0] * 0x40184ef2 <dlfree+1310>: add.w r5, r12, r2, lsl #3 * 0x40184ef6 <dlfree+1314>: lsls.w r2, r0, r2 * 0x40184efa <dlfree+1318>: tst r2, r1 * ### this is the IT instruction * 0x40184efc <dlfree+1320>: itt eq * 0x40184efe <dlfree+1322>: orreq r2, r1 * ### signal arrives here * 0x40184f00 <dlfree+1324>: streq r2, [r3, #0] * 0x40184f02 <dlfree+1326>: beq.n 0x40184f1a <dlfree+1350> * 0x40184f04 <dlfree+1328>: ldr r2, [r5, #8] * 0x40184f06 <dlfree+1330>: ldr r3, [r3, #16] * * then the first few (at most four, one would assume) instructions of * the signal handler (!) might be skipped. They happen to be the * push of the frame pointer and return address, so once the signal * handler has done its work, it returns into a SIGSEGV. */ #if defined (TARGET_ARM) && defined (HAVE_ARMV7) && defined (TARGET_ANDROID) #define KRAIT_IT_BUG_WORKAROUND 1 #endif #ifdef KRAIT_IT_BUG_WORKAROUND #define MONO_SIGNAL_HANDLER_FUNC(access, name, arglist) \ static void __krait_ ## name arglist; \ __attribute__ ((__naked__)) access void \ name arglist \ { \ asm volatile ( \ "mov r0, r0\n\t" \ "mov r0, r0\n\t" \ "mov r0, r0\n\t" \ "mov r0, r0\n\t" \ "b __krait_" # name \ "\n\t"); \ } \ static __attribute__ ((__used__)) void __krait_ ## name arglist #endif /* Don't use this */ #ifndef MONO_SIGNAL_HANDLER_FUNC #define MONO_SIGNAL_HANDLER_FUNC(access, name, arglist) access void name arglist #endif /* * Macros to work around signal handler differences on various platforms. * * To declare a signal handler function: * void MONO_SIG_HANDLER_SIGNATURE (handler_func) * To define a signal handler function: * MONO_SIG_HANDLER_FUNC(access, name) * To call another signal handler function: * handler_func (MONO_SIG_HANDLER_PARAMS); * To obtain the signal number: * int signo = MONO_SIG_HANDLER_GET_SIGNO (); * To obtain the signal context: * MONO_SIG_HANDLER_GET_CONTEXT (). * This will define a variable name 'ctx'. */ #ifdef HOST_WIN32 #include <windows.h> #define MONO_SIG_HANDLER_INFO_TYPE MonoWindowsSigHandlerInfo typedef struct { /* Set to FALSE to indicate chained signal handler needs run. * With vectored exceptions Windows does that for us by returning * EXCEPTION_CONTINUE_SEARCH from handler */ gboolean handled; EXCEPTION_POINTERS* ep; } MonoWindowsSigHandlerInfo; /* seh_vectored_exception_handler () passes in a CONTEXT* */ #elif defined(HOST_WASM) #define MONO_SIG_HANDLER_INFO_TYPE int #else /* sigaction */ #define MONO_SIG_HANDLER_INFO_TYPE siginfo_t #endif #define MONO_SIG_HANDLER_SIGNATURE(ftn) ftn (int _dummy, MONO_SIG_HANDLER_INFO_TYPE *_info, void *context) #define MONO_SIG_HANDLER_FUNC(access, ftn) MONO_SIGNAL_HANDLER_FUNC (access, ftn, (int _dummy, MONO_SIG_HANDLER_INFO_TYPE *_info, void *context)) #define MONO_SIG_HANDLER_PARAMS _dummy, _info, context #define MONO_SIG_HANDLER_GET_SIGNO() (_dummy) #define MONO_SIG_HANDLER_GET_INFO() (_info) #define MONO_SIG_HANDLER_GET_CONTEXT void *ctx = context; void mono_load_signames (void); const char * mono_get_signame (int signo); #endif // __MONO_SIGNAL_HANDLER_H__
-1
dotnet/runtime
65,899
add missing GC.SuppressFinalize(this) to FileStream.DisposeAsync
I was unable to repro #65835 locally for a few hours, but I believe it's caused by the lack of `GC.SuppressFinalize(this)` in `FileStream.DisposeAsync`; my recent changes from #64997 have just exposed the problem by adding a finalizer to `FileStream` itself.

Explanation: the default `Stream.DisposeAsync` calls `Dispose`:

https://github.com/dotnet/runtime/blob/95f7f7a026d74e0720d0dfdf2de799933b832df2/src/libraries/System.Private.CoreLib/src/System/IO/Stream.cs#L176-L180

which calls `Close`, which calls `GC.SuppressFinalize(this)`:

https://github.com/dotnet/runtime/blob/95f7f7a026d74e0720d0dfdf2de799933b832df2/src/libraries/System.Private.CoreLib/src/System/IO/Stream.cs#L158-L167

`FileStream` was overriding `DisposeAsync` but not calling `GC.SuppressFinalize(this)`:

https://github.com/dotnet/runtime/blob/95f7f7a026d74e0720d0dfdf2de799933b832df2/src/libraries/System.Private.CoreLib/src/System/IO/FileStream.cs#L500

In #65835 we can see that the buffer was actually written to disk twice. I suspect (I was not able to repro it) that it's caused by the flush performed by the finalizer. I am not 100% sure, because the finally block sets the write position to 0:

https://github.com/dotnet/runtime/blob/95f7f7a026d74e0720d0dfdf2de799933b832df2/src/libraries/System.Private.CoreLib/src/System/IO/Strategies/BufferedFileStreamStrategy.cs#L115-L121

so after `DisposeAsync` the flushing should in theory see that there is nothing to flush. Moreover, it should also observe that the handle was already closed:

https://github.com/dotnet/runtime/blob/95f7f7a026d74e0720d0dfdf2de799933b832df2/src/libraries/System.Private.CoreLib/src/System/IO/Strategies/BufferedFileStreamStrategy.cs#L125-L130

I tried really hard to write a failing unit test, but failed: all custom types deriving from `FileStream` call `BaseDisposeAsync`:

https://github.com/dotnet/runtime/blob/95f7f7a026d74e0720d0dfdf2de799933b832df2/src/libraries/System.Private.CoreLib/src/System/IO/FileStream.cs#L579

which does call the base implementation and is bug-free.
adamsitnik
"2022-02-25T17:17:25Z"
"2022-03-01T13:15:47Z"
097d9ea3c1584eb8745bd0a72ebf9cd3a31f1618
3cbed4b71800709a8121e50e964ae5b02bd80b94
add missing GC.SuppressFinalize(this) to FileStream.DisposeAsync. I was unable to repro #65835 locally for a few hours, but I believe it's caused by the lack of `GC.SuppressFinalize(this)` in `FileStream.DisposeAsync`; my recent changes from #64997 have just exposed the problem by adding a finalizer to `FileStream` itself.

Explanation: the default `Stream.DisposeAsync` calls `Dispose`:

https://github.com/dotnet/runtime/blob/95f7f7a026d74e0720d0dfdf2de799933b832df2/src/libraries/System.Private.CoreLib/src/System/IO/Stream.cs#L176-L180

which calls `Close`, which calls `GC.SuppressFinalize(this)`:

https://github.com/dotnet/runtime/blob/95f7f7a026d74e0720d0dfdf2de799933b832df2/src/libraries/System.Private.CoreLib/src/System/IO/Stream.cs#L158-L167

`FileStream` was overriding `DisposeAsync` but not calling `GC.SuppressFinalize(this)`:

https://github.com/dotnet/runtime/blob/95f7f7a026d74e0720d0dfdf2de799933b832df2/src/libraries/System.Private.CoreLib/src/System/IO/FileStream.cs#L500

In #65835 we can see that the buffer was actually written to disk twice. I suspect (I was not able to repro it) that it's caused by the flush performed by the finalizer. I am not 100% sure, because the finally block sets the write position to 0:

https://github.com/dotnet/runtime/blob/95f7f7a026d74e0720d0dfdf2de799933b832df2/src/libraries/System.Private.CoreLib/src/System/IO/Strategies/BufferedFileStreamStrategy.cs#L115-L121

so after `DisposeAsync` the flushing should in theory see that there is nothing to flush. Moreover, it should also observe that the handle was already closed:

https://github.com/dotnet/runtime/blob/95f7f7a026d74e0720d0dfdf2de799933b832df2/src/libraries/System.Private.CoreLib/src/System/IO/Strategies/BufferedFileStreamStrategy.cs#L125-L130

I tried really hard to write a failing unit test, but failed: all custom types deriving from `FileStream` call `BaseDisposeAsync`:

https://github.com/dotnet/runtime/blob/95f7f7a026d74e0720d0dfdf2de799933b832df2/src/libraries/System.Private.CoreLib/src/System/IO/FileStream.cs#L579

which does call the base implementation and is bug-free.
./src/libraries/System.IO.FileSystem.Primitives/ref/System.IO.FileSystem.Primitives.cs
// Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. // ------------------------------------------------------------------------------ // Changes to this file must follow the https://aka.ms/api-review process. // ------------------------------------------------------------------------------
// Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. // ------------------------------------------------------------------------------ // Changes to this file must follow the https://aka.ms/api-review process. // ------------------------------------------------------------------------------
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently, these properties exist to categorize projects:

- `IsReferenceAssembly`: The project's parent directory is 'ref'
- `IsGeneratorProject`: The project's parent directory is 'gen'
- `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'.
- `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'.
- `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false.
- `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK
- `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK
- `IsRuntimeAssembly`: True when all of the above are false
- `IsSourceProject`: True when the project's parent directory is 'src'

The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally, the tree shows that the parent directory of a generator project often isn't 'gen' but a directory named after the project, in cases where multiple generator projects sit under 'gen'. This led to such projects being treated incorrectly and forced developers to set the "IsGeneratorProject" property manually.

### Change

- **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref'
- `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'**
- `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'.
- `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'.
- `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** properties are false.
- `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK
- `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK
- `IsSourceProject`: **True when all of the above are false.**
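As a sketch of the new classification rules (the real logic lives in the repo's MSBuild .props/.targets files; this C# classifier and its names are purely illustrative, and the NoTargets/Traversal SDK checks are omitted):

```csharp
using System;
using System.IO;
using System.Linq;

// Illustrative classifier for the new rules above; not the actual MSBuild logic.
public static class ProjectClassifier
{
    public static string Classify(string projectPath)
    {
        string[] parts = projectPath.Split(new[] { '/', '\\' }, StringSplitOptions.RemoveEmptyEntries);
        string name = Path.GetFileNameWithoutExtension(parts[^1]);
        string parent = parts.Length > 1 ? parts[^2] : "";
        string grandParent = parts.Length > 2 ? parts[^3] : "";
        bool underTests = parts[..^1].Contains("tests");

        if (parent == "ref")
            return "IsReferenceAssemblyProject";              // parent directory is 'ref'
        if (parent == "gen" || grandParent == "gen")
            return "IsGeneratorProject";                      // parent or grandparent is 'gen'
        if (underTests && (name.EndsWith(".Tests") || name.EndsWith(".UnitTests")))
            return "IsTestProject";
        if (underTests && name.EndsWith(".TrimmingTests"))
            return "IsTrimmingTestProject";
        if (underTests)
            return "IsTestSupportProject";                    // under /tests/ but not itself a test
        return "IsSourceProject";                             // true when everything above is false
    }
}
```

For example, `ProjectClassifier.Classify("System.Runtime.InteropServices/gen/DllImportGenerator/DllImportGenerator.csproj")` yields `IsGeneratorProject` — the project-named-directory-under-'gen' case the old parent-directory-only rule missed.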
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently, these properties exist to categorize projects:

- `IsReferenceAssembly`: The project's parent directory is 'ref'
- `IsGeneratorProject`: The project's parent directory is 'gen'
- `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'.
- `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'.
- `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false.
- `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK
- `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK
- `IsRuntimeAssembly`: True when all of the above are false
- `IsSourceProject`: True when the project's parent directory is 'src'

The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally, the tree shows that the parent directory of a generator project often isn't 'gen' but a directory named after the project, in cases where multiple generator projects sit under 'gen'. This led to such projects being treated incorrectly and forced developers to set the "IsGeneratorProject" property manually.

### Change

- **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref'
- `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'**
- `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'.
- `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'.
- `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** properties are false.
- `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK
- `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK
- `IsSourceProject`: **True when all of the above are false.**
./eng/codeOptimization.targets
<Project> <PropertyGroup Condition="'$(IsEligibleForNgenOptimization)' == ''"> <IsEligibleForNgenOptimization>true</IsEligibleForNgenOptimization> <IsEligibleForNgenOptimization Condition="'$(IsReferenceAssembly)' == 'true'">false</IsEligibleForNgenOptimization> <IsEligibleForNgenOptimization Condition="'$(GeneratePlatformNotSupportedAssembly)' == 'true' or '$(GeneratePlatformNotSupportedAssemblyMessage)' != ''">false</IsEligibleForNgenOptimization> <!-- There's an issue causing IBCMerge failures because of mismatched MVIDs across many of our assemblies on Mac, so disable IBCMerge optimizations on Mac for now to unblock the offical build. See issue https://github.com/dotnet/runtime/issues/33303 --> <IsEligibleForNgenOptimization Condition="'$(TargetOS)' == 'OSX' or '$(TargetsMobile)' == 'true'">false</IsEligibleForNgenOptimization> </PropertyGroup> <Target Name="SetApplyNgenOptimization" Condition="'$(IsEligibleForNgenOptimization)' == 'true'" BeforeTargets="CoreCompile"> <PropertyGroup> <IbcOptimizationDataDir Condition="'$(TargetOS)' == 'Unix' or '$(TargetOS)' == 'Linux'">$(IbcOptimizationDataDir)Linux\</IbcOptimizationDataDir> <IbcOptimizationDataDir Condition="'$(TargetOS)' == 'windows'">$(IbcOptimizationDataDir)Windows\</IbcOptimizationDataDir> </PropertyGroup> <ItemGroup> <_optimizationDataAssembly Include="$(IbcOptimizationDataDir)**\$(TargetFileName)" /> </ItemGroup> <PropertyGroup> <ApplyNgenOptimization Condition="'@(_optimizationDataAssembly)' != ''">full</ApplyNgenOptimization> </PropertyGroup> </Target> </Project>
<Project> <PropertyGroup Condition="'$(IsEligibleForNgenOptimization)' == ''"> <IsEligibleForNgenOptimization>true</IsEligibleForNgenOptimization> <IsEligibleForNgenOptimization Condition="'$(IsReferenceAssemblyProject)' == 'true'">false</IsEligibleForNgenOptimization> <IsEligibleForNgenOptimization Condition="'$(GeneratePlatformNotSupportedAssembly)' == 'true' or '$(GeneratePlatformNotSupportedAssemblyMessage)' != ''">false</IsEligibleForNgenOptimization> <!-- There's an issue causing IBCMerge failures because of mismatched MVIDs across many of our assemblies on Mac, so disable IBCMerge optimizations on Mac for now to unblock the offical build. See issue https://github.com/dotnet/runtime/issues/33303 --> <IsEligibleForNgenOptimization Condition="'$(TargetOS)' == 'OSX' or '$(TargetsMobile)' == 'true'">false</IsEligibleForNgenOptimization> </PropertyGroup> <Target Name="SetApplyNgenOptimization" Condition="'$(IsEligibleForNgenOptimization)' == 'true'" BeforeTargets="CoreCompile"> <PropertyGroup> <IbcOptimizationDataDir Condition="'$(TargetOS)' == 'Unix' or '$(TargetOS)' == 'Linux'">$(IbcOptimizationDataDir)Linux\</IbcOptimizationDataDir> <IbcOptimizationDataDir Condition="'$(TargetOS)' == 'windows'">$(IbcOptimizationDataDir)Windows\</IbcOptimizationDataDir> </PropertyGroup> <ItemGroup> <_optimizationDataAssembly Include="$(IbcOptimizationDataDir)**\$(TargetFileName)" /> </ItemGroup> <PropertyGroup> <ApplyNgenOptimization Condition="'@(_optimizationDataAssembly)' != ''">full</ApplyNgenOptimization> </PropertyGroup> </Target> </Project>
1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently, these properties exist to categorize projects:

- `IsReferenceAssembly`: The project's parent directory is 'ref'
- `IsGeneratorProject`: The project's parent directory is 'gen'
- `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'.
- `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'.
- `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false.
- `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK
- `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK
- `IsRuntimeAssembly`: True when all of the above are false
- `IsSourceProject`: True when the project's parent directory is 'src'

The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally, the tree shows that the parent directory of a generator project often isn't 'gen' but a directory named after the project, in cases where multiple generator projects sit under 'gen'. This led to such projects being treated incorrectly and forced developers to set the "IsGeneratorProject" property manually.

### Change

- **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref'
- `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'**
- `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'.
- `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'.
- `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** properties are false.
- `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK
- `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK
- `IsSourceProject`: **True when all of the above are false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently, these properties exist to categorize projects:

- `IsReferenceAssembly`: The project's parent directory is 'ref'
- `IsGeneratorProject`: The project's parent directory is 'gen'
- `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'.
- `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'.
- `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false.
- `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK
- `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK
- `IsRuntimeAssembly`: True when all of the above are false
- `IsSourceProject`: True when the project's parent directory is 'src'

The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally, the tree shows that the parent directory of a generator project often isn't 'gen' but a directory named after the project, in cases where multiple generator projects sit under 'gen'. This led to such projects being treated incorrectly and forced developers to set the "IsGeneratorProject" property manually.

### Change

- **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref'
- `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'**
- `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'.
- `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'.
- `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** properties are false.
- `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK
- `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK
- `IsSourceProject`: **True when all of the above are false.**
./eng/generators.targets
<Project> <PropertyGroup> <EnableDllImportGenerator Condition="'$(EnableDllImportGenerator)' == '' and '$(MSBuildProjectName)' == 'System.Private.CoreLib'">true</EnableDllImportGenerator> <IncludeDllImportGeneratorSources Condition="'$(IncludeDllImportGeneratorSources)' == ''">true</IncludeDllImportGeneratorSources> </PropertyGroup> <ItemGroup> <EnabledGenerators Include="DllImportGenerator" Condition="'$(EnableDllImportGenerator)' == 'true'" /> <!-- If the current project is not System.Private.CoreLib, we enable the DllImportGenerator source generator when the project is a C# source project that either: - references System.Private.CoreLib, or - references System.Runtime.InteropServices --> <EnabledGenerators Include="DllImportGenerator" Condition="'$(EnableDllImportGenerator)' == '' and '$(IsRuntimeAssembly)' == 'true' and '$(MSBuildProjectExtension)' == '.csproj' and ( ('@(Reference)' != '' and @(Reference->AnyHaveMetadataValue('Identity', 'System.Runtime.InteropServices'))) or ('@(ProjectReference)' != '' and @(ProjectReference->AnyHaveMetadataValue('Identity', '$(CoreLibProject)'))) or ('$(NetCoreAppCurrentTargetFrameworkMoniker)' == '$(TargetFrameworkMoniker)' and '$(DisableImplicitAssemblyReferences)' == 'false'))" /> <EnabledGenerators Include="DllImportGenerator" Condition="'$(EnableDllImportGenerator)' == '' and '$(IsRuntimeAssembly)' == 'true' and '$(MSBuildProjectExtension)' == '.csproj' and ('$(TargetFrameworkIdentifier)' == '.NETStandard' or '$(TargetFrameworkIdentifier)' == '.NETFramework' or ('$(TargetFrameworkIdentifier)' == '.NETCoreApp' and $([MSBuild]::VersionLessThan($(TargetFrameworkVersion), '$(NetCoreAppCurrentVersion)'))))" /> </ItemGroup> <!-- Use this complex ItemGroup-based filtering to add the ProjectReference to make sure dotnet/runtime stays compatible with NuGet Static Graph Restore. --> <ItemGroup Condition="'@(EnabledGenerators)' != '' and @(EnabledGenerators->AnyHaveMetadataValue('Identity', 'DllImportGenerator'))"> <ProjectReference Include="$(LibrariesProjectRoot)System.Runtime.InteropServices\gen\DllImportGenerator\DllImportGenerator.csproj" OutputItemType="Analyzer" ReferenceOutputAssembly="false" /> <ProjectReference Include="$(LibrariesProjectRoot)System.Runtime.InteropServices\gen\Microsoft.Interop.SourceGeneration\Microsoft.Interop.SourceGeneration.csproj" OutputItemType="Analyzer" ReferenceOutputAssembly="false" /> </ItemGroup> <ItemGroup Condition="'@(EnabledGenerators)' != '' and @(EnabledGenerators->AnyHaveMetadataValue('Identity', 'DllImportGenerator')) and '$(IncludeDllImportGeneratorSources)' == 'true'"> <Compile Include="$(LibrariesProjectRoot)Common\src\System\Runtime\InteropServices\GeneratedDllImportAttribute.cs" /> <Compile Include="$(LibrariesProjectRoot)Common\src\System\Runtime\InteropServices\StringMarshalling.cs" /> <!-- Only add the following files if we are on the latest TFM (that is, net7). 
--> <Compile Condition="'$(NetCoreAppCurrentTargetFrameworkMoniker)' == '$(TargetFrameworkMoniker)'" Include="$(LibrariesProjectRoot)Common\src\System\Runtime\InteropServices\GeneratedMarshallingAttribute.cs" /> <!-- Only add the following files if we are on the latest TFM (that is, net7) and the project is SPCL or has references to System.Runtime.CompilerServices.Unsafe and System.Memory --> <Compile Condition="'$(NetCoreAppCurrentTargetFrameworkMoniker)' == '$(TargetFrameworkMoniker)' and ( '$(MSBuildProjectName)' == 'System.Private.CoreLib' or '$(EnableDllImportGenerator)' == 'true' or ('@(Reference)' != '' and @(Reference->AnyHaveMetadataValue('Identity', 'System.Memory'))) or ('@(ProjectReference)' != '' and @(ProjectReference->AnyHaveMetadataValue('Identity', '$(CoreLibProject)'))))" Include="$(LibrariesProjectRoot)Common\src\System\Runtime\InteropServices\ArrayMarshaller.cs" /> </ItemGroup> <ItemGroup> <EnabledGenerators Include="RegexGenerator" Condition="'$(EnableRegexGenerator)' == 'true'" /> </ItemGroup> <!-- Use this complex ItemGroup-based filtering to add the ProjectReference to make sure dotnet/runtime stays compatible with NuGet Static Graph Restore. --> <ItemGroup Condition="'@(EnabledGenerators)' != '' and @(EnabledGenerators->AnyHaveMetadataValue('Identity', 'RegexGenerator'))"> <ProjectReference Include="$(LibrariesProjectRoot)System.Text.RegularExpressions/gen/System.Text.RegularExpressions.Generator.csproj" OutputItemType="Analyzer" ReferenceOutputAssembly="false" /> </ItemGroup> <Target Name="ConfigureGenerators" DependsOnTargets="ConfigureDllImportGenerator" BeforeTargets="CoreCompile" /> <!-- Microsoft.Interop.DllImportGenerator --> <Target Name="ConfigureDllImportGenerator" Condition="'@(EnabledGenerators)' != '' and @(EnabledGenerators->AnyHaveMetadataValue('Identity', 'DllImportGenerator'))" DependsOnTargets="ResolveProjectReferences" BeforeTargets="GenerateMSBuildEditorConfigFileShouldRun"> <PropertyGroup> <DllImportGenerator_UseMarshalType>true</DllImportGenerator_UseMarshalType> </PropertyGroup> </Target> <Import Project="$(LibrariesProjectRoot)System.Runtime.InteropServices/gen/DllImportGenerator/Microsoft.Interop.DllImportGenerator.props" /> </Project>
<Project> <PropertyGroup> <EnableDllImportGenerator Condition="'$(EnableDllImportGenerator)' == '' and '$(MSBuildProjectName)' == 'System.Private.CoreLib'">true</EnableDllImportGenerator> <IncludeDllImportGeneratorSources Condition="'$(IncludeDllImportGeneratorSources)' == ''">true</IncludeDllImportGeneratorSources> </PropertyGroup> <ItemGroup> <EnabledGenerators Include="DllImportGenerator" Condition="'$(EnableDllImportGenerator)' == 'true'" /> <!-- If the current project is not System.Private.CoreLib, we enable the DllImportGenerator source generator when the project is a C# source project that either: - references System.Private.CoreLib, or - references System.Runtime.InteropServices --> <EnabledGenerators Include="DllImportGenerator" Condition="'$(EnableDllImportGenerator)' == '' and '$(IsSourceProject)' == 'true' and '$(MSBuildProjectExtension)' == '.csproj' and ( ('@(Reference)' != '' and @(Reference->AnyHaveMetadataValue('Identity', 'System.Runtime.InteropServices'))) or ('@(ProjectReference)' != '' and @(ProjectReference->AnyHaveMetadataValue('Identity', '$(CoreLibProject)'))) or ('$(NetCoreAppCurrentTargetFrameworkMoniker)' == '$(TargetFrameworkMoniker)' and '$(DisableImplicitAssemblyReferences)' == 'false'))" /> <EnabledGenerators Include="DllImportGenerator" Condition="'$(EnableDllImportGenerator)' == '' and '$(IsSourceProject)' == 'true' and '$(MSBuildProjectExtension)' == '.csproj' and ('$(TargetFrameworkIdentifier)' == '.NETStandard' or '$(TargetFrameworkIdentifier)' == '.NETFramework' or ('$(TargetFrameworkIdentifier)' == '.NETCoreApp' and $([MSBuild]::VersionLessThan($(TargetFrameworkVersion), '$(NetCoreAppCurrentVersion)'))))" /> </ItemGroup> <!-- Use this complex ItemGroup-based filtering to add the ProjectReference to make sure dotnet/runtime stays compatible with NuGet Static Graph Restore. --> <ItemGroup Condition="'@(EnabledGenerators)' != '' and @(EnabledGenerators->AnyHaveMetadataValue('Identity', 'DllImportGenerator'))"> <ProjectReference Include="$(LibrariesProjectRoot)System.Runtime.InteropServices\gen\DllImportGenerator\DllImportGenerator.csproj" OutputItemType="Analyzer" ReferenceOutputAssembly="false" /> <ProjectReference Include="$(LibrariesProjectRoot)System.Runtime.InteropServices\gen\Microsoft.Interop.SourceGeneration\Microsoft.Interop.SourceGeneration.csproj" OutputItemType="Analyzer" ReferenceOutputAssembly="false" /> </ItemGroup> <ItemGroup Condition="'@(EnabledGenerators)' != '' and @(EnabledGenerators->AnyHaveMetadataValue('Identity', 'DllImportGenerator')) and '$(IncludeDllImportGeneratorSources)' == 'true'"> <Compile Include="$(LibrariesProjectRoot)Common\src\System\Runtime\InteropServices\GeneratedDllImportAttribute.cs" /> <Compile Include="$(LibrariesProjectRoot)Common\src\System\Runtime\InteropServices\StringMarshalling.cs" /> <!-- Only add the following files if we are on the latest TFM (that is, net7). 
--> <Compile Condition="'$(NetCoreAppCurrentTargetFrameworkMoniker)' == '$(TargetFrameworkMoniker)'" Include="$(LibrariesProjectRoot)Common\src\System\Runtime\InteropServices\GeneratedMarshallingAttribute.cs" /> <!-- Only add the following files if we are on the latest TFM (that is, net7) and the project is SPCL or has references to System.Runtime.CompilerServices.Unsafe and System.Memory --> <Compile Condition="'$(NetCoreAppCurrentTargetFrameworkMoniker)' == '$(TargetFrameworkMoniker)' and ( '$(MSBuildProjectName)' == 'System.Private.CoreLib' or '$(EnableDllImportGenerator)' == 'true' or ('@(Reference)' != '' and @(Reference->AnyHaveMetadataValue('Identity', 'System.Memory'))) or ('@(ProjectReference)' != '' and @(ProjectReference->AnyHaveMetadataValue('Identity', '$(CoreLibProject)'))))" Include="$(LibrariesProjectRoot)Common\src\System\Runtime\InteropServices\ArrayMarshaller.cs" /> </ItemGroup> <ItemGroup> <EnabledGenerators Include="RegexGenerator" Condition="'$(EnableRegexGenerator)' == 'true'" /> </ItemGroup> <!-- Use this complex ItemGroup-based filtering to add the ProjectReference to make sure dotnet/runtime stays compatible with NuGet Static Graph Restore. --> <ItemGroup Condition="'@(EnabledGenerators)' != '' and @(EnabledGenerators->AnyHaveMetadataValue('Identity', 'RegexGenerator'))"> <ProjectReference Include="$(LibrariesProjectRoot)System.Text.RegularExpressions/gen/System.Text.RegularExpressions.Generator.csproj" OutputItemType="Analyzer" ReferenceOutputAssembly="false" /> </ItemGroup> <Target Name="ConfigureGenerators" DependsOnTargets="ConfigureDllImportGenerator" BeforeTargets="CoreCompile" /> <!-- Microsoft.Interop.DllImportGenerator --> <Target Name="ConfigureDllImportGenerator" Condition="'@(EnabledGenerators)' != '' and @(EnabledGenerators->AnyHaveMetadataValue('Identity', 'DllImportGenerator'))" DependsOnTargets="ResolveProjectReferences" BeforeTargets="GenerateMSBuildEditorConfigFileShouldRun"> <PropertyGroup> <DllImportGenerator_UseMarshalType>true</DllImportGenerator_UseMarshalType> </PropertyGroup> </Target> <Import Project="$(LibrariesProjectRoot)System.Runtime.InteropServices/gen/DllImportGenerator/Microsoft.Interop.DllImportGenerator.props" /> </Project>
1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently, these properties exist to categorize projects:

- `IsReferenceAssembly`: The project's parent directory is 'ref'
- `IsGeneratorProject`: The project's parent directory is 'gen'
- `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'.
- `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'.
- `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false.
- `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK
- `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK
- `IsRuntimeAssembly`: True when all of the above are false
- `IsSourceProject`: True when the project's parent directory is 'src'

The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally, the tree shows that the parent directory of a generator project often isn't 'gen' but a directory named after the project, in cases where multiple generator projects sit under 'gen'. This led to such projects being treated incorrectly and forced developers to set the "IsGeneratorProject" property manually.

### Change

- **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref'
- `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'**
- `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'.
- `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'.
- `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** properties are false.
- `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK
- `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK
- `IsSourceProject`: **True when all of the above are false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently, these properties exist to categorize projects:

- `IsReferenceAssembly`: The project's parent directory is 'ref'
- `IsGeneratorProject`: The project's parent directory is 'gen'
- `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'.
- `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'.
- `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false.
- `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK
- `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK
- `IsRuntimeAssembly`: True when all of the above are false
- `IsSourceProject`: True when the project's parent directory is 'src'

The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally, the tree shows that the parent directory of a generator project often isn't 'gen' but a directory named after the project, in cases where multiple generator projects sit under 'gen'. This led to such projects being treated incorrectly and forced developers to set the "IsGeneratorProject" property manually.

### Change

- **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref'
- `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'**
- `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'.
- `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'.
- `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** properties are false.
- `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK
- `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK
- `IsSourceProject`: **True when all of the above are false.**
./eng/references.targets
<Project> <PropertyGroup Condition="'$(TargetFrameworkIdentifier)' == '.NETCoreApp'"> <!-- Disable RAR from transitively discovering dependencies for references. This is required as we don't copy dependencies over into the output directory which means RAR can't resolve them. --> <_FindDependencies>false</_FindDependencies> </PropertyGroup> <!-- Project references shouldn't be copied to the output for non test apps. --> <ItemDefinitionGroup Condition="'$(IsTestProject)' != 'true' and '$(IsTestSupportProject)' != 'true' and '$(IsGeneratorProject)' != 'true'"> <ProjectReference> <Private>false</Private> </ProjectReference> </ItemDefinitionGroup> <ItemGroup Condition="'@(ProjectReference)' != ''"> <_coreLibProjectReference Include="@(ProjectReference->WithMetadataValue('Identity', '$(CoreLibProject)'))" /> <ProjectReference Update="@(_coreLibProjectReference)" Private="false"> <SetConfiguration Condition="'$(RuntimeFlavor)' == 'CoreCLR' and '$(Configuration)' != '$(CoreCLRConfiguration)'">Configuration=$(CoreCLRConfiguration)</SetConfiguration> <SetConfiguration Condition="'$(RuntimeFlavor)' == 'Mono' and '$(Configuration)' != '$(MonoConfiguration)'">Configuration=$(MonoConfiguration)</SetConfiguration> </ProjectReference> <!-- If a CoreLib ProjectReference is present, make all P2P assets non transitive. --> <ProjectReference Update="@(ProjectReference->WithMetadataValue('PrivateAssets', ''))" PrivateAssets="all" Condition="'$(IsSourceProject)' == 'true' and '@(_coreLibProjectReference)' != ''" /> </ItemGroup> <!-- Disable TargetArchitectureMismatch warning when we reference CoreLib as it is platform specific. --> <Target Name="DisableProjectReferenceArchitectureMismatchWarningForCoreLib" Condition="'@(_coreLibProjectReference)' != ''" BeforeTargets="ResolveAssemblyReferences"> <PropertyGroup> <ResolveAssemblyWarnOrErrorOnTargetArchitectureMismatch>None</ResolveAssemblyWarnOrErrorOnTargetArchitectureMismatch> </PropertyGroup> </Target> <!-- Filter out transitive P2Ps which should be excluded. --> <Target Name="FilterTransitiveProjectReferences" AfterTargets="IncludeTransitiveProjectReferences" Condition="'$(DisableTransitiveProjectReferences)' != 'true' and '@(DefaultReferenceExclusion)' != ''"> <ItemGroup> <_transitiveProjectReferenceWithProjectName Include="@(ProjectReference->Metadata('NuGetPackageId'))" OriginalIdentity="%(Identity)" /> <_transitiveIncludedProjectReferenceWithProjectName Include="@(_transitiveProjectReferenceWithProjectName)" Exclude="@(DefaultReferenceExclusion)" /> <_transitiveExcludedProjectReferenceWithProjectName Include="@(_transitiveProjectReferenceWithProjectName)" Exclude="@(_transitiveIncludedProjectReferenceWithProjectName)" /> <ProjectReference Remove="@(_transitiveExcludedProjectReferenceWithProjectName->Metadata('OriginalIdentity'))" /> </ItemGroup> </Target> <!-- Make shared framework assemblies not app-local (non private). 
--> <Target Name="UpdateProjectReferencesWithPrivateAttribute" AfterTargets="AssignProjectConfiguration" BeforeTargets="PrepareProjectReferences" Condition="'$(TargetFrameworkIdentifier)' == '.NETCoreApp' and ('$(IsTestProject)' == 'true' or '$(IsTestSupportProject)' == 'true') and '@(ProjectReferenceWithConfiguration)' != ''"> <ItemGroup> <ProjectReferenceWithConfiguration PrivateAssets="all" Private="false" Condition="$(NetCoreAppLibrary.Contains('%(Filename);'))" /> </ItemGroup> </Target> <ItemDefinitionGroup> <TargetPathWithTargetPlatformMoniker> <IsReferenceAssembly>$(IsReferenceAssembly)</IsReferenceAssembly> </TargetPathWithTargetPlatformMoniker> </ItemDefinitionGroup> <Target Name="ValidateReferenceAssemblyProjectReferences" AfterTargets="ResolveReferences" Condition="'$(IsReferenceAssembly)' == 'true' and '$(SkipValidateReferenceAssemblyProjectReferences)' != 'true'"> <Error Condition="'%(ReferencePath.ReferenceSourceTarget)' == 'ProjectReference' and '%(ReferencePath.IsReferenceAssembly)' != 'true' and '%(ReferencePath.ReferenceAssembly)' == ''" Text="Reference assemblies must only reference other reference assemblies and '%(ReferencePath.ProjectReferenceOriginalItemSpec)' is not a reference assembly project and does not set 'ProduceReferenceAssembly'." /> </Target> <!-- An opt-in target to trim out private assemblies from the ref assembly ReferencePath. --> <Target Name="TrimOutPrivateAssembliesFromReferencePath" Condition="'$(CompileUsingReferenceAssemblies)' == 'true' and '$(TrimOutPrivateAssembliesFromReferencePath)' == 'true'" AfterTargets="FindReferenceAssembliesForReferences"> <ItemGroup> <ReferencePathWithRefAssemblies Remove="@(ReferencePathWithRefAssemblies)" Condition="$(NetCoreAppLibraryNoReference.Contains('%(Filename);'))" /> </ItemGroup> </Target> </Project>
<Project> <PropertyGroup Condition="'$(TargetFrameworkIdentifier)' == '.NETCoreApp'"> <!-- Disable RAR from transitively discovering dependencies for references. This is required as we don't copy dependencies over into the output directory which means RAR can't resolve them. --> <_FindDependencies>false</_FindDependencies> </PropertyGroup> <!-- Project references shouldn't be copied to the output for reference or source projects. --> <ItemDefinitionGroup Condition="'$(IsSourceProject)' == 'true' or '$(IsReferenceAssemblyProject)' == 'true'"> <ProjectReference> <Private>false</Private> </ProjectReference> </ItemDefinitionGroup> <ItemGroup Condition="'@(ProjectReference)' != ''"> <_coreLibProjectReference Include="@(ProjectReference->WithMetadataValue('Identity', '$(CoreLibProject)'))" /> <ProjectReference Update="@(_coreLibProjectReference)" Private="false"> <SetConfiguration Condition="'$(RuntimeFlavor)' == 'CoreCLR' and '$(Configuration)' != '$(CoreCLRConfiguration)'">Configuration=$(CoreCLRConfiguration)</SetConfiguration> <SetConfiguration Condition="'$(RuntimeFlavor)' == 'Mono' and '$(Configuration)' != '$(MonoConfiguration)'">Configuration=$(MonoConfiguration)</SetConfiguration> </ProjectReference> <!-- If a CoreLib ProjectReference is present, make all P2P assets non transitive. --> <ProjectReference Update="@(ProjectReference->WithMetadataValue('PrivateAssets', ''))" PrivateAssets="all" Condition="'$(IsSourceProject)' == 'true' and '@(_coreLibProjectReference)' != ''" /> </ItemGroup> <!-- Disable TargetArchitectureMismatch warning when we reference CoreLib as it is platform specific. --> <Target Name="DisableProjectReferenceArchitectureMismatchWarningForCoreLib" Condition="'@(_coreLibProjectReference)' != ''" BeforeTargets="ResolveAssemblyReferences"> <PropertyGroup> <ResolveAssemblyWarnOrErrorOnTargetArchitectureMismatch>None</ResolveAssemblyWarnOrErrorOnTargetArchitectureMismatch> </PropertyGroup> </Target> <!-- Filter out transitive P2Ps which should be excluded. --> <Target Name="FilterTransitiveProjectReferences" AfterTargets="IncludeTransitiveProjectReferences" Condition="'$(DisableTransitiveProjectReferences)' != 'true' and '@(DefaultReferenceExclusion)' != ''"> <ItemGroup> <_transitiveProjectReferenceWithProjectName Include="@(ProjectReference->Metadata('NuGetPackageId'))" OriginalIdentity="%(Identity)" /> <_transitiveIncludedProjectReferenceWithProjectName Include="@(_transitiveProjectReferenceWithProjectName)" Exclude="@(DefaultReferenceExclusion)" /> <_transitiveExcludedProjectReferenceWithProjectName Include="@(_transitiveProjectReferenceWithProjectName)" Exclude="@(_transitiveIncludedProjectReferenceWithProjectName)" /> <ProjectReference Remove="@(_transitiveExcludedProjectReferenceWithProjectName->Metadata('OriginalIdentity'))" /> </ItemGroup> </Target> <!-- Make shared framework assemblies not app-local (non private). 
--> <Target Name="UpdateProjectReferencesWithPrivateAttribute" AfterTargets="AssignProjectConfiguration" BeforeTargets="PrepareProjectReferences" Condition="'$(TargetFrameworkIdentifier)' == '.NETCoreApp' and ('$(IsTestProject)' == 'true' or '$(IsTestSupportProject)' == 'true') and '@(ProjectReferenceWithConfiguration)' != ''"> <ItemGroup> <ProjectReferenceWithConfiguration PrivateAssets="all" Private="false" Condition="$(NetCoreAppLibrary.Contains('%(Filename);'))" /> </ItemGroup> </Target> <ItemDefinitionGroup> <TargetPathWithTargetPlatformMoniker> <IsReferenceAssemblyProject>$(IsReferenceAssemblyProject)</IsReferenceAssemblyProject> </TargetPathWithTargetPlatformMoniker> </ItemDefinitionGroup> <Target Name="ValidateReferenceAssemblyProjectReferences" AfterTargets="ResolveReferences" Condition="'$(IsReferenceAssemblyProject)' == 'true' and '$(SkipValidateReferenceAssemblyProjectReferences)' != 'true'"> <Error Condition="'%(ReferencePath.ReferenceSourceTarget)' == 'ProjectReference' and '%(ReferencePath.IsReferenceAssemblyProject)' != 'true' and '%(ReferencePath.ReferenceAssembly)' == ''" Text="Reference assemblies must only reference other reference assemblies and '%(ReferencePath.ProjectReferenceOriginalItemSpec)' is not a reference assembly project and does not set 'ProduceReferenceAssembly'." /> </Target> <!-- An opt-in target to trim out private assemblies from the ref assembly ReferencePath. --> <Target Name="TrimOutPrivateAssembliesFromReferencePath" Condition="'$(CompileUsingReferenceAssemblies)' == 'true' and '$(TrimOutPrivateAssembliesFromReferencePath)' == 'true'" AfterTargets="FindReferenceAssembliesForReferences"> <ItemGroup> <ReferencePathWithRefAssemblies Remove="@(ReferencePathWithRefAssemblies)" Condition="$(NetCoreAppLibraryNoReference.Contains('%(Filename);'))" /> </ItemGroup> </Target> </Project>
1
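The references.targets diff above relies on a standard MSBuild pattern: metadata declared in an ItemDefinitionGroup on TargetPathWithTargetPlatformMoniker (the item GetTargetPath returns) flows through to the referencing project's ReferencePath items, which is what lets the ValidateReferenceAssemblyProjectReferences target read %(ReferencePath.IsReferenceAssemblyProject). A minimal sketch of that pattern follows; the standalone project file is illustrative only and is not part of the PR.

```xml
<!-- Hypothetical standalone project illustrating the pattern used in
     eng/references.targets: a classification property is attached as
     metadata to the project's output item so that referencing projects
     can inspect it after reference resolution. -->
<Project>
  <PropertyGroup>
    <IsReferenceAssemblyProject>true</IsReferenceAssemblyProject>
  </PropertyGroup>

  <!-- This metadata rides along on the item returned by GetTargetPath,
       so a P2P consumer sees it as
       %(ReferencePath.IsReferenceAssemblyProject). -->
  <ItemDefinitionGroup>
    <TargetPathWithTargetPlatformMoniker>
      <IsReferenceAssemblyProject>$(IsReferenceAssemblyProject)</IsReferenceAssemblyProject>
    </TargetPathWithTargetPlatformMoniker>
  </ItemDefinitionGroup>
</Project>
```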
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./eng/slngen.targets
<Project> <PropertyGroup> <SlnGenSolutionFolder Condition="'$(IsGeneratorProject)' == 'true'">gen</SlnGenSolutionFolder> <SlnGenSolutionFolder Condition="'$(IsReferenceAssembly)' == 'true'">ref</SlnGenSolutionFolder> <SlnGenSolutionFolder Condition="'$(IsTestProject)' == 'true' or '$(IsTestSupportProject)' == 'true'">tests</SlnGenSolutionFolder> <SlnGenSolutionFolder Condition="'$(SlnGenSolutionFolder)' == ''">src</SlnGenSolutionFolder> <!-- Don't include reference projects which compose the microsoft.netcore.app targeting pack (except the current leaf's reference project) as those are referenced by the source project via named references and hence don't need to be part of the solution file (only P2Ps need to). Include the reference project in the solution file if it targets more than just NetCoreAppCurrent as other frameworks like .NETFramework, .NETStandard or older .NETCoreApp ones require it. --> <IncludeInSolutionFile Condition="'$(IsNETCoreAppRef)' == 'true' and '$(MSBuildProjectName)' != '$(SlnGenMainProject)' and '$(TargetFramework)' == '$(NetCoreAppCurrent)' and ('$(TargetFrameworks)' == '' or '$(TargetFrameworks)' == '$(NetCoreAppCurrent)')">false</IncludeInSolutionFile> </PropertyGroup> </Project>
<Project> <PropertyGroup> <SlnGenSolutionFolder Condition="'$(IsGeneratorProject)' == 'true'">gen</SlnGenSolutionFolder> <SlnGenSolutionFolder Condition="'$(IsReferenceAssemblyProject)' == 'true'">ref</SlnGenSolutionFolder> <SlnGenSolutionFolder Condition="'$(IsTestProject)' == 'true' or '$(IsTrimmingTestProject)' == 'true' or '$(IsTestSupportProject)' == 'true'">tests</SlnGenSolutionFolder> <SlnGenSolutionFolder Condition="'$(SlnGenSolutionFolder)' == ''">src</SlnGenSolutionFolder> <!-- Don't include reference projects which compose the microsoft.netcore.app targeting pack (except the current leaf's reference project) as those are referenced by the source project via named references and hence don't need to be part of the solution file (only P2Ps need to). Include the reference project in the solution file if it targets more than just NetCoreAppCurrent as other frameworks like .NETFramework, .NETStandard or older .NETCoreApp ones require it. --> <IncludeInSolutionFile Condition="'$(IsNETCoreAppRef)' == 'true' and '$(MSBuildProjectName)' != '$(SlnGenMainProject)' and '$(TargetFramework)' == '$(NetCoreAppCurrent)' and ('$(TargetFrameworks)' == '' or '$(TargetFrameworks)' == '$(NetCoreAppCurrent)')">false</IncludeInSolutionFile> </PropertyGroup> </Project>
1
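The slngen.targets diff above also shows why the ordering of the SlnGenSolutionFolder assignments matters: MSBuild evaluates properties top to bottom and each assignment overwrites the previous value, so the final 'src' assignment is guarded to act only as a fallback. A reduced sketch of that fallback pattern, with folder names taken from the diff:

```xml
<!-- Reduced sketch: later property assignments overwrite earlier ones,
     so the last assignment is conditioned on the property still being
     empty and therefore only fires when no category matched above. -->
<PropertyGroup>
  <SlnGenSolutionFolder Condition="'$(IsGeneratorProject)' == 'true'">gen</SlnGenSolutionFolder>
  <SlnGenSolutionFolder Condition="'$(IsTestProject)' == 'true'">tests</SlnGenSolutionFolder>
  <SlnGenSolutionFolder Condition="'$(SlnGenSolutionFolder)' == ''">src</SlnGenSolutionFolder>
</PropertyGroup>
```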
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/libraries/Directory.Build.props
<Project TreatAsLocalProperty="TargetOS"> <PropertyGroup> <SkipInferTargetOSName>true</SkipInferTargetOSName> <DisableArcadeTestFramework>true</DisableArcadeTestFramework> <_projectDirName>$([System.IO.Path]::GetFileName('$(MSBuildProjectDirectory)'))</_projectDirName> <IsReferenceAssembly Condition="'$(_projectDirName)' == 'ref'">true</IsReferenceAssembly> <IsSourceProject Condition="'$(_projectDirName)' == 'src'">true</IsSourceProject> <IsGeneratorProject Condition="'$(_projectDirName)' == 'gen'">true</IsGeneratorProject> <!-- Set OutDirName to change BaseOutputPath and BaseIntermediateOutputPath to include the ref subfolder. --> <OutDirName Condition="'$(IsReferenceAssembly)' == 'true'">$(MSBuildProjectName)$([System.IO.Path]::DirectorySeparatorChar)ref</OutDirName> </PropertyGroup> <Import Project="..\..\Directory.Build.props" /> <PropertyGroup> <BeforeTargetFrameworkInferenceTargets>$(RepositoryEngineeringDir)BeforeTargetFrameworkInference.targets</BeforeTargetFrameworkInferenceTargets> <RuntimeGraph>$(LibrariesProjectRoot)OSGroups.json</RuntimeGraph> <ShouldUnsetParentConfigurationAndPlatform>false</ShouldUnsetParentConfigurationAndPlatform> <GeneratePlatformNotSupportedAssemblyHeaderFile>$(RepositoryEngineeringDir)LicenseHeader.txt</GeneratePlatformNotSupportedAssemblyHeaderFile> <!-- Build all .NET Framework configurations when net48 is passed in. This is for convenience. --> <AdditionalBuildTargetFrameworks Condition="'$(BuildTargetFramework)' == 'net48'">net462;net47;net471;net472</AdditionalBuildTargetFrameworks> </PropertyGroup> <!-- Unique assembly versions increases(3x) the compiler throughput during reference package updates. --> <PropertyGroup Condition="'$(IsGeneratorProject)' == 'true'"> <AutoGenerateAssemblyVersion>true</AutoGenerateAssemblyVersion> <!-- To suppress warnings about reseting the assembly version.--> <AssemblyVersion /> </PropertyGroup> <!-- Define test projects and companions --> <PropertyGroup Condition="'$(IsSourceProject)' != 'true'"> <IsTestProject Condition="'$(IsTestProject)' == ''">false</IsTestProject> <IsTestProject Condition="$(MSBuildProjectName.EndsWith('.UnitTests')) or $(MSBuildProjectName.EndsWith('.Tests'))">true</IsTestProject> <IsTestSupportProject>false</IsTestSupportProject> <IsTestSupportProject Condition="($(MSBuildProjectFullPath.Contains('\tests\')) or $(MSBuildProjectFullPath.Contains('/tests/'))) and '$(IsTestProject)' != 'true'">true</IsTestSupportProject> <IsTrimmingTestProject Condition="$(MSBuildProjectName.EndsWith('.TrimmingTests'))">true</IsTrimmingTestProject> <!-- Treat test assemblies as non-shipping (do not publish or sign them). --> <IsShipping Condition="'$(IsTestProject)' == 'true' or '$(IsTestSupportProject)' == 'true'">false</IsShipping> </PropertyGroup> <!-- Warnings that should be disabled in our test projects. 
--> <PropertyGroup Condition="'$(IsTestProject)' == 'true' or '$(IsTestSupportProject)' == 'true'"> <!-- don't warn on usage of BinaryFormatter from test projects --> <NoWarn>$(NoWarn);SYSLIB0011</NoWarn> <!-- allow nullable annotated files to be incorporated into tests without warning --> <Nullable Condition="'$(Nullable)' == '' and '$(Language)' == 'C#'">annotations</Nullable> </PropertyGroup> <PropertyGroup> <RunApiCompatForSrc>$([MSBuild]::ValueOrDefault('$(IsSourceProject)', 'false'))</RunApiCompatForSrc> <RunMatchingRefApiCompat>$([MSBuild]::ValueOrDefault('$(IsSourceProject)', 'false'))</RunMatchingRefApiCompat> <ApiCompatEnforceOptionalRules>true</ApiCompatEnforceOptionalRules> <ApiCompatExcludeAttributeList>$(RepositoryEngineeringDir)DefaultGenApiDocIds.txt,$(RepositoryEngineeringDir)ApiCompatExcludeAttributes.txt</ApiCompatExcludeAttributeList> </PropertyGroup> <ItemGroup> <!-- Projects which are manually built. --> <ProjectExclusions Include="$(CommonTestPath)System\Net\Prerequisites\**\*.csproj" /> </ItemGroup> <Import Project="NetCoreAppLibrary.props" /> <Import Project="$(RepositoryEngineeringDir)referenceAssemblies.props" Condition="'$(IsReferenceAssembly)' == 'true'" /> <PropertyGroup> <!-- Default any assembly not specifying a key to use the Open Key --> <StrongNameKeyId>Open</StrongNameKeyId> <!-- Microsoft.Extensions projects have a separate StrongNameKeyId --> <StrongNameKeyId Condition="$(MSBuildProjectName.StartsWith('Microsoft.Extensions.'))">MicrosoftAspNetCore</StrongNameKeyId> <!-- We can't generate an apphost without restoring the targeting pack. --> <UseAppHost>false</UseAppHost> <EnableDefaultItems>false</EnableDefaultItems> </PropertyGroup> <!-- Language configuration --> <PropertyGroup> <GenFacadesIgnoreBuildAndRevisionMismatch>true</GenFacadesIgnoreBuildAndRevisionMismatch> <!-- Disable analyzers for tests and unsupported projects --> <RunAnalyzers Condition="'$(IsTestProject)' != 'true' and '$(IsSourceProject)' != 'true'">false</RunAnalyzers> <!-- Enable documentation file generation by the compiler for all libraries except for vbproj. 
--> <GenerateDocumentationFile Condition="'$(IsSourceProject)' == 'true' and '$(MSBuildProjectExtension)' != '.vbproj'">true</GenerateDocumentationFile> <CLSCompliant Condition="'$(CLSCompliant)' == '' and '$(IsTestProject)' != 'true' and '$(IsTestSupportProject)' != 'true'">true</CLSCompliant> </PropertyGroup> <ItemGroup Condition="'$(IsTestProject)' == 'true'"> <EditorConfigFiles Remove="$(RepositoryEngineeringDir)CodeAnalysis.src.globalconfig" /> <EditorConfigFiles Include="$(RepositoryEngineeringDir)CodeAnalysis.test.globalconfig" /> </ItemGroup> <!-- Set up common paths --> <PropertyGroup> <!-- Helix properties --> <OSPlatformConfig>$(TargetOS).$(Platform).$(Configuration)</OSPlatformConfig> <AnyOSPlatformConfig>AnyOS.AnyCPU.$(Configuration)</AnyOSPlatformConfig> <UnixPlatformConfig>Unix.$(Platform).$(Configuration)</UnixPlatformConfig> <TestArchiveRoot>$(ArtifactsDir)helix/</TestArchiveRoot> <TestArchiveTestsRoot Condition="$(IsFunctionalTest) != true">$(TestArchiveRoot)tests/</TestArchiveTestsRoot> <TestArchiveTestsRoot Condition="$(IsFunctionalTest) == true">$(TestArchiveRoot)runonly/</TestArchiveTestsRoot> <TestArchiveTestsRoot Condition="'$(Scenario)' == 'BuildWasmApps'">$(TestArchiveRoot)buildwasmapps/</TestArchiveTestsRoot> <TestArchiveTestsDir>$(TestArchiveTestsRoot)$(OSPlatformConfig)/</TestArchiveTestsDir> <TestArchiveRuntimeRoot>$(TestArchiveRoot)runtime/</TestArchiveRuntimeRoot> <UseAppBundleRootForBuildingTests Condition="'$(ArchiveTests)' == 'true' and '$(BuildTestsOnHelix)' != 'true' and '$(TargetsAppleMobile)' == 'true'">true</UseAppBundleRootForBuildingTests> <AppBundleRoot Condition="'$(UseAppBundleRootForBuildingTests)' == 'true'">$(ArtifactsDir)bundles\</AppBundleRoot> <CommonPathRoot>$([MSBuild]::NormalizeDirectory('$(LibrariesProjectRoot)', 'Common'))</CommonPathRoot> <CommonPath>$([MSBuild]::NormalizeDirectory('$(CommonPathRoot)', 'src'))</CommonPath> <CommonTestPath>$([MSBuild]::NormalizeDirectory('$(CommonPathRoot)', 'tests'))</CommonTestPath> </PropertyGroup> <ItemGroup Condition="'$(IsTestProject)' == 'true' and '$(SkipTestUtilitiesReference)' != 'true'"> <ProjectReference Include="$(CommonTestPath)TestUtilities\TestUtilities.csproj" /> </ItemGroup> <PropertyGroup Condition="'$(IsTestProject)' == 'true'"> <EnableTestSupport>true</EnableTestSupport> <!-- TODO: Remove these conditions when VSTest is used in CI. 
--> <EnableRunSettingsSupport Condition="'$(ContinuousIntegrationBuild)' != 'true'">true</EnableRunSettingsSupport> <EnableCoverageSupport Condition="'$(ContinuousIntegrationBuild)' != 'true'">true</EnableCoverageSupport> </PropertyGroup> <!-- To enable the interpreter for mono desktop, we need to pass an env switch --> <PropertyGroup> <MonoEnvOptions Condition="'$(MonoEnvOptions)' == '' and '$(TargetsMobile)' != 'true' and '$(MonoForceInterpreter)' == 'true'">--interpreter</MonoEnvOptions> </PropertyGroup> <PropertyGroup Condition="'$(TargetsMobile)' == 'true'"> <SdkWithNoWorkloadForTestingPath>$(ArtifactsBinDir)sdk-no-workload\</SdkWithNoWorkloadForTestingPath> <SdkWithNoWorkloadForTestingPath>$([MSBuild]::NormalizeDirectory($(SdkWithNoWorkloadForTestingPath)))</SdkWithNoWorkloadForTestingPath> <SdkWithNoWorkloadStampPath>$(SdkWithNoWorkloadForTestingPath)version-$(SdkVersionForWorkloadTesting).stamp</SdkWithNoWorkloadStampPath> <SdkWithNoWorkload_WorkloadStampPath>$(SdkWithNoWorkloadForTestingPath)workload.stamp</SdkWithNoWorkload_WorkloadStampPath> <SdkWithWorkloadForTestingPath>$(ArtifactsBinDir)dotnet-workload\</SdkWithWorkloadForTestingPath> <SdkWithWorkloadForTestingPath>$([MSBuild]::NormalizeDirectory($(SdkWithWorkloadForTestingPath)))</SdkWithWorkloadForTestingPath> <SdkWithWorkloadStampPath>$(SdkWithWorkloadForTestingPath)version-$(SdkVersionForWorkloadTesting).stamp</SdkWithWorkloadStampPath> <SdkWithWorkload_WorkloadStampPath>$(SdkWithWorkloadForTestingPath)workload.stamp</SdkWithWorkload_WorkloadStampPath> </PropertyGroup> <Import Project="$(RepositoryEngineeringDir)testing\tests.props" Condition="'$(EnableTestSupport)' == 'true'" /> <!-- Use msbuild path functions as that property is used in bash scripts. --> <ItemGroup> <CoverageExcludeByFile Include="$([MSBuild]::NormalizePath('$(LibrariesProjectRoot)', 'Common', 'src', 'System', 'SR.*'))" /> <CoverageExcludeByFile Include="$([MSBuild]::NormalizePath('$(LibrariesProjectRoot)', 'Common', 'src', 'System', 'NotImplemented.cs'))" /> <!-- Link to the testhost folder to probe additional assemblies. --> <CoverageIncludeDirectory Include="shared\Microsoft.NETCore.App\$(ProductVersion)" /> </ItemGroup> </Project>
<Project> <PropertyGroup> <SkipInferTargetOSName>true</SkipInferTargetOSName> <DisableArcadeTestFramework>true</DisableArcadeTestFramework> <!-- Set OutDirName to change the BaseOutputPath and BaseIntermediateOutputPath properties to include the ref subfolder. --> <_projectDirName>$([System.IO.Path]::GetFileName('$(MSBuildProjectDirectory)'))</_projectDirName> <IsReferenceAssemblyProject Condition="'$(_projectDirName)' == 'ref'">true</IsReferenceAssemblyProject> <OutDirName Condition="'$(IsReferenceAssemblyProject)' == 'true'">$(MSBuildProjectName)$([System.IO.Path]::DirectorySeparatorChar)ref</OutDirName> </PropertyGroup> <Import Project="..\..\Directory.Build.props" /> <PropertyGroup> <BeforeTargetFrameworkInferenceTargets>$(RepositoryEngineeringDir)BeforeTargetFrameworkInference.targets</BeforeTargetFrameworkInferenceTargets> <RuntimeGraph>$(LibrariesProjectRoot)OSGroups.json</RuntimeGraph> <ShouldUnsetParentConfigurationAndPlatform>false</ShouldUnsetParentConfigurationAndPlatform> <GeneratePlatformNotSupportedAssemblyHeaderFile>$(RepositoryEngineeringDir)LicenseHeader.txt</GeneratePlatformNotSupportedAssemblyHeaderFile> </PropertyGroup> <!-- Define test projects and companions --> <PropertyGroup Condition="$(MSBuildProjectFullPath.Contains('$([System.IO.Path]::DirectorySeparatorChar)tests$([System.IO.Path]::DirectorySeparatorChar)'))"> <IsTestProject Condition="$(MSBuildProjectName.EndsWith('.UnitTests')) or $(MSBuildProjectName.EndsWith('.Tests'))">true</IsTestProject> <IsTrimmingTestProject Condition="$(MSBuildProjectName.EndsWith('.TrimmingTests'))">true</IsTrimmingTestProject> <IsTestSupportProject Condition="'$(IsTestProject)' != 'true' and '$(IsTrimmingTestProject)' != 'true'">true</IsTestSupportProject> <!-- Treat test assemblies as non-shipping (do not publish or sign them). --> <IsShipping Condition="'$(IsTestProject)' == 'true' or '$(IsTestSupportProject)' == 'true' or '$(IsTrimmingTestProject)' == 'true'">false</IsShipping> </PropertyGroup> <PropertyGroup> <!-- Treat as a generator project if either the parent or the parent parent directory is named gen. --> <IsGeneratorProject Condition="'$(_projectDirName)' == 'gen' or $([System.IO.Path]::GetFileName('$([System.IO.Path]::GetFullPath('$(MSBuildProjectDirectory)\..'))')) == 'gen'">true</IsGeneratorProject> <IsSourceProject Condition="'$(IsSourceProject)' == '' and '$(IsReferenceAssemblyProject)' != 'true' and '$(IsGeneratorProject)' != 'true' and '$(IsTestProject)' != 'true' and '$(IsTrimmingTestProject)' != 'true' and '$(IsTestSupportProject)' != 'true' and '$(UsingMicrosoftNoTargetsSdk)' != 'true' and '$(UsingMicrosoftTraversalSdk)' != 'true'">true</IsSourceProject> </PropertyGroup> <!-- Warnings that should be disabled in our test projects. --> <PropertyGroup Condition="'$(IsTestProject)' == 'true' or '$(IsTestSupportProject)' == 'true'"> <!-- don't warn on usage of BinaryFormatter from test projects --> <NoWarn>$(NoWarn);SYSLIB0011</NoWarn> <!-- allow nullable annotated files to be incorporated into tests without warning --> <Nullable Condition="'$(Nullable)' == '' and '$(Language)' == 'C#'">annotations</Nullable> </PropertyGroup> <!-- Unique assembly versions increases(3x) the compiler throughput during reference package updates. 
--> <PropertyGroup Condition="'$(IsGeneratorProject)' == 'true'"> <AutoGenerateAssemblyVersion>true</AutoGenerateAssemblyVersion> <!-- To suppress warnings about reseting the assembly version.--> <AssemblyVersion /> </PropertyGroup> <PropertyGroup> <RunApiCompatForSrc>$([MSBuild]::ValueOrDefault('$(IsSourceProject)', 'false'))</RunApiCompatForSrc> <RunMatchingRefApiCompat>$([MSBuild]::ValueOrDefault('$(IsSourceProject)', 'false'))</RunMatchingRefApiCompat> <ApiCompatEnforceOptionalRules>true</ApiCompatEnforceOptionalRules> <ApiCompatExcludeAttributeList>$(RepositoryEngineeringDir)DefaultGenApiDocIds.txt,$(RepositoryEngineeringDir)ApiCompatExcludeAttributes.txt</ApiCompatExcludeAttributeList> </PropertyGroup> <ItemGroup> <!-- Projects which are manually built. --> <ProjectExclusions Include="$(CommonTestPath)System\Net\Prerequisites\**\*.csproj" /> </ItemGroup> <Import Project="NetCoreAppLibrary.props" /> <Import Project="$(RepositoryEngineeringDir)referenceAssemblies.props" Condition="'$(IsReferenceAssemblyProject)' == 'true'" /> <PropertyGroup> <!-- Default any assembly not specifying a key to use the Open Key --> <StrongNameKeyId>Open</StrongNameKeyId> <!-- Microsoft.Extensions projects have a separate StrongNameKeyId --> <StrongNameKeyId Condition="$(MSBuildProjectName.StartsWith('Microsoft.Extensions.'))">MicrosoftAspNetCore</StrongNameKeyId> <!-- We can't generate an apphost without restoring the targeting pack. --> <UseAppHost>false</UseAppHost> <EnableDefaultItems>false</EnableDefaultItems> </PropertyGroup> <!-- Language configuration --> <PropertyGroup> <GenFacadesIgnoreBuildAndRevisionMismatch>true</GenFacadesIgnoreBuildAndRevisionMismatch> <!-- Disable analyzers for tests and unsupported projects --> <RunAnalyzers Condition="'$(IsTestProject)' != 'true' and '$(IsSourceProject)' != 'true'">false</RunAnalyzers> <!-- Enable documentation file generation by the compiler for all libraries except for vbproj. 
--> <GenerateDocumentationFile Condition="'$(IsSourceProject)' == 'true' and '$(MSBuildProjectExtension)' != '.vbproj'">true</GenerateDocumentationFile> <CLSCompliant Condition="'$(CLSCompliant)' == '' and '$(IsTestProject)' != 'true' and '$(IsTestSupportProject)' != 'true'">true</CLSCompliant> </PropertyGroup> <ItemGroup Condition="'$(IsTestProject)' == 'true'"> <EditorConfigFiles Remove="$(RepositoryEngineeringDir)CodeAnalysis.src.globalconfig" /> <EditorConfigFiles Include="$(RepositoryEngineeringDir)CodeAnalysis.test.globalconfig" /> </ItemGroup> <!-- Set up common paths --> <PropertyGroup> <!-- Helix properties --> <OSPlatformConfig>$(TargetOS).$(Platform).$(Configuration)</OSPlatformConfig> <TestArchiveRoot>$(ArtifactsDir)helix/</TestArchiveRoot> <TestArchiveTestsRoot Condition="$(IsFunctionalTest) != true">$(TestArchiveRoot)tests/</TestArchiveTestsRoot> <TestArchiveTestsRoot Condition="$(IsFunctionalTest) == true">$(TestArchiveRoot)runonly/</TestArchiveTestsRoot> <TestArchiveTestsRoot Condition="'$(Scenario)' == 'BuildWasmApps'">$(TestArchiveRoot)buildwasmapps/</TestArchiveTestsRoot> <TestArchiveTestsDir>$(TestArchiveTestsRoot)$(OSPlatformConfig)/</TestArchiveTestsDir> <TestArchiveRuntimeRoot>$(TestArchiveRoot)runtime/</TestArchiveRuntimeRoot> <UseAppBundleRootForBuildingTests Condition="'$(ArchiveTests)' == 'true' and '$(BuildTestsOnHelix)' != 'true' and '$(TargetsAppleMobile)' == 'true'">true</UseAppBundleRootForBuildingTests> <AppBundleRoot Condition="'$(UseAppBundleRootForBuildingTests)' == 'true'">$(ArtifactsDir)bundles\</AppBundleRoot> <CommonPathRoot>$([MSBuild]::NormalizeDirectory('$(LibrariesProjectRoot)', 'Common'))</CommonPathRoot> <CommonPath>$([MSBuild]::NormalizeDirectory('$(CommonPathRoot)', 'src'))</CommonPath> <CommonTestPath>$([MSBuild]::NormalizeDirectory('$(CommonPathRoot)', 'tests'))</CommonTestPath> </PropertyGroup> <ItemGroup Condition="'$(IsTestProject)' == 'true' and '$(SkipTestUtilitiesReference)' != 'true'"> <ProjectReference Include="$(CommonTestPath)TestUtilities\TestUtilities.csproj" /> </ItemGroup> <PropertyGroup Condition="'$(IsTestProject)' == 'true'"> <EnableTestSupport>true</EnableTestSupport> <!-- TODO: Remove these conditions when VSTest is used in CI. 
--> <EnableRunSettingsSupport Condition="'$(ContinuousIntegrationBuild)' != 'true'">true</EnableRunSettingsSupport> <EnableCoverageSupport Condition="'$(ContinuousIntegrationBuild)' != 'true'">true</EnableCoverageSupport> </PropertyGroup> <!-- To enable the interpreter for mono desktop, we need to pass an env switch --> <PropertyGroup> <MonoEnvOptions Condition="'$(MonoEnvOptions)' == '' and '$(TargetsMobile)' != 'true' and '$(MonoForceInterpreter)' == 'true'">--interpreter</MonoEnvOptions> </PropertyGroup> <PropertyGroup Condition="'$(TargetsMobile)' == 'true'"> <SdkWithNoWorkloadForTestingPath>$(ArtifactsBinDir)sdk-no-workload\</SdkWithNoWorkloadForTestingPath> <SdkWithNoWorkloadForTestingPath>$([MSBuild]::NormalizeDirectory($(SdkWithNoWorkloadForTestingPath)))</SdkWithNoWorkloadForTestingPath> <SdkWithNoWorkloadStampPath>$(SdkWithNoWorkloadForTestingPath)version-$(SdkVersionForWorkloadTesting).stamp</SdkWithNoWorkloadStampPath> <SdkWithNoWorkload_WorkloadStampPath>$(SdkWithNoWorkloadForTestingPath)workload.stamp</SdkWithNoWorkload_WorkloadStampPath> <SdkWithWorkloadForTestingPath>$(ArtifactsBinDir)dotnet-workload\</SdkWithWorkloadForTestingPath> <SdkWithWorkloadForTestingPath>$([MSBuild]::NormalizeDirectory($(SdkWithWorkloadForTestingPath)))</SdkWithWorkloadForTestingPath> <SdkWithWorkloadStampPath>$(SdkWithWorkloadForTestingPath)version-$(SdkVersionForWorkloadTesting).stamp</SdkWithWorkloadStampPath> <SdkWithWorkload_WorkloadStampPath>$(SdkWithWorkloadForTestingPath)workload.stamp</SdkWithWorkload_WorkloadStampPath> </PropertyGroup> <Import Project="$(RepositoryEngineeringDir)testing\tests.props" Condition="'$(EnableTestSupport)' == 'true'" /> <!-- Use msbuild path functions as that property is used in bash scripts. --> <ItemGroup> <CoverageExcludeByFile Include="$([MSBuild]::NormalizePath('$(LibrariesProjectRoot)', 'Common', 'src', 'System', 'SR.*'))" /> <CoverageExcludeByFile Include="$([MSBuild]::NormalizePath('$(LibrariesProjectRoot)', 'Common', 'src', 'System', 'NotImplemented.cs'))" /> <!-- Link to the testhost folder to probe additional assemblies. --> <CoverageIncludeDirectory Include="shared\Microsoft.NETCore.App\$(ProductVersion)" /> </ItemGroup> </Project>
1
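The Directory.Build.props diff above carries the core of the change: IsGeneratorProject now also matches projects nested one level below a 'gen' directory (e.g. gen/Generator1/Generator1.csproj), which is what removes the need to set the property manually for such projects. A minimal sketch of that two-level probe; the _parentDirName helper property is introduced here for readability and does not appear in the PR, while the path-function nesting is copied from the diff:

```xml
<!-- Minimal sketch of the two-level 'gen' probe. _parentDirName is a
     hypothetical helper: it holds the name of the directory containing
     the project directory. IsGeneratorProject becomes true when the
     project lives in .../gen/ or in .../gen/<ProjectName>/. -->
<PropertyGroup>
  <_projectDirName>$([System.IO.Path]::GetFileName('$(MSBuildProjectDirectory)'))</_projectDirName>
  <_parentDirName>$([System.IO.Path]::GetFileName('$([System.IO.Path]::GetFullPath('$(MSBuildProjectDirectory)\..'))'))</_parentDirName>
  <IsGeneratorProject Condition="'$(_projectDirName)' == 'gen' or '$(_parentDirName)' == 'gen'">true</IsGeneratorProject>
</PropertyGroup>
```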
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/libraries/Directory.Build.targets
<Project> <PropertyGroup> <!-- Override strong name key to default to Open for test projects, Tests which wish to control this should set TestStrongNameKeyId. --> <TestStrongNameKeyId Condition="'$(TestStrongNameKeyId)' == '' and $(MSBuildProjectName.StartsWith('Microsoft.Extensions.'))">MicrosoftAspNetCore</TestStrongNameKeyId> <TestStrongNameKeyId Condition="'$(TestStrongNameKeyId)' == ''">Open</TestStrongNameKeyId> <StrongNameKeyId Condition="'$(IsTestProject)' == 'true' or '$(IsTestSupportProject)' == 'true'">$(TestStrongNameKeyId)</StrongNameKeyId> </PropertyGroup> <!-- resources.targets need to be imported before the Arcade SDK. --> <Import Project="$(RepositoryEngineeringDir)resources.targets" /> <Import Project="..\..\Directory.Build.targets" /> <PropertyGroup> <NetCoreAppCurrentBuildSettings>$(NetCoreAppCurrent)-$(TargetOS)-$(Configuration)-$(TargetArchitecture)</NetCoreAppCurrentBuildSettings> <NativeBinDir>$([MSBuild]::NormalizeDirectory('$(ArtifactsBinDir)', 'native', '$(NetCoreAppCurrentBuildSettings)'))</NativeBinDir> <NetCoreAppCurrentTestHostPath>$([MSBuild]::NormalizeDirectory('$(ArtifactsBinDir)', 'testhost', '$(NetCoreAppCurrentBuildSettings)'))</NetCoreAppCurrentTestHostPath> <NetCoreAppCurrentTestHostSharedFrameworkPath>$([MSBuild]::NormalizeDirectory('$(NetCoreAppCurrentTestHostPath)', 'shared', '$(MicrosoftNetCoreAppFrameworkName)', '$(ProductVersion)'))</NetCoreAppCurrentTestHostSharedFrameworkPath> <NETStandard21RefPath>$([MSBuild]::NormalizeDirectory('$(NuGetPackageRoot)', 'netstandard.library.ref', '$(NETStandardLibraryRefVersion)', 'ref', 'netstandard2.1'))</NETStandard21RefPath> <NoWarn Condition="'$(TargetFrameworkIdentifier)' != '.NETCoreApp'">$(NoWarn);nullable</NoWarn> <NoWarn Condition="'$(GeneratePlatformNotSupportedAssembly)' == 'true' or '$(GeneratePlatformNotSupportedAssemblyMessage)' != ''">$(NoWarn);nullable;CA1052</NoWarn> <!-- Ignore Obsolete errors within the generated shims that type-forward types. SYSLIB0003: Code Access Security (CAS). SYSLIB0004: Constrained Execution Region (CER). SYSLIB0017: Strong name signing. SYSLIB0021: Derived cryptographic types. SYSLIB0022: Rijndael types. SYSLIB0023: RNGCryptoServiceProvider. SYSLIB0025: SuppressIldasmAttribute. SYSLIB0032: HandleProcessCorruptedStateExceptionsAttribute. SYSLIB0036: Regex.CompileToAssembly --> <NoWarn Condition="'$(IsPartialFacadeAssembly)' == 'true'">$(NoWarn);SYSLIB0003;SYSLIB0004;SYSLIB0015;SYSLIB0017;SYSLIB0021;SYSLIB0022;SYSLIB0023;SYSLIB0025;SYSLIB0032;SYSLIB0036</NoWarn> <!-- Reset these properties back to blank, since they are defaulted by Microsoft.NET.Sdk --> <WarningsAsErrors Condition="'$(WarningsAsErrors)' == 'NU1605'" /> <IsRuntimeAssembly Condition="'$(IsRuntimeAssembly)' == '' and '$(IsReferenceAssembly)' != 'true' and '$(IsGeneratorProject)' != 'true' and '$(IsTestProject)' != 'true' and '$(IsTestSupportProject)' != 'true' and '$(UsingMicrosoftNoTargetsSdk)' != 'true' and '$(UsingMicrosoftTraversalSdk)' != 'true'">true</IsRuntimeAssembly> <IsRuntimeAndReferenceAssembly Condition="'$(IsRuntimeAndReferenceAssembly)' == '' and '$(IsRuntimeAssembly)' == 'true' and Exists('$(LibrariesProjectRoot)$(MSBuildProjectName)') and !Exists('$(LibrariesProjectRoot)$(MSBuildProjectName)/ref') and !$(MSBuildProjectName.StartsWith('System.Private'))">true</IsRuntimeAndReferenceAssembly> <!-- The source of truth for these IsNETCoreApp* properties is NetCoreAppLibrary.props. 
--> <IsNETCoreAppSrc Condition="('$(IsRuntimeAssembly)' == 'true' or '$(IsRuntimeAndReferenceAssembly)' == 'true') and $(NetCoreAppLibrary.Contains('$(AssemblyName);'))">true</IsNETCoreAppSrc> <IsNETCoreAppRef Condition="('$(IsReferenceAssembly)' == 'true' or '$(IsRuntimeAndReferenceAssembly)' == 'true') and $(NetCoreAppLibrary.Contains('$(AssemblyName);')) and !$(NetCoreAppLibraryNoReference.Contains('$(AssemblyName);'))">true</IsNETCoreAppRef> <!-- By default, disable implicit framework references for NetCoreAppCurrent libraries. --> <DisableImplicitFrameworkReferences Condition="'$(TargetFrameworkIdentifier)' == '.NETCoreApp' and $([MSBuild]::VersionGreaterThanOrEquals($(TargetFrameworkVersion), '$(NETCoreAppCurrentVersion)')) and ('$(IsNETCoreAppRef)' == 'true' or '$(IsNETCoreAppSrc)' == 'true')">true</DisableImplicitFrameworkReferences> <!-- Disable implicit assembly references for .NETCoreApp refs and sources. --> <DisableImplicitAssemblyReferences Condition="'$(DisableImplicitAssemblyReferences)' == '' and '$(DisableImplicitFrameworkReferences)' != 'true' and '$(TargetFrameworkIdentifier)' == '.NETCoreApp' and ('$(IsReferenceAssembly)' == 'true' or '$(IsSourceProject)' == 'true')">true</DisableImplicitAssemblyReferences> <!-- Enable trimming for any source project that's part of the shared framework. Don't attempt to trim PNSE assemblies which are generated from the reference source. --> <ILLinkTrimAssembly Condition="'$(ILLinkTrimAssembly)' == '' and '$(TargetFrameworkIdentifier)' == '.NETCoreApp' and '$(IsNETCoreAppSrc)' == 'true' and '$(GeneratePlatformNotSupportedAssembly)' != 'true' and '$(GeneratePlatformNotSupportedAssemblyMessage)' == ''">true</ILLinkTrimAssembly> </PropertyGroup> <Import Project="$(RepositoryEngineeringDir)versioning.targets" /> <!-- Libraries-specific binplacing properties --> <PropertyGroup> <BinPlaceRef Condition="'$(BinPlaceRef)' == '' and ('$(IsReferenceAssembly)' == 'true' or '$(IsRuntimeAndReferenceAssembly)' == 'true')">true</BinPlaceRef> <BinPlaceRuntime Condition="'$(BinPlaceRuntime)' == '' and ('$(IsRuntimeAssembly)' == 'true' or '$(IsRuntimeAndReferenceAssembly)' == 'true')">true</BinPlaceRuntime> <BinPlaceForTargetVertical Condition="'$(BinPlaceForTargetVertical)' == ''">true</BinPlaceForTargetVertical> <GetBinPlaceItemsDependsOn Condition="$(MSBuildProjectName.StartsWith('Microsoft.Extensions.'))">$(GetBinPlaceItemsDependsOn);AddDocumentationFileAsBinPlaceItemForExtensionsProjects</GetBinPlaceItemsDependsOn> </PropertyGroup> <Target Name="AddDocumentationFileAsBinPlaceItemForExtensionsProjects" Condition="Exists('$(DocumentationFile)')"> <ItemGroup> <!-- Microsoft.Extensions are not yet using the doc-file package --> <BinPlaceItem Include="$(DocumentationFile)" /> </ItemGroup> </Target> <ItemGroup Condition="'@(BinPlaceTargetFrameworks)' == ''"> <!-- Used by the runtime tests to prepare the CORE_ROOT layout. Don't use in libraries. 
--> <BinPlaceTargetFrameworks Include="$(NetCoreAppCurrent)-$(TargetOS)" Condition="'$(BinPlaceForTargetVertical)' == 'true'"> <NativePath>$(LibrariesAllBinArtifactsPath)</NativePath> <RefPath>$(LibrariesAllRefArtifactsPath)</RefPath> <RuntimePath>$(LibrariesAllBinArtifactsPath)</RuntimePath> </BinPlaceTargetFrameworks> <BinPlaceDir Include="$(MicrosoftNetCoreAppRefPackDir)analyzers\dotnet\$(AnalyzerLanguage)" Condition="'$(IsNETCoreAppAnalyzer)' == 'true'" /> <!-- Setup the shared framework directory for testing --> <BinPlaceTargetFrameworks Include="$(NetCoreAppCurrent)-$(TargetOS)"> <NativePath>$(NetCoreAppCurrentTestHostSharedFrameworkPath)</NativePath> <RuntimePath Condition="'$(IsNETCoreAppSrc)' == 'true'">$(NetCoreAppCurrentTestHostSharedFrameworkPath)</RuntimePath> </BinPlaceTargetFrameworks> <!-- Microsoft.NetCore.App.Ref and Microsoft.NetCore.App.Runtime targeting packs --> <BinPlaceTargetFrameworks Include="$(NetCoreAppCurrent)-$(TargetOS)"> <NativePath>$(MicrosoftNetCoreAppRuntimePackNativeDir)</NativePath> <RefPath Condition="'$(IsNETCoreAppRef)' == 'true'">$(MicrosoftNetCoreAppRefPackRefDir)</RefPath> <RuntimePath Condition="'$(IsNETCoreAppSrc)' == 'true'">$(MicrosoftNetCoreAppRuntimePackRidLibTfmDir)</RuntimePath> </BinPlaceTargetFrameworks> <BinPlaceTargetFrameworks Include="@(AdditionalBinPlaceTargetFrameworks)" /> </ItemGroup> <Import Project="$(RepositoryEngineeringDir)targetingpacks.targets" /> <PropertyGroup> <!-- Libraries non test projects shouldn't reference compat shims. --> <SkipTargetingPackShimReferences Condition="'$(UseLocalTargetingRuntimePack)' == 'true' and '$(IsTestProject)' != 'true' and '$(IsTestSupportProject)' != 'true'">true</SkipTargetingPackShimReferences> </PropertyGroup> <Import Project="$(RepositoryEngineeringDir)codeOptimization.targets" /> <Import Project="$(RepositoryEngineeringDir)references.targets" /> <Import Project="$(RepositoryEngineeringDir)resolveContract.targets" /> <Import Project="$(RepositoryEngineeringDir)testing\tests.targets" Condition="'$(EnableTestSupport)' == 'true'" /> <Import Project="$(RepositoryEngineeringDir)testing\linker\trimmingTests.targets" Condition="'$(IsTrimmingTestProject)' == 'true'" /> <Import Project="$(RepositoryEngineeringDir)testing\runtimeConfiguration.targets" /> <Import Project="$(RepositoryEngineeringDir)testing\runsettings.targets" Condition="'$(EnableRunSettingsSupport)' == 'true'" /> <Import Project="$(RepositoryEngineeringDir)testing\coverage.targets" Condition="'$(EnableRunSettingsSupport)' == 'true' or '$(EnableCoverageSupport)' == 'true'" /> <Import Project="$(RepositoryEngineeringDir)slngen.targets" Condition="'$(IsSlnGen)' == 'true'" /> <Import Project="$(RepositoryEngineeringDir)illink.targets" Condition="'$(IsSourceProject)' == 'true'" /> <Import Project="$(RepositoryEngineeringDir)AvoidRestoreCycleOnSelfReference.targets" Condition="'$(AvoidRestoreCycleOnSelfReference)' == 'true'" /> <Import Project="$(RepositoryEngineeringDir)packaging.targets" Condition="'$(IsPackable)' == 'true'" /> <ItemGroup Condition="'$(UseTargetFrameworkPackage)' != 'false'"> <PackageReference Include="Microsoft.DotNet.Build.Tasks.TargetFramework" Version="$(MicrosoftDotNetBuildTasksTargetFrameworkVersion)" PrivateAssets="all" IsImplicitlyDefined="true" /> </ItemGroup> <ItemGroup Condition="'$(IsSourceProject)' == 'true' or '$(IsReferenceAssembly)' == 'true' or '$(IsPartialFacadeAssembly)' == 'true'"> <PackageReference Include="Microsoft.DotNet.ApiCompat" Condition="'$(DotNetBuildFromSource)' != 'true'" 
Version="$(MicrosoftDotNetApiCompatVersion)" PrivateAssets="all" IsImplicitlyDefined="true" /> <PackageReference Include="Microsoft.DotNet.GenAPI" Condition="'$(DotNetBuildFromSource)' != 'true'" Version="$(MicrosoftDotNetGenApiVersion)" PrivateAssets="all" IsImplicitlyDefined="true" /> <PackageReference Include="Microsoft.DotNet.GenFacades" Version="$(MicrosoftDotNetGenFacadesVersion)" PrivateAssets="all" IsImplicitlyDefined="true" /> </ItemGroup> <!-- Do not clean binplace assets in the ref targeting pack to avoid incremental build failures when the SDK tries to resolve the assets from the FrameworkList. --> <Target Name="RemoveTargetingPackIncrementalClean" Condition="'@(AdditionalCleanDirectories)' != ''" BeforeTargets="IncrementalCleanAdditionalDirectories; CleanAdditionalDirectories"> <ItemGroup> <AdditionalCleanDirectories Remove="@(AdditionalCleanDirectories)" Condition="'%(Identity)' == '$(MicrosoftNetCoreAppRefPackRefDir)'" /> </ItemGroup> </Target> <!-- Adds Nullable annotation attributes to non .NETCoreApp builds. --> <ItemGroup Condition="'$(Nullable)' != '' and '$(TargetFrameworkIdentifier)' != '.NETCoreApp'"> <Compile Include="$(CoreLibSharedDir)System\Diagnostics\CodeAnalysis\NullableAttributes.cs" Link="System\Diagnostics\CodeAnalysis\NullableAttributes.cs" /> </ItemGroup> <!-- If a tfm doesn't target .NETCoreApp but uses the platform support attributes, then we include the System.Runtime.Versioning*Platform* annotation attribute classes in the project as internal. If a project has specified assembly-level SupportedOSPlatforms or UnsupportedOSPlatforms, we can infer the need without having IncludePlatformAttributes set. --> <PropertyGroup> <IncludePlatformAttributes Condition="'$(IncludePlatformAttributes)' == '' and ('$(SupportedOSPlatforms)' != '' or '$(UnsupportedOSPlatforms)' != '')">true</IncludePlatformAttributes> </PropertyGroup> <ItemGroup Condition="'$(IncludePlatformAttributes)' == 'true' and '$(TargetFrameworkIdentifier)' != '.NETCoreApp'"> <Compile Include="$(CoreLibSharedDir)System\Runtime\Versioning\PlatformAttributes.cs" Link="System\Runtime\Versioning\PlatformAttributes.cs" /> </ItemGroup> <!-- Adds ObsoleteAttribute to projects that need to apply downlevel Obsoletions with DiagnosticId and UrlFormat --> <Choose> <When Condition="'$(IncludeInternalObsoleteAttribute)' == 'true' and '$(TargetFrameworkIdentifier)' != '.NETCoreApp'"> <ItemGroup> <Compile Include="$(CoreLibSharedDir)System\ObsoleteAttribute.cs" Link="System\ObsoleteAttribute.cs" /> </ItemGroup> <PropertyGroup> <!-- Suppress CS0436 to allow ObsoleteAttribute to be internally defined and used in netstandard --> <NoWarn>$(NoWarn);CS0436</NoWarn> </PropertyGroup> </When> </Choose> <PropertyGroup> <SkipLocalsInit Condition="'$(SkipLocalsInit)' == '' and '$(MSBuildProjectExtension)' == '.csproj' and '$(IsNETCoreAppSrc)' == 'true' and '$(TargetFrameworkIdentifier)' == '.NETCoreApp'">true</SkipLocalsInit> </PropertyGroup> <!--Instructs compiler not to emit .locals init, using SkipLocalsInitAttribute.--> <Choose> <When Condition="'$(SkipLocalsInit)' == 'true'"> <PropertyGroup > <!-- This is needed to use the SkipLocalsInitAttribute. 
--> <AllowUnsafeBlocks>true</AllowUnsafeBlocks> </PropertyGroup> <ItemGroup> <Compile Include="$(CommonPath)SkipLocalsInit.cs" Link="Common\SkipLocalsInit.cs" /> </ItemGroup> </When> </Choose> <PropertyGroup> <BuildAnalyzerReferences>$(BuildProjectReferences)</BuildAnalyzerReferences> <BuildAnalyzerReferences Condition="'$(BuildingInsideVisualStudio)' == 'true'">false</BuildAnalyzerReferences> </PropertyGroup> <ItemGroup> <!-- Ensure AnalyzerReference items are restored and built The target framework of Analyzers has no relationship to that of the refrencing project, so we don't apply TargetFramework filters nor do we pass in TargetFramework. When BuildProjectReferences=false we make sure to set BuildReference=false to make sure not to try to call GetTargetPath in the outerbuild of the analyzer project. --> <ProjectReference Include="@(AnalyzerReference)" SkipGetTargetFrameworkProperties="true" UndefineProperties="TargetFramework" ReferenceOutputAssembly="false" PrivateAssets="all" BuildReference="$(BuildAnalyzerReferences)" /> </ItemGroup> <Target Name="GetAnalyzerPackFiles" DependsOnTargets="$(GenerateNuspecDependsOn)" Returns="@(_AnalyzerPackFile)"> <PropertyGroup> <_analyzerPath>analyzers/dotnet</_analyzerPath> <_analyzerPath Condition="'$(AnalyzerRoslynVersion)' != ''">$(_analyzerPath)/roslyn$(AnalyzerRoslynVersion)</_analyzerPath> <_analyzerPath Condition="'$(AnalyzerLanguage)' != ''">$(_analyzerPath)/$(AnalyzerLanguage)</_analyzerPath> </PropertyGroup> <ItemGroup> <_AnalyzerPackFile Include="@(_BuildOutputInPackage)" IsSymbol="false" /> <_AnalyzerPackFile Include="@(_TargetPathsToSymbols)" IsSymbol="true" /> <_AnalyzerPackFile PackagePath="$(_analyzerPath)/%(TargetPath)" /> </ItemGroup> <Error Condition="'%(_AnalyzerPackFile.TargetFramework)' != 'netstandard2.0'" Text="Analyzers must only target netstandard2.0 since they run in the compiler which targets netstandard2.0. The following files were found to target '%(_AnalyzerPackFile.TargetFramework)': @(_AnalyzerPackFile)" /> </Target> </Project>
<Project> <PropertyGroup> <!-- Override strong name key to default to Open for test projects, Tests which wish to control this should set TestStrongNameKeyId. --> <TestStrongNameKeyId Condition="'$(TestStrongNameKeyId)' == '' and $(MSBuildProjectName.StartsWith('Microsoft.Extensions.'))">MicrosoftAspNetCore</TestStrongNameKeyId> <TestStrongNameKeyId Condition="'$(TestStrongNameKeyId)' == ''">Open</TestStrongNameKeyId> <StrongNameKeyId Condition="'$(IsTestProject)' == 'true' or '$(IsTestSupportProject)' == 'true'">$(TestStrongNameKeyId)</StrongNameKeyId> </PropertyGroup> <!-- resources.targets need to be imported before the Arcade SDK. --> <Import Project="$(RepositoryEngineeringDir)resources.targets" /> <Import Project="..\..\Directory.Build.targets" /> <PropertyGroup> <NetCoreAppCurrentBuildSettings>$(NetCoreAppCurrent)-$(TargetOS)-$(Configuration)-$(TargetArchitecture)</NetCoreAppCurrentBuildSettings> <NativeBinDir>$([MSBuild]::NormalizeDirectory('$(ArtifactsBinDir)', 'native', '$(NetCoreAppCurrentBuildSettings)'))</NativeBinDir> <NetCoreAppCurrentTestHostPath>$([MSBuild]::NormalizeDirectory('$(ArtifactsBinDir)', 'testhost', '$(NetCoreAppCurrentBuildSettings)'))</NetCoreAppCurrentTestHostPath> <NetCoreAppCurrentTestHostSharedFrameworkPath>$([MSBuild]::NormalizeDirectory('$(NetCoreAppCurrentTestHostPath)', 'shared', '$(MicrosoftNetCoreAppFrameworkName)', '$(ProductVersion)'))</NetCoreAppCurrentTestHostSharedFrameworkPath> <NETStandard21RefPath>$([MSBuild]::NormalizeDirectory('$(NuGetPackageRoot)', 'netstandard.library.ref', '$(NETStandardLibraryRefVersion)', 'ref', 'netstandard2.1'))</NETStandard21RefPath> <NoWarn Condition="'$(TargetFrameworkIdentifier)' != '.NETCoreApp'">$(NoWarn);nullable</NoWarn> <NoWarn Condition="'$(GeneratePlatformNotSupportedAssembly)' == 'true' or '$(GeneratePlatformNotSupportedAssemblyMessage)' != ''">$(NoWarn);nullable;CA1052</NoWarn> <!-- Ignore Obsolete errors within the generated shims that type-forward types. SYSLIB0003: Code Access Security (CAS). SYSLIB0004: Constrained Execution Region (CER). SYSLIB0017: Strong name signing. SYSLIB0021: Derived cryptographic types. SYSLIB0022: Rijndael types. SYSLIB0023: RNGCryptoServiceProvider. SYSLIB0025: SuppressIldasmAttribute. SYSLIB0032: HandleProcessCorruptedStateExceptionsAttribute. SYSLIB0036: Regex.CompileToAssembly --> <NoWarn Condition="'$(IsPartialFacadeAssembly)' == 'true'">$(NoWarn);SYSLIB0003;SYSLIB0004;SYSLIB0015;SYSLIB0017;SYSLIB0021;SYSLIB0022;SYSLIB0023;SYSLIB0025;SYSLIB0032;SYSLIB0036</NoWarn> <!-- Reset these properties back to blank, since they are defaulted by Microsoft.NET.Sdk --> <WarningsAsErrors Condition="'$(WarningsAsErrors)' == 'NU1605'" /> <IsRuntimeAndReferenceAssembly Condition="'$(IsRuntimeAndReferenceAssembly)' == '' and '$(IsSourceProject)' == 'true' and Exists('$(LibrariesProjectRoot)$(MSBuildProjectName)') and !Exists('$(LibrariesProjectRoot)$(MSBuildProjectName)$([System.IO.Path]::DirectorySeparatorChar)ref') and !$(MSBuildProjectName.StartsWith('System.Private'))">true</IsRuntimeAndReferenceAssembly> <!-- The source of truth for these IsNETCoreApp* properties is NetCoreAppLibrary.props. 
--> <IsNETCoreAppSrc Condition="('$(IsSourceProject)' == 'true' or '$(IsRuntimeAndReferenceAssembly)' == 'true') and $(NetCoreAppLibrary.Contains('$(AssemblyName);'))">true</IsNETCoreAppSrc> <IsNETCoreAppRef Condition="('$(IsReferenceAssemblyProject)' == 'true' or '$(IsRuntimeAndReferenceAssembly)' == 'true') and $(NetCoreAppLibrary.Contains('$(AssemblyName);')) and !$(NetCoreAppLibraryNoReference.Contains('$(AssemblyName);'))">true</IsNETCoreAppRef> <!-- By default, disable implicit framework references for NetCoreAppCurrent libraries. --> <DisableImplicitFrameworkReferences Condition="'$(TargetFrameworkIdentifier)' == '.NETCoreApp' and $([MSBuild]::VersionGreaterThanOrEquals($(TargetFrameworkVersion), '$(NETCoreAppCurrentVersion)')) and ('$(IsNETCoreAppRef)' == 'true' or '$(IsNETCoreAppSrc)' == 'true')">true</DisableImplicitFrameworkReferences> <!-- Disable implicit assembly references for .NETCoreApp refs and sources. --> <DisableImplicitAssemblyReferences Condition="'$(DisableImplicitAssemblyReferences)' == '' and '$(DisableImplicitFrameworkReferences)' != 'true' and '$(TargetFrameworkIdentifier)' == '.NETCoreApp' and ('$(IsReferenceAssemblyProject)' == 'true' or '$(IsSourceProject)' == 'true')">true</DisableImplicitAssemblyReferences> <!-- Enable trimming for any source project that's part of the shared framework. Don't attempt to trim PNSE assemblies which are generated from the reference source. --> <ILLinkTrimAssembly Condition="'$(ILLinkTrimAssembly)' == '' and '$(TargetFrameworkIdentifier)' == '.NETCoreApp' and '$(IsNETCoreAppSrc)' == 'true' and '$(GeneratePlatformNotSupportedAssembly)' != 'true' and '$(GeneratePlatformNotSupportedAssemblyMessage)' == ''">true</ILLinkTrimAssembly> </PropertyGroup> <Import Project="$(RepositoryEngineeringDir)versioning.targets" /> <!-- Libraries-specific binplacing properties --> <PropertyGroup> <BinPlaceRef Condition="'$(BinPlaceRef)' == '' and ('$(IsReferenceAssemblyProject)' == 'true' or '$(IsRuntimeAndReferenceAssembly)' == 'true')">true</BinPlaceRef> <BinPlaceRuntime Condition="'$(BinPlaceRuntime)' == '' and ('$(IsSourceProject)' == 'true' or '$(IsRuntimeAndReferenceAssembly)' == 'true')">true</BinPlaceRuntime> <BinPlaceForTargetVertical Condition="'$(BinPlaceForTargetVertical)' == ''">true</BinPlaceForTargetVertical> <GetBinPlaceItemsDependsOn Condition="$(MSBuildProjectName.StartsWith('Microsoft.Extensions.'))">$(GetBinPlaceItemsDependsOn);AddDocumentationFileAsBinPlaceItemForExtensionsProjects</GetBinPlaceItemsDependsOn> </PropertyGroup> <Target Name="AddDocumentationFileAsBinPlaceItemForExtensionsProjects" Condition="Exists('$(DocumentationFile)')"> <ItemGroup> <!-- Microsoft.Extensions are not yet using the doc-file package --> <BinPlaceItem Include="$(DocumentationFile)" /> </ItemGroup> </Target> <ItemGroup Condition="'@(BinPlaceTargetFrameworks)' == ''"> <!-- Used by the runtime tests to prepare the CORE_ROOT layout. Don't use in libraries. 
--> <BinPlaceTargetFrameworks Include="$(NetCoreAppCurrent)-$(TargetOS)" Condition="'$(BinPlaceForTargetVertical)' == 'true'"> <NativePath>$(LibrariesAllBinArtifactsPath)</NativePath> <RefPath>$(LibrariesAllRefArtifactsPath)</RefPath> <RuntimePath>$(LibrariesAllBinArtifactsPath)</RuntimePath> </BinPlaceTargetFrameworks> <BinPlaceDir Include="$(MicrosoftNetCoreAppRefPackDir)analyzers\dotnet\$(AnalyzerLanguage)" Condition="'$(IsNETCoreAppAnalyzer)' == 'true'" /> <!-- Set up the shared framework directory for testing --> <BinPlaceTargetFrameworks Include="$(NetCoreAppCurrent)-$(TargetOS)"> <NativePath>$(NetCoreAppCurrentTestHostSharedFrameworkPath)</NativePath> <RuntimePath Condition="'$(IsNETCoreAppSrc)' == 'true'">$(NetCoreAppCurrentTestHostSharedFrameworkPath)</RuntimePath> </BinPlaceTargetFrameworks> <!-- Microsoft.NetCore.App.Ref and Microsoft.NetCore.App.Runtime targeting packs --> <BinPlaceTargetFrameworks Include="$(NetCoreAppCurrent)-$(TargetOS)"> <NativePath>$(MicrosoftNetCoreAppRuntimePackNativeDir)</NativePath> <RefPath Condition="'$(IsNETCoreAppRef)' == 'true'">$(MicrosoftNetCoreAppRefPackRefDir)</RefPath> <RuntimePath Condition="'$(IsNETCoreAppSrc)' == 'true'">$(MicrosoftNetCoreAppRuntimePackRidLibTfmDir)</RuntimePath> </BinPlaceTargetFrameworks> <BinPlaceTargetFrameworks Include="@(AdditionalBinPlaceTargetFrameworks)" /> </ItemGroup> <Import Project="$(RepositoryEngineeringDir)targetingpacks.targets" /> <PropertyGroup> <!-- Libraries non-test projects shouldn't reference compat shims. --> <SkipTargetingPackShimReferences Condition="'$(UseLocalTargetingRuntimePack)' == 'true' and '$(IsTestProject)' != 'true' and '$(IsTestSupportProject)' != 'true'">true</SkipTargetingPackShimReferences> </PropertyGroup> <Import Project="$(RepositoryEngineeringDir)codeOptimization.targets" /> <Import Project="$(RepositoryEngineeringDir)references.targets" /> <Import Project="$(RepositoryEngineeringDir)resolveContract.targets" /> <Import Project="$(RepositoryEngineeringDir)testing\tests.targets" Condition="'$(EnableTestSupport)' == 'true'" /> <Import Project="$(RepositoryEngineeringDir)testing\linker\trimmingTests.targets" Condition="'$(IsTrimmingTestProject)' == 'true'" /> <Import Project="$(RepositoryEngineeringDir)testing\runtimeConfiguration.targets" /> <Import Project="$(RepositoryEngineeringDir)testing\runsettings.targets" Condition="'$(EnableRunSettingsSupport)' == 'true'" /> <Import Project="$(RepositoryEngineeringDir)testing\coverage.targets" Condition="'$(EnableRunSettingsSupport)' == 'true' or '$(EnableCoverageSupport)' == 'true'" /> <Import Project="$(RepositoryEngineeringDir)slngen.targets" Condition="'$(IsSlnGen)' == 'true'" /> <Import Project="$(RepositoryEngineeringDir)illink.targets" Condition="'$(IsSourceProject)' == 'true'" /> <Import Project="$(RepositoryEngineeringDir)AvoidRestoreCycleOnSelfReference.targets" Condition="'$(AvoidRestoreCycleOnSelfReference)' == 'true'" /> <Import Project="$(RepositoryEngineeringDir)packaging.targets" Condition="'$(IsPackable)' == 'true'" /> <ItemGroup Condition="'$(UseTargetFrameworkPackage)' != 'false'"> <PackageReference Include="Microsoft.DotNet.Build.Tasks.TargetFramework" Version="$(MicrosoftDotNetBuildTasksTargetFrameworkVersion)" PrivateAssets="all" IsImplicitlyDefined="true" /> </ItemGroup> <ItemGroup Condition="'$(IsSourceProject)' == 'true' or '$(IsReferenceAssemblyProject)' == 'true' or '$(IsPartialFacadeAssembly)' == 'true'"> <PackageReference Include="Microsoft.DotNet.ApiCompat" Condition="'$(DotNetBuildFromSource)' != 'true'" 
Version="$(MicrosoftDotNetApiCompatVersion)" PrivateAssets="all" IsImplicitlyDefined="true" /> <PackageReference Include="Microsoft.DotNet.GenAPI" Condition="'$(DotNetBuildFromSource)' != 'true'" Version="$(MicrosoftDotNetGenApiVersion)" PrivateAssets="all" IsImplicitlyDefined="true" /> <PackageReference Include="Microsoft.DotNet.GenFacades" Version="$(MicrosoftDotNetGenFacadesVersion)" PrivateAssets="all" IsImplicitlyDefined="true" /> </ItemGroup> <!-- Do not clean binplace assets in the ref targeting pack to avoid incremental build failures when the SDK tries to resolve the assets from the FrameworkList. --> <Target Name="RemoveTargetingPackIncrementalClean" Condition="'@(AdditionalCleanDirectories)' != ''" BeforeTargets="IncrementalCleanAdditionalDirectories; CleanAdditionalDirectories"> <ItemGroup> <AdditionalCleanDirectories Remove="@(AdditionalCleanDirectories)" Condition="'%(Identity)' == '$(MicrosoftNetCoreAppRefPackRefDir)'" /> </ItemGroup> </Target> <!-- Adds Nullable annotation attributes to non .NETCoreApp builds. --> <ItemGroup Condition="'$(Nullable)' != '' and '$(TargetFrameworkIdentifier)' != '.NETCoreApp'"> <Compile Include="$(CoreLibSharedDir)System\Diagnostics\CodeAnalysis\NullableAttributes.cs" Link="System\Diagnostics\CodeAnalysis\NullableAttributes.cs" /> </ItemGroup> <!-- If a tfm doesn't target .NETCoreApp but uses the platform support attributes, then we include the System.Runtime.Versioning*Platform* annotation attribute classes in the project as internal. If a project has specified assembly-level SupportedOSPlatforms or UnsupportedOSPlatforms, we can infer the need without having IncludePlatformAttributes set. --> <PropertyGroup> <IncludePlatformAttributes Condition="'$(IncludePlatformAttributes)' == '' and ('$(SupportedOSPlatforms)' != '' or '$(UnsupportedOSPlatforms)' != '')">true</IncludePlatformAttributes> </PropertyGroup> <ItemGroup Condition="'$(IncludePlatformAttributes)' == 'true' and '$(TargetFrameworkIdentifier)' != '.NETCoreApp'"> <Compile Include="$(CoreLibSharedDir)System\Runtime\Versioning\PlatformAttributes.cs" Link="System\Runtime\Versioning\PlatformAttributes.cs" /> </ItemGroup> <!-- Adds ObsoleteAttribute to projects that need to apply downlevel Obsoletions with DiagnosticId and UrlFormat --> <Choose> <When Condition="'$(IncludeInternalObsoleteAttribute)' == 'true' and '$(TargetFrameworkIdentifier)' != '.NETCoreApp'"> <ItemGroup> <Compile Include="$(CoreLibSharedDir)System\ObsoleteAttribute.cs" Link="System\ObsoleteAttribute.cs" /> </ItemGroup> <PropertyGroup> <!-- Suppress CS0436 to allow ObsoleteAttribute to be internally defined and used in netstandard --> <NoWarn>$(NoWarn);CS0436</NoWarn> </PropertyGroup> </When> </Choose> <PropertyGroup> <SkipLocalsInit Condition="'$(SkipLocalsInit)' == '' and '$(MSBuildProjectExtension)' == '.csproj' and '$(IsNETCoreAppSrc)' == 'true' and '$(TargetFrameworkIdentifier)' == '.NETCoreApp'">true</SkipLocalsInit> </PropertyGroup> <!--Instructs compiler not to emit .locals init, using SkipLocalsInitAttribute.--> <Choose> <When Condition="'$(SkipLocalsInit)' == 'true'"> <PropertyGroup > <!-- This is needed to use the SkipLocalsInitAttribute. 
--> <AllowUnsafeBlocks>true</AllowUnsafeBlocks> </PropertyGroup> <ItemGroup> <Compile Include="$(CommonPath)SkipLocalsInit.cs" Link="Common\SkipLocalsInit.cs" /> </ItemGroup> </When> </Choose> <PropertyGroup> <BuildAnalyzerReferences>$(BuildProjectReferences)</BuildAnalyzerReferences> <BuildAnalyzerReferences Condition="'$(BuildingInsideVisualStudio)' == 'true'">false</BuildAnalyzerReferences> </PropertyGroup> <ItemGroup> <!-- Ensure AnalyzerReference items are restored and built. The target framework of analyzers has no relationship to that of the referencing project, so we don't apply TargetFramework filters, nor do we pass in TargetFramework. When BuildProjectReferences=false, we set BuildReference=false so that we don't try to call GetTargetPath in the outer build of the analyzer project. --> <ProjectReference Include="@(AnalyzerReference)" SkipGetTargetFrameworkProperties="true" UndefineProperties="TargetFramework" ReferenceOutputAssembly="false" PrivateAssets="all" BuildReference="$(BuildAnalyzerReferences)" /> </ItemGroup> <Target Name="GetAnalyzerPackFiles" DependsOnTargets="$(GenerateNuspecDependsOn)" Returns="@(_AnalyzerPackFile)"> <PropertyGroup> <_analyzerPath>analyzers/dotnet</_analyzerPath> <_analyzerPath Condition="'$(AnalyzerRoslynVersion)' != ''">$(_analyzerPath)/roslyn$(AnalyzerRoslynVersion)</_analyzerPath> <_analyzerPath Condition="'$(AnalyzerLanguage)' != ''">$(_analyzerPath)/$(AnalyzerLanguage)</_analyzerPath> </PropertyGroup> <ItemGroup> <_AnalyzerPackFile Include="@(_BuildOutputInPackage)" IsSymbol="false" /> <_AnalyzerPackFile Include="@(_TargetPathsToSymbols)" IsSymbol="true" /> <_AnalyzerPackFile PackagePath="$(_analyzerPath)/%(TargetPath)" /> </ItemGroup> <Error Condition="'%(_AnalyzerPackFile.TargetFramework)' != 'netstandard2.0'" Text="Analyzers must only target netstandard2.0 since they run in the compiler which targets netstandard2.0. The following files were found to target '%(_AnalyzerPackFile.TargetFramework)': @(_AnalyzerPackFile)" /> </Target> </Project>
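The platform-attributes block in the targets above infers IncludePlatformAttributes whenever assembly-level SupportedOSPlatforms or UnsupportedOSPlatforms is set. A minimal consuming-project sketch (the TFM list and platform value below are hypothetical, not taken from the repository): the netstandard2.0 build compiles the internal PlatformAttributes.cs in, while the .NETCoreApp build gets the attributes from the framework.

<!-- Hypothetical library project: because SupportedOSPlatforms is set at the
     assembly level, the targets infer IncludePlatformAttributes=true, so the
     netstandard2.0 build includes the internal platform attribute sources. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFrameworks>$(NetCoreAppCurrent);netstandard2.0</TargetFrameworks>
    <SupportedOSPlatforms>windows</SupportedOSPlatforms>
  </PropertyGroup>
</Project>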
1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
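The reworked IsGeneratorProject detection described above ('gen' as the project's parent directory, or as its parent's parent) can be sketched with MSBuild path property functions. This is an illustrative reconstruction of that rule, not the exact condition used in the repository; the underscore-prefixed helper properties are invented for the example.

<PropertyGroup>
  <!-- Name of the directory containing the project file, and of that directory's parent. -->
  <_projectDirName>$([System.IO.Path]::GetFileName('$(MSBuildProjectDirectory)'))</_projectDirName>
  <_projectParentDirName>$([System.IO.Path]::GetFileName($([System.IO.Path]::GetDirectoryName('$(MSBuildProjectDirectory)'))))</_projectParentDirName>
  <!-- True for both gen/Project.csproj and gen/SubDir/Project.csproj layouts. -->
  <IsGeneratorProject Condition="'$(IsGeneratorProject)' == '' and ('$(_projectDirName)' == 'gen' or '$(_projectParentDirName)' == 'gen')">true</IsGeneratorProject>
</PropertyGroup>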
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/libraries/Microsoft.NETCore.Platforms/src/Microsoft.NETCore.Platforms.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <TargetFrameworks>$(NetCoreAppToolCurrent);$(NetFrameworkToolCurrent)</TargetFrameworks> <EnableBinPlacing>false</EnableBinPlacing> <!-- This project should not build against the live built .NETCoreApp targeting pack as it contributes to the build itself. --> <UseLocalTargetingRuntimePack>false</UseLocalTargetingRuntimePack> <AssemblyName>Microsoft.NETCore.Platforms.BuildTasks</AssemblyName> <IsSourceProject>false</IsSourceProject> <IncludeBuildOutput>false</IncludeBuildOutput> <IncludeSymbols>false</IncludeSymbols> <IsPackable>true</IsPackable> <PackageId>$(MSBuildProjectName)</PackageId> <SuppressDependenciesWhenPacking>true</SuppressDependenciesWhenPacking> <PackageDescription>Provides runtime information required to resolve target framework, platform, and runtime specific implementations of .NETCore packages.</PackageDescription> <NoWarn>$(NoWarn);NU5128</NoWarn> <!-- No Dependencies--> <AvoidRestoreCycleOnSelfReference>true</AvoidRestoreCycleOnSelfReference> <!-- TODO: Remove with AvoidRestoreCycleOnSelfReference hack. --> <PackageValidationBaselineName>$(MSBuildProjectName)</PackageValidationBaselineName> <BeforePack>GenerateRuntimeJson;UpdateRuntimeJson;$(BeforePack)</BeforePack> <_generateRuntimeGraphTargetFramework Condition="'$(MSBuildRuntimeType)' == 'core'">$(NetCoreAppToolCurrent)</_generateRuntimeGraphTargetFramework> <_generateRuntimeGraphTargetFramework Condition="'$(MSBuildRuntimeType)' != 'core'">net472</_generateRuntimeGraphTargetFramework> <_generateRuntimeGraphTask>$([MSBuild]::NormalizePath('$(BaseOutputPath)', $(Configuration), '$(_generateRuntimeGraphTargetFramework)', '$(AssemblyName).dll'))</_generateRuntimeGraphTask> <!-- When building from source, ensure the RID we're building for is part of the RID graph --> <AdditionalRuntimeIdentifiers Condition="'$(DotNetBuildFromSource)' == 'true'">$(AdditionalRuntimeIdentifiers);$(OutputRID)</AdditionalRuntimeIdentifiers> </PropertyGroup> <ItemGroup Condition="'$(TargetFrameworkIdentifier)' == '.NETFramework'"> <Compile Include="BuildTask.Desktop.cs" /> <Compile Include="AssemblyResolver.cs" /> <Compile Include="$(CoreLibSharedDir)System\Diagnostics\CodeAnalysis\UnconditionalSuppressMessageAttribute.cs" /> </ItemGroup> <ItemGroup> <Compile Include="BuildTask.cs" /> <Compile Include="Extensions.cs" /> <Compile Include="GenerateRuntimeGraph.cs" /> <Compile Include="RID.cs" /> <Compile Include="RuntimeGroupCollection.cs" /> <Compile Include="RuntimeGroup.cs" /> <Compile Include="RuntimeVersion.cs" /> </ItemGroup> <ItemGroup> <Content Condition="'$(AdditionalRuntimeIdentifiers)' == ''" Include="runtime.json" PackagePath="/" /> <Content Condition="'$(AdditionalRuntimeIdentifiers)' != ''" Include="$(IntermediateOutputPath)runtime.json" PackagePath="/" /> <Content Include="$(PlaceholderFile)" PackagePath="lib/netstandard1.0" /> </ItemGroup> <ItemGroup> <PackageReference Include="Microsoft.Build.Tasks.Core" Version="$(MicrosoftBuildTasksCoreVersion)" /> <PackageReference Include="NuGet.ProjectModel" Version="$(NugetProjectModelVersion)" /> <PackageReference Include="Newtonsoft.Json" Version="$(NewtonsoftJsonVersion)" /> </ItemGroup> <Import Project="runtimeGroups.props" /> <UsingTask TaskName="GenerateRuntimeGraph" AssemblyFile="$(_generateRuntimeGraphTask)"/> <Target Name="GenerateRuntimeJson" Condition="'$(AdditionalRuntimeIdentifiers)' != ''"> <MakeDir Directories="$(IntermediateOutputPath)" /> <GenerateRuntimeGraph RuntimeGroups="@(RuntimeGroupWithQualifiers)" 
AdditionalRuntimeIdentifiers="$(AdditionalRuntimeIdentifiers)" AdditionalRuntimeIdentifierParent="$(AdditionalRuntimeIdentifierParent)" RuntimeJson="$(IntermediateOutputPath)runtime.json" UpdateRuntimeFiles="True" /> </Target> <Target Name="UpdateRuntimeJson"> <!-- Generates a Runtime graph using RuntimeGroups and diffs it with the graph described by runtime.json and runtime.compatibility.json Specifying UpdateRuntimeFiles=true skips the diff and updates those files. The graph can be visualized using the generated dmgl --> <MakeDir Directories="$(OutputPath)" /> <GenerateRuntimeGraph RuntimeGroups="@(RuntimeGroupWithQualifiers)" RuntimeJson="runtime.json" CompatibilityMap="runtime.compatibility.json" RuntimeDirectedGraph="$(OutputPath)runtime.json.dgml" UpdateRuntimeFiles="$(UpdateRuntimeFiles)" /> </Target> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <TargetFrameworks>$(NetCoreAppToolCurrent);$(NetFrameworkToolCurrent)</TargetFrameworks> <EnableBinPlacing>false</EnableBinPlacing> <!-- This project should not build against the live built .NETCoreApp targeting pack as it contributes to the build itself. --> <UseLocalTargetingRuntimePack>false</UseLocalTargetingRuntimePack> <!-- Use targeting pack references instead of granular ones in the project file. --> <DisableImplicitAssemblyReferences>false</DisableImplicitAssemblyReferences> <AssemblyName>Microsoft.NETCore.Platforms.BuildTasks</AssemblyName> <IncludeBuildOutput>false</IncludeBuildOutput> <IncludeSymbols>false</IncludeSymbols> <IsPackable>true</IsPackable> <PackageId>$(MSBuildProjectName)</PackageId> <SuppressDependenciesWhenPacking>true</SuppressDependenciesWhenPacking> <PackageDescription>Provides runtime information required to resolve target framework, platform, and runtime specific implementations of .NETCore packages.</PackageDescription> <NoWarn>$(NoWarn);NU5128</NoWarn> <!-- No Dependencies--> <AvoidRestoreCycleOnSelfReference>true</AvoidRestoreCycleOnSelfReference> <!-- TODO: Remove with AvoidRestoreCycleOnSelfReference hack. --> <PackageValidationBaselineName>$(MSBuildProjectName)</PackageValidationBaselineName> <BeforePack>GenerateRuntimeJson;UpdateRuntimeJson;$(BeforePack)</BeforePack> <_generateRuntimeGraphTargetFramework Condition="'$(MSBuildRuntimeType)' == 'core'">$(NetCoreAppToolCurrent)</_generateRuntimeGraphTargetFramework> <_generateRuntimeGraphTargetFramework Condition="'$(MSBuildRuntimeType)' != 'core'">net472</_generateRuntimeGraphTargetFramework> <_generateRuntimeGraphTask>$([MSBuild]::NormalizePath('$(BaseOutputPath)', $(Configuration), '$(_generateRuntimeGraphTargetFramework)', '$(AssemblyName).dll'))</_generateRuntimeGraphTask> <!-- When building from source, ensure the RID we're building for is part of the RID graph --> <AdditionalRuntimeIdentifiers Condition="'$(DotNetBuildFromSource)' == 'true'">$(AdditionalRuntimeIdentifiers);$(OutputRID)</AdditionalRuntimeIdentifiers> </PropertyGroup> <ItemGroup Condition="'$(TargetFrameworkIdentifier)' == '.NETFramework'"> <Compile Include="BuildTask.Desktop.cs" /> <Compile Include="AssemblyResolver.cs" /> <Compile Include="$(CoreLibSharedDir)System\Diagnostics\CodeAnalysis\UnconditionalSuppressMessageAttribute.cs" /> </ItemGroup> <ItemGroup> <Compile Include="BuildTask.cs" /> <Compile Include="Extensions.cs" /> <Compile Include="GenerateRuntimeGraph.cs" /> <Compile Include="RID.cs" /> <Compile Include="RuntimeGroupCollection.cs" /> <Compile Include="RuntimeGroup.cs" /> <Compile Include="RuntimeVersion.cs" /> </ItemGroup> <ItemGroup> <Content Condition="'$(AdditionalRuntimeIdentifiers)' == ''" Include="runtime.json" PackagePath="/" /> <Content Condition="'$(AdditionalRuntimeIdentifiers)' != ''" Include="$(IntermediateOutputPath)runtime.json" PackagePath="/" /> <Content Include="$(PlaceholderFile)" PackagePath="lib/netstandard1.0" /> </ItemGroup> <ItemGroup> <PackageReference Include="Microsoft.Build.Tasks.Core" Version="$(MicrosoftBuildTasksCoreVersion)" /> <PackageReference Include="NuGet.ProjectModel" Version="$(NugetProjectModelVersion)" /> <PackageReference Include="Newtonsoft.Json" Version="$(NewtonsoftJsonVersion)" /> </ItemGroup> <Import Project="runtimeGroups.props" /> <UsingTask TaskName="GenerateRuntimeGraph" AssemblyFile="$(_generateRuntimeGraphTask)"/> <Target Name="GenerateRuntimeJson" Condition="'$(AdditionalRuntimeIdentifiers)' != ''"> <MakeDir 
Directories="$(IntermediateOutputPath)" /> <GenerateRuntimeGraph RuntimeGroups="@(RuntimeGroupWithQualifiers)" AdditionalRuntimeIdentifiers="$(AdditionalRuntimeIdentifiers)" AdditionalRuntimeIdentifierParent="$(AdditionalRuntimeIdentifierParent)" RuntimeJson="$(IntermediateOutputPath)runtime.json" UpdateRuntimeFiles="True" /> </Target> <Target Name="UpdateRuntimeJson"> <!-- Generates a Runtime graph using RuntimeGroups and diffs it with the graph described by runtime.json and runtime.compatibility.json Specifying UpdateRuntimeFiles=true skips the diff and updates those files. The graph can be visualized using the generated dmgl --> <MakeDir Directories="$(OutputPath)" /> <GenerateRuntimeGraph RuntimeGroups="@(RuntimeGroupWithQualifiers)" RuntimeJson="runtime.json" CompatibilityMap="runtime.compatibility.json" RuntimeDirectedGraph="$(OutputPath)runtime.json.dgml" UpdateRuntimeFiles="$(UpdateRuntimeFiles)" /> </Target> </Project>
1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/libraries/System.Runtime.InteropServices/gen/Directory.Build.props
<Project> <Import Project="..\Directory.Build.props" /> <PropertyGroup> <IsShipping>false</IsShipping> <!-- We manually enable DllImportGenerator for projects in this folder as part of testing. --> <EnableDllImportGenerator>false</EnableDllImportGenerator> <EnableDefaultItems>true</EnableDefaultItems> <CLSCompliant>false</CLSCompliant> <ILLinkTrimAssembly>false</ILLinkTrimAssembly> <IsGeneratorProject>true</IsGeneratorProject> </PropertyGroup> </Project>
<Project> <Import Project="..\Directory.Build.props" /> <PropertyGroup> <IsShipping>false</IsShipping> <!-- We manually enable DllImportGenerator for projects in this folder as part of testing. --> <EnableDllImportGenerator>false</EnableDllImportGenerator> <EnableDefaultItems>true</EnableDefaultItems> <CLSCompliant>false</CLSCompliant> <ILLinkTrimAssembly>false</ILLinkTrimAssembly> </PropertyGroup> </Project>
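The diff above drops the explicit IsGeneratorProject override: with the improved detection, projects sitting one level below a 'gen' directory are classified automatically. A sketch of the two layouts this now covers (the generator project name is illustrative only):

<!-- System.Runtime.InteropServices/gen/SomeGenerator.csproj          parent dir is 'gen'
     System.Runtime.InteropServices/gen/DllImportGenerator/
       DllImportGenerator.csproj                                      parent's parent is 'gen'
     Both shapes infer IsGeneratorProject=true without a manual property. -->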
1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/tests/JIT/jit64/valuetypes/nullable/castclass/null/castclass-null024.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <PropertyGroup> <DebugType>PdbOnly</DebugType> </PropertyGroup> <ItemGroup> <Compile Include="castclass-null024.cs" /> <Compile Include="..\structdef.cs" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <PropertyGroup> <DebugType>PdbOnly</DebugType> </PropertyGroup> <ItemGroup> <Compile Include="castclass-null024.cs" /> <Compile Include="..\structdef.cs" /> </ItemGroup> </Project>
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/tests/JIT/jit64/valuetypes/nullable/castclass/castclass/castclass039.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <PropertyGroup> <DebugType>PdbOnly</DebugType> </PropertyGroup> <ItemGroup> <Compile Include="castclass039.cs" /> <Compile Include="..\structdef.cs" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <PropertyGroup> <DebugType>PdbOnly</DebugType> </PropertyGroup> <ItemGroup> <Compile Include="castclass039.cs" /> <Compile Include="..\structdef.cs" /> </ItemGroup> </Project>
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/tests/JIT/Regression/JitBlue/Runtime_34170/Runtime_34170.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> </PropertyGroup> <PropertyGroup> <DebugType /> <Optimize>True</Optimize> <AllowUnsafeBlocks>True</AllowUnsafeBlocks> </PropertyGroup> <ItemGroup> <Compile Include="$(MSBuildProjectName).cs" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> </PropertyGroup> <PropertyGroup> <DebugType /> <Optimize>True</Optimize> <AllowUnsafeBlocks>True</AllowUnsafeBlocks> </PropertyGroup> <ItemGroup> <Compile Include="$(MSBuildProjectName).cs" /> </ItemGroup> </Project>
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/tests/JIT/Regression/CLR-x86-JIT/V1-M11-Beta1/b40496/b40496.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> </PropertyGroup> <PropertyGroup> <DebugType>PdbOnly</DebugType> </PropertyGroup> <ItemGroup> <Compile Include="$(MSBuildProjectName).cs" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> </PropertyGroup> <PropertyGroup> <DebugType>PdbOnly</DebugType> </PropertyGroup> <ItemGroup> <Compile Include="$(MSBuildProjectName).cs" /> </ItemGroup> </Project>
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/tests/JIT/jit64/hfa/main/testC/hfa_sd0C_d.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <PropertyGroup> <DebugType>full</DebugType> <Optimize>False</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="hfa_testC.cs" /> </ItemGroup> <ItemGroup> <ProjectReference Include="..\dll\common.csproj" /> <ProjectReference Include="..\dll\hfa_simple_f64_common.csproj" /> <ProjectReference Include="..\dll\hfa_simple_f64_managed.csproj" /> <ProjectReference Include="..\dll\CMakelists.txt" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <PropertyGroup> <DebugType>full</DebugType> <Optimize>False</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="hfa_testC.cs" /> </ItemGroup> <ItemGroup> <ProjectReference Include="..\dll\common.csproj" /> <ProjectReference Include="..\dll\hfa_simple_f64_common.csproj" /> <ProjectReference Include="..\dll\hfa_simple_f64_managed.csproj" /> <ProjectReference Include="..\dll\CMakelists.txt" /> </ItemGroup> </Project>
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist to categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref'. - `IsGeneratorProject`: The project's parent directory is 'gen'. - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name ends with either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name ends with '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsRuntimeAssembly`: True when all of the above are false. - `IsSourceProject`: True when the project's parent directory is 'src'. The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs. IsGeneratorProject). Additionally, the repository tree shows that the parent directory of a generator project often isn't 'gen' but the project's own name, in cases where multiple generator projects live under 'gen'. This led to such projects being treated incorrectly and to developers having to set the "IsGeneratorProject" property manually for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref'. - `IsGeneratorProject`: **The project's parent directory is 'gen', or the parent's parent directory is 'gen'.** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name ends with either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name ends with '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** properties are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsSourceProject`: **True when all of the above are false.**
./src/tests/JIT/jit64/valuetypes/nullable/castclass/castclass/castclass020.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <PropertyGroup> <DebugType>PdbOnly</DebugType> </PropertyGroup> <ItemGroup> <Compile Include="castclass020.cs" /> <Compile Include="..\structdef.cs" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <PropertyGroup> <DebugType>PdbOnly</DebugType> </PropertyGroup> <ItemGroup> <Compile Include="castclass020.cs" /> <Compile Include="..\structdef.cs" /> </ItemGroup> </Project>
-1
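The suffix rules for IsTestProject and IsTrimmingTestProject quoted in the description could look roughly like the following sketch. This is illustrative only: `_IsUnderTestsDir` is an invented helper, and the '/tests/' substring check is deliberately simplified (normalizing separators via Replace is an assumption, not the repo's real logic).

```xml
<!-- Illustrative sketch only; path-separator handling is simplified. -->
<PropertyGroup>
  <!-- True when the project sits anywhere under a '/tests/' directory. -->
  <_IsUnderTestsDir Condition="$(MSBuildProjectDirectory.Replace('\', '/').Contains('/tests/'))">true</_IsUnderTestsDir>
  <!-- Test project: under /tests/ and named *.Tests or *.UnitTests. -->
  <IsTestProject Condition="'$(_IsUnderTestsDir)' == 'true' and
                            ($(MSBuildProjectName.EndsWith('.Tests')) or
                             $(MSBuildProjectName.EndsWith('.UnitTests')))">true</IsTestProject>
  <!-- Trimming-test project: under /tests/ and named *.TrimmingTests. -->
  <IsTrimmingTestProject Condition="'$(_IsUnderTestsDir)' == 'true' and
                                    $(MSBuildProjectName.EndsWith('.TrimmingTests'))">true</IsTrimmingTestProject>
</PropertyGroup>
```

Note that a name like Foo.TrimmingTests does not end with '.Tests', so the two classifications stay disjoint.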
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist to categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref'. - `IsGeneratorProject`: The project's parent directory is 'gen'. - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name ends with either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name ends with '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsRuntimeAssembly`: True when all of the above are false. - `IsSourceProject`: True when the project's parent directory is 'src'. The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs. IsGeneratorProject). Additionally, the repository tree shows that the parent directory of a generator project often isn't 'gen' but the project's own name, in cases where multiple generator projects live under 'gen'. This led to such projects being treated incorrectly and to developers having to set the "IsGeneratorProject" property manually for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref'. - `IsGeneratorProject`: **The project's parent directory is 'gen', or the parent's parent directory is 'gen'.** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name ends with either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name ends with '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** properties are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsSourceProject`: **True when all of the above are false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist to categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref'. - `IsGeneratorProject`: The project's parent directory is 'gen'. - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name ends with either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name ends with '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsRuntimeAssembly`: True when all of the above are false. - `IsSourceProject`: True when the project's parent directory is 'src'. The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs. IsGeneratorProject). Additionally, the repository tree shows that the parent directory of a generator project often isn't 'gen' but the project's own name, in cases where multiple generator projects live under 'gen'. This led to such projects being treated incorrectly and to developers having to set the "IsGeneratorProject" property manually for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref'. - `IsGeneratorProject`: **The project's parent directory is 'gen', or the parent's parent directory is 'gen'.** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name ends with either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name ends with '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** properties are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsSourceProject`: **True when all of the above are false.**
./src/tests/GC/Scenarios/GCSimulator/GCSimulator_6.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <AllowUnsafeBlocks>true</AllowUnsafeBlocks> <GCStressIncompatible>true</GCStressIncompatible> <CLRTestExecutionArguments>-t 1 -tp 0 -dz 17 -sdz 8500 -dc 10000 -sdc 5000 -lt 5 -f -dp 0.0 -dw 0.0</CLRTestExecutionArguments> <IsGCSimulatorTest>true</IsGCSimulatorTest> <CLRTestProjectToRun>GCSimulator.csproj</CLRTestProjectToRun> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <ItemGroup> <Compile Include="GCSimulator.cs" /> <Compile Include="lifetimefx.cs" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <AllowUnsafeBlocks>true</AllowUnsafeBlocks> <GCStressIncompatible>true</GCStressIncompatible> <CLRTestExecutionArguments>-t 1 -tp 0 -dz 17 -sdz 8500 -dc 10000 -sdc 5000 -lt 5 -f -dp 0.0 -dw 0.0</CLRTestExecutionArguments> <IsGCSimulatorTest>true</IsGCSimulatorTest> <CLRTestProjectToRun>GCSimulator.csproj</CLRTestProjectToRun> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <ItemGroup> <Compile Include="GCSimulator.cs" /> <Compile Include="lifetimefx.cs" /> </ItemGroup> </Project>
-1
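Continuing the same hypothetical sketch, the "test support" bucket (code under '/tests/' that is neither a test project nor a trimming-test project) then falls out naturally. Note how it encodes the change's clarification that IsTrimmingTestProject is excluded as well:

```xml
<!-- Illustrative continuation of the sketch above. -->
<PropertyGroup>
  <IsTestSupportProject Condition="'$(_IsUnderTestsDir)' == 'true' and
                                   '$(IsTestProject)' != 'true' and
                                   '$(IsTrimmingTestProject)' != 'true'">true</IsTestSupportProject>
</PropertyGroup>
```

Without the IsTrimmingTestProject exclusion, trimming tests would be misclassified as test support projects.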
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist to categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref'. - `IsGeneratorProject`: The project's parent directory is 'gen'. - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name ends with either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name ends with '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsRuntimeAssembly`: True when all of the above are false. - `IsSourceProject`: True when the project's parent directory is 'src'. The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs. IsGeneratorProject). Additionally, the repository tree shows that the parent directory of a generator project often isn't 'gen' but the project's own name, in cases where multiple generator projects live under 'gen'. This led to such projects being treated incorrectly and to developers having to set the "IsGeneratorProject" property manually for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref'. - `IsGeneratorProject`: **The project's parent directory is 'gen', or the parent's parent directory is 'gen'.** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name ends with either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name ends with '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** properties are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsSourceProject`: **True when all of the above are false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist to categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref'. - `IsGeneratorProject`: The project's parent directory is 'gen'. - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name ends with either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name ends with '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsRuntimeAssembly`: True when all of the above are false. - `IsSourceProject`: True when the project's parent directory is 'src'. The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs. IsGeneratorProject). Additionally, the repository tree shows that the parent directory of a generator project often isn't 'gen' but the project's own name, in cases where multiple generator projects live under 'gen'. This led to such projects being treated incorrectly and to developers having to set the "IsGeneratorProject" property manually for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref'. - `IsGeneratorProject`: **The project's parent directory is 'gen', or the parent's parent directory is 'gen'.** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name ends with either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name ends with '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** properties are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsSourceProject`: **True when all of the above are false.**
./src/tests/JIT/CodeGenBringUpTests/mul3_ro.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <PropertyGroup> <DebugType>PdbOnly</DebugType> <Optimize>True</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="mul3.cs" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <PropertyGroup> <DebugType>PdbOnly</DebugType> <Optimize>True</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="mul3.cs" /> </ItemGroup> </Project>
-1
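The renamed IsReferenceAssemblyProject property keys off a parent directory named 'ref'. In sketch form (again hypothetical, mirroring the IsGeneratorProject example above):

```xml
<!-- Illustrative sketch only. -->
<PropertyGroup>
  <IsReferenceAssemblyProject Condition="'$([System.IO.Path]::GetFileName($(MSBuildProjectDirectory)))' == 'ref'">true</IsReferenceAssemblyProject>
</PropertyGroup>
```

The rename from IsReferenceAssembly to IsReferenceAssemblyProject aligns the name with the other Is*Project properties.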
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist to categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref'. - `IsGeneratorProject`: The project's parent directory is 'gen'. - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name ends with either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name ends with '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsRuntimeAssembly`: True when all of the above are false. - `IsSourceProject`: True when the project's parent directory is 'src'. The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs. IsGeneratorProject). Additionally, the repository tree shows that the parent directory of a generator project often isn't 'gen' but the project's own name, in cases where multiple generator projects live under 'gen'. This led to such projects being treated incorrectly and to developers having to set the "IsGeneratorProject" property manually for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref'. - `IsGeneratorProject`: **The project's parent directory is 'gen', or the parent's parent directory is 'gen'.** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name ends with either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name ends with '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** properties are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsSourceProject`: **True when all of the above are false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist to categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref'. - `IsGeneratorProject`: The project's parent directory is 'gen'. - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name ends with either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name ends with '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsRuntimeAssembly`: True when all of the above are false. - `IsSourceProject`: True when the project's parent directory is 'src'. The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs. IsGeneratorProject). Additionally, the repository tree shows that the parent directory of a generator project often isn't 'gen' but the project's own name, in cases where multiple generator projects live under 'gen'. This led to such projects being treated incorrectly and to developers having to set the "IsGeneratorProject" property manually for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref'. - `IsGeneratorProject`: **The project's parent directory is 'gen', or the parent's parent directory is 'gen'.** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name ends with either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name ends with '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** properties are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsSourceProject`: **True when all of the above are false.**
./src/tests/GC/Scenarios/DoublinkList/doublinknoleak.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <GCStressIncompatible>true</GCStressIncompatible> <IsLongRunningGCTest>true</IsLongRunningGCTest> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <ItemGroup> <Compile Include="doublinknoleak.cs" /> </ItemGroup> <ItemGroup> <ProjectReference Include="DoubLink.csproj" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <GCStressIncompatible>true</GCStressIncompatible> <IsLongRunningGCTest>true</IsLongRunningGCTest> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <ItemGroup> <Compile Include="doublinknoleak.cs" /> </ItemGroup> <ItemGroup> <ProjectReference Include="DoubLink.csproj" /> </ItemGroup> </Project>
-1
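Under the new scheme IsSourceProject is a pure fallback: true exactly when none of the other classifications apply, which removes the old overlap between IsSourceProject (parent directory is 'src') and IsRuntimeAssembly (everything else). A hedged sketch of that catch-all, assuming the property names listed in the description:

```xml
<!-- Illustrative sketch only: IsSourceProject as the catch-all classification. -->
<PropertyGroup>
  <IsSourceProject Condition="'$(IsSourceProject)' == '' and
                              '$(IsReferenceAssemblyProject)' != 'true' and
                              '$(IsGeneratorProject)' != 'true' and
                              '$(IsTestProject)' != 'true' and
                              '$(IsTrimmingTestProject)' != 'true' and
                              '$(IsTestSupportProject)' != 'true' and
                              '$(UsingMicrosoftNoTargetsSdk)' != 'true' and
                              '$(UsingMicrosoftTraversalSdk)' != 'true'">true</IsSourceProject>
</PropertyGroup>
```

The `'$(IsSourceProject)' == ''` guard lets an explicit per-project value still take precedence over the inferred one.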
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist to categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref'. - `IsGeneratorProject`: The project's parent directory is 'gen'. - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name ends with either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name ends with '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsRuntimeAssembly`: True when all of the above are false. - `IsSourceProject`: True when the project's parent directory is 'src'. The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs. IsGeneratorProject). Additionally, the repository tree shows that the parent directory of a generator project often isn't 'gen' but the project's own name, in cases where multiple generator projects live under 'gen'. This led to such projects being treated incorrectly and to developers having to set the "IsGeneratorProject" property manually for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref'. - `IsGeneratorProject`: **The project's parent directory is 'gen', or the parent's parent directory is 'gen'.** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name ends with either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name ends with '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** properties are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsSourceProject`: **True when all of the above are false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist to categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref'. - `IsGeneratorProject`: The project's parent directory is 'gen'. - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name ends with either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name ends with '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsRuntimeAssembly`: True when all of the above are false. - `IsSourceProject`: True when the project's parent directory is 'src'. The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs. IsGeneratorProject). Additionally, the repository tree shows that the parent directory of a generator project often isn't 'gen' but the project's own name, in cases where multiple generator projects live under 'gen'. This led to such projects being treated incorrectly and to developers having to set the "IsGeneratorProject" property manually for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref'. - `IsGeneratorProject`: **The project's parent directory is 'gen', or the parent's parent directory is 'gen'.** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name ends with either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name ends with '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** properties are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsSourceProject`: **True when all of the above are false.**
./src/libraries/System.Net.Mail/src/System.Net.Mail.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <AllowUnsafeBlocks>true</AllowUnsafeBlocks> <TargetFrameworks>$(NetCoreAppCurrent)-windows;$(NetCoreAppCurrent)-Unix;$(NetCoreAppCurrent)-Browser;$(NetCoreAppCurrent)-tvOS;$(NetCoreAppCurrent)</TargetFrameworks> <Nullable>enable</Nullable> </PropertyGroup> <!-- DesignTimeBuild requires all the TargetFramework Derived Properties to not be present in the first property group. --> <PropertyGroup> <TargetPlatformIdentifier>$([MSBuild]::GetTargetPlatformIdentifier('$(TargetFramework)'))</TargetPlatformIdentifier> <GeneratePlatformNotSupportedAssemblyMessage Condition="'$(TargetPlatformIdentifier)' == ''">SR.PlatformNotSupported_NetMail</GeneratePlatformNotSupportedAssemblyMessage> <DefineConstants Condition="'$(TargetPlatformIdentifier)' == 'tvOS'">$(DefineConstants);NO_NTAUTHENTICATION</DefineConstants> </PropertyGroup> <ItemGroup Condition="'$(TargetPlatformIdentifier)' != ''"> <Compile Include="System\Net\Base64Stream.cs" /> <Compile Include="System\Net\Mime\MimePart.cs" /> <Compile Include="System\Net\Mime\Base64WriteStateInfo.cs" /> <Compile Include="System\Net\Mime\QuotedPrintableStream.cs" /> <Compile Include="System\Net\CloseableStream.cs" /> <Compile Include="System\Net\Mime\EightBitStream.cs" /> <Compile Include="System\Net\Mime\EncodedStreamFactory.cs" /> <Compile Include="System\Net\Mime\IEncodableStream.cs" /> <Compile Include="System\Net\Mime\QEncodedStream.cs" /> <Compile Include="System\Net\Mime\WriteStateInfoBase.cs" /> <Compile Include="System\Net\Mime\BaseWriter.cs" /> <Compile Include="System\Net\Mime\TransferEncoding.cs" /> <Compile Include="System\Net\Mime\ContentDisposition.cs" /> <Compile Include="System\Net\Mime\ContentType.cs" /> <Compile Include="System\Net\Mime\DispositionTypeNames.cs" /> <Compile Include="System\Net\Mime\HeaderCollection.cs" /> <Compile Include="System\Net\Mime\MediaTypeNames.cs" /> <Compile Include="System\Net\Mime\MimeBasePart.cs" /> <Compile Include="System\Net\Mime\SmtpDateTime.cs" /> <Compile Include="System\Net\Mime\MultiAsyncResult.cs" /> <Compile Include="System\Net\Mime\ByteEncoder.cs" /> <Compile Include="System\Net\Mime\Base64Encoder.cs" /> <Compile Include="System\Net\Mime\IByteEncoder.cs" /> <Compile Include="System\Net\Mime\QEncoder.cs" /> <Compile Include="System\Net\TrackingStringDictionary.cs" /> <Compile Include="System\Net\TrackingValidationObjectDictionary.cs" /> <Compile Include="System\Net\Mail\MailHeaderID.cs" /> <Compile Include="System\Net\Mail\MailHeaderInfo.cs" /> <Compile Include="System\Net\BufferBuilder.cs" /> <Compile Include="System\Net\Mail\AlternateView.cs" /> <Compile Include="System\Net\Mail\AlternateViewCollection.cs" /> <Compile Include="System\Net\Mail\Attachment.cs" /> <Compile Include="System\Net\Mail\AttachmentCollection.cs" /> <Compile Include="System\Net\BufferedReadStream.cs" /> <Compile Include="System\Net\Mail\LinkedResource.cs" /> <Compile Include="System\Net\Mail\LinkedResourceCollection.cs" /> <Compile Include="System\Net\Mail\DomainLiteralReader.cs" /> <Compile Include="System\Net\Mail\DotAtomReader.cs" /> <Compile Include="System\Net\Mail\MailAddress.cs" /> <Compile Include="System\Net\Mail\MailAddressCollection.cs" /> <Compile Include="System\Net\Mail\MailAddressParser.cs" /> <Compile Include="System\Net\Mail\MailBnfHelper.cs" /> <Compile Include="System\Net\Mail\MailMessage.cs" /> <Compile Include="System\Net\Mail\MailPriority.cs" /> <Compile Include="System\Net\Mail\QuotedPairReader.cs" /> <Compile 
Include="System\Net\Mail\QuotedStringFormatReader.cs" /> <Compile Include="System\Net\Mail\WhitespaceReader.cs" /> <Compile Include="System\Net\Mime\MimeMultiPart.cs" /> <Compile Include="System\Net\Mime\MimeMultiPartType.cs" /> <Compile Include="System\Net\Mime\MimeWriter.cs" /> <Compile Include="System\Net\Mail\SmtpException.cs" /> <Compile Include="System\Net\Mail\SmtpFailedRecipientException.cs" /> <Compile Include="System\Net\Mail\SmtpFailedRecipientsException.cs" /> <Compile Include="System\Net\Mail\SmtpStatusCode.cs" /> <Compile Include="System\Net\DelegatedStream.cs" /> <Compile Include="$(CommonPath)DisableRuntimeMarshalling.cs" Link="Common\DisableRuntimeMarshalling.cs" /> <Compile Include="$(CommonPath)System\Net\LazyAsyncResult.cs" Link="Common\System\Net\LazyAsyncResult.cs" /> <Compile Include="$(CommonPath)System\Net\Logging\NetEventSource.Common.cs" Link="Common\System\Net\Logging\NetEventSource.Common.cs" /> <Compile Include="$(CommonPath)System\StringExtensions.cs" Link="Common\System\StringExtensions.cs" /> <Compile Include="$(CommonPath)System\HexConverter.cs" Link="Common\System\HexConverter.cs" /> </ItemGroup> <ItemGroup Condition="'$(TargetPlatformIdentifier)' == 'Browser'"> <Compile Include="System\Net\Mail\SmtpClient.Browser.cs" /> </ItemGroup> <!-- Non Browser specific files - internal and security --> <ItemGroup Condition="'$(TargetPlatformIdentifier)' != '' and '$(TargetPlatformIdentifier)' != 'Browser'"> <Compile Include="System\Net\Mail\SmtpClient.cs" /> <Compile Include="System\Net\Mail\ISmtpAuthenticationModule.cs" /> <Compile Include="System\Net\Mail\SmtpAuthenticationManager.cs" /> <Compile Include="System\Net\Mail\SmtpCommands.cs" /> <Compile Include="System\Net\Mail\SmtpConnection.cs" /> <Compile Include="System\Net\Mail\SmtpConnection.Auth.cs" /> <Compile Include="System\Net\Mail\SmtpReplyReader.cs" /> <Compile Include="System\Net\Mail\SmtpReplyReaderFactory.cs" /> <Compile Include="System\Net\Mail\SmtpTransport.cs" /> <Compile Include="System\Net\Mail\SmtpLoginAuthenticationModule.cs" /> <Compile Include="System\Net\Mail\MailWriter.cs" /> <Compile Include="System\Net\Mail\NetEventSource.Mail.cs" /> <Compile Include="$(CommonPath)System\Net\ContextAwareResult.cs" Link="Common\System\Net\ContextAwareResult.cs" /> <Compile Include="$(CommonPath)System\Net\DebugSafeHandleZeroOrMinusOneIsInvalid.cs" Link="Common\System\Net\DebugSafeHandleZeroOrMinusOneIsInvalid.cs" /> <Compile Include="$(CommonPath)System\Net\DebugSafeHandle.cs" Link="Common\System\Net\DebugSafeHandle.cs" /> <Compile Include="$(CommonPath)System\Net\TlsStream.cs" Link="Common\System\Net\TlsStream.cs" /> <Compile Include="$(CommonPath)System\Net\InternalException.cs" Link="Common\System\Net\InternalException.cs" /> <Compile Include="$(CommonPath)System\Net\ExceptionCheck.cs" Link="Common\System\Net\ExceptionCheck.cs" /> <Compile Include="$(CommonPath)System\Collections\Generic\BidirectionalDictionary.cs" Link="Common\System\Collections\Generic\BidirectionalDictionary.cs" /> <Compile Include="$(CommonPath)System\NotImplemented.cs" Link="Common\System\NotImplemented.cs" /> <Compile Include="$(CommonPath)System\Net\Security\NetEventSource.Security.cs" Link="Common\System\Net\Security\NetEventSource.Security.cs" /> <Compile Include="$(CommonPath)System\Net\SecurityProtocol.cs" Link="Common\System\Net\SecurityProtocol.cs" /> </ItemGroup> <!-- NT authentication specific files --> <ItemGroup Condition="'$(TargetPlatformIdentifier)' != '' and '$(TargetPlatformIdentifier)' != 'Browser' and 
'$(TargetPlatformIdentifier)' != 'tvOS'"> <Compile Include="$(CommonPath)System\Net\ContextFlagsPal.cs" Link="Common\System\Net\ContextFlagsPal.cs" /> <Compile Include="$(CommonPath)System\Net\NegotiationInfoClass.cs" Link="Common\System\Net\NegotiationInfoClass.cs" /> <Compile Include="$(CommonPath)System\Net\NTAuthentication.Common.cs" Link="Common\System\Net\NTAuthentication.Common.cs" /> <Compile Include="$(CommonPath)System\Net\SecurityStatusPal.cs" Link="Common\System\Net\SecurityStatusPal.cs" /> <Compile Include="$(CommonPath)System\Net\Security\SafeCredentialReference.cs" Link="Common\System\Net\Security\SafeCredentialReference.cs" /> <Compile Include="$(CommonPath)System\Net\Security\SSPIHandleCache.cs" Link="Common\System\Net\Security\SSPIHandleCache.cs" /> <Compile Include="System\Net\Mail\SmtpNegotiateAuthenticationModule.cs" /> <Compile Include="System\Net\Mail\SmtpNtlmAuthenticationModule.cs" /> </ItemGroup> <!-- Unix specific files --> <ItemGroup Condition="'$(TargetPlatformIdentifier)' == 'Unix' or '$(TargetPlatformIdentifier)' == 'tvOS'"> <Compile Include="$(CommonPath)System\Net\ContextAwareResult.Unix.cs" Link="Common\System\Net\ContextAwareResult.Unix.cs" /> </ItemGroup> <!-- Unix specific files (NT Authentication) --> <ItemGroup Condition="'$(TargetPlatformIdentifier)' == 'Unix'"> <Compile Include="$(CommonPath)Interop\Unix\Interop.Libraries.cs" Link="Common\Interop\Unix\Interop.Libraries.cs" /> <Compile Include="$(CommonPath)Interop\Unix\System.Net.Security.Native\Interop.GssBuffer.cs" Link="Common\Interop\Unix\System.Net.Security.Native\Interop.GssBuffer.cs" /> <Compile Include="$(CommonPath)Interop\Unix\System.Net.Security.Native\Interop.GssApiException.cs" Link="Common\Interop\Unix\System.Net.Security.Native\Interop.GssApiException.cs" /> <Compile Include="$(CommonPath)Interop\Unix\System.Net.Security.Native\Interop.NetSecurityNative.cs" Link="Common\Interop\Unix\System.Net.Security.Native\Interop.NetSecurityNative.cs" /> <Compile Include="$(CommonPath)Interop\Unix\System.Net.Security.Native\Interop.NetSecurityNative.GssFlags.cs" Link="Common\Interop\Unix\System.Net.Security.Native\Interop.NetSecurityNative.GssFlags.cs" /> <Compile Include="$(CommonPath)Interop\Unix\System.Net.Security.Native\Interop.NetSecurityNative.Status.cs" Link="Common\Interop\Unix\System.Net.Security.Native\Interop.NetSecurityNative.Status.cs" /> <Compile Include="$(CommonPath)Interop\Unix\System.Net.Security.Native\Interop.NetSecurityNative.IsNtlmInstalled.cs" Link="Common\Interop\Unix\System.Net.Security.Native\Interop.NetSecurityNative.IsNtlmInstalled.cs" /> <Compile Include="$(CommonPath)Microsoft\Win32\SafeHandles\GssSafeHandles.cs" Link="Common\Microsoft\Win32\SafeHandles\GssSafeHandles.cs" /> <Compile Include="$(CommonPath)System\Net\ContextFlagsAdapterPal.Unix.cs" Link="Common\System\Net\ContextFlagsAdapterPal.Unix.cs" /> <Compile Include="$(CommonPath)System\Net\Security\Unix\SecChannelBindings.cs" Link="Common\System\Net\Security\Unix\SecChannelBindings.cs" /> <Compile Include="$(CommonPath)System\Net\Security\Unix\SafeDeleteContext.cs" Link="Common\System\Net\Security\Unix\SafeDeleteContext.cs" /> <Compile Include="$(CommonPath)System\Net\Security\Unix\SafeDeleteNegoContext.cs" Link="Common\System\Net\Security\Unix\SafeDeleteNegoContext.cs" /> <Compile Include="$(CommonPath)System\Net\Security\Unix\SafeFreeCredentials.cs" Link="Common\System\Net\Security\Unix\SafeFreeCredentials.cs" /> <Compile Include="$(CommonPath)System\Net\Security\Unix\SafeFreeNegoCredentials.cs" 
Link="Common\System\Net\Security\Unix\SafeFreeNegoCredentials.cs" /> <Compile Include="$(CommonPath)System\Net\Security\NegotiateStreamPal.Unix.cs" Link="Common\System\Net\Security\NegotiateStreamPal.Unix.cs" /> </ItemGroup> <!-- Windows specific files --> <ItemGroup Condition="'$(TargetPlatformIdentifier)' == 'windows'"> <Compile Include="$(CommonPath)System\Net\Security\SecurityBuffer.Windows.cs" Link="Common\System\Net\Security\SecurityBuffer.Windows.cs" /> <Compile Include="$(CommonPath)System\Net\Security\SecurityBufferType.Windows.cs" Link="Common\System\Net\Security\SecurityBufferType.Windows.cs" /> <Compile Include="$(CommonPath)System\Net\Security\SecurityContextTokenHandle.cs" Link="Common\System\Net\Security\SecurityContextTokenHandle.cs" /> <Compile Include="$(CommonPath)System\Net\ContextAwareResult.Windows.cs" Link="Common\System\Net\ContextAwareResult.Windows.cs" /> <Compile Include="$(CommonPath)System\Net\SecurityStatusAdapterPal.Windows.cs" Link="Common\System\Net\SecurityStatusAdapterPal.Windows.cs" /> <Compile Include="$(CommonPath)System\Net\ContextFlagsAdapterPal.Windows.cs" Link="Common\System\Net\ContextFlagsAdapterPal.Windows.cs" /> <Compile Include="$(CommonPath)System\Net\Security\NegotiateStreamPal.Windows.cs" Link="Common\System\Net\Security\NegotiateStreamPal.Windows.cs" /> <Compile Include="$(CommonPath)System\Net\Security\NetEventSource.Security.Windows.cs" Link="Common\System\Net\Security\NetEventSource.Security.Windows.cs" /> <Compile Include="$(CommonPath)Interop\Windows\Crypt32\Interop.CERT_CONTEXT.cs" Link="Common\Interop\Windows\Crypt32\Interop.CERT_CONTEXT.cs" /> <Compile Include="$(CommonPath)Interop\Windows\Crypt32\Interop.CertFreeCertificateContext.cs" Link="Common\Interop\Windows\Crypt32\Interop.Interop.CertFreeCertificateContext.cs" /> <Compile Include="$(CommonPath)Interop\Windows\Crypt32\Interop.CERT_INFO.cs" Link="Common\Interop\Windows\Crypt32\Interop.CERT_INFO.cs" /> <Compile Include="$(CommonPath)Interop\Windows\Crypt32\Interop.CERT_PUBLIC_KEY_INFO.cs" Link="Common\Interop\Windows\Crypt32\Interop.CERT_PUBLIC_KEY_INFO.cs" /> <Compile Include="$(CommonPath)Interop\Windows\Crypt32\Interop.CRYPT_ALGORITHM_IDENTIFIER.cs" Link="Common\Interop\Windows\Crypt32\Interop.Interop.CRYPT_ALGORITHM_IDENTIFIER.cs" /> <Compile Include="$(CommonPath)Interop\Windows\Crypt32\Interop.CRYPT_BIT_BLOB.cs" Link="Common\Interop\Windows\Crypt32\Interop.Interop.CRYPT_BIT_BLOB.cs" /> <Compile Include="$(CommonPath)Interop\Windows\Crypt32\Interop.DATA_BLOB.cs" Link="Common\Interop\Windows\Crypt32\Interop.DATA_BLOB.cs" /> <Compile Include="$(CommonPath)Interop\Windows\Crypt32\Interop.MsgEncodingType.cs" Link="Common\Interop\Windows\Crypt32\Interop.Interop.MsgEncodingType.cs" /> <Compile Include="$(CommonPath)Interop\Windows\Interop.BOOL.cs" Link="Common\Interop\Windows\Interop.BOOL.cs" /> <Compile Include="$(CommonPath)Interop\Windows\Interop.UNICODE_STRING.cs" Link="Common\Interop\Windows\Interop.UNICODE_STRING.cs" /> <Compile Include="$(CommonPath)Interop\Windows\Interop.Libraries.cs" Link="Common\Interop\Windows\Interop.Libraries.cs" /> <Compile Include="$(CommonPath)Interop\Windows\SspiCli\SecPkgContext_Bindings.cs" Link="Common\Interop\Windows\SspiCli\SecPkgContext_Bindings.cs" /> <Compile Include="$(CommonPath)Interop\Windows\SChannel\Interop.SECURITY_STATUS.cs" Link="Common\Interop\Windows\SChannel\Interop.SECURITY_STATUS.cs" /> <Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.CloseHandle.cs" 
Link="Common\Interop\Windows\Kernel32\Interop.CloseHandle.cs" /> <Compile Include="$(CommonPath)Interop\Windows\SspiCli\SecPkgContext_StreamSizes.cs" Link="Common\Interop\Windows\SspiCli\SecPkgContext_StreamSizes.cs" /> <Compile Include="$(CommonPath)Interop\Windows\SspiCli\SecPkgContext_NegotiationInfoW.cs" Link="Common\Interop\Windows\SspiCli\SecPkgContext_NegotiationInfoW.cs" /> <Compile Include="$(CommonPath)Interop\Windows\SspiCli\NegotiationInfoClass.cs" Link="Common\Interop\Windows\SspiCli\NegotiationInfoClass.cs" /> <Compile Include="$(CommonPath)Interop\Windows\SChannel\SecPkgContext_ConnectionInfo.cs" Link="Common\Interop\Windows\SChannel\SecPkgContext_ConnectionInfo.cs" /> <Compile Include="$(CommonPath)Interop\Windows\SChannel\SecPkgContext_CipherInfo.cs" Link="Common\Interop\Windows\SChannel\SecPkgContext_CipherInfo.cs" /> <Compile Include="$(CommonPath)Interop\Windows\SspiCli\SSPISecureChannelType.cs" Link="Common\Interop\Windows\SspiCli\SSPISecureChannelType.cs" /> <Compile Include="$(CommonPath)Interop\Windows\SspiCli\ISSPIInterface.cs" Link="Common\Interop\Windows\SspiCli\ISSPIInterface.cs" /> <Compile Include="$(CommonPath)Interop\Windows\SspiCli\SSPIAuthType.cs" Link="Common\Interop\Windows\SspiCli\SSPIAuthType.cs" /> <Compile Include="$(CommonPath)Interop\Windows\SspiCli\SecurityPackageInfoClass.cs" Link="Common\Interop\Windows\SspiCli\SecurityPackageInfoClass.cs" /> <Compile Include="$(CommonPath)Interop\Windows\SspiCli\SecurityPackageInfo.cs" Link="Common\Interop\Windows\SspiCli\SecurityPackageInfo.cs" /> <Compile Include="$(CommonPath)Interop\Windows\SspiCli\SecPkgContext_Sizes.cs" Link="Common\Interop\Windows\SspiCli\SecPkgContext_Sizes.cs" /> <Compile Include="$(CommonPath)Interop\Windows\SspiCli\SafeDeleteContext.cs" Link="Common\Interop\Windows\SspiCli\SafeDeleteContext.cs" /> <Compile Include="$(CommonPath)Interop\Windows\SspiCli\GlobalSSPI.cs" Link="Common\Interop\Windows\SspiCli\GlobalSSPI.cs" /> <Compile Include="$(CommonPath)Interop\Windows\SspiCli\Interop.SSPI.cs" Link="Common\Interop\Windows\SspiCli\Interop.SSPI.cs" /> <Compile Include="$(CommonPath)Interop\Windows\SspiCli\SecuritySafeHandles.cs" Link="Common\Interop\Windows\SspiCli\SecuritySafeHandles.cs" /> <Compile Include="$(CommonPath)Interop\Windows\SspiCli\SSPIWrapper.cs" Link="Common\Interop\Windows\SspiCli\SSPIWrapper.cs" /> </ItemGroup> <ItemGroup> <Reference Include="Microsoft.Win32.Primitives" /> <Reference Include="System.Collections" /> <Reference Include="System.Collections.NonGeneric" /> <Reference Include="System.Collections.Specialized" /> <Reference Include="System.ComponentModel.EventBasedAsync" /> <Reference Include="System.Diagnostics.Tracing" /> <Reference Include="System.IO.FileSystem" /> <Reference Include="System.Memory" /> <Reference Include="System.Net.NetworkInformation" /> <Reference Include="System.Net.Primitives" /> <Reference Include="System.Net.Requests" /> <Reference Include="System.Net.Security" /> <Reference Include="System.Net.ServicePoint" /> <Reference Include="System.Net.Sockets" /> <Reference Include="System.Runtime" /> <Reference Include="System.Runtime.CompilerServices.Unsafe" /> <Reference Include="System.Runtime.Extensions" /> <Reference Include="System.Runtime.InteropServices" /> <Reference Include="System.Security.Claims" /> <Reference Include="System.Security.Cryptography" /> <Reference Include="System.Security.Principal.Windows" /> <Reference Include="System.Threading" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <AllowUnsafeBlocks>true</AllowUnsafeBlocks> <TargetFrameworks>$(NetCoreAppCurrent)-windows;$(NetCoreAppCurrent)-Unix;$(NetCoreAppCurrent)-Browser;$(NetCoreAppCurrent)-tvOS;$(NetCoreAppCurrent)</TargetFrameworks> <Nullable>enable</Nullable> </PropertyGroup> <!-- DesignTimeBuild requires all the TargetFramework Derived Properties to not be present in the first property group. --> <PropertyGroup> <TargetPlatformIdentifier>$([MSBuild]::GetTargetPlatformIdentifier('$(TargetFramework)'))</TargetPlatformIdentifier> <GeneratePlatformNotSupportedAssemblyMessage Condition="'$(TargetPlatformIdentifier)' == ''">SR.PlatformNotSupported_NetMail</GeneratePlatformNotSupportedAssemblyMessage> <DefineConstants Condition="'$(TargetPlatformIdentifier)' == 'tvOS'">$(DefineConstants);NO_NTAUTHENTICATION</DefineConstants> </PropertyGroup> <ItemGroup Condition="'$(TargetPlatformIdentifier)' != ''"> <Compile Include="System\Net\Base64Stream.cs" /> <Compile Include="System\Net\Mime\MimePart.cs" /> <Compile Include="System\Net\Mime\Base64WriteStateInfo.cs" /> <Compile Include="System\Net\Mime\QuotedPrintableStream.cs" /> <Compile Include="System\Net\CloseableStream.cs" /> <Compile Include="System\Net\Mime\EightBitStream.cs" /> <Compile Include="System\Net\Mime\EncodedStreamFactory.cs" /> <Compile Include="System\Net\Mime\IEncodableStream.cs" /> <Compile Include="System\Net\Mime\QEncodedStream.cs" /> <Compile Include="System\Net\Mime\WriteStateInfoBase.cs" /> <Compile Include="System\Net\Mime\BaseWriter.cs" /> <Compile Include="System\Net\Mime\TransferEncoding.cs" /> <Compile Include="System\Net\Mime\ContentDisposition.cs" /> <Compile Include="System\Net\Mime\ContentType.cs" /> <Compile Include="System\Net\Mime\DispositionTypeNames.cs" /> <Compile Include="System\Net\Mime\HeaderCollection.cs" /> <Compile Include="System\Net\Mime\MediaTypeNames.cs" /> <Compile Include="System\Net\Mime\MimeBasePart.cs" /> <Compile Include="System\Net\Mime\SmtpDateTime.cs" /> <Compile Include="System\Net\Mime\MultiAsyncResult.cs" /> <Compile Include="System\Net\Mime\ByteEncoder.cs" /> <Compile Include="System\Net\Mime\Base64Encoder.cs" /> <Compile Include="System\Net\Mime\IByteEncoder.cs" /> <Compile Include="System\Net\Mime\QEncoder.cs" /> <Compile Include="System\Net\TrackingStringDictionary.cs" /> <Compile Include="System\Net\TrackingValidationObjectDictionary.cs" /> <Compile Include="System\Net\Mail\MailHeaderID.cs" /> <Compile Include="System\Net\Mail\MailHeaderInfo.cs" /> <Compile Include="System\Net\BufferBuilder.cs" /> <Compile Include="System\Net\Mail\AlternateView.cs" /> <Compile Include="System\Net\Mail\AlternateViewCollection.cs" /> <Compile Include="System\Net\Mail\Attachment.cs" /> <Compile Include="System\Net\Mail\AttachmentCollection.cs" /> <Compile Include="System\Net\BufferedReadStream.cs" /> <Compile Include="System\Net\Mail\LinkedResource.cs" /> <Compile Include="System\Net\Mail\LinkedResourceCollection.cs" /> <Compile Include="System\Net\Mail\DomainLiteralReader.cs" /> <Compile Include="System\Net\Mail\DotAtomReader.cs" /> <Compile Include="System\Net\Mail\MailAddress.cs" /> <Compile Include="System\Net\Mail\MailAddressCollection.cs" /> <Compile Include="System\Net\Mail\MailAddressParser.cs" /> <Compile Include="System\Net\Mail\MailBnfHelper.cs" /> <Compile Include="System\Net\Mail\MailMessage.cs" /> <Compile Include="System\Net\Mail\MailPriority.cs" /> <Compile Include="System\Net\Mail\QuotedPairReader.cs" /> <Compile 
Include="System\Net\Mail\QuotedStringFormatReader.cs" /> <Compile Include="System\Net\Mail\WhitespaceReader.cs" /> <Compile Include="System\Net\Mime\MimeMultiPart.cs" /> <Compile Include="System\Net\Mime\MimeMultiPartType.cs" /> <Compile Include="System\Net\Mime\MimeWriter.cs" /> <Compile Include="System\Net\Mail\SmtpException.cs" /> <Compile Include="System\Net\Mail\SmtpFailedRecipientException.cs" /> <Compile Include="System\Net\Mail\SmtpFailedRecipientsException.cs" /> <Compile Include="System\Net\Mail\SmtpStatusCode.cs" /> <Compile Include="System\Net\DelegatedStream.cs" /> <Compile Include="$(CommonPath)DisableRuntimeMarshalling.cs" Link="Common\DisableRuntimeMarshalling.cs" /> <Compile Include="$(CommonPath)System\Net\LazyAsyncResult.cs" Link="Common\System\Net\LazyAsyncResult.cs" /> <Compile Include="$(CommonPath)System\Net\Logging\NetEventSource.Common.cs" Link="Common\System\Net\Logging\NetEventSource.Common.cs" /> <Compile Include="$(CommonPath)System\StringExtensions.cs" Link="Common\System\StringExtensions.cs" /> <Compile Include="$(CommonPath)System\HexConverter.cs" Link="Common\System\HexConverter.cs" /> </ItemGroup> <ItemGroup Condition="'$(TargetPlatformIdentifier)' == 'Browser'"> <Compile Include="System\Net\Mail\SmtpClient.Browser.cs" /> </ItemGroup> <!-- Non Browser specific files - internal and security --> <ItemGroup Condition="'$(TargetPlatformIdentifier)' != '' and '$(TargetPlatformIdentifier)' != 'Browser'"> <Compile Include="System\Net\Mail\SmtpClient.cs" /> <Compile Include="System\Net\Mail\ISmtpAuthenticationModule.cs" /> <Compile Include="System\Net\Mail\SmtpAuthenticationManager.cs" /> <Compile Include="System\Net\Mail\SmtpCommands.cs" /> <Compile Include="System\Net\Mail\SmtpConnection.cs" /> <Compile Include="System\Net\Mail\SmtpConnection.Auth.cs" /> <Compile Include="System\Net\Mail\SmtpReplyReader.cs" /> <Compile Include="System\Net\Mail\SmtpReplyReaderFactory.cs" /> <Compile Include="System\Net\Mail\SmtpTransport.cs" /> <Compile Include="System\Net\Mail\SmtpLoginAuthenticationModule.cs" /> <Compile Include="System\Net\Mail\MailWriter.cs" /> <Compile Include="System\Net\Mail\NetEventSource.Mail.cs" /> <Compile Include="$(CommonPath)System\Net\ContextAwareResult.cs" Link="Common\System\Net\ContextAwareResult.cs" /> <Compile Include="$(CommonPath)System\Net\DebugSafeHandleZeroOrMinusOneIsInvalid.cs" Link="Common\System\Net\DebugSafeHandleZeroOrMinusOneIsInvalid.cs" /> <Compile Include="$(CommonPath)System\Net\DebugSafeHandle.cs" Link="Common\System\Net\DebugSafeHandle.cs" /> <Compile Include="$(CommonPath)System\Net\TlsStream.cs" Link="Common\System\Net\TlsStream.cs" /> <Compile Include="$(CommonPath)System\Net\InternalException.cs" Link="Common\System\Net\InternalException.cs" /> <Compile Include="$(CommonPath)System\Net\ExceptionCheck.cs" Link="Common\System\Net\ExceptionCheck.cs" /> <Compile Include="$(CommonPath)System\Collections\Generic\BidirectionalDictionary.cs" Link="Common\System\Collections\Generic\BidirectionalDictionary.cs" /> <Compile Include="$(CommonPath)System\NotImplemented.cs" Link="Common\System\NotImplemented.cs" /> <Compile Include="$(CommonPath)System\Net\Security\NetEventSource.Security.cs" Link="Common\System\Net\Security\NetEventSource.Security.cs" /> <Compile Include="$(CommonPath)System\Net\SecurityProtocol.cs" Link="Common\System\Net\SecurityProtocol.cs" /> </ItemGroup> <!-- NT authentication specific files --> <ItemGroup Condition="'$(TargetPlatformIdentifier)' != '' and '$(TargetPlatformIdentifier)' != 'Browser' and 
'$(TargetPlatformIdentifier)' != 'tvOS'"> <Compile Include="$(CommonPath)System\Net\ContextFlagsPal.cs" Link="Common\System\Net\ContextFlagsPal.cs" /> <Compile Include="$(CommonPath)System\Net\NegotiationInfoClass.cs" Link="Common\System\Net\NegotiationInfoClass.cs" /> <Compile Include="$(CommonPath)System\Net\NTAuthentication.Common.cs" Link="Common\System\Net\NTAuthentication.Common.cs" /> <Compile Include="$(CommonPath)System\Net\SecurityStatusPal.cs" Link="Common\System\Net\SecurityStatusPal.cs" /> <Compile Include="$(CommonPath)System\Net\Security\SafeCredentialReference.cs" Link="Common\System\Net\Security\SafeCredentialReference.cs" /> <Compile Include="$(CommonPath)System\Net\Security\SSPIHandleCache.cs" Link="Common\System\Net\Security\SSPIHandleCache.cs" /> <Compile Include="System\Net\Mail\SmtpNegotiateAuthenticationModule.cs" /> <Compile Include="System\Net\Mail\SmtpNtlmAuthenticationModule.cs" /> </ItemGroup> <!-- Unix specific files --> <ItemGroup Condition="'$(TargetPlatformIdentifier)' == 'Unix' or '$(TargetPlatformIdentifier)' == 'tvOS'"> <Compile Include="$(CommonPath)System\Net\ContextAwareResult.Unix.cs" Link="Common\System\Net\ContextAwareResult.Unix.cs" /> </ItemGroup> <!-- Unix specific files (NT Authentication) --> <ItemGroup Condition="'$(TargetPlatformIdentifier)' == 'Unix'"> <Compile Include="$(CommonPath)Interop\Unix\Interop.Libraries.cs" Link="Common\Interop\Unix\Interop.Libraries.cs" /> <Compile Include="$(CommonPath)Interop\Unix\System.Net.Security.Native\Interop.GssBuffer.cs" Link="Common\Interop\Unix\System.Net.Security.Native\Interop.GssBuffer.cs" /> <Compile Include="$(CommonPath)Interop\Unix\System.Net.Security.Native\Interop.GssApiException.cs" Link="Common\Interop\Unix\System.Net.Security.Native\Interop.GssApiException.cs" /> <Compile Include="$(CommonPath)Interop\Unix\System.Net.Security.Native\Interop.NetSecurityNative.cs" Link="Common\Interop\Unix\System.Net.Security.Native\Interop.NetSecurityNative.cs" /> <Compile Include="$(CommonPath)Interop\Unix\System.Net.Security.Native\Interop.NetSecurityNative.GssFlags.cs" Link="Common\Interop\Unix\System.Net.Security.Native\Interop.NetSecurityNative.GssFlags.cs" /> <Compile Include="$(CommonPath)Interop\Unix\System.Net.Security.Native\Interop.NetSecurityNative.Status.cs" Link="Common\Interop\Unix\System.Net.Security.Native\Interop.NetSecurityNative.Status.cs" /> <Compile Include="$(CommonPath)Interop\Unix\System.Net.Security.Native\Interop.NetSecurityNative.IsNtlmInstalled.cs" Link="Common\Interop\Unix\System.Net.Security.Native\Interop.NetSecurityNative.IsNtlmInstalled.cs" /> <Compile Include="$(CommonPath)Microsoft\Win32\SafeHandles\GssSafeHandles.cs" Link="Common\Microsoft\Win32\SafeHandles\GssSafeHandles.cs" /> <Compile Include="$(CommonPath)System\Net\ContextFlagsAdapterPal.Unix.cs" Link="Common\System\Net\ContextFlagsAdapterPal.Unix.cs" /> <Compile Include="$(CommonPath)System\Net\Security\Unix\SecChannelBindings.cs" Link="Common\System\Net\Security\Unix\SecChannelBindings.cs" /> <Compile Include="$(CommonPath)System\Net\Security\Unix\SafeDeleteContext.cs" Link="Common\System\Net\Security\Unix\SafeDeleteContext.cs" /> <Compile Include="$(CommonPath)System\Net\Security\Unix\SafeDeleteNegoContext.cs" Link="Common\System\Net\Security\Unix\SafeDeleteNegoContext.cs" /> <Compile Include="$(CommonPath)System\Net\Security\Unix\SafeFreeCredentials.cs" Link="Common\System\Net\Security\Unix\SafeFreeCredentials.cs" /> <Compile Include="$(CommonPath)System\Net\Security\Unix\SafeFreeNegoCredentials.cs" 
Link="Common\System\Net\Security\Unix\SafeFreeNegoCredentials.cs" /> <Compile Include="$(CommonPath)System\Net\Security\NegotiateStreamPal.Unix.cs" Link="Common\System\Net\Security\NegotiateStreamPal.Unix.cs" /> </ItemGroup> <!-- Windows specific files --> <ItemGroup Condition="'$(TargetPlatformIdentifier)' == 'windows'"> <Compile Include="$(CommonPath)System\Net\Security\SecurityBuffer.Windows.cs" Link="Common\System\Net\Security\SecurityBuffer.Windows.cs" /> <Compile Include="$(CommonPath)System\Net\Security\SecurityBufferType.Windows.cs" Link="Common\System\Net\Security\SecurityBufferType.Windows.cs" /> <Compile Include="$(CommonPath)System\Net\Security\SecurityContextTokenHandle.cs" Link="Common\System\Net\Security\SecurityContextTokenHandle.cs" /> <Compile Include="$(CommonPath)System\Net\ContextAwareResult.Windows.cs" Link="Common\System\Net\ContextAwareResult.Windows.cs" /> <Compile Include="$(CommonPath)System\Net\SecurityStatusAdapterPal.Windows.cs" Link="Common\System\Net\SecurityStatusAdapterPal.Windows.cs" /> <Compile Include="$(CommonPath)System\Net\ContextFlagsAdapterPal.Windows.cs" Link="Common\System\Net\ContextFlagsAdapterPal.Windows.cs" /> <Compile Include="$(CommonPath)System\Net\Security\NegotiateStreamPal.Windows.cs" Link="Common\System\Net\Security\NegotiateStreamPal.Windows.cs" /> <Compile Include="$(CommonPath)System\Net\Security\NetEventSource.Security.Windows.cs" Link="Common\System\Net\Security\NetEventSource.Security.Windows.cs" /> <Compile Include="$(CommonPath)Interop\Windows\Crypt32\Interop.CERT_CONTEXT.cs" Link="Common\Interop\Windows\Crypt32\Interop.CERT_CONTEXT.cs" /> <Compile Include="$(CommonPath)Interop\Windows\Crypt32\Interop.CertFreeCertificateContext.cs" Link="Common\Interop\Windows\Crypt32\Interop.Interop.CertFreeCertificateContext.cs" /> <Compile Include="$(CommonPath)Interop\Windows\Crypt32\Interop.CERT_INFO.cs" Link="Common\Interop\Windows\Crypt32\Interop.CERT_INFO.cs" /> <Compile Include="$(CommonPath)Interop\Windows\Crypt32\Interop.CERT_PUBLIC_KEY_INFO.cs" Link="Common\Interop\Windows\Crypt32\Interop.CERT_PUBLIC_KEY_INFO.cs" /> <Compile Include="$(CommonPath)Interop\Windows\Crypt32\Interop.CRYPT_ALGORITHM_IDENTIFIER.cs" Link="Common\Interop\Windows\Crypt32\Interop.Interop.CRYPT_ALGORITHM_IDENTIFIER.cs" /> <Compile Include="$(CommonPath)Interop\Windows\Crypt32\Interop.CRYPT_BIT_BLOB.cs" Link="Common\Interop\Windows\Crypt32\Interop.Interop.CRYPT_BIT_BLOB.cs" /> <Compile Include="$(CommonPath)Interop\Windows\Crypt32\Interop.DATA_BLOB.cs" Link="Common\Interop\Windows\Crypt32\Interop.DATA_BLOB.cs" /> <Compile Include="$(CommonPath)Interop\Windows\Crypt32\Interop.MsgEncodingType.cs" Link="Common\Interop\Windows\Crypt32\Interop.Interop.MsgEncodingType.cs" /> <Compile Include="$(CommonPath)Interop\Windows\Interop.BOOL.cs" Link="Common\Interop\Windows\Interop.BOOL.cs" /> <Compile Include="$(CommonPath)Interop\Windows\Interop.UNICODE_STRING.cs" Link="Common\Interop\Windows\Interop.UNICODE_STRING.cs" /> <Compile Include="$(CommonPath)Interop\Windows\Interop.Libraries.cs" Link="Common\Interop\Windows\Interop.Libraries.cs" /> <Compile Include="$(CommonPath)Interop\Windows\SspiCli\SecPkgContext_Bindings.cs" Link="Common\Interop\Windows\SspiCli\SecPkgContext_Bindings.cs" /> <Compile Include="$(CommonPath)Interop\Windows\SChannel\Interop.SECURITY_STATUS.cs" Link="Common\Interop\Windows\SChannel\Interop.SECURITY_STATUS.cs" /> <Compile Include="$(CommonPath)Interop\Windows\Kernel32\Interop.CloseHandle.cs" 
Link="Common\Interop\Windows\Kernel32\Interop.CloseHandle.cs" /> <Compile Include="$(CommonPath)Interop\Windows\SspiCli\SecPkgContext_StreamSizes.cs" Link="Common\Interop\Windows\SspiCli\SecPkgContext_StreamSizes.cs" /> <Compile Include="$(CommonPath)Interop\Windows\SspiCli\SecPkgContext_NegotiationInfoW.cs" Link="Common\Interop\Windows\SspiCli\SecPkgContext_NegotiationInfoW.cs" /> <Compile Include="$(CommonPath)Interop\Windows\SspiCli\NegotiationInfoClass.cs" Link="Common\Interop\Windows\SspiCli\NegotiationInfoClass.cs" /> <Compile Include="$(CommonPath)Interop\Windows\SChannel\SecPkgContext_ConnectionInfo.cs" Link="Common\Interop\Windows\SChannel\SecPkgContext_ConnectionInfo.cs" /> <Compile Include="$(CommonPath)Interop\Windows\SChannel\SecPkgContext_CipherInfo.cs" Link="Common\Interop\Windows\SChannel\SecPkgContext_CipherInfo.cs" /> <Compile Include="$(CommonPath)Interop\Windows\SspiCli\SSPISecureChannelType.cs" Link="Common\Interop\Windows\SspiCli\SSPISecureChannelType.cs" /> <Compile Include="$(CommonPath)Interop\Windows\SspiCli\ISSPIInterface.cs" Link="Common\Interop\Windows\SspiCli\ISSPIInterface.cs" /> <Compile Include="$(CommonPath)Interop\Windows\SspiCli\SSPIAuthType.cs" Link="Common\Interop\Windows\SspiCli\SSPIAuthType.cs" /> <Compile Include="$(CommonPath)Interop\Windows\SspiCli\SecurityPackageInfoClass.cs" Link="Common\Interop\Windows\SspiCli\SecurityPackageInfoClass.cs" /> <Compile Include="$(CommonPath)Interop\Windows\SspiCli\SecurityPackageInfo.cs" Link="Common\Interop\Windows\SspiCli\SecurityPackageInfo.cs" /> <Compile Include="$(CommonPath)Interop\Windows\SspiCli\SecPkgContext_Sizes.cs" Link="Common\Interop\Windows\SspiCli\SecPkgContext_Sizes.cs" /> <Compile Include="$(CommonPath)Interop\Windows\SspiCli\SafeDeleteContext.cs" Link="Common\Interop\Windows\SspiCli\SafeDeleteContext.cs" /> <Compile Include="$(CommonPath)Interop\Windows\SspiCli\GlobalSSPI.cs" Link="Common\Interop\Windows\SspiCli\GlobalSSPI.cs" /> <Compile Include="$(CommonPath)Interop\Windows\SspiCli\Interop.SSPI.cs" Link="Common\Interop\Windows\SspiCli\Interop.SSPI.cs" /> <Compile Include="$(CommonPath)Interop\Windows\SspiCli\SecuritySafeHandles.cs" Link="Common\Interop\Windows\SspiCli\SecuritySafeHandles.cs" /> <Compile Include="$(CommonPath)Interop\Windows\SspiCli\SSPIWrapper.cs" Link="Common\Interop\Windows\SspiCli\SSPIWrapper.cs" /> </ItemGroup> <ItemGroup> <Reference Include="Microsoft.Win32.Primitives" /> <Reference Include="System.Collections" /> <Reference Include="System.Collections.NonGeneric" /> <Reference Include="System.Collections.Specialized" /> <Reference Include="System.ComponentModel.EventBasedAsync" /> <Reference Include="System.Diagnostics.Tracing" /> <Reference Include="System.IO.FileSystem" /> <Reference Include="System.Memory" /> <Reference Include="System.Net.NetworkInformation" /> <Reference Include="System.Net.Primitives" /> <Reference Include="System.Net.Requests" /> <Reference Include="System.Net.Security" /> <Reference Include="System.Net.ServicePoint" /> <Reference Include="System.Net.Sockets" /> <Reference Include="System.Runtime" /> <Reference Include="System.Runtime.CompilerServices.Unsafe" /> <Reference Include="System.Runtime.Extensions" /> <Reference Include="System.Runtime.InteropServices" /> <Reference Include="System.Security.Claims" /> <Reference Include="System.Security.Cryptography" /> <Reference Include="System.Security.Principal.Windows" /> <Reference Include="System.Threading" /> </ItemGroup> </Project>
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/tests/JIT/CodeGenBringUpTests/StructReturn_do.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <PropertyGroup> <DebugType>Full</DebugType> <Optimize>True</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="StructReturn.cs" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <PropertyGroup> <DebugType>Full</DebugType> <Optimize>True</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="StructReturn.cs" /> </ItemGroup> </Project>
-1
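The recurring PR description in these rows is dense, so here is roughly what the proposed detection rules look like when spelled out as MSBuild properties. This is a minimal sketch, not the actual Directory.Build.props change from PR 65,896: the property names come from the description, while the conditions, the underscore-prefixed helper properties (_parentDir, _grandparentDir, _isInTestsTree), and the path normalization are assumptions of this sketch.

<Project>
  <!-- Sketch of the Is*Project categorization described in the PR text above.
       Helper properties and exact conditions are illustrative, not from the PR. -->
  <PropertyGroup>
    <_parentDir>$([System.IO.Path]::GetFileName('$(MSBuildProjectDirectory)'))</_parentDir>
    <_grandparentDir>$([System.IO.Path]::GetFileName($([System.IO.Path]::GetDirectoryName('$(MSBuildProjectDirectory)'))))</_grandparentDir>
    <_isInTestsTree Condition="$(MSBuildProjectDirectory.Replace('\', '/').Contains('/tests/'))">true</_isInTestsTree>

    <IsReferenceAssemblyProject Condition="'$(_parentDir)' == 'ref'">true</IsReferenceAssemblyProject>
    <!-- The grandparent check is the PR's fix for layouts like gen/<ProjectName>/<ProjectName>.csproj. -->
    <IsGeneratorProject Condition="'$(_parentDir)' == 'gen' or '$(_grandparentDir)' == 'gen'">true</IsGeneratorProject>
    <IsTestProject Condition="'$(_isInTestsTree)' == 'true' and ($(MSBuildProjectName.EndsWith('.Tests')) or $(MSBuildProjectName.EndsWith('.UnitTests')))">true</IsTestProject>
    <IsTrimmingTestProject Condition="'$(_isInTestsTree)' == 'true' and $(MSBuildProjectName.EndsWith('.TrimmingTests'))">true</IsTrimmingTestProject>
    <IsTestSupportProject Condition="'$(_isInTestsTree)' == 'true' and '$(IsTestProject)' != 'true' and '$(IsTrimmingTestProject)' != 'true'">true</IsTestSupportProject>

    <!-- IsSourceProject becomes the unambiguous fallback: true only when nothing above
         matched and the project is not a NoTargets/Traversal SDK project. -->
    <IsSourceProject Condition="'$(IsReferenceAssemblyProject)' != 'true' and
                                '$(IsGeneratorProject)' != 'true' and
                                '$(IsTestProject)' != 'true' and
                                '$(IsTrimmingTestProject)' != 'true' and
                                '$(IsTestSupportProject)' != 'true' and
                                '$(UsingMicrosoftNoTargetsSdk)' != 'true' and
                                '$(UsingMicrosoftTraversalSdk)' != 'true'">true</IsSourceProject>
  </PropertyGroup>
</Project>

Applied to the rows in this file, these rules would put System.Runtime.InteropServices.RuntimeInformation.Tests in IsTestProject, while the JIT and GC wrapper projects (StructReturn_do, GCSimulator_206, and so on) would fall out as IsTestSupportProject, since they live under /tests/ but carry none of the test suffixes.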
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/tests/JIT/CodeGenBringUpTests/FactorialRec_d.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <PropertyGroup> <DebugType>Full</DebugType> <Optimize>False</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="FactorialRec.cs" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <PropertyGroup> <DebugType>Full</DebugType> <Optimize>False</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="FactorialRec.cs" /> </ItemGroup> </Project>
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/tests/GC/Scenarios/GCSimulator/GCSimulator_206.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <AllowUnsafeBlocks>true</AllowUnsafeBlocks> <GCStressIncompatible>true</GCStressIncompatible> <CLRTestExecutionArguments>-t 1 -tp 0 -dz 17 -sdz 17 -dc 20000 -sdc 6000 -lt 2 -f -dp 0.0 -dw 0.4</CLRTestExecutionArguments> <IsGCSimulatorTest>true</IsGCSimulatorTest> <CLRTestProjectToRun>GCSimulator.csproj</CLRTestProjectToRun> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <ItemGroup> <Compile Include="GCSimulator.cs" /> <Compile Include="lifetimefx.cs" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <AllowUnsafeBlocks>true</AllowUnsafeBlocks> <GCStressIncompatible>true</GCStressIncompatible> <CLRTestExecutionArguments>-t 1 -tp 0 -dz 17 -sdz 17 -dc 20000 -sdc 6000 -lt 2 -f -dp 0.0 -dw 0.4</CLRTestExecutionArguments> <IsGCSimulatorTest>true</IsGCSimulatorTest> <CLRTestProjectToRun>GCSimulator.csproj</CLRTestProjectToRun> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <ItemGroup> <Compile Include="GCSimulator.cs" /> <Compile Include="lifetimefx.cs" /> </ItemGroup> </Project>
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/tests/JIT/Regression/JitBlue/Runtime_31673/Runtime_31673.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <PropertyGroup> <DebugType /> <Optimize>True</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="$(MSBuildProjectName).cs" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <PropertyGroup> <DebugType /> <Optimize>True</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="$(MSBuildProjectName).cs" /> </ItemGroup> </Project>
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/tests/JIT/Methodical/eh/leaves/oponerror_ro.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> </PropertyGroup> <PropertyGroup> <DebugType>None</DebugType> <Optimize>True</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="oponerror.cs" /> </ItemGroup> <ItemGroup> <ProjectReference Include="..\..\..\common\eh_common.csproj" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> </PropertyGroup> <PropertyGroup> <DebugType>None</DebugType> <Optimize>True</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="oponerror.cs" /> </ItemGroup> <ItemGroup> <ProjectReference Include="..\..\..\common\eh_common.csproj" /> </ItemGroup> </Project>
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/tests/JIT/CodeGenBringUpTests/DblCall2_d.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <PropertyGroup> <DebugType>Full</DebugType> <Optimize>False</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="DblCall2.cs" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <PropertyGroup> <DebugType>Full</DebugType> <Optimize>False</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="DblCall2.cs" /> </ItemGroup> </Project>
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/tests/JIT/Regression/JitBlue/GitHub_25134/GitHub_25134.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> </PropertyGroup> <PropertyGroup> <DebugType>None</DebugType> <Optimize>True</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="$(MSBuildProjectName).cs" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> </PropertyGroup> <PropertyGroup> <DebugType>None</DebugType> <Optimize>True</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="$(MSBuildProjectName).cs" /> </ItemGroup> </Project>
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/libraries/System.Runtime.InteropServices.RuntimeInformation/tests/System.Runtime.InteropServices.RuntimeInformation.Tests.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <IncludeRemoteExecutor>true</IncludeRemoteExecutor> <TargetFrameworks>$(NetCoreAppCurrent)-windows;$(NetCoreAppCurrent)-Unix;$(NetCoreAppCurrent)-Browser</TargetFrameworks> </PropertyGroup> <ItemGroup> <Compile Include="CheckArchitectureTests.cs" /> <Compile Include="CheckPlatformTests.cs" /> <Compile Include="RuntimeIdentifierTests.cs" /> <Compile Include="DescriptionNameTests.cs" /> <Compile Include="$(CommonPath)Interop\Linux\cgroups\Interop.cgroups.cs" Link="Common\Interop\Linux\Interop.cgroups.cs" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <IncludeRemoteExecutor>true</IncludeRemoteExecutor> <TargetFrameworks>$(NetCoreAppCurrent)-windows;$(NetCoreAppCurrent)-Unix;$(NetCoreAppCurrent)-Browser</TargetFrameworks> </PropertyGroup> <ItemGroup> <Compile Include="CheckArchitectureTests.cs" /> <Compile Include="CheckPlatformTests.cs" /> <Compile Include="RuntimeIdentifierTests.cs" /> <Compile Include="DescriptionNameTests.cs" /> <Compile Include="$(CommonPath)Interop\Linux\cgroups\Interop.cgroups.cs" Link="Common\Interop\Linux\Interop.cgroups.cs" /> </ItemGroup> </Project>
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/coreclr/tools/Directory.Build.props
<Project> <Import Project="../Directory.Build.props" /> <PropertyGroup> <IsShipping>false</IsShipping> <SignAssembly>false</SignAssembly> <RunAnalyzers>false</RunAnalyzers> </PropertyGroup> <!-- MSBuild doesn't understand the Checked configuration --> <PropertyGroup Condition="'$(Configuration)' == 'Checked'"> <Optimize Condition="'$(Optimize)' == ''">true</Optimize> <DefineConstants>DEBUG;$(DefineConstants)</DefineConstants> </PropertyGroup> </Project>
<Project> <Import Project="../Directory.Build.props" /> <PropertyGroup> <IsShipping>false</IsShipping> <SignAssembly>false</SignAssembly> <RunAnalyzers>false</RunAnalyzers> </PropertyGroup> <!-- MSBuild doesn't understand the Checked configuration --> <PropertyGroup Condition="'$(Configuration)' == 'Checked'"> <Optimize Condition="'$(Optimize)' == ''">true</Optimize> <DefineConstants>DEBUG;$(DefineConstants)</DefineConstants> </PropertyGroup> </Project>
-1
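The coreclr tools Directory.Build.props row above is also worth a note: MSBuild only has built-in defaults for the Debug and Release configurations, so the repo shims its Checked configuration by hand. A standalone sketch of the same pattern, with comments spelling out the intent (the surrounding file is hypothetical; the two property lines mirror the row above):

<Project>
  <PropertyGroup Condition="'$(Configuration)' == 'Checked'">
    <!-- Checked builds keep codegen close to Release: optimize unless a project opted out. -->
    <Optimize Condition="'$(Optimize)' == ''">true</Optimize>
    <!-- But they still define DEBUG, so Debug.Assert and #if DEBUG diagnostics stay active. -->
    <DefineConstants>DEBUG;$(DefineConstants)</DefineConstants>
  </PropertyGroup>
</Project>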
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/tests/JIT/jit64/gc/misc/struct2.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <PropertyGroup> <DebugType>Full</DebugType> <Optimize>False</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="struct2.cs" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <PropertyGroup> <DebugType>Full</DebugType> <Optimize>False</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="struct2.cs" /> </ItemGroup> </Project>
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/tests/GC/Scenarios/GCSimulator/GCSimulator_159.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <AllowUnsafeBlocks>true</AllowUnsafeBlocks> <GCStressIncompatible>true</GCStressIncompatible> <CLRTestExecutionArguments>-t 1 -tp 0 -dz 17 -sdz 8517 -dc 10000 -sdc 5000 -lt 4 -f -dp 0.4 -dw 0.8</CLRTestExecutionArguments> <IsGCSimulatorTest>true</IsGCSimulatorTest> <CLRTestProjectToRun>GCSimulator.csproj</CLRTestProjectToRun> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <ItemGroup> <Compile Include="GCSimulator.cs" /> <Compile Include="lifetimefx.cs" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <AllowUnsafeBlocks>true</AllowUnsafeBlocks> <GCStressIncompatible>true</GCStressIncompatible> <CLRTestExecutionArguments>-t 1 -tp 0 -dz 17 -sdz 8517 -dc 10000 -sdc 5000 -lt 4 -f -dp 0.4 -dw 0.8</CLRTestExecutionArguments> <IsGCSimulatorTest>true</IsGCSimulatorTest> <CLRTestProjectToRun>GCSimulator.csproj</CLRTestProjectToRun> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <ItemGroup> <Compile Include="GCSimulator.cs" /> <Compile Include="lifetimefx.cs" /> </ItemGroup> </Project>
-1
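The record above illustrates the test-categorization rules: GCSimulator_159.csproj lives under a '/tests/' directory but carries none of the test suffixes, so under the described scheme it is a test-support project. Below is a minimal sketch of how that path-plus-suffix logic could be written with standard MSBuild property functions; this is an illustrative assumption, not the repository's actual Directory.Build.props implementation, and the `_isUnderTestsDir` helper property is a hypothetical name introduced here.

```xml
<Project>
  <PropertyGroup>
    <!-- Normalize separators so the '/tests/' check behaves the same on Windows and Unix. -->
    <_isUnderTestsDir Condition="$(MSBuildProjectDirectory.Replace('\', '/').Contains('/tests/'))">true</_isUnderTestsDir>

    <IsTestProject Condition="'$(_isUnderTestsDir)' == 'true' and
                              ($(MSBuildProjectName.EndsWith('.Tests')) or
                               $(MSBuildProjectName.EndsWith('.UnitTests')))">true</IsTestProject>
    <IsTrimmingTestProject Condition="'$(_isUnderTestsDir)' == 'true' and
                                      $(MSBuildProjectName.EndsWith('.TrimmingTests'))">true</IsTrimmingTestProject>
    <!-- Everything else under '/tests/' is support code for tests, not a test itself. -->
    <IsTestSupportProject Condition="'$(_isUnderTestsDir)' == 'true' and
                                     '$(IsTestProject)' != 'true' and
                                     '$(IsTrimmingTestProject)' != 'true'">true</IsTestSupportProject>
  </PropertyGroup>
</Project>
```

Evaluating IsTestSupportProject as the remainder, after both test categories have been excluded, is exactly what makes the '/tests/' bucket unambiguous in the new scheme.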
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently, these properties categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref'. - `IsGeneratorProject`: The project's parent directory is 'gen'. - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsRuntimeAssembly`: True when all of the above are false. - `IsSourceProject`: True when the project's parent directory is 'src'. The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally, the repository tree shows that the parent directory of a generator project often isn't 'gen' but the project's own name, in cases where multiple generator projects sit under 'gen'. This led to such projects being treated incorrectly and forced developers to set the "IsGeneratorProject" property manually. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref'. - `IsGeneratorProject`: **The project's parent directory is 'gen', or the parent's parent directory is 'gen'.** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and both the IsTestProject **and IsTrimmingTestProject** properties above are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsSourceProject`: **True when all of the above are false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently, these properties categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref'. - `IsGeneratorProject`: The project's parent directory is 'gen'. - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsRuntimeAssembly`: True when all of the above are false. - `IsSourceProject`: True when the project's parent directory is 'src'. The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally, the repository tree shows that the parent directory of a generator project often isn't 'gen' but the project's own name, in cases where multiple generator projects sit under 'gen'. This led to such projects being treated incorrectly and forced developers to set the "IsGeneratorProject" property manually. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref'. - `IsGeneratorProject`: **The project's parent directory is 'gen', or the parent's parent directory is 'gen'.** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and both the IsTestProject **and IsTrimmingTestProject** properties above are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsSourceProject`: **True when all of the above are false.**
./src/tests/nativeaot/SmokeTests/Reflection/Reflection_ReflectedOnly.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <CLRTestKind>BuildAndRun</CLRTestKind> <CLRTestPriority>0</CLRTestPriority> <AllowUnsafeBlocks>true</AllowUnsafeBlocks> <!-- There's just too many of these warnings --> <SuppressTrimAnalysisWarnings>true</SuppressTrimAnalysisWarnings> <NoWarn>$(NoWarn);IL3050</NoWarn> <!-- Look for MULTIMODULE_BUILD #define for the more specific incompatible parts --> <CLRTestTargetUnsupported Condition="'$(IlcMultiModule)' == 'true'">true</CLRTestTargetUnsupported> </PropertyGroup> <ItemGroup> <Compile Include="Reflection.cs" /> </ItemGroup> <ItemGroup> <IlcArg Include="--reflectedonly" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <CLRTestKind>BuildAndRun</CLRTestKind> <CLRTestPriority>0</CLRTestPriority> <AllowUnsafeBlocks>true</AllowUnsafeBlocks> <!-- There's just too many of these warnings --> <SuppressTrimAnalysisWarnings>true</SuppressTrimAnalysisWarnings> <NoWarn>$(NoWarn);IL3050</NoWarn> <!-- Look for MULTIMODULE_BUILD #define for the more specific incompatible parts --> <CLRTestTargetUnsupported Condition="'$(IlcMultiModule)' == 'true'">true</CLRTestTargetUnsupported> </PropertyGroup> <ItemGroup> <Compile Include="Reflection.cs" /> </ItemGroup> <ItemGroup> <IlcArg Include="--reflectedonly" /> </ItemGroup> </Project>
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently, these properties categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref'. - `IsGeneratorProject`: The project's parent directory is 'gen'. - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsRuntimeAssembly`: True when all of the above are false. - `IsSourceProject`: True when the project's parent directory is 'src'. The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally, the repository tree shows that the parent directory of a generator project often isn't 'gen' but the project's own name, in cases where multiple generator projects sit under 'gen'. This led to such projects being treated incorrectly and forced developers to set the "IsGeneratorProject" property manually. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref'. - `IsGeneratorProject`: **The project's parent directory is 'gen', or the parent's parent directory is 'gen'.** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and both the IsTestProject **and IsTrimmingTestProject** properties above are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsSourceProject`: **True when all of the above are false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently, these properties categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref'. - `IsGeneratorProject`: The project's parent directory is 'gen'. - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsRuntimeAssembly`: True when all of the above are false. - `IsSourceProject`: True when the project's parent directory is 'src'. The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally, the repository tree shows that the parent directory of a generator project often isn't 'gen' but the project's own name, in cases where multiple generator projects sit under 'gen'. This led to such projects being treated incorrectly and forced developers to set the "IsGeneratorProject" property manually. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref'. - `IsGeneratorProject`: **The project's parent directory is 'gen', or the parent's parent directory is 'gen'.** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and both the IsTestProject **and IsTrimmingTestProject** properties above are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsSourceProject`: **True when all of the above are false.** (A minimal MSBuild sketch of the directory-based detection described here follows this record.)
./src/libraries/System.Net.NetworkInformation/ref/System.Net.NetworkInformation.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <TargetFramework>$(NetCoreAppCurrent)</TargetFramework> <Nullable>enable</Nullable> </PropertyGroup> <ItemGroup> <Compile Include="System.Net.NetworkInformation.cs" /> </ItemGroup> <ItemGroup> <ProjectReference Include="..\..\Microsoft.Win32.Primitives\ref\Microsoft.Win32.Primitives.csproj" /> <ProjectReference Include="..\..\System.Net.Primitives\ref\System.Net.Primitives.csproj" /> <ProjectReference Include="..\..\System.Runtime\ref\System.Runtime.csproj" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <TargetFramework>$(NetCoreAppCurrent)</TargetFramework> <Nullable>enable</Nullable> </PropertyGroup> <ItemGroup> <Compile Include="System.Net.NetworkInformation.cs" /> </ItemGroup> <ItemGroup> <ProjectReference Include="..\..\Microsoft.Win32.Primitives\ref\Microsoft.Win32.Primitives.csproj" /> <ProjectReference Include="..\..\System.Net.Primitives\ref\System.Net.Primitives.csproj" /> <ProjectReference Include="..\..\System.Runtime\ref\System.Runtime.csproj" /> </ItemGroup> </Project>
-1
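The record above is a direct example of the directory-based rules: System.Net.NetworkInformation.csproj sits in a 'ref' directory, so it is a reference-assembly project. Below is a sketch of those rules and the new IsSourceProject fallback using standard MSBuild property functions; again an illustrative assumption rather than the repository's actual implementation, with `_parentDirName` and `_parentParentDirName` as hypothetical helper names.

```xml
<Project>
  <PropertyGroup>
    <!-- Name of the directory that contains the project file. -->
    <_parentDirName>$([System.IO.Path]::GetFileName('$(MSBuildProjectDirectory)'))</_parentDirName>
    <!-- Name of the directory one level further up. -->
    <_parentParentDirName>$([System.IO.Path]::GetFileName($([System.IO.Path]::GetDirectoryName('$(MSBuildProjectDirectory)'))))</_parentParentDirName>

    <IsReferenceAssemblyProject Condition="'$(_parentDirName)' == 'ref'">true</IsReferenceAssemblyProject>
    <!-- Improved rule: a 'gen' parent *or* grandparent counts, covering gen/Foo/Foo.csproj layouts. -->
    <IsGeneratorProject Condition="'$(_parentDirName)' == 'gen' or '$(_parentParentDirName)' == 'gen'">true</IsGeneratorProject>

    <!-- IsSourceProject is the unambiguous fallback: true only when every other category is false.
         The test-related properties are assumed to be computed as in the earlier sketch. -->
    <IsSourceProject Condition="'$(IsReferenceAssemblyProject)' != 'true' and
                                '$(IsGeneratorProject)' != 'true' and
                                '$(IsTestProject)' != 'true' and
                                '$(IsTrimmingTestProject)' != 'true' and
                                '$(IsTestSupportProject)' != 'true' and
                                '$(UsingMicrosoftNoTargetsSdk)' != 'true' and
                                '$(UsingMicrosoftTraversalSdk)' != 'true'">true</IsSourceProject>
  </PropertyGroup>
</Project>
```

The widened IsGeneratorProject condition is the behavioral change of the PR: a project at gen/Foo.Generators/Foo.Generators.csproj now matches automatically, without anyone setting the property by hand.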
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently, these properties categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref'. - `IsGeneratorProject`: The project's parent directory is 'gen'. - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsRuntimeAssembly`: True when all of the above are false. - `IsSourceProject`: True when the project's parent directory is 'src'. The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally, the repository tree shows that the parent directory of a generator project often isn't 'gen' but the project's own name, in cases where multiple generator projects sit under 'gen'. This led to such projects being treated incorrectly and forced developers to set the "IsGeneratorProject" property manually. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref'. - `IsGeneratorProject`: **The project's parent directory is 'gen', or the parent's parent directory is 'gen'.** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and both the IsTestProject **and IsTrimmingTestProject** properties above are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsSourceProject`: **True when all of the above are false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently, these properties categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref'. - `IsGeneratorProject`: The project's parent directory is 'gen'. - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsRuntimeAssembly`: True when all of the above are false. - `IsSourceProject`: True when the project's parent directory is 'src'. The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally, the repository tree shows that the parent directory of a generator project often isn't 'gen' but the project's own name, in cases where multiple generator projects sit under 'gen'. This led to such projects being treated incorrectly and forced developers to set the "IsGeneratorProject" property manually. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref'. - `IsGeneratorProject`: **The project's parent directory is 'gen', or the parent's parent directory is 'gen'.** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and both the IsTestProject **and IsTrimmingTestProject** properties above are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsSourceProject`: **True when all of the above are false.**
./src/tests/JIT/opt/OSR/mainlooptry.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <DebugType /> <Optimize>True</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="$(MSBuildProjectName).cs" /> </ItemGroup> <PropertyGroup> <CLRTestBatchPreCommands><![CDATA[ $(CLRTestBatchPreCommands) set COMPlus_TieredCompilation=1 set COMPlus_TC_QuickJitForLoops=1 set COMPlus_TC_OnStackReplacement=1 ]]></CLRTestBatchPreCommands> <BashCLRTestPreCommands><![CDATA[ $(BashCLRTestPreCommands) export COMPlus_TieredCompilation=1 export COMPlus_TC_QuickJitForLoops=1 export COMPlus_TC_OnStackReplacement=1 ]]></BashCLRTestPreCommands> </PropertyGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <DebugType /> <Optimize>True</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="$(MSBuildProjectName).cs" /> </ItemGroup> <PropertyGroup> <CLRTestBatchPreCommands><![CDATA[ $(CLRTestBatchPreCommands) set COMPlus_TieredCompilation=1 set COMPlus_TC_QuickJitForLoops=1 set COMPlus_TC_OnStackReplacement=1 ]]></CLRTestBatchPreCommands> <BashCLRTestPreCommands><![CDATA[ $(BashCLRTestPreCommands) export COMPlus_TieredCompilation=1 export COMPlus_TC_QuickJitForLoops=1 export COMPlus_TC_OnStackReplacement=1 ]]></BashCLRTestPreCommands> </PropertyGroup> </Project>
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently, these properties categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref'. - `IsGeneratorProject`: The project's parent directory is 'gen'. - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsRuntimeAssembly`: True when all of the above are false. - `IsSourceProject`: True when the project's parent directory is 'src'. The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally, the repository tree shows that the parent directory of a generator project often isn't 'gen' but the project's own name, in cases where multiple generator projects sit under 'gen'. This led to such projects being treated incorrectly and forced developers to set the "IsGeneratorProject" property manually. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref'. - `IsGeneratorProject`: **The project's parent directory is 'gen', or the parent's parent directory is 'gen'.** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and both the IsTestProject **and IsTrimmingTestProject** properties above are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsSourceProject`: **True when all of the above are false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently, these properties categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref'. - `IsGeneratorProject`: The project's parent directory is 'gen'. - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsRuntimeAssembly`: True when all of the above are false. - `IsSourceProject`: True when the project's parent directory is 'src'. The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally, the repository tree shows that the parent directory of a generator project often isn't 'gen' but the project's own name, in cases where multiple generator projects sit under 'gen'. This led to such projects being treated incorrectly and forced developers to set the "IsGeneratorProject" property manually. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref'. - `IsGeneratorProject`: **The project's parent directory is 'gen', or the parent's parent directory is 'gen'.** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and both the IsTestProject **and IsTrimmingTestProject** properties above are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsSourceProject`: **True when all of the above are false.**
./src/tests/JIT/CodeGenBringUpTests/DblMul_d.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <PropertyGroup> <DebugType>Full</DebugType> <Optimize>False</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="DblMul.cs" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <PropertyGroup> <DebugType>Full</DebugType> <Optimize>False</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="DblMul.cs" /> </ItemGroup> </Project>
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently, these properties categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref'. - `IsGeneratorProject`: The project's parent directory is 'gen'. - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsRuntimeAssembly`: True when all of the above are false. - `IsSourceProject`: True when the project's parent directory is 'src'. The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally, the repository tree shows that the parent directory of a generator project often isn't 'gen' but the project's own name, in cases where multiple generator projects sit under 'gen'. This led to such projects being treated incorrectly and forced developers to set the "IsGeneratorProject" property manually. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref'. - `IsGeneratorProject`: **The project's parent directory is 'gen', or the parent's parent directory is 'gen'.** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and both the IsTestProject **and IsTrimmingTestProject** properties above are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsSourceProject`: **True when all of the above are false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently, these properties categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref'. - `IsGeneratorProject`: The project's parent directory is 'gen'. - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsRuntimeAssembly`: True when all of the above are false. - `IsSourceProject`: True when the project's parent directory is 'src'. The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally, the repository tree shows that the parent directory of a generator project often isn't 'gen' but the project's own name, in cases where multiple generator projects sit under 'gen'. This led to such projects being treated incorrectly and forced developers to set the "IsGeneratorProject" property manually. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref'. - `IsGeneratorProject`: **The project's parent directory is 'gen', or the parent's parent directory is 'gen'.** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and both the IsTestProject **and IsTrimmingTestProject** properties above are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsSourceProject`: **True when all of the above are false.**
./src/tests/JIT/CodeGenBringUpTests/FibLoop_d.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <PropertyGroup> <DebugType>Full</DebugType> <Optimize>False</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="FibLoop.cs" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <PropertyGroup> <DebugType>Full</DebugType> <Optimize>False</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="FibLoop.cs" /> </ItemGroup> </Project>
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently, these properties categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref'. - `IsGeneratorProject`: The project's parent directory is 'gen'. - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsRuntimeAssembly`: True when all of the above are false. - `IsSourceProject`: True when the project's parent directory is 'src'. The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally, the repository tree shows that the parent directory of a generator project often isn't 'gen' but the project's own name, in cases where multiple generator projects sit under 'gen'. This led to such projects being treated incorrectly and forced developers to set the "IsGeneratorProject" property manually. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref'. - `IsGeneratorProject`: **The project's parent directory is 'gen', or the parent's parent directory is 'gen'.** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and both the IsTestProject **and IsTrimmingTestProject** properties above are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsSourceProject`: **True when all of the above are false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently, these properties categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref'. - `IsGeneratorProject`: The project's parent directory is 'gen'. - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsRuntimeAssembly`: True when all of the above are false. - `IsSourceProject`: True when the project's parent directory is 'src'. The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally, the repository tree shows that the parent directory of a generator project often isn't 'gen' but the project's own name, in cases where multiple generator projects sit under 'gen'. This led to such projects being treated incorrectly and forced developers to set the "IsGeneratorProject" property manually. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref'. - `IsGeneratorProject`: **The project's parent directory is 'gen', or the parent's parent directory is 'gen'.** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and both the IsTestProject **and IsTrimmingTestProject** properties above are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsSourceProject`: **True when all of the above are false.**
./src/tests/JIT/opt/virtualstubdispatch/manyintf/ctest_cs_ro.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <PropertyGroup> <DebugType>None</DebugType> <Optimize>True</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="ctest.cs" /> </ItemGroup> <ItemGroup> <ProjectReference Include="itest1.csproj" /> <ProjectReference Include="itest2.csproj" /> <ProjectReference Include="itest3.csproj" /> <ProjectReference Include="itest4.csproj" /> <ProjectReference Include="itest5.csproj" /> <ProjectReference Include="itest6.csproj" /> <ProjectReference Include="itest7.csproj" /> <ProjectReference Include="itest8.csproj" /> <ProjectReference Include="itest9.csproj" /> <ProjectReference Include="itest10.csproj" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <PropertyGroup> <DebugType>None</DebugType> <Optimize>True</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="ctest.cs" /> </ItemGroup> <ItemGroup> <ProjectReference Include="itest1.csproj" /> <ProjectReference Include="itest2.csproj" /> <ProjectReference Include="itest3.csproj" /> <ProjectReference Include="itest4.csproj" /> <ProjectReference Include="itest5.csproj" /> <ProjectReference Include="itest6.csproj" /> <ProjectReference Include="itest7.csproj" /> <ProjectReference Include="itest8.csproj" /> <ProjectReference Include="itest9.csproj" /> <ProjectReference Include="itest10.csproj" /> </ItemGroup> </Project>
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently, these properties categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref'. - `IsGeneratorProject`: The project's parent directory is 'gen'. - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsRuntimeAssembly`: True when all of the above are false. - `IsSourceProject`: True when the project's parent directory is 'src'. The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally, the repository tree shows that the parent directory of a generator project often isn't 'gen' but the project's own name, in cases where multiple generator projects sit under 'gen'. This led to such projects being treated incorrectly and forced developers to set the "IsGeneratorProject" property manually. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref'. - `IsGeneratorProject`: **The project's parent directory is 'gen', or the parent's parent directory is 'gen'.** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and both the IsTestProject **and IsTrimmingTestProject** properties above are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsSourceProject`: **True when all of the above are false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently, these properties categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref'. - `IsGeneratorProject`: The project's parent directory is 'gen'. - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsRuntimeAssembly`: True when all of the above are false. - `IsSourceProject`: True when the project's parent directory is 'src'. The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally, the repository tree shows that the parent directory of a generator project often isn't 'gen' but the project's own name, in cases where multiple generator projects sit under 'gen'. This led to such projects being treated incorrectly and forced developers to set the "IsGeneratorProject" property manually. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref'. - `IsGeneratorProject`: **The project's parent directory is 'gen', or the parent's parent directory is 'gen'.** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and both the IsTestProject **and IsTrimmingTestProject** properties above are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsSourceProject`: **True when all of the above are false.**
./eng/testing/outerBuild.targets
<Project> <Target Name="Test" DependsOnTargets="GetProjectWithBestTargetFrameworks"> <MSBuild Projects="@(InnerBuildProjectsWithBestTargetFramework)" Targets="Test"> </MSBuild> </Target> <Target Name="VSTest" DependsOnTargets="GetProjectWithBestTargetFrameworks"> <MSBuild Projects="@(InnerBuildProjectsWithBestTargetFramework)" Targets="VSTest"> </MSBuild> </Target> </Project>
<Project> <Target Name="Test" DependsOnTargets="GetProjectWithBestTargetFrameworks"> <MSBuild Projects="@(InnerBuildProjectsWithBestTargetFramework)" Targets="Test"> </MSBuild> </Target> <Target Name="VSTest" DependsOnTargets="GetProjectWithBestTargetFrameworks"> <MSBuild Projects="@(InnerBuildProjectsWithBestTargetFramework)" Targets="VSTest"> </MSBuild> </Target> </Project>
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently, these properties categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref'. - `IsGeneratorProject`: The project's parent directory is 'gen'. - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsRuntimeAssembly`: True when all of the above are false. - `IsSourceProject`: True when the project's parent directory is 'src'. The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally, the repository tree shows that the parent directory of a generator project often isn't 'gen' but the project's own name, in cases where multiple generator projects sit under 'gen'. This led to such projects being treated incorrectly and forced developers to set the "IsGeneratorProject" property manually. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref'. - `IsGeneratorProject`: **The project's parent directory is 'gen', or the parent's parent directory is 'gen'.** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and both the IsTestProject **and IsTrimmingTestProject** properties above are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsSourceProject`: **True when all of the above are false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently, these properties categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref'. - `IsGeneratorProject`: The project's parent directory is 'gen'. - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsRuntimeAssembly`: True when all of the above are false. - `IsSourceProject`: True when the project's parent directory is 'src'. The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally, the repository tree shows that the parent directory of a generator project often isn't 'gen' but the project's own name, in cases where multiple generator projects sit under 'gen'. This led to such projects being treated incorrectly and forced developers to set the "IsGeneratorProject" property manually. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref'. - `IsGeneratorProject`: **The project's parent directory is 'gen', or the parent's parent directory is 'gen'.** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and both the IsTestProject **and IsTrimmingTestProject** properties above are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsSourceProject`: **True when all of the above are false.**
./src/tests/JIT/Regression/JitBlue/GitHub_8599/GitHub_8599.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <PropertyGroup> <DebugType /> <Optimize>True</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="$(MSBuildProjectName).cs" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <PropertyGroup> <DebugType /> <Optimize>True</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="$(MSBuildProjectName).cs" /> </ItemGroup> </Project>
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently, these properties categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref'. - `IsGeneratorProject`: The project's parent directory is 'gen'. - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsRuntimeAssembly`: True when all of the above are false. - `IsSourceProject`: True when the project's parent directory is 'src'. The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally, the repository tree shows that the parent directory of a generator project often isn't 'gen' but the project's own name, in cases where multiple generator projects sit under 'gen'. This led to such projects being treated incorrectly and forced developers to set the "IsGeneratorProject" property manually. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref'. - `IsGeneratorProject`: **The project's parent directory is 'gen', or the parent's parent directory is 'gen'.** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and both the IsTestProject **and IsTrimmingTestProject** properties above are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsSourceProject`: **True when all of the above are false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently, these properties categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref'. - `IsGeneratorProject`: The project's parent directory is 'gen'. - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsRuntimeAssembly`: True when all of the above are false. - `IsSourceProject`: True when the project's parent directory is 'src'. The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally, the repository tree shows that the parent directory of a generator project often isn't 'gen' but the project's own name, in cases where multiple generator projects sit under 'gen'. This led to such projects being treated incorrectly and forced developers to set the "IsGeneratorProject" property manually. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref'. - `IsGeneratorProject`: **The project's parent directory is 'gen', or the parent's parent directory is 'gen'.** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and both the IsTestProject **and IsTrimmingTestProject** properties above are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsSourceProject`: **True when all of the above are false.**
./src/tests/GC/Scenarios/GCSimulator/GCSimulator_249.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <AllowUnsafeBlocks>true</AllowUnsafeBlocks> <GCStressIncompatible>true</GCStressIncompatible> <CLRTestExecutionArguments>-t 1 -tp 0 -dz 17 -sdz 8500 -dc 10000 -sdc 5000 -lt 5 -f -dp 0.0 -dw 0.0</CLRTestExecutionArguments> <IsGCSimulatorTest>true</IsGCSimulatorTest> <CLRTestProjectToRun>GCSimulator.csproj</CLRTestProjectToRun> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <ItemGroup> <Compile Include="GCSimulator.cs" /> <Compile Include="lifetimefx.cs" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <AllowUnsafeBlocks>true</AllowUnsafeBlocks> <GCStressIncompatible>true</GCStressIncompatible> <CLRTestExecutionArguments>-t 1 -tp 0 -dz 17 -sdz 8500 -dc 10000 -sdc 5000 -lt 5 -f -dp 0.0 -dw 0.0</CLRTestExecutionArguments> <IsGCSimulatorTest>true</IsGCSimulatorTest> <CLRTestProjectToRun>GCSimulator.csproj</CLRTestProjectToRun> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <ItemGroup> <Compile Include="GCSimulator.cs" /> <Compile Include="lifetimefx.cs" /> </ItemGroup> </Project>
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently, these properties categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref'. - `IsGeneratorProject`: The project's parent directory is 'gen'. - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsRuntimeAssembly`: True when all of the above are false. - `IsSourceProject`: True when the project's parent directory is 'src'. The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally, the repository tree shows that the parent directory of a generator project often isn't 'gen' but the project's own name, in cases where multiple generator projects sit under 'gen'. This led to such projects being treated incorrectly and forced developers to set the "IsGeneratorProject" property manually. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref'. - `IsGeneratorProject`: **The project's parent directory is 'gen', or the parent's parent directory is 'gen'.** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and both the IsTestProject **and IsTrimmingTestProject** properties above are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsSourceProject`: **True when all of the above are false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently, these properties categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref'. - `IsGeneratorProject`: The project's parent directory is 'gen'. - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsRuntimeAssembly`: True when all of the above are false. - `IsSourceProject`: True when the project's parent directory is 'src'. The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally, the repository tree shows that the parent directory of a generator project often isn't 'gen' but the project's own name, in cases where multiple generator projects sit under 'gen'. This led to such projects being treated incorrectly and forced developers to set the "IsGeneratorProject" property manually. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref'. - `IsGeneratorProject`: **The project's parent directory is 'gen', or the parent's parent directory is 'gen'.** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and both the IsTestProject **and IsTrimmingTestProject** properties above are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK. - `IsSourceProject`: **True when all of the above are false.**
./src/tests/CoreMangLib/system/type/TypeEquals1.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <AllowUnsafeBlocks>true</AllowUnsafeBlocks> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <ItemGroup> <Compile Include="typeequals1.cs" /> </ItemGroup> <ItemGroup> <ProjectReference Include="$(TestSourceDir)Common/CoreCLRTestLibrary/CoreCLRTestLibrary.csproj" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <AllowUnsafeBlocks>true</AllowUnsafeBlocks> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <ItemGroup> <Compile Include="typeequals1.cs" /> </ItemGroup> <ItemGroup> <ProjectReference Include="$(TestSourceDir)Common/CoreCLRTestLibrary/CoreCLRTestLibrary.csproj" /> </ItemGroup> </Project>
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
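The test-project classification the description keeps unchanged (and the IsTestSupportProject fix it makes) can be sketched the same way. Property names follow the description; the normalization helpers (_projectDir, _isInTestsTree) are hypothetical, and the repo's exact logic may differ:

<Project>
  <PropertyGroup>
    <!-- Normalize separators so the '/tests/' check behaves the same on Windows and Unix. -->
    <_projectDir>$(MSBuildProjectDirectory.Replace('\', '/'))</_projectDir>
    <_isInTestsTree>$(_projectDir.Contains('/tests/'))</_isInTestsTree>

    <IsTestProject Condition="'$(_isInTestsTree)' == 'true' and ($(MSBuildProjectName.EndsWith('.Tests')) or $(MSBuildProjectName.EndsWith('.UnitTests')))">true</IsTestProject>
    <IsTrimmingTestProject Condition="'$(_isInTestsTree)' == 'true' and $(MSBuildProjectName.EndsWith('.TrimmingTests'))">true</IsTrimmingTestProject>
    <!-- The PR's fix: a support project only when it is neither a test nor a trimming test. -->
    <IsTestSupportProject Condition="'$(_isInTestsTree)' == 'true' and '$(IsTestProject)' != 'true' and '$(IsTrimmingTestProject)' != 'true'">true</IsTestSupportProject>
  </PropertyGroup>
</Project>

Under the old rule, a Foo.TrimmingTests project beneath /tests/ would have been classified as both a trimming test and a test support project (its name ends in 'TrimmingTests', not '.Tests', so IsTestProject stayed false); excluding IsTrimmingTestProject from the IsTestSupportProject condition removes that overlap.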
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/tests/JIT/Performance/CodeQuality/Burgers/Burgers.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> </PropertyGroup> <PropertyGroup> <DebugType>pdbonly</DebugType> <Optimize>true</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="$(MSBuildProjectName).cs" /> </ItemGroup> <PropertyGroup> <ProjectAssetsFile>$(JitPackagesConfigFileDirectory)benchmark\obj\project.assets.json</ProjectAssetsFile> </PropertyGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> </PropertyGroup> <PropertyGroup> <DebugType>pdbonly</DebugType> <Optimize>true</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="$(MSBuildProjectName).cs" /> </ItemGroup> <PropertyGroup> <ProjectAssetsFile>$(JitPackagesConfigFileDirectory)benchmark\obj\project.assets.json</ProjectAssetsFile> </PropertyGroup> </Project>
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
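Finally, the new IsSourceProject fallback ("True when all above is false") can be written as a single guarded condition. A minimal sketch using the property names from the description; the '$(IsSourceProject)' == '' guard (so an explicit setting still wins) is an assumption:

<Project>
  <PropertyGroup>
    <IsSourceProject Condition="'$(IsSourceProject)' == '' and
                                '$(IsReferenceAssemblyProject)' != 'true' and
                                '$(IsGeneratorProject)' != 'true' and
                                '$(IsTestProject)' != 'true' and
                                '$(IsTrimmingTestProject)' != 'true' and
                                '$(IsTestSupportProject)' != 'true' and
                                '$(UsingMicrosoftNoTargetsSdk)' != 'true' and
                                '$(UsingMicrosoftTraversalSdk)' != 'true'">true</IsSourceProject>
  </PropertyGroup>
</Project>

Deriving IsSourceProject by elimination rather than from a 'src' parent directory is what resolves the ambiguity the description calls out: each project ends up with exactly one primary classification.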
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/tests/JIT/jit64/hfa/main/testE/hfa_sf0E_d.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <PropertyGroup> <DebugType>full</DebugType> <Optimize>False</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="hfa_testE.cs" /> </ItemGroup> <ItemGroup> <ProjectReference Include="..\dll\common.csproj" /> <ProjectReference Include="..\dll\hfa_simple_f32_common.csproj" /> <ProjectReference Include="..\dll\hfa_simple_f32_managed.csproj" /> <ProjectReference Include="..\dll\CMakelists.txt" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <PropertyGroup> <DebugType>full</DebugType> <Optimize>False</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="hfa_testE.cs" /> </ItemGroup> <ItemGroup> <ProjectReference Include="..\dll\common.csproj" /> <ProjectReference Include="..\dll\hfa_simple_f32_common.csproj" /> <ProjectReference Include="..\dll\hfa_simple_f32_managed.csproj" /> <ProjectReference Include="..\dll\CMakelists.txt" /> </ItemGroup> </Project>
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/tests/JIT/Regression/CLR-x86-JIT/V1-M13-RTM/b89277/b89277.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> </PropertyGroup> <PropertyGroup> <DebugType>PdbOnly</DebugType> </PropertyGroup> <ItemGroup> <Compile Include="$(MSBuildProjectName).cs" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> </PropertyGroup> <PropertyGroup> <DebugType>PdbOnly</DebugType> </PropertyGroup> <ItemGroup> <Compile Include="$(MSBuildProjectName).cs" /> </ItemGroup> </Project>
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/tests/JIT/Methodical/int64/arrays/lcs_ulong_do.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> </PropertyGroup> <PropertyGroup> <DebugType>Full</DebugType> <Optimize>True</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="lcs_ulong.cs" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> </PropertyGroup> <PropertyGroup> <DebugType>Full</DebugType> <Optimize>True</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="lcs_ulong.cs" /> </ItemGroup> </Project>
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/tests/JIT/CodeGenBringUpTests/Eq1_d.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <PropertyGroup> <DebugType>Full</DebugType> <Optimize>False</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="Eq1.cs" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <PropertyGroup> <DebugType>Full</DebugType> <Optimize>False</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="Eq1.cs" /> </ItemGroup> </Project>
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/tests/JIT/Performance/CodeQuality/Benchstones/MDBenchF/MDSqMtx/MDSqMtx.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> </PropertyGroup> <PropertyGroup> <DebugType>pdbonly</DebugType> <Optimize>true</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="$(MSBuildProjectName).cs" /> </ItemGroup> <PropertyGroup> <ProjectAssetsFile>$(JitPackagesConfigFileDirectory)benchmark\obj\project.assets.json</ProjectAssetsFile> </PropertyGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> </PropertyGroup> <PropertyGroup> <DebugType>pdbonly</DebugType> <Optimize>true</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="$(MSBuildProjectName).cs" /> </ItemGroup> <PropertyGroup> <ProjectAssetsFile>$(JitPackagesConfigFileDirectory)benchmark\obj\project.assets.json</ProjectAssetsFile> </PropertyGroup> </Project>
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/tests/JIT/Methodical/Invoke/25params/25param3a_cs_d.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <PropertyGroup> <DebugType>Full</DebugType> <Optimize>False</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="25param3a.cs" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <PropertyGroup> <DebugType>Full</DebugType> <Optimize>False</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="25param3a.cs" /> </ItemGroup> </Project>
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/tests/JIT/Regression/JitBlue/GitHub_19550/GitHub_19550.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <DebugType>None</DebugType> <Optimize>True</Optimize> <AllowUnsafeBlocks>True</AllowUnsafeBlocks> </PropertyGroup> <ItemGroup> <Compile Include="$(MSBuildProjectName).cs" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <DebugType>None</DebugType> <Optimize>True</Optimize> <AllowUnsafeBlocks>True</AllowUnsafeBlocks> </PropertyGroup> <ItemGroup> <Compile Include="$(MSBuildProjectName).cs" /> </ItemGroup> </Project>
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/tests/JIT/Directed/shift/uint64_d.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <PropertyGroup> <DebugType>full</DebugType> <Optimize>False</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="uint64.cs" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <PropertyGroup> <DebugType>full</DebugType> <Optimize>False</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="uint64.cs" /> </ItemGroup> </Project>
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/tests/GC/Scenarios/GCSimulator/GCSimulator_60.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <AllowUnsafeBlocks>true</AllowUnsafeBlocks> <GCStressIncompatible>true</GCStressIncompatible> <CLRTestExecutionArguments>-t 3 -tp 0 -dz 17 -sdz 8500 -dc 10000 -sdc 5000 -lt 3 -dp 0.8 -dw 0.0</CLRTestExecutionArguments> <IsGCSimulatorTest>true</IsGCSimulatorTest> <CLRTestProjectToRun>GCSimulator.csproj</CLRTestProjectToRun> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <ItemGroup> <Compile Include="GCSimulator.cs" /> <Compile Include="lifetimefx.cs" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <AllowUnsafeBlocks>true</AllowUnsafeBlocks> <GCStressIncompatible>true</GCStressIncompatible> <CLRTestExecutionArguments>-t 3 -tp 0 -dz 17 -sdz 8500 -dc 10000 -sdc 5000 -lt 3 -dp 0.8 -dw 0.0</CLRTestExecutionArguments> <IsGCSimulatorTest>true</IsGCSimulatorTest> <CLRTestProjectToRun>GCSimulator.csproj</CLRTestProjectToRun> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <ItemGroup> <Compile Include="GCSimulator.cs" /> <Compile Include="lifetimefx.cs" /> </ItemGroup> </Project>
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/libraries/System.Threading.ThreadPool/Directory.Build.props
<Project> <Import Project="..\Directory.Build.props" /> <PropertyGroup> <StrongNameKeyId>Microsoft</StrongNameKeyId> <IncludePlatformAttributes>true</IncludePlatformAttributes> </PropertyGroup> </Project>
<Project> <Import Project="..\Directory.Build.props" /> <PropertyGroup> <StrongNameKeyId>Microsoft</StrongNameKeyId> <IncludePlatformAttributes>true</IncludePlatformAttributes> </PropertyGroup> </Project>
-1
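The classification rules in the PR description above are purely path-based, which makes them easy to illustrate. The C# sketch below is a hypothetical rendering of those rules for readability only: the real logic lives in the repository's MSBuild .props files, and the names `ProjectClassifier`, `ClassifyProject`, `parentDir`, and `grandparentDir` are invented for this illustration.

```csharp
using System.IO;

// Hypothetical sketch of the path-based project classification described in the
// PR. The actual rules are MSBuild property conditions, not C#; this mirrors only
// the decision order stated in the description.
static class ProjectClassifier
{
    public static string ClassifyProject(string projectFilePath)
    {
        string projectDir = Path.GetDirectoryName(projectFilePath) ?? "";
        string parentDir = Path.GetFileName(projectDir);                    // directory containing the project file
        string grandparentDir = Path.GetFileName(Path.GetDirectoryName(projectDir) ?? "");
        string name = Path.GetFileNameWithoutExtension(projectFilePath);
        bool underTests = projectFilePath.Replace('\\', '/').Contains("/tests/");

        if (parentDir == "ref")
            return "IsReferenceAssemblyProject";
        if (parentDir == "gen" || grandparentDir == "gen")                  // covers gen/<Generator>/<Generator>.csproj
            return "IsGeneratorProject";
        if (underTests && (name.EndsWith(".Tests") || name.EndsWith(".UnitTests")))
            return "IsTestProject";
        if (underTests && name.EndsWith(".TrimmingTests"))
            return "IsTrimmingTestProject";
        if (underTests)
            return "IsTestSupportProject";
        return "IsSourceProject";                                           // true when all of the above are false
    }
}
```

For a layout like `src/libraries/Foo/gen/FooGenerator/FooGenerator.csproj` it is the grandparent check that flips `IsGeneratorProject` to true, which is exactly the multi-generator case the PR calls out.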
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/tests/JIT/CodeGenBringUpTests/StaticValueField_ro.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <PropertyGroup> <DebugType>PdbOnly</DebugType> <Optimize>True</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="StaticValueField.cs" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <PropertyGroup> <DebugType>PdbOnly</DebugType> <Optimize>True</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="StaticValueField.cs" /> </ItemGroup> </Project>
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/libraries/System.Private.Xml/tests/XmlSchema/XmlSchemaValidatorApi/System.Xml.XmlSchema.XmlSchemaValidatorApi.Tests.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <TargetFramework>$(NetCoreAppCurrent)</TargetFramework> </PropertyGroup> <ItemGroup> <Compile Include="Constructor_AddSchema.cs" /> <Compile Include="CustomImplementations.cs" /> <Compile Include="ExceptionVerifier.cs" /> <Compile Include="GetExpectedAttributes.cs" /> <Compile Include="GetExpectedParticles.cs" /> <Compile Include="Initialize_EndValidation.cs" /> <Compile Include="PropertiesTests.cs" /> <Compile Include="ValidateAttribute.cs" /> <Compile Include="ValidateAttribute_String.cs" /> <Compile Include="ValidateElement.cs" /> <Compile Include="ValidateMisc.cs" /> <Compile Include="ValidateText_EndElement.cs" /> <Compile Include="ValidateText_String.cs" /> <Compile Include="ValidateWhitespace_String.cs" /> <Compile Include="ValidatorModule.cs" /> <Compile Include="XmlTestSettings.cs" /> <Compile Include="CXmlTestResolver.cs" /> <Compile Include="$(CommonTestPath)System\IO\TempDirectory.cs" Link="Common\System\IO\TempDirectory.cs" /> </ItemGroup> <ItemGroup> <Content Include="..\TestFiles\**\*" Link="TestFiles\%(RecursiveDir)%(Filename)%(Extension)" CopyToOutputDirectory="PreserveNewest" Visible="false" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <TargetFramework>$(NetCoreAppCurrent)</TargetFramework> </PropertyGroup> <ItemGroup> <Compile Include="Constructor_AddSchema.cs" /> <Compile Include="CustomImplementations.cs" /> <Compile Include="ExceptionVerifier.cs" /> <Compile Include="GetExpectedAttributes.cs" /> <Compile Include="GetExpectedParticles.cs" /> <Compile Include="Initialize_EndValidation.cs" /> <Compile Include="PropertiesTests.cs" /> <Compile Include="ValidateAttribute.cs" /> <Compile Include="ValidateAttribute_String.cs" /> <Compile Include="ValidateElement.cs" /> <Compile Include="ValidateMisc.cs" /> <Compile Include="ValidateText_EndElement.cs" /> <Compile Include="ValidateText_String.cs" /> <Compile Include="ValidateWhitespace_String.cs" /> <Compile Include="ValidatorModule.cs" /> <Compile Include="XmlTestSettings.cs" /> <Compile Include="CXmlTestResolver.cs" /> <Compile Include="$(CommonTestPath)System\IO\TempDirectory.cs" Link="Common\System\IO\TempDirectory.cs" /> </ItemGroup> <ItemGroup> <Content Include="..\TestFiles\**\*" Link="TestFiles\%(RecursiveDir)%(Filename)%(Extension)" CopyToOutputDirectory="PreserveNewest" Visible="false" /> </ItemGroup> </Project>
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/tests/readytorun/r2rdump/BasicTests/files/HelloWorld.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>library</OutputType> <CLRTestKind>SharedLibrary</CLRTestKind> </PropertyGroup> <ItemGroup> <Compile Include="HelloWorld.cs" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>library</OutputType> <CLRTestKind>SharedLibrary</CLRTestKind> </PropertyGroup> <ItemGroup> <Compile Include="HelloWorld.cs" /> </ItemGroup> </Project>
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/tests/Loader/classloader/v1/M10/Acceptance/Case4.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <AllowUnsafeBlocks>true</AllowUnsafeBlocks> <OutputType>Exe</OutputType> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <ItemGroup> <Compile Include="Case4.cool" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <AllowUnsafeBlocks>true</AllowUnsafeBlocks> <OutputType>Exe</OutputType> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <ItemGroup> <Compile Include="Case4.cool" /> </ItemGroup> </Project>
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/tests/GC/Scenarios/GCSimulator/GCSimulator_111.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <AllowUnsafeBlocks>true</AllowUnsafeBlocks> <GCStressIncompatible>true</GCStressIncompatible> <CLRTestExecutionArguments>-t 1 -tp 0 -dz 17 -sdz 8500 -dc 10000 -sdc 5000 -lt 5 -dp 0.0 -dw 0.0</CLRTestExecutionArguments> <IsGCSimulatorTest>true</IsGCSimulatorTest> <CLRTestProjectToRun>GCSimulator.csproj</CLRTestProjectToRun> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <ItemGroup> <Compile Include="GCSimulator.cs" /> <Compile Include="lifetimefx.cs" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <AllowUnsafeBlocks>true</AllowUnsafeBlocks> <GCStressIncompatible>true</GCStressIncompatible> <CLRTestExecutionArguments>-t 1 -tp 0 -dz 17 -sdz 8500 -dc 10000 -sdc 5000 -lt 5 -dp 0.0 -dw 0.0</CLRTestExecutionArguments> <IsGCSimulatorTest>true</IsGCSimulatorTest> <CLRTestProjectToRun>GCSimulator.csproj</CLRTestProjectToRun> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <ItemGroup> <Compile Include="GCSimulator.cs" /> <Compile Include="lifetimefx.cs" /> </ItemGroup> </Project>
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/tests/JIT/Regression/JitBlue/DevDiv_794631/DevDiv_794631_r.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <PropertyGroup> <DebugType>None</DebugType> <Optimize>False</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="DevDiv_794631.cs" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <PropertyGroup> <DebugType>None</DebugType> <Optimize>False</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="DevDiv_794631.cs" /> </ItemGroup> </Project>
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/tests/JIT/Performance/CodeQuality/Bytemark/nnet.dat
5 7 8 26 0 0 1 0 0 0 1 0 1 0 1 0 0 0 1 1 0 0 0 1 1 1 1 1 1 1 0 0 0 1 1 0 0 0 1 0 1 0 0 0 0 0 1 1 1 1 1 0 1 0 0 0 1 1 0 0 0 1 1 1 1 1 0 1 0 0 0 1 1 0 0 0 1 1 1 1 1 0 0 1 0 0 0 0 1 0 0 1 1 1 0 1 0 0 0 1 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 1 1 1 0 0 1 0 0 0 0 1 1 1 1 1 1 0 1 0 0 0 1 1 0 0 0 1 1 0 0 0 1 1 0 0 0 1 1 0 0 0 1 1 1 1 1 0 0 1 0 0 0 1 0 0 1 1 1 1 1 1 0 0 0 0 1 0 0 0 0 1 1 1 0 0 1 0 0 0 0 1 0 0 0 0 1 1 1 1 1 0 1 0 0 0 1 0 1 1 1 1 1 1 1 0 0 0 0 1 0 0 0 0 1 1 1 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 1 0 0 0 1 1 0 0 1 1 1 0 1 0 0 0 1 1 0 0 0 0 1 0 0 0 0 1 0 0 1 1 1 0 0 0 1 0 1 1 1 0 0 1 0 0 0 1 1 1 1 0 0 0 1 1 0 0 0 1 1 0 0 0 1 1 1 1 1 1 1 0 0 0 1 1 0 0 0 1 1 0 0 0 1 0 1 0 0 1 0 0 0 0 1 1 1 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 1 1 1 0 0 1 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 1 0 0 0 1 1 0 0 0 1 0 1 1 1 0 0 1 0 0 1 0 1 0 1 0 0 0 1 1 0 0 1 0 1 0 1 0 0 1 1 0 0 0 1 0 1 0 0 1 0 0 1 0 1 0 0 0 1 0 1 0 0 1 0 1 1 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 1 1 1 1 0 1 0 0 1 1 0 0 1 0 0 0 1 1 1 0 1 1 1 0 1 0 1 1 0 1 0 1 1 0 0 0 1 1 0 0 0 1 1 0 0 0 1 0 1 0 0 1 1 0 1 1 0 0 0 1 1 1 0 0 1 1 0 1 0 1 1 0 1 0 1 1 0 1 0 1 1 0 0 1 1 1 0 0 0 1 0 1 0 0 1 1 1 0 0 1 1 1 0 1 0 0 0 1 1 0 0 0 1 1 0 0 0 1 1 0 0 0 1 1 0 0 0 1 0 1 1 1 0 0 1 0 0 1 1 1 1 1 1 1 1 0 1 0 0 0 1 1 0 0 0 1 1 1 1 1 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 1 1 1 0 1 0 0 0 1 1 0 0 0 1 1 0 0 0 1 1 0 1 0 1 1 0 0 1 1 0 1 1 1 1 0 1 0 1 0 0 0 1 1 1 1 1 0 1 0 0 0 1 1 0 0 0 1 1 1 1 1 0 1 0 1 0 0 1 0 0 1 0 1 0 0 0 1 0 1 0 1 0 0 1 0 0 1 1 1 1 1 0 0 0 0 1 0 0 0 0 0 1 1 1 0 0 0 0 0 1 0 0 0 0 1 1 1 1 1 0 0 1 0 1 0 0 1 1 1 1 1 1 1 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 1 0 1 0 0 1 0 0 0 1 1 0 0 0 1 1 0 0 0 1 1 0 0 0 1 1 0 0 0 1 1 0 0 0 1 0 1 1 1 0 0 1 0 1 0 1 0 1 1 0 0 0 1 1 0 0 0 1 0 1 0 1 0 0 1 0 1 0 0 1 0 1 0 0 1 0 1 0 0 0 1 0 0 0 1 0 1 0 1 1 0 1 0 0 0 1 1 0 0 0 1 1 0 0 0 1 1 0 1 0 1 1 0 1 0 1 1 0 1 0 1 0 1 0 1 0 0 1 0 1 0 1 1 1 1 0 0 0 1 0 1 0 1 0 0 1 0 1 0 0 0 1 0 0 0 1 0 1 0 0 1 0 1 0 1 0 0 0 1 0 1 0 1 1 0 0 0 1 0 0 0 1 0 1 0 1 0 0 1 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 1 1 0 0 1 1 1 1 1 1 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 1 1 1 1 1 0 1 0 1 1 0 1 0
5 7 8 26 0 0 1 0 0 0 1 0 1 0 1 0 0 0 1 1 0 0 0 1 1 1 1 1 1 1 0 0 0 1 1 0 0 0 1 0 1 0 0 0 0 0 1 1 1 1 1 0 1 0 0 0 1 1 0 0 0 1 1 1 1 1 0 1 0 0 0 1 1 0 0 0 1 1 1 1 1 0 0 1 0 0 0 0 1 0 0 1 1 1 0 1 0 0 0 1 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 1 1 1 0 0 1 0 0 0 0 1 1 1 1 1 1 0 1 0 0 0 1 1 0 0 0 1 1 0 0 0 1 1 0 0 0 1 1 0 0 0 1 1 1 1 1 0 0 1 0 0 0 1 0 0 1 1 1 1 1 1 0 0 0 0 1 0 0 0 0 1 1 1 0 0 1 0 0 0 0 1 0 0 0 0 1 1 1 1 1 0 1 0 0 0 1 0 1 1 1 1 1 1 1 0 0 0 0 1 0 0 0 0 1 1 1 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 1 0 0 0 1 1 0 0 1 1 1 0 1 0 0 0 1 1 0 0 0 0 1 0 0 0 0 1 0 0 1 1 1 0 0 0 1 0 1 1 1 0 0 1 0 0 0 1 1 1 1 0 0 0 1 1 0 0 0 1 1 0 0 0 1 1 1 1 1 1 1 0 0 0 1 1 0 0 0 1 1 0 0 0 1 0 1 0 0 1 0 0 0 0 1 1 1 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 1 1 1 0 0 1 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 1 0 0 0 1 1 0 0 0 1 0 1 1 1 0 0 1 0 0 1 0 1 0 1 0 0 0 1 1 0 0 1 0 1 0 1 0 0 1 1 0 0 0 1 0 1 0 0 1 0 0 1 0 1 0 0 0 1 0 1 0 0 1 0 1 1 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 1 1 1 1 0 1 0 0 1 1 0 0 1 0 0 0 1 1 1 0 1 1 1 0 1 0 1 1 0 1 0 1 1 0 0 0 1 1 0 0 0 1 1 0 0 0 1 0 1 0 0 1 1 0 1 1 0 0 0 1 1 1 0 0 1 1 0 1 0 1 1 0 1 0 1 1 0 1 0 1 1 0 0 1 1 1 0 0 0 1 0 1 0 0 1 1 1 0 0 1 1 1 0 1 0 0 0 1 1 0 0 0 1 1 0 0 0 1 1 0 0 0 1 1 0 0 0 1 0 1 1 1 0 0 1 0 0 1 1 1 1 1 1 1 1 0 1 0 0 0 1 1 0 0 0 1 1 1 1 1 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 1 1 1 0 1 0 0 0 1 1 0 0 0 1 1 0 0 0 1 1 0 1 0 1 1 0 0 1 1 0 1 1 1 1 0 1 0 1 0 0 0 1 1 1 1 1 0 1 0 0 0 1 1 0 0 0 1 1 1 1 1 0 1 0 1 0 0 1 0 0 1 0 1 0 0 0 1 0 1 0 1 0 0 1 0 0 1 1 1 1 1 0 0 0 0 1 0 0 0 0 0 1 1 1 0 0 0 0 0 1 0 0 0 0 1 1 1 1 1 0 0 1 0 1 0 0 1 1 1 1 1 1 1 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 1 0 1 0 0 1 0 0 0 1 1 0 0 0 1 1 0 0 0 1 1 0 0 0 1 1 0 0 0 1 1 0 0 0 1 0 1 1 1 0 0 1 0 1 0 1 0 1 1 0 0 0 1 1 0 0 0 1 0 1 0 1 0 0 1 0 1 0 0 1 0 1 0 0 1 0 1 0 0 0 1 0 0 0 1 0 1 0 1 1 0 1 0 0 0 1 1 0 0 0 1 1 0 0 0 1 1 0 1 0 1 1 0 1 0 1 1 0 1 0 1 0 1 0 1 0 0 1 0 1 0 1 1 1 1 0 0 0 1 0 1 0 1 0 0 1 0 1 0 0 0 1 0 0 0 1 0 1 0 0 1 0 1 0 1 0 0 0 1 0 1 0 1 1 0 0 0 1 0 0 0 1 0 1 0 1 0 0 1 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 1 1 0 0 1 1 1 1 1 1 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 1 1 1 1 1 0 1 0 1 1 0 1 0
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/tests/JIT/HardwareIntrinsics/Arm/AdvSimd/AddPairwiseWideningAndAddScalar.Vector64.Int32.cs
// Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. /****************************************************************************** * This file is auto-generated from a template file by the GenerateTests.csx * * script in tests\src\JIT\HardwareIntrinsics.Arm\Shared. In order to make * * changes, please update the corresponding template and run according to the * * directions listed in the file. * ******************************************************************************/ using System; using System.Runtime.CompilerServices; using System.Runtime.InteropServices; using System.Runtime.Intrinsics; using System.Runtime.Intrinsics.Arm; namespace JIT.HardwareIntrinsics.Arm { public static partial class Program { private static void AddPairwiseWideningAndAddScalar_Vector64_Int32() { var test = new SimpleBinaryOpTest__AddPairwiseWideningAndAddScalar_Vector64_Int32(); if (test.IsSupported) { // Validates basic functionality works, using Unsafe.Read test.RunBasicScenario_UnsafeRead(); if (AdvSimd.IsSupported) { // Validates basic functionality works, using Load test.RunBasicScenario_Load(); } // Validates calling via reflection works, using Unsafe.Read test.RunReflectionScenario_UnsafeRead(); if (AdvSimd.IsSupported) { // Validates calling via reflection works, using Load test.RunReflectionScenario_Load(); } // Validates passing a static member works test.RunClsVarScenario(); if (AdvSimd.IsSupported) { // Validates passing a static member works, using pinning and Load test.RunClsVarScenario_Load(); } // Validates passing a local works, using Unsafe.Read test.RunLclVarScenario_UnsafeRead(); if (AdvSimd.IsSupported) { // Validates passing a local works, using Load test.RunLclVarScenario_Load(); } // Validates passing the field of a local class works test.RunClassLclFldScenario(); if (AdvSimd.IsSupported) { // Validates passing the field of a local class works, using pinning and Load test.RunClassLclFldScenario_Load(); } // Validates passing an instance member of a class works test.RunClassFldScenario(); if (AdvSimd.IsSupported) { // Validates passing an instance member of a class works, using pinning and Load test.RunClassFldScenario_Load(); } // Validates passing the field of a local struct works test.RunStructLclFldScenario(); if (AdvSimd.IsSupported) { // Validates passing the field of a local struct works, using pinning and Load test.RunStructLclFldScenario_Load(); } // Validates passing an instance member of a struct works test.RunStructFldScenario(); if (AdvSimd.IsSupported) { // Validates passing an instance member of a struct works, using pinning and Load test.RunStructFldScenario_Load(); } } else { // Validates we throw on unsupported hardware test.RunUnsupportedScenario(); } if (!test.Succeeded) { throw new Exception("One or more scenarios did not complete as expected."); } } } public sealed unsafe class SimpleBinaryOpTest__AddPairwiseWideningAndAddScalar_Vector64_Int32 { private struct DataTable { private byte[] inArray1; private byte[] inArray2; private byte[] outArray; private GCHandle inHandle1; private GCHandle inHandle2; private GCHandle outHandle; private ulong alignment; public DataTable(Int64[] inArray1, Int32[] inArray2, Int64[] outArray, int alignment) { int sizeOfinArray1 = inArray1.Length * Unsafe.SizeOf<Int64>(); int sizeOfinArray2 = inArray2.Length * Unsafe.SizeOf<Int32>(); int sizeOfoutArray = outArray.Length * Unsafe.SizeOf<Int64>(); if ((alignment != 16 && alignment != 8) || (alignment * 2) < sizeOfinArray1 || (alignment * 2) < sizeOfinArray2 || (alignment * 2) < sizeOfoutArray) { throw new ArgumentException("Invalid value of alignment"); } this.inArray1 = new byte[alignment * 2]; this.inArray2 = new byte[alignment * 2]; this.outArray = new byte[alignment * 2]; this.inHandle1 = GCHandle.Alloc(this.inArray1, GCHandleType.Pinned); this.inHandle2 = GCHandle.Alloc(this.inArray2, GCHandleType.Pinned); this.outHandle = GCHandle.Alloc(this.outArray, GCHandleType.Pinned); this.alignment = (ulong)alignment; Unsafe.CopyBlockUnaligned(ref Unsafe.AsRef<byte>(inArray1Ptr), ref Unsafe.As<Int64, byte>(ref inArray1[0]), (uint)sizeOfinArray1); Unsafe.CopyBlockUnaligned(ref Unsafe.AsRef<byte>(inArray2Ptr), ref Unsafe.As<Int32, byte>(ref inArray2[0]), (uint)sizeOfinArray2); } public void* inArray1Ptr => Align((byte*)(inHandle1.AddrOfPinnedObject().ToPointer()), alignment); public void* inArray2Ptr => Align((byte*)(inHandle2.AddrOfPinnedObject().ToPointer()), alignment); public void* outArrayPtr => Align((byte*)(outHandle.AddrOfPinnedObject().ToPointer()), alignment); public void Dispose() { inHandle1.Free(); inHandle2.Free(); outHandle.Free(); } private static unsafe void* Align(byte* buffer, ulong expectedAlignment) { return (void*)(((ulong)buffer + expectedAlignment - 1) & ~(expectedAlignment - 1)); } } private struct TestStruct { public Vector64<Int64> _fld1; public Vector64<Int32> _fld2; public static TestStruct Create() { var testStruct = new TestStruct(); for (var i = 0; i < Op1ElementCount; i++) { _data1[i] = TestLibrary.Generator.GetInt64(); } Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector64<Int64>, byte>(ref testStruct._fld1), ref Unsafe.As<Int64, byte>(ref _data1[0]), (uint)Unsafe.SizeOf<Vector64<Int64>>()); for (var i = 0; i < Op2ElementCount; i++) { _data2[i] = TestLibrary.Generator.GetInt32(); } Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector64<Int32>, byte>(ref testStruct._fld2), ref Unsafe.As<Int32, byte>(ref _data2[0]), (uint)Unsafe.SizeOf<Vector64<Int32>>()); return testStruct; } public void RunStructFldScenario(SimpleBinaryOpTest__AddPairwiseWideningAndAddScalar_Vector64_Int32 testClass) { var result = AdvSimd.AddPairwiseWideningAndAddScalar(_fld1, _fld2); Unsafe.Write(testClass._dataTable.outArrayPtr, result); testClass.ValidateResult(_fld1, _fld2, testClass._dataTable.outArrayPtr); } public void RunStructFldScenario_Load(SimpleBinaryOpTest__AddPairwiseWideningAndAddScalar_Vector64_Int32 testClass) { fixed (Vector64<Int64>* pFld1 = &_fld1) fixed (Vector64<Int32>* pFld2 = &_fld2) { var result = AdvSimd.AddPairwiseWideningAndAddScalar( AdvSimd.LoadVector64((Int64*)(pFld1)), AdvSimd.LoadVector64((Int32*)(pFld2)) ); Unsafe.Write(testClass._dataTable.outArrayPtr, result); testClass.ValidateResult(_fld1, _fld2, testClass._dataTable.outArrayPtr); } } } private static readonly int LargestVectorSize = 8; private static readonly int Op1ElementCount = Unsafe.SizeOf<Vector64<Int64>>() / sizeof(Int64); private static readonly int Op2ElementCount = Unsafe.SizeOf<Vector64<Int32>>() / sizeof(Int32); private static readonly int RetElementCount = Unsafe.SizeOf<Vector64<Int64>>() / sizeof(Int64); private static Int64[] _data1 = new Int64[Op1ElementCount]; private static Int32[] _data2 = new Int32[Op2ElementCount]; private static Vector64<Int64> _clsVar1; private static Vector64<Int32> _clsVar2; private Vector64<Int64> _fld1; private Vector64<Int32> _fld2; private DataTable _dataTable; static SimpleBinaryOpTest__AddPairwiseWideningAndAddScalar_Vector64_Int32() { for (var i = 0; i < Op1ElementCount; i++) { _data1[i] = TestLibrary.Generator.GetInt64(); } Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector64<Int64>, byte>(ref _clsVar1), ref Unsafe.As<Int64, byte>(ref _data1[0]), (uint)Unsafe.SizeOf<Vector64<Int64>>()); for (var i = 0; i < Op2ElementCount; i++) { _data2[i] = TestLibrary.Generator.GetInt32(); } Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector64<Int32>, byte>(ref _clsVar2), ref Unsafe.As<Int32, byte>(ref _data2[0]), (uint)Unsafe.SizeOf<Vector64<Int32>>()); } public SimpleBinaryOpTest__AddPairwiseWideningAndAddScalar_Vector64_Int32() { Succeeded = true; for (var i = 0; i < Op1ElementCount; i++) { _data1[i] = TestLibrary.Generator.GetInt64(); } Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector64<Int64>, byte>(ref _fld1), ref Unsafe.As<Int64, byte>(ref _data1[0]), (uint)Unsafe.SizeOf<Vector64<Int64>>()); for (var i = 0; i < Op2ElementCount; i++) { _data2[i] = TestLibrary.Generator.GetInt32(); } Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector64<Int32>, byte>(ref _fld2), ref Unsafe.As<Int32, byte>(ref _data2[0]), (uint)Unsafe.SizeOf<Vector64<Int32>>()); for (var i = 0; i < Op1ElementCount; i++) { _data1[i] = TestLibrary.Generator.GetInt64(); } for (var i = 0; i < Op2ElementCount; i++) { _data2[i] = TestLibrary.Generator.GetInt32(); } _dataTable = new DataTable(_data1, _data2, new Int64[RetElementCount], LargestVectorSize); } public bool IsSupported => AdvSimd.IsSupported; public bool Succeeded { get; set; } public void RunBasicScenario_UnsafeRead() { TestLibrary.TestFramework.BeginScenario(nameof(RunBasicScenario_UnsafeRead)); var result = AdvSimd.AddPairwiseWideningAndAddScalar( Unsafe.Read<Vector64<Int64>>(_dataTable.inArray1Ptr), Unsafe.Read<Vector64<Int32>>(_dataTable.inArray2Ptr) ); Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(_dataTable.inArray1Ptr, _dataTable.inArray2Ptr, _dataTable.outArrayPtr); } public void RunBasicScenario_Load() { TestLibrary.TestFramework.BeginScenario(nameof(RunBasicScenario_Load)); var result = AdvSimd.AddPairwiseWideningAndAddScalar( AdvSimd.LoadVector64((Int64*)(_dataTable.inArray1Ptr)), AdvSimd.LoadVector64((Int32*)(_dataTable.inArray2Ptr)) ); Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(_dataTable.inArray1Ptr, _dataTable.inArray2Ptr, _dataTable.outArrayPtr); } public void RunReflectionScenario_UnsafeRead() { TestLibrary.TestFramework.BeginScenario(nameof(RunReflectionScenario_UnsafeRead)); var result = typeof(AdvSimd).GetMethod(nameof(AdvSimd.AddPairwiseWideningAndAddScalar), new Type[] { typeof(Vector64<Int64>), typeof(Vector64<Int32>) }) .Invoke(null, new object[] { Unsafe.Read<Vector64<Int64>>(_dataTable.inArray1Ptr), Unsafe.Read<Vector64<Int32>>(_dataTable.inArray2Ptr) }); Unsafe.Write(_dataTable.outArrayPtr, (Vector64<Int64>)(result)); ValidateResult(_dataTable.inArray1Ptr, _dataTable.inArray2Ptr, _dataTable.outArrayPtr); } public void RunReflectionScenario_Load() { TestLibrary.TestFramework.BeginScenario(nameof(RunReflectionScenario_Load)); var result = typeof(AdvSimd).GetMethod(nameof(AdvSimd.AddPairwiseWideningAndAddScalar), new Type[] { typeof(Vector64<Int64>), typeof(Vector64<Int32>) }) .Invoke(null, new object[] { AdvSimd.LoadVector64((Int64*)(_dataTable.inArray1Ptr)), AdvSimd.LoadVector64((Int32*)(_dataTable.inArray2Ptr)) }); Unsafe.Write(_dataTable.outArrayPtr, (Vector64<Int64>)(result)); ValidateResult(_dataTable.inArray1Ptr, _dataTable.inArray2Ptr, _dataTable.outArrayPtr); } public void RunClsVarScenario() { TestLibrary.TestFramework.BeginScenario(nameof(RunClsVarScenario)); var result = AdvSimd.AddPairwiseWideningAndAddScalar( _clsVar1, _clsVar2 ); Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(_clsVar1, _clsVar2, _dataTable.outArrayPtr); } public void RunClsVarScenario_Load() { TestLibrary.TestFramework.BeginScenario(nameof(RunClsVarScenario_Load)); fixed (Vector64<Int64>* pClsVar1 = &_clsVar1) fixed (Vector64<Int32>* pClsVar2 = &_clsVar2) { var result = AdvSimd.AddPairwiseWideningAndAddScalar( AdvSimd.LoadVector64((Int64*)(pClsVar1)), AdvSimd.LoadVector64((Int32*)(pClsVar2)) ); Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(_clsVar1, _clsVar2, _dataTable.outArrayPtr); } } public void RunLclVarScenario_UnsafeRead() { TestLibrary.TestFramework.BeginScenario(nameof(RunLclVarScenario_UnsafeRead)); var op1 = Unsafe.Read<Vector64<Int64>>(_dataTable.inArray1Ptr); var op2 = Unsafe.Read<Vector64<Int32>>(_dataTable.inArray2Ptr); var result = AdvSimd.AddPairwiseWideningAndAddScalar(op1, op2); Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(op1, op2, _dataTable.outArrayPtr); } public void RunLclVarScenario_Load() { TestLibrary.TestFramework.BeginScenario(nameof(RunLclVarScenario_Load)); var op1 = AdvSimd.LoadVector64((Int64*)(_dataTable.inArray1Ptr)); var op2 = AdvSimd.LoadVector64((Int32*)(_dataTable.inArray2Ptr)); var result = AdvSimd.AddPairwiseWideningAndAddScalar(op1, op2); Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(op1, op2, _dataTable.outArrayPtr); } public void RunClassLclFldScenario() { TestLibrary.TestFramework.BeginScenario(nameof(RunClassLclFldScenario)); var test = new SimpleBinaryOpTest__AddPairwiseWideningAndAddScalar_Vector64_Int32(); var result = AdvSimd.AddPairwiseWideningAndAddScalar(test._fld1, test._fld2); Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(test._fld1, test._fld2, _dataTable.outArrayPtr); } public void RunClassLclFldScenario_Load() { TestLibrary.TestFramework.BeginScenario(nameof(RunClassLclFldScenario_Load)); var test = new SimpleBinaryOpTest__AddPairwiseWideningAndAddScalar_Vector64_Int32(); fixed (Vector64<Int64>* pFld1 = &test._fld1) fixed (Vector64<Int32>* pFld2 = &test._fld2) { var result = AdvSimd.AddPairwiseWideningAndAddScalar( AdvSimd.LoadVector64((Int64*)(pFld1)), AdvSimd.LoadVector64((Int32*)(pFld2)) ); Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(test._fld1, test._fld2, _dataTable.outArrayPtr); } } public void RunClassFldScenario() { TestLibrary.TestFramework.BeginScenario(nameof(RunClassFldScenario)); var result = AdvSimd.AddPairwiseWideningAndAddScalar(_fld1, _fld2); Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(_fld1, _fld2, _dataTable.outArrayPtr); } public void RunClassFldScenario_Load() { TestLibrary.TestFramework.BeginScenario(nameof(RunClassFldScenario_Load)); fixed (Vector64<Int64>* pFld1 = &_fld1) fixed (Vector64<Int32>* pFld2 = &_fld2) { var result = AdvSimd.AddPairwiseWideningAndAddScalar( AdvSimd.LoadVector64((Int64*)(pFld1)), AdvSimd.LoadVector64((Int32*)(pFld2)) ); Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(_fld1, _fld2, _dataTable.outArrayPtr); } } public void RunStructLclFldScenario() { TestLibrary.TestFramework.BeginScenario(nameof(RunStructLclFldScenario)); var test = TestStruct.Create(); var result = AdvSimd.AddPairwiseWideningAndAddScalar(test._fld1, test._fld2); Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(test._fld1, test._fld2, _dataTable.outArrayPtr); } public void RunStructLclFldScenario_Load() { TestLibrary.TestFramework.BeginScenario(nameof(RunStructLclFldScenario_Load)); var test = TestStruct.Create(); var result = AdvSimd.AddPairwiseWideningAndAddScalar( AdvSimd.LoadVector64((Int64*)(&test._fld1)), AdvSimd.LoadVector64((Int32*)(&test._fld2)) ); Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(test._fld1, test._fld2, _dataTable.outArrayPtr); } public void RunStructFldScenario() { TestLibrary.TestFramework.BeginScenario(nameof(RunStructFldScenario)); var test = TestStruct.Create(); test.RunStructFldScenario(this); } public void RunStructFldScenario_Load() { TestLibrary.TestFramework.BeginScenario(nameof(RunStructFldScenario_Load)); var test = TestStruct.Create(); test.RunStructFldScenario_Load(this); } public void RunUnsupportedScenario() { TestLibrary.TestFramework.BeginScenario(nameof(RunUnsupportedScenario)); bool succeeded = false; try { RunBasicScenario_UnsafeRead(); } catch (PlatformNotSupportedException) { succeeded = true; } if (!succeeded) { Succeeded = false; } } private void ValidateResult(Vector64<Int64> op1, Vector64<Int32> op2, void* result, [CallerMemberName] string method = "") { Int64[] inArray1 = new Int64[Op1ElementCount]; Int32[] inArray2 = new Int32[Op2ElementCount]; Int64[] outArray = new Int64[RetElementCount]; Unsafe.WriteUnaligned(ref Unsafe.As<Int64, byte>(ref inArray1[0]), op1); Unsafe.WriteUnaligned(ref Unsafe.As<Int32, byte>(ref inArray2[0]), op2); Unsafe.CopyBlockUnaligned(ref Unsafe.As<Int64, byte>(ref outArray[0]), ref Unsafe.AsRef<byte>(result), (uint)Unsafe.SizeOf<Vector64<Int64>>()); ValidateResult(inArray1, inArray2, outArray, method); } private void ValidateResult(void* op1, void* op2, void* result, [CallerMemberName] string method = "") { Int64[] inArray1 = new Int64[Op1ElementCount]; Int32[] inArray2 = new Int32[Op2ElementCount]; Int64[] outArray = new Int64[RetElementCount]; Unsafe.CopyBlockUnaligned(ref Unsafe.As<Int64, byte>(ref inArray1[0]), ref Unsafe.AsRef<byte>(op1), (uint)Unsafe.SizeOf<Vector64<Int64>>()); Unsafe.CopyBlockUnaligned(ref Unsafe.As<Int32, byte>(ref inArray2[0]), ref Unsafe.AsRef<byte>(op2), (uint)Unsafe.SizeOf<Vector64<Int32>>()); Unsafe.CopyBlockUnaligned(ref Unsafe.As<Int64, byte>(ref outArray[0]), ref Unsafe.AsRef<byte>(result), (uint)Unsafe.SizeOf<Vector64<Int64>>()); ValidateResult(inArray1, inArray2, outArray, method); } private void ValidateResult(Int64[] left, Int32[] right, Int64[] result, [CallerMemberName] string method = "") { bool succeeded = true; if (Helpers.AddPairwiseWideningAndAdd(left, right, 0) != result[0]) { succeeded = false; } else { for (var i = 1; i < RetElementCount; i++) { if (result[i] != 0) { succeeded = false; break; } } } if (!succeeded) { TestLibrary.TestFramework.LogInformation($"{nameof(AdvSimd)}.{nameof(AdvSimd.AddPairwiseWideningAndAddScalar)}<Int64>(Vector64<Int64>, Vector64<Int32>): {method} failed:"); TestLibrary.TestFramework.LogInformation($" left: ({string.Join(", ", left)})"); TestLibrary.TestFramework.LogInformation($" right: ({string.Join(", ", right)})"); TestLibrary.TestFramework.LogInformation($" result: ({string.Join(", ", result)})"); TestLibrary.TestFramework.LogInformation(string.Empty); Succeeded = false; } } } }
// Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. /****************************************************************************** * This file is auto-generated from a template file by the GenerateTests.csx * * script in tests\src\JIT\HardwareIntrinsics.Arm\Shared. In order to make * * changes, please update the corresponding template and run according to the * * directions listed in the file. * ******************************************************************************/ using System; using System.Runtime.CompilerServices; using System.Runtime.InteropServices; using System.Runtime.Intrinsics; using System.Runtime.Intrinsics.Arm; namespace JIT.HardwareIntrinsics.Arm { public static partial class Program { private static void AddPairwiseWideningAndAddScalar_Vector64_Int32() { var test = new SimpleBinaryOpTest__AddPairwiseWideningAndAddScalar_Vector64_Int32(); if (test.IsSupported) { // Validates basic functionality works, using Unsafe.Read test.RunBasicScenario_UnsafeRead(); if (AdvSimd.IsSupported) { // Validates basic functionality works, using Load test.RunBasicScenario_Load(); } // Validates calling via reflection works, using Unsafe.Read test.RunReflectionScenario_UnsafeRead(); if (AdvSimd.IsSupported) { // Validates calling via reflection works, using Load test.RunReflectionScenario_Load(); } // Validates passing a static member works test.RunClsVarScenario(); if (AdvSimd.IsSupported) { // Validates passing a static member works, using pinning and Load test.RunClsVarScenario_Load(); } // Validates passing a local works, using Unsafe.Read test.RunLclVarScenario_UnsafeRead(); if (AdvSimd.IsSupported) { // Validates passing a local works, using Load test.RunLclVarScenario_Load(); } // Validates passing the field of a local class works test.RunClassLclFldScenario(); if (AdvSimd.IsSupported) { // Validates passing the field of a local class works, using pinning and Load test.RunClassLclFldScenario_Load(); } // Validates passing an instance member of a class works test.RunClassFldScenario(); if (AdvSimd.IsSupported) { // Validates passing an instance member of a class works, using pinning and Load test.RunClassFldScenario_Load(); } // Validates passing the field of a local struct works test.RunStructLclFldScenario(); if (AdvSimd.IsSupported) { // Validates passing the field of a local struct works, using pinning and Load test.RunStructLclFldScenario_Load(); } // Validates passing an instance member of a struct works test.RunStructFldScenario(); if (AdvSimd.IsSupported) { // Validates passing an instance member of a struct works, using pinning and Load test.RunStructFldScenario_Load(); } } else { // Validates we throw on unsupported hardware test.RunUnsupportedScenario(); } if (!test.Succeeded) { throw new Exception("One or more scenarios did not complete as expected."); } } } public sealed unsafe class SimpleBinaryOpTest__AddPairwiseWideningAndAddScalar_Vector64_Int32 { private struct DataTable { private byte[] inArray1; private byte[] inArray2; private byte[] outArray; private GCHandle inHandle1; private GCHandle inHandle2; private GCHandle outHandle; private ulong alignment; public DataTable(Int64[] inArray1, Int32[] inArray2, Int64[] outArray, int alignment) { int sizeOfinArray1 = inArray1.Length * Unsafe.SizeOf<Int64>(); int sizeOfinArray2 = inArray2.Length * Unsafe.SizeOf<Int32>(); int sizeOfoutArray = outArray.Length * Unsafe.SizeOf<Int64>(); if ((alignment != 16 && alignment != 8) || (alignment * 
2) < sizeOfinArray1 || (alignment * 2) < sizeOfinArray2 || (alignment * 2) < sizeOfoutArray) { throw new ArgumentException("Invalid value of alignment"); } this.inArray1 = new byte[alignment * 2]; this.inArray2 = new byte[alignment * 2]; this.outArray = new byte[alignment * 2]; this.inHandle1 = GCHandle.Alloc(this.inArray1, GCHandleType.Pinned); this.inHandle2 = GCHandle.Alloc(this.inArray2, GCHandleType.Pinned); this.outHandle = GCHandle.Alloc(this.outArray, GCHandleType.Pinned); this.alignment = (ulong)alignment; Unsafe.CopyBlockUnaligned(ref Unsafe.AsRef<byte>(inArray1Ptr), ref Unsafe.As<Int64, byte>(ref inArray1[0]), (uint)sizeOfinArray1); Unsafe.CopyBlockUnaligned(ref Unsafe.AsRef<byte>(inArray2Ptr), ref Unsafe.As<Int32, byte>(ref inArray2[0]), (uint)sizeOfinArray2); } public void* inArray1Ptr => Align((byte*)(inHandle1.AddrOfPinnedObject().ToPointer()), alignment); public void* inArray2Ptr => Align((byte*)(inHandle2.AddrOfPinnedObject().ToPointer()), alignment); public void* outArrayPtr => Align((byte*)(outHandle.AddrOfPinnedObject().ToPointer()), alignment); public void Dispose() { inHandle1.Free(); inHandle2.Free(); outHandle.Free(); } private static unsafe void* Align(byte* buffer, ulong expectedAlignment) { return (void*)(((ulong)buffer + expectedAlignment - 1) & ~(expectedAlignment - 1)); } } private struct TestStruct { public Vector64<Int64> _fld1; public Vector64<Int32> _fld2; public static TestStruct Create() { var testStruct = new TestStruct(); for (var i = 0; i < Op1ElementCount; i++) { _data1[i] = TestLibrary.Generator.GetInt64(); } Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector64<Int64>, byte>(ref testStruct._fld1), ref Unsafe.As<Int64, byte>(ref _data1[0]), (uint)Unsafe.SizeOf<Vector64<Int64>>()); for (var i = 0; i < Op2ElementCount; i++) { _data2[i] = TestLibrary.Generator.GetInt32(); } Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector64<Int32>, byte>(ref testStruct._fld2), ref Unsafe.As<Int32, byte>(ref _data2[0]), (uint)Unsafe.SizeOf<Vector64<Int32>>()); return testStruct; } public void RunStructFldScenario(SimpleBinaryOpTest__AddPairwiseWideningAndAddScalar_Vector64_Int32 testClass) { var result = AdvSimd.AddPairwiseWideningAndAddScalar(_fld1, _fld2); Unsafe.Write(testClass._dataTable.outArrayPtr, result); testClass.ValidateResult(_fld1, _fld2, testClass._dataTable.outArrayPtr); } public void RunStructFldScenario_Load(SimpleBinaryOpTest__AddPairwiseWideningAndAddScalar_Vector64_Int32 testClass) { fixed (Vector64<Int64>* pFld1 = &_fld1) fixed (Vector64<Int32>* pFld2 = &_fld2) { var result = AdvSimd.AddPairwiseWideningAndAddScalar( AdvSimd.LoadVector64((Int64*)(pFld1)), AdvSimd.LoadVector64((Int32*)(pFld2)) ); Unsafe.Write(testClass._dataTable.outArrayPtr, result); testClass.ValidateResult(_fld1, _fld2, testClass._dataTable.outArrayPtr); } } } private static readonly int LargestVectorSize = 8; private static readonly int Op1ElementCount = Unsafe.SizeOf<Vector64<Int64>>() / sizeof(Int64); private static readonly int Op2ElementCount = Unsafe.SizeOf<Vector64<Int32>>() / sizeof(Int32); private static readonly int RetElementCount = Unsafe.SizeOf<Vector64<Int64>>() / sizeof(Int64); private static Int64[] _data1 = new Int64[Op1ElementCount]; private static Int32[] _data2 = new Int32[Op2ElementCount]; private static Vector64<Int64> _clsVar1; private static Vector64<Int32> _clsVar2; private Vector64<Int64> _fld1; private Vector64<Int32> _fld2; private DataTable _dataTable; static SimpleBinaryOpTest__AddPairwiseWideningAndAddScalar_Vector64_Int32() { for (var i = 0; i < 
Op1ElementCount; i++) { _data1[i] = TestLibrary.Generator.GetInt64(); } Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector64<Int64>, byte>(ref _clsVar1), ref Unsafe.As<Int64, byte>(ref _data1[0]), (uint)Unsafe.SizeOf<Vector64<Int64>>()); for (var i = 0; i < Op2ElementCount; i++) { _data2[i] = TestLibrary.Generator.GetInt32(); } Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector64<Int32>, byte>(ref _clsVar2), ref Unsafe.As<Int32, byte>(ref _data2[0]), (uint)Unsafe.SizeOf<Vector64<Int32>>()); } public SimpleBinaryOpTest__AddPairwiseWideningAndAddScalar_Vector64_Int32() { Succeeded = true; for (var i = 0; i < Op1ElementCount; i++) { _data1[i] = TestLibrary.Generator.GetInt64(); } Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector64<Int64>, byte>(ref _fld1), ref Unsafe.As<Int64, byte>(ref _data1[0]), (uint)Unsafe.SizeOf<Vector64<Int64>>()); for (var i = 0; i < Op2ElementCount; i++) { _data2[i] = TestLibrary.Generator.GetInt32(); } Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector64<Int32>, byte>(ref _fld2), ref Unsafe.As<Int32, byte>(ref _data2[0]), (uint)Unsafe.SizeOf<Vector64<Int32>>()); for (var i = 0; i < Op1ElementCount; i++) { _data1[i] = TestLibrary.Generator.GetInt64(); } for (var i = 0; i < Op2ElementCount; i++) { _data2[i] = TestLibrary.Generator.GetInt32(); } _dataTable = new DataTable(_data1, _data2, new Int64[RetElementCount], LargestVectorSize); } public bool IsSupported => AdvSimd.IsSupported; public bool Succeeded { get; set; } public void RunBasicScenario_UnsafeRead() { TestLibrary.TestFramework.BeginScenario(nameof(RunBasicScenario_UnsafeRead)); var result = AdvSimd.AddPairwiseWideningAndAddScalar( Unsafe.Read<Vector64<Int64>>(_dataTable.inArray1Ptr), Unsafe.Read<Vector64<Int32>>(_dataTable.inArray2Ptr) ); Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(_dataTable.inArray1Ptr, _dataTable.inArray2Ptr, _dataTable.outArrayPtr); } public void RunBasicScenario_Load() { TestLibrary.TestFramework.BeginScenario(nameof(RunBasicScenario_Load)); var result = AdvSimd.AddPairwiseWideningAndAddScalar( AdvSimd.LoadVector64((Int64*)(_dataTable.inArray1Ptr)), AdvSimd.LoadVector64((Int32*)(_dataTable.inArray2Ptr)) ); Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(_dataTable.inArray1Ptr, _dataTable.inArray2Ptr, _dataTable.outArrayPtr); } public void RunReflectionScenario_UnsafeRead() { TestLibrary.TestFramework.BeginScenario(nameof(RunReflectionScenario_UnsafeRead)); var result = typeof(AdvSimd).GetMethod(nameof(AdvSimd.AddPairwiseWideningAndAddScalar), new Type[] { typeof(Vector64<Int64>), typeof(Vector64<Int32>) }) .Invoke(null, new object[] { Unsafe.Read<Vector64<Int64>>(_dataTable.inArray1Ptr), Unsafe.Read<Vector64<Int32>>(_dataTable.inArray2Ptr) }); Unsafe.Write(_dataTable.outArrayPtr, (Vector64<Int64>)(result)); ValidateResult(_dataTable.inArray1Ptr, _dataTable.inArray2Ptr, _dataTable.outArrayPtr); } public void RunReflectionScenario_Load() { TestLibrary.TestFramework.BeginScenario(nameof(RunReflectionScenario_Load)); var result = typeof(AdvSimd).GetMethod(nameof(AdvSimd.AddPairwiseWideningAndAddScalar), new Type[] { typeof(Vector64<Int64>), typeof(Vector64<Int32>) }) .Invoke(null, new object[] { AdvSimd.LoadVector64((Int64*)(_dataTable.inArray1Ptr)), AdvSimd.LoadVector64((Int32*)(_dataTable.inArray2Ptr)) }); Unsafe.Write(_dataTable.outArrayPtr, (Vector64<Int64>)(result)); ValidateResult(_dataTable.inArray1Ptr, _dataTable.inArray2Ptr, _dataTable.outArrayPtr); } public void RunClsVarScenario() { TestLibrary.TestFramework.BeginScenario(nameof(RunClsVarScenario)); var result = 
AdvSimd.AddPairwiseWideningAndAddScalar( _clsVar1, _clsVar2 ); Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(_clsVar1, _clsVar2, _dataTable.outArrayPtr); } public void RunClsVarScenario_Load() { TestLibrary.TestFramework.BeginScenario(nameof(RunClsVarScenario_Load)); fixed (Vector64<Int64>* pClsVar1 = &_clsVar1) fixed (Vector64<Int32>* pClsVar2 = &_clsVar2) { var result = AdvSimd.AddPairwiseWideningAndAddScalar( AdvSimd.LoadVector64((Int64*)(pClsVar1)), AdvSimd.LoadVector64((Int32*)(pClsVar2)) ); Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(_clsVar1, _clsVar2, _dataTable.outArrayPtr); } } public void RunLclVarScenario_UnsafeRead() { TestLibrary.TestFramework.BeginScenario(nameof(RunLclVarScenario_UnsafeRead)); var op1 = Unsafe.Read<Vector64<Int64>>(_dataTable.inArray1Ptr); var op2 = Unsafe.Read<Vector64<Int32>>(_dataTable.inArray2Ptr); var result = AdvSimd.AddPairwiseWideningAndAddScalar(op1, op2); Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(op1, op2, _dataTable.outArrayPtr); } public void RunLclVarScenario_Load() { TestLibrary.TestFramework.BeginScenario(nameof(RunLclVarScenario_Load)); var op1 = AdvSimd.LoadVector64((Int64*)(_dataTable.inArray1Ptr)); var op2 = AdvSimd.LoadVector64((Int32*)(_dataTable.inArray2Ptr)); var result = AdvSimd.AddPairwiseWideningAndAddScalar(op1, op2); Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(op1, op2, _dataTable.outArrayPtr); } public void RunClassLclFldScenario() { TestLibrary.TestFramework.BeginScenario(nameof(RunClassLclFldScenario)); var test = new SimpleBinaryOpTest__AddPairwiseWideningAndAddScalar_Vector64_Int32(); var result = AdvSimd.AddPairwiseWideningAndAddScalar(test._fld1, test._fld2); Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(test._fld1, test._fld2, _dataTable.outArrayPtr); } public void RunClassLclFldScenario_Load() { TestLibrary.TestFramework.BeginScenario(nameof(RunClassLclFldScenario_Load)); var test = new SimpleBinaryOpTest__AddPairwiseWideningAndAddScalar_Vector64_Int32(); fixed (Vector64<Int64>* pFld1 = &test._fld1) fixed (Vector64<Int32>* pFld2 = &test._fld2) { var result = AdvSimd.AddPairwiseWideningAndAddScalar( AdvSimd.LoadVector64((Int64*)(pFld1)), AdvSimd.LoadVector64((Int32*)(pFld2)) ); Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(test._fld1, test._fld2, _dataTable.outArrayPtr); } } public void RunClassFldScenario() { TestLibrary.TestFramework.BeginScenario(nameof(RunClassFldScenario)); var result = AdvSimd.AddPairwiseWideningAndAddScalar(_fld1, _fld2); Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(_fld1, _fld2, _dataTable.outArrayPtr); } public void RunClassFldScenario_Load() { TestLibrary.TestFramework.BeginScenario(nameof(RunClassFldScenario_Load)); fixed (Vector64<Int64>* pFld1 = &_fld1) fixed (Vector64<Int32>* pFld2 = &_fld2) { var result = AdvSimd.AddPairwiseWideningAndAddScalar( AdvSimd.LoadVector64((Int64*)(pFld1)), AdvSimd.LoadVector64((Int32*)(pFld2)) ); Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(_fld1, _fld2, _dataTable.outArrayPtr); } } public void RunStructLclFldScenario() { TestLibrary.TestFramework.BeginScenario(nameof(RunStructLclFldScenario)); var test = TestStruct.Create(); var result = AdvSimd.AddPairwiseWideningAndAddScalar(test._fld1, test._fld2); Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(test._fld1, test._fld2, _dataTable.outArrayPtr); } public void RunStructLclFldScenario_Load() { TestLibrary.TestFramework.BeginScenario(nameof(RunStructLclFldScenario_Load)); 
var test = TestStruct.Create(); var result = AdvSimd.AddPairwiseWideningAndAddScalar( AdvSimd.LoadVector64((Int64*)(&test._fld1)), AdvSimd.LoadVector64((Int32*)(&test._fld2)) ); Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(test._fld1, test._fld2, _dataTable.outArrayPtr); } public void RunStructFldScenario() { TestLibrary.TestFramework.BeginScenario(nameof(RunStructFldScenario)); var test = TestStruct.Create(); test.RunStructFldScenario(this); } public void RunStructFldScenario_Load() { TestLibrary.TestFramework.BeginScenario(nameof(RunStructFldScenario_Load)); var test = TestStruct.Create(); test.RunStructFldScenario_Load(this); } public void RunUnsupportedScenario() { TestLibrary.TestFramework.BeginScenario(nameof(RunUnsupportedScenario)); bool succeeded = false; try { RunBasicScenario_UnsafeRead(); } catch (PlatformNotSupportedException) { succeeded = true; } if (!succeeded) { Succeeded = false; } } private void ValidateResult(Vector64<Int64> op1, Vector64<Int32> op2, void* result, [CallerMemberName] string method = "") { Int64[] inArray1 = new Int64[Op1ElementCount]; Int32[] inArray2 = new Int32[Op2ElementCount]; Int64[] outArray = new Int64[RetElementCount]; Unsafe.WriteUnaligned(ref Unsafe.As<Int64, byte>(ref inArray1[0]), op1); Unsafe.WriteUnaligned(ref Unsafe.As<Int32, byte>(ref inArray2[0]), op2); Unsafe.CopyBlockUnaligned(ref Unsafe.As<Int64, byte>(ref outArray[0]), ref Unsafe.AsRef<byte>(result), (uint)Unsafe.SizeOf<Vector64<Int64>>()); ValidateResult(inArray1, inArray2, outArray, method); } private void ValidateResult(void* op1, void* op2, void* result, [CallerMemberName] string method = "") { Int64[] inArray1 = new Int64[Op1ElementCount]; Int32[] inArray2 = new Int32[Op2ElementCount]; Int64[] outArray = new Int64[RetElementCount]; Unsafe.CopyBlockUnaligned(ref Unsafe.As<Int64, byte>(ref inArray1[0]), ref Unsafe.AsRef<byte>(op1), (uint)Unsafe.SizeOf<Vector64<Int64>>()); Unsafe.CopyBlockUnaligned(ref Unsafe.As<Int32, byte>(ref inArray2[0]), ref Unsafe.AsRef<byte>(op2), (uint)Unsafe.SizeOf<Vector64<Int32>>()); Unsafe.CopyBlockUnaligned(ref Unsafe.As<Int64, byte>(ref outArray[0]), ref Unsafe.AsRef<byte>(result), (uint)Unsafe.SizeOf<Vector64<Int64>>()); ValidateResult(inArray1, inArray2, outArray, method); } private void ValidateResult(Int64[] left, Int32[] right, Int64[] result, [CallerMemberName] string method = "") { bool succeeded = true; if (Helpers.AddPairwiseWideningAndAdd(left, right, 0) != result[0]) { succeeded = false; } else { for (var i = 1; i < RetElementCount; i++) { if (result[i] != 0) { succeeded = false; break; } } } if (!succeeded) { TestLibrary.TestFramework.LogInformation($"{nameof(AdvSimd)}.{nameof(AdvSimd.AddPairwiseWideningAndAddScalar)}<Int64>(Vector64<Int64>, Vector64<Int32>): {method} failed:"); TestLibrary.TestFramework.LogInformation($" left: ({string.Join(", ", left)})"); TestLibrary.TestFramework.LogInformation($" right: ({string.Join(", ", right)})"); TestLibrary.TestFramework.LogInformation($" result: ({string.Join(", ", result)})"); TestLibrary.TestFramework.LogInformation(string.Empty); Succeeded = false; } } } }
-1
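The row above exercises `AdvSimd.AddPairwiseWideningAndAddScalar(Vector64<Int64>, Vector64<Int32>)` and validates lane 0 against `Helpers.AddPairwiseWideningAndAdd(left, right, 0)`, whose body is not included in this row. As a hedged reference — assuming the intrinsic follows the AArch64 SADALP semantics its name suggests — here is a minimal, self-contained C# sketch of what that helper plausibly computes: each adjacent Int32 pair is widened to Int64 and summed, then added to the matching Int64 accumulator lane; in the scalar Vector64 form only lane 0 is meaningful and the remaining lanes must be zero, which is exactly what `ValidateResult` checks.

```csharp
// Hypothetical stand-in for TestLibrary's Helpers.AddPairwiseWideningAndAdd:
// a minimal sketch of the semantics the ValidateResult methods appear to check.
using System;

public static class ReferenceHelpers
{
    // left: Int64 accumulator lanes; right: Int32 source lanes;
    // i: output lane index (0 for the Vector64 scalar variant).
    public static long AddPairwiseWideningAndAdd(long[] left, int[] right, int i)
        => left[i] + ((long)right[2 * i] + (long)right[2 * i + 1]);

    public static void Main()
    {
        long[] acc = { 100 };
        int[] src = { int.MaxValue, 1 }; // widening to Int64 avoids Int32 overflow
        Console.WriteLine(AddPairwiseWideningAndAdd(acc, src, 0)); // 2147483748
    }
}
```

Separately, the test's `DataTable.Align` rounds the pinned buffer pointer up with the classic `(p + a - 1) & ~(a - 1)` mask, which is why each backing array is allocated at `alignment * 2` bytes — enough slack that an aligned start always exists inside the buffer.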
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/libraries/System.Private.Xml/src/System/Xml/Xsl/XsltOld/XsltDebugger.cs
// Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. namespace System.Xml.Xsl.XsltOld.Debugger { using System.Xml.XPath; internal interface IXsltProcessor { } internal interface IXsltDebugger { string GetBuiltInTemplatesUri(); void OnInstructionCompile(XPathNavigator styleSheetNavigator); void OnInstructionExecute(IXsltProcessor xsltProcessor); } }
// Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. namespace System.Xml.Xsl.XsltOld.Debugger { using System.Xml.XPath; internal interface IXsltProcessor { } internal interface IXsltDebugger { string GetBuiltInTemplatesUri(); void OnInstructionCompile(XPathNavigator styleSheetNavigator); void OnInstructionExecute(IXsltProcessor xsltProcessor); } }
-1
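The PR description in this row defines `IsGeneratorProject` purely by directory shape. The real detection is MSBuild property logic in the repo's shared .props files; purely as an illustration, here is a minimal C# sketch of the revised rule (the paths and names below are made up):

```csharp
// A minimal sketch, in C#, of the directory-shape rule described above.
// The actual implementation is MSBuild property logic, not this code.
using System;
using System.IO;

public static class ProjectClassifier
{
    // Parent directory is 'gen', or the parent's parent directory is 'gen'
    // (covers the gen/<GeneratorName>/<GeneratorName>.csproj layout).
    public static bool IsGeneratorProject(string projectPath)
    {
        string parent = Path.GetDirectoryName(projectPath) ?? string.Empty;
        string grandParent = Path.GetDirectoryName(parent) ?? string.Empty;
        return Path.GetFileName(parent) == "gen"
            || Path.GetFileName(grandParent) == "gen";
    }

    public static void Main()
    {
        // Hypothetical paths illustrating the two layouts described above.
        Console.WriteLine(IsGeneratorProject(
            "src/libraries/Foo/gen/Foo.Generator.csproj"));               // True
        Console.WriteLine(IsGeneratorProject(
            "src/libraries/Foo/gen/Foo.Generator/Foo.Generator.csproj")); // True
        Console.WriteLine(IsGeneratorProject(
            "src/libraries/Foo/src/Foo.csproj"));                         // False
    }
}
```

The second case is what the PR adds: generators that live in their own folder under `gen/` previously had to set `IsGeneratorProject` by hand.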
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/libraries/System.ComponentModel.TypeConverter/src/System/ComponentModel/Design/ServiceCreatorCallback.cs
// Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. namespace System.ComponentModel.Design { /// <summary> /// Declares a callback function to create an instance of a service on demand. /// </summary> public delegate object? ServiceCreatorCallback(IServiceContainer container, Type serviceType); }
// Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. namespace System.ComponentModel.Design { /// <summary> /// Declares a callback function to create an instance of a service on demand. /// </summary> public delegate object? ServiceCreatorCallback(IServiceContainer container, Type serviceType); }
-1
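`ServiceCreatorCallback`, shown in the row above, exists so a service can be constructed lazily, the first time it is requested from an `IServiceContainer`. A short usage sketch with the BCL's `ServiceContainer` (the `AddService(Type, ServiceCreatorCallback)` overload is the real System.ComponentModel.Design API; `IGreeter`/`Greeter` are hypothetical demo types):

```csharp
// Usage sketch for ServiceCreatorCallback with the BCL ServiceContainer.
using System;
using System.ComponentModel.Design;

public interface IGreeter { string Greet(string name); }

public sealed class Greeter : IGreeter
{
    public Greeter() => Console.WriteLine("Greeter constructed");
    public string Greet(string name) => $"Hello, {name}!";
}

public static class Demo
{
    public static void Main()
    {
        var container = new ServiceContainer();

        // The callback runs only when the service is first requested,
        // so construction is deferred until (and unless) it is needed.
        ServiceCreatorCallback callback = (c, serviceType) => new Greeter();
        container.AddService(typeof(IGreeter), callback);

        Console.WriteLine("Service registered, not yet constructed");
        var greeter = (IGreeter)container.GetService(typeof(IGreeter));
        Console.WriteLine(greeter.Greet("world")); // Hello, world!
    }
}
```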
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/libraries/System.Formats.Asn1/src/System/Formats/Asn1/Asn1Tag.cs
// Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. using System.Diagnostics; using System.Diagnostics.CodeAnalysis; namespace System.Formats.Asn1 { /// <summary> /// This type represents an ASN.1 tag, as described in ITU-T Recommendation X.680. /// </summary> // T-REC-X.690-201508 sec 8.1.2 public readonly partial struct Asn1Tag : IEquatable<Asn1Tag> { private const byte ClassMask = 0b1100_0000; private const byte ConstructedMask = 0b0010_0000; private const byte ControlMask = ClassMask | ConstructedMask; private const byte TagNumberMask = 0b0001_1111; private readonly byte _controlFlags; /// <summary> /// The tag class to which this tag belongs. /// </summary> public TagClass TagClass => (TagClass)(_controlFlags & ClassMask); /// <summary> /// Indicates if the tag represents a constructed encoding (<see langword="true"/>), or /// a primitive encoding (<see langword="false"/>). /// </summary> public bool IsConstructed => (_controlFlags & ConstructedMask) != 0; /// <summary> /// The numeric value for this tag. /// </summary> /// <remarks> /// If <see cref="TagClass"/> is <see cref="Asn1.TagClass.Universal"/>, this value can /// be interpreted as a <see cref="UniversalTagNumber"/>. /// </remarks> public int TagValue { get; } private Asn1Tag(byte controlFlags, int tagValue) { _controlFlags = (byte)(controlFlags & ControlMask); TagValue = tagValue; } /// <summary> /// Create an <see cref="Asn1Tag"/> for a tag from the UNIVERSAL class. /// </summary> /// <param name="universalTagNumber"> /// One of the enumeration values that specifies the semantic type for this tag. /// </param> /// <param name="isConstructed"> /// <see langword="true"/> for a constructed tag, <see langword="false"/> for a primitive tag. /// </param> /// <exception cref="ArgumentOutOfRangeException"> /// <paramref name="universalTagNumber"/> is not a known value. /// </exception> public Asn1Tag(UniversalTagNumber universalTagNumber, bool isConstructed = false) : this(isConstructed ? ConstructedMask : (byte)0, (int)universalTagNumber) { // T-REC-X.680-201508 sec 8.6 (Table 1) const UniversalTagNumber ReservedIndex = (UniversalTagNumber)15; if (universalTagNumber < UniversalTagNumber.EndOfContents || universalTagNumber > UniversalTagNumber.RelativeObjectIdentifierIRI || universalTagNumber == ReservedIndex) { throw new ArgumentOutOfRangeException(nameof(universalTagNumber)); } } /// <summary> /// Create an <see cref="Asn1Tag"/> for a specified value within a specified tag class. /// </summary> /// <param name="tagClass"> /// The tag class for this tag. /// </param> /// <param name="tagValue"> /// The numeric value for this tag. /// </param> /// <param name="isConstructed"> /// <see langword="true"/> for a constructed tag, <see langword="false"/> for a primitive tag. /// </param> /// <exception cref="ArgumentOutOfRangeException"> /// <paramref name="tagClass"/> is not a known value. /// /// -or- /// /// <paramref name="tagValue" /> is negative. /// </exception> /// <remarks> /// This constructor allows for the creation undefined UNIVERSAL class tags. /// </remarks> public Asn1Tag(TagClass tagClass, int tagValue, bool isConstructed = false) : this((byte)((byte)tagClass | (isConstructed ? 
ConstructedMask : 0)), tagValue) { switch (tagClass) { case TagClass.Universal: case TagClass.ContextSpecific: case TagClass.Application: case TagClass.Private: break; default: throw new ArgumentOutOfRangeException(nameof(tagClass)); } if (tagValue < 0) { throw new ArgumentOutOfRangeException(nameof(tagValue)); } } /// <summary> /// Produces a tag with the same <see cref="TagClass"/> and /// <see cref="TagValue"/> values, but whose <see cref="IsConstructed"/> is <see langword="true"/>. /// </summary> /// <returns> /// A tag with the same <see cref="TagClass"/> and <see cref="TagValue"/> /// values, but whose <see cref="IsConstructed"/> is <see langword="true"/>. /// </returns> public Asn1Tag AsConstructed() { return new Asn1Tag((byte)(_controlFlags | ConstructedMask), TagValue); } /// <summary> /// Produces a tag with the same <see cref="TagClass"/> and /// <see cref="TagValue"/> values, but whose <see cref="IsConstructed"/> is <see langword="false"/>. /// </summary> /// <returns> /// A tag with the same <see cref="TagClass"/> and <see cref="TagValue"/> /// values, but whose <see cref="IsConstructed"/> is <see langword="false"/>. /// </returns> public Asn1Tag AsPrimitive() { return new Asn1Tag((byte)(_controlFlags & ~ConstructedMask), TagValue); } /// <summary> /// Attempts to read a BER-encoded tag which starts at <paramref name="source"/>. /// </summary> /// <param name="source"> /// The read only byte sequence whose beginning is a BER-encoded tag. /// </param> /// <param name="tag"> /// The decoded tag. /// </param> /// <param name="bytesConsumed"> /// When this method returns, contains the number of bytes that contributed /// to the encoded tag, 0 on failure. This parameter is treated as uninitialized. /// </param> /// <returns> /// <see langword="true" /> if a tag was correctly decoded; otherwise, <see langword="false" />. /// </returns> public static bool TryDecode(ReadOnlySpan<byte> source, out Asn1Tag tag, out int bytesConsumed) { tag = default(Asn1Tag); bytesConsumed = 0; if (source.IsEmpty) { return false; } byte first = source[bytesConsumed]; bytesConsumed++; uint tagValue = (uint)(first & TagNumberMask); if (tagValue == TagNumberMask) { // Multi-byte encoding // T-REC-X.690-201508 sec 8.1.2.4 const byte ContinuationFlag = 0x80; const byte ValueMask = ContinuationFlag - 1; tagValue = 0; byte current; do { if (source.Length <= bytesConsumed) { bytesConsumed = 0; return false; } current = source[bytesConsumed]; byte currentValue = (byte)(current & ValueMask); bytesConsumed++; // If TooBigToShift is shifted left 7, the content bit shifts out. // So any value greater than or equal to this cannot be shifted without loss. const int TooBigToShift = 0b00000010_00000000_00000000_00000000; if (tagValue >= TooBigToShift) { bytesConsumed = 0; return false; } tagValue <<= 7; tagValue |= currentValue; // The first byte cannot have the value 0 (T-REC-X.690-201508 sec 8.1.2.4.2.c) if (tagValue == 0) { bytesConsumed = 0; return false; } } while ((current & ContinuationFlag) == ContinuationFlag); // This encoding is only valid for tag values greater than 30. // (T-REC-X.690-201508 sec 8.1.2.3, 8.1.2.4) if (tagValue <= 30) { bytesConsumed = 0; return false; } // There's not really any ambiguity, but prevent negative numbers from showing up. if (tagValue > int.MaxValue) { bytesConsumed = 0; return false; } } Debug.Assert(bytesConsumed > 0); tag = new Asn1Tag(first, (int)tagValue); return true; } /// <summary> /// Reads a BER-encoded tag which starts at <paramref name="source"/>. 
/// </summary> /// <param name="source"> /// The read only byte sequence whose beginning is a BER-encoded tag. /// </param> /// <param name="bytesConsumed"> /// When this method returns, contains the number of bytes that contributed /// to the encoded tag. This parameter is treated as uninitialized. /// </param> /// <returns> /// The decoded tag. /// </returns> /// <exception cref="AsnContentException"> /// The provided data does not decode to a tag. /// </exception> public static Asn1Tag Decode(ReadOnlySpan<byte> source, out int bytesConsumed) { if (TryDecode(source, out Asn1Tag tag, out bytesConsumed)) { return tag; } throw new AsnContentException(SR.ContentException_InvalidTag); } /// <summary> /// Reports the number of bytes required for the BER-encoding of this tag. /// </summary> /// <returns> /// The number of bytes required for the BER-encoding of this tag. /// </returns> /// <seealso cref="TryEncode(Span{byte},out int)"/> public int CalculateEncodedSize() { const int SevenBits = 0b0111_1111; const int FourteenBits = 0b0011_1111_1111_1111; const int TwentyOneBits = 0b0001_1111_1111_1111_1111_1111; const int TwentyEightBits = 0b0000_1111_1111_1111_1111_1111_1111_1111; if (TagValue < TagNumberMask) return 1; if (TagValue <= SevenBits) return 2; if (TagValue <= FourteenBits) return 3; if (TagValue <= TwentyOneBits) return 4; if (TagValue <= TwentyEightBits) return 5; return 6; } /// <summary> /// Attempts to write the BER-encoded form of this tag to <paramref name="destination"/>. /// </summary> /// <param name="destination"> /// The start of where the encoded tag should be written. /// </param> /// <param name="bytesWritten"> /// Receives the value from <see cref="CalculateEncodedSize"/> on success, 0 on failure. /// </param> /// <returns> /// <see langword="false"/> if <paramref name="destination"/>.<see cref="Span{T}.Length"/> &lt; /// <see cref="CalculateEncodedSize"/>(), <see langword="true"/> otherwise. /// </returns> public bool TryEncode(Span<byte> destination, out int bytesWritten) { int spaceRequired = CalculateEncodedSize(); if (destination.Length < spaceRequired) { bytesWritten = 0; return false; } if (spaceRequired == 1) { byte value = (byte)(_controlFlags | TagValue); destination[0] = value; bytesWritten = 1; return true; } byte firstByte = (byte)(_controlFlags | TagNumberMask); destination[0] = firstByte; int remaining = TagValue; int idx = spaceRequired - 1; while (remaining > 0) { int segment = remaining & 0x7F; // The last byte doesn't get the marker, which we write first. if (remaining != TagValue) { segment |= 0x80; } Debug.Assert(segment <= byte.MaxValue); destination[idx] = (byte)segment; remaining >>= 7; idx--; } Debug.Assert(idx == 0); bytesWritten = spaceRequired; return true; } /// <summary> /// Writes the BER-encoded form of this tag to <paramref name="destination"/>. /// </summary> /// <param name="destination"> /// The start of where the encoded tag should be written. /// </param> /// <returns> /// The number of bytes written to <paramref name="destination"/>. /// </returns> /// <seealso cref="CalculateEncodedSize"/> /// <exception cref="ArgumentException"> /// <paramref name="destination"/>.<see cref="Span{T}.Length"/> &lt; <see cref="CalculateEncodedSize"/>. 
/// </exception> public int Encode(Span<byte> destination) { if (TryEncode(destination, out int bytesWritten)) { return bytesWritten; } throw new ArgumentException(SR.Argument_DestinationTooShort, nameof(destination)); } /// <summary> /// Tests if <paramref name="other"/> has the same encoding as this tag. /// </summary> /// <param name="other"> /// Tag to test for equality. /// </param> /// <returns> /// <see langword="true"/> if <paramref name="other"/> has the same values for /// <see cref="TagClass"/>, <see cref="TagValue"/>, and <see cref="IsConstructed"/>; /// <see langword="false"/> otherwise. /// </returns> public bool Equals(Asn1Tag other) { return _controlFlags == other._controlFlags && TagValue == other.TagValue; } /// <summary> /// Tests if <paramref name="obj"/> is an <see cref="Asn1Tag"/> with the same /// encoding as this tag. /// </summary> /// <param name="obj">Object to test for value equality</param> /// <returns> /// <see langword="false"/> if <paramref name="obj"/> is not an <see cref="Asn1Tag"/>, /// <see cref="Equals(Asn1Tag)"/> otherwise. /// </returns> public override bool Equals([NotNullWhen(true)] object? obj) { return obj is Asn1Tag tag && Equals(tag); } /// <summary> /// Returns the hash code for this instance. /// </summary> /// <returns> /// A 32-bit signed integer hash code. /// </returns> public override int GetHashCode() { // Most TagValue values will be in the 0-30 range, // the GetHashCode value only has collisions when TagValue is // between 2^29 and uint.MaxValue return (_controlFlags << 24) ^ TagValue; } /// <summary> /// Tests if two <see cref="Asn1Tag"/> values have the same BER encoding. /// </summary> /// <param name="left">The first value to compare.</param> /// <param name="right">The second value to compare.</param> /// <returns> /// <see langword="true"/> if <paramref name="left"/> and <paramref name="right"/> have the same /// BER encoding, <see langword="false"/> otherwise. /// </returns> public static bool operator ==(Asn1Tag left, Asn1Tag right) { return left.Equals(right); } /// <summary> /// Tests if two <see cref="Asn1Tag"/> values have a different BER encoding. /// </summary> /// <param name="left">The first value to compare.</param> /// <param name="right">The second value to compare.</param> /// <returns> /// <see langword="true"/> if <paramref name="left"/> and <paramref name="right"/> have a different /// BER encoding, <see langword="false"/> otherwise. /// </returns> public static bool operator !=(Asn1Tag left, Asn1Tag right) { return !left.Equals(right); } /// <summary> /// Tests if <paramref name="other"/> has the same <see cref="TagClass"/> and <see cref="TagValue"/> /// values as this tag, and does not compare <see cref="IsConstructed"/>. /// </summary> /// <param name="other">Tag to test for concept equality.</param> /// <returns> /// <see langword="true"/> if <paramref name="other"/> has the same <see cref="TagClass"/> and <see cref="TagValue"/> /// as this tag, <see langword="false"/> otherwise. /// </returns> public bool HasSameClassAndValue(Asn1Tag other) { return TagValue == other.TagValue && TagClass == other.TagClass; } /// <summary> /// Provides a text representation of this tag suitable for debugging. /// </summary> /// <returns> /// A text representation of this tag suitable for debugging. 
/// </returns> public override string ToString() { const string ConstructedPrefix = "Constructed "; string classAndValue; if (TagClass == TagClass.Universal) { classAndValue = ((UniversalTagNumber)TagValue).ToString(); } else { classAndValue = TagClass + "-" + TagValue; } if (IsConstructed) { return ConstructedPrefix + classAndValue; } return classAndValue; } } }
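The `TryDecode` implementation in the file above handles the high-tag-number form (T-REC-X.690-201508 sec 8.1.2.4): when the low five bits of the first byte are all ones, the tag number continues in base-128 bytes whose high bit marks continuation. A worked example using the public API shown in this file, compiled against the System.Formats.Asn1 package (the byte values are chosen purely for illustration):

```csharp
// Worked example for the multi-byte (high-tag-number) form handled by
// Asn1Tag.TryDecode above.
using System;
using System.Formats.Asn1;

public static class TagDemo
{
    public static void Main()
    {
        // 0xBF = 0b101_11111: context-specific class (0b10), constructed (1),
        //        low five bits all ones => multi-byte tag number follows.
        // 0x81 = continuation bit set, 7 value bits 0b0000001.
        // 0x22 = final byte (no continuation), 7 value bits 0b0100010.
        // Tag value = (0b0000001 << 7) | 0b0100010 = 128 + 34 = 162.
        ReadOnlySpan<byte> encoded = new byte[] { 0xBF, 0x81, 0x22 };

        Asn1Tag tag = Asn1Tag.Decode(encoded, out int consumed);
        Console.WriteLine(tag);      // Constructed ContextSpecific-162
        Console.WriteLine(consumed); // 3
        // 162 needs two base-128 bytes plus the leading byte, and
        // CalculateEncodedSize agrees: 162 > 0x7F but fits in 14 bits.
        Console.WriteLine(tag.CalculateEncodedSize()); // 3
    }
}
```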
// Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. using System.Diagnostics; using System.Diagnostics.CodeAnalysis; namespace System.Formats.Asn1 { /// <summary> /// This type represents an ASN.1 tag, as described in ITU-T Recommendation X.680. /// </summary> // T-REC-X.690-201508 sec 8.1.2 public readonly partial struct Asn1Tag : IEquatable<Asn1Tag> { private const byte ClassMask = 0b1100_0000; private const byte ConstructedMask = 0b0010_0000; private const byte ControlMask = ClassMask | ConstructedMask; private const byte TagNumberMask = 0b0001_1111; private readonly byte _controlFlags; /// <summary> /// The tag class to which this tag belongs. /// </summary> public TagClass TagClass => (TagClass)(_controlFlags & ClassMask); /// <summary> /// Indicates if the tag represents a constructed encoding (<see langword="true"/>), or /// a primitive encoding (<see langword="false"/>). /// </summary> public bool IsConstructed => (_controlFlags & ConstructedMask) != 0; /// <summary> /// The numeric value for this tag. /// </summary> /// <remarks> /// If <see cref="TagClass"/> is <see cref="Asn1.TagClass.Universal"/>, this value can /// be interpreted as a <see cref="UniversalTagNumber"/>. /// </remarks> public int TagValue { get; } private Asn1Tag(byte controlFlags, int tagValue) { _controlFlags = (byte)(controlFlags & ControlMask); TagValue = tagValue; } /// <summary> /// Create an <see cref="Asn1Tag"/> for a tag from the UNIVERSAL class. /// </summary> /// <param name="universalTagNumber"> /// One of the enumeration values that specifies the semantic type for this tag. /// </param> /// <param name="isConstructed"> /// <see langword="true"/> for a constructed tag, <see langword="false"/> for a primitive tag. /// </param> /// <exception cref="ArgumentOutOfRangeException"> /// <paramref name="universalTagNumber"/> is not a known value. /// </exception> public Asn1Tag(UniversalTagNumber universalTagNumber, bool isConstructed = false) : this(isConstructed ? ConstructedMask : (byte)0, (int)universalTagNumber) { // T-REC-X.680-201508 sec 8.6 (Table 1) const UniversalTagNumber ReservedIndex = (UniversalTagNumber)15; if (universalTagNumber < UniversalTagNumber.EndOfContents || universalTagNumber > UniversalTagNumber.RelativeObjectIdentifierIRI || universalTagNumber == ReservedIndex) { throw new ArgumentOutOfRangeException(nameof(universalTagNumber)); } } /// <summary> /// Create an <see cref="Asn1Tag"/> for a specified value within a specified tag class. /// </summary> /// <param name="tagClass"> /// The tag class for this tag. /// </param> /// <param name="tagValue"> /// The numeric value for this tag. /// </param> /// <param name="isConstructed"> /// <see langword="true"/> for a constructed tag, <see langword="false"/> for a primitive tag. /// </param> /// <exception cref="ArgumentOutOfRangeException"> /// <paramref name="tagClass"/> is not a known value. /// /// -or- /// /// <paramref name="tagValue" /> is negative. /// </exception> /// <remarks> /// This constructor allows for the creation undefined UNIVERSAL class tags. /// </remarks> public Asn1Tag(TagClass tagClass, int tagValue, bool isConstructed = false) : this((byte)((byte)tagClass | (isConstructed ? 
ConstructedMask : 0)), tagValue) { switch (tagClass) { case TagClass.Universal: case TagClass.ContextSpecific: case TagClass.Application: case TagClass.Private: break; default: throw new ArgumentOutOfRangeException(nameof(tagClass)); } if (tagValue < 0) { throw new ArgumentOutOfRangeException(nameof(tagValue)); } } /// <summary> /// Produces a tag with the same <see cref="TagClass"/> and /// <see cref="TagValue"/> values, but whose <see cref="IsConstructed"/> is <see langword="true"/>. /// </summary> /// <returns> /// A tag with the same <see cref="TagClass"/> and <see cref="TagValue"/> /// values, but whose <see cref="IsConstructed"/> is <see langword="true"/>. /// </returns> public Asn1Tag AsConstructed() { return new Asn1Tag((byte)(_controlFlags | ConstructedMask), TagValue); } /// <summary> /// Produces a tag with the same <see cref="TagClass"/> and /// <see cref="TagValue"/> values, but whose <see cref="IsConstructed"/> is <see langword="false"/>. /// </summary> /// <returns> /// A tag with the same <see cref="TagClass"/> and <see cref="TagValue"/> /// values, but whose <see cref="IsConstructed"/> is <see langword="false"/>. /// </returns> public Asn1Tag AsPrimitive() { return new Asn1Tag((byte)(_controlFlags & ~ConstructedMask), TagValue); } /// <summary> /// Attempts to read a BER-encoded tag which starts at <paramref name="source"/>. /// </summary> /// <param name="source"> /// The read only byte sequence whose beginning is a BER-encoded tag. /// </param> /// <param name="tag"> /// The decoded tag. /// </param> /// <param name="bytesConsumed"> /// When this method returns, contains the number of bytes that contributed /// to the encoded tag, 0 on failure. This parameter is treated as uninitialized. /// </param> /// <returns> /// <see langword="true" /> if a tag was correctly decoded; otherwise, <see langword="false" />. /// </returns> public static bool TryDecode(ReadOnlySpan<byte> source, out Asn1Tag tag, out int bytesConsumed) { tag = default(Asn1Tag); bytesConsumed = 0; if (source.IsEmpty) { return false; } byte first = source[bytesConsumed]; bytesConsumed++; uint tagValue = (uint)(first & TagNumberMask); if (tagValue == TagNumberMask) { // Multi-byte encoding // T-REC-X.690-201508 sec 8.1.2.4 const byte ContinuationFlag = 0x80; const byte ValueMask = ContinuationFlag - 1; tagValue = 0; byte current; do { if (source.Length <= bytesConsumed) { bytesConsumed = 0; return false; } current = source[bytesConsumed]; byte currentValue = (byte)(current & ValueMask); bytesConsumed++; // If TooBigToShift is shifted left 7, the content bit shifts out. // So any value greater than or equal to this cannot be shifted without loss. const int TooBigToShift = 0b00000010_00000000_00000000_00000000; if (tagValue >= TooBigToShift) { bytesConsumed = 0; return false; } tagValue <<= 7; tagValue |= currentValue; // The first byte cannot have the value 0 (T-REC-X.690-201508 sec 8.1.2.4.2.c) if (tagValue == 0) { bytesConsumed = 0; return false; } } while ((current & ContinuationFlag) == ContinuationFlag); // This encoding is only valid for tag values greater than 30. // (T-REC-X.690-201508 sec 8.1.2.3, 8.1.2.4) if (tagValue <= 30) { bytesConsumed = 0; return false; } // There's not really any ambiguity, but prevent negative numbers from showing up. if (tagValue > int.MaxValue) { bytesConsumed = 0; return false; } } Debug.Assert(bytesConsumed > 0); tag = new Asn1Tag(first, (int)tagValue); return true; } /// <summary> /// Reads a BER-encoded tag which starts at <paramref name="source"/>. 
/// </summary> /// <param name="source"> /// The read only byte sequence whose beginning is a BER-encoded tag. /// </param> /// <param name="bytesConsumed"> /// When this method returns, contains the number of bytes that contributed /// to the encoded tag. This parameter is treated as uninitialized. /// </param> /// <returns> /// The decoded tag. /// </returns> /// <exception cref="AsnContentException"> /// The provided data does not decode to a tag. /// </exception> public static Asn1Tag Decode(ReadOnlySpan<byte> source, out int bytesConsumed) { if (TryDecode(source, out Asn1Tag tag, out bytesConsumed)) { return tag; } throw new AsnContentException(SR.ContentException_InvalidTag); } /// <summary> /// Reports the number of bytes required for the BER-encoding of this tag. /// </summary> /// <returns> /// The number of bytes required for the BER-encoding of this tag. /// </returns> /// <seealso cref="TryEncode(Span{byte},out int)"/> public int CalculateEncodedSize() { const int SevenBits = 0b0111_1111; const int FourteenBits = 0b0011_1111_1111_1111; const int TwentyOneBits = 0b0001_1111_1111_1111_1111_1111; const int TwentyEightBits = 0b0000_1111_1111_1111_1111_1111_1111_1111; if (TagValue < TagNumberMask) return 1; if (TagValue <= SevenBits) return 2; if (TagValue <= FourteenBits) return 3; if (TagValue <= TwentyOneBits) return 4; if (TagValue <= TwentyEightBits) return 5; return 6; } /// <summary> /// Attempts to write the BER-encoded form of this tag to <paramref name="destination"/>. /// </summary> /// <param name="destination"> /// The start of where the encoded tag should be written. /// </param> /// <param name="bytesWritten"> /// Receives the value from <see cref="CalculateEncodedSize"/> on success, 0 on failure. /// </param> /// <returns> /// <see langword="false"/> if <paramref name="destination"/>.<see cref="Span{T}.Length"/> &lt; /// <see cref="CalculateEncodedSize"/>(), <see langword="true"/> otherwise. /// </returns> public bool TryEncode(Span<byte> destination, out int bytesWritten) { int spaceRequired = CalculateEncodedSize(); if (destination.Length < spaceRequired) { bytesWritten = 0; return false; } if (spaceRequired == 1) { byte value = (byte)(_controlFlags | TagValue); destination[0] = value; bytesWritten = 1; return true; } byte firstByte = (byte)(_controlFlags | TagNumberMask); destination[0] = firstByte; int remaining = TagValue; int idx = spaceRequired - 1; while (remaining > 0) { int segment = remaining & 0x7F; // The last byte doesn't get the marker, which we write first. if (remaining != TagValue) { segment |= 0x80; } Debug.Assert(segment <= byte.MaxValue); destination[idx] = (byte)segment; remaining >>= 7; idx--; } Debug.Assert(idx == 0); bytesWritten = spaceRequired; return true; } /// <summary> /// Writes the BER-encoded form of this tag to <paramref name="destination"/>. /// </summary> /// <param name="destination"> /// The start of where the encoded tag should be written. /// </param> /// <returns> /// The number of bytes written to <paramref name="destination"/>. /// </returns> /// <seealso cref="CalculateEncodedSize"/> /// <exception cref="ArgumentException"> /// <paramref name="destination"/>.<see cref="Span{T}.Length"/> &lt; <see cref="CalculateEncodedSize"/>. 
/// </exception> public int Encode(Span<byte> destination) { if (TryEncode(destination, out int bytesWritten)) { return bytesWritten; } throw new ArgumentException(SR.Argument_DestinationTooShort, nameof(destination)); } /// <summary> /// Tests if <paramref name="other"/> has the same encoding as this tag. /// </summary> /// <param name="other"> /// Tag to test for equality. /// </param> /// <returns> /// <see langword="true"/> if <paramref name="other"/> has the same values for /// <see cref="TagClass"/>, <see cref="TagValue"/>, and <see cref="IsConstructed"/>; /// <see langword="false"/> otherwise. /// </returns> public bool Equals(Asn1Tag other) { return _controlFlags == other._controlFlags && TagValue == other.TagValue; } /// <summary> /// Tests if <paramref name="obj"/> is an <see cref="Asn1Tag"/> with the same /// encoding as this tag. /// </summary> /// <param name="obj">Object to test for value equality</param> /// <returns> /// <see langword="false"/> if <paramref name="obj"/> is not an <see cref="Asn1Tag"/>, /// <see cref="Equals(Asn1Tag)"/> otherwise. /// </returns> public override bool Equals([NotNullWhen(true)] object? obj) { return obj is Asn1Tag tag && Equals(tag); } /// <summary> /// Returns the hash code for this instance. /// </summary> /// <returns> /// A 32-bit signed integer hash code. /// </returns> public override int GetHashCode() { // Most TagValue values will be in the 0-30 range, // the GetHashCode value only has collisions when TagValue is // between 2^29 and uint.MaxValue return (_controlFlags << 24) ^ TagValue; } /// <summary> /// Tests if two <see cref="Asn1Tag"/> values have the same BER encoding. /// </summary> /// <param name="left">The first value to compare.</param> /// <param name="right">The second value to compare.</param> /// <returns> /// <see langword="true"/> if <paramref name="left"/> and <paramref name="right"/> have the same /// BER encoding, <see langword="false"/> otherwise. /// </returns> public static bool operator ==(Asn1Tag left, Asn1Tag right) { return left.Equals(right); } /// <summary> /// Tests if two <see cref="Asn1Tag"/> values have a different BER encoding. /// </summary> /// <param name="left">The first value to compare.</param> /// <param name="right">The second value to compare.</param> /// <returns> /// <see langword="true"/> if <paramref name="left"/> and <paramref name="right"/> have a different /// BER encoding, <see langword="false"/> otherwise. /// </returns> public static bool operator !=(Asn1Tag left, Asn1Tag right) { return !left.Equals(right); } /// <summary> /// Tests if <paramref name="other"/> has the same <see cref="TagClass"/> and <see cref="TagValue"/> /// values as this tag, and does not compare <see cref="IsConstructed"/>. /// </summary> /// <param name="other">Tag to test for concept equality.</param> /// <returns> /// <see langword="true"/> if <paramref name="other"/> has the same <see cref="TagClass"/> and <see cref="TagValue"/> /// as this tag, <see langword="false"/> otherwise. /// </returns> public bool HasSameClassAndValue(Asn1Tag other) { return TagValue == other.TagValue && TagClass == other.TagClass; } /// <summary> /// Provides a text representation of this tag suitable for debugging. /// </summary> /// <returns> /// A text representation of this tag suitable for debugging. 
/// </returns> public override string ToString() { const string ConstructedPrefix = "Constructed "; string classAndValue; if (TagClass == TagClass.Universal) { classAndValue = ((UniversalTagNumber)TagValue).ToString(); } else { classAndValue = TagClass + "-" + TagValue; } if (IsConstructed) { return ConstructedPrefix + classAndValue; } return classAndValue; } } }
-1
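The Asn1Tag surface shown above (the constructor, CalculateEncodedSize, TryEncode, and TryDecode) round-trips cleanly; a minimal sketch, assuming the System.Formats.Asn1 package is referenced, with an arbitrary context-specific tag value chosen to exercise the multi-byte encoding path:

```csharp
using System;
using System.Formats.Asn1;

class Asn1TagRoundTrip
{
    static void Main()
    {
        // A tag value above 30 forces the multi-byte (base-128) form
        // that the TryEncode/TryDecode loops above handle.
        var tag = new Asn1Tag(TagClass.ContextSpecific, 1000, isConstructed: true);

        Span<byte> buffer = stackalloc byte[tag.CalculateEncodedSize()];
        bool encoded = tag.TryEncode(buffer, out int written);

        // Decoding what was just written should consume the same number
        // of bytes and yield a tag that compares equal to the original.
        bool decoded = Asn1Tag.TryDecode(buffer, out Asn1Tag roundTripped, out int consumed);

        Console.WriteLine($"{encoded}/{decoded}, {written}=={consumed}, equal={tag == roundTripped}");
    }
}
```

For tag value 1000, CalculateEncodedSize returns 3 (one identifier byte plus two base-128 continuation bytes), and the decoded tag is equal to the original.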
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist that categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all of the above are false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally, the tree shows that when there are multiple generator projects under "gen", a generator project's parent directory often isn't 'gen' but a directory named after the project. This led to such projects being treated incorrectly and to developers having to set the "IsGeneratorProject" property manually for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** properties are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all of the above are false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist that categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all of the above are false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally, the tree shows that when there are multiple generator projects under "gen", a generator project's parent directory often isn't 'gen' but a directory named after the project. This led to such projects being treated incorrectly and to developers having to set the "IsGeneratorProject" property manually for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** properties are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all of the above are false.**
./src/tests/JIT/Methodical/explicit/basic/refarg_s.cs
// Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. using System; namespace Test { internal class AA { protected char pad1 = 'z'; public String mm = "aha"; public AA() { _self = this; } private AA _self = null; ~AA() { if (pad1 != 'z' || mm != "aha") throw new Exception(); if (_self != null && (pad1 != 'z' || mm != "aha")) throw new Exception(); } } internal class App { private static AA s_aa = new AA(); private static void Litter() { GC.Collect(); for (int i = 0; i < 1000; i++) { int[] p = new int[1000]; } GC.Collect(); } private static int Test(ref String n) { s_aa = null; Litter(); if (n != "aha") { Console.WriteLine("*** failed ***"); return 1; } Console.WriteLine("*** passed ***"); return 100; } private static int Main() { int exitCode = Test(ref s_aa.mm); GC.Collect(); GC.WaitForPendingFinalizers(); return exitCode; } } }
// Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. using System; namespace Test { internal class AA { protected char pad1 = 'z'; public String mm = "aha"; public AA() { _self = this; } private AA _self = null; ~AA() { if (pad1 != 'z' || mm != "aha") throw new Exception(); if (_self != null && (pad1 != 'z' || mm != "aha")) throw new Exception(); } } internal class App { private static AA s_aa = new AA(); private static void Litter() { GC.Collect(); for (int i = 0; i < 1000; i++) { int[] p = new int[1000]; } GC.Collect(); } private static int Test(ref String n) { s_aa = null; Litter(); if (n != "aha") { Console.WriteLine("*** failed ***"); return 1; } Console.WriteLine("*** passed ***"); return 100; } private static int Main() { int exitCode = Test(ref s_aa.mm); GC.Collect(); GC.WaitForPendingFinalizers(); return exitCode; } } }
-1
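The refarg_s test above hinges on byref liveness: passing `ref s_aa.mm` hands the callee an interior reference, and as long as that byref is reachable the GC must keep the whole AA instance (and its pending finalizer) alive, even though the static field was nulled out. A compact sketch of the same pattern, with hypothetical type and method names; exact finalizer timing is runtime-dependent:

```csharp
using System;

class Finalizable
{
    public string Payload = "aha";
    ~Finalizable() => Console.WriteLine("finalized");
}

class ByRefLiveness
{
    private static Finalizable s_obj = new Finalizable();

    private static void UseField(ref string field)
    {
        s_obj = null;             // drop the only ordinary reference
        GC.Collect();             // the byref still keeps the owner alive
        Console.WriteLine(field); // prints "aha": the object was not collected
    }

    static void Main()
    {
        UseField(ref s_obj.Payload);
        GC.Collect();
        GC.WaitForPendingFinalizers(); // the finalizer is free to run by now
    }
}
```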
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist that categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all of the above are false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally, the tree shows that when there are multiple generator projects under "gen", a generator project's parent directory often isn't 'gen' but a directory named after the project. This led to such projects being treated incorrectly and to developers having to set the "IsGeneratorProject" property manually for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** properties are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all of the above are false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist that categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all of the above are false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally, the tree shows that when there are multiple generator projects under "gen", a generator project's parent directory often isn't 'gen' but a directory named after the project. This led to such projects being treated incorrectly and to developers having to set the "IsGeneratorProject" property manually for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** properties are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all of the above are false.**
./src/tests/Loader/classloader/TypeGeneratorTests/TypeGeneratorTest710/Generated710.ilproj
<Project Sdk="Microsoft.NET.Sdk.IL"> <PropertyGroup> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <ItemGroup> <Compile Include="Generated710.il" /> </ItemGroup> <ItemGroup> <ProjectReference Include="..\TestFramework\TestFramework.csproj" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk.IL"> <PropertyGroup> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <ItemGroup> <Compile Include="Generated710.il" /> </ItemGroup> <ItemGroup> <ProjectReference Include="..\TestFramework\TestFramework.csproj" /> </ItemGroup> </Project>
-1
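The categorization rules in the PR description above boil down to a handful of path checks; the ilproj row just shown, for instance, sits under '/tests/' with a name that matches none of the test suffixes, so it would fall into IsTestSupportProject. A minimal C# sketch of that decision order — Classify is a hypothetical helper, the real logic lives in MSBuild .props files, and the SDK-derived properties are omitted since they don't come from the path:

```csharp
using System;
using System.IO;

static class ProjectKind
{
    // Hypothetical helper mirroring the PR's path-based rules.
    public static string Classify(string projectPath)
    {
        string name = Path.GetFileNameWithoutExtension(projectPath);
        string parent = Path.GetFileName(Path.GetDirectoryName(projectPath) ?? "");
        string grandparent = Path.GetFileName(
            Path.GetDirectoryName(Path.GetDirectoryName(projectPath) ?? "") ?? "");
        bool underTests = projectPath.Replace('\\', '/').Contains("/tests/");

        if (parent == "ref") return "IsReferenceAssemblyProject";
        // Widened rule: the parent *or* the parent's parent may be 'gen'.
        if (parent == "gen" || grandparent == "gen") return "IsGeneratorProject";
        if (underTests && (name.EndsWith(".Tests", StringComparison.Ordinal) ||
                           name.EndsWith(".UnitTests", StringComparison.Ordinal)))
            return "IsTestProject";
        if (underTests && name.EndsWith(".TrimmingTests", StringComparison.Ordinal))
            return "IsTrimmingTestProject";
        if (underTests) return "IsTestSupportProject";
        return "IsSourceProject"; // true when everything above is false
    }
}
```

Under these rules, `ProjectKind.Classify` on the Generated710.ilproj path returns "IsTestSupportProject", matching the intent described above.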
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist that categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all of the above are false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally, the tree shows that when there are multiple generator projects under "gen", a generator project's parent directory often isn't 'gen' but a directory named after the project. This led to such projects being treated incorrectly and to developers having to set the "IsGeneratorProject" property manually for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** properties are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all of the above are false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist that categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all of the above are false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally, the tree shows that when there are multiple generator projects under "gen", a generator project's parent directory often isn't 'gen' but a directory named after the project. This led to such projects being treated incorrectly and to developers having to set the "IsGeneratorProject" property manually for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** properties are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all of the above are false.**
./src/libraries/System.Drawing.Common/src/System/Drawing/Printing/PageSettings.Windows.cs
// Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. using System.Diagnostics; using System.Drawing.Internal; using System.Runtime.InteropServices; namespace System.Drawing.Printing { /// <summary> /// Specifies settings that apply to a single page. /// </summary> public partial class PageSettings : ICloneable { internal PrinterSettings printerSettings; private TriState _color = TriState.Default; private PaperSize? _paperSize; private PaperSource? _paperSource; private PrinterResolution? _printerResolution; private TriState _landscape = TriState.Default; private Margins _margins = new Margins(); /// <summary> /// Initializes a new instance of the <see cref='PageSettings'/> class using the default printer. /// </summary> public PageSettings() : this(new PrinterSettings()) { } /// <summary> /// Initializes a new instance of the <see cref='PageSettings'/> class using the specified printer. /// </summary> public PageSettings(PrinterSettings printerSettings) { Debug.Assert(printerSettings != null, "printerSettings == null"); this.printerSettings = printerSettings; } /// <summary> /// Gets the bounds of the page, taking into account the Landscape property. /// </summary> public Rectangle Bounds { get { IntPtr modeHandle = printerSettings.GetHdevmode(); Rectangle pageBounds = GetBounds(modeHandle); Interop.Kernel32.GlobalFree(new HandleRef(this, modeHandle)); return pageBounds; } } /// <summary> /// Gets or sets a value indicating whether the page is printed in color. /// </summary> public bool Color { get { if (_color.IsDefault) return printerSettings.GetModeField(ModeField.Color, SafeNativeMethods.DMCOLOR_MONOCHROME) == SafeNativeMethods.DMCOLOR_COLOR; else return (bool)_color; } set { _color = value; } } /// <summary> /// Returns the x dimension of the hard margin /// </summary> public float HardMarginX { get { float hardMarginX = 0; DeviceContext dc = printerSettings.CreateDeviceContext(this); try { int dpiX = Interop.Gdi32.GetDeviceCaps(new HandleRef(dc, dc.Hdc), Interop.Gdi32.DeviceCapability.LOGPIXELSX); int hardMarginX_DU = Interop.Gdi32.GetDeviceCaps(new HandleRef(dc, dc.Hdc), Interop.Gdi32.DeviceCapability.PHYSICALOFFSETX); hardMarginX = hardMarginX_DU * 100 / dpiX; } finally { dc.Dispose(); } return hardMarginX; } } /// <summary> /// Returns the y dimension of the hard margin. /// </summary> public float HardMarginY { get { float hardMarginY = 0; DeviceContext dc = printerSettings.CreateDeviceContext(this); try { int dpiY = Interop.Gdi32.GetDeviceCaps(new HandleRef(dc, dc.Hdc), Interop.Gdi32.DeviceCapability.LOGPIXELSY); int hardMarginY_DU = Interop.Gdi32.GetDeviceCaps(new HandleRef(dc, dc.Hdc), Interop.Gdi32.DeviceCapability.PHYSICALOFFSETY); hardMarginY = hardMarginY_DU * 100 / dpiY; } finally { dc.Dispose(); } return hardMarginY; } } /// <summary> /// Gets or sets a value indicating whether the page should be printed in landscape or portrait orientation. /// </summary> public bool Landscape { get { if (_landscape.IsDefault) return printerSettings.GetModeField(ModeField.Orientation, SafeNativeMethods.DMORIENT_PORTRAIT) == SafeNativeMethods.DMORIENT_LANDSCAPE; else return (bool)_landscape; } set { _landscape = value; } } /// <summary> /// Gets or sets a value indicating the margins for this page. /// </summary> public Margins Margins { get { return _margins; } set { _margins = value; } } /// <summary> /// Gets or sets the paper size. 
/// </summary> public PaperSize PaperSize { get { return GetPaperSize(IntPtr.Zero); } set { _paperSize = value; } } /// <summary> /// Gets or sets a value indicating the paper source (i.e. upper bin). /// </summary> public PaperSource PaperSource { get { if (_paperSource == null) { IntPtr modeHandle = printerSettings.GetHdevmode(); IntPtr modePointer = Interop.Kernel32.GlobalLock(new HandleRef(this, modeHandle)); Interop.Gdi32.DEVMODE mode = Marshal.PtrToStructure<Interop.Gdi32.DEVMODE>(modePointer)!; PaperSource result = PaperSourceFromMode(mode); Interop.Kernel32.GlobalUnlock(new HandleRef(this, modeHandle)); Interop.Kernel32.GlobalFree(new HandleRef(this, modeHandle)); return result; } else return _paperSource; } set { _paperSource = value; } } /// <summary> /// Gets the PrintableArea for the printer. Units = 100ths of an inch. /// </summary> public RectangleF PrintableArea { get { RectangleF printableArea = default; DeviceContext dc = printerSettings.CreateInformationContext(this); HandleRef hdc = new HandleRef(dc, dc.Hdc); try { int dpiX = Interop.Gdi32.GetDeviceCaps(hdc, Interop.Gdi32.DeviceCapability.LOGPIXELSX); int dpiY = Interop.Gdi32.GetDeviceCaps(hdc, Interop.Gdi32.DeviceCapability.LOGPIXELSY); if (!Landscape) { // // Need to convert the printable area to 100th of an inch from the device units printableArea.X = (float)Interop.Gdi32.GetDeviceCaps(hdc, Interop.Gdi32.DeviceCapability.PHYSICALOFFSETX) * 100 / dpiX; printableArea.Y = (float)Interop.Gdi32.GetDeviceCaps(hdc, Interop.Gdi32.DeviceCapability.PHYSICALOFFSETY) * 100 / dpiY; printableArea.Width = (float)Interop.Gdi32.GetDeviceCaps(hdc, Interop.Gdi32.DeviceCapability.HORZRES) * 100 / dpiX; printableArea.Height = (float)Interop.Gdi32.GetDeviceCaps(hdc, Interop.Gdi32.DeviceCapability.VERTRES) * 100 / dpiY; } else { // // Need to convert the printable area to 100th of an inch from the device units printableArea.Y = (float)Interop.Gdi32.GetDeviceCaps(hdc, Interop.Gdi32.DeviceCapability.PHYSICALOFFSETX) * 100 / dpiX; printableArea.X = (float)Interop.Gdi32.GetDeviceCaps(hdc, Interop.Gdi32.DeviceCapability.PHYSICALOFFSETY) * 100 / dpiY; printableArea.Height = (float)Interop.Gdi32.GetDeviceCaps(hdc, Interop.Gdi32.DeviceCapability.HORZRES) * 100 / dpiX; printableArea.Width = (float)Interop.Gdi32.GetDeviceCaps(hdc, Interop.Gdi32.DeviceCapability.VERTRES) * 100 / dpiY; } } finally { dc.Dispose(); } return printableArea; } } /// <summary> /// Gets or sets the printer resolution for the page. /// </summary> public PrinterResolution PrinterResolution { get { if (_printerResolution == null) { IntPtr modeHandle = printerSettings.GetHdevmode(); IntPtr modePointer = Interop.Kernel32.GlobalLock(new HandleRef(this, modeHandle)); Interop.Gdi32.DEVMODE mode = Marshal.PtrToStructure<Interop.Gdi32.DEVMODE>(modePointer)!; PrinterResolution result = PrinterResolutionFromMode(mode); Interop.Kernel32.GlobalUnlock(new HandleRef(this, modeHandle)); Interop.Kernel32.GlobalFree(new HandleRef(this, modeHandle)); return result; } else return _printerResolution; } set { _printerResolution = value; } } /// <summary> /// Gets or sets the associated printer settings. /// </summary> public PrinterSettings PrinterSettings { get { return printerSettings; } set { if (value == null) value = new PrinterSettings(); printerSettings = value; } } /// <summary> /// Copies the settings and margins. 
/// </summary> public object Clone() { PageSettings result = (PageSettings)MemberwiseClone(); result._margins = (Margins)_margins.Clone(); return result; } /// <summary> /// Copies the relevant information out of the PageSettings and into the handle. /// </summary> public void CopyToHdevmode(IntPtr hdevmode) { IntPtr modePointer = Interop.Kernel32.GlobalLock(hdevmode); Interop.Gdi32.DEVMODE mode = Marshal.PtrToStructure<Interop.Gdi32.DEVMODE>(modePointer)!; if (_color.IsNotDefault && ((mode.dmFields & SafeNativeMethods.DM_COLOR) == SafeNativeMethods.DM_COLOR)) mode.dmColor = unchecked((short)(((bool)_color) ? SafeNativeMethods.DMCOLOR_COLOR : SafeNativeMethods.DMCOLOR_MONOCHROME)); if (_landscape.IsNotDefault && ((mode.dmFields & SafeNativeMethods.DM_ORIENTATION) == SafeNativeMethods.DM_ORIENTATION)) mode.dmOrientation = unchecked((short)(((bool)_landscape) ? SafeNativeMethods.DMORIENT_LANDSCAPE : SafeNativeMethods.DMORIENT_PORTRAIT)); if (_paperSize != null) { if ((mode.dmFields & SafeNativeMethods.DM_PAPERSIZE) == SafeNativeMethods.DM_PAPERSIZE) { mode.dmPaperSize = unchecked((short)_paperSize.RawKind); } bool setWidth = false; bool setLength = false; if ((mode.dmFields & SafeNativeMethods.DM_PAPERLENGTH) == SafeNativeMethods.DM_PAPERLENGTH) { // dmPaperLength is always in tenths of millimeter but paperSizes are in hundredth of inch .. // so we need to convert :: use PrinterUnitConvert.Convert(value, PrinterUnit.TenthsOfAMillimeter /*fromUnit*/, PrinterUnit.Display /*ToUnit*/) int length = PrinterUnitConvert.Convert(_paperSize.Height, PrinterUnit.Display, PrinterUnit.TenthsOfAMillimeter); mode.dmPaperLength = unchecked((short)length); setLength = true; } if ((mode.dmFields & SafeNativeMethods.DM_PAPERWIDTH) == SafeNativeMethods.DM_PAPERWIDTH) { int width = PrinterUnitConvert.Convert(_paperSize.Width, PrinterUnit.Display, PrinterUnit.TenthsOfAMillimeter); mode.dmPaperWidth = unchecked((short)width); setWidth = true; } if (_paperSize.Kind == PaperKind.Custom) { if (!setLength) { mode.dmFields |= SafeNativeMethods.DM_PAPERLENGTH; int length = PrinterUnitConvert.Convert(_paperSize.Height, PrinterUnit.Display, PrinterUnit.TenthsOfAMillimeter); mode.dmPaperLength = unchecked((short)length); } if (!setWidth) { mode.dmFields |= SafeNativeMethods.DM_PAPERWIDTH; int width = PrinterUnitConvert.Convert(_paperSize.Width, PrinterUnit.Display, PrinterUnit.TenthsOfAMillimeter); mode.dmPaperWidth = unchecked((short)width); } } } if (_paperSource != null && ((mode.dmFields & SafeNativeMethods.DM_DEFAULTSOURCE) == SafeNativeMethods.DM_DEFAULTSOURCE)) { mode.dmDefaultSource = unchecked((short)_paperSource.RawKind); } if (_printerResolution != null) { if (_printerResolution.Kind == PrinterResolutionKind.Custom) { if ((mode.dmFields & SafeNativeMethods.DM_PRINTQUALITY) == SafeNativeMethods.DM_PRINTQUALITY) { mode.dmPrintQuality = unchecked((short)_printerResolution.X); } if ((mode.dmFields & SafeNativeMethods.DM_YRESOLUTION) == SafeNativeMethods.DM_YRESOLUTION) { mode.dmYResolution = unchecked((short)_printerResolution.Y); } } else { if ((mode.dmFields & SafeNativeMethods.DM_PRINTQUALITY) == SafeNativeMethods.DM_PRINTQUALITY) { mode.dmPrintQuality = unchecked((short)_printerResolution.Kind); } } } Marshal.StructureToPtr(mode, modePointer, false); // It's possible this page has a DEVMODE for a different printer than the DEVMODE passed in here // (Ex: occurs when Doc.DefaultPageSettings.PrinterSettings.PrinterName != Doc.PrinterSettings.PrinterName) // // if the passed in devmode has fewer bytes than our buffer 
for the extrainfo, we want to skip the merge as it will cause // a buffer overrun if (mode.dmDriverExtra >= ExtraBytes) { int retCode = Interop.Winspool.DocumentProperties(NativeMethods.NullHandleRef, NativeMethods.NullHandleRef, printerSettings.PrinterName, modePointer, modePointer, SafeNativeMethods.DM_IN_BUFFER | SafeNativeMethods.DM_OUT_BUFFER); if (retCode < 0) { Interop.Kernel32.GlobalFree(modePointer); } } Interop.Kernel32.GlobalUnlock(hdevmode); } private short ExtraBytes { get { IntPtr modeHandle = printerSettings.GetHdevmodeInternal(); IntPtr modePointer = Interop.Kernel32.GlobalLock(new HandleRef(this, modeHandle)); Interop.Gdi32.DEVMODE mode = Marshal.PtrToStructure<Interop.Gdi32.DEVMODE>(modePointer)!; short result = mode?.dmDriverExtra ?? 0; Interop.Kernel32.GlobalUnlock(new HandleRef(this, modeHandle)); Interop.Kernel32.GlobalFree(new HandleRef(this, modeHandle)); return result; } } // This function shows up big on profiles, so we need to make it fast internal Rectangle GetBounds(IntPtr modeHandle) { Rectangle pageBounds; PaperSize size = GetPaperSize(modeHandle); if (GetLandscape(modeHandle)) pageBounds = new Rectangle(0, 0, size.Height, size.Width); else pageBounds = new Rectangle(0, 0, size.Width, size.Height); return pageBounds; } private bool GetLandscape(IntPtr modeHandle) { if (_landscape.IsDefault) return printerSettings.GetModeField(ModeField.Orientation, SafeNativeMethods.DMORIENT_PORTRAIT, modeHandle) == SafeNativeMethods.DMORIENT_LANDSCAPE; else return (bool)_landscape; } private PaperSize GetPaperSize(IntPtr modeHandle) { if (_paperSize == null) { bool ownHandle = false; if (modeHandle == IntPtr.Zero) { modeHandle = printerSettings.GetHdevmode(); ownHandle = true; } IntPtr modePointer = Interop.Kernel32.GlobalLock(modeHandle); Interop.Gdi32.DEVMODE mode = Marshal.PtrToStructure<Interop.Gdi32.DEVMODE>(modePointer)!; PaperSize result = PaperSizeFromMode(mode); Interop.Kernel32.GlobalUnlock(modeHandle); if (ownHandle) { Interop.Kernel32.GlobalFree(modeHandle); } return result; } else return _paperSize; } private PaperSize PaperSizeFromMode(Interop.Gdi32.DEVMODE mode) { PaperSize[] sizes = printerSettings.Get_PaperSizes(); if ((mode.dmFields & SafeNativeMethods.DM_PAPERSIZE) == SafeNativeMethods.DM_PAPERSIZE) { for (int i = 0; i < sizes.Length; i++) { if ((int)sizes[i].RawKind == mode.dmPaperSize) return sizes[i]; } } return new PaperSize(PaperKind.Custom, "custom", //mode.dmPaperWidth, mode.dmPaperLength); PrinterUnitConvert.Convert(mode.dmPaperWidth, PrinterUnit.TenthsOfAMillimeter, PrinterUnit.Display), PrinterUnitConvert.Convert(mode.dmPaperLength, PrinterUnit.TenthsOfAMillimeter, PrinterUnit.Display)); } private PaperSource PaperSourceFromMode(Interop.Gdi32.DEVMODE mode) { PaperSource[] sources = printerSettings.Get_PaperSources(); if ((mode.dmFields & SafeNativeMethods.DM_DEFAULTSOURCE) == SafeNativeMethods.DM_DEFAULTSOURCE) { for (int i = 0; i < sources.Length; i++) { // the dmDefaultSource == to the RawKind in the Papersource.. and Not the Kind... // if the PaperSource is populated with CUSTOM values... 
if (unchecked((short)sources[i].RawKind) == mode.dmDefaultSource) { return sources[i]; } } } return new PaperSource((PaperSourceKind)mode.dmDefaultSource, "unknown"); } private PrinterResolution PrinterResolutionFromMode(Interop.Gdi32.DEVMODE mode) { PrinterResolution[] resolutions = printerSettings.Get_PrinterResolutions(); for (int i = 0; i < resolutions.Length; i++) { if (mode.dmPrintQuality >= 0 && ((mode.dmFields & SafeNativeMethods.DM_PRINTQUALITY) == SafeNativeMethods.DM_PRINTQUALITY) && ((mode.dmFields & SafeNativeMethods.DM_YRESOLUTION) == SafeNativeMethods.DM_YRESOLUTION)) { if (resolutions[i].X == unchecked((int)(PrinterResolutionKind)mode.dmPrintQuality) && resolutions[i].Y == unchecked((int)(PrinterResolutionKind)mode.dmYResolution)) return resolutions[i]; } else { if ((mode.dmFields & SafeNativeMethods.DM_PRINTQUALITY) == SafeNativeMethods.DM_PRINTQUALITY) { if (resolutions[i].Kind == (PrinterResolutionKind)mode.dmPrintQuality) return resolutions[i]; } } } return new PrinterResolution(PrinterResolutionKind.Custom, mode.dmPrintQuality, mode.dmYResolution); } /// <summary> /// Copies the relevant information out of the handle and into the PageSettings. /// </summary> public void SetHdevmode(IntPtr hdevmode) { if (hdevmode == IntPtr.Zero) { throw new ArgumentException(SR.Format(SR.InvalidPrinterHandle, hdevmode)); } IntPtr pointer = Interop.Kernel32.GlobalLock(hdevmode); Interop.Gdi32.DEVMODE mode = Marshal.PtrToStructure<Interop.Gdi32.DEVMODE>(pointer)!; if ((mode.dmFields & SafeNativeMethods.DM_COLOR) == SafeNativeMethods.DM_COLOR) { _color = (mode.dmColor == SafeNativeMethods.DMCOLOR_COLOR); } if ((mode.dmFields & SafeNativeMethods.DM_ORIENTATION) == SafeNativeMethods.DM_ORIENTATION) { _landscape = (mode.dmOrientation == SafeNativeMethods.DMORIENT_LANDSCAPE); } _paperSize = PaperSizeFromMode(mode); _paperSource = PaperSourceFromMode(mode); _printerResolution = PrinterResolutionFromMode(mode); Interop.Kernel32.GlobalUnlock(hdevmode); } /// <summary> /// Provides some interesting information about the PageSettings in String form. /// </summary> public override string ToString() => $"[{nameof(PageSettings)}: Color={Color}, Landscape={Landscape}, Margins={Margins}, PaperSize={PaperSize}, PaperSource={PaperSource}, PrinterResolution={PrinterResolution}]"; } }
// Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. using System.Diagnostics; using System.Drawing.Internal; using System.Runtime.InteropServices; namespace System.Drawing.Printing { /// <summary> /// Specifies settings that apply to a single page. /// </summary> public partial class PageSettings : ICloneable { internal PrinterSettings printerSettings; private TriState _color = TriState.Default; private PaperSize? _paperSize; private PaperSource? _paperSource; private PrinterResolution? _printerResolution; private TriState _landscape = TriState.Default; private Margins _margins = new Margins(); /// <summary> /// Initializes a new instance of the <see cref='PageSettings'/> class using the default printer. /// </summary> public PageSettings() : this(new PrinterSettings()) { } /// <summary> /// Initializes a new instance of the <see cref='PageSettings'/> class using the specified printer. /// </summary> public PageSettings(PrinterSettings printerSettings) { Debug.Assert(printerSettings != null, "printerSettings == null"); this.printerSettings = printerSettings; } /// <summary> /// Gets the bounds of the page, taking into account the Landscape property. /// </summary> public Rectangle Bounds { get { IntPtr modeHandle = printerSettings.GetHdevmode(); Rectangle pageBounds = GetBounds(modeHandle); Interop.Kernel32.GlobalFree(new HandleRef(this, modeHandle)); return pageBounds; } } /// <summary> /// Gets or sets a value indicating whether the page is printed in color. /// </summary> public bool Color { get { if (_color.IsDefault) return printerSettings.GetModeField(ModeField.Color, SafeNativeMethods.DMCOLOR_MONOCHROME) == SafeNativeMethods.DMCOLOR_COLOR; else return (bool)_color; } set { _color = value; } } /// <summary> /// Returns the x dimension of the hard margin /// </summary> public float HardMarginX { get { float hardMarginX = 0; DeviceContext dc = printerSettings.CreateDeviceContext(this); try { int dpiX = Interop.Gdi32.GetDeviceCaps(new HandleRef(dc, dc.Hdc), Interop.Gdi32.DeviceCapability.LOGPIXELSX); int hardMarginX_DU = Interop.Gdi32.GetDeviceCaps(new HandleRef(dc, dc.Hdc), Interop.Gdi32.DeviceCapability.PHYSICALOFFSETX); hardMarginX = hardMarginX_DU * 100 / dpiX; } finally { dc.Dispose(); } return hardMarginX; } } /// <summary> /// Returns the y dimension of the hard margin. /// </summary> public float HardMarginY { get { float hardMarginY = 0; DeviceContext dc = printerSettings.CreateDeviceContext(this); try { int dpiY = Interop.Gdi32.GetDeviceCaps(new HandleRef(dc, dc.Hdc), Interop.Gdi32.DeviceCapability.LOGPIXELSY); int hardMarginY_DU = Interop.Gdi32.GetDeviceCaps(new HandleRef(dc, dc.Hdc), Interop.Gdi32.DeviceCapability.PHYSICALOFFSETY); hardMarginY = hardMarginY_DU * 100 / dpiY; } finally { dc.Dispose(); } return hardMarginY; } } /// <summary> /// Gets or sets a value indicating whether the page should be printed in landscape or portrait orientation. /// </summary> public bool Landscape { get { if (_landscape.IsDefault) return printerSettings.GetModeField(ModeField.Orientation, SafeNativeMethods.DMORIENT_PORTRAIT) == SafeNativeMethods.DMORIENT_LANDSCAPE; else return (bool)_landscape; } set { _landscape = value; } } /// <summary> /// Gets or sets a value indicating the margins for this page. /// </summary> public Margins Margins { get { return _margins; } set { _margins = value; } } /// <summary> /// Gets or sets the paper size. 
/// </summary> public PaperSize PaperSize { get { return GetPaperSize(IntPtr.Zero); } set { _paperSize = value; } } /// <summary> /// Gets or sets a value indicating the paper source (i.e. upper bin). /// </summary> public PaperSource PaperSource { get { if (_paperSource == null) { IntPtr modeHandle = printerSettings.GetHdevmode(); IntPtr modePointer = Interop.Kernel32.GlobalLock(new HandleRef(this, modeHandle)); Interop.Gdi32.DEVMODE mode = Marshal.PtrToStructure<Interop.Gdi32.DEVMODE>(modePointer)!; PaperSource result = PaperSourceFromMode(mode); Interop.Kernel32.GlobalUnlock(new HandleRef(this, modeHandle)); Interop.Kernel32.GlobalFree(new HandleRef(this, modeHandle)); return result; } else return _paperSource; } set { _paperSource = value; } } /// <summary> /// Gets the PrintableArea for the printer. Units = 100ths of an inch. /// </summary> public RectangleF PrintableArea { get { RectangleF printableArea = default; DeviceContext dc = printerSettings.CreateInformationContext(this); HandleRef hdc = new HandleRef(dc, dc.Hdc); try { int dpiX = Interop.Gdi32.GetDeviceCaps(hdc, Interop.Gdi32.DeviceCapability.LOGPIXELSX); int dpiY = Interop.Gdi32.GetDeviceCaps(hdc, Interop.Gdi32.DeviceCapability.LOGPIXELSY); if (!Landscape) { // // Need to convert the printable area to 100th of an inch from the device units printableArea.X = (float)Interop.Gdi32.GetDeviceCaps(hdc, Interop.Gdi32.DeviceCapability.PHYSICALOFFSETX) * 100 / dpiX; printableArea.Y = (float)Interop.Gdi32.GetDeviceCaps(hdc, Interop.Gdi32.DeviceCapability.PHYSICALOFFSETY) * 100 / dpiY; printableArea.Width = (float)Interop.Gdi32.GetDeviceCaps(hdc, Interop.Gdi32.DeviceCapability.HORZRES) * 100 / dpiX; printableArea.Height = (float)Interop.Gdi32.GetDeviceCaps(hdc, Interop.Gdi32.DeviceCapability.VERTRES) * 100 / dpiY; } else { // // Need to convert the printable area to 100th of an inch from the device units printableArea.Y = (float)Interop.Gdi32.GetDeviceCaps(hdc, Interop.Gdi32.DeviceCapability.PHYSICALOFFSETX) * 100 / dpiX; printableArea.X = (float)Interop.Gdi32.GetDeviceCaps(hdc, Interop.Gdi32.DeviceCapability.PHYSICALOFFSETY) * 100 / dpiY; printableArea.Height = (float)Interop.Gdi32.GetDeviceCaps(hdc, Interop.Gdi32.DeviceCapability.HORZRES) * 100 / dpiX; printableArea.Width = (float)Interop.Gdi32.GetDeviceCaps(hdc, Interop.Gdi32.DeviceCapability.VERTRES) * 100 / dpiY; } } finally { dc.Dispose(); } return printableArea; } } /// <summary> /// Gets or sets the printer resolution for the page. /// </summary> public PrinterResolution PrinterResolution { get { if (_printerResolution == null) { IntPtr modeHandle = printerSettings.GetHdevmode(); IntPtr modePointer = Interop.Kernel32.GlobalLock(new HandleRef(this, modeHandle)); Interop.Gdi32.DEVMODE mode = Marshal.PtrToStructure<Interop.Gdi32.DEVMODE>(modePointer)!; PrinterResolution result = PrinterResolutionFromMode(mode); Interop.Kernel32.GlobalUnlock(new HandleRef(this, modeHandle)); Interop.Kernel32.GlobalFree(new HandleRef(this, modeHandle)); return result; } else return _printerResolution; } set { _printerResolution = value; } } /// <summary> /// Gets or sets the associated printer settings. /// </summary> public PrinterSettings PrinterSettings { get { return printerSettings; } set { if (value == null) value = new PrinterSettings(); printerSettings = value; } } /// <summary> /// Copies the settings and margins. 
/// </summary> public object Clone() { PageSettings result = (PageSettings)MemberwiseClone(); result._margins = (Margins)_margins.Clone(); return result; } /// <summary> /// Copies the relevant information out of the PageSettings and into the handle. /// </summary> public void CopyToHdevmode(IntPtr hdevmode) { IntPtr modePointer = Interop.Kernel32.GlobalLock(hdevmode); Interop.Gdi32.DEVMODE mode = Marshal.PtrToStructure<Interop.Gdi32.DEVMODE>(modePointer)!; if (_color.IsNotDefault && ((mode.dmFields & SafeNativeMethods.DM_COLOR) == SafeNativeMethods.DM_COLOR)) mode.dmColor = unchecked((short)(((bool)_color) ? SafeNativeMethods.DMCOLOR_COLOR : SafeNativeMethods.DMCOLOR_MONOCHROME)); if (_landscape.IsNotDefault && ((mode.dmFields & SafeNativeMethods.DM_ORIENTATION) == SafeNativeMethods.DM_ORIENTATION)) mode.dmOrientation = unchecked((short)(((bool)_landscape) ? SafeNativeMethods.DMORIENT_LANDSCAPE : SafeNativeMethods.DMORIENT_PORTRAIT)); if (_paperSize != null) { if ((mode.dmFields & SafeNativeMethods.DM_PAPERSIZE) == SafeNativeMethods.DM_PAPERSIZE) { mode.dmPaperSize = unchecked((short)_paperSize.RawKind); } bool setWidth = false; bool setLength = false; if ((mode.dmFields & SafeNativeMethods.DM_PAPERLENGTH) == SafeNativeMethods.DM_PAPERLENGTH) { // dmPaperLength is always in tenths of millimeter but paperSizes are in hundredth of inch .. // so we need to convert :: use PrinterUnitConvert.Convert(value, PrinterUnit.TenthsOfAMillimeter /*fromUnit*/, PrinterUnit.Display /*ToUnit*/) int length = PrinterUnitConvert.Convert(_paperSize.Height, PrinterUnit.Display, PrinterUnit.TenthsOfAMillimeter); mode.dmPaperLength = unchecked((short)length); setLength = true; } if ((mode.dmFields & SafeNativeMethods.DM_PAPERWIDTH) == SafeNativeMethods.DM_PAPERWIDTH) { int width = PrinterUnitConvert.Convert(_paperSize.Width, PrinterUnit.Display, PrinterUnit.TenthsOfAMillimeter); mode.dmPaperWidth = unchecked((short)width); setWidth = true; } if (_paperSize.Kind == PaperKind.Custom) { if (!setLength) { mode.dmFields |= SafeNativeMethods.DM_PAPERLENGTH; int length = PrinterUnitConvert.Convert(_paperSize.Height, PrinterUnit.Display, PrinterUnit.TenthsOfAMillimeter); mode.dmPaperLength = unchecked((short)length); } if (!setWidth) { mode.dmFields |= SafeNativeMethods.DM_PAPERWIDTH; int width = PrinterUnitConvert.Convert(_paperSize.Width, PrinterUnit.Display, PrinterUnit.TenthsOfAMillimeter); mode.dmPaperWidth = unchecked((short)width); } } } if (_paperSource != null && ((mode.dmFields & SafeNativeMethods.DM_DEFAULTSOURCE) == SafeNativeMethods.DM_DEFAULTSOURCE)) { mode.dmDefaultSource = unchecked((short)_paperSource.RawKind); } if (_printerResolution != null) { if (_printerResolution.Kind == PrinterResolutionKind.Custom) { if ((mode.dmFields & SafeNativeMethods.DM_PRINTQUALITY) == SafeNativeMethods.DM_PRINTQUALITY) { mode.dmPrintQuality = unchecked((short)_printerResolution.X); } if ((mode.dmFields & SafeNativeMethods.DM_YRESOLUTION) == SafeNativeMethods.DM_YRESOLUTION) { mode.dmYResolution = unchecked((short)_printerResolution.Y); } } else { if ((mode.dmFields & SafeNativeMethods.DM_PRINTQUALITY) == SafeNativeMethods.DM_PRINTQUALITY) { mode.dmPrintQuality = unchecked((short)_printerResolution.Kind); } } } Marshal.StructureToPtr(mode, modePointer, false); // It's possible this page has a DEVMODE for a different printer than the DEVMODE passed in here // (Ex: occurs when Doc.DefaultPageSettings.PrinterSettings.PrinterName != Doc.PrinterSettings.PrinterName) // // if the passed in devmode has fewer bytes than our buffer 
for the extrainfo, we want to skip the merge as it will cause // a buffer overrun if (mode.dmDriverExtra >= ExtraBytes) { int retCode = Interop.Winspool.DocumentProperties(NativeMethods.NullHandleRef, NativeMethods.NullHandleRef, printerSettings.PrinterName, modePointer, modePointer, SafeNativeMethods.DM_IN_BUFFER | SafeNativeMethods.DM_OUT_BUFFER); if (retCode < 0) { Interop.Kernel32.GlobalFree(modePointer); } } Interop.Kernel32.GlobalUnlock(hdevmode); } private short ExtraBytes { get { IntPtr modeHandle = printerSettings.GetHdevmodeInternal(); IntPtr modePointer = Interop.Kernel32.GlobalLock(new HandleRef(this, modeHandle)); Interop.Gdi32.DEVMODE mode = Marshal.PtrToStructure<Interop.Gdi32.DEVMODE>(modePointer)!; short result = mode?.dmDriverExtra ?? 0; Interop.Kernel32.GlobalUnlock(new HandleRef(this, modeHandle)); Interop.Kernel32.GlobalFree(new HandleRef(this, modeHandle)); return result; } } // This function shows up big on profiles, so we need to make it fast internal Rectangle GetBounds(IntPtr modeHandle) { Rectangle pageBounds; PaperSize size = GetPaperSize(modeHandle); if (GetLandscape(modeHandle)) pageBounds = new Rectangle(0, 0, size.Height, size.Width); else pageBounds = new Rectangle(0, 0, size.Width, size.Height); return pageBounds; } private bool GetLandscape(IntPtr modeHandle) { if (_landscape.IsDefault) return printerSettings.GetModeField(ModeField.Orientation, SafeNativeMethods.DMORIENT_PORTRAIT, modeHandle) == SafeNativeMethods.DMORIENT_LANDSCAPE; else return (bool)_landscape; } private PaperSize GetPaperSize(IntPtr modeHandle) { if (_paperSize == null) { bool ownHandle = false; if (modeHandle == IntPtr.Zero) { modeHandle = printerSettings.GetHdevmode(); ownHandle = true; } IntPtr modePointer = Interop.Kernel32.GlobalLock(modeHandle); Interop.Gdi32.DEVMODE mode = Marshal.PtrToStructure<Interop.Gdi32.DEVMODE>(modePointer)!; PaperSize result = PaperSizeFromMode(mode); Interop.Kernel32.GlobalUnlock(modeHandle); if (ownHandle) { Interop.Kernel32.GlobalFree(modeHandle); } return result; } else return _paperSize; } private PaperSize PaperSizeFromMode(Interop.Gdi32.DEVMODE mode) { PaperSize[] sizes = printerSettings.Get_PaperSizes(); if ((mode.dmFields & SafeNativeMethods.DM_PAPERSIZE) == SafeNativeMethods.DM_PAPERSIZE) { for (int i = 0; i < sizes.Length; i++) { if ((int)sizes[i].RawKind == mode.dmPaperSize) return sizes[i]; } } return new PaperSize(PaperKind.Custom, "custom", //mode.dmPaperWidth, mode.dmPaperLength); PrinterUnitConvert.Convert(mode.dmPaperWidth, PrinterUnit.TenthsOfAMillimeter, PrinterUnit.Display), PrinterUnitConvert.Convert(mode.dmPaperLength, PrinterUnit.TenthsOfAMillimeter, PrinterUnit.Display)); } private PaperSource PaperSourceFromMode(Interop.Gdi32.DEVMODE mode) { PaperSource[] sources = printerSettings.Get_PaperSources(); if ((mode.dmFields & SafeNativeMethods.DM_DEFAULTSOURCE) == SafeNativeMethods.DM_DEFAULTSOURCE) { for (int i = 0; i < sources.Length; i++) { // the dmDefaultSource == to the RawKind in the Papersource.. and Not the Kind... // if the PaperSource is populated with CUSTOM values... 
if (unchecked((short)sources[i].RawKind) == mode.dmDefaultSource) { return sources[i]; } } } return new PaperSource((PaperSourceKind)mode.dmDefaultSource, "unknown"); } private PrinterResolution PrinterResolutionFromMode(Interop.Gdi32.DEVMODE mode) { PrinterResolution[] resolutions = printerSettings.Get_PrinterResolutions(); for (int i = 0; i < resolutions.Length; i++) { if (mode.dmPrintQuality >= 0 && ((mode.dmFields & SafeNativeMethods.DM_PRINTQUALITY) == SafeNativeMethods.DM_PRINTQUALITY) && ((mode.dmFields & SafeNativeMethods.DM_YRESOLUTION) == SafeNativeMethods.DM_YRESOLUTION)) { if (resolutions[i].X == unchecked((int)(PrinterResolutionKind)mode.dmPrintQuality) && resolutions[i].Y == unchecked((int)(PrinterResolutionKind)mode.dmYResolution)) return resolutions[i]; } else { if ((mode.dmFields & SafeNativeMethods.DM_PRINTQUALITY) == SafeNativeMethods.DM_PRINTQUALITY) { if (resolutions[i].Kind == (PrinterResolutionKind)mode.dmPrintQuality) return resolutions[i]; } } } return new PrinterResolution(PrinterResolutionKind.Custom, mode.dmPrintQuality, mode.dmYResolution); } /// <summary> /// Copies the relevant information out of the handle and into the PageSettings. /// </summary> public void SetHdevmode(IntPtr hdevmode) { if (hdevmode == IntPtr.Zero) { throw new ArgumentException(SR.Format(SR.InvalidPrinterHandle, hdevmode)); } IntPtr pointer = Interop.Kernel32.GlobalLock(hdevmode); Interop.Gdi32.DEVMODE mode = Marshal.PtrToStructure<Interop.Gdi32.DEVMODE>(pointer)!; if ((mode.dmFields & SafeNativeMethods.DM_COLOR) == SafeNativeMethods.DM_COLOR) { _color = (mode.dmColor == SafeNativeMethods.DMCOLOR_COLOR); } if ((mode.dmFields & SafeNativeMethods.DM_ORIENTATION) == SafeNativeMethods.DM_ORIENTATION) { _landscape = (mode.dmOrientation == SafeNativeMethods.DMORIENT_LANDSCAPE); } _paperSize = PaperSizeFromMode(mode); _paperSource = PaperSourceFromMode(mode); _printerResolution = PrinterResolutionFromMode(mode); Interop.Kernel32.GlobalUnlock(hdevmode); } /// <summary> /// Provides some interesting information about the PageSettings in String form. /// </summary> public override string ToString() => $"[{nameof(PageSettings)}: Color={Color}, Landscape={Landscape}, Margins={Margins}, PaperSize={PaperSize}, PaperSource={PaperSource}, PrinterResolution={PrinterResolution}]"; } }
-1
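The HardMarginX/HardMarginY and PrintableArea getters in the PageSettings code above all share one conversion: GDI device units times 100, divided by the axis DPI from GetDeviceCaps, gives hundredths of an inch. A standalone sketch of that arithmetic with made-up sample values (600 DPI, a 150-device-unit physical offset):

```csharp
using System;

class HardMarginSample
{
    static void Main()
    {
        int dpiX = 600;            // GetDeviceCaps(..., LOGPIXELSX) for a 600 dpi device
        int physicalOffsetX = 150; // GetDeviceCaps(..., PHYSICALOFFSETX), in device units

        // Same arithmetic as PageSettings.HardMarginX:
        // device units -> hundredths of an inch.
        float hardMarginX = physicalOffsetX * 100 / dpiX;

        Console.WriteLine(hardMarginX); // 25, i.e. a 0.25" unprintable margin
    }
}
```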
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist that categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all of the above are false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally, the tree shows that when there are multiple generator projects under "gen", a generator project's parent directory often isn't 'gen' but a directory named after the project. This led to such projects being treated incorrectly and to developers having to set the "IsGeneratorProject" property manually for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** properties are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all of the above are false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects:

- `IsReferenceAssembly`: The project's parent directory is 'ref'.
- `IsGeneratorProject`: The project's parent directory is 'gen'.
- `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'.
- `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'.
- `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false.
- `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK.
- `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK.
- `IsRuntimeAssembly`: True when all of the above are false.
- `IsSourceProject`: True when the project's parent directory is 'src'.

The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs. IsGeneratorProject). Additionally, the repository tree shows that the parent directory of a generator project often isn't 'gen' but the project's own name, in cases where multiple generator projects sit under a 'gen' directory. This led to such projects being treated incorrectly and to developers having to set the "IsGeneratorProject" property manually for such projects.

### Change

- **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref'.
- `IsGeneratorProject`: **The project's parent directory is 'gen', or the parent's parent directory is 'gen'.**
- `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'.
- `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'.
- `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** properties are false.
- `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK.
- `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK.
- `IsSourceProject`: **True when all of the above are false.**
./src/coreclr/nativeaot/Runtime/amd64/UniversalTransition.asm
;; Licensed to the .NET Foundation under one or more agreements. ;; The .NET Foundation licenses this file to you under the MIT license. include AsmMacros.inc ifdef FEATURE_DYNAMIC_CODE ifdef _DEBUG TRASH_SAVED_ARGUMENT_REGISTERS equ 1 else TRASH_SAVED_ARGUMENT_REGISTERS equ 0 endif if TRASH_SAVED_ARGUMENT_REGISTERS ne 0 EXTERN RhpIntegerTrashValues : QWORD EXTERN RhpFpTrashValues : QWORD endif ;; TRASH_SAVED_ARGUMENT_REGISTERS SIZEOF_RETADDR equ 8h SIZEOF_ALIGNMENT_PADDING equ 8h SIZEOF_RETURN_BLOCK equ 10h ; for 16 bytes of conservatively reported space that the callee can ; use to manage the return value that the call eventually generates SIZEOF_FP_REGS equ 40h ; xmm0-3 SIZEOF_OUT_REG_HOMES equ 20h ; Callee register spill ; ; From CallerSP to ChildSP, the stack frame is composed of the following adjacent regions: ; ; SIZEOF_RETADDR ; SIZEOF_ALIGNMENT_PADDING ; SIZEOF_RETURN_BLOCK ; SIZEOF_FP_REGS ; SIZEOF_OUT_REG_HOMES ; DISTANCE_FROM_CHILDSP_TO_FP_REGS equ SIZEOF_OUT_REG_HOMES DISTANCE_FROM_CHILDSP_TO_RETURN_BLOCK equ DISTANCE_FROM_CHILDSP_TO_FP_REGS + SIZEOF_FP_REGS DISTANCE_FROM_CHILDSP_TO_RETADDR equ DISTANCE_FROM_CHILDSP_TO_RETURN_BLOCK + SIZEOF_RETURN_BLOCK + SIZEOF_ALIGNMENT_PADDING DISTANCE_FROM_CHILDSP_TO_CALLERSP equ DISTANCE_FROM_CHILDSP_TO_RETADDR + SIZEOF_RETADDR .errnz DISTANCE_FROM_CHILDSP_TO_CALLERSP mod 16 ;; ;; Defines an assembly thunk used to make a transition from managed code to a callee, ;; then (based on the return value from the callee), either returning or jumping to ;; a new location while preserving the input arguments. The usage of this thunk also ;; ensures arguments passed are properly reported. ;; ;; TODO: This code currently only tailcalls, and does not return. ;; ;; Inputs: ;; rcx, rdx, r8, r9, stack space: arguments as normal ;; r10: The location of the target code the UniversalTransition thunk will call ;; r11: The only parameter to the target function (passed in rdx to callee) ;; ; ; Frame layout is: ; ; {StackPassedArgs} ChildSP+0a0 CallerSP+020 ; {IntArgRegs (rcx,rdx,r8,r9) (0x20 bytes)} ChildSP+080 CallerSP+000 ; {CallerRetaddr} ChildSP+078 CallerSP-008 ; {AlignmentPad (0x8 bytes)} ChildSP+070 CallerSP-010 ; {ReturnBlock (0x10 bytes)} ChildSP+060 CallerSP-020 ; {FpArgRegs (xmm0-xmm3) (0x40 bytes)} ChildSP+020 CallerSP-060 ; {CalleeArgumentHomes (0x20 bytes)} ChildSP+000 CallerSP-080 ; {CalleeRetaddr} ChildSP-008 CallerSP-088 ; ; NOTE: If the frame layout ever changes, the C++ UniversalTransitionStackFrame structure ; must be updated as well. ; ; NOTE: The callee receives a pointer to the base of the ReturnBlock, and the callee has ; knowledge of the exact layout of all pieces of the frame that lie at or above the pushed ; FpArgRegs. ; ; NOTE: The stack walker guarantees that conservative GC reporting will be applied to ; everything between the base of the ReturnBlock and the top of the StackPassedArgs. 
; UNIVERSAL_TRANSITION macro FunctionName NESTED_ENTRY Rhp&FunctionName, _TEXT alloc_stack DISTANCE_FROM_CHILDSP_TO_RETADDR save_reg_postrsp rcx, 0h + DISTANCE_FROM_CHILDSP_TO_CALLERSP save_reg_postrsp rdx, 8h + DISTANCE_FROM_CHILDSP_TO_CALLERSP save_reg_postrsp r8, 10h + DISTANCE_FROM_CHILDSP_TO_CALLERSP save_reg_postrsp r9, 18h + DISTANCE_FROM_CHILDSP_TO_CALLERSP save_xmm128_postrsp xmm0, DISTANCE_FROM_CHILDSP_TO_FP_REGS save_xmm128_postrsp xmm1, DISTANCE_FROM_CHILDSP_TO_FP_REGS + 10h save_xmm128_postrsp xmm2, DISTANCE_FROM_CHILDSP_TO_FP_REGS + 20h save_xmm128_postrsp xmm3, DISTANCE_FROM_CHILDSP_TO_FP_REGS + 30h END_PROLOGUE if TRASH_SAVED_ARGUMENT_REGISTERS ne 0 ; Before calling out, trash all of the argument registers except the ones (rcx, rdx) that ; hold outgoing arguments. All of these registers have been saved to the transition ; frame, and the code at the call target is required to use only the transition frame ; copies when dispatching this call to the eventual callee. movsd xmm0, mmword ptr [RhpFpTrashValues + 0h] movsd xmm1, mmword ptr [RhpFpTrashValues + 8h] movsd xmm2, mmword ptr [RhpFpTrashValues + 10h] movsd xmm3, mmword ptr [RhpFpTrashValues + 18h] mov r8, qword ptr [RhpIntegerTrashValues + 10h] mov r9, qword ptr [RhpIntegerTrashValues + 18h] endif ; TRASH_SAVED_ARGUMENT_REGISTERS ; ; Call out to the target, while storing and reporting arguments to the GC. ; mov rdx, r11 lea rcx, [rsp + DISTANCE_FROM_CHILDSP_TO_RETURN_BLOCK] call r10 EXPORT_POINTER_TO_ADDRESS PointerToReturnFrom&FunctionName ; We cannot make the label public as that tricks DIA stackwalker into thinking ; it's the beginning of a method. For this reason we export the address ; by means of an auxiliary variable. ; restore fp argument registers movdqa xmm0, [rsp + DISTANCE_FROM_CHILDSP_TO_FP_REGS ] movdqa xmm1, [rsp + DISTANCE_FROM_CHILDSP_TO_FP_REGS + 10h] movdqa xmm2, [rsp + DISTANCE_FROM_CHILDSP_TO_FP_REGS + 20h] movdqa xmm3, [rsp + DISTANCE_FROM_CHILDSP_TO_FP_REGS + 30h] ; restore integer argument registers mov rcx, [rsp + 0h + DISTANCE_FROM_CHILDSP_TO_CALLERSP] mov rdx, [rsp + 8h + DISTANCE_FROM_CHILDSP_TO_CALLERSP] mov r8, [rsp + 10h + DISTANCE_FROM_CHILDSP_TO_CALLERSP] mov r9, [rsp + 18h + DISTANCE_FROM_CHILDSP_TO_CALLERSP] ; epilog nop ; Pop the space that was allocated between the ChildSP and the caller return address. add rsp, DISTANCE_FROM_CHILDSP_TO_RETADDR TAILJMP_RAX NESTED_END Rhp&FunctionName, _TEXT endm ; To enable proper step-in behavior in the debugger, we need to have two instances ; of the thunk. For the first one, the debugger steps into the call in the function, ; for the other, it steps over it. UNIVERSAL_TRANSITION UniversalTransition UNIVERSAL_TRANSITION UniversalTransition_DebugStepTailCall endif end
;; Licensed to the .NET Foundation under one or more agreements. ;; The .NET Foundation licenses this file to you under the MIT license. include AsmMacros.inc ifdef FEATURE_DYNAMIC_CODE ifdef _DEBUG TRASH_SAVED_ARGUMENT_REGISTERS equ 1 else TRASH_SAVED_ARGUMENT_REGISTERS equ 0 endif if TRASH_SAVED_ARGUMENT_REGISTERS ne 0 EXTERN RhpIntegerTrashValues : QWORD EXTERN RhpFpTrashValues : QWORD endif ;; TRASH_SAVED_ARGUMENT_REGISTERS SIZEOF_RETADDR equ 8h SIZEOF_ALIGNMENT_PADDING equ 8h SIZEOF_RETURN_BLOCK equ 10h ; for 16 bytes of conservatively reported space that the callee can ; use to manage the return value that the call eventually generates SIZEOF_FP_REGS equ 40h ; xmm0-3 SIZEOF_OUT_REG_HOMES equ 20h ; Callee register spill ; ; From CallerSP to ChildSP, the stack frame is composed of the following adjacent regions: ; ; SIZEOF_RETADDR ; SIZEOF_ALIGNMENT_PADDING ; SIZEOF_RETURN_BLOCK ; SIZEOF_FP_REGS ; SIZEOF_OUT_REG_HOMES ; DISTANCE_FROM_CHILDSP_TO_FP_REGS equ SIZEOF_OUT_REG_HOMES DISTANCE_FROM_CHILDSP_TO_RETURN_BLOCK equ DISTANCE_FROM_CHILDSP_TO_FP_REGS + SIZEOF_FP_REGS DISTANCE_FROM_CHILDSP_TO_RETADDR equ DISTANCE_FROM_CHILDSP_TO_RETURN_BLOCK + SIZEOF_RETURN_BLOCK + SIZEOF_ALIGNMENT_PADDING DISTANCE_FROM_CHILDSP_TO_CALLERSP equ DISTANCE_FROM_CHILDSP_TO_RETADDR + SIZEOF_RETADDR .errnz DISTANCE_FROM_CHILDSP_TO_CALLERSP mod 16 ;; ;; Defines an assembly thunk used to make a transition from managed code to a callee, ;; then (based on the return value from the callee), either returning or jumping to ;; a new location while preserving the input arguments. The usage of this thunk also ;; ensures arguments passed are properly reported. ;; ;; TODO: This code currently only tailcalls, and does not return. ;; ;; Inputs: ;; rcx, rdx, r8, r9, stack space: arguments as normal ;; r10: The location of the target code the UniversalTransition thunk will call ;; r11: The only parameter to the target function (passed in rdx to callee) ;; ; ; Frame layout is: ; ; {StackPassedArgs} ChildSP+0a0 CallerSP+020 ; {IntArgRegs (rcx,rdx,r8,r9) (0x20 bytes)} ChildSP+080 CallerSP+000 ; {CallerRetaddr} ChildSP+078 CallerSP-008 ; {AlignmentPad (0x8 bytes)} ChildSP+070 CallerSP-010 ; {ReturnBlock (0x10 bytes)} ChildSP+060 CallerSP-020 ; {FpArgRegs (xmm0-xmm3) (0x40 bytes)} ChildSP+020 CallerSP-060 ; {CalleeArgumentHomes (0x20 bytes)} ChildSP+000 CallerSP-080 ; {CalleeRetaddr} ChildSP-008 CallerSP-088 ; ; NOTE: If the frame layout ever changes, the C++ UniversalTransitionStackFrame structure ; must be updated as well. ; ; NOTE: The callee receives a pointer to the base of the ReturnBlock, and the callee has ; knowledge of the exact layout of all pieces of the frame that lie at or above the pushed ; FpArgRegs. ; ; NOTE: The stack walker guarantees that conservative GC reporting will be applied to ; everything between the base of the ReturnBlock and the top of the StackPassedArgs. 
; UNIVERSAL_TRANSITION macro FunctionName NESTED_ENTRY Rhp&FunctionName, _TEXT alloc_stack DISTANCE_FROM_CHILDSP_TO_RETADDR save_reg_postrsp rcx, 0h + DISTANCE_FROM_CHILDSP_TO_CALLERSP save_reg_postrsp rdx, 8h + DISTANCE_FROM_CHILDSP_TO_CALLERSP save_reg_postrsp r8, 10h + DISTANCE_FROM_CHILDSP_TO_CALLERSP save_reg_postrsp r9, 18h + DISTANCE_FROM_CHILDSP_TO_CALLERSP save_xmm128_postrsp xmm0, DISTANCE_FROM_CHILDSP_TO_FP_REGS save_xmm128_postrsp xmm1, DISTANCE_FROM_CHILDSP_TO_FP_REGS + 10h save_xmm128_postrsp xmm2, DISTANCE_FROM_CHILDSP_TO_FP_REGS + 20h save_xmm128_postrsp xmm3, DISTANCE_FROM_CHILDSP_TO_FP_REGS + 30h END_PROLOGUE if TRASH_SAVED_ARGUMENT_REGISTERS ne 0 ; Before calling out, trash all of the argument registers except the ones (rcx, rdx) that ; hold outgoing arguments. All of these registers have been saved to the transition ; frame, and the code at the call target is required to use only the transition frame ; copies when dispatching this call to the eventual callee. movsd xmm0, mmword ptr [RhpFpTrashValues + 0h] movsd xmm1, mmword ptr [RhpFpTrashValues + 8h] movsd xmm2, mmword ptr [RhpFpTrashValues + 10h] movsd xmm3, mmword ptr [RhpFpTrashValues + 18h] mov r8, qword ptr [RhpIntegerTrashValues + 10h] mov r9, qword ptr [RhpIntegerTrashValues + 18h] endif ; TRASH_SAVED_ARGUMENT_REGISTERS ; ; Call out to the target, while storing and reporting arguments to the GC. ; mov rdx, r11 lea rcx, [rsp + DISTANCE_FROM_CHILDSP_TO_RETURN_BLOCK] call r10 EXPORT_POINTER_TO_ADDRESS PointerToReturnFrom&FunctionName ; We cannot make the label public as that tricks DIA stackwalker into thinking ; it's the beginning of a method. For this reason we export the address ; by means of an auxiliary variable. ; restore fp argument registers movdqa xmm0, [rsp + DISTANCE_FROM_CHILDSP_TO_FP_REGS ] movdqa xmm1, [rsp + DISTANCE_FROM_CHILDSP_TO_FP_REGS + 10h] movdqa xmm2, [rsp + DISTANCE_FROM_CHILDSP_TO_FP_REGS + 20h] movdqa xmm3, [rsp + DISTANCE_FROM_CHILDSP_TO_FP_REGS + 30h] ; restore integer argument registers mov rcx, [rsp + 0h + DISTANCE_FROM_CHILDSP_TO_CALLERSP] mov rdx, [rsp + 8h + DISTANCE_FROM_CHILDSP_TO_CALLERSP] mov r8, [rsp + 10h + DISTANCE_FROM_CHILDSP_TO_CALLERSP] mov r9, [rsp + 18h + DISTANCE_FROM_CHILDSP_TO_CALLERSP] ; epilog nop ; Pop the space that was allocated between the ChildSP and the caller return address. add rsp, DISTANCE_FROM_CHILDSP_TO_RETADDR TAILJMP_RAX NESTED_END Rhp&FunctionName, _TEXT endm ; To enable proper step-in behavior in the debugger, we need to have two instances ; of the thunk. For the first one, the debugger steps into the call in the function, ; for the other, it steps over it. UNIVERSAL_TRANSITION UniversalTransition UNIVERSAL_TRANSITION UniversalTransition_DebugStepTailCall endif end
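Because the `equ` arithmetic in the assembly above is easy to mis-read, here is the same computation restated as C# constants — a sketch that simply mirrors the assembly's values and checks the 16-byte alignment that the `.errnz` directive asserts. The class and member names are invented for illustration.

```csharp
using System.Diagnostics;

static class UniversalTransitionFrame
{
    // Byte sizes taken directly from the MASM 'equ' definitions above.
    const int SizeofRetAddr          = 0x08;
    const int SizeofAlignmentPadding = 0x08;
    const int SizeofReturnBlock      = 0x10; // conservatively reported scratch space
    const int SizeofFpRegs           = 0x40; // xmm0-xmm3
    const int SizeofOutRegHomes      = 0x20; // callee register spill

    public const int ChildSpToFpRegs      = SizeofOutRegHomes;               // 0x20
    public const int ChildSpToReturnBlock = ChildSpToFpRegs + SizeofFpRegs;  // 0x60
    public const int ChildSpToRetAddr     = ChildSpToReturnBlock
                                            + SizeofReturnBlock
                                            + SizeofAlignmentPadding;        // 0x78
    public const int ChildSpToCallerSp    = ChildSpToRetAddr + SizeofRetAddr; // 0x80

    // Equivalent of '.errnz DISTANCE_FROM_CHILDSP_TO_CALLERSP mod 16':
    // the frame must keep the stack 16-byte aligned at the call site.
    public static void CheckAlignment() => Debug.Assert(ChildSpToCallerSp % 16 == 0);
}
```

The computed offsets agree with the frame-layout comment in the source: the caller return address sits at ChildSP+0x78 and the caller's SP at ChildSP+0x80, so the restore sequence (`mov rcx, [rsp + 0h + DISTANCE_FROM_CHILDSP_TO_CALLERSP]`, etc.) reads the argument registers back from the caller's home slots.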
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects:

- `IsReferenceAssembly`: The project's parent directory is 'ref'.
- `IsGeneratorProject`: The project's parent directory is 'gen'.
- `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'.
- `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'.
- `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false.
- `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK.
- `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK.
- `IsRuntimeAssembly`: True when all of the above are false.
- `IsSourceProject`: True when the project's parent directory is 'src'.

The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs. IsGeneratorProject). Additionally, the repository tree shows that the parent directory of a generator project often isn't 'gen' but the project's own name, in cases where multiple generator projects sit under a 'gen' directory. This led to such projects being treated incorrectly and to developers having to set the "IsGeneratorProject" property manually for such projects.

### Change

- **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref'.
- `IsGeneratorProject`: **The project's parent directory is 'gen', or the parent's parent directory is 'gen'.**
- `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'.
- `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'.
- `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** properties are false.
- `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK.
- `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK.
- `IsSourceProject`: **True when all of the above are false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects:

- `IsReferenceAssembly`: The project's parent directory is 'ref'.
- `IsGeneratorProject`: The project's parent directory is 'gen'.
- `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'.
- `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'.
- `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false.
- `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK.
- `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK.
- `IsRuntimeAssembly`: True when all of the above are false.
- `IsSourceProject`: True when the project's parent directory is 'src'.

The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs. IsGeneratorProject). Additionally, the repository tree shows that the parent directory of a generator project often isn't 'gen' but the project's own name, in cases where multiple generator projects sit under a 'gen' directory. This led to such projects being treated incorrectly and to developers having to set the "IsGeneratorProject" property manually for such projects.

### Change

- **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref'.
- `IsGeneratorProject`: **The project's parent directory is 'gen', or the parent's parent directory is 'gen'.**
- `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'.
- `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'.
- `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** properties are false.
- `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK.
- `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK.
- `IsSourceProject`: **True when all of the above are false.**
./src/libraries/System.Runtime.Caching/ref/System.Runtime.Caching.cs
// Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. // ------------------------------------------------------------------------------ // Changes to this file must follow the https://aka.ms/api-review process. // ------------------------------------------------------------------------------ namespace System.Runtime.Caching { public abstract partial class CacheEntryChangeMonitor : System.Runtime.Caching.ChangeMonitor { protected CacheEntryChangeMonitor() { } public abstract System.Collections.ObjectModel.ReadOnlyCollection<string> CacheKeys { get; } public abstract System.DateTimeOffset LastModified { get; } public abstract string RegionName { get; } } public partial class CacheEntryRemovedArguments { public CacheEntryRemovedArguments(System.Runtime.Caching.ObjectCache source, System.Runtime.Caching.CacheEntryRemovedReason reason, System.Runtime.Caching.CacheItem cacheItem) { } public System.Runtime.Caching.CacheItem CacheItem { get { throw null; } } public System.Runtime.Caching.CacheEntryRemovedReason RemovedReason { get { throw null; } } public System.Runtime.Caching.ObjectCache Source { get { throw null; } } } public delegate void CacheEntryRemovedCallback(System.Runtime.Caching.CacheEntryRemovedArguments arguments); public enum CacheEntryRemovedReason { Removed = 0, Expired = 1, Evicted = 2, ChangeMonitorChanged = 3, CacheSpecificEviction = 4, } public partial class CacheEntryUpdateArguments { public CacheEntryUpdateArguments(System.Runtime.Caching.ObjectCache source, System.Runtime.Caching.CacheEntryRemovedReason reason, string key, string regionName) { } public string Key { get { throw null; } } public string RegionName { get { throw null; } } public System.Runtime.Caching.CacheEntryRemovedReason RemovedReason { get { throw null; } } public System.Runtime.Caching.ObjectCache Source { get { throw null; } } public System.Runtime.Caching.CacheItem UpdatedCacheItem { get { throw null; } set { } } public System.Runtime.Caching.CacheItemPolicy UpdatedCacheItemPolicy { get { throw null; } set { } } } public delegate void CacheEntryUpdateCallback(System.Runtime.Caching.CacheEntryUpdateArguments arguments); public partial class CacheItem { public CacheItem(string key) { } public CacheItem(string key, object value) { } public CacheItem(string key, object value, string regionName) { } public string Key { get { throw null; } set { } } public string RegionName { get { throw null; } set { } } public object Value { get { throw null; } set { } } } public partial class CacheItemPolicy { public CacheItemPolicy() { } public System.DateTimeOffset AbsoluteExpiration { get { throw null; } set { } } public System.Collections.ObjectModel.Collection<System.Runtime.Caching.ChangeMonitor> ChangeMonitors { get { throw null; } } public System.Runtime.Caching.CacheItemPriority Priority { get { throw null; } set { } } public System.Runtime.Caching.CacheEntryRemovedCallback RemovedCallback { get { throw null; } set { } } public System.TimeSpan SlidingExpiration { get { throw null; } set { } } public System.Runtime.Caching.CacheEntryUpdateCallback UpdateCallback { get { throw null; } set { } } } public enum CacheItemPriority { Default = 0, NotRemovable = 1, } public abstract partial class ChangeMonitor : System.IDisposable { protected ChangeMonitor() { } public bool HasChanged { get { throw null; } } public bool IsDisposed { get { throw null; } } public abstract string UniqueId { get; } public void Dispose() { } protected abstract 
void Dispose(bool disposing); protected void InitializationComplete() { } public void NotifyOnChanged(System.Runtime.Caching.OnChangedCallback onChangedCallback) { } protected void OnChanged(object state) { } } [System.FlagsAttribute] public enum DefaultCacheCapabilities { None = 0, InMemoryProvider = 1, OutOfProcessProvider = 2, CacheEntryChangeMonitors = 4, AbsoluteExpirations = 8, SlidingExpirations = 16, CacheEntryUpdateCallback = 32, CacheEntryRemovedCallback = 64, CacheRegions = 128, } public abstract partial class FileChangeMonitor : System.Runtime.Caching.ChangeMonitor { protected FileChangeMonitor() { } public abstract System.Collections.ObjectModel.ReadOnlyCollection<string> FilePaths { get; } public abstract System.DateTimeOffset LastModified { get; } } public sealed partial class HostFileChangeMonitor : System.Runtime.Caching.FileChangeMonitor { public HostFileChangeMonitor(System.Collections.Generic.IList<string> filePaths) { } public override System.Collections.ObjectModel.ReadOnlyCollection<string> FilePaths { get { throw null; } } public override System.DateTimeOffset LastModified { get { throw null; } } public override string UniqueId { get { throw null; } } protected override void Dispose(bool disposing) { } } public partial class MemoryCache : System.Runtime.Caching.ObjectCache, System.Collections.IEnumerable, System.IDisposable { public MemoryCache(string name, System.Collections.Specialized.NameValueCollection config = null) { } public MemoryCache(string name, System.Collections.Specialized.NameValueCollection config, bool ignoreConfigSection) { } public long CacheMemoryLimit { get { throw null; } } public static System.Runtime.Caching.MemoryCache Default { get { throw null; } } public override System.Runtime.Caching.DefaultCacheCapabilities DefaultCacheCapabilities { get { throw null; } } public override object this[string key] { get { throw null; } set { } } public override string Name { get { throw null; } } public long PhysicalMemoryLimit { get { throw null; } } public System.TimeSpan PollingInterval { get { throw null; } } public override bool Add(System.Runtime.Caching.CacheItem item, System.Runtime.Caching.CacheItemPolicy policy) { throw null; } public override System.Runtime.Caching.CacheItem AddOrGetExisting(System.Runtime.Caching.CacheItem item, System.Runtime.Caching.CacheItemPolicy policy) { throw null; } public override object AddOrGetExisting(string key, object value, System.DateTimeOffset absoluteExpiration, string regionName = null) { throw null; } public override object AddOrGetExisting(string key, object value, System.Runtime.Caching.CacheItemPolicy policy, string regionName = null) { throw null; } public override bool Contains(string key, string regionName = null) { throw null; } public override System.Runtime.Caching.CacheEntryChangeMonitor CreateCacheEntryChangeMonitor(System.Collections.Generic.IEnumerable<string> keys, string regionName = null) { throw null; } public void Dispose() { } public override object Get(string key, string regionName = null) { throw null; } public override System.Runtime.Caching.CacheItem GetCacheItem(string key, string regionName = null) { throw null; } public override long GetCount(string regionName = null) { throw null; } protected override System.Collections.Generic.IEnumerator<System.Collections.Generic.KeyValuePair<string, object>> GetEnumerator() { throw null; } public long GetLastSize(string regionName = null) { throw null; } public override System.Collections.Generic.IDictionary<string, object> 
GetValues(System.Collections.Generic.IEnumerable<string> keys, string regionName = null) { throw null; } public object Remove(string key, System.Runtime.Caching.CacheEntryRemovedReason reason, string regionName = null) { throw null; } public override object Remove(string key, string regionName = null) { throw null; } public override void Set(System.Runtime.Caching.CacheItem item, System.Runtime.Caching.CacheItemPolicy policy) { } public override void Set(string key, object value, System.DateTimeOffset absoluteExpiration, string regionName = null) { } public override void Set(string key, object value, System.Runtime.Caching.CacheItemPolicy policy, string regionName = null) { } System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator() { throw null; } public long Trim(int percent) { throw null; } } public abstract partial class ObjectCache : System.Collections.Generic.IEnumerable<System.Collections.Generic.KeyValuePair<string, object>>, System.Collections.IEnumerable { public static readonly System.DateTimeOffset InfiniteAbsoluteExpiration; public static readonly System.TimeSpan NoSlidingExpiration; protected ObjectCache() { } public abstract System.Runtime.Caching.DefaultCacheCapabilities DefaultCacheCapabilities { get; } public static System.IServiceProvider Host { get { throw null; } set { } } public abstract object this[string key] { get; set; } public abstract string Name { get; } public virtual bool Add(System.Runtime.Caching.CacheItem item, System.Runtime.Caching.CacheItemPolicy policy) { throw null; } public virtual bool Add(string key, object value, System.DateTimeOffset absoluteExpiration, string regionName = null) { throw null; } public virtual bool Add(string key, object value, System.Runtime.Caching.CacheItemPolicy policy, string regionName = null) { throw null; } public abstract System.Runtime.Caching.CacheItem AddOrGetExisting(System.Runtime.Caching.CacheItem value, System.Runtime.Caching.CacheItemPolicy policy); public abstract object AddOrGetExisting(string key, object value, System.DateTimeOffset absoluteExpiration, string regionName = null); public abstract object AddOrGetExisting(string key, object value, System.Runtime.Caching.CacheItemPolicy policy, string regionName = null); public abstract bool Contains(string key, string regionName = null); public abstract System.Runtime.Caching.CacheEntryChangeMonitor CreateCacheEntryChangeMonitor(System.Collections.Generic.IEnumerable<string> keys, string regionName = null); public abstract object Get(string key, string regionName = null); public abstract System.Runtime.Caching.CacheItem GetCacheItem(string key, string regionName = null); public abstract long GetCount(string regionName = null); protected abstract System.Collections.Generic.IEnumerator<System.Collections.Generic.KeyValuePair<string, object>> GetEnumerator(); public abstract System.Collections.Generic.IDictionary<string, object> GetValues(System.Collections.Generic.IEnumerable<string> keys, string regionName = null); public virtual System.Collections.Generic.IDictionary<string, object> GetValues(string regionName, params string[] keys) { throw null; } public abstract object Remove(string key, string regionName = null); public abstract void Set(System.Runtime.Caching.CacheItem item, System.Runtime.Caching.CacheItemPolicy policy); public abstract void Set(string key, object value, System.DateTimeOffset absoluteExpiration, string regionName = null); public abstract void Set(string key, object value, System.Runtime.Caching.CacheItemPolicy policy, 
string regionName = null); System.Collections.Generic.IEnumerator<System.Collections.Generic.KeyValuePair<string, object>> System.Collections.Generic.IEnumerable<System.Collections.Generic.KeyValuePair<System.String,System.Object>>.GetEnumerator() { throw null; } System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator() { throw null; } } public delegate void OnChangedCallback(object state); } namespace System.Runtime.Caching.Hosting { public partial interface IApplicationIdentifier { string GetApplicationId(); } public partial interface IFileChangeNotificationSystem { void StartMonitoring(string filePath, System.Runtime.Caching.OnChangedCallback onChangedCallback, out object state, out System.DateTimeOffset lastWriteTime, out long fileSize); void StopMonitoring(string filePath, object state); } public partial interface IMemoryCacheManager { void ReleaseCache(System.Runtime.Caching.MemoryCache cache); void UpdateCacheSize(long size, System.Runtime.Caching.MemoryCache cache); } }
// Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. // ------------------------------------------------------------------------------ // Changes to this file must follow the https://aka.ms/api-review process. // ------------------------------------------------------------------------------ namespace System.Runtime.Caching { public abstract partial class CacheEntryChangeMonitor : System.Runtime.Caching.ChangeMonitor { protected CacheEntryChangeMonitor() { } public abstract System.Collections.ObjectModel.ReadOnlyCollection<string> CacheKeys { get; } public abstract System.DateTimeOffset LastModified { get; } public abstract string RegionName { get; } } public partial class CacheEntryRemovedArguments { public CacheEntryRemovedArguments(System.Runtime.Caching.ObjectCache source, System.Runtime.Caching.CacheEntryRemovedReason reason, System.Runtime.Caching.CacheItem cacheItem) { } public System.Runtime.Caching.CacheItem CacheItem { get { throw null; } } public System.Runtime.Caching.CacheEntryRemovedReason RemovedReason { get { throw null; } } public System.Runtime.Caching.ObjectCache Source { get { throw null; } } } public delegate void CacheEntryRemovedCallback(System.Runtime.Caching.CacheEntryRemovedArguments arguments); public enum CacheEntryRemovedReason { Removed = 0, Expired = 1, Evicted = 2, ChangeMonitorChanged = 3, CacheSpecificEviction = 4, } public partial class CacheEntryUpdateArguments { public CacheEntryUpdateArguments(System.Runtime.Caching.ObjectCache source, System.Runtime.Caching.CacheEntryRemovedReason reason, string key, string regionName) { } public string Key { get { throw null; } } public string RegionName { get { throw null; } } public System.Runtime.Caching.CacheEntryRemovedReason RemovedReason { get { throw null; } } public System.Runtime.Caching.ObjectCache Source { get { throw null; } } public System.Runtime.Caching.CacheItem UpdatedCacheItem { get { throw null; } set { } } public System.Runtime.Caching.CacheItemPolicy UpdatedCacheItemPolicy { get { throw null; } set { } } } public delegate void CacheEntryUpdateCallback(System.Runtime.Caching.CacheEntryUpdateArguments arguments); public partial class CacheItem { public CacheItem(string key) { } public CacheItem(string key, object value) { } public CacheItem(string key, object value, string regionName) { } public string Key { get { throw null; } set { } } public string RegionName { get { throw null; } set { } } public object Value { get { throw null; } set { } } } public partial class CacheItemPolicy { public CacheItemPolicy() { } public System.DateTimeOffset AbsoluteExpiration { get { throw null; } set { } } public System.Collections.ObjectModel.Collection<System.Runtime.Caching.ChangeMonitor> ChangeMonitors { get { throw null; } } public System.Runtime.Caching.CacheItemPriority Priority { get { throw null; } set { } } public System.Runtime.Caching.CacheEntryRemovedCallback RemovedCallback { get { throw null; } set { } } public System.TimeSpan SlidingExpiration { get { throw null; } set { } } public System.Runtime.Caching.CacheEntryUpdateCallback UpdateCallback { get { throw null; } set { } } } public enum CacheItemPriority { Default = 0, NotRemovable = 1, } public abstract partial class ChangeMonitor : System.IDisposable { protected ChangeMonitor() { } public bool HasChanged { get { throw null; } } public bool IsDisposed { get { throw null; } } public abstract string UniqueId { get; } public void Dispose() { } protected abstract 
void Dispose(bool disposing); protected void InitializationComplete() { } public void NotifyOnChanged(System.Runtime.Caching.OnChangedCallback onChangedCallback) { } protected void OnChanged(object state) { } } [System.FlagsAttribute] public enum DefaultCacheCapabilities { None = 0, InMemoryProvider = 1, OutOfProcessProvider = 2, CacheEntryChangeMonitors = 4, AbsoluteExpirations = 8, SlidingExpirations = 16, CacheEntryUpdateCallback = 32, CacheEntryRemovedCallback = 64, CacheRegions = 128, } public abstract partial class FileChangeMonitor : System.Runtime.Caching.ChangeMonitor { protected FileChangeMonitor() { } public abstract System.Collections.ObjectModel.ReadOnlyCollection<string> FilePaths { get; } public abstract System.DateTimeOffset LastModified { get; } } public sealed partial class HostFileChangeMonitor : System.Runtime.Caching.FileChangeMonitor { public HostFileChangeMonitor(System.Collections.Generic.IList<string> filePaths) { } public override System.Collections.ObjectModel.ReadOnlyCollection<string> FilePaths { get { throw null; } } public override System.DateTimeOffset LastModified { get { throw null; } } public override string UniqueId { get { throw null; } } protected override void Dispose(bool disposing) { } } public partial class MemoryCache : System.Runtime.Caching.ObjectCache, System.Collections.IEnumerable, System.IDisposable { public MemoryCache(string name, System.Collections.Specialized.NameValueCollection config = null) { } public MemoryCache(string name, System.Collections.Specialized.NameValueCollection config, bool ignoreConfigSection) { } public long CacheMemoryLimit { get { throw null; } } public static System.Runtime.Caching.MemoryCache Default { get { throw null; } } public override System.Runtime.Caching.DefaultCacheCapabilities DefaultCacheCapabilities { get { throw null; } } public override object this[string key] { get { throw null; } set { } } public override string Name { get { throw null; } } public long PhysicalMemoryLimit { get { throw null; } } public System.TimeSpan PollingInterval { get { throw null; } } public override bool Add(System.Runtime.Caching.CacheItem item, System.Runtime.Caching.CacheItemPolicy policy) { throw null; } public override System.Runtime.Caching.CacheItem AddOrGetExisting(System.Runtime.Caching.CacheItem item, System.Runtime.Caching.CacheItemPolicy policy) { throw null; } public override object AddOrGetExisting(string key, object value, System.DateTimeOffset absoluteExpiration, string regionName = null) { throw null; } public override object AddOrGetExisting(string key, object value, System.Runtime.Caching.CacheItemPolicy policy, string regionName = null) { throw null; } public override bool Contains(string key, string regionName = null) { throw null; } public override System.Runtime.Caching.CacheEntryChangeMonitor CreateCacheEntryChangeMonitor(System.Collections.Generic.IEnumerable<string> keys, string regionName = null) { throw null; } public void Dispose() { } public override object Get(string key, string regionName = null) { throw null; } public override System.Runtime.Caching.CacheItem GetCacheItem(string key, string regionName = null) { throw null; } public override long GetCount(string regionName = null) { throw null; } protected override System.Collections.Generic.IEnumerator<System.Collections.Generic.KeyValuePair<string, object>> GetEnumerator() { throw null; } public long GetLastSize(string regionName = null) { throw null; } public override System.Collections.Generic.IDictionary<string, object> 
GetValues(System.Collections.Generic.IEnumerable<string> keys, string regionName = null) { throw null; } public object Remove(string key, System.Runtime.Caching.CacheEntryRemovedReason reason, string regionName = null) { throw null; } public override object Remove(string key, string regionName = null) { throw null; } public override void Set(System.Runtime.Caching.CacheItem item, System.Runtime.Caching.CacheItemPolicy policy) { } public override void Set(string key, object value, System.DateTimeOffset absoluteExpiration, string regionName = null) { } public override void Set(string key, object value, System.Runtime.Caching.CacheItemPolicy policy, string regionName = null) { } System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator() { throw null; } public long Trim(int percent) { throw null; } } public abstract partial class ObjectCache : System.Collections.Generic.IEnumerable<System.Collections.Generic.KeyValuePair<string, object>>, System.Collections.IEnumerable { public static readonly System.DateTimeOffset InfiniteAbsoluteExpiration; public static readonly System.TimeSpan NoSlidingExpiration; protected ObjectCache() { } public abstract System.Runtime.Caching.DefaultCacheCapabilities DefaultCacheCapabilities { get; } public static System.IServiceProvider Host { get { throw null; } set { } } public abstract object this[string key] { get; set; } public abstract string Name { get; } public virtual bool Add(System.Runtime.Caching.CacheItem item, System.Runtime.Caching.CacheItemPolicy policy) { throw null; } public virtual bool Add(string key, object value, System.DateTimeOffset absoluteExpiration, string regionName = null) { throw null; } public virtual bool Add(string key, object value, System.Runtime.Caching.CacheItemPolicy policy, string regionName = null) { throw null; } public abstract System.Runtime.Caching.CacheItem AddOrGetExisting(System.Runtime.Caching.CacheItem value, System.Runtime.Caching.CacheItemPolicy policy); public abstract object AddOrGetExisting(string key, object value, System.DateTimeOffset absoluteExpiration, string regionName = null); public abstract object AddOrGetExisting(string key, object value, System.Runtime.Caching.CacheItemPolicy policy, string regionName = null); public abstract bool Contains(string key, string regionName = null); public abstract System.Runtime.Caching.CacheEntryChangeMonitor CreateCacheEntryChangeMonitor(System.Collections.Generic.IEnumerable<string> keys, string regionName = null); public abstract object Get(string key, string regionName = null); public abstract System.Runtime.Caching.CacheItem GetCacheItem(string key, string regionName = null); public abstract long GetCount(string regionName = null); protected abstract System.Collections.Generic.IEnumerator<System.Collections.Generic.KeyValuePair<string, object>> GetEnumerator(); public abstract System.Collections.Generic.IDictionary<string, object> GetValues(System.Collections.Generic.IEnumerable<string> keys, string regionName = null); public virtual System.Collections.Generic.IDictionary<string, object> GetValues(string regionName, params string[] keys) { throw null; } public abstract object Remove(string key, string regionName = null); public abstract void Set(System.Runtime.Caching.CacheItem item, System.Runtime.Caching.CacheItemPolicy policy); public abstract void Set(string key, object value, System.DateTimeOffset absoluteExpiration, string regionName = null); public abstract void Set(string key, object value, System.Runtime.Caching.CacheItemPolicy policy, 
string regionName = null); System.Collections.Generic.IEnumerator<System.Collections.Generic.KeyValuePair<string, object>> System.Collections.Generic.IEnumerable<System.Collections.Generic.KeyValuePair<System.String,System.Object>>.GetEnumerator() { throw null; } System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator() { throw null; } } public delegate void OnChangedCallback(object state); } namespace System.Runtime.Caching.Hosting { public partial interface IApplicationIdentifier { string GetApplicationId(); } public partial interface IFileChangeNotificationSystem { void StartMonitoring(string filePath, System.Runtime.Caching.OnChangedCallback onChangedCallback, out object state, out System.DateTimeOffset lastWriteTime, out long fileSize); void StopMonitoring(string filePath, object state); } public partial interface IMemoryCacheManager { void ReleaseCache(System.Runtime.Caching.MemoryCache cache); void UpdateCacheSize(long size, System.Runtime.Caching.MemoryCache cache); } }
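The reference assembly above only declares the API shape (every body is `throw null`); the behavior ships in the implementation assembly. As a quick orientation to that surface, here is a small, conventional usage example of `MemoryCache` built solely from members declared above.

```csharp
using System;
using System.Runtime.Caching;

class CacheDemo
{
    static void Main()
    {
        ObjectCache cache = MemoryCache.Default;

        var policy = new CacheItemPolicy
        {
            // The entry expires if it is not accessed for five minutes.
            SlidingExpiration = TimeSpan.FromMinutes(5),
            Priority = CacheItemPriority.Default,
        };

        // AddOrGetExisting returns the existing value, or null if the key was absent.
        object existing = cache.AddOrGetExisting("greeting", "hello", policy);
        Console.WriteLine(existing ?? cache["greeting"]); // prints "hello"

        cache.Remove("greeting");
    }
}
```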
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects:

- `IsReferenceAssembly`: The project's parent directory is 'ref'.
- `IsGeneratorProject`: The project's parent directory is 'gen'.
- `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'.
- `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'.
- `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false.
- `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK.
- `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK.
- `IsRuntimeAssembly`: True when all of the above are false.
- `IsSourceProject`: True when the project's parent directory is 'src'.

The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs. IsGeneratorProject). Additionally, the repository tree shows that the parent directory of a generator project often isn't 'gen' but the project's own name, in cases where multiple generator projects sit under a 'gen' directory. This led to such projects being treated incorrectly and to developers having to set the "IsGeneratorProject" property manually for such projects.

### Change

- **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref'.
- `IsGeneratorProject`: **The project's parent directory is 'gen', or the parent's parent directory is 'gen'.**
- `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'.
- `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'.
- `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** properties are false.
- `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK.
- `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK.
- `IsSourceProject`: **True when all of the above are false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects:

- `IsReferenceAssembly`: The project's parent directory is 'ref'.
- `IsGeneratorProject`: The project's parent directory is 'gen'.
- `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'.
- `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'.
- `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false.
- `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK.
- `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK.
- `IsRuntimeAssembly`: True when all of the above are false.
- `IsSourceProject`: True when the project's parent directory is 'src'.

The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs. IsGeneratorProject). Additionally, the repository tree shows that the parent directory of a generator project often isn't 'gen' but the project's own name, in cases where multiple generator projects sit under a 'gen' directory. This led to such projects being treated incorrectly and to developers having to set the "IsGeneratorProject" property manually for such projects.

### Change

- **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref'.
- `IsGeneratorProject`: **The project's parent directory is 'gen', or the parent's parent directory is 'gen'.**
- `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'.
- `IsTrimmingTestProject`: Same as IsTestProject, but the project name's suffix is '.TrimmingTests'.
- `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** properties are false.
- `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK.
- `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK.
- `IsSourceProject`: **True when all of the above are false.**
./src/libraries/System.Text.Json/src/System/Text/Json/Document/JsonDocument.Parse.cs
// Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. using System.Buffers; using System.Diagnostics; using System.Diagnostics.CodeAnalysis; using System.IO; using System.Threading; using System.Threading.Tasks; namespace System.Text.Json { public sealed partial class JsonDocument { // Cached unrented documents for literal values. private static JsonDocument? s_nullLiteral; private static JsonDocument? s_trueLiteral; private static JsonDocument? s_falseLiteral; private const int UnseekableStreamInitialRentSize = 4096; /// <summary> /// Parse memory as UTF-8-encoded text representing a single JSON value into a JsonDocument. /// </summary> /// <remarks> /// <para> /// The <see cref="ReadOnlyMemory{T}"/> value will be used for the entire lifetime of the /// JsonDocument object, and the caller must ensure that the data therein does not change during /// the object lifetime. /// </para> /// /// <para> /// Because the input is considered to be text, a UTF-8 Byte-Order-Mark (BOM) must not be present. /// </para> /// </remarks> /// <param name="utf8Json">JSON text to parse.</param> /// <param name="options">Options to control the reader behavior during parsing.</param> /// <returns> /// A JsonDocument representation of the JSON value. /// </returns> /// <exception cref="JsonException"> /// <paramref name="utf8Json"/> does not represent a valid single JSON value. /// </exception> /// <exception cref="ArgumentException"> /// <paramref name="options"/> contains unsupported options. /// </exception> public static JsonDocument Parse(ReadOnlyMemory<byte> utf8Json, JsonDocumentOptions options = default) { return Parse(utf8Json, options.GetReaderOptions()); } /// <summary> /// Parse a sequence as UTF-8-encoded text representing a single JSON value into a JsonDocument. /// </summary> /// <remarks> /// <para> /// The <see cref="ReadOnlySequence{T}"/> may be used for the entire lifetime of the /// JsonDocument object, and the caller must ensure that the data therein does not change during /// the object lifetime. /// </para> /// /// <para> /// Because the input is considered to be text, a UTF-8 Byte-Order-Mark (BOM) must not be present. /// </para> /// </remarks> /// <param name="utf8Json">JSON text to parse.</param> /// <param name="options">Options to control the reader behavior during parsing.</param> /// <returns> /// A JsonDocument representation of the JSON value. /// </returns> /// <exception cref="JsonException"> /// <paramref name="utf8Json"/> does not represent a valid single JSON value. /// </exception> /// <exception cref="ArgumentException"> /// <paramref name="options"/> contains unsupported options. /// </exception> public static JsonDocument Parse(ReadOnlySequence<byte> utf8Json, JsonDocumentOptions options = default) { JsonReaderOptions readerOptions = options.GetReaderOptions(); if (utf8Json.IsSingleSegment) { return Parse(utf8Json.First, readerOptions); } int length = checked((int)utf8Json.Length); byte[] utf8Bytes = ArrayPool<byte>.Shared.Rent(length); try { utf8Json.CopyTo(utf8Bytes.AsSpan()); return Parse(utf8Bytes.AsMemory(0, length), readerOptions, utf8Bytes); } catch { // Holds document content, clear it before returning it. utf8Bytes.AsSpan(0, length).Clear(); ArrayPool<byte>.Shared.Return(utf8Bytes); throw; } } /// <summary> /// Parse a <see cref="Stream"/> as UTF-8-encoded data representing a single JSON value into a /// JsonDocument. The Stream will be read to completion. 
/// </summary> /// <param name="utf8Json">JSON data to parse.</param> /// <param name="options">Options to control the reader behavior during parsing.</param> /// <returns> /// A JsonDocument representation of the JSON value. /// </returns> /// <exception cref="JsonException"> /// <paramref name="utf8Json"/> does not represent a valid single JSON value. /// </exception> /// <exception cref="ArgumentException"> /// <paramref name="options"/> contains unsupported options. /// </exception> public static JsonDocument Parse(Stream utf8Json!!, JsonDocumentOptions options = default) { ArraySegment<byte> drained = ReadToEnd(utf8Json); Debug.Assert(drained.Array != null); try { return Parse(drained.AsMemory(), options.GetReaderOptions(), drained.Array); } catch { // Holds document content, clear it before returning it. drained.AsSpan().Clear(); ArrayPool<byte>.Shared.Return(drained.Array); throw; } } internal static JsonDocument ParseRented(PooledByteBufferWriter utf8Json, JsonDocumentOptions options = default) { return Parse( utf8Json.WrittenMemory, options.GetReaderOptions(), extraRentedArrayPoolBytes: null, extraPooledByteBufferWriter: utf8Json); } internal static JsonDocument ParseValue(Stream utf8Json, JsonDocumentOptions options) { Debug.Assert(utf8Json != null); ArraySegment<byte> drained = ReadToEnd(utf8Json); Debug.Assert(drained.Array != null); byte[] owned = new byte[drained.Count]; Buffer.BlockCopy(drained.Array, 0, owned, 0, drained.Count); // Holds document content, clear it before returning it. drained.AsSpan().Clear(); ArrayPool<byte>.Shared.Return(drained.Array); return ParseUnrented(owned.AsMemory(), options.GetReaderOptions()); } internal static JsonDocument ParseValue(ReadOnlySpan<byte> utf8Json, JsonDocumentOptions options) { Debug.Assert(utf8Json != null); byte[] owned = new byte[utf8Json.Length]; utf8Json.CopyTo(owned); return ParseUnrented(owned.AsMemory(), options.GetReaderOptions()); } internal static JsonDocument ParseValue(string json, JsonDocumentOptions options) { Debug.Assert(json != null); return ParseValue(json.AsMemory(), options); } /// <summary> /// Parse a <see cref="Stream"/> as UTF-8-encoded data representing a single JSON value into a /// JsonDocument. The Stream will be read to completion. /// </summary> /// <param name="utf8Json">JSON data to parse.</param> /// <param name="options">Options to control the reader behavior during parsing.</param> /// <param name="cancellationToken">The token to monitor for cancellation requests.</param> /// <returns> /// A Task to produce a JsonDocument representation of the JSON value. /// </returns> /// <exception cref="JsonException"> /// <paramref name="utf8Json"/> does not represent a valid single JSON value. /// </exception> /// <exception cref="ArgumentException"> /// <paramref name="options"/> contains unsupported options. /// </exception> public static Task<JsonDocument> ParseAsync( Stream utf8Json!!, JsonDocumentOptions options = default, CancellationToken cancellationToken = default) { return ParseAsyncCore(utf8Json, options, cancellationToken); } private static async Task<JsonDocument> ParseAsyncCore( Stream utf8Json, JsonDocumentOptions options = default, CancellationToken cancellationToken = default) { ArraySegment<byte> drained = await ReadToEndAsync(utf8Json, cancellationToken).ConfigureAwait(false); Debug.Assert(drained.Array != null); try { return Parse(drained.AsMemory(), options.GetReaderOptions(), drained.Array); } catch { // Holds document content, clear it before returning it. 
drained.AsSpan().Clear(); ArrayPool<byte>.Shared.Return(drained.Array); throw; } } /// <summary> /// Parses text representing a single JSON value into a JsonDocument. /// </summary> /// <remarks> /// The <see cref="ReadOnlyMemory{T}"/> value may be used for the entire lifetime of the /// JsonDocument object, and the caller must ensure that the data therein does not change during /// the object lifetime. /// </remarks> /// <param name="json">JSON text to parse.</param> /// <param name="options">Options to control the reader behavior during parsing.</param> /// <returns> /// A JsonDocument representation of the JSON value. /// </returns> /// <exception cref="JsonException"> /// <paramref name="json"/> does not represent a valid single JSON value. /// </exception> /// <exception cref="ArgumentException"> /// <paramref name="options"/> contains unsupported options. /// </exception> public static JsonDocument Parse([StringSyntax(StringSyntaxAttribute.Json)] ReadOnlyMemory<char> json, JsonDocumentOptions options = default) { ReadOnlySpan<char> jsonChars = json.Span; int expectedByteCount = JsonReaderHelper.GetUtf8ByteCount(jsonChars); byte[] utf8Bytes = ArrayPool<byte>.Shared.Rent(expectedByteCount); try { int actualByteCount = JsonReaderHelper.GetUtf8FromText(jsonChars, utf8Bytes); Debug.Assert(expectedByteCount == actualByteCount); return Parse( utf8Bytes.AsMemory(0, actualByteCount), options.GetReaderOptions(), utf8Bytes); } catch { // Holds document content, clear it before returning it. utf8Bytes.AsSpan(0, expectedByteCount).Clear(); ArrayPool<byte>.Shared.Return(utf8Bytes); throw; } } internal static JsonDocument ParseValue(ReadOnlyMemory<char> json, JsonDocumentOptions options) { ReadOnlySpan<char> jsonChars = json.Span; int expectedByteCount = JsonReaderHelper.GetUtf8ByteCount(jsonChars); byte[] owned; byte[] utf8Bytes = ArrayPool<byte>.Shared.Rent(expectedByteCount); try { int actualByteCount = JsonReaderHelper.GetUtf8FromText(jsonChars, utf8Bytes); Debug.Assert(expectedByteCount == actualByteCount); owned = new byte[actualByteCount]; Buffer.BlockCopy(utf8Bytes, 0, owned, 0, actualByteCount); } finally { // Holds document content, clear it before returning it. utf8Bytes.AsSpan(0, expectedByteCount).Clear(); ArrayPool<byte>.Shared.Return(utf8Bytes); } return ParseUnrented(owned.AsMemory(), options.GetReaderOptions()); } /// <summary> /// Parses text representing a single JSON value into a JsonDocument. /// </summary> /// <param name="json">JSON text to parse.</param> /// <param name="options">Options to control the reader behavior during parsing.</param> /// <returns> /// A JsonDocument representation of the JSON value. /// </returns> /// <exception cref="JsonException"> /// <paramref name="json"/> does not represent a valid single JSON value. /// </exception> /// <exception cref="ArgumentException"> /// <paramref name="options"/> contains unsupported options. /// </exception> public static JsonDocument Parse([StringSyntax(StringSyntaxAttribute.Json)] string json!!, JsonDocumentOptions options = default) { return Parse(json.AsMemory(), options); } /// <summary> /// Attempts to parse one JSON value (including objects or arrays) from the provided reader. /// </summary> /// <param name="reader">The reader to read.</param> /// <param name="document">Receives the parsed document.</param> /// <returns> /// <see langword="true"/> if a value was read and parsed into a JsonDocument, /// <see langword="false"/> if the reader ran out of data while parsing. 
/// All other situations result in an exception being thrown. /// </returns> /// <remarks> /// <para> /// If the <see cref="Utf8JsonReader.TokenType"/> property of <paramref name="reader"/> /// is <see cref="JsonTokenType.PropertyName"/> or <see cref="JsonTokenType.None"/>, the /// reader will be advanced by one call to <see cref="Utf8JsonReader.Read"/> to determine /// the start of the value. /// </para> /// /// <para> /// Upon completion of this method, <paramref name="reader"/> will be positioned at the /// final token in the JSON value. If an exception is thrown, or <see langword="false"/> /// is returned, the reader is reset to the state it was in when the method was called. /// </para> /// /// <para> /// This method makes a copy of the data the reader acted on, so there is no caller /// requirement to maintain data integrity beyond the return of this method. /// </para> /// </remarks> /// <exception cref="ArgumentException"> /// <paramref name="reader"/> is using unsupported options. /// </exception> /// <exception cref="ArgumentException"> /// The current <paramref name="reader"/> token does not start or represent a value. /// </exception> /// <exception cref="JsonException"> /// A value could not be read from the reader. /// </exception> public static bool TryParseValue(ref Utf8JsonReader reader, [NotNullWhen(true)] out JsonDocument? document) { return TryParseValue(ref reader, out document, shouldThrow: false, useArrayPools: true); } /// <summary> /// Parses one JSON value (including objects or arrays) from the provided reader. /// </summary> /// <param name="reader">The reader to read.</param> /// <returns> /// A JsonDocument representing the value (and nested values) read from the reader. /// </returns> /// <remarks> /// <para> /// If the <see cref="Utf8JsonReader.TokenType"/> property of <paramref name="reader"/> /// is <see cref="JsonTokenType.PropertyName"/> or <see cref="JsonTokenType.None"/>, the /// reader will be advanced by one call to <see cref="Utf8JsonReader.Read"/> to determine /// the start of the value. /// </para> /// /// <para> /// Upon completion of this method, <paramref name="reader"/> will be positioned at the /// final token in the JSON value. If an exception is thrown, the reader is reset to /// the state it was in when the method was called. /// </para> /// /// <para> /// This method makes a copy of the data the reader acted on, so there is no caller /// requirement to maintain data integrity beyond the return of this method. /// </para> /// </remarks> /// <exception cref="ArgumentException"> /// <paramref name="reader"/> is using unsupported options. /// </exception> /// <exception cref="ArgumentException"> /// The current <paramref name="reader"/> token does not start or represent a value. /// </exception> /// <exception cref="JsonException"> /// A value could not be read from the reader. /// </exception> public static JsonDocument ParseValue(ref Utf8JsonReader reader) { bool ret = TryParseValue(ref reader, out JsonDocument? document, shouldThrow: true, useArrayPools: true); Debug.Assert(ret, "TryParseValue returned false with shouldThrow: true."); Debug.Assert(document != null, "null document returned with shouldThrow: true."); return document; } internal static bool TryParseValue( ref Utf8JsonReader reader, [NotNullWhen(true)] out JsonDocument? 
document, bool shouldThrow, bool useArrayPools) { JsonReaderState state = reader.CurrentState; CheckSupportedOptions(state.Options, nameof(reader)); // Value copy to overwrite the ref on an exception and undo the destructive reads. Utf8JsonReader restore = reader; ReadOnlySpan<byte> valueSpan = default; ReadOnlySequence<byte> valueSequence = default; try { switch (reader.TokenType) { // A new reader was created and has never been read, // so we need to move to the first token. // (or a reader has terminated and we're about to throw) case JsonTokenType.None: // Using a reader loop the caller has identified a property they wish to // hydrate into a JsonDocument. Move to the value first. case JsonTokenType.PropertyName: { if (!reader.Read()) { if (shouldThrow) { ThrowHelper.ThrowJsonReaderException( ref reader, ExceptionResource.ExpectedJsonTokens); } reader = restore; document = null; return false; } break; } } switch (reader.TokenType) { // Any of the "value start" states are acceptable. case JsonTokenType.StartObject: case JsonTokenType.StartArray: { long startingOffset = reader.TokenStartIndex; if (!reader.TrySkip()) { if (shouldThrow) { ThrowHelper.ThrowJsonReaderException( ref reader, ExceptionResource.ExpectedJsonTokens); } reader = restore; document = null; return false; } long totalLength = reader.BytesConsumed - startingOffset; ReadOnlySequence<byte> sequence = reader.OriginalSequence; if (sequence.IsEmpty) { valueSpan = reader.OriginalSpan.Slice( checked((int)startingOffset), checked((int)totalLength)); } else { valueSequence = sequence.Slice(startingOffset, totalLength); } Debug.Assert( reader.TokenType == JsonTokenType.EndObject || reader.TokenType == JsonTokenType.EndArray); break; } case JsonTokenType.False: case JsonTokenType.True: case JsonTokenType.Null: if (useArrayPools) { if (reader.HasValueSequence) { valueSequence = reader.ValueSequence; } else { valueSpan = reader.ValueSpan; } break; } document = CreateForLiteral(reader.TokenType); return true; case JsonTokenType.Number: { if (reader.HasValueSequence) { valueSequence = reader.ValueSequence; } else { valueSpan = reader.ValueSpan; } break; } // String's ValueSequence/ValueSpan omits the quotes, we need them back. case JsonTokenType.String: { ReadOnlySequence<byte> sequence = reader.OriginalSequence; if (sequence.IsEmpty) { // Since the quoted string fit in a ReadOnlySpan originally // the contents length plus the two quotes can't overflow. 
int payloadLength = reader.ValueSpan.Length + 2; Debug.Assert(payloadLength > 1); ReadOnlySpan<byte> readerSpan = reader.OriginalSpan; Debug.Assert( readerSpan[(int)reader.TokenStartIndex] == (byte)'"', $"Calculated span starts with {readerSpan[(int)reader.TokenStartIndex]}"); Debug.Assert( readerSpan[(int)reader.TokenStartIndex + payloadLength - 1] == (byte)'"', $"Calculated span ends with {readerSpan[(int)reader.TokenStartIndex + payloadLength - 1]}"); valueSpan = readerSpan.Slice((int)reader.TokenStartIndex, payloadLength); } else { long payloadLength = 2; if (reader.HasValueSequence) { payloadLength += reader.ValueSequence.Length; } else { payloadLength += reader.ValueSpan.Length; } valueSequence = sequence.Slice(reader.TokenStartIndex, payloadLength); Debug.Assert( valueSequence.First.Span[0] == (byte)'"', $"Calculated sequence starts with {valueSequence.First.Span[0]}"); Debug.Assert( valueSequence.ToArray()[payloadLength - 1] == (byte)'"', $"Calculated sequence ends with {valueSequence.ToArray()[payloadLength - 1]}"); } break; } default: { if (shouldThrow) { // Default case would only hit if TokenType equals JsonTokenType.EndObject or JsonTokenType.EndArray in which case it would never be sequence Debug.Assert(!reader.HasValueSequence); byte displayByte = reader.ValueSpan[0]; ThrowHelper.ThrowJsonReaderException( ref reader, ExceptionResource.ExpectedStartOfValueNotFound, displayByte); } reader = restore; document = null; return false; } } } catch { reader = restore; throw; } int length = valueSpan.IsEmpty ? checked((int)valueSequence.Length) : valueSpan.Length; if (useArrayPools) { byte[] rented = ArrayPool<byte>.Shared.Rent(length); Span<byte> rentedSpan = rented.AsSpan(0, length); try { if (valueSpan.IsEmpty) { valueSequence.CopyTo(rentedSpan); } else { valueSpan.CopyTo(rentedSpan); } document = Parse(rented.AsMemory(0, length), state.Options, rented); } catch { // This really shouldn't happen since the document was already checked // for consistency by Skip. But if data mutations happened just after // the calls to Read then the copy may not be valid. rentedSpan.Clear(); ArrayPool<byte>.Shared.Return(rented); throw; } } else { byte[] owned; if (valueSpan.IsEmpty) { owned = valueSequence.ToArray(); } else { owned = valueSpan.ToArray(); } document = ParseUnrented(owned, state.Options, reader.TokenType); } return true; } private static JsonDocument CreateForLiteral(JsonTokenType tokenType) { switch (tokenType) { case JsonTokenType.False: s_falseLiteral ??= Create(JsonConstants.FalseValue.ToArray()); return s_falseLiteral; case JsonTokenType.True: s_trueLiteral ??= Create(JsonConstants.TrueValue.ToArray()); return s_trueLiteral; default: Debug.Assert(tokenType == JsonTokenType.Null); s_nullLiteral ??= Create(JsonConstants.NullValue.ToArray()); return s_nullLiteral; } JsonDocument Create(byte[] utf8Json) { MetadataDb database = MetadataDb.CreateLocked(utf8Json.Length); database.Append(tokenType, startLocation: 0, utf8Json.Length); return new JsonDocument(utf8Json, database); } } private static JsonDocument Parse( ReadOnlyMemory<byte> utf8Json, JsonReaderOptions readerOptions, byte[]? extraRentedArrayPoolBytes = null, PooledByteBufferWriter? 
extraPooledByteBufferWriter = null) { ReadOnlySpan<byte> utf8JsonSpan = utf8Json.Span; var database = MetadataDb.CreateRented(utf8Json.Length, convertToAlloc: false); var stack = new StackRowStack(JsonDocumentOptions.DefaultMaxDepth * StackRow.Size); try { Parse(utf8JsonSpan, readerOptions, ref database, ref stack); } catch { database.Dispose(); throw; } finally { stack.Dispose(); } return new JsonDocument(utf8Json, database, extraRentedArrayPoolBytes, extraPooledByteBufferWriter); } private static JsonDocument ParseUnrented( ReadOnlyMemory<byte> utf8Json, JsonReaderOptions readerOptions, JsonTokenType tokenType = JsonTokenType.None) { // These tokens should already have been processed. Debug.Assert( tokenType != JsonTokenType.Null && tokenType != JsonTokenType.False && tokenType != JsonTokenType.True); ReadOnlySpan<byte> utf8JsonSpan = utf8Json.Span; MetadataDb database; if (tokenType == JsonTokenType.String || tokenType == JsonTokenType.Number) { // For primitive types, we can avoid renting MetadataDb and creating StackRowStack. database = MetadataDb.CreateLocked(utf8Json.Length); StackRowStack stack = default; Parse(utf8JsonSpan, readerOptions, ref database, ref stack); } else { database = MetadataDb.CreateRented(utf8Json.Length, convertToAlloc: true); var stack = new StackRowStack(JsonDocumentOptions.DefaultMaxDepth * StackRow.Size); try { Parse(utf8JsonSpan, readerOptions, ref database, ref stack); } finally { stack.Dispose(); } } return new JsonDocument(utf8Json, database); } private static ArraySegment<byte> ReadToEnd(Stream stream) { int written = 0; byte[]? rented = null; ReadOnlySpan<byte> utf8Bom = JsonConstants.Utf8Bom; try { if (stream.CanSeek) { // Ask for 1 more than the length to avoid resizing later, // which is unnecessary in the common case where the stream length doesn't change. long expectedLength = Math.Max(utf8Bom.Length, stream.Length - stream.Position) + 1; rented = ArrayPool<byte>.Shared.Rent(checked((int)expectedLength)); } else { rented = ArrayPool<byte>.Shared.Rent(UnseekableStreamInitialRentSize); } int lastRead; // Read up to 3 bytes to see if it's the UTF-8 BOM do { // No need for checking for growth, the minimal rent sizes both guarantee it'll fit. Debug.Assert(rented.Length >= utf8Bom.Length); lastRead = stream.Read( rented, written, utf8Bom.Length - written); written += lastRead; } while (lastRead > 0 && written < utf8Bom.Length); // If we have 3 bytes, and they're the BOM, reset the write position to 0. if (written == utf8Bom.Length && utf8Bom.SequenceEqual(rented.AsSpan(0, utf8Bom.Length))) { written = 0; } do { if (rented.Length == written) { byte[] toReturn = rented; rented = ArrayPool<byte>.Shared.Rent(checked(toReturn.Length * 2)); Buffer.BlockCopy(toReturn, 0, rented, 0, toReturn.Length); // Holds document content, clear it. ArrayPool<byte>.Shared.Return(toReturn, clearArray: true); } lastRead = stream.Read(rented, written, rented.Length - written); written += lastRead; } while (lastRead > 0); return new ArraySegment<byte>(rented, 0, written); } catch { if (rented != null) { // Holds document content, clear it before returning it. rented.AsSpan(0, written).Clear(); ArrayPool<byte>.Shared.Return(rented); } throw; } } private static async #if BUILDING_INBOX_LIBRARY ValueTask<ArraySegment<byte>> #else Task<ArraySegment<byte>> #endif ReadToEndAsync( Stream stream, CancellationToken cancellationToken) { int written = 0; byte[]? rented = null; try { // Save the length to a local to be reused across awaits. 
int utf8BomLength = JsonConstants.Utf8Bom.Length; if (stream.CanSeek) { // Ask for 1 more than the length to avoid resizing later, // which is unnecessary in the common case where the stream length doesn't change. long expectedLength = Math.Max(utf8BomLength, stream.Length - stream.Position) + 1; rented = ArrayPool<byte>.Shared.Rent(checked((int)expectedLength)); } else { rented = ArrayPool<byte>.Shared.Rent(UnseekableStreamInitialRentSize); } int lastRead; // Read up to 3 bytes to see if it's the UTF-8 BOM do { // No need for checking for growth, the minimal rent sizes both guarantee it'll fit. Debug.Assert(rented.Length >= JsonConstants.Utf8Bom.Length); lastRead = await stream.ReadAsync( #if BUILDING_INBOX_LIBRARY rented.AsMemory(written, utf8BomLength - written), #else rented, written, utf8BomLength - written, #endif cancellationToken).ConfigureAwait(false); written += lastRead; } while (lastRead > 0 && written < utf8BomLength); // If we have 3 bytes, and they're the BOM, reset the write position to 0. if (written == utf8BomLength && JsonConstants.Utf8Bom.SequenceEqual(rented.AsSpan(0, utf8BomLength))) { written = 0; } do { if (rented.Length == written) { byte[] toReturn = rented; rented = ArrayPool<byte>.Shared.Rent(toReturn.Length * 2); Buffer.BlockCopy(toReturn, 0, rented, 0, toReturn.Length); // Holds document content, clear it. ArrayPool<byte>.Shared.Return(toReturn, clearArray: true); } lastRead = await stream.ReadAsync( #if BUILDING_INBOX_LIBRARY rented.AsMemory(written), #else rented, written, rented.Length - written, #endif cancellationToken).ConfigureAwait(false); written += lastRead; } while (lastRead > 0); return new ArraySegment<byte>(rented, 0, written); } catch { if (rented != null) { // Holds document content, clear it before returning it. rented.AsSpan(0, written).Clear(); ArrayPool<byte>.Shared.Return(rented); } throw; } } } }
// Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. using System.Buffers; using System.Diagnostics; using System.Diagnostics.CodeAnalysis; using System.IO; using System.Threading; using System.Threading.Tasks; namespace System.Text.Json { public sealed partial class JsonDocument { // Cached unrented documents for literal values. private static JsonDocument? s_nullLiteral; private static JsonDocument? s_trueLiteral; private static JsonDocument? s_falseLiteral; private const int UnseekableStreamInitialRentSize = 4096; /// <summary> /// Parse memory as UTF-8-encoded text representing a single JSON value into a JsonDocument. /// </summary> /// <remarks> /// <para> /// The <see cref="ReadOnlyMemory{T}"/> value will be used for the entire lifetime of the /// JsonDocument object, and the caller must ensure that the data therein does not change during /// the object lifetime. /// </para> /// /// <para> /// Because the input is considered to be text, a UTF-8 Byte-Order-Mark (BOM) must not be present. /// </para> /// </remarks> /// <param name="utf8Json">JSON text to parse.</param> /// <param name="options">Options to control the reader behavior during parsing.</param> /// <returns> /// A JsonDocument representation of the JSON value. /// </returns> /// <exception cref="JsonException"> /// <paramref name="utf8Json"/> does not represent a valid single JSON value. /// </exception> /// <exception cref="ArgumentException"> /// <paramref name="options"/> contains unsupported options. /// </exception> public static JsonDocument Parse(ReadOnlyMemory<byte> utf8Json, JsonDocumentOptions options = default) { return Parse(utf8Json, options.GetReaderOptions()); } /// <summary> /// Parse a sequence as UTF-8-encoded text representing a single JSON value into a JsonDocument. /// </summary> /// <remarks> /// <para> /// The <see cref="ReadOnlySequence{T}"/> may be used for the entire lifetime of the /// JsonDocument object, and the caller must ensure that the data therein does not change during /// the object lifetime. /// </para> /// /// <para> /// Because the input is considered to be text, a UTF-8 Byte-Order-Mark (BOM) must not be present. /// </para> /// </remarks> /// <param name="utf8Json">JSON text to parse.</param> /// <param name="options">Options to control the reader behavior during parsing.</param> /// <returns> /// A JsonDocument representation of the JSON value. /// </returns> /// <exception cref="JsonException"> /// <paramref name="utf8Json"/> does not represent a valid single JSON value. /// </exception> /// <exception cref="ArgumentException"> /// <paramref name="options"/> contains unsupported options. /// </exception> public static JsonDocument Parse(ReadOnlySequence<byte> utf8Json, JsonDocumentOptions options = default) { JsonReaderOptions readerOptions = options.GetReaderOptions(); if (utf8Json.IsSingleSegment) { return Parse(utf8Json.First, readerOptions); } int length = checked((int)utf8Json.Length); byte[] utf8Bytes = ArrayPool<byte>.Shared.Rent(length); try { utf8Json.CopyTo(utf8Bytes.AsSpan()); return Parse(utf8Bytes.AsMemory(0, length), readerOptions, utf8Bytes); } catch { // Holds document content, clear it before returning it. utf8Bytes.AsSpan(0, length).Clear(); ArrayPool<byte>.Shared.Return(utf8Bytes); throw; } } /// <summary> /// Parse a <see cref="Stream"/> as UTF-8-encoded data representing a single JSON value into a /// JsonDocument. The Stream will be read to completion. 
/// </summary> /// <param name="utf8Json">JSON data to parse.</param> /// <param name="options">Options to control the reader behavior during parsing.</param> /// <returns> /// A JsonDocument representation of the JSON value. /// </returns> /// <exception cref="JsonException"> /// <paramref name="utf8Json"/> does not represent a valid single JSON value. /// </exception> /// <exception cref="ArgumentException"> /// <paramref name="options"/> contains unsupported options. /// </exception> public static JsonDocument Parse(Stream utf8Json!!, JsonDocumentOptions options = default) { ArraySegment<byte> drained = ReadToEnd(utf8Json); Debug.Assert(drained.Array != null); try { return Parse(drained.AsMemory(), options.GetReaderOptions(), drained.Array); } catch { // Holds document content, clear it before returning it. drained.AsSpan().Clear(); ArrayPool<byte>.Shared.Return(drained.Array); throw; } } internal static JsonDocument ParseRented(PooledByteBufferWriter utf8Json, JsonDocumentOptions options = default) { return Parse( utf8Json.WrittenMemory, options.GetReaderOptions(), extraRentedArrayPoolBytes: null, extraPooledByteBufferWriter: utf8Json); } internal static JsonDocument ParseValue(Stream utf8Json, JsonDocumentOptions options) { Debug.Assert(utf8Json != null); ArraySegment<byte> drained = ReadToEnd(utf8Json); Debug.Assert(drained.Array != null); byte[] owned = new byte[drained.Count]; Buffer.BlockCopy(drained.Array, 0, owned, 0, drained.Count); // Holds document content, clear it before returning it. drained.AsSpan().Clear(); ArrayPool<byte>.Shared.Return(drained.Array); return ParseUnrented(owned.AsMemory(), options.GetReaderOptions()); } internal static JsonDocument ParseValue(ReadOnlySpan<byte> utf8Json, JsonDocumentOptions options) { Debug.Assert(utf8Json != null); byte[] owned = new byte[utf8Json.Length]; utf8Json.CopyTo(owned); return ParseUnrented(owned.AsMemory(), options.GetReaderOptions()); } internal static JsonDocument ParseValue(string json, JsonDocumentOptions options) { Debug.Assert(json != null); return ParseValue(json.AsMemory(), options); } /// <summary> /// Parse a <see cref="Stream"/> as UTF-8-encoded data representing a single JSON value into a /// JsonDocument. The Stream will be read to completion. /// </summary> /// <param name="utf8Json">JSON data to parse.</param> /// <param name="options">Options to control the reader behavior during parsing.</param> /// <param name="cancellationToken">The token to monitor for cancellation requests.</param> /// <returns> /// A Task to produce a JsonDocument representation of the JSON value. /// </returns> /// <exception cref="JsonException"> /// <paramref name="utf8Json"/> does not represent a valid single JSON value. /// </exception> /// <exception cref="ArgumentException"> /// <paramref name="options"/> contains unsupported options. /// </exception> public static Task<JsonDocument> ParseAsync( Stream utf8Json!!, JsonDocumentOptions options = default, CancellationToken cancellationToken = default) { return ParseAsyncCore(utf8Json, options, cancellationToken); } private static async Task<JsonDocument> ParseAsyncCore( Stream utf8Json, JsonDocumentOptions options = default, CancellationToken cancellationToken = default) { ArraySegment<byte> drained = await ReadToEndAsync(utf8Json, cancellationToken).ConfigureAwait(false); Debug.Assert(drained.Array != null); try { return Parse(drained.AsMemory(), options.GetReaderOptions(), drained.Array); } catch { // Holds document content, clear it before returning it. 
drained.AsSpan().Clear(); ArrayPool<byte>.Shared.Return(drained.Array); throw; } } /// <summary> /// Parses text representing a single JSON value into a JsonDocument. /// </summary> /// <remarks> /// The <see cref="ReadOnlyMemory{T}"/> value may be used for the entire lifetime of the /// JsonDocument object, and the caller must ensure that the data therein does not change during /// the object lifetime. /// </remarks> /// <param name="json">JSON text to parse.</param> /// <param name="options">Options to control the reader behavior during parsing.</param> /// <returns> /// A JsonDocument representation of the JSON value. /// </returns> /// <exception cref="JsonException"> /// <paramref name="json"/> does not represent a valid single JSON value. /// </exception> /// <exception cref="ArgumentException"> /// <paramref name="options"/> contains unsupported options. /// </exception> public static JsonDocument Parse([StringSyntax(StringSyntaxAttribute.Json)] ReadOnlyMemory<char> json, JsonDocumentOptions options = default) { ReadOnlySpan<char> jsonChars = json.Span; int expectedByteCount = JsonReaderHelper.GetUtf8ByteCount(jsonChars); byte[] utf8Bytes = ArrayPool<byte>.Shared.Rent(expectedByteCount); try { int actualByteCount = JsonReaderHelper.GetUtf8FromText(jsonChars, utf8Bytes); Debug.Assert(expectedByteCount == actualByteCount); return Parse( utf8Bytes.AsMemory(0, actualByteCount), options.GetReaderOptions(), utf8Bytes); } catch { // Holds document content, clear it before returning it. utf8Bytes.AsSpan(0, expectedByteCount).Clear(); ArrayPool<byte>.Shared.Return(utf8Bytes); throw; } } internal static JsonDocument ParseValue(ReadOnlyMemory<char> json, JsonDocumentOptions options) { ReadOnlySpan<char> jsonChars = json.Span; int expectedByteCount = JsonReaderHelper.GetUtf8ByteCount(jsonChars); byte[] owned; byte[] utf8Bytes = ArrayPool<byte>.Shared.Rent(expectedByteCount); try { int actualByteCount = JsonReaderHelper.GetUtf8FromText(jsonChars, utf8Bytes); Debug.Assert(expectedByteCount == actualByteCount); owned = new byte[actualByteCount]; Buffer.BlockCopy(utf8Bytes, 0, owned, 0, actualByteCount); } finally { // Holds document content, clear it before returning it. utf8Bytes.AsSpan(0, expectedByteCount).Clear(); ArrayPool<byte>.Shared.Return(utf8Bytes); } return ParseUnrented(owned.AsMemory(), options.GetReaderOptions()); } /// <summary> /// Parses text representing a single JSON value into a JsonDocument. /// </summary> /// <param name="json">JSON text to parse.</param> /// <param name="options">Options to control the reader behavior during parsing.</param> /// <returns> /// A JsonDocument representation of the JSON value. /// </returns> /// <exception cref="JsonException"> /// <paramref name="json"/> does not represent a valid single JSON value. /// </exception> /// <exception cref="ArgumentException"> /// <paramref name="options"/> contains unsupported options. /// </exception> public static JsonDocument Parse([StringSyntax(StringSyntaxAttribute.Json)] string json!!, JsonDocumentOptions options = default) { return Parse(json.AsMemory(), options); } /// <summary> /// Attempts to parse one JSON value (including objects or arrays) from the provided reader. /// </summary> /// <param name="reader">The reader to read.</param> /// <param name="document">Receives the parsed document.</param> /// <returns> /// <see langword="true"/> if a value was read and parsed into a JsonDocument, /// <see langword="false"/> if the reader ran out of data while parsing. 
/// All other situations result in an exception being thrown. /// </returns> /// <remarks> /// <para> /// If the <see cref="Utf8JsonReader.TokenType"/> property of <paramref name="reader"/> /// is <see cref="JsonTokenType.PropertyName"/> or <see cref="JsonTokenType.None"/>, the /// reader will be advanced by one call to <see cref="Utf8JsonReader.Read"/> to determine /// the start of the value. /// </para> /// /// <para> /// Upon completion of this method, <paramref name="reader"/> will be positioned at the /// final token in the JSON value. If an exception is thrown, or <see langword="false"/> /// is returned, the reader is reset to the state it was in when the method was called. /// </para> /// /// <para> /// This method makes a copy of the data the reader acted on, so there is no caller /// requirement to maintain data integrity beyond the return of this method. /// </para> /// </remarks> /// <exception cref="ArgumentException"> /// <paramref name="reader"/> is using unsupported options. /// </exception> /// <exception cref="ArgumentException"> /// The current <paramref name="reader"/> token does not start or represent a value. /// </exception> /// <exception cref="JsonException"> /// A value could not be read from the reader. /// </exception> public static bool TryParseValue(ref Utf8JsonReader reader, [NotNullWhen(true)] out JsonDocument? document) { return TryParseValue(ref reader, out document, shouldThrow: false, useArrayPools: true); } /// <summary> /// Parses one JSON value (including objects or arrays) from the provided reader. /// </summary> /// <param name="reader">The reader to read.</param> /// <returns> /// A JsonDocument representing the value (and nested values) read from the reader. /// </returns> /// <remarks> /// <para> /// If the <see cref="Utf8JsonReader.TokenType"/> property of <paramref name="reader"/> /// is <see cref="JsonTokenType.PropertyName"/> or <see cref="JsonTokenType.None"/>, the /// reader will be advanced by one call to <see cref="Utf8JsonReader.Read"/> to determine /// the start of the value. /// </para> /// /// <para> /// Upon completion of this method, <paramref name="reader"/> will be positioned at the /// final token in the JSON value. If an exception is thrown, the reader is reset to /// the state it was in when the method was called. /// </para> /// /// <para> /// This method makes a copy of the data the reader acted on, so there is no caller /// requirement to maintain data integrity beyond the return of this method. /// </para> /// </remarks> /// <exception cref="ArgumentException"> /// <paramref name="reader"/> is using unsupported options. /// </exception> /// <exception cref="ArgumentException"> /// The current <paramref name="reader"/> token does not start or represent a value. /// </exception> /// <exception cref="JsonException"> /// A value could not be read from the reader. /// </exception> public static JsonDocument ParseValue(ref Utf8JsonReader reader) { bool ret = TryParseValue(ref reader, out JsonDocument? document, shouldThrow: true, useArrayPools: true); Debug.Assert(ret, "TryParseValue returned false with shouldThrow: true."); Debug.Assert(document != null, "null document returned with shouldThrow: true."); return document; } internal static bool TryParseValue( ref Utf8JsonReader reader, [NotNullWhen(true)] out JsonDocument? 
document, bool shouldThrow, bool useArrayPools) { JsonReaderState state = reader.CurrentState; CheckSupportedOptions(state.Options, nameof(reader)); // Value copy to overwrite the ref on an exception and undo the destructive reads. Utf8JsonReader restore = reader; ReadOnlySpan<byte> valueSpan = default; ReadOnlySequence<byte> valueSequence = default; try { switch (reader.TokenType) { // A new reader was created and has never been read, // so we need to move to the first token. // (or a reader has terminated and we're about to throw) case JsonTokenType.None: // Using a reader loop the caller has identified a property they wish to // hydrate into a JsonDocument. Move to the value first. case JsonTokenType.PropertyName: { if (!reader.Read()) { if (shouldThrow) { ThrowHelper.ThrowJsonReaderException( ref reader, ExceptionResource.ExpectedJsonTokens); } reader = restore; document = null; return false; } break; } } switch (reader.TokenType) { // Any of the "value start" states are acceptable. case JsonTokenType.StartObject: case JsonTokenType.StartArray: { long startingOffset = reader.TokenStartIndex; if (!reader.TrySkip()) { if (shouldThrow) { ThrowHelper.ThrowJsonReaderException( ref reader, ExceptionResource.ExpectedJsonTokens); } reader = restore; document = null; return false; } long totalLength = reader.BytesConsumed - startingOffset; ReadOnlySequence<byte> sequence = reader.OriginalSequence; if (sequence.IsEmpty) { valueSpan = reader.OriginalSpan.Slice( checked((int)startingOffset), checked((int)totalLength)); } else { valueSequence = sequence.Slice(startingOffset, totalLength); } Debug.Assert( reader.TokenType == JsonTokenType.EndObject || reader.TokenType == JsonTokenType.EndArray); break; } case JsonTokenType.False: case JsonTokenType.True: case JsonTokenType.Null: if (useArrayPools) { if (reader.HasValueSequence) { valueSequence = reader.ValueSequence; } else { valueSpan = reader.ValueSpan; } break; } document = CreateForLiteral(reader.TokenType); return true; case JsonTokenType.Number: { if (reader.HasValueSequence) { valueSequence = reader.ValueSequence; } else { valueSpan = reader.ValueSpan; } break; } // String's ValueSequence/ValueSpan omits the quotes, we need them back. case JsonTokenType.String: { ReadOnlySequence<byte> sequence = reader.OriginalSequence; if (sequence.IsEmpty) { // Since the quoted string fit in a ReadOnlySpan originally // the contents length plus the two quotes can't overflow. 
int payloadLength = reader.ValueSpan.Length + 2; Debug.Assert(payloadLength > 1); ReadOnlySpan<byte> readerSpan = reader.OriginalSpan; Debug.Assert( readerSpan[(int)reader.TokenStartIndex] == (byte)'"', $"Calculated span starts with {readerSpan[(int)reader.TokenStartIndex]}"); Debug.Assert( readerSpan[(int)reader.TokenStartIndex + payloadLength - 1] == (byte)'"', $"Calculated span ends with {readerSpan[(int)reader.TokenStartIndex + payloadLength - 1]}"); valueSpan = readerSpan.Slice((int)reader.TokenStartIndex, payloadLength); } else { long payloadLength = 2; if (reader.HasValueSequence) { payloadLength += reader.ValueSequence.Length; } else { payloadLength += reader.ValueSpan.Length; } valueSequence = sequence.Slice(reader.TokenStartIndex, payloadLength); Debug.Assert( valueSequence.First.Span[0] == (byte)'"', $"Calculated sequence starts with {valueSequence.First.Span[0]}"); Debug.Assert( valueSequence.ToArray()[payloadLength - 1] == (byte)'"', $"Calculated sequence ends with {valueSequence.ToArray()[payloadLength - 1]}"); } break; } default: { if (shouldThrow) { // Default case would only hit if TokenType equals JsonTokenType.EndObject or JsonTokenType.EndArray in which case it would never be sequence Debug.Assert(!reader.HasValueSequence); byte displayByte = reader.ValueSpan[0]; ThrowHelper.ThrowJsonReaderException( ref reader, ExceptionResource.ExpectedStartOfValueNotFound, displayByte); } reader = restore; document = null; return false; } } } catch { reader = restore; throw; } int length = valueSpan.IsEmpty ? checked((int)valueSequence.Length) : valueSpan.Length; if (useArrayPools) { byte[] rented = ArrayPool<byte>.Shared.Rent(length); Span<byte> rentedSpan = rented.AsSpan(0, length); try { if (valueSpan.IsEmpty) { valueSequence.CopyTo(rentedSpan); } else { valueSpan.CopyTo(rentedSpan); } document = Parse(rented.AsMemory(0, length), state.Options, rented); } catch { // This really shouldn't happen since the document was already checked // for consistency by Skip. But if data mutations happened just after // the calls to Read then the copy may not be valid. rentedSpan.Clear(); ArrayPool<byte>.Shared.Return(rented); throw; } } else { byte[] owned; if (valueSpan.IsEmpty) { owned = valueSequence.ToArray(); } else { owned = valueSpan.ToArray(); } document = ParseUnrented(owned, state.Options, reader.TokenType); } return true; } private static JsonDocument CreateForLiteral(JsonTokenType tokenType) { switch (tokenType) { case JsonTokenType.False: s_falseLiteral ??= Create(JsonConstants.FalseValue.ToArray()); return s_falseLiteral; case JsonTokenType.True: s_trueLiteral ??= Create(JsonConstants.TrueValue.ToArray()); return s_trueLiteral; default: Debug.Assert(tokenType == JsonTokenType.Null); s_nullLiteral ??= Create(JsonConstants.NullValue.ToArray()); return s_nullLiteral; } JsonDocument Create(byte[] utf8Json) { MetadataDb database = MetadataDb.CreateLocked(utf8Json.Length); database.Append(tokenType, startLocation: 0, utf8Json.Length); return new JsonDocument(utf8Json, database); } } private static JsonDocument Parse( ReadOnlyMemory<byte> utf8Json, JsonReaderOptions readerOptions, byte[]? extraRentedArrayPoolBytes = null, PooledByteBufferWriter? 
extraPooledByteBufferWriter = null) { ReadOnlySpan<byte> utf8JsonSpan = utf8Json.Span; var database = MetadataDb.CreateRented(utf8Json.Length, convertToAlloc: false); var stack = new StackRowStack(JsonDocumentOptions.DefaultMaxDepth * StackRow.Size); try { Parse(utf8JsonSpan, readerOptions, ref database, ref stack); } catch { database.Dispose(); throw; } finally { stack.Dispose(); } return new JsonDocument(utf8Json, database, extraRentedArrayPoolBytes, extraPooledByteBufferWriter); } private static JsonDocument ParseUnrented( ReadOnlyMemory<byte> utf8Json, JsonReaderOptions readerOptions, JsonTokenType tokenType = JsonTokenType.None) { // These tokens should already have been processed. Debug.Assert( tokenType != JsonTokenType.Null && tokenType != JsonTokenType.False && tokenType != JsonTokenType.True); ReadOnlySpan<byte> utf8JsonSpan = utf8Json.Span; MetadataDb database; if (tokenType == JsonTokenType.String || tokenType == JsonTokenType.Number) { // For primitive types, we can avoid renting MetadataDb and creating StackRowStack. database = MetadataDb.CreateLocked(utf8Json.Length); StackRowStack stack = default; Parse(utf8JsonSpan, readerOptions, ref database, ref stack); } else { database = MetadataDb.CreateRented(utf8Json.Length, convertToAlloc: true); var stack = new StackRowStack(JsonDocumentOptions.DefaultMaxDepth * StackRow.Size); try { Parse(utf8JsonSpan, readerOptions, ref database, ref stack); } finally { stack.Dispose(); } } return new JsonDocument(utf8Json, database); } private static ArraySegment<byte> ReadToEnd(Stream stream) { int written = 0; byte[]? rented = null; ReadOnlySpan<byte> utf8Bom = JsonConstants.Utf8Bom; try { if (stream.CanSeek) { // Ask for 1 more than the length to avoid resizing later, // which is unnecessary in the common case where the stream length doesn't change. long expectedLength = Math.Max(utf8Bom.Length, stream.Length - stream.Position) + 1; rented = ArrayPool<byte>.Shared.Rent(checked((int)expectedLength)); } else { rented = ArrayPool<byte>.Shared.Rent(UnseekableStreamInitialRentSize); } int lastRead; // Read up to 3 bytes to see if it's the UTF-8 BOM do { // No need for checking for growth, the minimal rent sizes both guarantee it'll fit. Debug.Assert(rented.Length >= utf8Bom.Length); lastRead = stream.Read( rented, written, utf8Bom.Length - written); written += lastRead; } while (lastRead > 0 && written < utf8Bom.Length); // If we have 3 bytes, and they're the BOM, reset the write position to 0. if (written == utf8Bom.Length && utf8Bom.SequenceEqual(rented.AsSpan(0, utf8Bom.Length))) { written = 0; } do { if (rented.Length == written) { byte[] toReturn = rented; rented = ArrayPool<byte>.Shared.Rent(checked(toReturn.Length * 2)); Buffer.BlockCopy(toReturn, 0, rented, 0, toReturn.Length); // Holds document content, clear it. ArrayPool<byte>.Shared.Return(toReturn, clearArray: true); } lastRead = stream.Read(rented, written, rented.Length - written); written += lastRead; } while (lastRead > 0); return new ArraySegment<byte>(rented, 0, written); } catch { if (rented != null) { // Holds document content, clear it before returning it. rented.AsSpan(0, written).Clear(); ArrayPool<byte>.Shared.Return(rented); } throw; } } private static async #if BUILDING_INBOX_LIBRARY ValueTask<ArraySegment<byte>> #else Task<ArraySegment<byte>> #endif ReadToEndAsync( Stream stream, CancellationToken cancellationToken) { int written = 0; byte[]? rented = null; try { // Save the length to a local to be reused across awaits. 
int utf8BomLength = JsonConstants.Utf8Bom.Length; if (stream.CanSeek) { // Ask for 1 more than the length to avoid resizing later, // which is unnecessary in the common case where the stream length doesn't change. long expectedLength = Math.Max(utf8BomLength, stream.Length - stream.Position) + 1; rented = ArrayPool<byte>.Shared.Rent(checked((int)expectedLength)); } else { rented = ArrayPool<byte>.Shared.Rent(UnseekableStreamInitialRentSize); } int lastRead; // Read up to 3 bytes to see if it's the UTF-8 BOM do { // No need for checking for growth, the minimal rent sizes both guarantee it'll fit. Debug.Assert(rented.Length >= JsonConstants.Utf8Bom.Length); lastRead = await stream.ReadAsync( #if BUILDING_INBOX_LIBRARY rented.AsMemory(written, utf8BomLength - written), #else rented, written, utf8BomLength - written, #endif cancellationToken).ConfigureAwait(false); written += lastRead; } while (lastRead > 0 && written < utf8BomLength); // If we have 3 bytes, and they're the BOM, reset the write position to 0. if (written == utf8BomLength && JsonConstants.Utf8Bom.SequenceEqual(rented.AsSpan(0, utf8BomLength))) { written = 0; } do { if (rented.Length == written) { byte[] toReturn = rented; rented = ArrayPool<byte>.Shared.Rent(toReturn.Length * 2); Buffer.BlockCopy(toReturn, 0, rented, 0, toReturn.Length); // Holds document content, clear it. ArrayPool<byte>.Shared.Return(toReturn, clearArray: true); } lastRead = await stream.ReadAsync( #if BUILDING_INBOX_LIBRARY rented.AsMemory(written), #else rented, written, rented.Length - written, #endif cancellationToken).ConfigureAwait(false); written += lastRead; } while (lastRead > 0); return new ArraySegment<byte>(rented, 0, written); } catch { if (rented != null) { // Holds document content, clear it before returning it. rented.AsSpan(0, written).Clear(); ArrayPool<byte>.Shared.Return(rented); } throw; } } } }
-1
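Since the record above carries the full `JsonDocument.Parse`/`ParseAsync` surface, a minimal consumption sketch may help; the payload and type name below are illustrative and not taken from the PR:

```csharp
using System;
using System.Text.Json;

class JsonDocumentParseDemo
{
    static void Main()
    {
        // Illustrative payload; any valid single JSON value works.
        const string json = "{\"name\":\"demo\",\"values\":[1,2,3]}";

        // JsonDocument rents pooled buffers (visible in ReadToEnd above),
        // so dispose it — here via 'using' — to return them to the pool.
        using JsonDocument document = JsonDocument.Parse(json);

        JsonElement root = document.RootElement;
        Console.WriteLine(root.GetProperty("name").GetString());

        foreach (JsonElement item in root.GetProperty("values").EnumerateArray())
        {
            Console.WriteLine(item.GetInt32());
        }
    }
}
```

Disposal matters precisely because of the `ArrayPool<byte>.Shared` renting shown in the source above: an undisposed document simply delays the buffers' return until finalization.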
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently, these properties exist to categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK - `IsRuntimeAssembly`: True when all of the above are false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally, the repository tree shows that a generator project's parent directory often isn't 'gen' but a directory named after the project, in cases where multiple generator projects sit under 'gen'. This led to such projects being treated incorrectly and forced developers to set the "IsGeneratorProject" property manually. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** properties are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK - `IsSourceProject`: **True when all of the above are false.** (A C# sketch of these rules appears after this record.)
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently, these properties exist to categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK - `IsRuntimeAssembly`: True when all of the above are false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally, the repository tree shows that a generator project's parent directory often isn't 'gen' but a directory named after the project, in cases where multiple generator projects sit under 'gen'. This led to such projects being treated incorrectly and forced developers to set the "IsGeneratorProject" property manually. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** properties are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK - `IsSourceProject`: **True when all of the above are false.**
./src/tests/Loader/classloader/TypeGeneratorTests/TypeGeneratorTest960/Generated960.ilproj
<Project Sdk="Microsoft.NET.Sdk.IL"> <PropertyGroup> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <ItemGroup> <Compile Include="Generated960.il" /> </ItemGroup> <ItemGroup> <ProjectReference Include="..\TestFramework\TestFramework.csproj" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk.IL"> <PropertyGroup> <CLRTestPriority>1</CLRTestPriority> </PropertyGroup> <ItemGroup> <Compile Include="Generated960.il" /> </ItemGroup> <ItemGroup> <ProjectReference Include="..\TestFramework\TestFramework.csproj" /> </ItemGroup> </Project>
-1
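The PR description above states the classification rules declaratively; as a hedged illustration only, here is a C# sketch of those rules. The real logic lives in the repository's MSBuild props files, and `ProjectClassifier` is a hypothetical name introduced for this sketch:

```csharp
using System;
using System.IO;

// Hypothetical helper mirroring the rules the PR description lists.
// (UsingMicrosoftNoTargetsSdk / UsingMicrosoftTraversalSdk checks are
// omitted because they depend on the project's SDK, not its path.)
static class ProjectClassifier
{
    public static string Classify(string projectPath)
    {
        string name = Path.GetFileNameWithoutExtension(projectPath);
        string? parent = Path.GetFileName(Path.GetDirectoryName(projectPath));
        string? grandParent = Path.GetFileName(
            Path.GetDirectoryName(Path.GetDirectoryName(projectPath)));
        string normalized = projectPath.Replace('\\', '/');

        if (parent == "ref")
            return "IsReferenceAssemblyProject";

        // The PR's improvement: also accept 'gen' as the grandparent, for
        // repos with multiple generator projects nested under 'gen'.
        if (parent == "gen" || grandParent == "gen")
            return "IsGeneratorProject";

        if (normalized.Contains("/tests/"))
        {
            if (name.EndsWith(".Tests") || name.EndsWith(".UnitTests"))
                return "IsTestProject";
            if (name.EndsWith(".TrimmingTests"))
                return "IsTrimmingTestProject";
            return "IsTestSupportProject";
        }

        return "IsSourceProject"; // true when everything above is false
    }
}
```

For example, `Classify("src/libraries/Foo/gen/FooGenerator/FooGenerator.csproj")` yields "IsGeneratorProject" under the new rule, where the old parent-only rule would not have.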
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently, these properties exist to categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK - `IsRuntimeAssembly`: True when all of the above are false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally, the repository tree shows that a generator project's parent directory often isn't 'gen' but a directory named after the project, in cases where multiple generator projects sit under 'gen'. This led to such projects being treated incorrectly and forced developers to set the "IsGeneratorProject" property manually. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** properties are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK - `IsSourceProject`: **True when all of the above are false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently, these properties exist to categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK - `IsRuntimeAssembly`: True when all of the above are false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally, the repository tree shows that a generator project's parent directory often isn't 'gen' but a directory named after the project, in cases where multiple generator projects sit under 'gen'. This led to such projects being treated incorrectly and forced developers to set the "IsGeneratorProject" property manually. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** properties are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK - `IsSourceProject`: **True when all of the above are false.**
./src/libraries/System.Private.Xml/tests/Xslt/TestFiles/TestData/xsltc/baseline/fft14.txt
Microsoft (R) XSLT Compiler version 2.0.51103 for Microsoft (R) Windows (R) 2005 Framework version 2.0.50727 Copyright (C) Microsoft Corporation 2006. All right reserved. error : Response file '.\infft14.txt' included multiple times.
Microsoft (R) XSLT Compiler version 2.0.51103 for Microsoft (R) Windows (R) 2005 Framework version 2.0.50727 Copyright (C) Microsoft Corporation 2006. All right reserved. error : Response file '.\infft14.txt' included multiple times.
-1
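The baseline above records xsltc rejecting a response file that pulls itself in a second time. As a generic, hedged sketch only — not xsltc's actual implementation — duplicate detection during response-file expansion can look like the following; `ResponseFileExpander` is a hypothetical helper name:

```csharp
using System;
using System.Collections.Generic;
using System.IO;

// Hypothetical helper: expands '@file' arguments recursively and throws
// when a response file is seen twice — the failure mode the baseline
// above records.
static class ResponseFileExpander
{
    public static IEnumerable<string> Expand(string path, HashSet<string>? seen = null)
    {
        seen ??= new HashSet<string>(StringComparer.OrdinalIgnoreCase);
        string fullPath = Path.GetFullPath(path);
        if (!seen.Add(fullPath))
        {
            throw new InvalidOperationException(
                $"Response file '{path}' included multiple times.");
        }

        foreach (string line in File.ReadLines(fullPath))
        {
            if (line.StartsWith("@"))
            {
                // Nested response file: recurse with the shared 'seen' set
                // so cycles across files are also caught.
                foreach (string arg in Expand(line.Substring(1), seen))
                {
                    yield return arg;
                }
            }
            else if (line.Length > 0)
            {
                yield return line;
            }
        }
    }
}
```

Because `Expand` is an iterator, the duplicate check runs lazily, on first enumeration of the returned sequence.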
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently, these properties exist to categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK - `IsRuntimeAssembly`: True when all of the above are false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally, the repository tree shows that a generator project's parent directory often isn't 'gen' but a directory named after the project, in cases where multiple generator projects sit under 'gen'. This led to such projects being treated incorrectly and forced developers to set the "IsGeneratorProject" property manually. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** properties are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK - `IsSourceProject`: **True when all of the above are false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently, these properties exist to categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK - `IsRuntimeAssembly`: True when all of the above are false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally, the repository tree shows that a generator project's parent directory often isn't 'gen' but a directory named after the project, in cases where multiple generator projects sit under 'gen'. This led to such projects being treated incorrectly and forced developers to set the "IsGeneratorProject" property manually. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** properties are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets MSBuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal MSBuild SDK - `IsSourceProject`: **True when all of the above are false.**
./src/libraries/System.Drawing.Common/ref/System.Drawing.Common.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <TargetFrameworks>$(NetCoreAppCurrent);$(NetCoreAppMinimum);netstandard2.0;$(NetFrameworkMinimum)</TargetFrameworks> <IncludeInternalObsoleteAttribute>true</IncludeInternalObsoleteAttribute> <Nullable>enable</Nullable> </PropertyGroup> <ItemGroup> <Compile Include="System.Drawing.Common.cs" Condition="'$(TargetFrameworkIdentifier)' != '.NETFramework'" /> <Compile Include="System.Drawing.Common.netcoreapp.cs" Condition="'$(TargetFrameworkIdentifier)' == '.NETCoreApp'" /> <Compile Include="System.Drawing.Common.Forwards.cs" Condition="'$(TargetFrameworkIdentifier)' == '.NETCoreApp'" /> <Compile Include="System.Drawing.Common.netstandard.cs" Condition="'$(TargetFrameworkIdentifier)' == '.NETStandard'" /> <Compile Include="System.Drawing.Common.netframework.cs" Condition="'$(TargetFrameworkIdentifier)' == '.NETFramework'" /> </ItemGroup> <ItemGroup Condition="'$(TargetFrameworkIdentifier)' != '.NETCoreApp'"> <Compile Include="$(CoreLibSharedDir)System\Diagnostics\CodeAnalysis\RequiresUnreferencedCodeAttribute.cs" /> </ItemGroup> <ItemGroup Condition="'$(TargetFramework)' == '$(NetCoreAppCurrent)'"> <ProjectReference Include="$(LibrariesProjectRoot)System.Collections.NonGeneric\ref\System.Collections.NonGeneric.csproj" /> <ProjectReference Include="$(LibrariesProjectRoot)System.ComponentModel\ref\System.ComponentModel.csproj" /> <ProjectReference Include="$(LibrariesProjectRoot)System.ComponentModel.Primitives\ref\System.ComponentModel.Primitives.csproj" /> <ProjectReference Include="$(LibrariesProjectRoot)System.ComponentModel.TypeConverter\ref\System.ComponentModel.TypeConverter.csproj" /> <ProjectReference Include="$(LibrariesProjectRoot)System.Drawing.Primitives\ref\System.Drawing.Primitives.csproj" /> <ProjectReference Include="$(LibrariesProjectRoot)System.Numerics.Vectors\ref\System.Numerics.Vectors.csproj" /> <ProjectReference Include="$(LibrariesProjectRoot)System.ObjectModel\ref\System.ObjectModel.csproj" /> <ProjectReference Include="$(LibrariesProjectRoot)System.Runtime\ref\System.Runtime.csproj" /> <ProjectReference Include="$(LibrariesProjectRoot)System.Runtime.Extensions\ref\System.Runtime.Extensions.csproj" /> <ProjectReference Include="$(LibrariesProjectRoot)System.Runtime.InteropServices\ref\System.Runtime.InteropServices.csproj" /> </ItemGroup> <ItemGroup Condition="'$(TargetFrameworkIdentifier)' == '.NETCoreApp' and '$(TargetFramework)' != '$(NetCoreAppCurrent)'"> <Reference Include="System.Collections.NonGeneric" /> <Reference Include="System.ComponentModel" /> <Reference Include="System.ComponentModel.Primitives" /> <Reference Include="System.ComponentModel.TypeConverter" /> <Reference Include="System.Diagnostics.Debug" /> <Reference Include="System.Drawing.Primitives" /> <Reference Include="System.Numerics.Vectors" /> <Reference Include="System.ObjectModel" /> <Reference Include="System.Runtime" /> <Reference Include="System.Runtime.Extensions" /> <Reference Include="System.Runtime.InteropServices" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <TargetFrameworks>$(NetCoreAppCurrent);$(NetCoreAppMinimum);netstandard2.0;$(NetFrameworkMinimum)</TargetFrameworks> <IncludeInternalObsoleteAttribute>true</IncludeInternalObsoleteAttribute> <Nullable>enable</Nullable> </PropertyGroup> <ItemGroup> <Compile Include="System.Drawing.Common.cs" Condition="'$(TargetFrameworkIdentifier)' != '.NETFramework'" /> <Compile Include="System.Drawing.Common.netcoreapp.cs" Condition="'$(TargetFrameworkIdentifier)' == '.NETCoreApp'" /> <Compile Include="System.Drawing.Common.Forwards.cs" Condition="'$(TargetFrameworkIdentifier)' == '.NETCoreApp'" /> <Compile Include="System.Drawing.Common.netstandard.cs" Condition="'$(TargetFrameworkIdentifier)' == '.NETStandard'" /> <Compile Include="System.Drawing.Common.netframework.cs" Condition="'$(TargetFrameworkIdentifier)' == '.NETFramework'" /> </ItemGroup> <ItemGroup Condition="'$(TargetFrameworkIdentifier)' != '.NETCoreApp'"> <Compile Include="$(CoreLibSharedDir)System\Diagnostics\CodeAnalysis\RequiresUnreferencedCodeAttribute.cs" /> </ItemGroup> <ItemGroup Condition="'$(TargetFramework)' == '$(NetCoreAppCurrent)'"> <ProjectReference Include="$(LibrariesProjectRoot)System.Collections.NonGeneric\ref\System.Collections.NonGeneric.csproj" /> <ProjectReference Include="$(LibrariesProjectRoot)System.ComponentModel\ref\System.ComponentModel.csproj" /> <ProjectReference Include="$(LibrariesProjectRoot)System.ComponentModel.Primitives\ref\System.ComponentModel.Primitives.csproj" /> <ProjectReference Include="$(LibrariesProjectRoot)System.ComponentModel.TypeConverter\ref\System.ComponentModel.TypeConverter.csproj" /> <ProjectReference Include="$(LibrariesProjectRoot)System.Drawing.Primitives\ref\System.Drawing.Primitives.csproj" /> <ProjectReference Include="$(LibrariesProjectRoot)System.Numerics.Vectors\ref\System.Numerics.Vectors.csproj" /> <ProjectReference Include="$(LibrariesProjectRoot)System.ObjectModel\ref\System.ObjectModel.csproj" /> <ProjectReference Include="$(LibrariesProjectRoot)System.Runtime\ref\System.Runtime.csproj" /> <ProjectReference Include="$(LibrariesProjectRoot)System.Runtime.Extensions\ref\System.Runtime.Extensions.csproj" /> <ProjectReference Include="$(LibrariesProjectRoot)System.Runtime.InteropServices\ref\System.Runtime.InteropServices.csproj" /> </ItemGroup> <ItemGroup Condition="'$(TargetFrameworkIdentifier)' == '.NETCoreApp' and '$(TargetFramework)' != '$(NetCoreAppCurrent)'"> <Reference Include="System.Collections.NonGeneric" /> <Reference Include="System.ComponentModel" /> <Reference Include="System.ComponentModel.Primitives" /> <Reference Include="System.ComponentModel.TypeConverter" /> <Reference Include="System.Diagnostics.Debug" /> <Reference Include="System.Drawing.Primitives" /> <Reference Include="System.Numerics.Vectors" /> <Reference Include="System.ObjectModel" /> <Reference Include="System.Runtime" /> <Reference Include="System.Runtime.Extensions" /> <Reference Include="System.Runtime.InteropServices" /> </ItemGroup> </Project>
-1
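The reference-assembly project above is classified purely by its directory layout, per the rules in the PR description. The following is a minimal sketch of the new derivation written as a shared MSBuild .props fragment; the property names come from the description, but the actual file name, evaluation order, and handling of manual overrides in dotnet/runtime may differ.

```xml
<!-- Sketch only: approximates the categorization rules from the PR description. -->
<Project>
  <PropertyGroup>
    <!-- Name of the project's parent directory (e.g. 'ref', 'gen', 'src'). -->
    <_ParentDirName>$([System.IO.Path]::GetFileName('$(MSBuildProjectDirectory)'))</_ParentDirName>
    <!-- Grandparent directory name, to catch 'gen/<ProjectName>/<ProjectName>.csproj' layouts. -->
    <_ParentParentDirName>$([System.IO.Path]::GetFileName($([System.IO.Path]::GetDirectoryName('$(MSBuildProjectDirectory)'))))</_ParentParentDirName>
    <!-- True when the project sits anywhere under a 'tests' directory. -->
    <_DirSep>$([System.IO.Path]::DirectorySeparatorChar)</_DirSep>
    <_IsUnderTestsDir Condition="$(MSBuildProjectDirectory.Contains('$(_DirSep)tests$(_DirSep)'))">true</_IsUnderTestsDir>

    <IsReferenceAssemblyProject Condition="'$(_ParentDirName)' == 'ref'">true</IsReferenceAssemblyProject>
    <IsGeneratorProject Condition="'$(_ParentDirName)' == 'gen' or '$(_ParentParentDirName)' == 'gen'">true</IsGeneratorProject>
    <IsTestProject Condition="'$(_IsUnderTestsDir)' == 'true' and ($(MSBuildProjectName.EndsWith('.Tests')) or $(MSBuildProjectName.EndsWith('.UnitTests')))">true</IsTestProject>
    <IsTrimmingTestProject Condition="'$(_IsUnderTestsDir)' == 'true' and $(MSBuildProjectName.EndsWith('.TrimmingTests'))">true</IsTrimmingTestProject>
    <!-- Test-support projects live under /tests/ but are neither test nor trimming-test projects. -->
    <IsTestSupportProject Condition="'$(_IsUnderTestsDir)' == 'true' and '$(IsTestProject)' != 'true' and '$(IsTrimmingTestProject)' != 'true'">true</IsTestSupportProject>

    <!-- IsSourceProject is the fallback category: true only when nothing above matched. -->
    <IsSourceProject Condition="'$(IsReferenceAssemblyProject)' != 'true' and
                                '$(IsGeneratorProject)' != 'true' and
                                '$(IsTestProject)' != 'true' and
                                '$(IsTrimmingTestProject)' != 'true' and
                                '$(IsTestSupportProject)' != 'true' and
                                '$(UsingMicrosoftNoTargetsSdk)' != 'true' and
                                '$(UsingMicrosoftTraversalSdk)' != 'true'">true</IsSourceProject>
  </PropertyGroup>
</Project>
```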
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently the following properties categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref'. - `IsGeneratorProject`: The project's parent directory is 'gen'. - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name ends in '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name ends in '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and IsTestProject is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK. - `IsRuntimeAssembly`: True when all of the above are false. - `IsSourceProject`: True when the project's parent directory is 'src'. The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally, the tree shows that a generator project's parent directory often isn't 'gen' but the project's own name, in cases where multiple generator projects live under 'gen'. Such projects were therefore categorized incorrectly, and developers had to set the 'IsGeneratorProject' property manually. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref'. - `IsGeneratorProject`: **The project's parent directory, or its parent's parent directory, is 'gen'.** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name ends in '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name ends in '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and both IsTestProject **and IsTrimmingTestProject** are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK. - `IsSourceProject`: **True when all of the above are false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently the following properties categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref'. - `IsGeneratorProject`: The project's parent directory is 'gen'. - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name ends in '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name ends in '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and IsTestProject is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK. - `IsRuntimeAssembly`: True when all of the above are false. - `IsSourceProject`: True when the project's parent directory is 'src'. The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally, the tree shows that a generator project's parent directory often isn't 'gen' but the project's own name, in cases where multiple generator projects live under 'gen'. Such projects were therefore categorized incorrectly, and developers had to set the 'IsGeneratorProject' property manually. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref'. - `IsGeneratorProject`: **The project's parent directory, or its parent's parent directory, is 'gen'.** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name ends in '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name ends in '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and both IsTestProject **and IsTrimmingTestProject** are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK. - `IsSourceProject`: **True when all of the above are false.**
./src/libraries/System.Private.Xml/tests/Xslt/TestFiles/TestData/XsltApiV2/baseline/RemoveParam13.txt
<result> <arg1>Test</arg1> <arg2>Test</arg2> <arg3>Test</arg3> <arg4>Test</arg4> <arg5>Test</arg5> <arg6>No Value Specified</arg6> </result>
<result> <arg1>Test</arg1> <arg2>Test</arg2> <arg3>Test</arg3> <arg4>Test</arg4> <arg5>Test</arg5> <arg6>No Value Specified</arg6> </result>
-1
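The baseline above records the output of an XSLT test that removes a stylesheet parameter: once the value for `arg6` is removed, the stylesheet falls back to its default of 'No Value Specified'. A minimal sketch of that pattern using the standard `XslCompiledTransform`/`XsltArgumentList` APIs follows; the file names and parameter name are illustrative, not the actual test inputs.

```csharp
using System.Xml;
using System.Xml.Xsl;
using System.Xml.XPath;

class RemoveParamSketch
{
    static void Main()
    {
        var xslt = new XslCompiledTransform();
        xslt.Load("RemoveParam.xsl"); // hypothetical stylesheet with an <xsl:param> default

        var args = new XsltArgumentList();
        args.AddParam("arg6", "", "Test"); // supply a value for the parameter...
        args.RemoveParam("arg6", "");      // ...then remove it again

        // With arg6 removed, the stylesheet's default ('No Value Specified') is used.
        using var writer = XmlWriter.Create("out.xml");
        xslt.Transform(new XPathDocument("input.xml"), args, writer);
    }
}
```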
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently the following properties categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref'. - `IsGeneratorProject`: The project's parent directory is 'gen'. - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name ends in '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name ends in '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and IsTestProject is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK. - `IsRuntimeAssembly`: True when all of the above are false. - `IsSourceProject`: True when the project's parent directory is 'src'. The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally, the tree shows that a generator project's parent directory often isn't 'gen' but the project's own name, in cases where multiple generator projects live under 'gen'. Such projects were therefore categorized incorrectly, and developers had to set the 'IsGeneratorProject' property manually. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref'. - `IsGeneratorProject`: **The project's parent directory, or its parent's parent directory, is 'gen'.** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name ends in '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name ends in '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and both IsTestProject **and IsTrimmingTestProject** are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK. - `IsSourceProject`: **True when all of the above are false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently the following properties categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref'. - `IsGeneratorProject`: The project's parent directory is 'gen'. - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name ends in '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name ends in '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and IsTestProject is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK. - `IsRuntimeAssembly`: True when all of the above are false. - `IsSourceProject`: True when the project's parent directory is 'src'. The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally, the tree shows that a generator project's parent directory often isn't 'gen' but the project's own name, in cases where multiple generator projects live under 'gen'. Such projects were therefore categorized incorrectly, and developers had to set the 'IsGeneratorProject' property manually. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref'. - `IsGeneratorProject`: **The project's parent directory, or its parent's parent directory, is 'gen'.** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name ends in '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name ends in '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and both IsTestProject **and IsTrimmingTestProject** are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK. - `IsSourceProject`: **True when all of the above are false.**
./src/tests/JIT/HardwareIntrinsics/Arm/AdvSimd.Arm64/MinNumberPairwise.Vector64.Single.cs
// Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. /****************************************************************************** * This file is auto-generated from a template file by the GenerateTests.csx * * script in tests\src\JIT\HardwareIntrinsics.Arm\Shared. In order to make * * changes, please update the corresponding template and run according to the * * directions listed in the file. * ******************************************************************************/ using System; using System.Runtime.CompilerServices; using System.Runtime.InteropServices; using System.Runtime.Intrinsics; using System.Runtime.Intrinsics.Arm; namespace JIT.HardwareIntrinsics.Arm { public static partial class Program { private static void MinNumberPairwise_Vector64_Single() { var test = new SimpleBinaryOpTest__MinNumberPairwise_Vector64_Single(); if (test.IsSupported) { // Validates basic functionality works, using Unsafe.Read test.RunBasicScenario_UnsafeRead(); if (AdvSimd.IsSupported) { // Validates basic functionality works, using Load test.RunBasicScenario_Load(); } // Validates calling via reflection works, using Unsafe.Read test.RunReflectionScenario_UnsafeRead(); if (AdvSimd.IsSupported) { // Validates calling via reflection works, using Load test.RunReflectionScenario_Load(); } // Validates passing a static member works test.RunClsVarScenario(); if (AdvSimd.IsSupported) { // Validates passing a static member works, using pinning and Load test.RunClsVarScenario_Load(); } // Validates passing a local works, using Unsafe.Read test.RunLclVarScenario_UnsafeRead(); if (AdvSimd.IsSupported) { // Validates passing a local works, using Load test.RunLclVarScenario_Load(); } // Validates passing the field of a local class works test.RunClassLclFldScenario(); if (AdvSimd.IsSupported) { // Validates passing the field of a local class works, using pinning and Load test.RunClassLclFldScenario_Load(); } // Validates passing an instance member of a class works test.RunClassFldScenario(); if (AdvSimd.IsSupported) { // Validates passing an instance member of a class works, using pinning and Load test.RunClassFldScenario_Load(); } // Validates passing the field of a local struct works test.RunStructLclFldScenario(); if (AdvSimd.IsSupported) { // Validates passing the field of a local struct works, using pinning and Load test.RunStructLclFldScenario_Load(); } // Validates passing an instance member of a struct works test.RunStructFldScenario(); if (AdvSimd.IsSupported) { // Validates passing an instance member of a struct works, using pinning and Load test.RunStructFldScenario_Load(); } } else { // Validates we throw on unsupported hardware test.RunUnsupportedScenario(); } if (!test.Succeeded) { throw new Exception("One or more scenarios did not complete as expected."); } } } public sealed unsafe class SimpleBinaryOpTest__MinNumberPairwise_Vector64_Single { private struct DataTable { private byte[] inArray1; private byte[] inArray2; private byte[] outArray; private GCHandle inHandle1; private GCHandle inHandle2; private GCHandle outHandle; private ulong alignment; public DataTable(Single[] inArray1, Single[] inArray2, Single[] outArray, int alignment) { int sizeOfinArray1 = inArray1.Length * Unsafe.SizeOf<Single>(); int sizeOfinArray2 = inArray2.Length * Unsafe.SizeOf<Single>(); int sizeOfoutArray = outArray.Length * Unsafe.SizeOf<Single>(); if ((alignment != 16 && alignment != 8) || (alignment * 2) < sizeOfinArray1 || (alignment 
* 2) < sizeOfinArray2 || (alignment * 2) < sizeOfoutArray) { throw new ArgumentException("Invalid value of alignment"); } this.inArray1 = new byte[alignment * 2]; this.inArray2 = new byte[alignment * 2]; this.outArray = new byte[alignment * 2]; this.inHandle1 = GCHandle.Alloc(this.inArray1, GCHandleType.Pinned); this.inHandle2 = GCHandle.Alloc(this.inArray2, GCHandleType.Pinned); this.outHandle = GCHandle.Alloc(this.outArray, GCHandleType.Pinned); this.alignment = (ulong)alignment; Unsafe.CopyBlockUnaligned(ref Unsafe.AsRef<byte>(inArray1Ptr), ref Unsafe.As<Single, byte>(ref inArray1[0]), (uint)sizeOfinArray1); Unsafe.CopyBlockUnaligned(ref Unsafe.AsRef<byte>(inArray2Ptr), ref Unsafe.As<Single, byte>(ref inArray2[0]), (uint)sizeOfinArray2); } public void* inArray1Ptr => Align((byte*)(inHandle1.AddrOfPinnedObject().ToPointer()), alignment); public void* inArray2Ptr => Align((byte*)(inHandle2.AddrOfPinnedObject().ToPointer()), alignment); public void* outArrayPtr => Align((byte*)(outHandle.AddrOfPinnedObject().ToPointer()), alignment); public void Dispose() { inHandle1.Free(); inHandle2.Free(); outHandle.Free(); } private static unsafe void* Align(byte* buffer, ulong expectedAlignment) { return (void*)(((ulong)buffer + expectedAlignment - 1) & ~(expectedAlignment - 1)); } } private struct TestStruct { public Vector64<Single> _fld1; public Vector64<Single> _fld2; public static TestStruct Create() { var testStruct = new TestStruct(); for (var i = 0; i < Op1ElementCount; i++) { _data1[i] = TestLibrary.Generator.GetSingle(); } Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector64<Single>, byte>(ref testStruct._fld1), ref Unsafe.As<Single, byte>(ref _data1[0]), (uint)Unsafe.SizeOf<Vector64<Single>>()); for (var i = 0; i < Op2ElementCount; i++) { _data2[i] = TestLibrary.Generator.GetSingle(); } Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector64<Single>, byte>(ref testStruct._fld2), ref Unsafe.As<Single, byte>(ref _data2[0]), (uint)Unsafe.SizeOf<Vector64<Single>>()); return testStruct; } public void RunStructFldScenario(SimpleBinaryOpTest__MinNumberPairwise_Vector64_Single testClass) { var result = AdvSimd.Arm64.MinNumberPairwise(_fld1, _fld2); Unsafe.Write(testClass._dataTable.outArrayPtr, result); testClass.ValidateResult(_fld1, _fld2, testClass._dataTable.outArrayPtr); } public void RunStructFldScenario_Load(SimpleBinaryOpTest__MinNumberPairwise_Vector64_Single testClass) { fixed (Vector64<Single>* pFld1 = &_fld1) fixed (Vector64<Single>* pFld2 = &_fld2) { var result = AdvSimd.Arm64.MinNumberPairwise( AdvSimd.LoadVector64((Single*)(pFld1)), AdvSimd.LoadVector64((Single*)(pFld2)) ); Unsafe.Write(testClass._dataTable.outArrayPtr, result); testClass.ValidateResult(_fld1, _fld2, testClass._dataTable.outArrayPtr); } } } private static readonly int LargestVectorSize = 8; private static readonly int Op1ElementCount = Unsafe.SizeOf<Vector64<Single>>() / sizeof(Single); private static readonly int Op2ElementCount = Unsafe.SizeOf<Vector64<Single>>() / sizeof(Single); private static readonly int RetElementCount = Unsafe.SizeOf<Vector64<Single>>() / sizeof(Single); private static Single[] _data1 = new Single[Op1ElementCount]; private static Single[] _data2 = new Single[Op2ElementCount]; private static Vector64<Single> _clsVar1; private static Vector64<Single> _clsVar2; private Vector64<Single> _fld1; private Vector64<Single> _fld2; private DataTable _dataTable; static SimpleBinaryOpTest__MinNumberPairwise_Vector64_Single() { for (var i = 0; i < Op1ElementCount; i++) { _data1[i] = TestLibrary.Generator.GetSingle(); 
} Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector64<Single>, byte>(ref _clsVar1), ref Unsafe.As<Single, byte>(ref _data1[0]), (uint)Unsafe.SizeOf<Vector64<Single>>()); for (var i = 0; i < Op2ElementCount; i++) { _data2[i] = TestLibrary.Generator.GetSingle(); } Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector64<Single>, byte>(ref _clsVar2), ref Unsafe.As<Single, byte>(ref _data2[0]), (uint)Unsafe.SizeOf<Vector64<Single>>()); } public SimpleBinaryOpTest__MinNumberPairwise_Vector64_Single() { Succeeded = true; for (var i = 0; i < Op1ElementCount; i++) { _data1[i] = TestLibrary.Generator.GetSingle(); } Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector64<Single>, byte>(ref _fld1), ref Unsafe.As<Single, byte>(ref _data1[0]), (uint)Unsafe.SizeOf<Vector64<Single>>()); for (var i = 0; i < Op2ElementCount; i++) { _data2[i] = TestLibrary.Generator.GetSingle(); } Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector64<Single>, byte>(ref _fld2), ref Unsafe.As<Single, byte>(ref _data2[0]), (uint)Unsafe.SizeOf<Vector64<Single>>()); for (var i = 0; i < Op1ElementCount; i++) { _data1[i] = TestLibrary.Generator.GetSingle(); } for (var i = 0; i < Op2ElementCount; i++) { _data2[i] = TestLibrary.Generator.GetSingle(); } _dataTable = new DataTable(_data1, _data2, new Single[RetElementCount], LargestVectorSize); } public bool IsSupported => AdvSimd.Arm64.IsSupported; public bool Succeeded { get; set; } public void RunBasicScenario_UnsafeRead() { TestLibrary.TestFramework.BeginScenario(nameof(RunBasicScenario_UnsafeRead)); var result = AdvSimd.Arm64.MinNumberPairwise( Unsafe.Read<Vector64<Single>>(_dataTable.inArray1Ptr), Unsafe.Read<Vector64<Single>>(_dataTable.inArray2Ptr) ); Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(_dataTable.inArray1Ptr, _dataTable.inArray2Ptr, _dataTable.outArrayPtr); } public void RunBasicScenario_Load() { TestLibrary.TestFramework.BeginScenario(nameof(RunBasicScenario_Load)); var result = AdvSimd.Arm64.MinNumberPairwise( AdvSimd.LoadVector64((Single*)(_dataTable.inArray1Ptr)), AdvSimd.LoadVector64((Single*)(_dataTable.inArray2Ptr)) ); Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(_dataTable.inArray1Ptr, _dataTable.inArray2Ptr, _dataTable.outArrayPtr); } public void RunReflectionScenario_UnsafeRead() { TestLibrary.TestFramework.BeginScenario(nameof(RunReflectionScenario_UnsafeRead)); var result = typeof(AdvSimd.Arm64).GetMethod(nameof(AdvSimd.Arm64.MinNumberPairwise), new Type[] { typeof(Vector64<Single>), typeof(Vector64<Single>) }) .Invoke(null, new object[] { Unsafe.Read<Vector64<Single>>(_dataTable.inArray1Ptr), Unsafe.Read<Vector64<Single>>(_dataTable.inArray2Ptr) }); Unsafe.Write(_dataTable.outArrayPtr, (Vector64<Single>)(result)); ValidateResult(_dataTable.inArray1Ptr, _dataTable.inArray2Ptr, _dataTable.outArrayPtr); } public void RunReflectionScenario_Load() { TestLibrary.TestFramework.BeginScenario(nameof(RunReflectionScenario_Load)); var result = typeof(AdvSimd.Arm64).GetMethod(nameof(AdvSimd.Arm64.MinNumberPairwise), new Type[] { typeof(Vector64<Single>), typeof(Vector64<Single>) }) .Invoke(null, new object[] { AdvSimd.LoadVector64((Single*)(_dataTable.inArray1Ptr)), AdvSimd.LoadVector64((Single*)(_dataTable.inArray2Ptr)) }); Unsafe.Write(_dataTable.outArrayPtr, (Vector64<Single>)(result)); ValidateResult(_dataTable.inArray1Ptr, _dataTable.inArray2Ptr, _dataTable.outArrayPtr); } public void RunClsVarScenario() { TestLibrary.TestFramework.BeginScenario(nameof(RunClsVarScenario)); var result = AdvSimd.Arm64.MinNumberPairwise( _clsVar1, _clsVar2 ); 
Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(_clsVar1, _clsVar2, _dataTable.outArrayPtr); } public void RunClsVarScenario_Load() { TestLibrary.TestFramework.BeginScenario(nameof(RunClsVarScenario_Load)); fixed (Vector64<Single>* pClsVar1 = &_clsVar1) fixed (Vector64<Single>* pClsVar2 = &_clsVar2) { var result = AdvSimd.Arm64.MinNumberPairwise( AdvSimd.LoadVector64((Single*)(pClsVar1)), AdvSimd.LoadVector64((Single*)(pClsVar2)) ); Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(_clsVar1, _clsVar2, _dataTable.outArrayPtr); } } public void RunLclVarScenario_UnsafeRead() { TestLibrary.TestFramework.BeginScenario(nameof(RunLclVarScenario_UnsafeRead)); var op1 = Unsafe.Read<Vector64<Single>>(_dataTable.inArray1Ptr); var op2 = Unsafe.Read<Vector64<Single>>(_dataTable.inArray2Ptr); var result = AdvSimd.Arm64.MinNumberPairwise(op1, op2); Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(op1, op2, _dataTable.outArrayPtr); } public void RunLclVarScenario_Load() { TestLibrary.TestFramework.BeginScenario(nameof(RunLclVarScenario_Load)); var op1 = AdvSimd.LoadVector64((Single*)(_dataTable.inArray1Ptr)); var op2 = AdvSimd.LoadVector64((Single*)(_dataTable.inArray2Ptr)); var result = AdvSimd.Arm64.MinNumberPairwise(op1, op2); Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(op1, op2, _dataTable.outArrayPtr); } public void RunClassLclFldScenario() { TestLibrary.TestFramework.BeginScenario(nameof(RunClassLclFldScenario)); var test = new SimpleBinaryOpTest__MinNumberPairwise_Vector64_Single(); var result = AdvSimd.Arm64.MinNumberPairwise(test._fld1, test._fld2); Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(test._fld1, test._fld2, _dataTable.outArrayPtr); } public void RunClassLclFldScenario_Load() { TestLibrary.TestFramework.BeginScenario(nameof(RunClassLclFldScenario_Load)); var test = new SimpleBinaryOpTest__MinNumberPairwise_Vector64_Single(); fixed (Vector64<Single>* pFld1 = &test._fld1) fixed (Vector64<Single>* pFld2 = &test._fld2) { var result = AdvSimd.Arm64.MinNumberPairwise( AdvSimd.LoadVector64((Single*)(pFld1)), AdvSimd.LoadVector64((Single*)(pFld2)) ); Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(test._fld1, test._fld2, _dataTable.outArrayPtr); } } public void RunClassFldScenario() { TestLibrary.TestFramework.BeginScenario(nameof(RunClassFldScenario)); var result = AdvSimd.Arm64.MinNumberPairwise(_fld1, _fld2); Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(_fld1, _fld2, _dataTable.outArrayPtr); } public void RunClassFldScenario_Load() { TestLibrary.TestFramework.BeginScenario(nameof(RunClassFldScenario_Load)); fixed (Vector64<Single>* pFld1 = &_fld1) fixed (Vector64<Single>* pFld2 = &_fld2) { var result = AdvSimd.Arm64.MinNumberPairwise( AdvSimd.LoadVector64((Single*)(pFld1)), AdvSimd.LoadVector64((Single*)(pFld2)) ); Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(_fld1, _fld2, _dataTable.outArrayPtr); } } public void RunStructLclFldScenario() { TestLibrary.TestFramework.BeginScenario(nameof(RunStructLclFldScenario)); var test = TestStruct.Create(); var result = AdvSimd.Arm64.MinNumberPairwise(test._fld1, test._fld2); Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(test._fld1, test._fld2, _dataTable.outArrayPtr); } public void RunStructLclFldScenario_Load() { TestLibrary.TestFramework.BeginScenario(nameof(RunStructLclFldScenario_Load)); var test = TestStruct.Create(); var result = AdvSimd.Arm64.MinNumberPairwise( AdvSimd.LoadVector64((Single*)(&test._fld1)), 
AdvSimd.LoadVector64((Single*)(&test._fld2)) ); Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(test._fld1, test._fld2, _dataTable.outArrayPtr); } public void RunStructFldScenario() { TestLibrary.TestFramework.BeginScenario(nameof(RunStructFldScenario)); var test = TestStruct.Create(); test.RunStructFldScenario(this); } public void RunStructFldScenario_Load() { TestLibrary.TestFramework.BeginScenario(nameof(RunStructFldScenario_Load)); var test = TestStruct.Create(); test.RunStructFldScenario_Load(this); } public void RunUnsupportedScenario() { TestLibrary.TestFramework.BeginScenario(nameof(RunUnsupportedScenario)); bool succeeded = false; try { RunBasicScenario_UnsafeRead(); } catch (PlatformNotSupportedException) { succeeded = true; } if (!succeeded) { Succeeded = false; } } private void ValidateResult(Vector64<Single> op1, Vector64<Single> op2, void* result, [CallerMemberName] string method = "") { Single[] inArray1 = new Single[Op1ElementCount]; Single[] inArray2 = new Single[Op2ElementCount]; Single[] outArray = new Single[RetElementCount]; Unsafe.WriteUnaligned(ref Unsafe.As<Single, byte>(ref inArray1[0]), op1); Unsafe.WriteUnaligned(ref Unsafe.As<Single, byte>(ref inArray2[0]), op2); Unsafe.CopyBlockUnaligned(ref Unsafe.As<Single, byte>(ref outArray[0]), ref Unsafe.AsRef<byte>(result), (uint)Unsafe.SizeOf<Vector64<Single>>()); ValidateResult(inArray1, inArray2, outArray, method); } private void ValidateResult(void* op1, void* op2, void* result, [CallerMemberName] string method = "") { Single[] inArray1 = new Single[Op1ElementCount]; Single[] inArray2 = new Single[Op2ElementCount]; Single[] outArray = new Single[RetElementCount]; Unsafe.CopyBlockUnaligned(ref Unsafe.As<Single, byte>(ref inArray1[0]), ref Unsafe.AsRef<byte>(op1), (uint)Unsafe.SizeOf<Vector64<Single>>()); Unsafe.CopyBlockUnaligned(ref Unsafe.As<Single, byte>(ref inArray2[0]), ref Unsafe.AsRef<byte>(op2), (uint)Unsafe.SizeOf<Vector64<Single>>()); Unsafe.CopyBlockUnaligned(ref Unsafe.As<Single, byte>(ref outArray[0]), ref Unsafe.AsRef<byte>(result), (uint)Unsafe.SizeOf<Vector64<Single>>()); ValidateResult(inArray1, inArray2, outArray, method); } private void ValidateResult(Single[] left, Single[] right, Single[] result, [CallerMemberName] string method = "") { bool succeeded = true; for (var i = 0; i < RetElementCount; i++) { if (BitConverter.SingleToInt32Bits(Helpers.MinNumberPairwise(left, right, i)) != BitConverter.SingleToInt32Bits(result[i])) { succeeded = false; break; } } if (!succeeded) { TestLibrary.TestFramework.LogInformation($"{nameof(AdvSimd.Arm64)}.{nameof(AdvSimd.Arm64.MinNumberPairwise)}<Single>(Vector64<Single>, Vector64<Single>): {method} failed:"); TestLibrary.TestFramework.LogInformation($" left: ({string.Join(", ", left)})"); TestLibrary.TestFramework.LogInformation($" right: ({string.Join(", ", right)})"); TestLibrary.TestFramework.LogInformation($" result: ({string.Join(", ", result)})"); TestLibrary.TestFramework.LogInformation(string.Empty); Succeeded = false; } } } }
// Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. /****************************************************************************** * This file is auto-generated from a template file by the GenerateTests.csx * * script in tests\src\JIT\HardwareIntrinsics.Arm\Shared. In order to make * * changes, please update the corresponding template and run according to the * * directions listed in the file. * ******************************************************************************/ using System; using System.Runtime.CompilerServices; using System.Runtime.InteropServices; using System.Runtime.Intrinsics; using System.Runtime.Intrinsics.Arm; namespace JIT.HardwareIntrinsics.Arm { public static partial class Program { private static void MinNumberPairwise_Vector64_Single() { var test = new SimpleBinaryOpTest__MinNumberPairwise_Vector64_Single(); if (test.IsSupported) { // Validates basic functionality works, using Unsafe.Read test.RunBasicScenario_UnsafeRead(); if (AdvSimd.IsSupported) { // Validates basic functionality works, using Load test.RunBasicScenario_Load(); } // Validates calling via reflection works, using Unsafe.Read test.RunReflectionScenario_UnsafeRead(); if (AdvSimd.IsSupported) { // Validates calling via reflection works, using Load test.RunReflectionScenario_Load(); } // Validates passing a static member works test.RunClsVarScenario(); if (AdvSimd.IsSupported) { // Validates passing a static member works, using pinning and Load test.RunClsVarScenario_Load(); } // Validates passing a local works, using Unsafe.Read test.RunLclVarScenario_UnsafeRead(); if (AdvSimd.IsSupported) { // Validates passing a local works, using Load test.RunLclVarScenario_Load(); } // Validates passing the field of a local class works test.RunClassLclFldScenario(); if (AdvSimd.IsSupported) { // Validates passing the field of a local class works, using pinning and Load test.RunClassLclFldScenario_Load(); } // Validates passing an instance member of a class works test.RunClassFldScenario(); if (AdvSimd.IsSupported) { // Validates passing an instance member of a class works, using pinning and Load test.RunClassFldScenario_Load(); } // Validates passing the field of a local struct works test.RunStructLclFldScenario(); if (AdvSimd.IsSupported) { // Validates passing the field of a local struct works, using pinning and Load test.RunStructLclFldScenario_Load(); } // Validates passing an instance member of a struct works test.RunStructFldScenario(); if (AdvSimd.IsSupported) { // Validates passing an instance member of a struct works, using pinning and Load test.RunStructFldScenario_Load(); } } else { // Validates we throw on unsupported hardware test.RunUnsupportedScenario(); } if (!test.Succeeded) { throw new Exception("One or more scenarios did not complete as expected."); } } } public sealed unsafe class SimpleBinaryOpTest__MinNumberPairwise_Vector64_Single { private struct DataTable { private byte[] inArray1; private byte[] inArray2; private byte[] outArray; private GCHandle inHandle1; private GCHandle inHandle2; private GCHandle outHandle; private ulong alignment; public DataTable(Single[] inArray1, Single[] inArray2, Single[] outArray, int alignment) { int sizeOfinArray1 = inArray1.Length * Unsafe.SizeOf<Single>(); int sizeOfinArray2 = inArray2.Length * Unsafe.SizeOf<Single>(); int sizeOfoutArray = outArray.Length * Unsafe.SizeOf<Single>(); if ((alignment != 16 && alignment != 8) || (alignment * 2) < sizeOfinArray1 || (alignment 
* 2) < sizeOfinArray2 || (alignment * 2) < sizeOfoutArray) { throw new ArgumentException("Invalid value of alignment"); } this.inArray1 = new byte[alignment * 2]; this.inArray2 = new byte[alignment * 2]; this.outArray = new byte[alignment * 2]; this.inHandle1 = GCHandle.Alloc(this.inArray1, GCHandleType.Pinned); this.inHandle2 = GCHandle.Alloc(this.inArray2, GCHandleType.Pinned); this.outHandle = GCHandle.Alloc(this.outArray, GCHandleType.Pinned); this.alignment = (ulong)alignment; Unsafe.CopyBlockUnaligned(ref Unsafe.AsRef<byte>(inArray1Ptr), ref Unsafe.As<Single, byte>(ref inArray1[0]), (uint)sizeOfinArray1); Unsafe.CopyBlockUnaligned(ref Unsafe.AsRef<byte>(inArray2Ptr), ref Unsafe.As<Single, byte>(ref inArray2[0]), (uint)sizeOfinArray2); } public void* inArray1Ptr => Align((byte*)(inHandle1.AddrOfPinnedObject().ToPointer()), alignment); public void* inArray2Ptr => Align((byte*)(inHandle2.AddrOfPinnedObject().ToPointer()), alignment); public void* outArrayPtr => Align((byte*)(outHandle.AddrOfPinnedObject().ToPointer()), alignment); public void Dispose() { inHandle1.Free(); inHandle2.Free(); outHandle.Free(); } private static unsafe void* Align(byte* buffer, ulong expectedAlignment) { return (void*)(((ulong)buffer + expectedAlignment - 1) & ~(expectedAlignment - 1)); } } private struct TestStruct { public Vector64<Single> _fld1; public Vector64<Single> _fld2; public static TestStruct Create() { var testStruct = new TestStruct(); for (var i = 0; i < Op1ElementCount; i++) { _data1[i] = TestLibrary.Generator.GetSingle(); } Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector64<Single>, byte>(ref testStruct._fld1), ref Unsafe.As<Single, byte>(ref _data1[0]), (uint)Unsafe.SizeOf<Vector64<Single>>()); for (var i = 0; i < Op2ElementCount; i++) { _data2[i] = TestLibrary.Generator.GetSingle(); } Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector64<Single>, byte>(ref testStruct._fld2), ref Unsafe.As<Single, byte>(ref _data2[0]), (uint)Unsafe.SizeOf<Vector64<Single>>()); return testStruct; } public void RunStructFldScenario(SimpleBinaryOpTest__MinNumberPairwise_Vector64_Single testClass) { var result = AdvSimd.Arm64.MinNumberPairwise(_fld1, _fld2); Unsafe.Write(testClass._dataTable.outArrayPtr, result); testClass.ValidateResult(_fld1, _fld2, testClass._dataTable.outArrayPtr); } public void RunStructFldScenario_Load(SimpleBinaryOpTest__MinNumberPairwise_Vector64_Single testClass) { fixed (Vector64<Single>* pFld1 = &_fld1) fixed (Vector64<Single>* pFld2 = &_fld2) { var result = AdvSimd.Arm64.MinNumberPairwise( AdvSimd.LoadVector64((Single*)(pFld1)), AdvSimd.LoadVector64((Single*)(pFld2)) ); Unsafe.Write(testClass._dataTable.outArrayPtr, result); testClass.ValidateResult(_fld1, _fld2, testClass._dataTable.outArrayPtr); } } } private static readonly int LargestVectorSize = 8; private static readonly int Op1ElementCount = Unsafe.SizeOf<Vector64<Single>>() / sizeof(Single); private static readonly int Op2ElementCount = Unsafe.SizeOf<Vector64<Single>>() / sizeof(Single); private static readonly int RetElementCount = Unsafe.SizeOf<Vector64<Single>>() / sizeof(Single); private static Single[] _data1 = new Single[Op1ElementCount]; private static Single[] _data2 = new Single[Op2ElementCount]; private static Vector64<Single> _clsVar1; private static Vector64<Single> _clsVar2; private Vector64<Single> _fld1; private Vector64<Single> _fld2; private DataTable _dataTable; static SimpleBinaryOpTest__MinNumberPairwise_Vector64_Single() { for (var i = 0; i < Op1ElementCount; i++) { _data1[i] = TestLibrary.Generator.GetSingle(); 
} Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector64<Single>, byte>(ref _clsVar1), ref Unsafe.As<Single, byte>(ref _data1[0]), (uint)Unsafe.SizeOf<Vector64<Single>>()); for (var i = 0; i < Op2ElementCount; i++) { _data2[i] = TestLibrary.Generator.GetSingle(); } Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector64<Single>, byte>(ref _clsVar2), ref Unsafe.As<Single, byte>(ref _data2[0]), (uint)Unsafe.SizeOf<Vector64<Single>>()); } public SimpleBinaryOpTest__MinNumberPairwise_Vector64_Single() { Succeeded = true; for (var i = 0; i < Op1ElementCount; i++) { _data1[i] = TestLibrary.Generator.GetSingle(); } Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector64<Single>, byte>(ref _fld1), ref Unsafe.As<Single, byte>(ref _data1[0]), (uint)Unsafe.SizeOf<Vector64<Single>>()); for (var i = 0; i < Op2ElementCount; i++) { _data2[i] = TestLibrary.Generator.GetSingle(); } Unsafe.CopyBlockUnaligned(ref Unsafe.As<Vector64<Single>, byte>(ref _fld2), ref Unsafe.As<Single, byte>(ref _data2[0]), (uint)Unsafe.SizeOf<Vector64<Single>>()); for (var i = 0; i < Op1ElementCount; i++) { _data1[i] = TestLibrary.Generator.GetSingle(); } for (var i = 0; i < Op2ElementCount; i++) { _data2[i] = TestLibrary.Generator.GetSingle(); } _dataTable = new DataTable(_data1, _data2, new Single[RetElementCount], LargestVectorSize); } public bool IsSupported => AdvSimd.Arm64.IsSupported; public bool Succeeded { get; set; } public void RunBasicScenario_UnsafeRead() { TestLibrary.TestFramework.BeginScenario(nameof(RunBasicScenario_UnsafeRead)); var result = AdvSimd.Arm64.MinNumberPairwise( Unsafe.Read<Vector64<Single>>(_dataTable.inArray1Ptr), Unsafe.Read<Vector64<Single>>(_dataTable.inArray2Ptr) ); Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(_dataTable.inArray1Ptr, _dataTable.inArray2Ptr, _dataTable.outArrayPtr); } public void RunBasicScenario_Load() { TestLibrary.TestFramework.BeginScenario(nameof(RunBasicScenario_Load)); var result = AdvSimd.Arm64.MinNumberPairwise( AdvSimd.LoadVector64((Single*)(_dataTable.inArray1Ptr)), AdvSimd.LoadVector64((Single*)(_dataTable.inArray2Ptr)) ); Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(_dataTable.inArray1Ptr, _dataTable.inArray2Ptr, _dataTable.outArrayPtr); } public void RunReflectionScenario_UnsafeRead() { TestLibrary.TestFramework.BeginScenario(nameof(RunReflectionScenario_UnsafeRead)); var result = typeof(AdvSimd.Arm64).GetMethod(nameof(AdvSimd.Arm64.MinNumberPairwise), new Type[] { typeof(Vector64<Single>), typeof(Vector64<Single>) }) .Invoke(null, new object[] { Unsafe.Read<Vector64<Single>>(_dataTable.inArray1Ptr), Unsafe.Read<Vector64<Single>>(_dataTable.inArray2Ptr) }); Unsafe.Write(_dataTable.outArrayPtr, (Vector64<Single>)(result)); ValidateResult(_dataTable.inArray1Ptr, _dataTable.inArray2Ptr, _dataTable.outArrayPtr); } public void RunReflectionScenario_Load() { TestLibrary.TestFramework.BeginScenario(nameof(RunReflectionScenario_Load)); var result = typeof(AdvSimd.Arm64).GetMethod(nameof(AdvSimd.Arm64.MinNumberPairwise), new Type[] { typeof(Vector64<Single>), typeof(Vector64<Single>) }) .Invoke(null, new object[] { AdvSimd.LoadVector64((Single*)(_dataTable.inArray1Ptr)), AdvSimd.LoadVector64((Single*)(_dataTable.inArray2Ptr)) }); Unsafe.Write(_dataTable.outArrayPtr, (Vector64<Single>)(result)); ValidateResult(_dataTable.inArray1Ptr, _dataTable.inArray2Ptr, _dataTable.outArrayPtr); } public void RunClsVarScenario() { TestLibrary.TestFramework.BeginScenario(nameof(RunClsVarScenario)); var result = AdvSimd.Arm64.MinNumberPairwise( _clsVar1, _clsVar2 ); 
Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(_clsVar1, _clsVar2, _dataTable.outArrayPtr); } public void RunClsVarScenario_Load() { TestLibrary.TestFramework.BeginScenario(nameof(RunClsVarScenario_Load)); fixed (Vector64<Single>* pClsVar1 = &_clsVar1) fixed (Vector64<Single>* pClsVar2 = &_clsVar2) { var result = AdvSimd.Arm64.MinNumberPairwise( AdvSimd.LoadVector64((Single*)(pClsVar1)), AdvSimd.LoadVector64((Single*)(pClsVar2)) ); Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(_clsVar1, _clsVar2, _dataTable.outArrayPtr); } } public void RunLclVarScenario_UnsafeRead() { TestLibrary.TestFramework.BeginScenario(nameof(RunLclVarScenario_UnsafeRead)); var op1 = Unsafe.Read<Vector64<Single>>(_dataTable.inArray1Ptr); var op2 = Unsafe.Read<Vector64<Single>>(_dataTable.inArray2Ptr); var result = AdvSimd.Arm64.MinNumberPairwise(op1, op2); Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(op1, op2, _dataTable.outArrayPtr); } public void RunLclVarScenario_Load() { TestLibrary.TestFramework.BeginScenario(nameof(RunLclVarScenario_Load)); var op1 = AdvSimd.LoadVector64((Single*)(_dataTable.inArray1Ptr)); var op2 = AdvSimd.LoadVector64((Single*)(_dataTable.inArray2Ptr)); var result = AdvSimd.Arm64.MinNumberPairwise(op1, op2); Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(op1, op2, _dataTable.outArrayPtr); } public void RunClassLclFldScenario() { TestLibrary.TestFramework.BeginScenario(nameof(RunClassLclFldScenario)); var test = new SimpleBinaryOpTest__MinNumberPairwise_Vector64_Single(); var result = AdvSimd.Arm64.MinNumberPairwise(test._fld1, test._fld2); Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(test._fld1, test._fld2, _dataTable.outArrayPtr); } public void RunClassLclFldScenario_Load() { TestLibrary.TestFramework.BeginScenario(nameof(RunClassLclFldScenario_Load)); var test = new SimpleBinaryOpTest__MinNumberPairwise_Vector64_Single(); fixed (Vector64<Single>* pFld1 = &test._fld1) fixed (Vector64<Single>* pFld2 = &test._fld2) { var result = AdvSimd.Arm64.MinNumberPairwise( AdvSimd.LoadVector64((Single*)(pFld1)), AdvSimd.LoadVector64((Single*)(pFld2)) ); Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(test._fld1, test._fld2, _dataTable.outArrayPtr); } } public void RunClassFldScenario() { TestLibrary.TestFramework.BeginScenario(nameof(RunClassFldScenario)); var result = AdvSimd.Arm64.MinNumberPairwise(_fld1, _fld2); Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(_fld1, _fld2, _dataTable.outArrayPtr); } public void RunClassFldScenario_Load() { TestLibrary.TestFramework.BeginScenario(nameof(RunClassFldScenario_Load)); fixed (Vector64<Single>* pFld1 = &_fld1) fixed (Vector64<Single>* pFld2 = &_fld2) { var result = AdvSimd.Arm64.MinNumberPairwise( AdvSimd.LoadVector64((Single*)(pFld1)), AdvSimd.LoadVector64((Single*)(pFld2)) ); Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(_fld1, _fld2, _dataTable.outArrayPtr); } } public void RunStructLclFldScenario() { TestLibrary.TestFramework.BeginScenario(nameof(RunStructLclFldScenario)); var test = TestStruct.Create(); var result = AdvSimd.Arm64.MinNumberPairwise(test._fld1, test._fld2); Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(test._fld1, test._fld2, _dataTable.outArrayPtr); } public void RunStructLclFldScenario_Load() { TestLibrary.TestFramework.BeginScenario(nameof(RunStructLclFldScenario_Load)); var test = TestStruct.Create(); var result = AdvSimd.Arm64.MinNumberPairwise( AdvSimd.LoadVector64((Single*)(&test._fld1)), 
AdvSimd.LoadVector64((Single*)(&test._fld2)) ); Unsafe.Write(_dataTable.outArrayPtr, result); ValidateResult(test._fld1, test._fld2, _dataTable.outArrayPtr); } public void RunStructFldScenario() { TestLibrary.TestFramework.BeginScenario(nameof(RunStructFldScenario)); var test = TestStruct.Create(); test.RunStructFldScenario(this); } public void RunStructFldScenario_Load() { TestLibrary.TestFramework.BeginScenario(nameof(RunStructFldScenario_Load)); var test = TestStruct.Create(); test.RunStructFldScenario_Load(this); } public void RunUnsupportedScenario() { TestLibrary.TestFramework.BeginScenario(nameof(RunUnsupportedScenario)); bool succeeded = false; try { RunBasicScenario_UnsafeRead(); } catch (PlatformNotSupportedException) { succeeded = true; } if (!succeeded) { Succeeded = false; } } private void ValidateResult(Vector64<Single> op1, Vector64<Single> op2, void* result, [CallerMemberName] string method = "") { Single[] inArray1 = new Single[Op1ElementCount]; Single[] inArray2 = new Single[Op2ElementCount]; Single[] outArray = new Single[RetElementCount]; Unsafe.WriteUnaligned(ref Unsafe.As<Single, byte>(ref inArray1[0]), op1); Unsafe.WriteUnaligned(ref Unsafe.As<Single, byte>(ref inArray2[0]), op2); Unsafe.CopyBlockUnaligned(ref Unsafe.As<Single, byte>(ref outArray[0]), ref Unsafe.AsRef<byte>(result), (uint)Unsafe.SizeOf<Vector64<Single>>()); ValidateResult(inArray1, inArray2, outArray, method); } private void ValidateResult(void* op1, void* op2, void* result, [CallerMemberName] string method = "") { Single[] inArray1 = new Single[Op1ElementCount]; Single[] inArray2 = new Single[Op2ElementCount]; Single[] outArray = new Single[RetElementCount]; Unsafe.CopyBlockUnaligned(ref Unsafe.As<Single, byte>(ref inArray1[0]), ref Unsafe.AsRef<byte>(op1), (uint)Unsafe.SizeOf<Vector64<Single>>()); Unsafe.CopyBlockUnaligned(ref Unsafe.As<Single, byte>(ref inArray2[0]), ref Unsafe.AsRef<byte>(op2), (uint)Unsafe.SizeOf<Vector64<Single>>()); Unsafe.CopyBlockUnaligned(ref Unsafe.As<Single, byte>(ref outArray[0]), ref Unsafe.AsRef<byte>(result), (uint)Unsafe.SizeOf<Vector64<Single>>()); ValidateResult(inArray1, inArray2, outArray, method); } private void ValidateResult(Single[] left, Single[] right, Single[] result, [CallerMemberName] string method = "") { bool succeeded = true; for (var i = 0; i < RetElementCount; i++) { if (BitConverter.SingleToInt32Bits(Helpers.MinNumberPairwise(left, right, i)) != BitConverter.SingleToInt32Bits(result[i])) { succeeded = false; break; } } if (!succeeded) { TestLibrary.TestFramework.LogInformation($"{nameof(AdvSimd.Arm64)}.{nameof(AdvSimd.Arm64.MinNumberPairwise)}<Single>(Vector64<Single>, Vector64<Single>): {method} failed:"); TestLibrary.TestFramework.LogInformation($" left: ({string.Join(", ", left)})"); TestLibrary.TestFramework.LogInformation($" right: ({string.Join(", ", right)})"); TestLibrary.TestFramework.LogInformation($" result: ({string.Join(", ", result)})"); TestLibrary.TestFramework.LogInformation(string.Empty); Succeeded = false; } } } }
-1
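The test above validates `AdvSimd.Arm64.MinNumberPairwise` against `Helpers.MinNumberPairwise`. For `Vector64<Single>` the instruction concatenates both operands and takes the IEEE 754 minNum of each adjacent pair, where a NaN operand is ignored in favor of the numeric one. A scalar sketch of those semantics follows; this is a reconstruction for illustration, not the test library's actual helper.

```csharp
using System;

static class MinNumberPairwiseSketch
{
    // IEEE 754 minNum: a NaN operand yields the other (numeric) operand.
    static float MinNumber(float x, float y)
    {
        if (float.IsNaN(x)) return y;
        if (float.IsNaN(y)) return x;
        return MathF.Min(x, y); // MathF.Min treats -0.0f as smaller than +0.0f
    }

    // Pairwise over the concatenation of left and right (two elements each):
    // result[0] = minNum(left[0], left[1]), result[1] = minNum(right[0], right[1]).
    public static float[] MinNumberPairwise(float[] left, float[] right)
    {
        return new[] { MinNumber(left[0], left[1]), MinNumber(right[0], right[1]) };
    }
}
```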
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently the following properties categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref'. - `IsGeneratorProject`: The project's parent directory is 'gen'. - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name ends in '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name ends in '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and IsTestProject is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK. - `IsRuntimeAssembly`: True when all of the above are false. - `IsSourceProject`: True when the project's parent directory is 'src'. The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally, the tree shows that a generator project's parent directory often isn't 'gen' but the project's own name, in cases where multiple generator projects live under 'gen'. Such projects were therefore categorized incorrectly, and developers had to set the 'IsGeneratorProject' property manually. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref'. - `IsGeneratorProject`: **The project's parent directory, or its parent's parent directory, is 'gen'.** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name ends in '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name ends in '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and both IsTestProject **and IsTrimmingTestProject** are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK. - `IsSourceProject`: **True when all of the above are false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently the following properties categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref'. - `IsGeneratorProject`: The project's parent directory is 'gen'. - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name ends in '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name ends in '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and IsTestProject is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK. - `IsRuntimeAssembly`: True when all of the above are false. - `IsSourceProject`: True when the project's parent directory is 'src'. The IsRuntimeAssembly and IsSourceProject properties are ambiguous, and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally, the tree shows that a generator project's parent directory often isn't 'gen' but the project's own name, in cases where multiple generator projects live under 'gen'. Such projects were therefore categorized incorrectly, and developers had to set the 'IsGeneratorProject' property manually. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref'. - `IsGeneratorProject`: **The project's parent directory, or its parent's parent directory, is 'gen'.** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name ends in '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject, but the project name ends in '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and both IsTestProject **and IsTrimmingTestProject** are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK. - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK. - `IsSourceProject`: **True when all of the above are false.**
./src/libraries/System.Private.CoreLib/src/System/IO/InvalidDataException.cs
// Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. using System.Runtime.Serialization; namespace System.IO { /// <summary>The exception that is thrown when a data stream is in an invalid format.</summary> /// <remarks>An <see cref="System.IO.InvalidDataException" /> is thrown when invalid data is detected in the data stream.</remarks> [Serializable] [System.Runtime.CompilerServices.TypeForwardedFrom("System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089")] public sealed class InvalidDataException : SystemException { /// <summary>Initializes a new instance of the <see cref="System.IO.InvalidDataException" /> class.</summary> /// <remarks>This constructor initializes the <see cref="System.Exception.Message" /> property of the new instance to a system-supplied message that describes the error, such as "An invalid argument was specified." This message is localized based on the current system culture.</remarks> public InvalidDataException() : base(SR.GenericInvalidData) { } /// <summary>Initializes a new instance of the <see cref="System.IO.InvalidDataException" /> class with a specified error message.</summary> /// <param name="message">The error message that explains the reason for the exception.</param> /// <remarks>This constructor initializes the <see cref="System.Exception.Message" /> property of the new instance to a system-supplied message that describes the error, such as "An invalid argument was specified." This message is localized based on the current system culture.</remarks> public InvalidDataException(string? message) : base(message) { } /// <summary>Initializes a new instance of the <see cref="System.IO.InvalidDataException" /> class with a reference to the inner exception that is the cause of this exception.</summary> /// <param name="message">The error message that explains the reason for the exception.</param> /// <param name="innerException">The exception that is the cause of the current exception. If the <paramref name="innerException" /> parameter is not <see langword="null" />, the current exception is raised in a <see langword="catch" /> block that handles the inner exception.</param> /// <remarks>This constructor initializes the <see cref="System.Exception.Message" /> property of the new instance using the value of the <paramref name="message" /> parameter. The content of the <paramref name="message" /> parameter is intended to be understood by humans. The caller of this constructor is required to ensure that this string has been localized for the current system culture. /// An exception that is thrown as a direct result of a previous exception should include a reference to the previous exception in the <see cref="System.Exception.InnerException" /> property. The <see cref="System.Exception.InnerException" /> property returns the same value that is passed into the constructor, or <see langword="null" /> if the <see cref="System.Exception.InnerException" /> property does not supply the inner exception value to the constructor.</remarks> public InvalidDataException(string? message, Exception? innerException) : base(message, innerException) { } private InvalidDataException(SerializationInfo info, StreamingContext context) : base(info, context) { } } }
// Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. using System.Runtime.Serialization; namespace System.IO { /// <summary>The exception that is thrown when a data stream is in an invalid format.</summary> /// <remarks>An <see cref="System.IO.InvalidDataException" /> is thrown when invalid data is detected in the data stream.</remarks> [Serializable] [System.Runtime.CompilerServices.TypeForwardedFrom("System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089")] public sealed class InvalidDataException : SystemException { /// <summary>Initializes a new instance of the <see cref="System.IO.InvalidDataException" /> class.</summary> /// <remarks>This constructor initializes the <see cref="System.Exception.Message" /> property of the new instance to a system-supplied message that describes the error, such as "An invalid argument was specified." This message is localized based on the current system culture.</remarks> public InvalidDataException() : base(SR.GenericInvalidData) { } /// <summary>Initializes a new instance of the <see cref="System.IO.InvalidDataException" /> class with a specified error message.</summary> /// <param name="message">The error message that explains the reason for the exception.</param> /// <remarks>This constructor initializes the <see cref="System.Exception.Message" /> property of the new instance to a system-supplied message that describes the error, such as "An invalid argument was specified." This message is localized based on the current system culture.</remarks> public InvalidDataException(string? message) : base(message) { } /// <summary>Initializes a new instance of the <see cref="System.IO.InvalidDataException" /> class with a reference to the inner exception that is the cause of this exception.</summary> /// <param name="message">The error message that explains the reason for the exception.</param> /// <param name="innerException">The exception that is the cause of the current exception. If the <paramref name="innerException" /> parameter is not <see langword="null" />, the current exception is raised in a <see langword="catch" /> block that handles the inner exception.</param> /// <remarks>This constructor initializes the <see cref="System.Exception.Message" /> property of the new instance using the value of the <paramref name="message" /> parameter. The content of the <paramref name="message" /> parameter is intended to be understood by humans. The caller of this constructor is required to ensure that this string has been localized for the current system culture. /// An exception that is thrown as a direct result of a previous exception should include a reference to the previous exception in the <see cref="System.Exception.InnerException" /> property. The <see cref="System.Exception.InnerException" /> property returns the same value that is passed into the constructor, or <see langword="null" /> if the <see cref="System.Exception.InnerException" /> property does not supply the inner exception value to the constructor.</remarks> public InvalidDataException(string? message, Exception? innerException) : base(message, innerException) { } private InvalidDataException(SerializationInfo info, StreamingContext context) : base(info, context) { } } }
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/libraries/Common/src/Interop/Windows/SspiCli/Interop.KerbLogonSubmitType.cs
// Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. using System; using System.Runtime.InteropServices; internal static partial class Interop { internal static partial class SspiCli { internal enum KERB_LOGON_SUBMIT_TYPE : int { KerbS4ULogon = 12, } } }
// Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. using System; using System.Runtime.InteropServices; internal static partial class Interop { internal static partial class SspiCli { internal enum KERB_LOGON_SUBMIT_TYPE : int { KerbS4ULogon = 12, } } }
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/tests/JIT/Regression/JitBlue/DevDiv_278523/DevDiv_278523_Target_32Bit.ilproj
<Project Sdk="Microsoft.NET.Sdk.IL"> <PropertyGroup> <OutputType>Exe</OutputType> <CLRTestPriority>1</CLRTestPriority> <!-- There is a 32 and 64 version of this test to allow it to be compiled for all targets --> <CLRTestTargetUnsupported Condition="'$(TargetBits)' != '32'">true</CLRTestTargetUnsupported> <DebugType>None</DebugType> <Optimize>True</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="DevDiv_278523_32.il" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk.IL"> <PropertyGroup> <OutputType>Exe</OutputType> <CLRTestPriority>1</CLRTestPriority> <!-- There is a 32 and 64 version of this test to allow it to be compiled for all targets --> <CLRTestTargetUnsupported Condition="'$(TargetBits)' != '32'">true</CLRTestTargetUnsupported> <DebugType>None</DebugType> <Optimize>True</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="DevDiv_278523_32.il" /> </ItemGroup> </Project>
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./docs/pr-builds.md
## PR Builds When submitting a PR to the `dotnet/runtime` repository, various builds will run validation in many areas to ensure we keep productivity and quality high. The `dotnet/runtime` validation system can become overwhelming, as we need to cover a lot of build scenarios and test on all the platforms that we support. To make this more reliable and spend the least amount of time testing what the PR changes actually need, we have various pipelines, required and optional, that are covered in this document. Most of the repository pipelines use a custom mechanism to evaluate paths based on the changes contained in the PR, to try to build/test the least that we can without compromising quality. This is the initial step in every pipeline that depends on this infrastructure, called "Evaluate Paths". In this step you can see the result of the evaluation for each subset of the repository. For more details on which subsets we have based on paths, see [here](https://github.com/dotnet/runtime/blob/513fe2863ad5ec6dc453d223d4b60f787a0ffa78/eng/pipelines/common/evaluate-default-paths.yml). To understand how this mechanism works, you can read this [comment](https://github.com/dotnet/runtime/blob/513fe2863ad5ec6dc453d223d4b60f787a0ffa78/eng/pipelines/evaluate-changed-paths.sh#L3-L12). ### Runtime pipeline This is the "main" pipeline for the runtime product. In this pipeline we include the most critical tests and platforms where we have enough test resources to deliver test results in a reasonable amount of time. The tests executed in this pipeline for runtime and libraries are considered innerloop; they are the tests that are executed when one runs tests locally. For mobile platforms and wasm we run some smoke tests that aim to protect the quality of these platforms. We had to move to a smoke test approach given the hardware and time limitations we encountered; contributors were affected by instability and long wait times for their PRs to finish validation. ### Runtime-dev-innerloop pipeline This pipeline is also required, and its intent is to cover developer innerloop scenarios that could be affected by any change, like running a specific build command or running tests inside Visual Studio, etc. ### Dotnet-linker-tests This is also a required pipeline. Its purpose is to test that the libraries code is linker friendly, meaning that when we trim our libraries using the ILLink, we don't have any trimming bugs, like a method required in a specific scenario being trimmed away by accident. ### Runtime-staging This pipeline runs on every change; however, it behaves a little differently than the other pipelines. This pipeline will not fail if there are test failures, but it will fail if there is a timeout or a build failure. We fail on build failures because we want to protect the developer innerloop (building the repository) for these platforms. The tests will not fail because the intent of this pipeline is to stage new platforms where the test infrastructure is new and we need to test whether we have enough capacity to include that new platform in the "main" runtime pipeline without causing flakiness. Once we analyze the data and a platform is stable when running on PRs in this pipeline for at least a week, it can be promoted either to the `runtime-extra-platforms` pipeline or to the `runtime` pipeline. 
### Runtime-extra-platforms This pipeline does not run by default, as it is not required for a PR, but it runs twice a day, and it can also be invoked on specific PRs by commenting `/azp run runtime-extra-platforms`. However, this pipeline is still an important part of our testing. This pipeline runs innerloop tests on platforms where we don't have enough hardware capacity to run tests (mobile, browser) or on platforms where we believe tests should organically pass based on the coverage we have in the "main" runtime pipeline. For example, in the "main" pipeline we run tests on Ubuntu 21.10, but since we also support Ubuntu 18.04, which is an LTS release, we run tests on Ubuntu 18.04 in this pipeline just to make sure we have healthy tests on the platforms we are releasing a product for. Another concrete scenario is Windows arm64 for libraries tests, where we don't have enough hardware; the JIT is the most important piece to test, as it is what generates the native code to run on that platform, so we run JIT tests on arm64 in the "main" pipeline, but our libraries tests are only run in the `runtime-extra-platforms` pipeline. ### Outerloop pipelines We have various pipelines whose names contain `Outerloop`. These pipelines do not run by default on every PR; they can be invoked using the `/azp run` comment and run on a daily basis to analyze test results. These pipelines run tests that take a very long time, that are not very stable (e.g. some networking tests), or that modify machine state. Such tests are called `Outerloop` tests rather than `innerloop`. ## Rerunning Validation Validation may fail for several reasons: ### Option 1: You have a defect in your PR * Simply push the fix to your PR branch, and validation will start over. ### Option 2: There is a flaky test that is not related to your PR * Your assumption should be that a failed test indicates a problem in your PR. (If we don't operate this way, chaos ensues.) If the test fails when run again, it is almost surely a failure caused by your PR. However, there are occasions where unrelated failures occur. Here are some ways to know: * Perhaps you see the same failure in CI results for unrelated active PRs. * It's a known issue listed in our [big tracking issue](https://github.com/dotnet/runtime/issues/702) or tagged `blocking-clean-ci` [(query here)](https://github.com/dotnet/runtime/issues?utf8=%E2%9C%93&q=is%3Aissue+is%3Aopen+label%3Ablocking-clean-ci+) * It's otherwise beyond any reasonable doubt that your code changes could not have caused this. * If the tests pass on rerun, that may suggest it's not related. * In this situation, you want to re-run but not necessarily rebase on main. * To rerun just the failed leg(s): * Click on any leg. Navigate through the Azure DevOps UI, find the "..." button and choose "Retry failed legs" * Or, on the GitHub Checks tab choose "re-run failed checks". This will not rebase your change. * To rerun all validation: * Add a comment `/azp run runtime` * Or, click on "re-run all checks" in the GitHub Checks tab * Or, simply close and reopen the PR. * If you have established that it is an unrelated failure, please ensure we have an active issue for it. See the [unrelated failure](#what-to-do-if-you-determine-the-failure-is-unrelated) section below. * Whoever merges the PR should be satisfied that the failure is unrelated, is not introduced by the change, and that we are appropriately tracking it. ### Option 3: The state of the main branch HEAD is bad. 
* This is the very rare case where there was a build break in main, and you got unlucky. Hopefully the break has been fixed, and you want CI to rebase your change and rerun validation. * To rebase and rerun all validation: * Add a comment `/azp run runtime` * Or, click on "re-run all checks" in the GitHub Checks tab * Or, simply close and reopen the PR. * Or, amend your commit with `--amend --no-edit` and force push to your branch. ### Additional information: * You can list the available pipelines by adding a comment like `/azp list` or get the available commands by adding a comment like `/azp help`. * In the rare case the license/cla check fails to register a response, it can be rerun by issuing a GET request to `https://cla.dotnetfoundation.org/check/dotnet/runtime?pullRequest={pr_number}`. A successful response may be a redirect to `https://github.com`. * Reach out to the infrastructure team for assistance on [Teams channel](https://teams.microsoft.com/l/channel/19%3ab27b36ecd10a46398da76b02f0411de7%40thread.skype/Infrastructure?groupId=014ca51d-be57-47fa-9628-a15efcc3c376&tenantId=72f988bf-86f1-41af-91ab-2d7cd011db47) (for corpnet users) or on [Gitter](https://gitter.im/dotnet/community) in other cases. ## What to do if you determine the failure is unrelated If you have determined the failure is definitely not caused by changes in your PR, please do this: * Search for an [existing issue](https://github.com/dotnet/runtime/issues). Usually the test method name or (if a crash/hang) the test assembly name are good search parameters. * If there's an existing issue, add a comment with * a) the link to the build * b) the affected configuration (e.g. `net6.0-windows-Release-x64-Windows.81.Amd64.Open`) * c) all console output including the error message and stack trace from the Azure DevOps tab (This is necessary as retention policies are in place that recycle old builds.) * d) if there's a dump file (see Attachments tab in Azure DevOps) include that * If the issue is already closed, reopen it and update the labels to reflect the current failure state. * If there's no existing issue, create an issue with the same information listed above. * Update the original pull request with a comment linking to the new or existing issue. * In a follow-up Pull Request, disable the failing test(s) with the corresponding issue link tracking the disable. * Update the tracking issue with the label `disabled-test`. * For libraries tests add a [`[ActiveIssue(link)]`](https://github.com/dotnet/arcade/blob/master/src/Microsoft.DotNet.XUnitExtensions/src/Attributes/ActiveIssueAttribute.cs) attribute on the test method. You can narrow the disabling down to runtime variant, flavor, and platform. For an example see [File_AppendAllLinesAsync_Encoded](https://github.com/dotnet/runtime/blob/cf49643711ad8aa4685a8054286c1348cef6e1d8/src/libraries/System.IO.FileSystem/tests/File/AppendAsync.cs#L74) * For runtime tests found under `src/tests`, please edit [`issues.targets`](https://github.com/dotnet/runtime/blob/main/src/tests/issues.targets). There are several groups for different types of disabling (mono vs. coreclr, different platforms, different scenarios). Add the folder containing the test and the issue link, mimicking any of the samples in the file. There are plenty of possible bugs, e.g. race conditions, where a failure might highlight a real problem and yet it won't manifest again on a retry. Therefore these steps should be followed for every iteration of the PR build, e.g. before retrying/rebuilding.
## PR Builds When submitting a PR to the `dotnet/runtime` repository, various builds will run validation in many areas to ensure we keep productivity and quality high. The `dotnet/runtime` validation system can become overwhelming, as we need to cover a lot of build scenarios and test on all the platforms that we support. To make this more reliable and spend the least amount of time testing what the PR changes actually need, we have various pipelines, required and optional, that are covered in this document. Most of the repository pipelines use a custom mechanism to evaluate paths based on the changes contained in the PR, to try to build/test the least that we can without compromising quality. This is the initial step in every pipeline that depends on this infrastructure, called "Evaluate Paths". In this step you can see the result of the evaluation for each subset of the repository. For more details on which subsets we have based on paths, see [here](https://github.com/dotnet/runtime/blob/513fe2863ad5ec6dc453d223d4b60f787a0ffa78/eng/pipelines/common/evaluate-default-paths.yml). To understand how this mechanism works, you can read this [comment](https://github.com/dotnet/runtime/blob/513fe2863ad5ec6dc453d223d4b60f787a0ffa78/eng/pipelines/evaluate-changed-paths.sh#L3-L12). ### Runtime pipeline This is the "main" pipeline for the runtime product. In this pipeline we include the most critical tests and platforms where we have enough test resources to deliver test results in a reasonable amount of time. The tests executed in this pipeline for runtime and libraries are considered innerloop; they are the tests that are executed when one runs tests locally. For mobile platforms and wasm we run some smoke tests that aim to protect the quality of these platforms. We had to move to a smoke test approach given the hardware and time limitations we encountered; contributors were affected by instability and long wait times for their PRs to finish validation. ### Runtime-dev-innerloop pipeline This pipeline is also required, and its intent is to cover developer innerloop scenarios that could be affected by any change, like running a specific build command or running tests inside Visual Studio, etc. ### Dotnet-linker-tests This is also a required pipeline. Its purpose is to test that the libraries code is linker friendly, meaning that when we trim our libraries using the ILLink, we don't have any trimming bugs, like a method required in a specific scenario being trimmed away by accident. ### Runtime-staging This pipeline runs on every change; however, it behaves a little differently than the other pipelines. This pipeline will not fail if there are test failures, but it will fail if there is a timeout or a build failure. We fail on build failures because we want to protect the developer innerloop (building the repository) for these platforms. The tests will not fail because the intent of this pipeline is to stage new platforms where the test infrastructure is new and we need to test whether we have enough capacity to include that new platform in the "main" runtime pipeline without causing flakiness. Once we analyze the data and a platform is stable when running on PRs in this pipeline for at least a week, it can be promoted either to the `runtime-extra-platforms` pipeline or to the `runtime` pipeline. 
### Runtime-extra-platforms This pipeline does not run by default, as it is not required for a PR, but it runs twice a day, and it can also be invoked on specific PRs by commenting `/azp run runtime-extra-platforms`. However, this pipeline is still an important part of our testing. This pipeline runs innerloop tests on platforms where we don't have enough hardware capacity to run tests (mobile, browser) or on platforms where we believe tests should organically pass based on the coverage we have in the "main" runtime pipeline. For example, in the "main" pipeline we run tests on Ubuntu 21.10, but since we also support Ubuntu 18.04, which is an LTS release, we run tests on Ubuntu 18.04 in this pipeline just to make sure we have healthy tests on the platforms we are releasing a product for. Another concrete scenario is Windows arm64 for libraries tests, where we don't have enough hardware; the JIT is the most important piece to test, as it is what generates the native code to run on that platform, so we run JIT tests on arm64 in the "main" pipeline, but our libraries tests are only run in the `runtime-extra-platforms` pipeline. ### Outerloop pipelines We have various pipelines whose names contain `Outerloop`. These pipelines do not run by default on every PR; they can be invoked using the `/azp run` comment and run on a daily basis to analyze test results. These pipelines run tests that take a very long time, that are not very stable (e.g. some networking tests), or that modify machine state. Such tests are called `Outerloop` tests rather than `innerloop`. ## Rerunning Validation Validation may fail for several reasons: ### Option 1: You have a defect in your PR * Simply push the fix to your PR branch, and validation will start over. ### Option 2: There is a flaky test that is not related to your PR * Your assumption should be that a failed test indicates a problem in your PR. (If we don't operate this way, chaos ensues.) If the test fails when run again, it is almost surely a failure caused by your PR. However, there are occasions where unrelated failures occur. Here are some ways to know: * Perhaps you see the same failure in CI results for unrelated active PRs. * It's a known issue listed in our [big tracking issue](https://github.com/dotnet/runtime/issues/702) or tagged `blocking-clean-ci` [(query here)](https://github.com/dotnet/runtime/issues?utf8=%E2%9C%93&q=is%3Aissue+is%3Aopen+label%3Ablocking-clean-ci+) * It's otherwise beyond any reasonable doubt that your code changes could not have caused this. * If the tests pass on rerun, that may suggest it's not related. * In this situation, you want to re-run but not necessarily rebase on main. * To rerun just the failed leg(s): * Click on any leg. Navigate through the Azure DevOps UI, find the "..." button and choose "Retry failed legs" * Or, on the GitHub Checks tab choose "re-run failed checks". This will not rebase your change. * To rerun all validation: * Add a comment `/azp run runtime` * Or, click on "re-run all checks" in the GitHub Checks tab * Or, simply close and reopen the PR. * If you have established that it is an unrelated failure, please ensure we have an active issue for it. See the [unrelated failure](#what-to-do-if-you-determine-the-failure-is-unrelated) section below. * Whoever merges the PR should be satisfied that the failure is unrelated, is not introduced by the change, and that we are appropriately tracking it. ### Option 3: The state of the main branch HEAD is bad. 
* This is the very rare case where there was a build break in main, and you got unlucky. Hopefully the break has been fixed, and you want CI to rebase your change and rerun validation. * To rebase and rerun all validation: * Add a comment `/azp run runtime` * Or, click on "re-run all checks" in the GitHub Checks tab * Or, simply close and reopen the PR. * Or, amend your commit with `--amend --no-edit` and force push to your branch. ### Additional information: * You can list the available pipelines by adding a comment like `/azp list` or get the available commands by adding a comment like `/azp help`. * In the rare case the license/cla check fails to register a response, it can be rerun by issuing a GET request to `https://cla.dotnetfoundation.org/check/dotnet/runtime?pullRequest={pr_number}`. A successful response may be a redirect to `https://github.com`. * Reach out to the infrastructure team for assistance on [Teams channel](https://teams.microsoft.com/l/channel/19%3ab27b36ecd10a46398da76b02f0411de7%40thread.skype/Infrastructure?groupId=014ca51d-be57-47fa-9628-a15efcc3c376&tenantId=72f988bf-86f1-41af-91ab-2d7cd011db47) (for corpnet users) or on [Gitter](https://gitter.im/dotnet/community) in other cases. ## What to do if you determine the failure is unrelated If you have determined the failure is definitely not caused by changes in your PR, please do this: * Search for an [existing issue](https://github.com/dotnet/runtime/issues). Usually the test method name or (if a crash/hang) the test assembly name are good search parameters. * If there's an existing issue, add a comment with * a) the link to the build * b) the affected configuration (e.g. `net6.0-windows-Release-x64-Windows.81.Amd64.Open`) * c) all console output including the error message and stack trace from the Azure DevOps tab (This is necessary as retention policies are in place that recycle old builds.) * d) if there's a dump file (see Attachments tab in Azure DevOps) include that * If the issue is already closed, reopen it and update the labels to reflect the current failure state. * If there's no existing issue, create an issue with the same information listed above. * Update the original pull request with a comment linking to the new or existing issue. * In a follow-up Pull Request, disable the failing test(s) with the corresponding issue link tracking the disable. * Update the tracking issue with the label `disabled-test`. * For libraries tests add a [`[ActiveIssue(link)]`](https://github.com/dotnet/arcade/blob/master/src/Microsoft.DotNet.XUnitExtensions/src/Attributes/ActiveIssueAttribute.cs) attribute on the test method. You can narrow the disabling down to runtime variant, flavor, and platform. For an example see [File_AppendAllLinesAsync_Encoded](https://github.com/dotnet/runtime/blob/cf49643711ad8aa4685a8054286c1348cef6e1d8/src/libraries/System.IO.FileSystem/tests/File/AppendAsync.cs#L74) * For runtime tests found under `src/tests`, please edit [`issues.targets`](https://github.com/dotnet/runtime/blob/main/src/tests/issues.targets). There are several groups for different types of disabling (mono vs. coreclr, different platforms, different scenarios). Add the folder containing the test and the issue link, mimicking any of the samples in the file. There are plenty of possible bugs, e.g. race conditions, where a failure might highlight a real problem and yet it won't manifest again on a retry. Therefore these steps should be followed for every iteration of the PR build, e.g. before retrying/rebuilding.
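To make the test-disabling step above concrete, here is a minimal sketch of disabling a libraries test with `ActiveIssue`. The plain and `TestPlatforms`-narrowed forms mirror usage that appears verbatim later in this document; the test class, method names, and issue number are placeholders invented for illustration.

```csharp
using Xunit;

public class ExampleTests
{
    // Disabled on every configuration until the linked tracking issue is resolved.
    [Fact]
    [ActiveIssue("https://github.com/dotnet/runtime/issues/00000")]
    public void Flaky_Operation_Succeeds() { /* test body */ }

    // Narrowed: only skipped on Windows, using the TestPlatforms overload
    // seen elsewhere in this repo's test sources.
    [Fact]
    [ActiveIssue("https://github.com/dotnet/runtime/issues/00000", TestPlatforms.Windows)]
    public void Windows_Specific_Flaky_Operation_Succeeds() { /* test body */ }
}
```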
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/libraries/System.Security.Cryptography.Cng/tests/ECDsaCngProvider.cs
// Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. namespace System.Security.Cryptography.EcDsa.Tests { public class ECDsaProvider : IECDsaProvider { public ECDsa Create() { return new ECDsaCng(); } public ECDsa Create(int keySize) { return new ECDsaCng(keySize); } public ECDsa Create(ECCurve curve) { return new ECDsaCng(curve); } public bool IsCurveValid(Oid oid) { // Friendly name required for windows return NativeOidFriendlyNameExists(oid.FriendlyName); } public bool ExplicitCurvesSupported { get { return PlatformDetection.WindowsVersion >= 10; } } private static bool NativeOidFriendlyNameExists(string oidFriendlyName) { if (string.IsNullOrEmpty(oidFriendlyName)) return false; try { // By specifying OidGroup.PublicKeyAlgorithm, no caches are used // Note: this throws when there is no oid value, even when friendly name is valid // so it cannot be used for curves with no oid value such as curve25519 return !string.IsNullOrEmpty(Oid.FromFriendlyName(oidFriendlyName, OidGroup.PublicKeyAlgorithm).FriendlyName); } catch (Exception) { return false; } } } public partial class ECDsaFactory { private static readonly IECDsaProvider s_provider = new ECDsaProvider(); } }
// Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. namespace System.Security.Cryptography.EcDsa.Tests { public class ECDsaProvider : IECDsaProvider { public ECDsa Create() { return new ECDsaCng(); } public ECDsa Create(int keySize) { return new ECDsaCng(keySize); } public ECDsa Create(ECCurve curve) { return new ECDsaCng(curve); } public bool IsCurveValid(Oid oid) { // Friendly name required for windows return NativeOidFriendlyNameExists(oid.FriendlyName); } public bool ExplicitCurvesSupported { get { return PlatformDetection.WindowsVersion >= 10; } } private static bool NativeOidFriendlyNameExists(string oidFriendlyName) { if (string.IsNullOrEmpty(oidFriendlyName)) return false; try { // By specifying OidGroup.PublicKeyAlgorithm, no caches are used // Note: this throws when there is no oid value, even when friendly name is valid // so it cannot be used for curves with no oid value such as curve25519 return !string.IsNullOrEmpty(Oid.FromFriendlyName(oidFriendlyName, OidGroup.PublicKeyAlgorithm).FriendlyName); } catch (Exception) { return false; } } } public partial class ECDsaFactory { private static readonly IECDsaProvider s_provider = new ECDsaProvider(); } }
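For context on what the provider above hands back to tests, here is a minimal, hypothetical sign/verify round-trip. It uses the cross-platform `ECDsa.Create` factory so the sketch runs anywhere, whereas the provider itself returns `ECDsaCng`, which is Windows-only; class and variable names are invented for the example.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

class EcdsaRoundTrip
{
    static void Main()
    {
        // NIST P-256 is one of the named curves the provider's IsCurveValid check targets.
        using ECDsa key = ECDsa.Create(ECCurve.NamedCurves.nistP256);

        byte[] data = Encoding.UTF8.GetBytes("hello");
        byte[] signature = key.SignData(data, HashAlgorithmName.SHA256);

        // Prints True: the signature verifies against the same key and data.
        Console.WriteLine(key.VerifyData(data, signature, HashAlgorithmName.SHA256));
    }
}
```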
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/tests/baseservices/threading/generics/Monitor/TryEnter04.cs
// Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. using System; using System.Threading; public struct ValX1<T> {} public class RefX1<T> {} struct Gen<T> { public static void TryEnterTest() { // #pragma warning disable 219 // Gen<T> inst = new Gen<T>(); // #pragma warning restore Type monitor = typeof(Gen<T>); TestHelper myHelper = new TestHelper(Test_TryEnter04.nThreads); // MonitorDelegateTS[] consumer = new MonitorDelegateTS[Test.nThreads]; // for(int i=0;i<consumer.Length;i++){ // consumer[i] = new MonitorDelegateTS(myHelper.ConsumerTryEnter); // consumer[i].BeginInvoke(monitor,100,null,null); // } for (int i = 0; i < Test_TryEnter04.nThreads; i++) { ThreadPool.QueueUserWorkItem(state => { myHelper.ConsumerTryEnter(monitor, 100); }); } for(int i=0;i<6;i++){ if(myHelper.m_Event.WaitOne(10000))//,true)) break; if(myHelper.Error == true) break; } Test_TryEnter04.Eval(!myHelper.Error); } } public class Test_TryEnter04 { public static int nThreads = 25; public static int counter = 0; public static int Xcounter = 0; public static bool result = true; public static void Eval(bool exp) { counter++; if (!exp) { result = exp; Console.WriteLine("Test Failed at location: " + counter); } } public static int Main() { Gen<int>.TryEnterTest(); /*Gen<double>.TryEnterTest(); Gen<string>.TryEnterTest(); Gen<object>.TryEnterTest(); Gen<Guid>.TryEnterTest(); Gen<int[]>.TryEnterTest(); Gen<double[,]>.TryEnterTest(); Gen<string[][][]>.TryEnterTest(); Gen<object[,,,]>.TryEnterTest(); Gen<Guid[][,,,][]>.TryEnterTest(); Gen<RefX1<int>[]>.TryEnterTest(); Gen<RefX1<double>[,]>.TryEnterTest(); Gen<RefX1<string>[][][]>.TryEnterTest(); Gen<RefX1<object>[,,,]>.TryEnterTest(); Gen<RefX1<Guid>[][,,,][]>.TryEnterTest(); Gen<ValX1<int>[]>.TryEnterTest(); Gen<ValX1<double>[,]>.TryEnterTest(); Gen<ValX1<string>[][][]>.TryEnterTest(); Gen<ValX1<object>[,,,]>.TryEnterTest(); Gen<ValX1<Guid>[][,,,][]>.TryEnterTest();*/ if (result) { Console.WriteLine("Test Passed"); return 100; } else { Console.WriteLine("Test Failed"); return 1; } } }
// Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. using System; using System.Threading; public struct ValX1<T> {} public class RefX1<T> {} struct Gen<T> { public static void TryEnterTest() { // #pragma warning disable 219 // Gen<T> inst = new Gen<T>(); // #pragma warning restore Type monitor = typeof(Gen<T>); TestHelper myHelper = new TestHelper(Test_TryEnter04.nThreads); // MonitorDelegateTS[] consumer = new MonitorDelegateTS[Test.nThreads]; // for(int i=0;i<consumer.Length;i++){ // consumer[i] = new MonitorDelegateTS(myHelper.ConsumerTryEnter); // consumer[i].BeginInvoke(monitor,100,null,null); // } for (int i = 0; i < Test_TryEnter04.nThreads; i++) { ThreadPool.QueueUserWorkItem(state => { myHelper.ConsumerTryEnter(monitor, 100); }); } for(int i=0;i<6;i++){ if(myHelper.m_Event.WaitOne(10000))//,true)) break; if(myHelper.Error == true) break; } Test_TryEnter04.Eval(!myHelper.Error); } } public class Test_TryEnter04 { public static int nThreads = 25; public static int counter = 0; public static int Xcounter = 0; public static bool result = true; public static void Eval(bool exp) { counter++; if (!exp) { result = exp; Console.WriteLine("Test Failed at location: " + counter); } } public static int Main() { Gen<int>.TryEnterTest(); /*Gen<double>.TryEnterTest(); Gen<string>.TryEnterTest(); Gen<object>.TryEnterTest(); Gen<Guid>.TryEnterTest(); Gen<int[]>.TryEnterTest(); Gen<double[,]>.TryEnterTest(); Gen<string[][][]>.TryEnterTest(); Gen<object[,,,]>.TryEnterTest(); Gen<Guid[][,,,][]>.TryEnterTest(); Gen<RefX1<int>[]>.TryEnterTest(); Gen<RefX1<double>[,]>.TryEnterTest(); Gen<RefX1<string>[][][]>.TryEnterTest(); Gen<RefX1<object>[,,,]>.TryEnterTest(); Gen<RefX1<Guid>[][,,,][]>.TryEnterTest(); Gen<ValX1<int>[]>.TryEnterTest(); Gen<ValX1<double>[,]>.TryEnterTest(); Gen<ValX1<string>[][][]>.TryEnterTest(); Gen<ValX1<object>[,,,]>.TryEnterTest(); Gen<ValX1<Guid>[][,,,][]>.TryEnterTest();*/ if (result) { Console.WriteLine("Test Passed"); return 100; } else { Console.WriteLine("Test Failed"); return 1; } } }
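The test above fans workers out to the thread pool, each calling `Monitor.TryEnter` with a 100 ms timeout through its helper. As a standalone sketch of that locking pattern (the gate object and messages are invented for illustration, and the try/finally mirrors the required pairing with `Monitor.Exit`):

```csharp
using System;
using System.Threading;

class TryEnterSketch
{
    static readonly object s_gate = new object();

    static void Main()
    {
        // Mirrors ConsumerTryEnter(monitor, 100): attempt the lock for at most 100 ms.
        if (Monitor.TryEnter(s_gate, 100))
        {
            try
            {
                Console.WriteLine("Lock acquired within the timeout.");
            }
            finally
            {
                // TryEnter acquired the lock, so it must be released explicitly.
                Monitor.Exit(s_gate);
            }
        }
        else
        {
            Console.WriteLine("Timed out waiting for the lock.");
        }
    }
}
```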
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/libraries/System.IO.FileSystem.Watcher/tests/FileSystemWatcher.Directory.NotifyFilter.cs
// Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. using System.Collections.Generic; using System.Runtime.InteropServices; using Xunit; namespace System.IO.Tests { [ActiveIssue("https://github.com/dotnet/runtime/issues/34583", TestPlatforms.Windows, TargetFrameworkMonikers.Netcoreapp, TestRuntimes.Mono)] public partial class Directory_NotifyFilter_Tests : FileSystemWatcherTest { [GeneratedDllImport("advapi32.dll", EntryPoint = "SetNamedSecurityInfoW", SetLastError = true, StringMarshalling = StringMarshalling.Utf16)] private static partial uint SetSecurityInfoByHandle( string name, uint objectType, uint securityInformation, IntPtr owner, IntPtr group, IntPtr dacl, IntPtr sacl); private const uint ERROR_SUCCESS = 0; private const uint DACL_SECURITY_INFORMATION = 0x00000004; private const uint SE_FILE_OBJECT = 0x1; [Theory] [MemberData(nameof(FilterTypes))] public void FileSystemWatcher_Directory_NotifyFilter_Attributes(NotifyFilters filter) { using (var testDirectory = new TempDirectory(GetTestFilePath())) using (var dir = new TempDirectory(Path.Combine(testDirectory.Path, "dir"))) using (var watcher = new FileSystemWatcher(testDirectory.Path, Path.GetFileName(dir.Path))) { watcher.NotifyFilter = filter; var attributes = File.GetAttributes(dir.Path); Action action = () => File.SetAttributes(dir.Path, attributes | FileAttributes.ReadOnly); Action cleanup = () => File.SetAttributes(dir.Path, attributes); WatcherChangeTypes expected = 0; if (filter == NotifyFilters.Attributes) expected |= WatcherChangeTypes.Changed; else if (OperatingSystem.IsLinux() && ((filter & LinuxFiltersForAttribute) > 0)) expected |= WatcherChangeTypes.Changed; else if (OperatingSystem.IsMacOS() && ((filter & OSXFiltersForModify) > 0)) expected |= WatcherChangeTypes.Changed; else if (OperatingSystem.IsMacOS() && ((filter & NotifyFilters.Security) > 0)) expected |= WatcherChangeTypes.Changed; // Attribute change on OSX is a ChangeOwner operation which passes the Security NotifyFilter. 
ExpectEvent(watcher, expected, action, cleanup, dir.Path); } } [Theory] [MemberData(nameof(FilterTypes))] public void FileSystemWatcher_Directory_NotifyFilter_CreationTime(NotifyFilters filter) { using (var testDirectory = new TempDirectory(GetTestFilePath())) using (var dir = new TempDirectory(Path.Combine(testDirectory.Path, "dir"))) using (var watcher = new FileSystemWatcher(testDirectory.Path, Path.GetFileName(dir.Path))) { watcher.NotifyFilter = filter; Action action = () => Directory.SetCreationTime(dir.Path, DateTime.Now + TimeSpan.FromSeconds(10)); WatcherChangeTypes expected = 0; if (filter == NotifyFilters.CreationTime) expected |= WatcherChangeTypes.Changed; else if (OperatingSystem.IsLinux() && ((filter & LinuxFiltersForAttribute) > 0)) expected |= WatcherChangeTypes.Changed; else if (OperatingSystem.IsMacOS() && ((filter & OSXFiltersForModify) > 0)) expected |= WatcherChangeTypes.Changed; ExpectEvent(watcher, expected, action, expectedPath: dir.Path); } } [Theory] [MemberData(nameof(FilterTypes))] public void FileSystemWatcher_Directory_NotifyFilter_DirectoryName(NotifyFilters filter) { using (var testDirectory = new TempDirectory(GetTestFilePath())) using (var dir = new TempDirectory(Path.Combine(testDirectory.Path, "dir"))) using (var watcher = new FileSystemWatcher(testDirectory.Path, Path.GetFileName(dir.Path))) { string sourcePath = dir.Path; string targetPath = Path.Combine(testDirectory.Path, "targetDir"); watcher.NotifyFilter = filter; Action action = () => Directory.Move(sourcePath, targetPath); Action cleanup = () => Directory.Move(targetPath, sourcePath); WatcherChangeTypes expected = 0; if (filter == NotifyFilters.DirectoryName) expected |= WatcherChangeTypes.Renamed; ExpectEvent(watcher, expected, action, cleanup, targetPath); } } [Theory] [MemberData(nameof(FilterTypes))] public void FileSystemWatcher_Directory_NotifyFilter_LastAccessTime(NotifyFilters filter) { using (var testDirectory = new TempDirectory(GetTestFilePath())) using (var dir = new TempDirectory(Path.Combine(testDirectory.Path, "dir"))) using (var watcher = new FileSystemWatcher(testDirectory.Path, Path.GetFileName(dir.Path))) { watcher.NotifyFilter = filter; Action action = () => Directory.SetLastAccessTime(dir.Path, DateTime.Now + TimeSpan.FromSeconds(10)); WatcherChangeTypes expected = 0; if (filter == NotifyFilters.LastAccess) expected |= WatcherChangeTypes.Changed; else if (OperatingSystem.IsLinux() && ((filter & LinuxFiltersForAttribute) > 0)) expected |= WatcherChangeTypes.Changed; else if (OperatingSystem.IsMacOS() && ((filter & OSXFiltersForModify) > 0)) expected |= WatcherChangeTypes.Changed; ExpectEvent(watcher, expected, action, expectedPath: dir.Path); } } [Theory] [MemberData(nameof(FilterTypes))] public void FileSystemWatcher_Directory_NotifyFilter_LastWriteTime(NotifyFilters filter) { using (var testDirectory = new TempDirectory(GetTestFilePath())) using (var dir = new TempDirectory(Path.Combine(testDirectory.Path, "dir"))) using (var watcher = new FileSystemWatcher(testDirectory.Path, Path.GetFileName(dir.Path))) { watcher.NotifyFilter = filter; Action action = () => Directory.SetLastWriteTime(dir.Path, DateTime.Now + TimeSpan.FromSeconds(10)); WatcherChangeTypes expected = 0; if (filter == NotifyFilters.LastWrite) expected |= WatcherChangeTypes.Changed; else if (OperatingSystem.IsLinux() && ((filter & LinuxFiltersForAttribute) > 0)) expected |= WatcherChangeTypes.Changed; else if (OperatingSystem.IsMacOS() && ((filter & OSXFiltersForModify) > 0)) expected |= 
WatcherChangeTypes.Changed; ExpectEvent(watcher, expected, action, expectedPath: dir.Path); } } [Theory] [OuterLoop] [MemberData(nameof(FilterTypes))] public void FileSystemWatcher_Directory_NotifyFilter_LastWriteTime_TwoFilters(NotifyFilters filter) { Assert.All(FilterTypes(), (filter2Arr => { using (var testDirectory = new TempDirectory(GetTestFilePath())) using (var dir = new TempDirectory(Path.Combine(testDirectory.Path, "dir"))) using (var watcher = new FileSystemWatcher(testDirectory.Path, Path.GetFileName(dir.Path))) { filter |= (NotifyFilters)filter2Arr[0]; watcher.NotifyFilter = filter; Action action = () => Directory.SetLastWriteTime(dir.Path, DateTime.Now + TimeSpan.FromSeconds(10)); WatcherChangeTypes expected = 0; if ((filter & NotifyFilters.LastWrite) > 0) expected |= WatcherChangeTypes.Changed; else if (OperatingSystem.IsLinux() && ((filter & LinuxFiltersForAttribute) > 0)) expected |= WatcherChangeTypes.Changed; else if (OperatingSystem.IsMacOS() && ((filter & OSXFiltersForModify) > 0)) expected |= WatcherChangeTypes.Changed; ExpectEvent(watcher, expected, action, expectedPath: dir.Path); } })); } [Theory] [MemberData(nameof(FilterTypes))] [PlatformSpecific(TestPlatforms.Windows)] // Uses P/Invokes to set security info public void FileSystemWatcher_Directory_NotifyFilter_Security(NotifyFilters filter) { using (var testDirectory = new TempDirectory(GetTestFilePath())) using (var dir = new TempDirectory(Path.Combine(testDirectory.Path, "dir"))) using (var watcher = new FileSystemWatcher(testDirectory.Path, Path.GetFileName(dir.Path))) { watcher.NotifyFilter = filter; Action action = () => { // ACL support is not yet available, so pinvoke directly. uint result = SetSecurityInfoByHandle(dir.Path, SE_FILE_OBJECT, DACL_SECURITY_INFORMATION, // Only setting the DACL owner: IntPtr.Zero, group: IntPtr.Zero, dacl: IntPtr.Zero, // full access to everyone sacl: IntPtr.Zero); Assert.Equal(ERROR_SUCCESS, result); }; Action cleanup = () => { // Recreate the Directory. Directory.Delete(dir.Path); Directory.CreateDirectory(dir.Path); }; WatcherChangeTypes expected = 0; if (filter == NotifyFilters.Security) expected |= WatcherChangeTypes.Changed; ExpectEvent(watcher, expected, action, cleanup, dir.Path); } } /// <summary> /// Tests a changed event on a file when filtering for LastWrite and directory name. /// </summary> [Fact] public void FileSystemWatcher_Directory_NotifyFilter_LastWriteAndFileName() { using (var testDirectory = new TempDirectory(GetTestFilePath())) using (var file = new TempFile(Path.Combine(testDirectory.Path, "file"))) using (var watcher = new FileSystemWatcher(testDirectory.Path, Path.GetFileName(file.Path))) { NotifyFilters filter = NotifyFilters.LastWrite | NotifyFilters.DirectoryName; watcher.NotifyFilter = filter; Action action = () => File.SetLastWriteTime(file.Path, DateTime.Now + TimeSpan.FromSeconds(10)); ExpectEvent(watcher, WatcherChangeTypes.Changed, action, expectedPath: file.Path); } } /// <summary> /// Tests the watcher behavior when two events - a Modification and a Creation - happen closely /// after each other. 
/// </summary> [Fact] public void FileSystemWatcher_Directory_NotifyFilter_ModifyAndCreate() { using (var testDirectory = new TempDirectory(GetTestFilePath())) using (var dir = new TempDirectory(Path.Combine(testDirectory.Path, "dir"))) using (var watcher = new FileSystemWatcher(testDirectory.Path, "*")) { watcher.NotifyFilter = NotifyFilters.LastWrite | NotifyFilters.DirectoryName; string otherDir = Path.Combine(testDirectory.Path, "dir2"); Action action = () => { Directory.CreateDirectory(otherDir); Directory.SetLastWriteTime(dir.Path, DateTime.Now + TimeSpan.FromSeconds(10)); }; Action cleanup = () => Directory.Delete(otherDir); WatcherChangeTypes expected = 0; expected |= WatcherChangeTypes.Created | WatcherChangeTypes.Changed; ExpectEvent(watcher, expected, action, cleanup, new string[] { otherDir, dir.Path }); } } /// <summary> /// Tests the watcher behavior when two events - a Modification and a Deletion - happen closely /// after each other. /// </summary> [Fact] public void FileSystemWatcher_Directory_NotifyFilter_ModifyAndDelete() { using (var testDirectory = new TempDirectory(GetTestFilePath())) using (var dir = new TempDirectory(Path.Combine(testDirectory.Path, "dir"))) using (var watcher = new FileSystemWatcher(testDirectory.Path, "*")) { watcher.NotifyFilter = NotifyFilters.LastWrite | NotifyFilters.DirectoryName; string otherDir = Path.Combine(testDirectory.Path, "dir2"); Action action = () => { Directory.Delete(otherDir); Directory.SetLastWriteTime(dir.Path, DateTime.Now + TimeSpan.FromSeconds(10)); }; Action cleanup = () => { Directory.CreateDirectory(otherDir); }; cleanup(); WatcherChangeTypes expected = 0; expected |= WatcherChangeTypes.Deleted | WatcherChangeTypes.Changed; ExpectEvent(watcher, expected, action, cleanup, new string[] { otherDir, dir.Path }); } } [Fact] public void FileSystemWatcher_Directory_NotifyFilter_DirectoryNameDoesntTriggerOnFileEvent() { using (var testDirectory = new TempDirectory(GetTestFilePath())) using (var dir = new TempDirectory(Path.Combine(testDirectory.Path, "dir"))) using (var watcher = new FileSystemWatcher(testDirectory.Path, "*")) { watcher.NotifyFilter = NotifyFilters.FileName; string renameDirSource = Path.Combine(testDirectory.Path, "dir2_source"); string renameDirDest = Path.Combine(testDirectory.Path, "dir2_dest"); string otherDir = Path.Combine(testDirectory.Path, "dir3"); Directory.CreateDirectory(renameDirSource); Action action = () => { Directory.CreateDirectory(otherDir); Directory.Move(renameDirSource, renameDirDest); Directory.SetLastWriteTime(dir.Path, DateTime.Now + TimeSpan.FromSeconds(10)); Directory.Delete(otherDir); }; Action cleanup = () => { Directory.Move(renameDirDest, renameDirSource); }; WatcherChangeTypes expected = 0; ExpectEvent(watcher, expected, action, cleanup, new string[] { otherDir, dir.Path }); } } } }
// Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. using System.Collections.Generic; using System.Runtime.InteropServices; using Xunit; namespace System.IO.Tests { [ActiveIssue("https://github.com/dotnet/runtime/issues/34583", TestPlatforms.Windows, TargetFrameworkMonikers.Netcoreapp, TestRuntimes.Mono)] public partial class Directory_NotifyFilter_Tests : FileSystemWatcherTest { [GeneratedDllImport("advapi32.dll", EntryPoint = "SetNamedSecurityInfoW", SetLastError = true, StringMarshalling = StringMarshalling.Utf16)] private static partial uint SetSecurityInfoByHandle( string name, uint objectType, uint securityInformation, IntPtr owner, IntPtr group, IntPtr dacl, IntPtr sacl); private const uint ERROR_SUCCESS = 0; private const uint DACL_SECURITY_INFORMATION = 0x00000004; private const uint SE_FILE_OBJECT = 0x1; [Theory] [MemberData(nameof(FilterTypes))] public void FileSystemWatcher_Directory_NotifyFilter_Attributes(NotifyFilters filter) { using (var testDirectory = new TempDirectory(GetTestFilePath())) using (var dir = new TempDirectory(Path.Combine(testDirectory.Path, "dir"))) using (var watcher = new FileSystemWatcher(testDirectory.Path, Path.GetFileName(dir.Path))) { watcher.NotifyFilter = filter; var attributes = File.GetAttributes(dir.Path); Action action = () => File.SetAttributes(dir.Path, attributes | FileAttributes.ReadOnly); Action cleanup = () => File.SetAttributes(dir.Path, attributes); WatcherChangeTypes expected = 0; if (filter == NotifyFilters.Attributes) expected |= WatcherChangeTypes.Changed; else if (OperatingSystem.IsLinux() && ((filter & LinuxFiltersForAttribute) > 0)) expected |= WatcherChangeTypes.Changed; else if (OperatingSystem.IsMacOS() && ((filter & OSXFiltersForModify) > 0)) expected |= WatcherChangeTypes.Changed; else if (OperatingSystem.IsMacOS() && ((filter & NotifyFilters.Security) > 0)) expected |= WatcherChangeTypes.Changed; // Attribute change on OSX is a ChangeOwner operation which passes the Security NotifyFilter. 
ExpectEvent(watcher, expected, action, cleanup, dir.Path); } } [Theory] [MemberData(nameof(FilterTypes))] public void FileSystemWatcher_Directory_NotifyFilter_CreationTime(NotifyFilters filter) { using (var testDirectory = new TempDirectory(GetTestFilePath())) using (var dir = new TempDirectory(Path.Combine(testDirectory.Path, "dir"))) using (var watcher = new FileSystemWatcher(testDirectory.Path, Path.GetFileName(dir.Path))) { watcher.NotifyFilter = filter; Action action = () => Directory.SetCreationTime(dir.Path, DateTime.Now + TimeSpan.FromSeconds(10)); WatcherChangeTypes expected = 0; if (filter == NotifyFilters.CreationTime) expected |= WatcherChangeTypes.Changed; else if (OperatingSystem.IsLinux() && ((filter & LinuxFiltersForAttribute) > 0)) expected |= WatcherChangeTypes.Changed; else if (OperatingSystem.IsMacOS() && ((filter & OSXFiltersForModify) > 0)) expected |= WatcherChangeTypes.Changed; ExpectEvent(watcher, expected, action, expectedPath: dir.Path); } } [Theory] [MemberData(nameof(FilterTypes))] public void FileSystemWatcher_Directory_NotifyFilter_DirectoryName(NotifyFilters filter) { using (var testDirectory = new TempDirectory(GetTestFilePath())) using (var dir = new TempDirectory(Path.Combine(testDirectory.Path, "dir"))) using (var watcher = new FileSystemWatcher(testDirectory.Path, Path.GetFileName(dir.Path))) { string sourcePath = dir.Path; string targetPath = Path.Combine(testDirectory.Path, "targetDir"); watcher.NotifyFilter = filter; Action action = () => Directory.Move(sourcePath, targetPath); Action cleanup = () => Directory.Move(targetPath, sourcePath); WatcherChangeTypes expected = 0; if (filter == NotifyFilters.DirectoryName) expected |= WatcherChangeTypes.Renamed; ExpectEvent(watcher, expected, action, cleanup, targetPath); } } [Theory] [MemberData(nameof(FilterTypes))] public void FileSystemWatcher_Directory_NotifyFilter_LastAccessTime(NotifyFilters filter) { using (var testDirectory = new TempDirectory(GetTestFilePath())) using (var dir = new TempDirectory(Path.Combine(testDirectory.Path, "dir"))) using (var watcher = new FileSystemWatcher(testDirectory.Path, Path.GetFileName(dir.Path))) { watcher.NotifyFilter = filter; Action action = () => Directory.SetLastAccessTime(dir.Path, DateTime.Now + TimeSpan.FromSeconds(10)); WatcherChangeTypes expected = 0; if (filter == NotifyFilters.LastAccess) expected |= WatcherChangeTypes.Changed; else if (OperatingSystem.IsLinux() && ((filter & LinuxFiltersForAttribute) > 0)) expected |= WatcherChangeTypes.Changed; else if (OperatingSystem.IsMacOS() && ((filter & OSXFiltersForModify) > 0)) expected |= WatcherChangeTypes.Changed; ExpectEvent(watcher, expected, action, expectedPath: dir.Path); } } [Theory] [MemberData(nameof(FilterTypes))] public void FileSystemWatcher_Directory_NotifyFilter_LastWriteTime(NotifyFilters filter) { using (var testDirectory = new TempDirectory(GetTestFilePath())) using (var dir = new TempDirectory(Path.Combine(testDirectory.Path, "dir"))) using (var watcher = new FileSystemWatcher(testDirectory.Path, Path.GetFileName(dir.Path))) { watcher.NotifyFilter = filter; Action action = () => Directory.SetLastWriteTime(dir.Path, DateTime.Now + TimeSpan.FromSeconds(10)); WatcherChangeTypes expected = 0; if (filter == NotifyFilters.LastWrite) expected |= WatcherChangeTypes.Changed; else if (OperatingSystem.IsLinux() && ((filter & LinuxFiltersForAttribute) > 0)) expected |= WatcherChangeTypes.Changed; else if (OperatingSystem.IsMacOS() && ((filter & OSXFiltersForModify) > 0)) expected |= 
WatcherChangeTypes.Changed; ExpectEvent(watcher, expected, action, expectedPath: dir.Path); } } [Theory] [OuterLoop] [MemberData(nameof(FilterTypes))] public void FileSystemWatcher_Directory_NotifyFilter_LastWriteTime_TwoFilters(NotifyFilters filter) { Assert.All(FilterTypes(), (filter2Arr => { using (var testDirectory = new TempDirectory(GetTestFilePath())) using (var dir = new TempDirectory(Path.Combine(testDirectory.Path, "dir"))) using (var watcher = new FileSystemWatcher(testDirectory.Path, Path.GetFileName(dir.Path))) { filter |= (NotifyFilters)filter2Arr[0]; watcher.NotifyFilter = filter; Action action = () => Directory.SetLastWriteTime(dir.Path, DateTime.Now + TimeSpan.FromSeconds(10)); WatcherChangeTypes expected = 0; if ((filter & NotifyFilters.LastWrite) > 0) expected |= WatcherChangeTypes.Changed; else if (OperatingSystem.IsLinux() && ((filter & LinuxFiltersForAttribute) > 0)) expected |= WatcherChangeTypes.Changed; else if (OperatingSystem.IsMacOS() && ((filter & OSXFiltersForModify) > 0)) expected |= WatcherChangeTypes.Changed; ExpectEvent(watcher, expected, action, expectedPath: dir.Path); } })); } [Theory] [MemberData(nameof(FilterTypes))] [PlatformSpecific(TestPlatforms.Windows)] // Uses P/Invokes to set security info public void FileSystemWatcher_Directory_NotifyFilter_Security(NotifyFilters filter) { using (var testDirectory = new TempDirectory(GetTestFilePath())) using (var dir = new TempDirectory(Path.Combine(testDirectory.Path, "dir"))) using (var watcher = new FileSystemWatcher(testDirectory.Path, Path.GetFileName(dir.Path))) { watcher.NotifyFilter = filter; Action action = () => { // ACL support is not yet available, so pinvoke directly. uint result = SetSecurityInfoByHandle(dir.Path, SE_FILE_OBJECT, DACL_SECURITY_INFORMATION, // Only setting the DACL owner: IntPtr.Zero, group: IntPtr.Zero, dacl: IntPtr.Zero, // full access to everyone sacl: IntPtr.Zero); Assert.Equal(ERROR_SUCCESS, result); }; Action cleanup = () => { // Recreate the Directory. Directory.Delete(dir.Path); Directory.CreateDirectory(dir.Path); }; WatcherChangeTypes expected = 0; if (filter == NotifyFilters.Security) expected |= WatcherChangeTypes.Changed; ExpectEvent(watcher, expected, action, cleanup, dir.Path); } } /// <summary> /// Tests a changed event on a file when filtering for LastWrite and directory name. /// </summary> [Fact] public void FileSystemWatcher_Directory_NotifyFilter_LastWriteAndFileName() { using (var testDirectory = new TempDirectory(GetTestFilePath())) using (var file = new TempFile(Path.Combine(testDirectory.Path, "file"))) using (var watcher = new FileSystemWatcher(testDirectory.Path, Path.GetFileName(file.Path))) { NotifyFilters filter = NotifyFilters.LastWrite | NotifyFilters.DirectoryName; watcher.NotifyFilter = filter; Action action = () => File.SetLastWriteTime(file.Path, DateTime.Now + TimeSpan.FromSeconds(10)); ExpectEvent(watcher, WatcherChangeTypes.Changed, action, expectedPath: file.Path); } } /// <summary> /// Tests the watcher behavior when two events - a Modification and a Creation - happen closely /// after each other. 
/// </summary> [Fact] public void FileSystemWatcher_Directory_NotifyFilter_ModifyAndCreate() { using (var testDirectory = new TempDirectory(GetTestFilePath())) using (var dir = new TempDirectory(Path.Combine(testDirectory.Path, "dir"))) using (var watcher = new FileSystemWatcher(testDirectory.Path, "*")) { watcher.NotifyFilter = NotifyFilters.LastWrite | NotifyFilters.DirectoryName; string otherDir = Path.Combine(testDirectory.Path, "dir2"); Action action = () => { Directory.CreateDirectory(otherDir); Directory.SetLastWriteTime(dir.Path, DateTime.Now + TimeSpan.FromSeconds(10)); }; Action cleanup = () => Directory.Delete(otherDir); WatcherChangeTypes expected = 0; expected |= WatcherChangeTypes.Created | WatcherChangeTypes.Changed; ExpectEvent(watcher, expected, action, cleanup, new string[] { otherDir, dir.Path }); } } /// <summary> /// Tests the watcher behavior when two events - a Modification and a Deletion - happen closely /// after each other. /// </summary> [Fact] public void FileSystemWatcher_Directory_NotifyFilter_ModifyAndDelete() { using (var testDirectory = new TempDirectory(GetTestFilePath())) using (var dir = new TempDirectory(Path.Combine(testDirectory.Path, "dir"))) using (var watcher = new FileSystemWatcher(testDirectory.Path, "*")) { watcher.NotifyFilter = NotifyFilters.LastWrite | NotifyFilters.DirectoryName; string otherDir = Path.Combine(testDirectory.Path, "dir2"); Action action = () => { Directory.Delete(otherDir); Directory.SetLastWriteTime(dir.Path, DateTime.Now + TimeSpan.FromSeconds(10)); }; Action cleanup = () => { Directory.CreateDirectory(otherDir); }; cleanup(); WatcherChangeTypes expected = 0; expected |= WatcherChangeTypes.Deleted | WatcherChangeTypes.Changed; ExpectEvent(watcher, expected, action, cleanup, new string[] { otherDir, dir.Path }); } } [Fact] public void FileSystemWatcher_Directory_NotifyFilter_DirectoryNameDoesntTriggerOnFileEvent() { using (var testDirectory = new TempDirectory(GetTestFilePath())) using (var dir = new TempDirectory(Path.Combine(testDirectory.Path, "dir"))) using (var watcher = new FileSystemWatcher(testDirectory.Path, "*")) { watcher.NotifyFilter = NotifyFilters.FileName; string renameDirSource = Path.Combine(testDirectory.Path, "dir2_source"); string renameDirDest = Path.Combine(testDirectory.Path, "dir2_dest"); string otherDir = Path.Combine(testDirectory.Path, "dir3"); Directory.CreateDirectory(renameDirSource); Action action = () => { Directory.CreateDirectory(otherDir); Directory.Move(renameDirSource, renameDirDest); Directory.SetLastWriteTime(dir.Path, DateTime.Now + TimeSpan.FromSeconds(10)); Directory.Delete(otherDir); }; Action cleanup = () => { Directory.Move(renameDirDest, renameDirSource); }; WatcherChangeTypes expected = 0; ExpectEvent(watcher, expected, action, cleanup, new string[] { otherDir, dir.Path }); } } } }
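The test file above drives FileSystemWatcher with various NotifyFilters combinations. As a minimal sketch of the same API outside the test harness (the watched path and filter choice here are placeholders, not taken from the source):

```csharp
// A minimal sketch of the pattern the tests above exercise: watching a directory
// with a specific NotifyFilters combination and observing the raised events.
using System;
using System.IO;

class Program
{
    static void Main()
    {
        string dir = Path.GetTempPath(); // placeholder directory

        using var watcher = new FileSystemWatcher(dir, "*")
        {
            NotifyFilter = NotifyFilters.LastWrite | NotifyFilters.DirectoryName,
            EnableRaisingEvents = true
        };

        watcher.Changed += (s, e) => Console.WriteLine($"Changed: {e.FullPath}");
        watcher.Renamed += (s, e) => Console.WriteLine($"Renamed: {e.OldFullPath} -> {e.FullPath}");

        Console.WriteLine("Watching; press Enter to stop.");
        Console.ReadLine();
    }
}
```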
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/libraries/System.Private.Xml/tests/Xslt/TestFiles/TestData/xsltc/baseline/infft12b.txt
@infft12c.txt
@infft12c.txt
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/libraries/Microsoft.Extensions.Logging/src/ActivityTrackingOptions.cs
// Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. using System; namespace Microsoft.Extensions.Logging { /// <summary> /// Flags to indicate which trace context parts should be included with the logging scopes. /// </summary> [Flags] public enum ActivityTrackingOptions { /// <summary> /// None of the trace context parts will be included in the logging. /// </summary> None = 0x0000, /// <summary> /// Span Id will be included in the logging. /// </summary> SpanId = 0x0001, /// <summary> /// Trace Id will be included in the logging. /// </summary> TraceId = 0x0002, /// <summary> /// Parent Id will be included in the logging. /// </summary> ParentId = 0x0004, /// <summary> /// Trace State will be included in the logging. /// </summary> TraceState = 0x0008, /// <summary> /// Trace flags will be included in the logging. /// </summary> TraceFlags = 0x0010, /// <summary> /// Tags will be included in the logging. /// </summary> Tags = 0x0020, /// <summary> /// Items of baggage will be included in the logging. /// </summary> Baggage = 0x0040 } }
// Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. using System; namespace Microsoft.Extensions.Logging { /// <summary> /// Flags to indicate which trace context parts should be included with the logging scopes. /// </summary> [Flags] public enum ActivityTrackingOptions { /// <summary> /// None of the trace context parts will be included in the logging. /// </summary> None = 0x0000, /// <summary> /// Span Id will be included in the logging. /// </summary> SpanId = 0x0001, /// <summary> /// Trace Id will be included in the logging. /// </summary> TraceId = 0x0002, /// <summary> /// Parent Id will be included in the logging. /// </summary> ParentId = 0x0004, /// <summary> /// Trace State will be included in the logging. /// </summary> TraceState = 0x0008, /// <summary> /// Trace flags will be included in the logging. /// </summary> TraceFlags = 0x0010, /// <summary> /// Tags will be included in the logging. /// </summary> Tags = 0x0020, /// <summary> /// Items of baggage will be included in the logging. /// </summary> Baggage = 0x0040 } }
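The ActivityTrackingOptions flags above are consumed through LoggerFactoryOptions. A minimal sketch, assuming the Microsoft.Extensions.Logging and console logging provider packages are referenced:

```csharp
// A minimal sketch of opting logging scopes into trace-context tracking via the
// [Flags] enum above. Assumes the console provider package is available.
using Microsoft.Extensions.Logging;

class Demo
{
    static void Main()
    {
        using ILoggerFactory factory = LoggerFactory.Create(builder =>
        {
            // Flags combine bitwise, so any subset of the trace context can be selected.
            builder.Configure(options =>
                options.ActivityTrackingOptions =
                    ActivityTrackingOptions.TraceId | ActivityTrackingOptions.SpanId);
            builder.AddConsole();
        });

        ILogger logger = factory.CreateLogger("Demo");
        logger.LogInformation("Scopes now carry TraceId and SpanId when an Activity is current.");
    }
}
```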
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/libraries/System.Text.Encoding/tests/UTF8Encoding/UTF8EncodingGetMaxCharCount.cs
// Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. using Xunit; namespace System.Text.Tests { public class UTF8EncodingGetMaxCharCount { [Theory] [InlineData(0)] [InlineData(1)] [InlineData(10)] [InlineData(int.MaxValue - 1)] public void GetMaxCharCount(int byteCount) { int expected = byteCount + 1; Assert.Equal(expected, new UTF8Encoding(true, true).GetMaxCharCount(byteCount)); Assert.Equal(expected, new UTF8Encoding(true, false).GetMaxCharCount(byteCount)); Assert.Equal(expected, new UTF8Encoding(false, true).GetMaxCharCount(byteCount)); Assert.Equal(expected, new UTF8Encoding(false, false).GetMaxCharCount(byteCount)); } } }
// Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. using Xunit; namespace System.Text.Tests { public class UTF8EncodingGetMaxCharCount { [Theory] [InlineData(0)] [InlineData(1)] [InlineData(10)] [InlineData(int.MaxValue - 1)] public void GetMaxCharCount(int byteCount) { int expected = byteCount + 1; Assert.Equal(expected, new UTF8Encoding(true, true).GetMaxCharCount(byteCount)); Assert.Equal(expected, new UTF8Encoding(true, false).GetMaxCharCount(byteCount)); Assert.Equal(expected, new UTF8Encoding(false, true).GetMaxCharCount(byteCount)); Assert.Equal(expected, new UTF8Encoding(false, false).GetMaxCharCount(byteCount)); } } }
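The test above pins UTF8Encoding.GetMaxCharCount to byteCount + 1: each input byte decodes to at most one char, and one extra char covers a buffered partial character or fallback. A small demonstration:

```csharp
// A small check of the worst case the test above encodes: for UTF-8, every input
// byte can decode to at most one char, plus one extra for a fallback / partially
// buffered character, hence byteCount + 1.
using System;
using System.Text;

class MaxCharCountDemo
{
    static void Main()
    {
        var utf8 = new UTF8Encoding();
        foreach (int byteCount in new[] { 0, 1, 10, 1024 })
        {
            // Prints byteCount + 1 for each value.
            Console.WriteLine($"{byteCount} bytes -> at most {utf8.GetMaxCharCount(byteCount)} chars");
        }
    }
}
```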
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/libraries/System.Private.Xml/tests/Xslt/TestFiles/TestData/xsltc/baseline/bft13.txt
<?xml version="1.0" encoding="utf-8"?>Hello, world!
<?xml version="1.0" encoding="utf-8"?>Hello, world!
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/libraries/Microsoft.CSharp/src/Microsoft/CSharp/RuntimeBinder/Semantics/EXPRFLAG.cs
// Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. using System; namespace Microsoft.CSharp.RuntimeBinder.Semantics { [Flags] internal enum EXPRFLAG { // These are specific to various node types. // Order these by value. If you need a new flag, search for the first value that isn't currently valid on your expr kind. // 0x1 EXF_BINOP = 0x1, // On** Many Non Statement Exprs!** This gets its own BIT! // 0x2 EXF_CTOR = 0x2, // Only on EXPRMEMGRP, indicates a constructor. EXF_NEEDSRET = 0x2, // Only on EXPRBLOCK EXF_ASLEAVE = 0x2, // Only on EXPRGOTO, EXPRRETURN, means use leave instead of br instruction EXF_ISFAULT = 0x2, // Only on EXPRTRY, used for fabricated try/fault (no user code) EXF_HASHTABLESWITCH = 0x2, // Only on EXPRFlatSwitch EXF_BOX = 0x2, // Only on EXPRCAST, indicates a boxing operation (value type -> object) EXF_ARRAYCONST = 0x2, // Only on EXPRARRINIT, indicates that we should init using a memory block EXF_MEMBERSET = 0x2, // Only on EXPRFIELD, indicates that the reference is for set purposes EXF_OPENTYPE = 0x2, // Only on EXPRTYPEOF. Indicates that the type is an open type. EXF_LABELREFERENCED = 0x2, // Only on EXPRLABEL. Indicates the label was targeted by a goto. EXF_GENERATEDQMARK = 0x2, // only on EK_QMARK // 0x4 EXF_INDEXER = 0x4, // Only on EXPRMEMGRP, indicates an indexer. EXF_GOTOCASE = 0x4, // Only on EXPRGOTO, means goto case or goto default EXF_REMOVEFINALLY = 0x4, // Only on EXPRTRY, means that the try-finally should be converted to normal code EXF_UNBOX = 0x4, // Only on EXPRCAST, indicates a unboxing operation (object -> value type) EXF_ARRAYALLCONST = 0x4, // Only on EXPRARRINIT, indicated that all elems are constant (must also have ARRAYCONST set) EXF_CTORPREAMBLE = 0x4, // Only on EXPRBLOCK, indicates that the block is the preamble of a constructor - contains field inits and base ctor call EXF_USERLABEL = 0x4, // Only on EXPRLABEL, indicates that this is a source-code label, not a compiler-generated label // 0x8 EXF_OPERATOR = 0x8, // Only on EXPRMEMGRP, indicates an operator. EXF_ISPOSTOP = 0x8, // Only on EXPRMULTI, indicates <x>++ EXF_FINALLYBLOCKED = 0x8, // Only on EXPRTRY, EXPRGOTO, EXPRRETURN, means that FINALLY block end is unreachable EXF_REFCHECK = 0x8, // Only on EXPRCAST, indicates an reference checked cast is required EXF_WRAPASTEMP = 0x8, // Only on EXPRWRAP, indicates that this wrap represents an actual local // 0x10 EXF_LITERALCONST = 0x10, // Only on EXPRCONSTANT, means this was not a folded constant EXF_BADGOTO = 0x10, // Only on EXPRGOTO, indicates an unrealizable goto EXF_RETURNISYIELD = 0x10, // Only on EXPRRETURN, means this is really a yield, and flow continues EXF_ISFINALLY = 0x10, // Only on EXPRTRY EXF_NEWOBJCALL = 0x10, // Only on EXPRCALL and EXPRMEMGRP, to indicate new <...>(...) EXF_INDEXEXPR = 0x10, // Only on EXPRCAST, indicates a special cast for array indexing EXF_REPLACEWRAP = 0x10, // Only on EXPRWRAP, it means the wrap should be replaced with its expr (during rewriting) // 0x20 EXF_UNREALIZEDGOTO = 0x20, // Only on EXPRGOTO, means target unknown EXF_CONSTRAINED = 0x20, // Only on EXPRCALL, EXPRPROP, indicates a call through a method or prop on a type variable or value type EXF_FORCE_BOX = 0x20, // Only on EXPRCAST, GENERICS: indicates a "forcing" boxing operation (if type parameter boxed then nop, i.e. object -> object, else value type -> object) EXF_SIMPLENAME = 0x20, // Only on EXPRMEMGRP, We're binding a dynamic simple name. 
// 0x40 EXF_ASFINALLYLEAVE = 0x40, // Only on EXPRGOTO, EXPRRETURN, means leave through a finally, ASLEAVE must also be set EXF_BASECALL = 0x40, // Only on EXPRCALL, EXPRFNCPTR, EXPRPROP, EXPREVENT, and EXPRMEMGRP, indicates a "base.XXXX" call EXF_FORCE_UNBOX = 0x40, // Only on EXPRCAST, GENERICS: indicates a "forcing" unboxing operation (if type parameter boxed then castclass, i.e. object -> object, else object -> value type) EXF_ADDRNOCONV = 0x40, // Only on EXPRBINOP, with kind == EK_ADDR, indicates that a conv.u should NOT be emitted. // 0x80 EXF_GOTONOTBLOCKED = 0x80, // Only on EXPRGOTO, means the goto is known to not pass through a finally which does not terminate EXF_DELEGATE = 0x80, // Only on EXPRMEMGRP, indicates an implicit invocation of a delegate: d() vs d.Invoke(). EXF_STATIC_CAST = 0x80, // Only on EXPRCAST, indicates a static cast is required. We implement with stloc, ldloc to a temp of the correct type. // 0x100 EXF_USERCALLABLE = 0x100, // Only on EXPRMEMGRP, indicates a user callable member group. EXF_UNBOXRUNTIME = 0x100, // Only on EXPRCAST, indicates that the runtime binder should unbox this. // 0x200 EXF_NEWSTRUCTASSG = 0x200, // Only on EXPRCALL, indicates that this is a constructor call which assigns to object EXF_GENERATEDSTMT = 0x200, // Only on statement exprs. Indicates that the statement is compiler generated // so we shouldn't report things like "unreachable code" on it. // 0x400 EXF_IMPLICITSTRUCTASSG = 0x400, // Only on EXPRCALL, indicates that this an implicit struct assg call EXF_MARKING = 0x400, // Only on statement exprs. Indicates that we're currently marking // its children for reachability (it's up the stack). //*** The following are usable on multiple node types.*** // 0x000800 and above EXF_UNREACHABLEBEGIN = 0x000800, // indicates an unreachable statement EXF_UNREACHABLEEND = 0x001000, // indicates the end of the statement is unreachable EXF_USEORIGDEBUGINFO = 0x002000, // Only set on EXPRDEBUGNOOP, but tested generally. Indicates foreach node should not be overridden to in token EXF_LASTBRACEDEBUGINFO = 0x004000, // indicates override tree to set debuginfo on last brace EXF_NODEBUGINFO = 0x008000, // indicates no debug info for this statement EXF_IMPLICITTHIS = 0x010000, // indicates a compiler provided this pointer (in the EE, when doing autoexp, this can be anything) EXF_CANTBENULL = 0x020000, // indicate this expression can't ever be null (e.g., "this"). EXF_CHECKOVERFLOW = 0x040000, // indicates that operation should be checked for overflow EXF_PUSH_OP_FIRST = 0x100000, // On any expr, indicates that the first operand must be placed on the stack before // anything else - this is needed for multi-ops involving string concat. EXF_ASSGOP = 0x200000, // On any non stmt exprs, indicates assignment node... EXF_LVALUE = 0x400000, // On any exprs. An lvalue - whether it's legal to assign. // THIS IS THE HIGHEST FLAG: // Indicates that the expression came from a LocalVariableSymbol, FieldSymbol, or PropertySymbol whose type has the same name so // it's OK to use the type instead of the element if using the element would generate an error. EXF_SAMENAMETYPE = 0x800000, EXF_MASK_ANY = EXF_UNREACHABLEBEGIN | EXF_UNREACHABLEEND | EXF_USEORIGDEBUGINFO | EXF_LASTBRACEDEBUGINFO | EXF_NODEBUGINFO | EXF_IMPLICITTHIS | EXF_CANTBENULL | EXF_CHECKOVERFLOW | EXF_PUSH_OP_FIRST | EXF_ASSGOP | EXF_LVALUE | EXF_SAMENAMETYPE, // Used to mask the cast flags off an EXPRCAST. 
EXF_CAST_ALL = EXF_BOX | EXF_UNBOX | EXF_REFCHECK | EXF_INDEXEXPR | EXF_FORCE_BOX | EXF_FORCE_UNBOX | EXF_STATIC_CAST } }
// Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. using System; namespace Microsoft.CSharp.RuntimeBinder.Semantics { [Flags] internal enum EXPRFLAG { // These are specific to various node types. // Order these by value. If you need a new flag, search for the first value that isn't currently valid on your expr kind. // 0x1 EXF_BINOP = 0x1, // On** Many Non Statement Exprs!** This gets its own BIT! // 0x2 EXF_CTOR = 0x2, // Only on EXPRMEMGRP, indicates a constructor. EXF_NEEDSRET = 0x2, // Only on EXPRBLOCK EXF_ASLEAVE = 0x2, // Only on EXPRGOTO, EXPRRETURN, means use leave instead of br instruction EXF_ISFAULT = 0x2, // Only on EXPRTRY, used for fabricated try/fault (no user code) EXF_HASHTABLESWITCH = 0x2, // Only on EXPRFlatSwitch EXF_BOX = 0x2, // Only on EXPRCAST, indicates a boxing operation (value type -> object) EXF_ARRAYCONST = 0x2, // Only on EXPRARRINIT, indicates that we should init using a memory block EXF_MEMBERSET = 0x2, // Only on EXPRFIELD, indicates that the reference is for set purposes EXF_OPENTYPE = 0x2, // Only on EXPRTYPEOF. Indicates that the type is an open type. EXF_LABELREFERENCED = 0x2, // Only on EXPRLABEL. Indicates the label was targeted by a goto. EXF_GENERATEDQMARK = 0x2, // only on EK_QMARK // 0x4 EXF_INDEXER = 0x4, // Only on EXPRMEMGRP, indicates an indexer. EXF_GOTOCASE = 0x4, // Only on EXPRGOTO, means goto case or goto default EXF_REMOVEFINALLY = 0x4, // Only on EXPRTRY, means that the try-finally should be converted to normal code EXF_UNBOX = 0x4, // Only on EXPRCAST, indicates a unboxing operation (object -> value type) EXF_ARRAYALLCONST = 0x4, // Only on EXPRARRINIT, indicated that all elems are constant (must also have ARRAYCONST set) EXF_CTORPREAMBLE = 0x4, // Only on EXPRBLOCK, indicates that the block is the preamble of a constructor - contains field inits and base ctor call EXF_USERLABEL = 0x4, // Only on EXPRLABEL, indicates that this is a source-code label, not a compiler-generated label // 0x8 EXF_OPERATOR = 0x8, // Only on EXPRMEMGRP, indicates an operator. EXF_ISPOSTOP = 0x8, // Only on EXPRMULTI, indicates <x>++ EXF_FINALLYBLOCKED = 0x8, // Only on EXPRTRY, EXPRGOTO, EXPRRETURN, means that FINALLY block end is unreachable EXF_REFCHECK = 0x8, // Only on EXPRCAST, indicates an reference checked cast is required EXF_WRAPASTEMP = 0x8, // Only on EXPRWRAP, indicates that this wrap represents an actual local // 0x10 EXF_LITERALCONST = 0x10, // Only on EXPRCONSTANT, means this was not a folded constant EXF_BADGOTO = 0x10, // Only on EXPRGOTO, indicates an unrealizable goto EXF_RETURNISYIELD = 0x10, // Only on EXPRRETURN, means this is really a yield, and flow continues EXF_ISFINALLY = 0x10, // Only on EXPRTRY EXF_NEWOBJCALL = 0x10, // Only on EXPRCALL and EXPRMEMGRP, to indicate new <...>(...) EXF_INDEXEXPR = 0x10, // Only on EXPRCAST, indicates a special cast for array indexing EXF_REPLACEWRAP = 0x10, // Only on EXPRWRAP, it means the wrap should be replaced with its expr (during rewriting) // 0x20 EXF_UNREALIZEDGOTO = 0x20, // Only on EXPRGOTO, means target unknown EXF_CONSTRAINED = 0x20, // Only on EXPRCALL, EXPRPROP, indicates a call through a method or prop on a type variable or value type EXF_FORCE_BOX = 0x20, // Only on EXPRCAST, GENERICS: indicates a "forcing" boxing operation (if type parameter boxed then nop, i.e. object -> object, else value type -> object) EXF_SIMPLENAME = 0x20, // Only on EXPRMEMGRP, We're binding a dynamic simple name. 
// 0x40 EXF_ASFINALLYLEAVE = 0x40, // Only on EXPRGOTO, EXPRRETURN, means leave through a finally, ASLEAVE must also be set EXF_BASECALL = 0x40, // Only on EXPRCALL, EXPRFNCPTR, EXPRPROP, EXPREVENT, and EXPRMEMGRP, indicates a "base.XXXX" call EXF_FORCE_UNBOX = 0x40, // Only on EXPRCAST, GENERICS: indicates a "forcing" unboxing operation (if type parameter boxed then castclass, i.e. object -> object, else object -> value type) EXF_ADDRNOCONV = 0x40, // Only on EXPRBINOP, with kind == EK_ADDR, indicates that a conv.u should NOT be emitted. // 0x80 EXF_GOTONOTBLOCKED = 0x80, // Only on EXPRGOTO, means the goto is known to not pass through a finally which does not terminate EXF_DELEGATE = 0x80, // Only on EXPRMEMGRP, indicates an implicit invocation of a delegate: d() vs d.Invoke(). EXF_STATIC_CAST = 0x80, // Only on EXPRCAST, indicates a static cast is required. We implement with stloc, ldloc to a temp of the correct type. // 0x100 EXF_USERCALLABLE = 0x100, // Only on EXPRMEMGRP, indicates a user callable member group. EXF_UNBOXRUNTIME = 0x100, // Only on EXPRCAST, indicates that the runtime binder should unbox this. // 0x200 EXF_NEWSTRUCTASSG = 0x200, // Only on EXPRCALL, indicates that this is a constructor call which assigns to object EXF_GENERATEDSTMT = 0x200, // Only on statement exprs. Indicates that the statement is compiler generated // so we shouldn't report things like "unreachable code" on it. // 0x400 EXF_IMPLICITSTRUCTASSG = 0x400, // Only on EXPRCALL, indicates that this an implicit struct assg call EXF_MARKING = 0x400, // Only on statement exprs. Indicates that we're currently marking // its children for reachability (it's up the stack). //*** The following are usable on multiple node types.*** // 0x000800 and above EXF_UNREACHABLEBEGIN = 0x000800, // indicates an unreachable statement EXF_UNREACHABLEEND = 0x001000, // indicates the end of the statement is unreachable EXF_USEORIGDEBUGINFO = 0x002000, // Only set on EXPRDEBUGNOOP, but tested generally. Indicates foreach node should not be overridden to in token EXF_LASTBRACEDEBUGINFO = 0x004000, // indicates override tree to set debuginfo on last brace EXF_NODEBUGINFO = 0x008000, // indicates no debug info for this statement EXF_IMPLICITTHIS = 0x010000, // indicates a compiler provided this pointer (in the EE, when doing autoexp, this can be anything) EXF_CANTBENULL = 0x020000, // indicate this expression can't ever be null (e.g., "this"). EXF_CHECKOVERFLOW = 0x040000, // indicates that operation should be checked for overflow EXF_PUSH_OP_FIRST = 0x100000, // On any expr, indicates that the first operand must be placed on the stack before // anything else - this is needed for multi-ops involving string concat. EXF_ASSGOP = 0x200000, // On any non stmt exprs, indicates assignment node... EXF_LVALUE = 0x400000, // On any exprs. An lvalue - whether it's legal to assign. // THIS IS THE HIGHEST FLAG: // Indicates that the expression came from a LocalVariableSymbol, FieldSymbol, or PropertySymbol whose type has the same name so // it's OK to use the type instead of the element if using the element would generate an error. EXF_SAMENAMETYPE = 0x800000, EXF_MASK_ANY = EXF_UNREACHABLEBEGIN | EXF_UNREACHABLEEND | EXF_USEORIGDEBUGINFO | EXF_LASTBRACEDEBUGINFO | EXF_NODEBUGINFO | EXF_IMPLICITTHIS | EXF_CANTBENULL | EXF_CHECKOVERFLOW | EXF_PUSH_OP_FIRST | EXF_ASSGOP | EXF_LVALUE | EXF_SAMENAMETYPE, // Used to mask the cast flags off an EXPRCAST. 
EXF_CAST_ALL = EXF_BOX | EXF_UNBOX | EXF_REFCHECK | EXF_INDEXEXPR | EXF_FORCE_BOX | EXF_FORCE_UNBOX | EXF_STATIC_CAST } }
-1
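A note on the EXPRFLAG record above: the enum deliberately reuses the same bit values across node kinds and then defines composite masks such as EXF_MASK_ANY and EXF_CAST_ALL. As a minimal, hypothetical C# sketch (not part of the binder source; the ExprFlag names and values merely mirror a few EXF_* constants), this is the combine, test, and mask pattern such a [Flags] enum supports:

```csharp
using System;

// Hypothetical mirror of a few EXF_* constants, for illustration only.
[Flags]
internal enum ExprFlag
{
    None     = 0x0,
    Box      = 0x2,       // like EXF_BOX: boxing cast (value type -> object)
    Unbox    = 0x4,       // like EXF_UNBOX: unboxing cast (object -> value type)
    RefCheck = 0x8,       // like EXF_REFCHECK: checked reference cast
    LValue   = 0x400000,  // like EXF_LVALUE: legal assignment target
    CastAll  = Box | Unbox | RefCheck, // composite mask, like EXF_CAST_ALL
}

internal static class FlagDemo
{
    private static void Main()
    {
        ExprFlag flags = ExprFlag.Box | ExprFlag.LValue;

        // Test one bit: is this cast a boxing operation?
        Console.WriteLine((flags & ExprFlag.Box) != 0);   // True

        // Mask all cast-specific bits off in a single operation.
        Console.WriteLine(flags & ~ExprFlag.CastAll);     // LValue
    }
}
```

Because EXF_BOX, EXF_NEEDSRET, and the other 0x2 flags share one bit, a test like the one above is only meaningful once the expression's node kind is known, which is why each constant's comment pins it to specific EXPR kinds.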
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/tests/JIT/Methodical/MDArray/GaussJordan/structarr_cs_do.csproj
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> </PropertyGroup> <PropertyGroup> <DebugType>Full</DebugType> <Optimize>True</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="structarr.cs" /> </ItemGroup> </Project>
<Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> </PropertyGroup> <PropertyGroup> <DebugType>Full</DebugType> <Optimize>True</Optimize> </PropertyGroup> <ItemGroup> <Compile Include="structarr.cs" /> </ItemGroup> </Project>
-1
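To make the revised Is*Project rules in the description above concrete, here is a minimal C# sketch of the classification order. This is an illustration only: the real detection is implemented in MSBuild .props logic, so the ProjectClassifier type, its return strings, and the path handling are all assumptions.

```csharp
using System.IO;

internal static class ProjectClassifier
{
    // Hypothetical C# port of the directory-based rules; the real detection
    // lives in MSBuild .props files, so treat this purely as documentation.
    public static string Classify(string projectPath)
    {
        string name        = Path.GetFileNameWithoutExtension(projectPath);
        string parentDir   = Path.GetDirectoryName(projectPath) ?? "";
        string parent      = Path.GetFileName(parentDir);
        string grandparent = Path.GetFileName(Path.GetDirectoryName(parentDir) ?? "");
        bool underTests    = projectPath.Replace('\\', '/').Contains("/tests/");

        if (parent == "ref")
            return "IsReferenceAssemblyProject";
        if (parent == "gen" || grandparent == "gen")   // new rule: parent OR grandparent
            return "IsGeneratorProject";
        if (underTests && (name.EndsWith(".Tests") || name.EndsWith(".UnitTests")))
            return "IsTestProject";
        if (underTests && name.EndsWith(".TrimmingTests"))
            return "IsTrimmingTestProject";
        if (underTests)
            return "IsTestSupportProject";
        return "IsSourceProject";                      // new rule: true when all above are false
    }
}
```

Run against this record's path, ./src/tests/JIT/Methodical/MDArray/GaussJordan/structarr_cs_do.csproj, the sketch returns IsTestSupportProject: the file sits under /tests/ but its name carries none of the recognized suffixes. The same description repeats on the following records, so one sketch is given here rather than once per record.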
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/mono/mono/tests/exception5.cs
using System; public class Ex { public static int test (int a) { int res; int fin = 0; try { try { res = 10/a; } catch (DivideByZeroException ex) { res = 34; } catch { res = 22; } } catch { res = 44; } finally { fin += 1; } return fin; } public static int Main () { if (test(0) != 1) return 1; return 0; } }
using System; public class Ex { public static int test (int a) { int res; int fin = 0; try { try { res = 10/a; } catch (DivideByZeroException ex) { res = 34; } catch { res = 22; } } catch { res = 44; } finally { fin += 1; } return fin; } public static int Main () { if (test(0) != 1) return 1; return 0; } }
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/tests/JIT/Regression/CLR-x86-JIT/V1-M12-Beta2/b72164/b72164.cs
// Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. using System; internal unsafe class bug1 { public static void Func1(double* a01) { Console.WriteLine("The result should be 12"); Console.WriteLine(*a01 + (*a01 - (*a01 + -5.0))); } public static int Main() { double* a01 = stackalloc double[1]; *a01 = 7; Func1(a01); return 100; } }
// Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. using System; internal unsafe class bug1 { public static void Func1(double* a01) { Console.WriteLine("The result should be 12"); Console.WriteLine(*a01 + (*a01 - (*a01 + -5.0))); } public static int Main() { double* a01 = stackalloc double[1]; *a01 = 7; Func1(a01); return 100; } }
-1
dotnet/runtime
65,896
Make Is*Project properties unambiguous and improve IsGeneratorProject detection
Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
ViktorHofer
"2022-02-25T16:17:10Z"
"2022-02-28T20:13:23Z"
e652e007df4b7e6a66dbdc4e981496537bd0fcc1
a6f45395354fdcf63ee7f6abdadfb33f32e88459
Make Is*Project properties unambiguous and improve IsGeneratorProject detection. Currently these properties exist which categorize projects: - `IsReferenceAssembly`: The project's parent directory is 'ref' - `IsGeneratorProject`: The project's parent directory is 'gen' - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject property is false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsRuntimeAssembly`: True when all above is false - `IsSourceProject`: True when the project's parent directory is 'src' The IsRuntimeAssembly and IsSourceProject properties are ambiguous and the property names aren't consistent (IsReferenceAssembly vs IsGeneratorProject). Additionally the tree shows that the parent directory of generator projects often isn't 'gen' but the project name in cases when there are multiple generator projects under "gen". This led to such projects being treated incorrectly and developers manually having to set the "IsGeneratorProject" property for such projects. ### Change - **`IsReferenceAssemblyProject`**: The project's parent directory is 'ref' - `IsGeneratorProject`: **The project's parent directory is 'gen' or the parent's parent directory is 'gen'** - `IsTestProject`: The project is located somewhere under a '/tests/' directory and the project name's suffix is either '.Tests' or '.UnitTests'. - `IsTrimmingTestProject`: Same as IsTestProject but the project name's suffix is '.TrimmingTests'. - `IsTestSupportProject`: The project is located somewhere under a '/tests/' directory and the above IsTestProject **and IsTrimmingTestProject** props are false. - `UsingMicrosoftNoTargetsSdk`: The project uses the NoTargets msbuild SDK - `UsingMicrosoftTraversalSdk`: The project uses the Traversal msbuild SDK - `IsSourceProject`: **True when all above is false.**
./src/libraries/System.Private.Xml/tests/XmlReader/ReadContentAs/ReadAsDecimalTests.cs
// Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. using Xunit; namespace System.Xml.Tests { public class DecimalTests { [Fact] public static void ReadContentAsDecimal1() { var reader = Utils.CreateFragmentReader("<Root>44<?a?>.44</Root>"); reader.PositionOnElement("Root"); reader.Read(); Assert.Equal(44.44m, reader.ReadContentAs(typeof(decimal), null)); } [Fact] public static void ReadContentAsDecimal10() { var reader = Utils.CreateFragmentReader("<Root> 00<!-- Comment inbetween-->01<?a?></Root>"); reader.PositionOnElement("Root"); reader.Read(); Assert.Equal(1m, reader.ReadContentAs(typeof(decimal), null)); } [Fact] public static void ReadContentAsDecimal11() { var reader = Utils.CreateFragmentReader("<Root> <?a?>0 </Root>"); reader.PositionOnElement("Root"); reader.Read(); Assert.Equal(0m, reader.ReadContentAs(typeof(decimal), null)); } [Fact] public static void ReadContentAsDecimal12() { var reader = Utils.CreateFragmentReader("<Root> 9<![CDATA[9]]>99.9 </Root>"); reader.PositionOnElement("Root"); reader.Read(); Assert.Equal(9999.9m, reader.ReadContentAs(typeof(decimal), null)); } [Fact] public static void ReadContentAsDecimal2() { var reader = Utils.CreateFragmentReader("<Root> -0<?a?>0<!-- Comment inbetween-->5.<![CDATA[5]]> </Root>"); reader.PositionOnElement("Root"); reader.Read(); Assert.Equal(-5.5m, reader.ReadContentAsDecimal()); } [Fact] public static void ReadContentAsDecimal3() { var reader = Utils.CreateFragmentReader("<Root> 00<!-- Comment inbetween-->01<?a?></Root>"); reader.PositionOnElement("Root"); reader.Read(); Assert.Equal(1m, reader.ReadContentAsDecimal()); } [Fact] public static void ReadContentAsDecimal4() { var reader = Utils.CreateFragmentReader("<Root> <?a?>0 </Root>"); reader.PositionOnElement("Root"); reader.Read(); Assert.Equal(0m, reader.ReadContentAsDecimal()); } [Fact] public static void ReadContentAsDecimal5() { var reader = Utils.CreateFragmentReader("<Root> 9<![CDATA[9]]>99.9 </Root>"); reader.PositionOnElement("Root"); reader.Read(); Assert.Equal(9999.9m, reader.ReadContentAsDecimal()); } [Fact] public static void ReadContentAsDecimal6() { var reader = Utils.CreateFragmentReader("<Root>44<?a?>.44</Root>"); reader.PositionOnElement("Root"); reader.Read(); Assert.Equal(44.44m, reader.ReadContentAsDecimal()); } [Fact] public static void ReadContentAsDecimal7() { var reader = Utils.CreateFragmentReader("<Root> 4<?a?>4.5<!-- Comment inbetween-->5 </Root>"); reader.PositionOnElement("Root"); reader.Read(); Assert.Equal(44.55m, reader.ReadContentAsDecimal()); } [Fact] public static void ReadContentAsDecimal8() { var reader = Utils.CreateFragmentReader("<Root> 4<?a?>4.5<!-- Comment inbetween-->5 </Root>"); reader.PositionOnElement("Root"); reader.Read(); Assert.Equal(44.55m, reader.ReadContentAs(typeof(decimal), null)); } [Fact] public static void ReadContentAsDecimal9() { var reader = Utils.CreateFragmentReader("<Root> -0<?a?>0<!-- Comment inbetween-->5.<![CDATA[5]]> </Root>"); reader.PositionOnElement("Root"); reader.Read(); Assert.Equal(-5.5m, reader.ReadContentAs(typeof(decimal), null)); } } }
// Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. using Xunit; namespace System.Xml.Tests { public class DecimalTests { [Fact] public static void ReadContentAsDecimal1() { var reader = Utils.CreateFragmentReader("<Root>44<?a?>.44</Root>"); reader.PositionOnElement("Root"); reader.Read(); Assert.Equal(44.44m, reader.ReadContentAs(typeof(decimal), null)); } [Fact] public static void ReadContentAsDecimal10() { var reader = Utils.CreateFragmentReader("<Root> 00<!-- Comment inbetween-->01<?a?></Root>"); reader.PositionOnElement("Root"); reader.Read(); Assert.Equal(1m, reader.ReadContentAs(typeof(decimal), null)); } [Fact] public static void ReadContentAsDecimal11() { var reader = Utils.CreateFragmentReader("<Root> <?a?>0 </Root>"); reader.PositionOnElement("Root"); reader.Read(); Assert.Equal(0m, reader.ReadContentAs(typeof(decimal), null)); } [Fact] public static void ReadContentAsDecimal12() { var reader = Utils.CreateFragmentReader("<Root> 9<![CDATA[9]]>99.9 </Root>"); reader.PositionOnElement("Root"); reader.Read(); Assert.Equal(9999.9m, reader.ReadContentAs(typeof(decimal), null)); } [Fact] public static void ReadContentAsDecimal2() { var reader = Utils.CreateFragmentReader("<Root> -0<?a?>0<!-- Comment inbetween-->5.<![CDATA[5]]> </Root>"); reader.PositionOnElement("Root"); reader.Read(); Assert.Equal(-5.5m, reader.ReadContentAsDecimal()); } [Fact] public static void ReadContentAsDecimal3() { var reader = Utils.CreateFragmentReader("<Root> 00<!-- Comment inbetween-->01<?a?></Root>"); reader.PositionOnElement("Root"); reader.Read(); Assert.Equal(1m, reader.ReadContentAsDecimal()); } [Fact] public static void ReadContentAsDecimal4() { var reader = Utils.CreateFragmentReader("<Root> <?a?>0 </Root>"); reader.PositionOnElement("Root"); reader.Read(); Assert.Equal(0m, reader.ReadContentAsDecimal()); } [Fact] public static void ReadContentAsDecimal5() { var reader = Utils.CreateFragmentReader("<Root> 9<![CDATA[9]]>99.9 </Root>"); reader.PositionOnElement("Root"); reader.Read(); Assert.Equal(9999.9m, reader.ReadContentAsDecimal()); } [Fact] public static void ReadContentAsDecimal6() { var reader = Utils.CreateFragmentReader("<Root>44<?a?>.44</Root>"); reader.PositionOnElement("Root"); reader.Read(); Assert.Equal(44.44m, reader.ReadContentAsDecimal()); } [Fact] public static void ReadContentAsDecimal7() { var reader = Utils.CreateFragmentReader("<Root> 4<?a?>4.5<!-- Comment inbetween-->5 </Root>"); reader.PositionOnElement("Root"); reader.Read(); Assert.Equal(44.55m, reader.ReadContentAsDecimal()); } [Fact] public static void ReadContentAsDecimal8() { var reader = Utils.CreateFragmentReader("<Root> 4<?a?>4.5<!-- Comment inbetween-->5 </Root>"); reader.PositionOnElement("Root"); reader.Read(); Assert.Equal(44.55m, reader.ReadContentAs(typeof(decimal), null)); } [Fact] public static void ReadContentAsDecimal9() { var reader = Utils.CreateFragmentReader("<Root> -0<?a?>0<!-- Comment inbetween-->5.<![CDATA[5]]> </Root>"); reader.PositionOnElement("Root"); reader.Read(); Assert.Equal(-5.5m, reader.ReadContentAs(typeof(decimal), null)); } } }
-1
dotnet/runtime
65,889
[Mono] Add SIMD intrinsics for Vector64/128 "All" and "Any" variants of GT/GE/LT/LE
Related to #64072 - `GreaterThanAll` - `GreaterThanAny` - `GreaterThanOrEqualAll` - `GreaterThanOrEqualAny` - `LessThanAll` - `LessThanAny` - `LessThanOrEqualAll` - `LessThanOrEqualAny`
simonrozsival
"2022-02-25T12:07:00Z"
"2022-03-09T14:24:14Z"
e32f3b61cd41e6a97ebe8f512ff673b63ff40640
cbcc616cf386b88e49f97f74182ffff241528179
[Mono] Add SIMD intrinsics for Vector64/128 "All" and "Any" variants of GT/GE/LT/LE. Related to #64072 - `GreaterThanAll` - `GreaterThanAny` - `GreaterThanOrEqualAll` - `GreaterThanOrEqualAny` - `LessThanAll` - `LessThanAny` - `LessThanOrEqualAll` - `LessThanOrEqualAny`
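For context on the record above: the PR teaches the Mono JIT to treat the listed Vector64/Vector128 cross-lane comparisons as intrinsics instead of falling back to managed code. As a usage sketch only (the managed API shown is the existing System.Runtime.Intrinsics surface, not code from this PR):

```csharp
using System;
using System.Runtime.Intrinsics;

internal static class AllAnyDemo
{
    private static void Main()
    {
        Vector128<int> left  = Vector128.Create(5, 6, 7, 8);
        Vector128<int> right = Vector128.Create(1, 2, 3, 9);

        // All: the comparison must hold in every lane.
        Console.WriteLine(Vector128.GreaterThanAll(left, right));     // False: 8 > 9 fails

        // Any: the comparison must hold in at least one lane.
        Console.WriteLine(Vector128.GreaterThanAny(left, right));     // True: 5 > 1

        Console.WriteLine(Vector128.LessThanOrEqualAny(left, right)); // True: 8 <= 9
    }
}
```

Each *All helper asks whether the comparison holds in every lane, while *Any asks whether it holds in at least one, which is the per-element semantics the new Mono lowering has to preserve.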
./src/mono/mono/mini/method-to-ir.c
/** * \file * Convert CIL to the JIT internal representation * * Author: * Paolo Molaro (lupus@ximian.com) * Dietmar Maurer (dietmar@ximian.com) * * (C) 2002 Ximian, Inc. * Copyright 2003-2010 Novell, Inc (http://www.novell.com) * Copyright 2011 Xamarin, Inc (http://www.xamarin.com) * Licensed under the MIT license. See LICENSE file in the project root for full license information. */ #include <config.h> #include <glib.h> #include <mono/utils/mono-compiler.h> #include "mini.h" #ifndef DISABLE_JIT #include <signal.h> #ifdef HAVE_UNISTD_H #include <unistd.h> #endif #include <math.h> #include <string.h> #include <ctype.h> #ifdef HAVE_SYS_TIME_H #include <sys/time.h> #endif #ifdef HAVE_ALLOCA_H #include <alloca.h> #endif #include <mono/utils/memcheck.h> #include <mono/metadata/abi-details.h> #include <mono/metadata/assembly.h> #include <mono/metadata/assembly-internals.h> #include <mono/metadata/attrdefs.h> #include <mono/metadata/loader.h> #include <mono/metadata/tabledefs.h> #include <mono/metadata/class.h> #include <mono/metadata/class-abi-details.h> #include <mono/metadata/object.h> #include <mono/metadata/exception.h> #include <mono/metadata/exception-internals.h> #include <mono/metadata/opcodes.h> #include <mono/metadata/mono-endian.h> #include <mono/metadata/tokentype.h> #include <mono/metadata/tabledefs.h> #include <mono/metadata/marshal.h> #include <mono/metadata/debug-helpers.h> #include <mono/metadata/debug-internals.h> #include <mono/metadata/gc-internals.h> #include <mono/metadata/threads-types.h> #include <mono/metadata/profiler-private.h> #include <mono/metadata/profiler.h> #include <mono/metadata/monitor.h> #include <mono/utils/mono-memory-model.h> #include <mono/utils/mono-error-internals.h> #include <mono/metadata/mono-basic-block.h> #include <mono/metadata/reflection-internals.h> #include <mono/utils/mono-threads-coop.h> #include <mono/utils/mono-utils-debug.h> #include <mono/utils/mono-logger-internals.h> #include <mono/metadata/verify-internals.h> #include <mono/metadata/icall-decl.h> #include "mono/metadata/icall-signatures.h" #include "trace.h" #include "ir-emit.h" #include "jit-icalls.h" #include <mono/jit/jit.h> #include "seq-points.h" #include "aot-compiler.h" #include "mini-llvm.h" #include "mini-runtime.h" #include "llvmonly-runtime.h" #include "mono/utils/mono-tls-inline.h" #define BRANCH_COST 10 #define CALL_COST 10 /* Used for the JIT */ #define INLINE_LENGTH_LIMIT 20 /* * The aot and jit inline limits should be different, * since aot sees the whole program so we can let opt inline methods for us, * while the jit only sees one method, so we have to inline things ourselves. */ /* Used by LLVM AOT */ #define LLVM_AOT_INLINE_LENGTH_LIMIT 30 /* Used to LLVM JIT */ #define LLVM_JIT_INLINE_LENGTH_LIMIT 100 static const gboolean debug_tailcall = FALSE; // logging static const gboolean debug_tailcall_try_all = FALSE; // consider any call followed by ret gboolean mono_tailcall_print_enabled (void) { return debug_tailcall || MONO_TRACE_IS_TRACED (G_LOG_LEVEL_DEBUG, MONO_TRACE_TAILCALL); } void mono_tailcall_print (const char *format, ...) 
{ if (!mono_tailcall_print_enabled ()) return; va_list args; va_start (args, format); g_printv (format, args); va_end (args); } /* These have 'cfg' as an implicit argument */ #define INLINE_FAILURE(msg) do { \ if ((cfg->method != cfg->current_method) && (cfg->current_method->wrapper_type == MONO_WRAPPER_NONE)) { \ inline_failure (cfg, msg); \ goto exception_exit; \ } \ } while (0) #define CHECK_CFG_EXCEPTION do {\ if (cfg->exception_type != MONO_EXCEPTION_NONE) \ goto exception_exit; \ } while (0) #define FIELD_ACCESS_FAILURE(method, field) do { \ field_access_failure ((cfg), (method), (field)); \ goto exception_exit; \ } while (0) #define GENERIC_SHARING_FAILURE(opcode) do { \ if (cfg->gshared) { \ gshared_failure (cfg, opcode, __FILE__, __LINE__); \ goto exception_exit; \ } \ } while (0) #define GSHAREDVT_FAILURE(opcode) do { \ if (cfg->gsharedvt) { \ gsharedvt_failure (cfg, opcode, __FILE__, __LINE__); \ goto exception_exit; \ } \ } while (0) #define OUT_OF_MEMORY_FAILURE do { \ mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR); \ mono_error_set_out_of_memory (cfg->error, ""); \ goto exception_exit; \ } while (0) #define DISABLE_AOT(cfg) do { \ if ((cfg)->verbose_level >= 2) \ printf ("AOT disabled: %s:%d\n", __FILE__, __LINE__); \ (cfg)->disable_aot = TRUE; \ } while (0) #define LOAD_ERROR do { \ break_on_unverified (); \ mono_cfg_set_exception (cfg, MONO_EXCEPTION_TYPE_LOAD); \ goto exception_exit; \ } while (0) #define TYPE_LOAD_ERROR(klass) do { \ cfg->exception_ptr = klass; \ LOAD_ERROR; \ } while (0) #define CHECK_CFG_ERROR do {\ if (!is_ok (cfg->error)) { \ mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR); \ goto mono_error_exit; \ } \ } while (0) int mono_op_to_op_imm (int opcode); int mono_op_to_op_imm_noemul (int opcode); static int inline_method (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **sp, guchar *ip, guint real_offset, gboolean inline_always, gboolean *is_empty); static MonoInst* convert_value (MonoCompile *cfg, MonoType *type, MonoInst *ins); /* helper methods signatures */ /* type loading helpers */ static GENERATE_GET_CLASS_WITH_CACHE (iequatable, "System", "IEquatable`1") static GENERATE_GET_CLASS_WITH_CACHE (geqcomparer, "System.Collections.Generic", "GenericEqualityComparer`1"); /* * Instruction metadata */ #ifdef MINI_OP #undef MINI_OP #endif #ifdef MINI_OP3 #undef MINI_OP3 #endif #define MINI_OP(a,b,dest,src1,src2) dest, src1, src2, ' ', #define MINI_OP3(a,b,dest,src1,src2,src3) dest, src1, src2, src3, #define NONE ' ' #define IREG 'i' #define FREG 'f' #define VREG 'v' #define XREG 'x' #if SIZEOF_REGISTER == 8 && SIZEOF_REGISTER == TARGET_SIZEOF_VOID_P #define LREG IREG #else #define LREG 'l' #endif /* keep in sync with the enum in mini.h */ const char mini_ins_info[] = { #include "mini-ops.h" }; #undef MINI_OP #undef MINI_OP3 #define MINI_OP(a,b,dest,src1,src2) ((src2) != NONE ? 2 : ((src1) != NONE ? 1 : 0)), #define MINI_OP3(a,b,dest,src1,src2,src3) ((src3) != NONE ? 3 : ((src2) != NONE ? 2 : ((src1) != NONE ? 1 : 0))), /* * This should contain the index of the last sreg + 1. This is not the same * as the number of sregs for opcodes like IA64_CMP_EQ_IMM. 
*/ const gint8 mini_ins_sreg_counts[] = { #include "mini-ops.h" }; #undef MINI_OP #undef MINI_OP3 guint32 mono_alloc_ireg (MonoCompile *cfg) { return alloc_ireg (cfg); } guint32 mono_alloc_lreg (MonoCompile *cfg) { return alloc_lreg (cfg); } guint32 mono_alloc_freg (MonoCompile *cfg) { return alloc_freg (cfg); } guint32 mono_alloc_preg (MonoCompile *cfg) { return alloc_preg (cfg); } guint32 mono_alloc_dreg (MonoCompile *cfg, MonoStackType stack_type) { return alloc_dreg (cfg, stack_type); } /* * mono_alloc_ireg_ref: * * Allocate an IREG, and mark it as holding a GC ref. */ guint32 mono_alloc_ireg_ref (MonoCompile *cfg) { return alloc_ireg_ref (cfg); } /* * mono_alloc_ireg_mp: * * Allocate an IREG, and mark it as holding a managed pointer. */ guint32 mono_alloc_ireg_mp (MonoCompile *cfg) { return alloc_ireg_mp (cfg); } /* * mono_alloc_ireg_copy: * * Allocate an IREG with the same GC type as VREG. */ guint32 mono_alloc_ireg_copy (MonoCompile *cfg, guint32 vreg) { if (vreg_is_ref (cfg, vreg)) return alloc_ireg_ref (cfg); else if (vreg_is_mp (cfg, vreg)) return alloc_ireg_mp (cfg); else return alloc_ireg (cfg); } guint mono_type_to_regmove (MonoCompile *cfg, MonoType *type) { if (m_type_is_byref (type)) return OP_MOVE; type = mini_get_underlying_type (type); handle_enum: switch (type->type) { case MONO_TYPE_I1: case MONO_TYPE_U1: return OP_MOVE; case MONO_TYPE_I2: case MONO_TYPE_U2: return OP_MOVE; case MONO_TYPE_I4: case MONO_TYPE_U4: return OP_MOVE; case MONO_TYPE_I: case MONO_TYPE_U: case MONO_TYPE_PTR: case MONO_TYPE_FNPTR: return OP_MOVE; case MONO_TYPE_CLASS: case MONO_TYPE_STRING: case MONO_TYPE_OBJECT: case MONO_TYPE_SZARRAY: case MONO_TYPE_ARRAY: return OP_MOVE; case MONO_TYPE_I8: case MONO_TYPE_U8: #if SIZEOF_REGISTER == 8 return OP_MOVE; #else return OP_LMOVE; #endif case MONO_TYPE_R4: return cfg->r4fp ? 
OP_RMOVE : OP_FMOVE; case MONO_TYPE_R8: return OP_FMOVE; case MONO_TYPE_VALUETYPE: if (m_class_is_enumtype (type->data.klass)) { type = mono_class_enum_basetype_internal (type->data.klass); goto handle_enum; } if (MONO_CLASS_IS_SIMD (cfg, mono_class_from_mono_type_internal (type))) return OP_XMOVE; return OP_VMOVE; case MONO_TYPE_TYPEDBYREF: return OP_VMOVE; case MONO_TYPE_GENERICINST: if (MONO_CLASS_IS_SIMD (cfg, mono_class_from_mono_type_internal (type))) return OP_XMOVE; type = m_class_get_byval_arg (type->data.generic_class->container_class); goto handle_enum; case MONO_TYPE_VAR: case MONO_TYPE_MVAR: g_assert (cfg->gshared); if (mini_type_var_is_vt (type)) return OP_VMOVE; else return mono_type_to_regmove (cfg, mini_get_underlying_type (type)); default: g_error ("unknown type 0x%02x in type_to_regstore", type->type); } return -1; } void mono_print_bb (MonoBasicBlock *bb, const char *msg) { int i; MonoInst *tree; GString *str = g_string_new (""); g_string_append_printf (str, "%s %d: [IN: ", msg, bb->block_num); for (i = 0; i < bb->in_count; ++i) g_string_append_printf (str, " BB%d(%d)", bb->in_bb [i]->block_num, bb->in_bb [i]->dfn); g_string_append_printf (str, ", OUT: "); for (i = 0; i < bb->out_count; ++i) g_string_append_printf (str, " BB%d(%d)", bb->out_bb [i]->block_num, bb->out_bb [i]->dfn); g_string_append_printf (str, " ]\n"); g_print ("%s", str->str); g_string_free (str, TRUE); for (tree = bb->code; tree; tree = tree->next) mono_print_ins_index (-1, tree); } static MONO_NEVER_INLINE gboolean break_on_unverified (void) { if (mini_debug_options.break_on_unverified) { G_BREAKPOINT (); return TRUE; } return FALSE; } static void clear_cfg_error (MonoCompile *cfg) { mono_error_cleanup (cfg->error); error_init (cfg->error); } static MONO_NEVER_INLINE void field_access_failure (MonoCompile *cfg, MonoMethod *method, MonoClassField *field) { char *method_fname = mono_method_full_name (method, TRUE); char *field_fname = mono_field_full_name (field); mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR); mono_error_set_generic_error (cfg->error, "System", "FieldAccessException", "Field `%s' is inaccessible from method `%s'\n", field_fname, method_fname); g_free (method_fname); g_free (field_fname); } static MONO_NEVER_INLINE void inline_failure (MonoCompile *cfg, const char *msg) { if (cfg->verbose_level >= 2) printf ("inline failed: %s\n", msg); mono_cfg_set_exception (cfg, MONO_EXCEPTION_INLINE_FAILED); } static MONO_NEVER_INLINE void gshared_failure (MonoCompile *cfg, int opcode, const char *file, int line) { if (cfg->verbose_level > 2) printf ("sharing failed for method %s.%s.%s/%d opcode %s line %d\n", m_class_get_name_space (cfg->current_method->klass), m_class_get_name (cfg->current_method->klass), cfg->current_method->name, cfg->current_method->signature->param_count, mono_opcode_name (opcode), line); mono_cfg_set_exception (cfg, MONO_EXCEPTION_GENERIC_SHARING_FAILED); } static MONO_NEVER_INLINE void gsharedvt_failure (MonoCompile *cfg, int opcode, const char *file, int line) { cfg->exception_message = g_strdup_printf ("gsharedvt failed for method %s.%s.%s/%d opcode %s %s:%d", m_class_get_name_space (cfg->current_method->klass), m_class_get_name (cfg->current_method->klass), cfg->current_method->name, cfg->current_method->signature->param_count, mono_opcode_name ((opcode)), file, line); if (cfg->verbose_level >= 2) printf ("%s\n", cfg->exception_message); mono_cfg_set_exception (cfg, MONO_EXCEPTION_GENERIC_SHARING_FAILED); } void mini_set_inline_failure (MonoCompile *cfg, const 
char *msg) { if (cfg->verbose_level >= 2) printf ("inline failed: %s\n", msg); mono_cfg_set_exception (cfg, MONO_EXCEPTION_INLINE_FAILED); } /* * When using gsharedvt, some instatiations might be verifiable, and some might be not. i.e. * foo<T> (int i) { ldarg.0; box T; } */ #define UNVERIFIED do { \ if (cfg->gsharedvt) { \ if (cfg->verbose_level > 2) \ printf ("gsharedvt method failed to verify, falling back to instantiation.\n"); \ mono_cfg_set_exception (cfg, MONO_EXCEPTION_GENERIC_SHARING_FAILED); \ goto exception_exit; \ } \ break_on_unverified (); \ goto unverified; \ } while (0) #define GET_BBLOCK(cfg,tblock,ip) do { \ (tblock) = cfg->cil_offset_to_bb [(ip) - cfg->cil_start]; \ if (!(tblock)) { \ if ((ip) >= end || (ip) < header->code) UNVERIFIED; \ NEW_BBLOCK (cfg, (tblock)); \ (tblock)->cil_code = (ip); \ ADD_BBLOCK (cfg, (tblock)); \ } \ } while (0) /* Emit conversions so both operands of a binary opcode are of the same type */ static void add_widen_op (MonoCompile *cfg, MonoInst *ins, MonoInst **arg1_ref, MonoInst **arg2_ref) { MonoInst *arg1 = *arg1_ref; MonoInst *arg2 = *arg2_ref; if (cfg->r4fp && ((arg1->type == STACK_R4 && arg2->type == STACK_R8) || (arg1->type == STACK_R8 && arg2->type == STACK_R4))) { MonoInst *conv; /* Mixing r4/r8 is allowed by the spec */ if (arg1->type == STACK_R4) { int dreg = alloc_freg (cfg); EMIT_NEW_UNALU (cfg, conv, OP_RCONV_TO_R8, dreg, arg1->dreg); conv->type = STACK_R8; ins->sreg1 = dreg; *arg1_ref = conv; } if (arg2->type == STACK_R4) { int dreg = alloc_freg (cfg); EMIT_NEW_UNALU (cfg, conv, OP_RCONV_TO_R8, dreg, arg2->dreg); conv->type = STACK_R8; ins->sreg2 = dreg; *arg2_ref = conv; } } #if SIZEOF_REGISTER == 8 /* FIXME: Need to add many more cases */ if ((arg1)->type == STACK_PTR && (arg2)->type == STACK_I4) { MonoInst *widen; int dr = alloc_preg (cfg); EMIT_NEW_UNALU (cfg, widen, OP_SEXT_I4, dr, (arg2)->dreg); (ins)->sreg2 = widen->dreg; } #endif } #define ADD_UNOP(op) do { \ MONO_INST_NEW (cfg, ins, (op)); \ sp--; \ ins->sreg1 = sp [0]->dreg; \ type_from_op (cfg, ins, sp [0], NULL); \ CHECK_TYPE (ins); \ (ins)->dreg = alloc_dreg ((cfg), (MonoStackType)(ins)->type); \ MONO_ADD_INS ((cfg)->cbb, (ins)); \ *sp++ = mono_decompose_opcode (cfg, ins); \ } while (0) #define ADD_BINCOND(next_block) do { \ MonoInst *cmp; \ sp -= 2; \ MONO_INST_NEW(cfg, cmp, OP_COMPARE); \ cmp->sreg1 = sp [0]->dreg; \ cmp->sreg2 = sp [1]->dreg; \ add_widen_op (cfg, cmp, &sp [0], &sp [1]); \ type_from_op (cfg, cmp, sp [0], sp [1]); \ CHECK_TYPE (cmp); \ type_from_op (cfg, ins, sp [0], sp [1]); \ ins->inst_many_bb = (MonoBasicBlock **)mono_mempool_alloc (cfg->mempool, sizeof(gpointer)*2); \ GET_BBLOCK (cfg, tblock, target); \ link_bblock (cfg, cfg->cbb, tblock); \ ins->inst_true_bb = tblock; \ if ((next_block)) { \ link_bblock (cfg, cfg->cbb, (next_block)); \ ins->inst_false_bb = (next_block); \ start_new_bblock = 1; \ } else { \ GET_BBLOCK (cfg, tblock, next_ip); \ link_bblock (cfg, cfg->cbb, tblock); \ ins->inst_false_bb = tblock; \ start_new_bblock = 2; \ } \ if (sp != stack_start) { \ handle_stack_args (cfg, stack_start, sp - stack_start); \ CHECK_UNVERIFIABLE (cfg); \ } \ MONO_ADD_INS (cfg->cbb, cmp); \ MONO_ADD_INS (cfg->cbb, ins); \ } while (0) /* * * link_bblock: Links two basic blocks * * links two basic blocks in the control flow graph, the 'from' * argument is the starting block and the 'to' argument is the block * the control flow ends to after 'from'. 
*/ static void link_bblock (MonoCompile *cfg, MonoBasicBlock *from, MonoBasicBlock* to) { MonoBasicBlock **newa; int i, found; #if 0 if (from->cil_code) { if (to->cil_code) printf ("edge from IL%04x to IL_%04x\n", from->cil_code - cfg->cil_code, to->cil_code - cfg->cil_code); else printf ("edge from IL%04x to exit\n", from->cil_code - cfg->cil_code); } else { if (to->cil_code) printf ("edge from entry to IL_%04x\n", to->cil_code - cfg->cil_code); else printf ("edge from entry to exit\n"); } #endif found = FALSE; for (i = 0; i < from->out_count; ++i) { if (to == from->out_bb [i]) { found = TRUE; break; } } if (!found) { newa = (MonoBasicBlock **)mono_mempool_alloc (cfg->mempool, sizeof (gpointer) * (from->out_count + 1)); for (i = 0; i < from->out_count; ++i) { newa [i] = from->out_bb [i]; } newa [i] = to; from->out_count++; from->out_bb = newa; } found = FALSE; for (i = 0; i < to->in_count; ++i) { if (from == to->in_bb [i]) { found = TRUE; break; } } if (!found) { newa = (MonoBasicBlock **)mono_mempool_alloc (cfg->mempool, sizeof (gpointer) * (to->in_count + 1)); for (i = 0; i < to->in_count; ++i) { newa [i] = to->in_bb [i]; } newa [i] = from; to->in_count++; to->in_bb = newa; } } void mono_link_bblock (MonoCompile *cfg, MonoBasicBlock *from, MonoBasicBlock* to) { link_bblock (cfg, from, to); } static void mono_create_spvar_for_region (MonoCompile *cfg, int region); static void mark_bb_in_region (MonoCompile *cfg, guint region, uint32_t start, uint32_t end) { MonoBasicBlock *bb = cfg->cil_offset_to_bb [start]; //start must exist in cil_offset_to_bb as those are il offsets used by EH which should have GET_BBLOCK early. g_assert (bb); if (cfg->verbose_level > 1) g_print ("FIRST BB for %d is BB_%d\n", start, bb->block_num); for (; bb && bb->real_offset < end; bb = bb->next_bb) { //no one claimed this bb, take it. 
if (bb->region == -1) { bb->region = region; continue; } //current region is an early handler, bail if ((bb->region & (0xf << 4)) != MONO_REGION_TRY) { continue; } //current region is a try, only overwrite if new region is a handler if ((region & (0xf << 4)) != MONO_REGION_TRY) { bb->region = region; } } if (cfg->spvars) mono_create_spvar_for_region (cfg, region); } static void compute_bb_regions (MonoCompile *cfg) { MonoBasicBlock *bb; MonoMethodHeader *header = cfg->header; int i; for (bb = cfg->bb_entry; bb; bb = bb->next_bb) bb->region = -1; for (i = 0; i < header->num_clauses; ++i) { MonoExceptionClause *clause = &header->clauses [i]; if (clause->flags == MONO_EXCEPTION_CLAUSE_FILTER) mark_bb_in_region (cfg, ((i + 1) << 8) | MONO_REGION_FILTER | clause->flags, clause->data.filter_offset, clause->handler_offset); guint handler_region; if (clause->flags == MONO_EXCEPTION_CLAUSE_FINALLY) handler_region = ((i + 1) << 8) | MONO_REGION_FINALLY | clause->flags; else if (clause->flags == MONO_EXCEPTION_CLAUSE_FAULT) handler_region = ((i + 1) << 8) | MONO_REGION_FAULT | clause->flags; else handler_region = ((i + 1) << 8) | MONO_REGION_CATCH | clause->flags; mark_bb_in_region (cfg, handler_region, clause->handler_offset, clause->handler_offset + clause->handler_len); mark_bb_in_region (cfg, ((i + 1) << 8) | clause->flags, clause->try_offset, clause->try_offset + clause->try_len); } if (cfg->verbose_level > 2) { MonoBasicBlock *bb; for (bb = cfg->bb_entry; bb; bb = bb->next_bb) g_print ("REGION BB%d IL_%04x ID_%08X\n", bb->block_num, bb->real_offset, bb->region); } } static gboolean ip_in_finally_clause (MonoCompile *cfg, int offset) { MonoMethodHeader *header = cfg->header; MonoExceptionClause *clause; int i; for (i = 0; i < header->num_clauses; ++i) { clause = &header->clauses [i]; if (clause->flags != MONO_EXCEPTION_CLAUSE_FINALLY && clause->flags != MONO_EXCEPTION_CLAUSE_FAULT) continue; if (MONO_OFFSET_IN_HANDLER (clause, offset)) return TRUE; } return FALSE; } /* Find clauses between ip and target, from inner to outer */ static GList* mono_find_leave_clauses (MonoCompile *cfg, guchar *ip, guchar *target) { MonoMethodHeader *header = cfg->header; MonoExceptionClause *clause; int i; GList *res = NULL; for (i = 0; i < header->num_clauses; ++i) { clause = &header->clauses [i]; if (MONO_OFFSET_IN_CLAUSE (clause, (ip - header->code)) && (!MONO_OFFSET_IN_CLAUSE (clause, (target - header->code)))) { MonoLeaveClause *leave = mono_mempool_alloc0 (cfg->mempool, sizeof (MonoLeaveClause)); leave->index = i; leave->clause = clause; res = g_list_append_mempool (cfg->mempool, res, leave); } } return res; } static void mono_create_spvar_for_region (MonoCompile *cfg, int region) { MonoInst *var; var = (MonoInst *)g_hash_table_lookup (cfg->spvars, GINT_TO_POINTER (region)); if (var) return; var = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL); /* prevent it from being register allocated */ var->flags |= MONO_INST_VOLATILE; g_hash_table_insert (cfg->spvars, GINT_TO_POINTER (region), var); } MonoInst * mono_find_exvar_for_offset (MonoCompile *cfg, int offset) { return (MonoInst *)g_hash_table_lookup (cfg->exvars, GINT_TO_POINTER (offset)); } static MonoInst* mono_create_exvar_for_offset (MonoCompile *cfg, int offset) { MonoInst *var; var = (MonoInst *)g_hash_table_lookup (cfg->exvars, GINT_TO_POINTER (offset)); if (var) return var; var = mono_compile_create_var (cfg, mono_get_object_type (), OP_LOCAL); /* prevent it from being register allocated */ var->flags |= MONO_INST_VOLATILE; 
g_hash_table_insert (cfg->exvars, GINT_TO_POINTER (offset), var); return var; } /* * Returns the type used in the eval stack when @type is loaded. * FIXME: return a MonoType/MonoClass for the byref and VALUETYPE cases. */ void mini_type_to_eval_stack_type (MonoCompile *cfg, MonoType *type, MonoInst *inst) { MonoClass *klass; type = mini_get_underlying_type (type); inst->klass = klass = mono_class_from_mono_type_internal (type); if (m_type_is_byref (type)) { inst->type = STACK_MP; return; } handle_enum: switch (type->type) { case MONO_TYPE_VOID: inst->type = STACK_INV; return; case MONO_TYPE_I1: case MONO_TYPE_U1: case MONO_TYPE_I2: case MONO_TYPE_U2: case MONO_TYPE_I4: case MONO_TYPE_U4: inst->type = STACK_I4; return; case MONO_TYPE_I: case MONO_TYPE_U: case MONO_TYPE_PTR: case MONO_TYPE_FNPTR: inst->type = STACK_PTR; return; case MONO_TYPE_CLASS: case MONO_TYPE_STRING: case MONO_TYPE_OBJECT: case MONO_TYPE_SZARRAY: case MONO_TYPE_ARRAY: inst->type = STACK_OBJ; return; case MONO_TYPE_I8: case MONO_TYPE_U8: inst->type = STACK_I8; return; case MONO_TYPE_R4: inst->type = cfg->r4_stack_type; break; case MONO_TYPE_R8: inst->type = STACK_R8; return; case MONO_TYPE_VALUETYPE: if (m_class_is_enumtype (type->data.klass)) { type = mono_class_enum_basetype_internal (type->data.klass); goto handle_enum; } else { inst->klass = klass; inst->type = STACK_VTYPE; return; } case MONO_TYPE_TYPEDBYREF: inst->klass = mono_defaults.typed_reference_class; inst->type = STACK_VTYPE; return; case MONO_TYPE_GENERICINST: type = m_class_get_byval_arg (type->data.generic_class->container_class); goto handle_enum; case MONO_TYPE_VAR: case MONO_TYPE_MVAR: g_assert (cfg->gshared); if (mini_is_gsharedvt_type (type)) { g_assert (cfg->gsharedvt); inst->type = STACK_VTYPE; } else { mini_type_to_eval_stack_type (cfg, mini_get_underlying_type (type), inst); } return; default: g_error ("unknown type 0x%02x in eval stack type", type->type); } } /* * The following tables are used to quickly validate the IL code in type_from_op (). */ #define IF_P8(v) (SIZEOF_VOID_P == 8 ? 
v : STACK_INV) #define IF_P8_I8 IF_P8(STACK_I8) #define IF_P8_PTR IF_P8(STACK_PTR) static const char bin_num_table [STACK_MAX] [STACK_MAX] = { {STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV}, {STACK_INV, STACK_I4, IF_P8_I8, STACK_PTR, STACK_INV, STACK_MP, STACK_INV, STACK_INV}, {STACK_INV, IF_P8_I8, STACK_I8, IF_P8_PTR, STACK_INV, STACK_INV, STACK_INV, STACK_INV}, {STACK_INV, STACK_PTR, IF_P8_PTR, STACK_PTR, STACK_INV, STACK_MP, STACK_INV, STACK_INV}, {STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_R8, STACK_INV, STACK_INV, STACK_INV, STACK_R8}, {STACK_INV, STACK_MP, STACK_INV, STACK_MP, STACK_INV, STACK_PTR, STACK_INV, STACK_INV}, {STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV}, {STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV}, {STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_R8, STACK_INV, STACK_INV, STACK_INV, STACK_R4} }; static const char neg_table [] = { STACK_INV, STACK_I4, STACK_I8, STACK_PTR, STACK_R8, STACK_INV, STACK_INV, STACK_INV, STACK_R4 }; /* reduce the size of this table */ static const char bin_int_table [STACK_MAX] [STACK_MAX] = { {STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV}, {STACK_INV, STACK_I4, IF_P8_I8, STACK_PTR, STACK_INV, STACK_INV, STACK_INV, STACK_INV}, {STACK_INV, IF_P8_I8, STACK_I8, IF_P8_PTR, STACK_INV, STACK_INV, STACK_INV, STACK_INV}, {STACK_INV, STACK_PTR, IF_P8_PTR, STACK_PTR, STACK_INV, STACK_INV, STACK_INV, STACK_INV}, {STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV}, {STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV}, {STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV}, {STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV} }; #define P1 (SIZEOF_VOID_P == 8) static const char bin_comp_table [STACK_MAX] [STACK_MAX] = { /* Inv i L p F & O vt r4 */ {0}, {0, 1, 0, 1, 0, 0, 0, 0}, /* i, int32 */ {0, 0, 1,P1, 0, 0, 0, 0}, /* L, int64 */ {0, 1,P1, 1, 0, 2, 4, 0}, /* p, ptr */ {0, 0, 0, 0, 1, 0, 0, 0, 1}, /* F, R8 */ {0, 0, 0, 2, 0, 1, 0, 0}, /* &, managed pointer */ {0, 0, 0, 4, 0, 0, 3, 0}, /* O, reference */ {0, 0, 0, 0, 0, 0, 0, 0}, /* vt value type */ {0, 0, 0, 0, 1, 0, 0, 0, 1}, /* r, r4 */ }; #undef P1 /* reduce the size of this table */ static const char shift_table [STACK_MAX] [STACK_MAX] = { {STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV}, {STACK_INV, STACK_I4, STACK_INV, STACK_I4, STACK_INV, STACK_INV, STACK_INV, STACK_INV}, {STACK_INV, STACK_I8, STACK_INV, STACK_I8, STACK_INV, STACK_INV, STACK_INV, STACK_INV}, {STACK_INV, STACK_PTR, STACK_INV, STACK_PTR, STACK_INV, STACK_INV, STACK_INV, STACK_INV}, {STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV}, {STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV}, {STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV}, {STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV} }; /* * Tables to map from the non-specific opcode to the matching * type-specific opcode. 
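 * For example (illustrative): for CEE_ADD on two STACK_I4 operands,
 * bin_num_table gives STACK_I4, and adding binops_op_map [STACK_I4]
 * (== OP_IADD - CEE_ADD) rewrites the generic opcode into OP_IADD.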
*/ /* handles from CEE_ADD to CEE_SHR_UN (CEE_REM_UN for floats) */ static const guint16 binops_op_map [STACK_MAX] = { 0, OP_IADD-CEE_ADD, OP_LADD-CEE_ADD, OP_PADD-CEE_ADD, OP_FADD-CEE_ADD, OP_PADD-CEE_ADD, 0, 0, OP_RADD-CEE_ADD }; /* handles from CEE_NEG to CEE_CONV_U8 */ static const guint16 unops_op_map [STACK_MAX] = { 0, OP_INEG-CEE_NEG, OP_LNEG-CEE_NEG, OP_PNEG-CEE_NEG, OP_FNEG-CEE_NEG, OP_PNEG-CEE_NEG, 0, 0, OP_RNEG-CEE_NEG }; /* handles from CEE_CONV_U2 to CEE_SUB_OVF_UN */ static const guint16 ovfops_op_map [STACK_MAX] = { 0, OP_ICONV_TO_U2-CEE_CONV_U2, OP_LCONV_TO_U2-CEE_CONV_U2, OP_PCONV_TO_U2-CEE_CONV_U2, OP_FCONV_TO_U2-CEE_CONV_U2, OP_PCONV_TO_U2-CEE_CONV_U2, OP_PCONV_TO_U2-CEE_CONV_U2, 0, OP_RCONV_TO_U2-CEE_CONV_U2 }; /* handles from CEE_CONV_OVF_I1_UN to CEE_CONV_OVF_U_UN */ static const guint16 ovf2ops_op_map [STACK_MAX] = { 0, OP_ICONV_TO_OVF_I1_UN-CEE_CONV_OVF_I1_UN, OP_LCONV_TO_OVF_I1_UN-CEE_CONV_OVF_I1_UN, OP_PCONV_TO_OVF_I1_UN-CEE_CONV_OVF_I1_UN, OP_FCONV_TO_OVF_I1_UN-CEE_CONV_OVF_I1_UN, OP_PCONV_TO_OVF_I1_UN-CEE_CONV_OVF_I1_UN, 0, 0, OP_RCONV_TO_OVF_I1_UN-CEE_CONV_OVF_I1_UN }; /* handles from CEE_CONV_OVF_I1 to CEE_CONV_OVF_U8 */ static const guint16 ovf3ops_op_map [STACK_MAX] = { 0, OP_ICONV_TO_OVF_I1-CEE_CONV_OVF_I1, OP_LCONV_TO_OVF_I1-CEE_CONV_OVF_I1, OP_PCONV_TO_OVF_I1-CEE_CONV_OVF_I1, OP_FCONV_TO_OVF_I1-CEE_CONV_OVF_I1, OP_PCONV_TO_OVF_I1-CEE_CONV_OVF_I1, 0, 0, OP_RCONV_TO_OVF_I1-CEE_CONV_OVF_I1 }; /* handles from CEE_BEQ to CEE_BLT_UN */ static const guint16 beqops_op_map [STACK_MAX] = { 0, OP_IBEQ-CEE_BEQ, OP_LBEQ-CEE_BEQ, OP_PBEQ-CEE_BEQ, OP_FBEQ-CEE_BEQ, OP_PBEQ-CEE_BEQ, OP_PBEQ-CEE_BEQ, 0, OP_FBEQ-CEE_BEQ }; /* handles from CEE_CEQ to CEE_CLT_UN */ static const guint16 ceqops_op_map [STACK_MAX] = { 0, OP_ICEQ-OP_CEQ, OP_LCEQ-OP_CEQ, OP_PCEQ-OP_CEQ, OP_FCEQ-OP_CEQ, OP_PCEQ-OP_CEQ, OP_PCEQ-OP_CEQ, 0, OP_RCEQ-OP_CEQ }; /* * Sets ins->type (the type on the eval stack) according to the * type of the opcode and the arguments to it. * Invalid IL code is marked by setting ins->type to the invalid value STACK_INV. * * FIXME: this function sets ins->type unconditionally in some cases, but * it should set it to invalid for some types (a conv.x on an object) */ static void type_from_op (MonoCompile *cfg, MonoInst *ins, MonoInst *src1, MonoInst *src2) { switch (ins->opcode) { /* binops */ case MONO_CEE_ADD: case MONO_CEE_SUB: case MONO_CEE_MUL: case MONO_CEE_DIV: case MONO_CEE_REM: /* FIXME: check unverifiable args for STACK_MP */ ins->type = bin_num_table [src1->type] [src2->type]; ins->opcode += binops_op_map [ins->type]; break; case MONO_CEE_DIV_UN: case MONO_CEE_REM_UN: case MONO_CEE_AND: case MONO_CEE_OR: case MONO_CEE_XOR: ins->type = bin_int_table [src1->type] [src2->type]; ins->opcode += binops_op_map [ins->type]; break; case MONO_CEE_SHL: case MONO_CEE_SHR: case MONO_CEE_SHR_UN: ins->type = shift_table [src1->type] [src2->type]; ins->opcode += binops_op_map [ins->type]; break; case OP_COMPARE: case OP_LCOMPARE: case OP_ICOMPARE: ins->type = bin_comp_table [src1->type] [src2->type] ? STACK_I4: STACK_INV; if ((src1->type == STACK_I8) || ((TARGET_SIZEOF_VOID_P == 8) && ((src1->type == STACK_PTR) || (src1->type == STACK_OBJ) || (src1->type == STACK_MP)))) ins->opcode = OP_LCOMPARE; else if (src1->type == STACK_R4) ins->opcode = OP_RCOMPARE; else if (src1->type == STACK_R8) ins->opcode = OP_FCOMPARE; else ins->opcode = OP_ICOMPARE; break; case OP_ICOMPARE_IMM: ins->type = bin_comp_table [src1->type] [src1->type] ? 
STACK_I4 : STACK_INV; if ((src1->type == STACK_I8) || ((TARGET_SIZEOF_VOID_P == 8) && ((src1->type == STACK_PTR) || (src1->type == STACK_OBJ) || (src1->type == STACK_MP)))) ins->opcode = OP_LCOMPARE_IMM; break; case MONO_CEE_BEQ: case MONO_CEE_BGE: case MONO_CEE_BGT: case MONO_CEE_BLE: case MONO_CEE_BLT: case MONO_CEE_BNE_UN: case MONO_CEE_BGE_UN: case MONO_CEE_BGT_UN: case MONO_CEE_BLE_UN: case MONO_CEE_BLT_UN: ins->opcode += beqops_op_map [src1->type]; break; case OP_CEQ: ins->type = bin_comp_table [src1->type] [src2->type] ? STACK_I4: STACK_INV; ins->opcode += ceqops_op_map [src1->type]; break; case OP_CGT: case OP_CGT_UN: case OP_CLT: case OP_CLT_UN: ins->type = (bin_comp_table [src1->type] [src2->type] & 1) ? STACK_I4: STACK_INV; ins->opcode += ceqops_op_map [src1->type]; break; /* unops */ case MONO_CEE_NEG: ins->type = neg_table [src1->type]; ins->opcode += unops_op_map [ins->type]; break; case MONO_CEE_NOT: if (src1->type >= STACK_I4 && src1->type <= STACK_PTR) ins->type = src1->type; else ins->type = STACK_INV; ins->opcode += unops_op_map [ins->type]; break; case MONO_CEE_CONV_I1: case MONO_CEE_CONV_I2: case MONO_CEE_CONV_I4: case MONO_CEE_CONV_U4: ins->type = STACK_I4; ins->opcode += unops_op_map [src1->type]; break; case MONO_CEE_CONV_R_UN: ins->type = STACK_R8; switch (src1->type) { case STACK_I4: case STACK_PTR: ins->opcode = OP_ICONV_TO_R_UN; break; case STACK_I8: ins->opcode = OP_LCONV_TO_R_UN; break; case STACK_R4: ins->opcode = OP_RCONV_TO_R8; break; case STACK_R8: ins->opcode = OP_FMOVE; break; } break; case MONO_CEE_CONV_OVF_I1: case MONO_CEE_CONV_OVF_U1: case MONO_CEE_CONV_OVF_I2: case MONO_CEE_CONV_OVF_U2: case MONO_CEE_CONV_OVF_I4: case MONO_CEE_CONV_OVF_U4: ins->type = STACK_I4; ins->opcode += ovf3ops_op_map [src1->type]; break; case MONO_CEE_CONV_OVF_I_UN: case MONO_CEE_CONV_OVF_U_UN: ins->type = STACK_PTR; ins->opcode += ovf2ops_op_map [src1->type]; break; case MONO_CEE_CONV_OVF_I1_UN: case MONO_CEE_CONV_OVF_I2_UN: case MONO_CEE_CONV_OVF_I4_UN: case MONO_CEE_CONV_OVF_U1_UN: case MONO_CEE_CONV_OVF_U2_UN: case MONO_CEE_CONV_OVF_U4_UN: ins->type = STACK_I4; ins->opcode += ovf2ops_op_map [src1->type]; break; case MONO_CEE_CONV_U: ins->type = STACK_PTR; switch (src1->type) { case STACK_I4: ins->opcode = OP_ICONV_TO_U; break; case STACK_PTR: case STACK_MP: case STACK_OBJ: #if TARGET_SIZEOF_VOID_P == 8 ins->opcode = OP_LCONV_TO_U; #else ins->opcode = OP_MOVE; #endif break; case STACK_I8: ins->opcode = OP_LCONV_TO_U; break; case STACK_R8: if (TARGET_SIZEOF_VOID_P == 8) ins->opcode = OP_FCONV_TO_U8; else ins->opcode = OP_FCONV_TO_U4; break; case STACK_R4: if (TARGET_SIZEOF_VOID_P == 8) ins->opcode = OP_RCONV_TO_U8; else ins->opcode = OP_RCONV_TO_U4; break; } break; case MONO_CEE_CONV_I8: case MONO_CEE_CONV_U8: ins->type = STACK_I8; ins->opcode += unops_op_map [src1->type]; break; case MONO_CEE_CONV_OVF_I8: case MONO_CEE_CONV_OVF_U8: ins->type = STACK_I8; ins->opcode += ovf3ops_op_map [src1->type]; break; case MONO_CEE_CONV_OVF_U8_UN: case MONO_CEE_CONV_OVF_I8_UN: ins->type = STACK_I8; ins->opcode += ovf2ops_op_map [src1->type]; break; case MONO_CEE_CONV_R4: ins->type = cfg->r4_stack_type; ins->opcode += unops_op_map [src1->type]; break; case MONO_CEE_CONV_R8: ins->type = STACK_R8; ins->opcode += unops_op_map [src1->type]; break; case OP_CKFINITE: ins->type = STACK_R8; break; case MONO_CEE_CONV_U2: case MONO_CEE_CONV_U1: ins->type = STACK_I4; ins->opcode += ovfops_op_map [src1->type]; break; case MONO_CEE_CONV_I: case MONO_CEE_CONV_OVF_I: case MONO_CEE_CONV_OVF_U: ins->type 
= STACK_PTR; ins->opcode += ovfops_op_map [src1->type]; switch (ins->opcode) { case OP_FCONV_TO_I: ins->opcode = TARGET_SIZEOF_VOID_P == 4 ? OP_FCONV_TO_I4 : OP_FCONV_TO_I8; break; case OP_RCONV_TO_I: ins->opcode = TARGET_SIZEOF_VOID_P == 4 ? OP_RCONV_TO_I4 : OP_RCONV_TO_I8; break; default: break; } break; case MONO_CEE_ADD_OVF: case MONO_CEE_ADD_OVF_UN: case MONO_CEE_MUL_OVF: case MONO_CEE_MUL_OVF_UN: case MONO_CEE_SUB_OVF: case MONO_CEE_SUB_OVF_UN: ins->type = bin_num_table [src1->type] [src2->type]; ins->opcode += ovfops_op_map [src1->type]; if (ins->type == STACK_R8) ins->type = STACK_INV; break; case OP_LOAD_MEMBASE: ins->type = STACK_PTR; break; case OP_LOADI1_MEMBASE: case OP_LOADU1_MEMBASE: case OP_LOADI2_MEMBASE: case OP_LOADU2_MEMBASE: case OP_LOADI4_MEMBASE: case OP_LOADU4_MEMBASE: ins->type = STACK_PTR; break; case OP_LOADI8_MEMBASE: ins->type = STACK_I8; break; case OP_LOADR4_MEMBASE: ins->type = cfg->r4_stack_type; break; case OP_LOADR8_MEMBASE: ins->type = STACK_R8; break; default: g_error ("opcode 0x%04x not handled in type from op", ins->opcode); break; } if (ins->type == STACK_MP) { if (src1->type == STACK_MP) ins->klass = src1->klass; else ins->klass = mono_defaults.object_class; } } void mini_type_from_op (MonoCompile *cfg, MonoInst *ins, MonoInst *src1, MonoInst *src2) { type_from_op (cfg, ins, src1, src2); } static MonoClass* ldind_to_type (int op) { switch (op) { case MONO_CEE_LDIND_I1: return mono_defaults.sbyte_class; case MONO_CEE_LDIND_U1: return mono_defaults.byte_class; case MONO_CEE_LDIND_I2: return mono_defaults.int16_class; case MONO_CEE_LDIND_U2: return mono_defaults.uint16_class; case MONO_CEE_LDIND_I4: return mono_defaults.int32_class; case MONO_CEE_LDIND_U4: return mono_defaults.uint32_class; case MONO_CEE_LDIND_I8: return mono_defaults.int64_class; case MONO_CEE_LDIND_I: return mono_defaults.int_class; case MONO_CEE_LDIND_R4: return mono_defaults.single_class; case MONO_CEE_LDIND_R8: return mono_defaults.double_class; case MONO_CEE_LDIND_REF:return mono_defaults.object_class; //FIXME we should try to return a more specific type default: g_error ("Unknown ldind type %d", op); } } static MonoClass* stind_to_type (int op) { switch (op) { case MONO_CEE_STIND_I1: return mono_defaults.sbyte_class; case MONO_CEE_STIND_I2: return mono_defaults.int16_class; case MONO_CEE_STIND_I4: return mono_defaults.int32_class; case MONO_CEE_STIND_I8: return mono_defaults.int64_class; case MONO_CEE_STIND_I: return mono_defaults.int_class; case MONO_CEE_STIND_R4: return mono_defaults.single_class; case MONO_CEE_STIND_R8: return mono_defaults.double_class; case MONO_CEE_STIND_REF: return mono_defaults.object_class; default: g_error ("Unknown stind type %d", op); } } #if 0 static const char param_table [STACK_MAX] [STACK_MAX] = { {0}, }; static int check_values_to_signature (MonoInst *args, MonoType *this_ins, MonoMethodSignature *sig) { int i; if (sig->hasthis) { switch (args->type) { case STACK_I4: case STACK_I8: case STACK_R8: case STACK_VTYPE: case STACK_INV: return 0; } args++; } for (i = 0; i < sig->param_count; ++i) { switch (args [i].type) { case STACK_INV: return 0; case STACK_MP: if (m_type_is_byref (!sig->params [i])) return 0; continue; case STACK_OBJ: if (m_type_is_byref (sig->params [i])) return 0; switch (m_type_is_byref (sig->params [i])) { case MONO_TYPE_CLASS: case MONO_TYPE_STRING: case MONO_TYPE_OBJECT: case MONO_TYPE_SZARRAY: case MONO_TYPE_ARRAY: break; default: return 0; } continue; case STACK_R8: if (m_type_is_byref (sig->params [i])) return 0; if 
(sig->params [i]->type != MONO_TYPE_R4 && sig->params [i]->type != MONO_TYPE_R8) return 0; continue; case STACK_PTR: case STACK_I4: case STACK_I8: case STACK_VTYPE: break; } /*if (!param_table [args [i].type] [sig->params [i]->type]) return 0;*/ } return 1; } #endif /* * The got_var contains the address of the Global Offset Table when AOT * compiling. */ MonoInst * mono_get_got_var (MonoCompile *cfg) { if (!cfg->compile_aot || !cfg->backend->need_got_var || cfg->llvm_only) return NULL; if (!cfg->got_var) { cfg->got_var = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL); } return cfg->got_var; } static void mono_create_rgctx_var (MonoCompile *cfg) { if (!cfg->rgctx_var) { cfg->rgctx_var = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL); /* force the var to be stack allocated */ if (!cfg->llvm_only) cfg->rgctx_var->flags |= MONO_INST_VOLATILE; } } static MonoInst * mono_get_mrgctx_var (MonoCompile *cfg) { g_assert (cfg->gshared); mono_create_rgctx_var (cfg); return cfg->rgctx_var; } static MonoInst * mono_get_vtable_var (MonoCompile *cfg) { g_assert (cfg->gshared); /* The mrgctx and the vtable are stored in the same var */ mono_create_rgctx_var (cfg); return cfg->rgctx_var; } static MonoType* type_from_stack_type (MonoInst *ins) { switch (ins->type) { case STACK_I4: return mono_get_int32_type (); case STACK_I8: return m_class_get_byval_arg (mono_defaults.int64_class); case STACK_PTR: return mono_get_int_type (); case STACK_R4: return m_class_get_byval_arg (mono_defaults.single_class); case STACK_R8: return m_class_get_byval_arg (mono_defaults.double_class); case STACK_MP: return m_class_get_this_arg (ins->klass); case STACK_OBJ: return mono_get_object_type (); case STACK_VTYPE: return m_class_get_byval_arg (ins->klass); default: g_error ("stack type %d to monotype not handled\n", ins->type); } return NULL; } MonoStackType mini_type_to_stack_type (MonoCompile *cfg, MonoType *t) { t = mini_type_get_underlying_type (t); switch (t->type) { case MONO_TYPE_I1: case MONO_TYPE_U1: case MONO_TYPE_I2: case MONO_TYPE_U2: case MONO_TYPE_I4: case MONO_TYPE_U4: return STACK_I4; case MONO_TYPE_I: case MONO_TYPE_U: case MONO_TYPE_PTR: case MONO_TYPE_FNPTR: return STACK_PTR; case MONO_TYPE_CLASS: case MONO_TYPE_STRING: case MONO_TYPE_OBJECT: case MONO_TYPE_SZARRAY: case MONO_TYPE_ARRAY: return STACK_OBJ; case MONO_TYPE_I8: case MONO_TYPE_U8: return STACK_I8; case MONO_TYPE_R4: return (MonoStackType)cfg->r4_stack_type; case MONO_TYPE_R8: return STACK_R8; case MONO_TYPE_VALUETYPE: case MONO_TYPE_TYPEDBYREF: return STACK_VTYPE; case MONO_TYPE_GENERICINST: if (mono_type_generic_inst_is_valuetype (t)) return STACK_VTYPE; else return STACK_OBJ; break; default: g_assert_not_reached (); } return (MonoStackType)-1; } static MonoClass* array_access_to_klass (int opcode) { switch (opcode) { case MONO_CEE_LDELEM_U1: return mono_defaults.byte_class; case MONO_CEE_LDELEM_U2: return mono_defaults.uint16_class; case MONO_CEE_LDELEM_I: case MONO_CEE_STELEM_I: return mono_defaults.int_class; case MONO_CEE_LDELEM_I1: case MONO_CEE_STELEM_I1: return mono_defaults.sbyte_class; case MONO_CEE_LDELEM_I2: case MONO_CEE_STELEM_I2: return mono_defaults.int16_class; case MONO_CEE_LDELEM_I4: case MONO_CEE_STELEM_I4: return mono_defaults.int32_class; case MONO_CEE_LDELEM_U4: return mono_defaults.uint32_class; case MONO_CEE_LDELEM_I8: case MONO_CEE_STELEM_I8: return mono_defaults.int64_class; case MONO_CEE_LDELEM_R4: case MONO_CEE_STELEM_R4: return mono_defaults.single_class; case MONO_CEE_LDELEM_R8: case 
MONO_CEE_STELEM_R8: return mono_defaults.double_class; case MONO_CEE_LDELEM_REF: case MONO_CEE_STELEM_REF: return mono_defaults.object_class; default: g_assert_not_reached (); } return NULL; } /* * We try to share variables when possible */ static MonoInst * mono_compile_get_interface_var (MonoCompile *cfg, int slot, MonoInst *ins) { MonoInst *res; int pos, vnum; MonoType *type; type = type_from_stack_type (ins); /* inlining can result in deeper stacks */ if (cfg->inline_depth || slot >= cfg->header->max_stack) return mono_compile_create_var (cfg, type, OP_LOCAL); pos = ins->type - 1 + slot * STACK_MAX; switch (ins->type) { case STACK_I4: case STACK_I8: case STACK_R8: case STACK_PTR: case STACK_MP: case STACK_OBJ: if ((vnum = cfg->intvars [pos])) return cfg->varinfo [vnum]; res = mono_compile_create_var (cfg, type, OP_LOCAL); cfg->intvars [pos] = res->inst_c0; break; default: res = mono_compile_create_var (cfg, type, OP_LOCAL); } return res; } static void mono_save_token_info (MonoCompile *cfg, MonoImage *image, guint32 token, gpointer key) { /* * Don't use this if a generic_context is set, since that means AOT can't * look up the method using just the image+token. * table == 0 means this is a reference made from a wrapper. */ if (cfg->compile_aot && !cfg->generic_context && (mono_metadata_token_table (token) > 0)) { MonoJumpInfoToken *jump_info_token = (MonoJumpInfoToken *)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoJumpInfoToken)); jump_info_token->image = image; jump_info_token->token = token; g_hash_table_insert (cfg->token_info_hash, key, jump_info_token); } } /* * This function is called to handle items that are left on the evaluation stack * at basic block boundaries. What happens is that we save the values to local variables * and we reload them later when first entering the target basic block (with the * handle_loaded_temps () function). * A single joint point will use the same variables (stored in the array bb->out_stack or * bb->in_stack, if the basic block is before or after the joint point). * * This function needs to be called _before_ emitting the last instruction of * the bb (i.e. before emitting a branch). * If the stack merge fails at a join point, cfg->unverifiable is set. */ static void handle_stack_args (MonoCompile *cfg, MonoInst **sp, int count) { int i, bindex; MonoBasicBlock *bb = cfg->cbb; MonoBasicBlock *outb; MonoInst *inst, **locals; gboolean found; if (!count) return; if (cfg->verbose_level > 3) printf ("%d item(s) on exit from B%d\n", count, bb->block_num); if (!bb->out_scount) { bb->out_scount = count; //printf ("bblock %d has out:", bb->block_num); found = FALSE; for (i = 0; i < bb->out_count; ++i) { outb = bb->out_bb [i]; /* exception handlers are linked, but they should not be considered for stack args */ if (outb->flags & BB_EXCEPTION_HANDLER) continue; //printf (" %d", outb->block_num); if (outb->in_stack) { found = TRUE; bb->out_stack = outb->in_stack; break; } } //printf ("\n"); if (!found) { bb->out_stack = (MonoInst **)mono_mempool_alloc (cfg->mempool, sizeof (MonoInst*) * count); for (i = 0; i < count; ++i) { /* * try to reuse temps already allocated for this purpouse, if they occupy the same * stack slot and if they are of the same type. * This won't cause conflicts since if 'local' is used to * store one of the values in the in_stack of a bblock, then * the same variable will be used for the same outgoing stack * slot as well. 
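 * For example (illustrative): if two predecessor bblocks each leave one
 * int32 on the stack at a join point, both end up storing into the same
 * shared temp picked here, and the join bblock reloads it from there.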
* This doesn't work when inlining methods, since the bblocks * in the inlined methods do not inherit their in_stack from * the bblock they are inlined to. See bug #58863 for an * example. */ bb->out_stack [i] = mono_compile_get_interface_var (cfg, i, sp [i]); } } } for (i = 0; i < bb->out_count; ++i) { outb = bb->out_bb [i]; /* exception handlers are linked, but they should not be considered for stack args */ if (outb->flags & BB_EXCEPTION_HANDLER) continue; if (outb->in_scount) { if (outb->in_scount != bb->out_scount) { cfg->unverifiable = TRUE; return; } continue; /* check they are the same locals */ } outb->in_scount = count; outb->in_stack = bb->out_stack; } locals = bb->out_stack; cfg->cbb = bb; for (i = 0; i < count; ++i) { sp [i] = convert_value (cfg, locals [i]->inst_vtype, sp [i]); EMIT_NEW_TEMPSTORE (cfg, inst, locals [i]->inst_c0, sp [i]); inst->cil_code = sp [i]->cil_code; sp [i] = locals [i]; if (cfg->verbose_level > 3) printf ("storing %d to temp %d\n", i, (int)locals [i]->inst_c0); } /* * It is possible that the out bblocks already have in_stack assigned, and * the in_stacks differ. In this case, we will store to all the different * in_stacks. */ found = TRUE; bindex = 0; while (found) { /* Find a bblock which has a different in_stack */ found = FALSE; while (bindex < bb->out_count) { outb = bb->out_bb [bindex]; /* exception handlers are linked, but they should not be considered for stack args */ if (outb->flags & BB_EXCEPTION_HANDLER) { bindex++; continue; } if (outb->in_stack != locals) { for (i = 0; i < count; ++i) { sp [i] = convert_value (cfg, outb->in_stack [i]->inst_vtype, sp [i]); EMIT_NEW_TEMPSTORE (cfg, inst, outb->in_stack [i]->inst_c0, sp [i]); inst->cil_code = sp [i]->cil_code; sp [i] = locals [i]; if (cfg->verbose_level > 3) printf ("storing %d to temp %d\n", i, (int)outb->in_stack [i]->inst_c0); } locals = outb->in_stack; found = TRUE; break; } bindex ++; } } } MonoInst* mini_emit_runtime_constant (MonoCompile *cfg, MonoJumpInfoType patch_type, gpointer data) { MonoInst *ins; if (cfg->compile_aot) { MONO_DISABLE_WARNING (4306) // 'type cast': conversion from 'MonoJumpInfoType' to 'MonoInst *' of greater size EMIT_NEW_AOTCONST (cfg, ins, patch_type, data); MONO_RESTORE_WARNING } else { MonoJumpInfo ji; gpointer target; ERROR_DECL (error); ji.type = patch_type; ji.data.target = data; target = mono_resolve_patch_target_ext (cfg->mem_manager, NULL, NULL, &ji, FALSE, error); mono_error_assert_ok (error); EMIT_NEW_PCONST (cfg, ins, target); } return ins; } static MonoInst* mono_create_fast_tls_getter (MonoCompile *cfg, MonoTlsKey key) { int tls_offset = mono_tls_get_tls_offset (key); if (cfg->compile_aot) return NULL; if (tls_offset != -1 && mono_arch_have_fast_tls ()) { MonoInst *ins; MONO_INST_NEW (cfg, ins, OP_TLS_GET); ins->dreg = mono_alloc_preg (cfg); ins->inst_offset = tls_offset; return ins; } return NULL; } static MonoInst* mono_create_tls_get (MonoCompile *cfg, MonoTlsKey key) { MonoInst *fast_tls = NULL; if (!mini_debug_options.use_fallback_tls) fast_tls = mono_create_fast_tls_getter (cfg, key); if (fast_tls) { MONO_ADD_INS (cfg->cbb, fast_tls); return fast_tls; } const MonoJitICallId jit_icall_id = mono_get_tls_key_to_jit_icall_id (key); if (cfg->compile_aot && !cfg->llvm_only) { MonoInst *addr; /* * tls getters are critical pieces of code and we don't want to resolve them * through the standard plt/tramp mechanism since we might expose ourselves * to crashes and infinite recursions. 
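 * (Illustratively: the trampoline that resolves such a call could itself
 * need the TLS value it is being asked to produce, hence the recursion
 * risk mentioned above.)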
* Therefore the NOCALL part of MONO_PATCH_INFO_JIT_ICALL_ADDR_NOCALL, FALSE in is_plt_patch. */ EMIT_NEW_AOTCONST (cfg, addr, MONO_PATCH_INFO_JIT_ICALL_ADDR_NOCALL, GUINT_TO_POINTER (jit_icall_id)); return mini_emit_calli (cfg, mono_icall_sig_ptr, NULL, addr, NULL, NULL); } else { return mono_emit_jit_icall_id (cfg, jit_icall_id, NULL); } } /* * emit_push_lmf: * * Emit IR to push the current LMF onto the LMF stack. */ static void emit_push_lmf (MonoCompile *cfg) { /* * Emit IR to push the LMF: * lmf_addr = <lmf_addr from tls> * lmf->lmf_addr = lmf_addr * lmf->prev_lmf = *lmf_addr * *lmf_addr = lmf */ MonoInst *ins, *lmf_ins; if (!cfg->lmf_ir) return; int lmf_reg, prev_lmf_reg; /* * Store lmf_addr in a variable, so it can be allocated to a global register. */ if (!cfg->lmf_addr_var) cfg->lmf_addr_var = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL); if (!cfg->lmf_var) { MonoInst *lmf_var = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL); lmf_var->flags |= MONO_INST_VOLATILE; lmf_var->flags |= MONO_INST_LMF; cfg->lmf_var = lmf_var; } lmf_ins = mono_create_tls_get (cfg, TLS_KEY_LMF_ADDR); g_assert (lmf_ins); lmf_ins->dreg = cfg->lmf_addr_var->dreg; EMIT_NEW_VARLOADA (cfg, ins, cfg->lmf_var, NULL); lmf_reg = ins->dreg; prev_lmf_reg = alloc_preg (cfg); /* Save previous_lmf */ EMIT_NEW_LOAD_MEMBASE (cfg, ins, OP_LOAD_MEMBASE, prev_lmf_reg, cfg->lmf_addr_var->dreg, 0); if (cfg->deopt) /* Mark this as an LMFExt */ EMIT_NEW_BIALU_IMM (cfg, ins, OP_POR_IMM, prev_lmf_reg, prev_lmf_reg, 2); EMIT_NEW_STORE_MEMBASE (cfg, ins, OP_STORE_MEMBASE_REG, lmf_reg, MONO_STRUCT_OFFSET (MonoLMF, previous_lmf), prev_lmf_reg); /* Set new lmf */ EMIT_NEW_STORE_MEMBASE (cfg, ins, OP_STORE_MEMBASE_REG, cfg->lmf_addr_var->dreg, 0, lmf_reg); } /* * emit_pop_lmf: * * Emit IR to pop the current LMF from the LMF stack. */ static void emit_pop_lmf (MonoCompile *cfg) { int lmf_reg, lmf_addr_reg; MonoInst *ins; if (!cfg->lmf_ir) return; EMIT_NEW_VARLOADA (cfg, ins, cfg->lmf_var, NULL); lmf_reg = ins->dreg; int prev_lmf_reg; /* * Emit IR to pop the LMF: * *(lmf->lmf_addr) = lmf->prev_lmf */ /* This could be called before emit_push_lmf () */ if (!cfg->lmf_addr_var) cfg->lmf_addr_var = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL); lmf_addr_reg = cfg->lmf_addr_var->dreg; prev_lmf_reg = alloc_preg (cfg); EMIT_NEW_LOAD_MEMBASE (cfg, ins, OP_LOAD_MEMBASE, prev_lmf_reg, lmf_reg, MONO_STRUCT_OFFSET (MonoLMF, previous_lmf)); if (cfg->deopt) /* Clear out the bit set by push_lmf () to mark this as LMFExt */ EMIT_NEW_BIALU_IMM (cfg, ins, OP_PXOR_IMM, prev_lmf_reg, prev_lmf_reg, 2); EMIT_NEW_STORE_MEMBASE (cfg, ins, OP_STORE_MEMBASE_REG, lmf_addr_reg, 0, prev_lmf_reg); } /* * target_type_is_incompatible: * @cfg: MonoCompile context * * Check that the item @arg on the evaluation stack can be stored * in the target type (can be a local, or field, etc). * The cfg arg can be used to check if we need verification or just * validity checks. * * Returns: non-0 value if arg can't be stored on a target. */ static int target_type_is_incompatible (MonoCompile *cfg, MonoType *target, MonoInst *arg) { MonoType *simple_type; MonoClass *klass; if (m_type_is_byref (target)) { /* FIXME: check that the pointed to types match */ if (arg->type == STACK_MP) { /* This is needed to handle gshared types + ldaddr. We lower the types so we can handle enums and other typedef-like types. 
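 * E.g. (illustrative): a byref to an enum and a byref to its underlying
 * int32 compare equal below, because both sides are lowered with
 * mini_get_underlying_type () before the comparison.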
*/ MonoClass *target_class_lowered = mono_class_from_mono_type_internal (mini_get_underlying_type (m_class_get_byval_arg (mono_class_from_mono_type_internal (target)))); MonoClass *source_class_lowered = mono_class_from_mono_type_internal (mini_get_underlying_type (m_class_get_byval_arg (arg->klass))); /* if the target is native int& or X* or same type */ if (target->type == MONO_TYPE_I || target->type == MONO_TYPE_PTR || target_class_lowered == source_class_lowered) return 0; /* Both are primitive type byrefs and the source points to a larger type that the destination */ if (MONO_TYPE_IS_PRIMITIVE_SCALAR (m_class_get_byval_arg (target_class_lowered)) && MONO_TYPE_IS_PRIMITIVE_SCALAR (m_class_get_byval_arg (source_class_lowered)) && mono_class_instance_size (target_class_lowered) <= mono_class_instance_size (source_class_lowered)) return 0; return 1; } if (arg->type == STACK_PTR) return 0; return 1; } simple_type = mini_get_underlying_type (target); switch (simple_type->type) { case MONO_TYPE_VOID: return 1; case MONO_TYPE_I1: case MONO_TYPE_U1: case MONO_TYPE_I2: case MONO_TYPE_U2: case MONO_TYPE_I4: case MONO_TYPE_U4: if (arg->type != STACK_I4 && arg->type != STACK_PTR) return 1; return 0; case MONO_TYPE_PTR: /* STACK_MP is needed when setting pinned locals */ if (arg->type != STACK_I4 && arg->type != STACK_PTR && arg->type != STACK_MP) #if SIZEOF_VOID_P == 8 if (arg->type != STACK_I8) #endif return 1; return 0; case MONO_TYPE_I: case MONO_TYPE_U: case MONO_TYPE_FNPTR: /* * Some opcodes like ldloca returns 'transient pointers' which can be stored in * in native int. (#688008). */ if (arg->type != STACK_I4 && arg->type != STACK_PTR && arg->type != STACK_MP) return 1; return 0; case MONO_TYPE_CLASS: case MONO_TYPE_STRING: case MONO_TYPE_OBJECT: case MONO_TYPE_SZARRAY: case MONO_TYPE_ARRAY: if (arg->type != STACK_OBJ) return 1; /* FIXME: check type compatibility */ return 0; case MONO_TYPE_I8: case MONO_TYPE_U8: if (arg->type != STACK_I8) #if SIZEOF_VOID_P == 8 if (arg->type != STACK_PTR) #endif return 1; return 0; case MONO_TYPE_R4: if (arg->type != cfg->r4_stack_type) return 1; return 0; case MONO_TYPE_R8: if (arg->type != STACK_R8) return 1; return 0; case MONO_TYPE_VALUETYPE: if (arg->type != STACK_VTYPE) return 1; klass = mono_class_from_mono_type_internal (simple_type); if (klass != arg->klass) return 1; return 0; case MONO_TYPE_TYPEDBYREF: if (arg->type != STACK_VTYPE) return 1; klass = mono_class_from_mono_type_internal (simple_type); if (klass != arg->klass) return 1; return 0; case MONO_TYPE_GENERICINST: if (mono_type_generic_inst_is_valuetype (simple_type)) { MonoClass *target_class; if (arg->type != STACK_VTYPE) return 1; klass = mono_class_from_mono_type_internal (simple_type); target_class = mono_class_from_mono_type_internal (target); /* The second cases is needed when doing partial sharing */ if (klass != arg->klass && target_class != arg->klass && target_class != mono_class_from_mono_type_internal (mini_get_underlying_type (m_class_get_byval_arg (arg->klass)))) return 1; return 0; } else { if (arg->type != STACK_OBJ) return 1; /* FIXME: check type compatibility */ return 0; } case MONO_TYPE_VAR: case MONO_TYPE_MVAR: g_assert (cfg->gshared); if (mini_type_var_is_vt (simple_type)) { if (arg->type != STACK_VTYPE) return 1; } else { if (arg->type != STACK_OBJ) return 1; } return 0; default: g_error ("unknown type 0x%02x in target_type_is_incompatible", simple_type->type); } return 1; } /* * convert_value: * * Emit some implicit conversions which are not part of the .net spec, 
but are allowed by MS.NET. */ static MonoInst* convert_value (MonoCompile *cfg, MonoType *type, MonoInst *ins) { if (!cfg->r4fp) return ins; type = mini_get_underlying_type (type); switch (type->type) { case MONO_TYPE_R4: if (ins->type == STACK_R8) { int dreg = alloc_freg (cfg); MonoInst *conv; EMIT_NEW_UNALU (cfg, conv, OP_FCONV_TO_R4, dreg, ins->dreg); conv->type = STACK_R4; return conv; } break; case MONO_TYPE_R8: if (ins->type == STACK_R4) { int dreg = alloc_freg (cfg); MonoInst *conv; EMIT_NEW_UNALU (cfg, conv, OP_RCONV_TO_R8, dreg, ins->dreg); conv->type = STACK_R8; return conv; } break; default: break; } return ins; } /* * Prepare arguments for passing to a function call. * Return a non-zero value if the arguments can't be passed to the given * signature. * The type checks are not yet complete and some conversions may need * casts on 32 or 64 bit architectures. * * FIXME: implement this using target_type_is_incompatible () */ static gboolean check_call_signature (MonoCompile *cfg, MonoMethodSignature *sig, MonoInst **args) { MonoType *simple_type; int i; if (sig->hasthis) { if (args [0]->type != STACK_OBJ && args [0]->type != STACK_MP && args [0]->type != STACK_PTR) return TRUE; args++; } for (i = 0; i < sig->param_count; ++i) { if (m_type_is_byref (sig->params [i])) { if (args [i]->type != STACK_MP && args [i]->type != STACK_PTR) return TRUE; continue; } simple_type = mini_get_underlying_type (sig->params [i]); handle_enum: switch (simple_type->type) { case MONO_TYPE_VOID: return TRUE; case MONO_TYPE_I1: case MONO_TYPE_U1: case MONO_TYPE_I2: case MONO_TYPE_U2: case MONO_TYPE_I4: case MONO_TYPE_U4: if (args [i]->type != STACK_I4 && args [i]->type != STACK_PTR) return TRUE; continue; case MONO_TYPE_I: case MONO_TYPE_U: if (args [i]->type != STACK_I4 && args [i]->type != STACK_PTR && args [i]->type != STACK_MP && args [i]->type != STACK_OBJ) return TRUE; continue; case MONO_TYPE_PTR: case MONO_TYPE_FNPTR: if (args [i]->type != STACK_I4 && !(SIZEOF_VOID_P == 8 && args [i]->type == STACK_I8) && args [i]->type != STACK_PTR && args [i]->type != STACK_MP && args [i]->type != STACK_OBJ) return TRUE; continue; case MONO_TYPE_CLASS: case MONO_TYPE_STRING: case MONO_TYPE_OBJECT: case MONO_TYPE_SZARRAY: case MONO_TYPE_ARRAY: if (args [i]->type != STACK_OBJ) return TRUE; continue; case MONO_TYPE_I8: case MONO_TYPE_U8: if (args [i]->type != STACK_I8 && !(SIZEOF_VOID_P == 8 && (args [i]->type == STACK_I4 || args [i]->type == STACK_PTR))) return TRUE; continue; case MONO_TYPE_R4: if (args [i]->type != cfg->r4_stack_type) return TRUE; continue; case MONO_TYPE_R8: if (args [i]->type != STACK_R8) return TRUE; continue; case MONO_TYPE_VALUETYPE: if (m_class_is_enumtype (simple_type->data.klass)) { simple_type = mono_class_enum_basetype_internal (simple_type->data.klass); goto handle_enum; } if (args [i]->type != STACK_VTYPE) return TRUE; continue; case MONO_TYPE_TYPEDBYREF: if (args [i]->type != STACK_VTYPE) return TRUE; continue; case MONO_TYPE_GENERICINST: simple_type = m_class_get_byval_arg (simple_type->data.generic_class->container_class); goto handle_enum; case MONO_TYPE_VAR: case MONO_TYPE_MVAR: /* gsharedvt */ if (args [i]->type != STACK_VTYPE) return TRUE; continue; default: g_error ("unknown type 0x%02x in check_call_signature", simple_type->type); } } return FALSE; } MonoJumpInfo * mono_patch_info_new (MonoMemPool *mp, int ip, MonoJumpInfoType type, gconstpointer target) { MonoJumpInfo *ji = (MonoJumpInfo *)mono_mempool_alloc (mp, sizeof (MonoJumpInfo)); ji->ip.i = ip; ji->type = type; 
ji->data.target = target; return ji; } int mini_class_check_context_used (MonoCompile *cfg, MonoClass *klass) { if (cfg->gshared) return mono_class_check_context_used (klass); else return 0; } int mini_method_check_context_used (MonoCompile *cfg, MonoMethod *method) { if (cfg->gshared) return mono_method_check_context_used (method); else return 0; } /* * check_method_sharing: * * Check whenever the vtable or an mrgctx needs to be passed when calling CMETHOD. */ static void check_method_sharing (MonoCompile *cfg, MonoMethod *cmethod, gboolean *out_pass_vtable, gboolean *out_pass_mrgctx) { gboolean pass_vtable = FALSE; gboolean pass_mrgctx = FALSE; if (((cmethod->flags & METHOD_ATTRIBUTE_STATIC) || m_class_is_valuetype (cmethod->klass)) && (mono_class_is_ginst (cmethod->klass) || mono_class_is_gtd (cmethod->klass))) { gboolean sharable = FALSE; if (mono_method_is_generic_sharable_full (cmethod, TRUE, TRUE, TRUE)) sharable = TRUE; /* * Pass vtable iff target method might * be shared, which means that sharing * is enabled for its class and its * context is sharable (and it's not a * generic method). */ if (sharable && !(mini_method_get_context (cmethod) && mini_method_get_context (cmethod)->method_inst)) pass_vtable = TRUE; } if (mini_method_needs_mrgctx (cmethod)) { if (mini_method_is_default_method (cmethod)) pass_vtable = FALSE; else g_assert (!pass_vtable); if (mono_method_is_generic_sharable_full (cmethod, TRUE, TRUE, TRUE)) { pass_mrgctx = TRUE; } else { if (cfg->gsharedvt && mini_is_gsharedvt_signature (mono_method_signature_internal (cmethod))) pass_mrgctx = TRUE; } } if (out_pass_vtable) *out_pass_vtable = pass_vtable; if (out_pass_mrgctx) *out_pass_mrgctx = pass_mrgctx; } static gboolean direct_icalls_enabled (MonoCompile *cfg, MonoMethod *method) { if (cfg->gen_sdb_seq_points || cfg->disable_direct_icalls) return FALSE; if (method && cfg->compile_aot && mono_aot_direct_icalls_enabled_for_method (cfg, method)) return TRUE; /* LLVM on amd64 can't handle calls to non-32 bit addresses */ #ifdef TARGET_AMD64 if (cfg->compile_llvm && !cfg->llvm_only) return FALSE; #endif return FALSE; } MonoInst* mono_emit_jit_icall_by_info (MonoCompile *cfg, int il_offset, MonoJitICallInfo *info, MonoInst **args) { /* * Call the jit icall without a wrapper if possible. * The wrapper is needed to be able to do stack walks for asynchronously suspended * threads when debugging. */ if (direct_icalls_enabled (cfg, NULL)) { int costs; if (!info->wrapper_method) { info->wrapper_method = mono_marshal_get_icall_wrapper (info, TRUE); mono_memory_barrier (); } /* * Inline the wrapper method, which is basically a call to the C icall, and * an exception check. */ costs = inline_method (cfg, info->wrapper_method, NULL, args, NULL, il_offset, TRUE, NULL); g_assert (costs > 0); g_assert (!MONO_TYPE_IS_VOID (info->sig->ret)); return args [0]; } return mono_emit_jit_icall_id (cfg, mono_jit_icall_info_id (info), args); } static MonoInst* mono_emit_widen_call_res (MonoCompile *cfg, MonoInst *ins, MonoMethodSignature *fsig) { if (!MONO_TYPE_IS_VOID (fsig->ret)) { if ((fsig->pinvoke || LLVM_ENABLED) && !m_type_is_byref (fsig->ret)) { int widen_op = -1; /* * Native code might return non register sized integers * without initializing the upper bits. 
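 * E.g. (illustrative): a pinvoke returning a signed 8-bit value only
 * guarantees the low 8 bits of the return register, so the result is
 * widened with OP_ICONV_TO_I1 before being used as a full int32.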
*/ switch (mono_type_to_load_membase (cfg, fsig->ret)) { case OP_LOADI1_MEMBASE: widen_op = OP_ICONV_TO_I1; break; case OP_LOADU1_MEMBASE: widen_op = OP_ICONV_TO_U1; break; case OP_LOADI2_MEMBASE: widen_op = OP_ICONV_TO_I2; break; case OP_LOADU2_MEMBASE: widen_op = OP_ICONV_TO_U2; break; default: break; } if (widen_op != -1) { int dreg = alloc_preg (cfg); MonoInst *widen; EMIT_NEW_UNALU (cfg, widen, widen_op, dreg, ins->dreg); widen->type = ins->type; ins = widen; } } } return ins; } static MonoInst* emit_get_rgctx_method (MonoCompile *cfg, int context_used, MonoMethod *cmethod, MonoRgctxInfoType rgctx_type); static void emit_method_access_failure (MonoCompile *cfg, MonoMethod *caller, MonoMethod *callee) { MonoInst *args [2]; args [0] = emit_get_rgctx_method (cfg, mono_method_check_context_used (caller), caller, MONO_RGCTX_INFO_METHOD); args [1] = emit_get_rgctx_method (cfg, mono_method_check_context_used (callee), callee, MONO_RGCTX_INFO_METHOD); mono_emit_jit_icall (cfg, mono_throw_method_access, args); } static void emit_bad_image_failure (MonoCompile *cfg, MonoMethod *caller, MonoMethod *callee) { mono_emit_jit_icall (cfg, mono_throw_bad_image, NULL); } static void emit_not_supported_failure (MonoCompile *cfg) { mono_emit_jit_icall (cfg, mono_throw_not_supported, NULL); } static void emit_invalid_program_with_msg (MonoCompile *cfg, MonoError *error_msg, MonoMethod *caller, MonoMethod *callee) { g_assert (!is_ok (error_msg)); char *str = mono_mem_manager_strdup (cfg->mem_manager, mono_error_get_message (error_msg)); MonoInst *iargs[1]; if (cfg->compile_aot) EMIT_NEW_LDSTRLITCONST (cfg, iargs [0], str); else EMIT_NEW_PCONST (cfg, iargs [0], str); mono_emit_jit_icall (cfg, mono_throw_invalid_program, iargs); } // FIXME Consolidate the multiple functions named get_method_nofail. static MonoMethod* get_method_nofail (MonoClass *klass, const char *method_name, int num_params, int flags) { MonoMethod *method; ERROR_DECL (error); method = mono_class_get_method_from_name_checked (klass, method_name, num_params, flags, error); mono_error_assert_ok (error); g_assertf (method, "Could not lookup method %s in %s", method_name, m_class_get_name (klass)); return method; } MonoMethod* mini_get_memcpy_method (void) { static MonoMethod *memcpy_method = NULL; if (!memcpy_method) { memcpy_method = get_method_nofail (mono_defaults.string_class, "memcpy", 3, 0); if (!memcpy_method) g_error ("Old corlib found. Install a new one"); } return memcpy_method; } MonoInst* mini_emit_storing_write_barrier (MonoCompile *cfg, MonoInst *ptr, MonoInst *value) { MonoInst *store; /* * Add a release memory barrier so the object contents are flushed * to memory before storing the reference into another object. 
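	 * (Illustrative: without the release barrier, another thread that
	 * loads the reference through ptr could observe the referenced
	 * object before the stores initializing its fields become visible.)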
	 */
	if (!mini_debug_options.weak_memory_model)
		mini_emit_memory_barrier (cfg, MONO_MEMORY_BARRIER_REL);

	EMIT_NEW_STORE_MEMBASE (cfg, store, OP_STORE_MEMBASE_REG, ptr->dreg, 0, value->dreg);

	mini_emit_write_barrier (cfg, ptr, value);
	return store;
}

void
mini_emit_write_barrier (MonoCompile *cfg, MonoInst *ptr, MonoInst *value)
{
	int card_table_shift_bits;
	target_mgreg_t card_table_mask;
	guint8 *card_table;
	MonoInst *dummy_use;
	int nursery_shift_bits;
	size_t nursery_size;

	if (!cfg->gen_write_barriers)
		return;

	//method->wrapper_type != MONO_WRAPPER_WRITE_BARRIER && !MONO_INS_IS_PCONST_NULL (sp [1])

	card_table = mono_gc_get_target_card_table (&card_table_shift_bits, &card_table_mask);

	mono_gc_get_nursery (&nursery_shift_bits, &nursery_size);

	if (cfg->backend->have_card_table_wb && !cfg->compile_aot && card_table && nursery_shift_bits > 0 && !COMPILE_LLVM (cfg)) {
		MonoInst *wbarrier;

		MONO_INST_NEW (cfg, wbarrier, OP_CARD_TABLE_WBARRIER);
		wbarrier->sreg1 = ptr->dreg;
		wbarrier->sreg2 = value->dreg;
		MONO_ADD_INS (cfg->cbb, wbarrier);
	} else if (card_table) {
		int offset_reg = alloc_preg (cfg);
		int card_reg;
		MonoInst *ins;

		/*
		 * We emit a fast light weight write barrier. This always marks cards as in the concurrent
		 * collector case, so, for the serial collector, it might slightly slow down nursery
		 * collections. We also expect that the host system and the target system have the same card
		 * table configuration, which is the case if they have the same pointer size.
		 */
		MONO_EMIT_NEW_BIALU_IMM (cfg, OP_SHR_UN_IMM, offset_reg, ptr->dreg, card_table_shift_bits);
		if (card_table_mask)
			MONO_EMIT_NEW_BIALU_IMM (cfg, OP_PAND_IMM, offset_reg, offset_reg, card_table_mask);

		/*We can't use PADD_IMM since the cardtable might end up in high addresses and amd64 doesn't support
		 * IMM's larger than 32bits.
		 */
		ins = mini_emit_runtime_constant (cfg, MONO_PATCH_INFO_GC_CARD_TABLE_ADDR, NULL);
		card_reg = ins->dreg;

		MONO_EMIT_NEW_BIALU (cfg, OP_PADD, offset_reg, offset_reg, card_reg);
		MONO_EMIT_NEW_STORE_MEMBASE_IMM (cfg, OP_STOREI1_MEMBASE_IMM, offset_reg, 0, 1);
	} else {
		MonoMethod *write_barrier = mono_gc_get_write_barrier ();
		mono_emit_method_call (cfg, write_barrier, &ptr, NULL);
	}

	EMIT_NEW_DUMMY_USE (cfg, dummy_use, value);
}

MonoMethod*
mini_get_memset_method (void)
{
	static MonoMethod *memset_method = NULL;
	if (!memset_method) {
		memset_method = get_method_nofail (mono_defaults.string_class, "memset", 3, 0);
		if (!memset_method)
			g_error ("Old corlib found. Install a new one");
	}
	return memset_method;
}

void
mini_emit_initobj (MonoCompile *cfg, MonoInst *dest, const guchar *ip, MonoClass *klass)
{
	MonoInst *iargs [3];
	int n;
	guint32 align;
	MonoMethod *memset_method;
	MonoInst *size_ins = NULL;
	MonoInst *bzero_ins = NULL;
	static MonoMethod *bzero_method;

	/* FIXME: Optimize this for the case when dest is an LDADDR */
	mono_class_init_internal (klass);
	if (mini_is_gsharedvt_klass (klass)) {
		size_ins = mini_emit_get_gsharedvt_info_klass (cfg, klass, MONO_RGCTX_INFO_VALUE_SIZE);
		bzero_ins = mini_emit_get_gsharedvt_info_klass (cfg, klass, MONO_RGCTX_INFO_BZERO);
		if (!bzero_method)
			bzero_method = get_method_nofail (mono_defaults.string_class, "bzero_aligned_1", 2, 0);
		g_assert (bzero_method);
		iargs [0] = dest;
		iargs [1] = size_ins;
		mini_emit_calli (cfg, mono_method_signature_internal (bzero_method), iargs, bzero_ins, NULL, NULL);
		return;
	}

	klass = mono_class_from_mono_type_internal (mini_get_underlying_type (m_class_get_byval_arg (klass)));

	n = mono_class_value_size (klass, &align);

	if (n <= TARGET_SIZEOF_VOID_P * 8) {
		mini_emit_memset (cfg, dest->dreg, 0, n, 0, align);
	} else {
		memset_method = mini_get_memset_method ();
		iargs [0] = dest;
		EMIT_NEW_ICONST (cfg, iargs [1], 0);
		EMIT_NEW_ICONST (cfg, iargs [2], n);
		mono_emit_method_call (cfg, memset_method, iargs, NULL);
	}
}

static gboolean
context_used_is_mrgctx (MonoCompile *cfg, int context_used)
{
	/* gshared dim methods use an mrgctx */
	if (mini_method_is_default_method (cfg->method))
		return context_used != 0;
	return context_used & MONO_GENERIC_CONTEXT_USED_METHOD;
}

/*
 * emit_get_rgctx:
 *
 * Emit IR to return either the vtable or the mrgctx.
 */
static MonoInst*
emit_get_rgctx (MonoCompile *cfg, int context_used)
{
	MonoMethod *method = cfg->method;

	g_assert (cfg->gshared);

	/* Data whose context contains method type vars is stored in the mrgctx */
	if (context_used_is_mrgctx (cfg, context_used)) {
		MonoInst *mrgctx_loc, *mrgctx_var;

		g_assert (cfg->rgctx_access == MONO_RGCTX_ACCESS_MRGCTX);
		if (!mini_method_is_default_method (method))
			g_assert (method->is_inflated && mono_method_get_context (method)->method_inst);

		if (cfg->llvm_only) {
			mrgctx_var = mono_get_mrgctx_var (cfg);
		} else {
			/* Volatile */
			mrgctx_loc = mono_get_mrgctx_var (cfg);
			g_assert (mrgctx_loc->flags & MONO_INST_VOLATILE);
			EMIT_NEW_TEMPLOAD (cfg, mrgctx_var, mrgctx_loc->inst_c0);
		}

		return mrgctx_var;
	}

	/*
	 * The rest of the entries are stored in vtable->runtime_generic_context so
	 * have to return a vtable.
	 */
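	/* Sketch of the cases below (illustrative): an mrgctx caller loads
	 * mrgctx->class_vtable, a vtable caller returns the vtable variable
	 * directly, and otherwise the vtable is loaded from 'this'. */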
	if (cfg->rgctx_access == MONO_RGCTX_ACCESS_MRGCTX) {
		MonoInst *mrgctx_loc, *mrgctx_var, *vtable_var;
		int vtable_reg;

		/* We are passed an mrgctx, return mrgctx->class_vtable */

		if (cfg->llvm_only) {
			mrgctx_var = mono_get_mrgctx_var (cfg);
		} else {
			mrgctx_loc = mono_get_mrgctx_var (cfg);
			g_assert (mrgctx_loc->flags & MONO_INST_VOLATILE);
			EMIT_NEW_TEMPLOAD (cfg, mrgctx_var, mrgctx_loc->inst_c0);
		}

		vtable_reg = alloc_preg (cfg);
		EMIT_NEW_LOAD_MEMBASE (cfg, vtable_var, OP_LOAD_MEMBASE, vtable_reg, mrgctx_var->dreg, MONO_STRUCT_OFFSET (MonoMethodRuntimeGenericContext, class_vtable));
		vtable_var->type = STACK_PTR;

		return vtable_var;
	} else if (cfg->rgctx_access == MONO_RGCTX_ACCESS_VTABLE) {
		MonoInst *vtable_loc, *vtable_var;

		/* We are passed a vtable, return it */

		if (cfg->llvm_only) {
			vtable_var = mono_get_vtable_var (cfg);
		} else {
			vtable_loc = mono_get_vtable_var (cfg);
			g_assert (vtable_loc->flags & MONO_INST_VOLATILE);
			EMIT_NEW_TEMPLOAD (cfg, vtable_var, vtable_loc->inst_c0);
		}

		vtable_var->type = STACK_PTR;
		return vtable_var;
	} else {
		MonoInst *ins, *this_ins;
		int vtable_reg;

		/* We are passed a this pointer, return this->vtable */

		EMIT_NEW_VARLOAD (cfg, this_ins, cfg->this_arg, mono_get_object_type ());

		vtable_reg = alloc_preg (cfg);
		EMIT_NEW_LOAD_MEMBASE (cfg, ins, OP_LOAD_MEMBASE, vtable_reg, this_ins->dreg, MONO_STRUCT_OFFSET (MonoObject, vtable));
		return ins;
	}
}

static MonoJumpInfoRgctxEntry *
mono_patch_info_rgctx_entry_new (MonoMemPool *mp, MonoMethod *method, gboolean in_mrgctx, MonoJumpInfoType patch_type, gconstpointer patch_data, MonoRgctxInfoType info_type)
{
	MonoJumpInfoRgctxEntry *res = (MonoJumpInfoRgctxEntry *)mono_mempool_alloc0 (mp, sizeof (MonoJumpInfoRgctxEntry));
	if (in_mrgctx)
		res->d.method = method;
	else
		res->d.klass = method->klass;
	res->in_mrgctx = in_mrgctx;
	res->data = (MonoJumpInfo *)mono_mempool_alloc0 (mp, sizeof (MonoJumpInfo));
	res->data->type = patch_type;
	res->data->data.target = patch_data;
	res->info_type = info_type;

	return res;
}

static MonoInst*
emit_get_gsharedvt_info (MonoCompile *cfg, gpointer data, MonoRgctxInfoType rgctx_type);

static MonoInst*
emit_rgctx_fetch_inline (MonoCompile *cfg, MonoInst *rgctx, MonoJumpInfoRgctxEntry *entry)
{
	MonoInst *call;

	MonoInst *slot_ins;
	EMIT_NEW_AOTCONST (cfg, slot_ins, MONO_PATCH_INFO_RGCTX_SLOT_INDEX, entry);

	// Can't add basic blocks during interp entry mode
	if (cfg->disable_inline_rgctx_fetch || cfg->interp_entry_only) {
		MonoInst *args [2] = { rgctx, slot_ins };
		if (entry->in_mrgctx)
			call = mono_emit_jit_icall (cfg, mono_fill_method_rgctx, args);
		else
			call = mono_emit_jit_icall (cfg, mono_fill_class_rgctx, args);
		return call;
	}

	MonoBasicBlock *slowpath_bb, *end_bb;
	MonoInst *ins, *res;
	int rgctx_reg, res_reg;

	/*
	 * rgctx = vtable->runtime_generic_context;
	 * if (rgctx) {
	 *    val = rgctx [slot + 1];
	 *    if (val)
	 *       return val;
	 * }
	 * <slowpath>
	 */
	NEW_BBLOCK (cfg, end_bb);
	NEW_BBLOCK (cfg, slowpath_bb);

	if (entry->in_mrgctx) {
		rgctx_reg = rgctx->dreg;
	} else {
		rgctx_reg = alloc_preg (cfg);

		MONO_EMIT_NEW_LOAD_MEMBASE (cfg, rgctx_reg, rgctx->dreg, MONO_STRUCT_OFFSET (MonoVTable, runtime_generic_context));
		// FIXME: Avoid this check by allocating the table when the vtable is created etc.
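		/* Fast-path sketch (illustrative, summarizing the emitted IR below):
		 * NULL-check the rgctx, bounds-check the slot index against the
		 * table size, load the entry, and branch to slowpath_bb whenever
		 * any of these checks fail or the entry is still NULL. */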
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, rgctx_reg, 0); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_PBEQ, slowpath_bb); } int table_size = mono_class_rgctx_get_array_size (0, entry->in_mrgctx); if (entry->in_mrgctx) table_size -= MONO_SIZEOF_METHOD_RUNTIME_GENERIC_CONTEXT / TARGET_SIZEOF_VOID_P; MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, slot_ins->dreg, table_size - 1); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_PBGE, slowpath_bb); int shifted_slot_reg = alloc_ireg (cfg); EMIT_NEW_BIALU_IMM (cfg, ins, OP_ISHL_IMM, shifted_slot_reg, slot_ins->dreg, TARGET_SIZEOF_VOID_P == 8 ? 3 : 2); int addr_reg = alloc_preg (cfg); EMIT_NEW_UNALU (cfg, ins, OP_MOVE, addr_reg, rgctx_reg); EMIT_NEW_BIALU (cfg, ins, OP_PADD, addr_reg, addr_reg, shifted_slot_reg); int val_reg = alloc_preg (cfg); MONO_EMIT_NEW_LOAD_MEMBASE (cfg, val_reg, addr_reg, TARGET_SIZEOF_VOID_P + (entry->in_mrgctx ? MONO_SIZEOF_METHOD_RUNTIME_GENERIC_CONTEXT : 0)); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, val_reg, 0); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_PBEQ, slowpath_bb); res_reg = alloc_preg (cfg); EMIT_NEW_UNALU (cfg, ins, OP_MOVE, res_reg, val_reg); res = ins; MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb); MONO_START_BB (cfg, slowpath_bb); slowpath_bb->out_of_line = TRUE; MonoInst *args[2] = { rgctx, slot_ins }; if (entry->in_mrgctx) call = mono_emit_jit_icall (cfg, mono_fill_method_rgctx, args); else call = mono_emit_jit_icall (cfg, mono_fill_class_rgctx, args); EMIT_NEW_UNALU (cfg, ins, OP_MOVE, res_reg, call->dreg); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb); MONO_START_BB (cfg, end_bb); return res; } /* * emit_rgctx_fetch: * * Emit IR to load the value of the rgctx entry ENTRY from the rgctx. */ static MonoInst* emit_rgctx_fetch (MonoCompile *cfg, int context_used, MonoJumpInfoRgctxEntry *entry) { MonoInst *rgctx = emit_get_rgctx (cfg, context_used); if (cfg->llvm_only) return emit_rgctx_fetch_inline (cfg, rgctx, entry); else return mini_emit_abs_call (cfg, MONO_PATCH_INFO_RGCTX_FETCH, entry, mono_icall_sig_ptr_ptr, &rgctx); } /* * mini_emit_get_rgctx_klass: * * Emit IR to load the property RGCTX_TYPE of KLASS. If context_used is 0, emit * normal constants, else emit a load from the rgctx. 
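 * For example (illustrative):
 *   MonoInst *ins = mini_emit_get_rgctx_klass (cfg, context_used, klass, MONO_RGCTX_INFO_KLASS);
 * emits a plain EMIT_NEW_CLASSCONST when context_used == 0, and an rgctx
 * fetch otherwise.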
*/ MonoInst* mini_emit_get_rgctx_klass (MonoCompile *cfg, int context_used, MonoClass *klass, MonoRgctxInfoType rgctx_type) { if (!context_used) { MonoInst *ins; switch (rgctx_type) { case MONO_RGCTX_INFO_KLASS: EMIT_NEW_CLASSCONST (cfg, ins, klass); return ins; case MONO_RGCTX_INFO_VTABLE: { MonoVTable *vtable = mono_class_vtable_checked (klass, cfg->error); CHECK_CFG_ERROR; EMIT_NEW_VTABLECONST (cfg, ins, vtable); return ins; } default: g_assert_not_reached (); } } // Its cheaper to load these from the gsharedvt info struct if (cfg->llvm_only && cfg->gsharedvt) return mini_emit_get_gsharedvt_info_klass (cfg, klass, rgctx_type); MonoJumpInfoRgctxEntry *entry = mono_patch_info_rgctx_entry_new (cfg->mempool, cfg->method, context_used_is_mrgctx (cfg, context_used), MONO_PATCH_INFO_CLASS, klass, rgctx_type); return emit_rgctx_fetch (cfg, context_used, entry); mono_error_exit: return NULL; } static MonoInst* emit_get_rgctx_sig (MonoCompile *cfg, int context_used, MonoMethodSignature *sig, MonoRgctxInfoType rgctx_type) { MonoJumpInfoRgctxEntry *entry = mono_patch_info_rgctx_entry_new (cfg->mempool, cfg->method, context_used_is_mrgctx (cfg, context_used), MONO_PATCH_INFO_SIGNATURE, sig, rgctx_type); return emit_rgctx_fetch (cfg, context_used, entry); } static MonoInst* emit_get_rgctx_gsharedvt_call (MonoCompile *cfg, int context_used, MonoMethodSignature *sig, MonoMethod *cmethod, MonoRgctxInfoType rgctx_type) { MonoJumpInfoGSharedVtCall *call_info; MonoJumpInfoRgctxEntry *entry; call_info = (MonoJumpInfoGSharedVtCall *)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoJumpInfoGSharedVtCall)); call_info->sig = sig; call_info->method = cmethod; entry = mono_patch_info_rgctx_entry_new (cfg->mempool, cfg->method, context_used_is_mrgctx (cfg, context_used), MONO_PATCH_INFO_GSHAREDVT_CALL, call_info, rgctx_type); return emit_rgctx_fetch (cfg, context_used, entry); } /* * emit_get_rgctx_virt_method: * * Return data for method VIRT_METHOD for a receiver of type KLASS. */ static MonoInst* emit_get_rgctx_virt_method (MonoCompile *cfg, int context_used, MonoClass *klass, MonoMethod *virt_method, MonoRgctxInfoType rgctx_type) { MonoJumpInfoVirtMethod *info; MonoJumpInfoRgctxEntry *entry; if (context_used == -1) context_used = mono_class_check_context_used (klass) | mono_method_check_context_used (virt_method); info = (MonoJumpInfoVirtMethod *)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoJumpInfoVirtMethod)); info->klass = klass; info->method = virt_method; entry = mono_patch_info_rgctx_entry_new (cfg->mempool, cfg->method, context_used_is_mrgctx (cfg, context_used), MONO_PATCH_INFO_VIRT_METHOD, info, rgctx_type); return emit_rgctx_fetch (cfg, context_used, entry); } static MonoInst* emit_get_rgctx_gsharedvt_method (MonoCompile *cfg, int context_used, MonoMethod *cmethod, MonoGSharedVtMethodInfo *info) { MonoJumpInfoRgctxEntry *entry; entry = mono_patch_info_rgctx_entry_new (cfg->mempool, cfg->method, context_used_is_mrgctx (cfg, context_used), MONO_PATCH_INFO_GSHAREDVT_METHOD, info, MONO_RGCTX_INFO_METHOD_GSHAREDVT_INFO); return emit_rgctx_fetch (cfg, context_used, entry); } /* * emit_get_rgctx_method: * * Emit IR to load the property RGCTX_TYPE of CMETHOD. If context_used is 0, emit * normal constants, else emit a load from the rgctx. 
*/ static MonoInst* emit_get_rgctx_method (MonoCompile *cfg, int context_used, MonoMethod *cmethod, MonoRgctxInfoType rgctx_type) { if (context_used == -1) context_used = mono_method_check_context_used (cmethod); if (!context_used) { MonoInst *ins; switch (rgctx_type) { case MONO_RGCTX_INFO_METHOD: EMIT_NEW_METHODCONST (cfg, ins, cmethod); return ins; case MONO_RGCTX_INFO_METHOD_RGCTX: EMIT_NEW_METHOD_RGCTX_CONST (cfg, ins, cmethod); return ins; case MONO_RGCTX_INFO_METHOD_FTNDESC: EMIT_NEW_AOTCONST (cfg, ins, MONO_PATCH_INFO_METHOD_FTNDESC, cmethod); return ins; case MONO_RGCTX_INFO_LLVMONLY_INTERP_ENTRY: EMIT_NEW_AOTCONST (cfg, ins, MONO_PATCH_INFO_LLVMONLY_INTERP_ENTRY, cmethod); return ins; default: g_assert_not_reached (); } } else { // Its cheaper to load these from the gsharedvt info struct if (cfg->llvm_only && cfg->gsharedvt) return emit_get_gsharedvt_info (cfg, cmethod, rgctx_type); MonoJumpInfoRgctxEntry *entry = mono_patch_info_rgctx_entry_new (cfg->mempool, cfg->method, context_used_is_mrgctx (cfg, context_used), MONO_PATCH_INFO_METHODCONST, cmethod, rgctx_type); return emit_rgctx_fetch (cfg, context_used, entry); } } static MonoInst* emit_get_rgctx_field (MonoCompile *cfg, int context_used, MonoClassField *field, MonoRgctxInfoType rgctx_type) { // Its cheaper to load these from the gsharedvt info struct if (cfg->llvm_only && cfg->gsharedvt) return emit_get_gsharedvt_info (cfg, field, rgctx_type); MonoJumpInfoRgctxEntry *entry = mono_patch_info_rgctx_entry_new (cfg->mempool, cfg->method, context_used_is_mrgctx (cfg, context_used), MONO_PATCH_INFO_FIELD, field, rgctx_type); return emit_rgctx_fetch (cfg, context_used, entry); } MonoInst* mini_emit_get_rgctx_method (MonoCompile *cfg, int context_used, MonoMethod *cmethod, MonoRgctxInfoType rgctx_type) { return emit_get_rgctx_method (cfg, context_used, cmethod, rgctx_type); } static int get_gsharedvt_info_slot (MonoCompile *cfg, gpointer data, MonoRgctxInfoType rgctx_type) { MonoGSharedVtMethodInfo *info = cfg->gsharedvt_info; MonoRuntimeGenericContextInfoTemplate *template_; int i, idx; g_assert (info); for (i = 0; i < info->num_entries; ++i) { MonoRuntimeGenericContextInfoTemplate *otemplate = &info->entries [i]; if (otemplate->info_type == rgctx_type && otemplate->data == data && rgctx_type != MONO_RGCTX_INFO_LOCAL_OFFSET) return i; } if (info->num_entries == info->count_entries) { MonoRuntimeGenericContextInfoTemplate *new_entries; int new_count_entries = info->count_entries ? info->count_entries * 2 : 16; new_entries = (MonoRuntimeGenericContextInfoTemplate *)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoRuntimeGenericContextInfoTemplate) * new_count_entries); memcpy (new_entries, info->entries, sizeof (MonoRuntimeGenericContextInfoTemplate) * info->count_entries); info->entries = new_entries; info->count_entries = new_count_entries; } idx = info->num_entries; template_ = &info->entries [idx]; template_->info_type = rgctx_type; template_->data = data; info->num_entries ++; return idx; } /* * emit_get_gsharedvt_info: * * This is similar to emit_get_rgctx_.., but loads the data from the gsharedvt info var instead of calling an rgctx fetch trampoline. 
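 * (The info var holds a MonoGSharedVtMethodRuntimeInfo, so the fetch
 * below is a single OP_LOAD_MEMBASE from its 'entries' array instead of
 * a call through an rgctx fetch trampoline.)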
*/ static MonoInst* emit_get_gsharedvt_info (MonoCompile *cfg, gpointer data, MonoRgctxInfoType rgctx_type) { MonoInst *ins; int idx, dreg; idx = get_gsharedvt_info_slot (cfg, data, rgctx_type); /* Load info->entries [idx] */ dreg = alloc_preg (cfg); EMIT_NEW_LOAD_MEMBASE (cfg, ins, OP_LOAD_MEMBASE, dreg, cfg->gsharedvt_info_var->dreg, MONO_STRUCT_OFFSET (MonoGSharedVtMethodRuntimeInfo, entries) + (idx * TARGET_SIZEOF_VOID_P)); return ins; } MonoInst* mini_emit_get_gsharedvt_info_klass (MonoCompile *cfg, MonoClass *klass, MonoRgctxInfoType rgctx_type) { return emit_get_gsharedvt_info (cfg, m_class_get_byval_arg (klass), rgctx_type); } /* * On return the caller must check @klass for load errors. */ static void emit_class_init (MonoCompile *cfg, MonoClass *klass) { MonoInst *vtable_arg; int context_used; context_used = mini_class_check_context_used (cfg, klass); if (context_used) { vtable_arg = mini_emit_get_rgctx_klass (cfg, context_used, klass, MONO_RGCTX_INFO_VTABLE); } else { MonoVTable *vtable = mono_class_vtable_checked (klass, cfg->error); if (!is_ok (cfg->error)) { mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR); return; } EMIT_NEW_VTABLECONST (cfg, vtable_arg, vtable); } if (!COMPILE_LLVM (cfg) && cfg->backend->have_op_generic_class_init) { MonoInst *ins; /* * Using an opcode instead of emitting IR here allows the hiding of the call inside the opcode, * so this doesn't have to clobber any regs and it doesn't break basic blocks. */ MONO_INST_NEW (cfg, ins, OP_GENERIC_CLASS_INIT); ins->sreg1 = vtable_arg->dreg; MONO_ADD_INS (cfg->cbb, ins); } else { int inited_reg; MonoBasicBlock *inited_bb; inited_reg = alloc_ireg (cfg); MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADU1_MEMBASE, inited_reg, vtable_arg->dreg, MONO_STRUCT_OFFSET (MonoVTable, initialized)); NEW_BBLOCK (cfg, inited_bb); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, inited_reg, 0); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_IBNE_UN, inited_bb); cfg->cbb->out_of_line = TRUE; mono_emit_jit_icall (cfg, mono_generic_class_init, &vtable_arg); MONO_START_BB (cfg, inited_bb); } } static void emit_seq_point (MonoCompile *cfg, MonoMethod *method, guint8* ip, gboolean intr_loc, gboolean nonempty_stack) { MonoInst *ins; if (cfg->gen_seq_points && cfg->method == method) { NEW_SEQ_POINT (cfg, ins, ip - cfg->header->code, intr_loc); if (nonempty_stack) ins->flags |= MONO_INST_NONEMPTY_STACK; MONO_ADD_INS (cfg->cbb, ins); cfg->last_seq_point = ins; } } void mini_save_cast_details (MonoCompile *cfg, MonoClass *klass, int obj_reg, gboolean null_check) { if (mini_debug_options.better_cast_details) { int vtable_reg = alloc_preg (cfg); int klass_reg = alloc_preg (cfg); MonoBasicBlock *is_null_bb = NULL; MonoInst *tls_get; if (null_check) { NEW_BBLOCK (cfg, is_null_bb); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, obj_reg, 0); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_PBEQ, is_null_bb); } tls_get = mono_create_tls_get (cfg, TLS_KEY_JIT_TLS); if (!tls_get) { fprintf (stderr, "error: --debug=casts not supported on this platform.\n"); exit (1); } MONO_EMIT_NEW_LOAD_MEMBASE (cfg, vtable_reg, obj_reg, MONO_STRUCT_OFFSET (MonoObject, vtable)); MONO_EMIT_NEW_LOAD_MEMBASE (cfg, klass_reg, vtable_reg, MONO_STRUCT_OFFSET (MonoVTable, klass)); MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, tls_get->dreg, MONO_STRUCT_OFFSET (MonoJitTlsData, class_cast_from), klass_reg); MonoInst *class_ins = mini_emit_get_rgctx_klass (cfg, mini_class_check_context_used (cfg, klass), klass, MONO_RGCTX_INFO_KLASS); MONO_EMIT_NEW_STORE_MEMBASE (cfg,
OP_STORE_MEMBASE_REG, tls_get->dreg, MONO_STRUCT_OFFSET (MonoJitTlsData, class_cast_to), class_ins->dreg); if (null_check) MONO_START_BB (cfg, is_null_bb); } } void mini_reset_cast_details (MonoCompile *cfg) { /* Reset the variables holding the cast details */ if (mini_debug_options.better_cast_details) { MonoInst *tls_get = mono_create_tls_get (cfg, TLS_KEY_JIT_TLS); /* It is enough to reset the from field */ MONO_EMIT_NEW_STORE_MEMBASE_IMM (cfg, OP_STORE_MEMBASE_IMM, tls_get->dreg, MONO_STRUCT_OFFSET (MonoJitTlsData, class_cast_from), 0); } } /* * On return the caller must check @array_class for load errors */ static void mini_emit_check_array_type (MonoCompile *cfg, MonoInst *obj, MonoClass *array_class) { int vtable_reg = alloc_preg (cfg); int context_used; context_used = mini_class_check_context_used (cfg, array_class); mini_save_cast_details (cfg, array_class, obj->dreg, FALSE); MONO_EMIT_NEW_LOAD_MEMBASE_FAULT (cfg, vtable_reg, obj->dreg, MONO_STRUCT_OFFSET (MonoObject, vtable)); if (context_used) { MonoInst *vtable_ins; vtable_ins = mini_emit_get_rgctx_klass (cfg, context_used, array_class, MONO_RGCTX_INFO_VTABLE); MONO_EMIT_NEW_BIALU (cfg, OP_COMPARE, -1, vtable_reg, vtable_ins->dreg); } else { if (cfg->compile_aot) { int vt_reg; MonoVTable *vtable; if (!(vtable = mono_class_vtable_checked (array_class, cfg->error))) { mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR); return; } vt_reg = alloc_preg (cfg); MONO_EMIT_NEW_VTABLECONST (cfg, vt_reg, vtable); MONO_EMIT_NEW_BIALU (cfg, OP_COMPARE, -1, vtable_reg, vt_reg); } else { MonoVTable *vtable; if (!(vtable = mono_class_vtable_checked (array_class, cfg->error))) { mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR); return; } MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, vtable_reg, (gssize)vtable); } } MONO_EMIT_NEW_COND_EXC (cfg, NE_UN, "ArrayTypeMismatchException"); mini_reset_cast_details (cfg); } /** * Handles unbox of a Nullable<T>. If context_used is non zero, then shared * generic code is generated. */ static MonoInst* handle_unbox_nullable (MonoCompile* cfg, MonoInst* val, MonoClass* klass, int context_used) { MonoMethod* method; if (m_class_is_enumtype (mono_class_get_nullable_param_internal (klass))) method = get_method_nofail (klass, "UnboxExact", 1, 0); else method = get_method_nofail (klass, "Unbox", 1, 0); g_assert (method); if (context_used) { MonoInst *rgctx, *addr; /* FIXME: What if the class is shared? We might not have to get the address of the method from the RGCTX. 
*/ if (cfg->llvm_only) { addr = emit_get_rgctx_method (cfg, context_used, method, MONO_RGCTX_INFO_METHOD_FTNDESC); cfg->signatures = g_slist_prepend_mempool (cfg->mempool, cfg->signatures, mono_method_signature_internal (method)); return mini_emit_llvmonly_calli (cfg, mono_method_signature_internal (method), &val, addr); } else { addr = emit_get_rgctx_method (cfg, context_used, method, MONO_RGCTX_INFO_GENERIC_METHOD_CODE); rgctx = emit_get_rgctx (cfg, context_used); return mini_emit_calli (cfg, mono_method_signature_internal (method), &val, addr, NULL, rgctx); } } else { gboolean pass_vtable, pass_mrgctx; MonoInst *rgctx_arg = NULL; check_method_sharing (cfg, method, &pass_vtable, &pass_mrgctx); g_assert (!pass_mrgctx); if (pass_vtable) { MonoVTable *vtable = mono_class_vtable_checked (method->klass, cfg->error); mono_error_assert_ok (cfg->error); EMIT_NEW_VTABLECONST (cfg, rgctx_arg, vtable); } return mini_emit_method_call_full (cfg, method, NULL, FALSE, &val, NULL, NULL, rgctx_arg); } } MonoInst* mini_handle_unbox (MonoCompile *cfg, MonoClass *klass, MonoInst *val, int context_used) { MonoInst *add; int obj_reg; int vtable_reg = alloc_dreg (cfg ,STACK_PTR); int klass_reg = alloc_dreg (cfg ,STACK_PTR); int eclass_reg = alloc_dreg (cfg ,STACK_PTR); int rank_reg = alloc_dreg (cfg ,STACK_I4); obj_reg = val->dreg; MONO_EMIT_NEW_LOAD_MEMBASE_FAULT (cfg, vtable_reg, obj_reg, MONO_STRUCT_OFFSET (MonoObject, vtable)); MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADU1_MEMBASE, rank_reg, vtable_reg, MONO_STRUCT_OFFSET (MonoVTable, rank)); /* FIXME: generics */ g_assert (m_class_get_rank (klass) == 0); // Check rank == 0 MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, rank_reg, 0); MONO_EMIT_NEW_COND_EXC (cfg, NE_UN, "InvalidCastException"); MONO_EMIT_NEW_LOAD_MEMBASE (cfg, klass_reg, vtable_reg, MONO_STRUCT_OFFSET (MonoVTable, klass)); MONO_EMIT_NEW_LOAD_MEMBASE (cfg, eclass_reg, klass_reg, m_class_offsetof_element_class ()); if (context_used) { MonoInst *element_class; /* This assertion is from the unboxcast insn */ g_assert (m_class_get_rank (klass) == 0); element_class = mini_emit_get_rgctx_klass (cfg, context_used, klass, MONO_RGCTX_INFO_ELEMENT_KLASS); MONO_EMIT_NEW_BIALU (cfg, OP_COMPARE, -1, eclass_reg, element_class->dreg); MONO_EMIT_NEW_COND_EXC (cfg, NE_UN, "InvalidCastException"); } else { mini_save_cast_details (cfg, m_class_get_element_class (klass), obj_reg, FALSE); mini_emit_class_check (cfg, eclass_reg, m_class_get_element_class (klass)); mini_reset_cast_details (cfg); } NEW_BIALU_IMM (cfg, add, OP_ADD_IMM, alloc_dreg (cfg, STACK_MP), obj_reg, MONO_ABI_SIZEOF (MonoObject)); MONO_ADD_INS (cfg->cbb, add); add->type = STACK_MP; add->klass = klass; return add; } static MonoInst* handle_unbox_gsharedvt (MonoCompile *cfg, MonoClass *klass, MonoInst *obj) { MonoInst *addr, *klass_inst, *is_ref, *args[16]; MonoBasicBlock *is_ref_bb, *is_nullable_bb, *end_bb; MonoInst *ins; int dreg, addr_reg; klass_inst = mini_emit_get_gsharedvt_info_klass (cfg, klass, MONO_RGCTX_INFO_KLASS); /* obj */ args [0] = obj; /* klass */ args [1] = klass_inst; /* CASTCLASS */ obj = mono_emit_jit_icall (cfg, mono_object_castclass_unbox, args); NEW_BBLOCK (cfg, is_ref_bb); NEW_BBLOCK (cfg, is_nullable_bb); NEW_BBLOCK (cfg, end_bb); is_ref = mini_emit_get_gsharedvt_info_klass (cfg, klass, MONO_RGCTX_INFO_CLASS_BOX_TYPE); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, is_ref->dreg, MONO_GSHAREDVT_BOX_TYPE_REF); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_IBEQ, is_ref_bb); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, 
is_ref->dreg, MONO_GSHAREDVT_BOX_TYPE_NULLABLE); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_IBEQ, is_nullable_bb); /* This will contain either the address of the unboxed vtype, or an address of the temporary where the ref is stored */ addr_reg = alloc_dreg (cfg, STACK_MP); /* Non-ref case */ /* UNBOX */ NEW_BIALU_IMM (cfg, addr, OP_ADD_IMM, addr_reg, obj->dreg, MONO_ABI_SIZEOF (MonoObject)); MONO_ADD_INS (cfg->cbb, addr); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb); /* Ref case */ MONO_START_BB (cfg, is_ref_bb); /* Save the ref to a temporary */ dreg = alloc_ireg (cfg); EMIT_NEW_VARLOADA_VREG (cfg, addr, dreg, m_class_get_byval_arg (klass)); addr->dreg = addr_reg; MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, addr->dreg, 0, obj->dreg); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb); /* Nullable case */ MONO_START_BB (cfg, is_nullable_bb); { MonoInst *addr = mini_emit_get_gsharedvt_info_klass (cfg, klass, MONO_RGCTX_INFO_NULLABLE_CLASS_UNBOX); MonoInst *unbox_call; MonoMethodSignature *unbox_sig; unbox_sig = (MonoMethodSignature *)mono_mempool_alloc0 (cfg->mempool, MONO_SIZEOF_METHOD_SIGNATURE + (1 * sizeof (MonoType *))); unbox_sig->ret = m_class_get_byval_arg (klass); unbox_sig->param_count = 1; unbox_sig->params [0] = mono_get_object_type (); if (cfg->llvm_only) unbox_call = mini_emit_llvmonly_calli (cfg, unbox_sig, &obj, addr); else unbox_call = mini_emit_calli (cfg, unbox_sig, &obj, addr, NULL, NULL); EMIT_NEW_VARLOADA_VREG (cfg, addr, unbox_call->dreg, m_class_get_byval_arg (klass)); addr->dreg = addr_reg; } MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb); /* End */ MONO_START_BB (cfg, end_bb); /* LDOBJ */ EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (klass), addr_reg, 0); return ins; } /* * Returns NULL and set the cfg exception on error. */ static MonoInst* handle_alloc (MonoCompile *cfg, MonoClass *klass, gboolean for_box, int context_used) { MonoInst *iargs [2]; MonoJitICallId alloc_ftn; if (mono_class_get_flags (klass) & TYPE_ATTRIBUTE_ABSTRACT) { char* full_name = mono_type_get_full_name (klass); mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR); mono_error_set_member_access (cfg->error, "Cannot create an abstract class: %s", full_name); g_free (full_name); return NULL; } if (context_used) { gboolean known_instance_size = !mini_is_gsharedvt_klass (klass); MonoMethod *managed_alloc = mono_gc_get_managed_allocator (klass, for_box, known_instance_size); iargs [0] = mini_emit_get_rgctx_klass (cfg, context_used, klass, MONO_RGCTX_INFO_VTABLE); alloc_ftn = MONO_JIT_ICALL_ves_icall_object_new_specific; if (managed_alloc) { if (known_instance_size) { int size = mono_class_instance_size (klass); if (size < MONO_ABI_SIZEOF (MonoObject)) g_error ("Invalid size %d for class %s", size, mono_type_get_full_name (klass)); EMIT_NEW_ICONST (cfg, iargs [1], size); } return mono_emit_method_call (cfg, managed_alloc, iargs, NULL); } return mono_emit_jit_icall_id (cfg, alloc_ftn, iargs); } if (cfg->compile_aot && cfg->cbb->out_of_line && m_class_get_type_token (klass) && m_class_get_image (klass) == mono_defaults.corlib && !mono_class_is_ginst (klass)) { /* This happens often in argument checking code, eg. throw new FooException... 
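 * A sketch (not the literal macro expansion) of what the out-of-line path
 * below ends up calling instead of embedding a vtable constant:
 *
 *   obj = mono_helper_newobj_mscorlib (mono_metadata_token_index (type_token));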
*/ /* Avoid relocations and save some space by calling a helper function specialized to mscorlib */ EMIT_NEW_ICONST (cfg, iargs [0], mono_metadata_token_index (m_class_get_type_token (klass))); alloc_ftn = MONO_JIT_ICALL_mono_helper_newobj_mscorlib; } else { MonoVTable *vtable = mono_class_vtable_checked (klass, cfg->error); if (!is_ok (cfg->error)) { mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR); return NULL; } MonoMethod *managed_alloc = mono_gc_get_managed_allocator (klass, for_box, TRUE); if (managed_alloc) { int size = mono_class_instance_size (klass); if (size < MONO_ABI_SIZEOF (MonoObject)) g_error ("Invalid size %d for class %s", size, mono_type_get_full_name (klass)); EMIT_NEW_VTABLECONST (cfg, iargs [0], vtable); EMIT_NEW_ICONST (cfg, iargs [1], size); return mono_emit_method_call (cfg, managed_alloc, iargs, NULL); } alloc_ftn = MONO_JIT_ICALL_ves_icall_object_new_specific; EMIT_NEW_VTABLECONST (cfg, iargs [0], vtable); } return mono_emit_jit_icall_id (cfg, alloc_ftn, iargs); } /* * Returns NULL and set the cfg exception on error. */ MonoInst* mini_emit_box (MonoCompile *cfg, MonoInst *val, MonoClass *klass, int context_used) { MonoInst *alloc, *ins; if (G_UNLIKELY (m_class_is_byreflike (klass))) { mono_error_set_bad_image (cfg->error, m_class_get_image (cfg->method->klass), "Cannot box IsByRefLike type '%s.%s'", m_class_get_name_space (klass), m_class_get_name (klass)); mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR); return NULL; } if (mono_class_is_nullable (klass)) { MonoMethod* method = get_method_nofail (klass, "Box", 1, 0); if (context_used) { if (cfg->llvm_only) { MonoMethodSignature *sig = mono_method_signature_internal (method); MonoInst *addr = emit_get_rgctx_method (cfg, context_used, method, MONO_RGCTX_INFO_METHOD_FTNDESC); cfg->interp_in_signatures = g_slist_prepend_mempool (cfg->mempool, cfg->interp_in_signatures, sig); return mini_emit_llvmonly_calli (cfg, sig, &val, addr); } else { /* FIXME: What if the class is shared? We might not have to get the method address from the RGCTX. 
*/ MonoInst *addr = emit_get_rgctx_method (cfg, context_used, method, MONO_RGCTX_INFO_GENERIC_METHOD_CODE); MonoInst *rgctx = emit_get_rgctx (cfg, context_used); return mini_emit_calli (cfg, mono_method_signature_internal (method), &val, addr, NULL, rgctx); } } else { gboolean pass_vtable, pass_mrgctx; MonoInst *rgctx_arg = NULL; check_method_sharing (cfg, method, &pass_vtable, &pass_mrgctx); g_assert (!pass_mrgctx); if (pass_vtable) { MonoVTable *vtable = mono_class_vtable_checked (method->klass, cfg->error); mono_error_assert_ok (cfg->error); EMIT_NEW_VTABLECONST (cfg, rgctx_arg, vtable); } return mini_emit_method_call_full (cfg, method, NULL, FALSE, &val, NULL, NULL, rgctx_arg); } } if (mini_is_gsharedvt_klass (klass)) { MonoBasicBlock *is_ref_bb, *is_nullable_bb, *end_bb; MonoInst *res, *is_ref, *src_var, *addr; int dreg; dreg = alloc_ireg (cfg); NEW_BBLOCK (cfg, is_ref_bb); NEW_BBLOCK (cfg, is_nullable_bb); NEW_BBLOCK (cfg, end_bb); is_ref = mini_emit_get_gsharedvt_info_klass (cfg, klass, MONO_RGCTX_INFO_CLASS_BOX_TYPE); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, is_ref->dreg, MONO_GSHAREDVT_BOX_TYPE_REF); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_IBEQ, is_ref_bb); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, is_ref->dreg, MONO_GSHAREDVT_BOX_TYPE_NULLABLE); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_IBEQ, is_nullable_bb); /* Non-ref case */ alloc = handle_alloc (cfg, klass, TRUE, context_used); if (!alloc) return NULL; EMIT_NEW_STORE_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (klass), alloc->dreg, MONO_ABI_SIZEOF (MonoObject), val->dreg); ins->opcode = OP_STOREV_MEMBASE; EMIT_NEW_UNALU (cfg, res, OP_MOVE, dreg, alloc->dreg); res->type = STACK_OBJ; res->klass = klass; MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb); /* Ref case */ MONO_START_BB (cfg, is_ref_bb); /* val is a vtype, so has to load the value manually */ src_var = get_vreg_to_inst (cfg, val->dreg); if (!src_var) src_var = mono_compile_create_var_for_vreg (cfg, m_class_get_byval_arg (klass), OP_LOCAL, val->dreg); EMIT_NEW_VARLOADA (cfg, addr, src_var, src_var->inst_vtype); MONO_EMIT_NEW_LOAD_MEMBASE (cfg, dreg, addr->dreg, 0); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb); /* Nullable case */ MONO_START_BB (cfg, is_nullable_bb); { MonoInst *addr = mini_emit_get_gsharedvt_info_klass (cfg, klass, MONO_RGCTX_INFO_NULLABLE_CLASS_BOX); MonoInst *box_call; MonoMethodSignature *box_sig; /* * klass is Nullable<T>, need to call Nullable<T>.Box () using a gsharedvt signature, but we cannot * construct that method at JIT time, so have to do things by hand. 
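 *
 * The hand-built signature below corresponds to the managed method (sketch):
 *
 *   object Nullable<T>.Box (T value)
 *
 * i.e. ret is object and the single parameter is the (gsharedvt) valuetype.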
*/ box_sig = (MonoMethodSignature *)mono_mempool_alloc0 (cfg->mempool, MONO_SIZEOF_METHOD_SIGNATURE + (1 * sizeof (MonoType *))); box_sig->ret = mono_get_object_type (); box_sig->param_count = 1; box_sig->params [0] = m_class_get_byval_arg (klass); if (cfg->llvm_only) box_call = mini_emit_llvmonly_calli (cfg, box_sig, &val, addr); else box_call = mini_emit_calli (cfg, box_sig, &val, addr, NULL, NULL); EMIT_NEW_UNALU (cfg, res, OP_MOVE, dreg, box_call->dreg); res->type = STACK_OBJ; res->klass = klass; } MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb); MONO_START_BB (cfg, end_bb); return res; } alloc = handle_alloc (cfg, klass, TRUE, context_used); if (!alloc) return NULL; EMIT_NEW_STORE_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (klass), alloc->dreg, MONO_ABI_SIZEOF (MonoObject), val->dreg); return alloc; } static gboolean method_needs_stack_walk (MonoCompile *cfg, MonoMethod *cmethod) { if (cmethod->klass == mono_defaults.systemtype_class) { if (!strcmp (cmethod->name, "GetType")) return TRUE; } /* * In corelib code, methods which need to do a stack walk declare a StackCrawlMark local and pass it as an * argument until it reaches an icall. It's hard to detect which methods do that especially with * StackCrawlMark.LookForMyCallersCaller, so for now, just hardcode the classes which contain the public * methods whose caller is needed. */ if (mono_is_corlib_image (m_class_get_image (cmethod->klass))) { const char *cname = m_class_get_name (cmethod->klass); if (!strcmp (cname, "Assembly") || !strcmp (cname, "AssemblyLoadContext") || (!strcmp (cname, "Activator"))) { if (!strcmp (cmethod->name, "op_Equality")) return FALSE; return TRUE; } } return FALSE; } G_GNUC_UNUSED MonoInst* mini_handle_enum_has_flag (MonoCompile *cfg, MonoClass *klass, MonoInst *enum_this, int enum_val_reg, MonoInst *enum_flag) { MonoType *enum_type = mono_type_get_underlying_type (m_class_get_byval_arg (klass)); guint32 load_opc = mono_type_to_load_membase (cfg, enum_type); gboolean is_i4; switch (enum_type->type) { case MONO_TYPE_I8: case MONO_TYPE_U8: #if SIZEOF_REGISTER == 8 case MONO_TYPE_I: case MONO_TYPE_U: #endif is_i4 = FALSE; break; default: is_i4 = TRUE; break; } { MonoInst *load = NULL, *and_, *cmp, *ceq; int enum_reg = is_i4 ? alloc_ireg (cfg) : alloc_lreg (cfg); int and_reg = is_i4 ? alloc_ireg (cfg) : alloc_lreg (cfg); int dest_reg = alloc_ireg (cfg); if (enum_this) { EMIT_NEW_LOAD_MEMBASE (cfg, load, load_opc, enum_reg, enum_this->dreg, 0); } else { g_assert (enum_val_reg != -1); enum_reg = enum_val_reg; } EMIT_NEW_BIALU (cfg, and_, is_i4 ? OP_IAND : OP_LAND, and_reg, enum_reg, enum_flag->dreg); EMIT_NEW_BIALU (cfg, cmp, is_i4 ? OP_ICOMPARE : OP_LCOMPARE, -1, and_reg, enum_flag->dreg); EMIT_NEW_UNALU (cfg, ceq, is_i4 ? OP_ICEQ : OP_LCEQ, dest_reg, -1); ceq->type = STACK_I4; if (!is_i4) { load = load ?
mono_decompose_opcode (cfg, load) : NULL; and_ = mono_decompose_opcode (cfg, and_); cmp = mono_decompose_opcode (cfg, cmp); ceq = mono_decompose_opcode (cfg, ceq); } return ceq; } } static void emit_set_deopt_il_offset (MonoCompile *cfg, int offset) { MonoInst *ins; if (!(cfg->deopt && cfg->method == cfg->current_method)) return; EMIT_NEW_VARLOADA (cfg, ins, cfg->il_state_var, NULL); MONO_EMIT_NEW_STORE_MEMBASE_IMM (cfg, OP_STOREI4_MEMBASE_IMM, ins->dreg, MONO_STRUCT_OFFSET (MonoMethodILState, il_offset), offset); } static MonoInst* emit_get_rgctx_dele_tramp (MonoCompile *cfg, int context_used, MonoClass *klass, MonoMethod *virt_method, gboolean _virtual, MonoRgctxInfoType rgctx_type) { MonoDelegateClassMethodPair *info; MonoJumpInfoRgctxEntry *entry; info = (MonoDelegateClassMethodPair *)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoDelegateClassMethodPair)); info->klass = klass; info->method = virt_method; info->is_virtual = _virtual; entry = mono_patch_info_rgctx_entry_new (cfg->mempool, cfg->method, context_used_is_mrgctx (cfg, context_used), MONO_PATCH_INFO_DELEGATE_TRAMPOLINE, info, rgctx_type); return emit_rgctx_fetch (cfg, context_used, entry); } /* * Returns NULL and set the cfg exception on error. */ static G_GNUC_UNUSED MonoInst* handle_delegate_ctor (MonoCompile *cfg, MonoClass *klass, MonoInst *target, MonoMethod *method, int target_method_context_used, int invoke_context_used, gboolean virtual_) { MonoInst *ptr; int dreg; gpointer trampoline; MonoInst *obj, *tramp_ins; guint8 **code_slot; if (virtual_ && !cfg->llvm_only) { MonoMethod *invoke = mono_get_delegate_invoke_internal (klass); g_assert (invoke); //FIXME verify & fix any issue with removing invoke_context_used restriction if (invoke_context_used || !mono_get_delegate_virtual_invoke_impl (mono_method_signature_internal (invoke), target_method_context_used ? 
NULL : method)) return NULL; } obj = handle_alloc (cfg, klass, FALSE, invoke_context_used); if (!obj) return NULL; /* Inline the contents of mono_delegate_ctor */ /* Set target field */ /* Optimize away setting of NULL target */ if (!MONO_INS_IS_PCONST_NULL (target)) { if (!(method->flags & METHOD_ATTRIBUTE_STATIC)) { MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, target->dreg, 0); MONO_EMIT_NEW_COND_EXC (cfg, EQ, "NullReferenceException"); } if (!mini_debug_options.weak_memory_model) mini_emit_memory_barrier (cfg, MONO_MEMORY_BARRIER_REL); MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, obj->dreg, MONO_STRUCT_OFFSET (MonoDelegate, target), target->dreg); if (cfg->gen_write_barriers) { dreg = alloc_preg (cfg); EMIT_NEW_BIALU_IMM (cfg, ptr, OP_PADD_IMM, dreg, obj->dreg, MONO_STRUCT_OFFSET (MonoDelegate, target)); mini_emit_write_barrier (cfg, ptr, target); } } /* Set method field */ if (!(target_method_context_used || invoke_context_used) && !cfg->llvm_only) { //If compiling with gsharing enabled, it's faster to load the method from the delegate trampoline info than to use an rgctx slot MonoInst *method_ins = emit_get_rgctx_method (cfg, target_method_context_used, method, MONO_RGCTX_INFO_METHOD); MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, obj->dreg, MONO_STRUCT_OFFSET (MonoDelegate, method), method_ins->dreg); } if (cfg->llvm_only) { if (virtual_) { MonoInst *args [ ] = { obj, target, emit_get_rgctx_method (cfg, target_method_context_used, method, MONO_RGCTX_INFO_METHOD) }; mono_emit_jit_icall (cfg, mini_llvmonly_init_delegate_virtual, args); return obj; } } /* * To avoid looking up the compiled code belonging to the target method * in mono_delegate_trampoline (), we allocate a per-domain memory slot to * store it, and we fill it after the method has been compiled. */ if (!method->dynamic && !cfg->llvm_only) { MonoInst *code_slot_ins; if (target_method_context_used) { code_slot_ins = emit_get_rgctx_method (cfg, target_method_context_used, method, MONO_RGCTX_INFO_METHOD_DELEGATE_CODE); } else { MonoJitMemoryManager *jit_mm = (MonoJitMemoryManager*)cfg->jit_mm; jit_mm_lock (jit_mm); if (!jit_mm->method_code_hash) jit_mm->method_code_hash = g_hash_table_new (NULL, NULL); code_slot = (guint8 **)g_hash_table_lookup (jit_mm->method_code_hash, method); if (!code_slot) { code_slot = (guint8 **)mono_mem_manager_alloc0 (jit_mm->mem_manager, sizeof (gpointer)); g_hash_table_insert (jit_mm->method_code_hash, method, code_slot); } jit_mm_unlock (jit_mm); code_slot_ins = mini_emit_runtime_constant (cfg, MONO_PATCH_INFO_METHOD_CODE_SLOT, method); } MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, obj->dreg, MONO_STRUCT_OFFSET (MonoDelegate, method_code), code_slot_ins->dreg); } if (target_method_context_used || invoke_context_used) { tramp_ins = emit_get_rgctx_dele_tramp (cfg, target_method_context_used | invoke_context_used, klass, method, virtual_, MONO_RGCTX_INFO_DELEGATE_TRAMP_INFO); //This is emitted as a constant store for the non-shared case.
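//For the shared case, the load and store emitted below amount to (rough sketch):
//  del->method = ((MonoDelegateTrampInfo*)tramp_info)->method;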
//We copy from the delegate trampoline info as it's faster than an rgctx fetch dreg = alloc_preg (cfg); if (!cfg->llvm_only) { MONO_EMIT_NEW_LOAD_MEMBASE (cfg, dreg, tramp_ins->dreg, MONO_STRUCT_OFFSET (MonoDelegateTrampInfo, method)); MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, obj->dreg, MONO_STRUCT_OFFSET (MonoDelegate, method), dreg); } } else if (cfg->compile_aot) { MonoDelegateClassMethodPair *del_tramp; del_tramp = (MonoDelegateClassMethodPair *)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoDelegateClassMethodPair)); del_tramp->klass = klass; del_tramp->method = method; del_tramp->is_virtual = virtual_; EMIT_NEW_AOTCONST (cfg, tramp_ins, MONO_PATCH_INFO_DELEGATE_TRAMPOLINE, del_tramp); } else { if (virtual_) trampoline = mono_create_delegate_virtual_trampoline (klass, method); else trampoline = mono_create_delegate_trampoline_info (klass, method); EMIT_NEW_PCONST (cfg, tramp_ins, trampoline); } if (cfg->llvm_only) { MonoInst *args [ ] = { obj, tramp_ins }; mono_emit_jit_icall (cfg, mini_llvmonly_init_delegate, args); return obj; } /* Set invoke_impl field */ if (virtual_) { MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, obj->dreg, MONO_STRUCT_OFFSET (MonoDelegate, invoke_impl), tramp_ins->dreg); } else { dreg = alloc_preg (cfg); MONO_EMIT_NEW_LOAD_MEMBASE (cfg, dreg, tramp_ins->dreg, MONO_STRUCT_OFFSET (MonoDelegateTrampInfo, invoke_impl)); MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, obj->dreg, MONO_STRUCT_OFFSET (MonoDelegate, invoke_impl), dreg); dreg = alloc_preg (cfg); MONO_EMIT_NEW_LOAD_MEMBASE (cfg, dreg, tramp_ins->dreg, MONO_STRUCT_OFFSET (MonoDelegateTrampInfo, method_ptr)); MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, obj->dreg, MONO_STRUCT_OFFSET (MonoDelegate, method_ptr), dreg); } dreg = alloc_preg (cfg); MONO_EMIT_NEW_ICONST (cfg, dreg, virtual_ ? 1 : 0); MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STOREI1_MEMBASE_REG, obj->dreg, MONO_STRUCT_OFFSET (MonoDelegate, method_is_virtual), dreg); /* All the checks which are in mono_delegate_ctor () are done by the delegate trampoline */ return obj; } /* * handle_constrained_gsharedvt_call: * * Handle constrained calls where the receiver is a gsharedvt type. * Return the instruction representing the call. Set the cfg exception on failure. */ static MonoInst* handle_constrained_gsharedvt_call (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **sp, MonoClass *constrained_class, gboolean *ref_emit_widen) { MonoInst *ins = NULL; gboolean emit_widen = *ref_emit_widen; gboolean supported; /* * Constrained calls need to behave differently at runtime depending on whether the receiver is instantiated as a ref type or as a vtype. * This is hard to do with the current call code, since we would have to emit a branch and two different calls. So instead, we * pack the arguments into an array, and do the rest of the work in an icall.
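 *
 * The packed argument convention handed to mono_gsharedvt_constrained_call is,
 * as a sketch:
 *
 *   args [0] = receiver (or NULL for the static/GetType wrapper cases)
 *   args [1] = the target MonoMethod
 *   args [2] = the constrained MonoClass
 *   args [3] = one byte per parameter saying whether the icall must dereference it (or NULL)
 *   args [4] = the parameter values/addresses in runtime_invoke () format (or NULL)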
*/ supported = ((cmethod->klass == mono_defaults.object_class) || mono_class_is_interface (cmethod->klass) || (!m_class_is_valuetype (cmethod->klass) && m_class_get_image (cmethod->klass) != mono_defaults.corlib)); if (supported) supported = (MONO_TYPE_IS_VOID (fsig->ret) || MONO_TYPE_IS_PRIMITIVE (fsig->ret) || MONO_TYPE_IS_REFERENCE (fsig->ret) || MONO_TYPE_ISSTRUCT (fsig->ret) || m_class_is_enumtype (mono_class_from_mono_type_internal (fsig->ret)) || mini_is_gsharedvt_type (fsig->ret)); if (supported) { if (fsig->param_count == 0 || (!fsig->hasthis && fsig->param_count == 1)) { supported = TRUE; } else { supported = TRUE; for (int i = 0; i < fsig->param_count; ++i) { if (!(m_type_is_byref (fsig->params [i]) || MONO_TYPE_IS_PRIMITIVE (fsig->params [i]) || MONO_TYPE_IS_REFERENCE (fsig->params [i]) || MONO_TYPE_ISSTRUCT (fsig->params [i]) || mini_is_gsharedvt_type (fsig->params [i]))) supported = FALSE; } } } if (supported) { MonoInst *args [5]; /* * This case handles calls to * - object:ToString()/Equals()/GetHashCode(), * - System.IComparable<T>:CompareTo() * - System.IEquatable<T>:Equals () * plus some simple interface calls enough to support AsyncTaskMethodBuilder. */ if (fsig->hasthis) args [0] = sp [0]; else EMIT_NEW_PCONST (cfg, args [0], NULL); args [1] = emit_get_rgctx_method (cfg, mono_method_check_context_used (cmethod), cmethod, MONO_RGCTX_INFO_METHOD); args [2] = mini_emit_get_rgctx_klass (cfg, mono_class_check_context_used (constrained_class), constrained_class, MONO_RGCTX_INFO_KLASS); /* !fsig->hasthis is for the wrapper for the Object.GetType () icall or static virtual methods */ if ((fsig->hasthis || m_method_is_static (cmethod)) && fsig->param_count) { /* Call mono_gsharedvt_constrained_call (gpointer mp, MonoMethod *cmethod, MonoClass *klass, gboolean *deref_args, gpointer *args) */ gboolean has_gsharedvt = FALSE; for (int i = 0; i < fsig->param_count; ++i) { if (mini_is_gsharedvt_type (fsig->params [i])) has_gsharedvt = TRUE; } /* Pass an array of bools which signal whether the corresponding argument is a gsharedvt ref type */ if (has_gsharedvt) { MONO_INST_NEW (cfg, ins, OP_LOCALLOC_IMM); ins->dreg = alloc_preg (cfg); ins->inst_imm = fsig->param_count; MONO_ADD_INS (cfg->cbb, ins); args [3] = ins; } else { EMIT_NEW_PCONST (cfg, args [3], 0); } /* Pass the arguments using a localloc-ed array using the format expected by runtime_invoke () */ MONO_INST_NEW (cfg, ins, OP_LOCALLOC_IMM); ins->dreg = alloc_preg (cfg); ins->inst_imm = fsig->param_count * sizeof (target_mgreg_t); MONO_ADD_INS (cfg->cbb, ins); args [4] = ins; for (int i = 0; i < fsig->param_count; ++i) { int addr_reg; if (mini_is_gsharedvt_type (fsig->params [i])) { MonoInst *is_deref; int deref_arg_reg; ins = mini_emit_get_gsharedvt_info_klass (cfg, mono_class_from_mono_type_internal (fsig->params [i]), MONO_RGCTX_INFO_CLASS_BOX_TYPE); deref_arg_reg = alloc_preg (cfg); /* deref_arg = BOX_TYPE != MONO_GSHAREDVT_BOX_TYPE_VTYPE */ EMIT_NEW_BIALU_IMM (cfg, is_deref, OP_ISUB_IMM, deref_arg_reg, ins->dreg, 1); MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STOREI1_MEMBASE_REG, args [3]->dreg, i, is_deref->dreg); } else if (has_gsharedvt) { MONO_EMIT_NEW_STORE_MEMBASE_IMM (cfg, OP_STOREI1_MEMBASE_IMM, args [3]->dreg, i, 0); } MonoInst *arg = sp [i + fsig->hasthis]; if (mini_is_gsharedvt_type (fsig->params [i]) || MONO_TYPE_IS_PRIMITIVE (fsig->params [i]) || MONO_TYPE_ISSTRUCT (fsig->params [i])) { EMIT_NEW_VARLOADA_VREG (cfg, ins, arg->dreg, fsig->params [i]); addr_reg = ins->dreg; EMIT_NEW_STORE_MEMBASE (cfg, ins,
OP_STORE_MEMBASE_REG, args [4]->dreg, i * sizeof (target_mgreg_t), addr_reg); } else { EMIT_NEW_STORE_MEMBASE (cfg, ins, OP_STORE_MEMBASE_REG, args [4]->dreg, i * sizeof (target_mgreg_t), arg->dreg); } } } else { EMIT_NEW_ICONST (cfg, args [3], 0); EMIT_NEW_ICONST (cfg, args [4], 0); } ins = mono_emit_jit_icall (cfg, mono_gsharedvt_constrained_call, args); emit_widen = FALSE; if (mini_is_gsharedvt_type (fsig->ret)) { ins = handle_unbox_gsharedvt (cfg, mono_class_from_mono_type_internal (fsig->ret), ins); } else if (MONO_TYPE_IS_PRIMITIVE (fsig->ret) || MONO_TYPE_ISSTRUCT (fsig->ret) || m_class_is_enumtype (mono_class_from_mono_type_internal (fsig->ret))) { MonoInst *add; /* Unbox */ NEW_BIALU_IMM (cfg, add, OP_ADD_IMM, alloc_dreg (cfg, STACK_MP), ins->dreg, MONO_ABI_SIZEOF (MonoObject)); MONO_ADD_INS (cfg->cbb, add); /* Load value */ NEW_LOAD_MEMBASE_TYPE (cfg, ins, fsig->ret, add->dreg, 0); MONO_ADD_INS (cfg->cbb, ins); /* ins represents the call result */ } } else { GSHAREDVT_FAILURE (CEE_CALLVIRT); } *ref_emit_widen = emit_widen; return ins; exception_exit: return NULL; } static void mono_emit_load_got_addr (MonoCompile *cfg) { MonoInst *getaddr, *dummy_use; if (!cfg->got_var || cfg->got_var_allocated) return; MONO_INST_NEW (cfg, getaddr, OP_LOAD_GOTADDR); getaddr->cil_code = cfg->header->code; getaddr->dreg = cfg->got_var->dreg; /* Add it to the start of the first bblock */ if (cfg->bb_entry->code) { getaddr->next = cfg->bb_entry->code; cfg->bb_entry->code = getaddr; } else MONO_ADD_INS (cfg->bb_entry, getaddr); cfg->got_var_allocated = TRUE; /* * Add a dummy use to keep the got_var alive, since real uses might * only be generated by the back ends. * Add it to end_bblock, so the variable's lifetime covers the whole * method. * It would be better to make the usage of the got var explicit in all * cases when the backend needs it (i.e. calls, throw etc.), so this * wouldn't be needed. */ NEW_DUMMY_USE (cfg, dummy_use, cfg->got_var); MONO_ADD_INS (cfg->bb_exit, dummy_use); } static MonoMethod* get_constrained_method (MonoCompile *cfg, MonoImage *image, guint32 token, MonoMethod *cil_method, MonoClass *constrained_class, MonoGenericContext *generic_context) { MonoMethod *cmethod = cil_method; gboolean constrained_is_generic_param = m_class_get_byval_arg (constrained_class)->type == MONO_TYPE_VAR || m_class_get_byval_arg (constrained_class)->type == MONO_TYPE_MVAR; if (cfg->current_method->wrapper_type != MONO_WRAPPER_NONE) { if (cfg->verbose_level > 2) printf ("DM Constrained call to %s\n", mono_type_get_full_name (constrained_class)); if (!(constrained_is_generic_param && cfg->gshared)) { cmethod = mono_get_method_constrained_with_method (image, cil_method, constrained_class, generic_context, cfg->error); CHECK_CFG_ERROR; } } else { if (cfg->verbose_level > 2) printf ("Constrained call to %s\n", mono_type_get_full_name (constrained_class)); if (constrained_is_generic_param && cfg->gshared) { /* * This is needed since get_method_constrained can't find * the method in klass representing a type var. * The type var is guaranteed to be a reference type in this * case. 
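 * For example (illustrative only), a "constrained. !!T callvirt M" sequence
 * where T has a "class" constraint reaches this path under generic sharing,
 * so the unconstrained cmethod can be used directly.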
*/ if (!mini_is_gsharedvt_klass (constrained_class)) g_assert (!m_class_is_valuetype (cmethod->klass)); } else { cmethod = mono_get_method_constrained_checked (image, token, constrained_class, generic_context, &cil_method, cfg->error); CHECK_CFG_ERROR; } } return cmethod; mono_error_exit: return NULL; } static gboolean method_does_not_return (MonoMethod *method) { // FIXME: Under netcore, these are decorated with the [DoesNotReturn] attribute return m_class_get_image (method->klass) == mono_defaults.corlib && !strcmp (m_class_get_name (method->klass), "ThrowHelper") && strstr (method->name, "Throw") == method->name && !method->is_inflated; } static int inline_limit, llvm_jit_inline_limit, llvm_aot_inline_limit; static gboolean inline_limit_inited; static gboolean mono_method_check_inlining (MonoCompile *cfg, MonoMethod *method) { MonoMethodHeaderSummary header; MonoVTable *vtable; int limit; #ifdef MONO_ARCH_SOFT_FLOAT_FALLBACK MonoMethodSignature *sig = mono_method_signature_internal (method); int i; #endif if (cfg->disable_inline) return FALSE; if (cfg->gsharedvt) return FALSE; if (cfg->inline_depth > 10) return FALSE; if (!mono_method_get_header_summary (method, &header)) return FALSE; /*runtime, icall and pinvoke are checked by summary call*/ if ((method->iflags & METHOD_IMPL_ATTRIBUTE_NOINLINING) || (method->iflags & METHOD_IMPL_ATTRIBUTE_SYNCHRONIZED) || header.has_clauses) return FALSE; if (method->flags & METHOD_ATTRIBUTE_REQSECOBJ) /* Used to mark methods containing StackCrawlMark locals */ return FALSE; /* also consider num_locals? */ /* Do the size check early to avoid creating vtables */ if (!inline_limit_inited) { char *inlinelimit; if ((inlinelimit = g_getenv ("MONO_INLINELIMIT"))) { inline_limit = atoi (inlinelimit); llvm_jit_inline_limit = inline_limit; llvm_aot_inline_limit = inline_limit; g_free (inlinelimit); } else { inline_limit = INLINE_LENGTH_LIMIT; llvm_jit_inline_limit = LLVM_JIT_INLINE_LENGTH_LIMIT; llvm_aot_inline_limit = LLVM_AOT_INLINE_LENGTH_LIMIT; } inline_limit_inited = TRUE; } if (COMPILE_LLVM (cfg)) { if (cfg->compile_aot) limit = llvm_aot_inline_limit; else limit = llvm_jit_inline_limit; } else { limit = inline_limit; } if (header.code_size >= limit && !(method->iflags & METHOD_IMPL_ATTRIBUTE_AGGRESSIVE_INLINING)) return FALSE; /* * if we can initialize the class of the method right away, we do, * otherwise we don't allow inlining if the class needs initialization, * since it would mean inserting a call to mono_runtime_class_init() * inside the inlined code */ if (cfg->gshared && m_class_has_cctor (method->klass) && mini_class_check_context_used (cfg, method->klass)) return FALSE; { /* The AggressiveInlining hint is a good excuse to force that cctor to run. 
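 * Concretely, the non-AOT branch below eagerly creates the vtable and runs
 * the cctor via mono_runtime_class_init_full () at JIT time, so the inlined
 * body will not need an init check.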
*/ if ((cfg->opt & MONO_OPT_AGGRESSIVE_INLINING) || method->iflags & METHOD_IMPL_ATTRIBUTE_AGGRESSIVE_INLINING) { if (m_class_has_cctor (method->klass)) { ERROR_DECL (error); vtable = mono_class_vtable_checked (method->klass, error); if (!is_ok (error)) { mono_error_cleanup (error); return FALSE; } if (!cfg->compile_aot) { if (!mono_runtime_class_init_full (vtable, error)) { mono_error_cleanup (error); return FALSE; } } } } else if (mono_class_is_before_field_init (method->klass)) { if (cfg->run_cctors && m_class_has_cctor (method->klass)) { ERROR_DECL (error); /*FIXME it would be easier and lazier to just use mono_class_try_get_vtable */ if (!m_class_get_runtime_vtable (method->klass)) /* No vtable created yet */ return FALSE; vtable = mono_class_vtable_checked (method->klass, error); if (!is_ok (error)) { mono_error_cleanup (error); return FALSE; } /* This makes it so that inlining cannot trigger */ /* .cctors: too many apps depend on them */ /* running with a specific order... */ if (! vtable->initialized) return FALSE; if (!mono_runtime_class_init_full (vtable, error)) { mono_error_cleanup (error); return FALSE; } } } else if (mono_class_needs_cctor_run (method->klass, NULL)) { ERROR_DECL (error); if (!m_class_get_runtime_vtable (method->klass)) /* No vtable created yet */ return FALSE; vtable = mono_class_vtable_checked (method->klass, error); if (!is_ok (error)) { mono_error_cleanup (error); return FALSE; } if (!vtable->initialized) return FALSE; } } #ifdef MONO_ARCH_SOFT_FLOAT_FALLBACK if (mono_arch_is_soft_float ()) { /* FIXME: */ if (sig->ret && sig->ret->type == MONO_TYPE_R4) return FALSE; for (i = 0; i < sig->param_count; ++i) if (!m_type_is_byref (sig->params [i]) && sig->params [i]->type == MONO_TYPE_R4) return FALSE; } #endif if (g_list_find (cfg->dont_inline, method)) return FALSE; if (mono_profiler_get_call_instrumentation_flags (method)) return FALSE; if (mono_profiler_coverage_instrumentation_enabled (method)) return FALSE; if (method_does_not_return (method)) return FALSE; return TRUE; } static gboolean mini_field_access_needs_cctor_run (MonoCompile *cfg, MonoMethod *method, MonoClass *klass, MonoVTable *vtable) { if (!cfg->compile_aot) { g_assert (vtable); if (vtable->initialized) return FALSE; } if (mono_class_is_before_field_init (klass)) { if (cfg->method == method) return FALSE; } if (!mono_class_needs_cctor_run (klass, method)) return FALSE; if (! (method->flags & METHOD_ATTRIBUTE_STATIC) && (klass == method->klass)) /* The initialization is already done before the method is called */ return FALSE; return TRUE; } int mini_emit_sext_index_reg (MonoCompile *cfg, MonoInst *index) { int index_reg = index->dreg; int index2_reg; #if SIZEOF_REGISTER == 8 /* The array reg is 64 bits but the index reg is only 32 */ if (COMPILE_LLVM (cfg)) { /* * abcrem can't handle the OP_SEXT_I4, so add this after abcrem, * during OP_BOUNDS_CHECK decomposition, and in the implementation * of OP_X86_LEA for llvm.
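 *
 * On 64 bit targets the widening below is conceptually (sketch):
 *
 *   index2 = (gint64)(gint32)index;    // what OP_SEXT_I4 computes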
*/ index2_reg = index_reg; } else { index2_reg = alloc_preg (cfg); MONO_EMIT_NEW_UNALU (cfg, OP_SEXT_I4, index2_reg, index_reg); } #else if (index->type == STACK_I8) { index2_reg = alloc_preg (cfg); MONO_EMIT_NEW_UNALU (cfg, OP_LCONV_TO_I4, index2_reg, index_reg); } else { index2_reg = index_reg; } #endif return index2_reg; } MonoInst* mini_emit_ldelema_1_ins (MonoCompile *cfg, MonoClass *klass, MonoInst *arr, MonoInst *index, gboolean bcheck, gboolean bounded) { MonoInst *ins; guint32 size; int mult_reg, add_reg, array_reg, index2_reg, bounds_reg, lower_bound_reg, realidx2_reg; int context_used; if (mini_is_gsharedvt_variable_klass (klass)) { size = -1; } else { mono_class_init_internal (klass); size = mono_class_array_element_size (klass); } mult_reg = alloc_preg (cfg); array_reg = arr->dreg; realidx2_reg = index2_reg = mini_emit_sext_index_reg (cfg, index); if (bounded) { bounds_reg = alloc_preg (cfg); lower_bound_reg = alloc_preg (cfg); realidx2_reg = alloc_preg (cfg); MonoBasicBlock *is_null_bb = NULL; NEW_BBLOCK (cfg, is_null_bb); // gint32 lower_bound = 0; // if (arr->bounds) // lower_bound = arr->bounds.lower_bound; // realidx2 = index2 - lower_bound; MONO_EMIT_NEW_PCONST (cfg, lower_bound_reg, NULL); MONO_EMIT_NEW_LOAD_MEMBASE (cfg, bounds_reg, arr->dreg, MONO_STRUCT_OFFSET (MonoArray, bounds)); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, bounds_reg, 0); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_PBEQ, is_null_bb); MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADI4_MEMBASE, lower_bound_reg, bounds_reg, MONO_STRUCT_OFFSET (MonoArrayBounds, lower_bound)); MONO_START_BB (cfg, is_null_bb); MONO_EMIT_NEW_BIALU (cfg, OP_PSUB, realidx2_reg, index2_reg, lower_bound_reg); } if (bcheck) MONO_EMIT_BOUNDS_CHECK (cfg, array_reg, MonoArray, max_length, realidx2_reg); #if defined(TARGET_X86) || defined(TARGET_AMD64) if (size == 1 || size == 2 || size == 4 || size == 8) { static const int fast_log2 [] = { 1, 0, 1, -1, 2, -1, -1, -1, 3 }; EMIT_NEW_X86_LEA (cfg, ins, array_reg, realidx2_reg, fast_log2 [size], MONO_STRUCT_OFFSET (MonoArray, vector)); ins->klass = klass; ins->type = STACK_MP; return ins; } #endif add_reg = alloc_ireg_mp (cfg); if (size == -1) { MonoInst *rgctx_ins; /* gsharedvt */ g_assert (cfg->gshared); context_used = mini_class_check_context_used (cfg, klass); g_assert (context_used); rgctx_ins = mini_emit_get_gsharedvt_info_klass (cfg, klass, MONO_RGCTX_INFO_ARRAY_ELEMENT_SIZE); MONO_EMIT_NEW_BIALU (cfg, OP_IMUL, mult_reg, realidx2_reg, rgctx_ins->dreg); } else { MONO_EMIT_NEW_BIALU_IMM (cfg, OP_MUL_IMM, mult_reg, realidx2_reg, size); } MONO_EMIT_NEW_BIALU (cfg, OP_PADD, add_reg, array_reg, mult_reg); NEW_BIALU_IMM (cfg, ins, OP_PADD_IMM, add_reg, add_reg, MONO_STRUCT_OFFSET (MonoArray, vector)); ins->klass = klass; ins->type = STACK_MP; MONO_ADD_INS (cfg->cbb, ins); return ins; } static MonoInst* mini_emit_ldelema_2_ins (MonoCompile *cfg, MonoClass *klass, MonoInst *arr, MonoInst *index_ins1, MonoInst *index_ins2) { int bounds_reg = alloc_preg (cfg); int add_reg = alloc_ireg_mp (cfg); int mult_reg = alloc_preg (cfg); int mult2_reg = alloc_preg (cfg); int low1_reg = alloc_preg (cfg); int low2_reg = alloc_preg (cfg); int high1_reg = alloc_preg (cfg); int high2_reg = alloc_preg (cfg); int realidx1_reg = alloc_preg (cfg); int realidx2_reg = alloc_preg (cfg); int sum_reg = alloc_preg (cfg); int index1, index2; MonoInst *ins; guint32 size; mono_class_init_internal (klass); size = mono_class_array_element_size (klass); index1 = index_ins1->dreg; index2 = index_ins2->dreg; #if SIZEOF_REGISTER 
== 8 /* The array reg is 64 bits but the index reg is only 32 */ if (COMPILE_LLVM (cfg)) { /* Not needed */ } else { int tmpreg = alloc_preg (cfg); MONO_EMIT_NEW_UNALU (cfg, OP_SEXT_I4, tmpreg, index1); index1 = tmpreg; tmpreg = alloc_preg (cfg); MONO_EMIT_NEW_UNALU (cfg, OP_SEXT_I4, tmpreg, index2); index2 = tmpreg; } #else // FIXME: Do we need to do something here for i8 indexes, like in ldelema_1_ins ? #endif /* range checking */ MONO_EMIT_NEW_LOAD_MEMBASE (cfg, bounds_reg, arr->dreg, MONO_STRUCT_OFFSET (MonoArray, bounds)); MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADI4_MEMBASE, low1_reg, bounds_reg, MONO_STRUCT_OFFSET (MonoArrayBounds, lower_bound)); MONO_EMIT_NEW_BIALU (cfg, OP_PSUB, realidx1_reg, index1, low1_reg); MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADI4_MEMBASE, high1_reg, bounds_reg, MONO_STRUCT_OFFSET (MonoArrayBounds, length)); MONO_EMIT_NEW_BIALU (cfg, OP_COMPARE, -1, high1_reg, realidx1_reg); MONO_EMIT_NEW_COND_EXC (cfg, LE_UN, "IndexOutOfRangeException"); MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADI4_MEMBASE, low2_reg, bounds_reg, sizeof (MonoArrayBounds) + MONO_STRUCT_OFFSET (MonoArrayBounds, lower_bound)); MONO_EMIT_NEW_BIALU (cfg, OP_PSUB, realidx2_reg, index2, low2_reg); MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADI4_MEMBASE, high2_reg, bounds_reg, sizeof (MonoArrayBounds) + MONO_STRUCT_OFFSET (MonoArrayBounds, length)); MONO_EMIT_NEW_BIALU (cfg, OP_COMPARE, -1, high2_reg, realidx2_reg); MONO_EMIT_NEW_COND_EXC (cfg, LE_UN, "IndexOutOfRangeException"); MONO_EMIT_NEW_BIALU (cfg, OP_PMUL, mult_reg, high2_reg, realidx1_reg); MONO_EMIT_NEW_BIALU (cfg, OP_PADD, sum_reg, mult_reg, realidx2_reg); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_PMUL_IMM, mult2_reg, sum_reg, size); MONO_EMIT_NEW_BIALU (cfg, OP_PADD, add_reg, mult2_reg, arr->dreg); NEW_BIALU_IMM (cfg, ins, OP_PADD_IMM, add_reg, add_reg, MONO_STRUCT_OFFSET (MonoArray, vector)); ins->type = STACK_MP; ins->klass = klass; MONO_ADD_INS (cfg->cbb, ins); return ins; } static MonoInst* mini_emit_ldelema_ins (MonoCompile *cfg, MonoMethod *cmethod, MonoInst **sp, guchar *ip, gboolean is_set) { int rank; MonoInst *addr; MonoMethod *addr_method; int element_size; MonoClass *eclass = m_class_get_element_class (cmethod->klass); gboolean bounded = m_class_get_byval_arg (cmethod->klass) ? m_class_get_byval_arg (cmethod->klass)->type == MONO_TYPE_ARRAY : FALSE; rank = mono_method_signature_internal (cmethod)->param_count - (is_set? 
1: 0); if (rank == 1) return mini_emit_ldelema_1_ins (cfg, eclass, sp [0], sp [1], TRUE, bounded); /* emit_ldelema_2 depends on OP_LMUL */ if (!cfg->backend->emulate_mul_div && rank == 2 && (cfg->opt & MONO_OPT_INTRINS) && !mini_is_gsharedvt_variable_klass (eclass)) { return mini_emit_ldelema_2_ins (cfg, eclass, sp [0], sp [1], sp [2]); } if (mini_is_gsharedvt_variable_klass (eclass)) element_size = 0; else element_size = mono_class_array_element_size (eclass); addr_method = mono_marshal_get_array_address (rank, element_size); addr = mono_emit_method_call (cfg, addr_method, sp, NULL); return addr; } static gboolean mini_class_is_reference (MonoClass *klass) { return mini_type_is_reference (m_class_get_byval_arg (klass)); } MonoInst* mini_emit_array_store (MonoCompile *cfg, MonoClass *klass, MonoInst **sp, gboolean safety_checks) { if (safety_checks && mini_class_is_reference (klass) && !(MONO_INS_IS_PCONST_NULL (sp [2]))) { MonoClass *obj_array = mono_array_class_get_cached (mono_defaults.object_class); MonoMethod *helper; MonoInst *iargs [3]; if (sp [0]->type != STACK_OBJ) return NULL; if (sp [2]->type != STACK_OBJ) return NULL; iargs [2] = sp [2]; iargs [1] = sp [1]; iargs [0] = sp [0]; MonoClass *array_class = sp [0]->klass; if (array_class && m_class_get_rank (array_class) == 1) { MonoClass *eclass = m_class_get_element_class (array_class); if (m_class_is_sealed (eclass)) { helper = mono_marshal_get_virtual_stelemref (array_class); /* Make a non-virtual call if possible */ return mono_emit_method_call (cfg, helper, iargs, NULL); } } helper = mono_marshal_get_virtual_stelemref (obj_array); if (!helper->slot) mono_class_setup_vtable (obj_array); g_assert (helper->slot); return mono_emit_method_call (cfg, helper, iargs, sp [0]); } else { MonoInst *ins; if (mini_is_gsharedvt_variable_klass (klass)) { MonoInst *addr; // FIXME-VT: OP_ICONST optimization addr = mini_emit_ldelema_1_ins (cfg, klass, sp [0], sp [1], TRUE, FALSE); EMIT_NEW_STORE_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (klass), addr->dreg, 0, sp [2]->dreg); ins->opcode = OP_STOREV_MEMBASE; } else if (sp [1]->opcode == OP_ICONST) { int array_reg = sp [0]->dreg; int index_reg = sp [1]->dreg; int offset = (mono_class_array_element_size (klass) * sp [1]->inst_c0) + MONO_STRUCT_OFFSET (MonoArray, vector); if (SIZEOF_REGISTER == 8 && COMPILE_LLVM (cfg) && sp [1]->inst_c0 < 0) MONO_EMIT_NEW_UNALU (cfg, OP_ZEXT_I4, index_reg, index_reg); if (safety_checks) MONO_EMIT_BOUNDS_CHECK (cfg, array_reg, MonoArray, max_length, index_reg); EMIT_NEW_STORE_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (klass), array_reg, offset, sp [2]->dreg); } else { MonoInst *addr = mini_emit_ldelema_1_ins (cfg, klass, sp [0], sp [1], safety_checks, FALSE); if (!mini_debug_options.weak_memory_model && mini_class_is_reference (klass)) mini_emit_memory_barrier (cfg, MONO_MEMORY_BARRIER_REL); EMIT_NEW_STORE_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (klass), addr->dreg, 0, sp [2]->dreg); if (mini_class_is_reference (klass)) mini_emit_write_barrier (cfg, addr, sp [2]); } return ins; } } MonoInst* mini_emit_memory_barrier (MonoCompile *cfg, int kind) { MonoInst *ins = NULL; MONO_INST_NEW (cfg, ins, OP_MEMORY_BARRIER); MONO_ADD_INS (cfg->cbb, ins); ins->backend.memory_barrier_kind = kind; return ins; } /* * This entry point could be used later for arbitrary method * redirection. 
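 *
 * Today the only redirection is managed string allocation: a call to
 * String.FastAllocateString (length) becomes, roughly (sketch):
 *
 *   managed_alloc (string_vtable, length)
 *
 * when the GC exposes a managed allocator.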
*/ inline static MonoInst* mini_redirect_call (MonoCompile *cfg, MonoMethod *method, MonoMethodSignature *signature, MonoInst **args, MonoInst *this_ins) { if (method->klass == mono_defaults.string_class) { /* managed string allocation support */ if (strcmp (method->name, "FastAllocateString") == 0) { MonoInst *iargs [2]; MonoVTable *vtable = mono_class_vtable_checked (method->klass, cfg->error); MonoMethod *managed_alloc = NULL; mono_error_assert_ok (cfg->error); /* Should not fail since it is System.String */ #ifndef MONO_CROSS_COMPILE managed_alloc = mono_gc_get_managed_allocator (method->klass, FALSE, FALSE); #endif if (!managed_alloc) return NULL; EMIT_NEW_VTABLECONST (cfg, iargs [0], vtable); iargs [1] = args [0]; return mono_emit_method_call (cfg, managed_alloc, iargs, this_ins); } } return NULL; } static void mono_save_args (MonoCompile *cfg, MonoMethodSignature *sig, MonoInst **sp) { MonoInst *store, *temp; int i; for (i = 0; i < sig->param_count + sig->hasthis; ++i) { MonoType *argtype = (sig->hasthis && (i == 0)) ? type_from_stack_type (*sp) : sig->params [i - sig->hasthis]; /* * FIXME: We should use *args++ = sp [0], but that would mean the arg * would be different than the MonoInst's used to represent arguments, and * the ldelema implementation can't deal with that. * Solution: When ldelema is used on an inline argument, create a var for * it, emit ldelema on that var, and emit the saving code below in * inline_method () if needed. */ temp = mono_compile_create_var (cfg, argtype, OP_LOCAL); cfg->args [i] = temp; /* This uses cfg->args [i] which is set by the preceding line */ EMIT_NEW_ARGSTORE (cfg, store, i, *sp); store->cil_code = sp [0]->cil_code; sp++; } } #define MONO_INLINE_CALLED_LIMITED_METHODS 1 #define MONO_INLINE_CALLER_LIMITED_METHODS 1 #if (MONO_INLINE_CALLED_LIMITED_METHODS) static gboolean check_inline_called_method_name_limit (MonoMethod *called_method) { int strncmp_result; static const char *limit = NULL; if (limit == NULL) { const char *limit_string = g_getenv ("MONO_INLINE_CALLED_METHOD_NAME_LIMIT"); if (limit_string != NULL) limit = limit_string; else limit = ""; } if (limit [0] != '\0') { char *called_method_name = mono_method_full_name (called_method, TRUE); strncmp_result = strncmp (called_method_name, limit, strlen (limit)); g_free (called_method_name); //return (strncmp_result <= 0); return (strncmp_result == 0); } else { return TRUE; } } #endif #if (MONO_INLINE_CALLER_LIMITED_METHODS) static gboolean check_inline_caller_method_name_limit (MonoMethod *caller_method) { int strncmp_result; static const char *limit = NULL; if (limit == NULL) { const char *limit_string = g_getenv ("MONO_INLINE_CALLER_METHOD_NAME_LIMIT"); if (limit_string != NULL) { limit = limit_string; } else { limit = ""; } } if (limit [0] != '\0') { char *caller_method_name = mono_method_full_name (caller_method, TRUE); strncmp_result = strncmp (caller_method_name, limit, strlen (limit)); g_free (caller_method_name); //return (strncmp_result <= 0); return (strncmp_result == 0); } else { return TRUE; } } #endif void mini_emit_init_rvar (MonoCompile *cfg, int dreg, MonoType *rtype) { static double r8_0 = 0.0; static float r4_0 = 0.0; MonoInst *ins; int t; rtype = mini_get_underlying_type (rtype); t = rtype->type; if (m_type_is_byref (rtype)) { MONO_EMIT_NEW_PCONST (cfg, dreg, NULL); } else if (t >= MONO_TYPE_BOOLEAN && t <= MONO_TYPE_U4) { MONO_EMIT_NEW_ICONST (cfg, dreg, 0); } else if (t == MONO_TYPE_I8 || t == MONO_TYPE_U8) { MONO_EMIT_NEW_I8CONST (cfg, dreg, 0); } else if (cfg->r4fp && t
== MONO_TYPE_R4) { MONO_INST_NEW (cfg, ins, OP_R4CONST); ins->type = STACK_R4; ins->inst_p0 = (void*)&r4_0; ins->dreg = dreg; MONO_ADD_INS (cfg->cbb, ins); } else if (t == MONO_TYPE_R4 || t == MONO_TYPE_R8) { MONO_INST_NEW (cfg, ins, OP_R8CONST); ins->type = STACK_R8; ins->inst_p0 = (void*)&r8_0; ins->dreg = dreg; MONO_ADD_INS (cfg->cbb, ins); } else if ((t == MONO_TYPE_VALUETYPE) || (t == MONO_TYPE_TYPEDBYREF) || ((t == MONO_TYPE_GENERICINST) && mono_type_generic_inst_is_valuetype (rtype))) { MONO_EMIT_NEW_VZERO (cfg, dreg, mono_class_from_mono_type_internal (rtype)); } else if (((t == MONO_TYPE_VAR) || (t == MONO_TYPE_MVAR)) && mini_type_var_is_vt (rtype)) { MONO_EMIT_NEW_VZERO (cfg, dreg, mono_class_from_mono_type_internal (rtype)); } else { MONO_EMIT_NEW_PCONST (cfg, dreg, NULL); } } static void emit_dummy_init_rvar (MonoCompile *cfg, int dreg, MonoType *rtype) { int t; rtype = mini_get_underlying_type (rtype); t = rtype->type; if (m_type_is_byref (rtype)) { MONO_EMIT_NEW_DUMMY_INIT (cfg, dreg, OP_DUMMY_PCONST); } else if (t >= MONO_TYPE_BOOLEAN && t <= MONO_TYPE_U4) { MONO_EMIT_NEW_DUMMY_INIT (cfg, dreg, OP_DUMMY_ICONST); } else if (t == MONO_TYPE_I8 || t == MONO_TYPE_U8) { MONO_EMIT_NEW_DUMMY_INIT (cfg, dreg, OP_DUMMY_I8CONST); } else if (cfg->r4fp && t == MONO_TYPE_R4) { MONO_EMIT_NEW_DUMMY_INIT (cfg, dreg, OP_DUMMY_R4CONST); } else if (t == MONO_TYPE_R4 || t == MONO_TYPE_R8) { MONO_EMIT_NEW_DUMMY_INIT (cfg, dreg, OP_DUMMY_R8CONST); } else if ((t == MONO_TYPE_VALUETYPE) || (t == MONO_TYPE_TYPEDBYREF) || ((t == MONO_TYPE_GENERICINST) && mono_type_generic_inst_is_valuetype (rtype))) { MONO_EMIT_NEW_DUMMY_INIT (cfg, dreg, OP_DUMMY_VZERO); } else if (((t == MONO_TYPE_VAR) || (t == MONO_TYPE_MVAR)) && mini_type_var_is_vt (rtype)) { MONO_EMIT_NEW_DUMMY_INIT (cfg, dreg, OP_DUMMY_VZERO); } else { mini_emit_init_rvar (cfg, dreg, rtype); } } /* If INIT is FALSE, emit dummy initialization statements to keep the IR valid */ static void emit_init_local (MonoCompile *cfg, int local, MonoType *type, gboolean init) { MonoInst *var = cfg->locals [local]; if (COMPILE_SOFT_FLOAT (cfg)) { MonoInst *store; int reg = alloc_dreg (cfg, (MonoStackType)var->type); mini_emit_init_rvar (cfg, reg, type); EMIT_NEW_LOCSTORE (cfg, store, local, cfg->cbb->last_ins); } else { if (init) mini_emit_init_rvar (cfg, var->dreg, type); else emit_dummy_init_rvar (cfg, var->dreg, type); } } int mini_inline_method (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **sp, guchar *ip, guint real_offset, gboolean inline_always) { return inline_method (cfg, cmethod, fsig, sp, ip, real_offset, inline_always, NULL); } /* * inline_method: * * Return the cost of inlining CMETHOD, or zero if it should not be inlined. 
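 *
 * The caller of mono_method_to_ir accepts the result roughly as follows
 * (a sketch of the actual check further down):
 *
 *   inlined = (costs >= 0 && costs < 60) || inline_always || (costs >= 0 && aggressive_inlining);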
*/ static int inline_method (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **sp, guchar *ip, guint real_offset, gboolean inline_always, gboolean *is_empty) { ERROR_DECL (error); MonoInst *ins, *rvar = NULL; MonoMethodHeader *cheader; MonoBasicBlock *ebblock, *sbblock; int i, costs; MonoInst **prev_locals, **prev_args; MonoType **prev_arg_types; guint prev_real_offset; GHashTable *prev_cbb_hash; MonoBasicBlock **prev_cil_offset_to_bb; MonoBasicBlock *prev_cbb; const guchar *prev_ip; guchar *prev_cil_start; guint32 prev_cil_offset_to_bb_len; MonoMethod *prev_current_method; MonoGenericContext *prev_generic_context; gboolean ret_var_set, prev_ret_var_set, prev_disable_inline, virtual_ = FALSE; g_assert (cfg->exception_type == MONO_EXCEPTION_NONE); #if (MONO_INLINE_CALLED_LIMITED_METHODS) if ((! inline_always) && ! check_inline_called_method_name_limit (cmethod)) return 0; #endif #if (MONO_INLINE_CALLER_LIMITED_METHODS) if ((! inline_always) && ! check_inline_caller_method_name_limit (cfg->method)) return 0; #endif if (!fsig) fsig = mono_method_signature_internal (cmethod); if (cfg->verbose_level > 2) printf ("INLINE START %p %s -> %s\n", cmethod, mono_method_full_name (cfg->method, TRUE), mono_method_full_name (cmethod, TRUE)); if (!cmethod->inline_info) { cfg->stat_inlineable_methods++; cmethod->inline_info = 1; } if (is_empty) *is_empty = FALSE; /* allocate local variables */ cheader = mono_method_get_header_checked (cmethod, error); if (!cheader) { if (inline_always) { mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR); mono_error_move (cfg->error, error); } else { mono_error_cleanup (error); } return 0; } if (is_empty && cheader->code_size == 1 && cheader->code [0] == CEE_RET) *is_empty = TRUE; /* allocate space to store the return value */ if (!MONO_TYPE_IS_VOID (fsig->ret)) { rvar = mono_compile_create_var (cfg, fsig->ret, OP_LOCAL); } prev_locals = cfg->locals; cfg->locals = (MonoInst **)mono_mempool_alloc0 (cfg->mempool, cheader->num_locals * sizeof (MonoInst*)); for (i = 0; i < cheader->num_locals; ++i) cfg->locals [i] = mono_compile_create_var (cfg, cheader->locals [i], OP_LOCAL); /* allocate start and end blocks */ /* This is needed so if the inline is aborted, we can clean up */ NEW_BBLOCK (cfg, sbblock); sbblock->real_offset = real_offset; NEW_BBLOCK (cfg, ebblock); ebblock->block_num = cfg->num_bblocks++; ebblock->real_offset = real_offset; prev_args = cfg->args; prev_arg_types = cfg->arg_types; prev_ret_var_set = cfg->ret_var_set; prev_real_offset = cfg->real_offset; prev_cbb_hash = cfg->cbb_hash; prev_cil_offset_to_bb = cfg->cil_offset_to_bb; prev_cil_offset_to_bb_len = cfg->cil_offset_to_bb_len; prev_cil_start = cfg->cil_start; prev_ip = cfg->ip; prev_cbb = cfg->cbb; prev_current_method = cfg->current_method; prev_generic_context = cfg->generic_context; prev_disable_inline = cfg->disable_inline; cfg->ret_var_set = FALSE; cfg->inline_depth ++; if (ip && *ip == CEE_CALLVIRT && !(cmethod->flags & METHOD_ATTRIBUTE_STATIC)) virtual_ = TRUE; costs = mono_method_to_ir (cfg, cmethod, sbblock, ebblock, rvar, sp, real_offset, virtual_); ret_var_set = cfg->ret_var_set; cfg->real_offset = prev_real_offset; cfg->cbb_hash = prev_cbb_hash; cfg->cil_offset_to_bb = prev_cil_offset_to_bb; cfg->cil_offset_to_bb_len = prev_cil_offset_to_bb_len; cfg->cil_start = prev_cil_start; cfg->ip = prev_ip; cfg->locals = prev_locals; cfg->args = prev_args; cfg->arg_types = prev_arg_types; cfg->current_method = prev_current_method; cfg->generic_context = 
prev_generic_context; cfg->ret_var_set = prev_ret_var_set; cfg->disable_inline = prev_disable_inline; cfg->inline_depth --; if ((costs >= 0 && costs < 60) || inline_always || (costs >= 0 && (cmethod->iflags & METHOD_IMPL_ATTRIBUTE_AGGRESSIVE_INLINING))) { if (cfg->verbose_level > 2) printf ("INLINE END %s -> %s\n", mono_method_full_name (cfg->method, TRUE), mono_method_full_name (cmethod, TRUE)); mono_error_assert_ok (cfg->error); cfg->stat_inlined_methods++; /* always add some code to avoid block split failures */ MONO_INST_NEW (cfg, ins, OP_NOP); MONO_ADD_INS (prev_cbb, ins); prev_cbb->next_bb = sbblock; link_bblock (cfg, prev_cbb, sbblock); /* * Get rid of the begin and end bblocks if possible to aid local * optimizations. */ if (prev_cbb->out_count == 1) mono_merge_basic_blocks (cfg, prev_cbb, sbblock); if ((prev_cbb->out_count == 1) && (prev_cbb->out_bb [0]->in_count == 1) && (prev_cbb->out_bb [0] != ebblock)) mono_merge_basic_blocks (cfg, prev_cbb, prev_cbb->out_bb [0]); if ((ebblock->in_count == 1) && ebblock->in_bb [0]->out_count == 1) { MonoBasicBlock *prev = ebblock->in_bb [0]; if (prev->next_bb == ebblock) { mono_merge_basic_blocks (cfg, prev, ebblock); cfg->cbb = prev; if ((prev_cbb->out_count == 1) && (prev_cbb->out_bb [0]->in_count == 1) && (prev_cbb->out_bb [0] == prev)) { mono_merge_basic_blocks (cfg, prev_cbb, prev); cfg->cbb = prev_cbb; } } else { /* There could be a bblock after 'prev', and making 'prev' the current bb could cause problems */ cfg->cbb = ebblock; } } else { /* * It's possible that the rvar is set in some prev bblock, but not in others. * (#1835). */ if (rvar) { MonoBasicBlock *bb; for (i = 0; i < ebblock->in_count; ++i) { bb = ebblock->in_bb [i]; if (bb->last_ins && bb->last_ins->opcode == OP_NOT_REACHED) { cfg->cbb = bb; mini_emit_init_rvar (cfg, rvar->dreg, fsig->ret); } } } cfg->cbb = ebblock; } if (rvar) { /* * If the inlined method contains only a throw, then the ret var is not * set, so set it to a dummy value. */ if (!ret_var_set) mini_emit_init_rvar (cfg, rvar->dreg, fsig->ret); EMIT_NEW_TEMPLOAD (cfg, ins, rvar->inst_c0); *sp++ = ins; } cfg->headers_to_free = g_slist_prepend_mempool (cfg->mempool, cfg->headers_to_free, cheader); return costs + 1; } else { if (cfg->verbose_level > 2) { const char *msg = mono_error_get_message (cfg->error); printf ("INLINE ABORTED %s (cost %d) %s\n", mono_method_full_name (cmethod, TRUE), costs, msg ? msg : ""); } cfg->exception_type = MONO_EXCEPTION_NONE; clear_cfg_error (cfg); /* This gets rid of the newly added bblocks */ cfg->cbb = prev_cbb; } cfg->headers_to_free = g_slist_prepend_mempool (cfg->mempool, cfg->headers_to_free, cheader); return 0; } /* * Some of these comments may well be out-of-date. * Design decisions: we do a single pass over the IL code (and we do bblock * splitting/merging in the few cases when it's required: a back jump to an IL * address that was not already seen as a bblock starting point). * Code is validated as we go (full verification is still better left to metadata/verify.c). * Complex operations are decomposed into simpler ones right away. We need to let the * arch-specific code peek and poke inside this process somehow (except when the * optimizations can take advantage of the full semantic info of coarse opcodes). * All the opcodes of the form opcode.s are 'normalized' to opcode. * MonoInst->opcode initially is the IL opcode or some simplification of that * (OP_LOAD, OP_STORE). The arch-specific code may rearrange it to an arch-specific * opcode with value bigger than OP_LAST. 
* At this point the IR can be handed over to an interpreter, a dumb code generator * or to the optimizing code generator that will translate it to SSA form. * * Profiling directed optimizations. * We may compile by default with few or no optimizations and instrument the code * or the user may indicate what methods to optimize the most either in a config file * or through repeated runs where the compiler applies offline the optimizations to * each method and then decides if it was worth it. */ #define CHECK_TYPE(ins) if (!(ins)->type) UNVERIFIED #define CHECK_STACK(num) if ((sp - stack_start) < (num)) UNVERIFIED #define CHECK_STACK_OVF() if (((sp - stack_start) + 1) > header->max_stack) UNVERIFIED #define CHECK_ARG(num) if ((unsigned)(num) >= (unsigned)num_args) UNVERIFIED #define CHECK_LOCAL(num) if ((unsigned)(num) >= (unsigned)header->num_locals) UNVERIFIED #define CHECK_OPSIZE(size) if ((size) < 1 || ip + (size) > end) UNVERIFIED #define CHECK_UNVERIFIABLE(cfg) if (cfg->unverifiable) UNVERIFIED #define CHECK_TYPELOAD(klass) if (!(klass) || mono_class_has_failure (klass)) TYPE_LOAD_ERROR ((klass)) /* offset from br.s -> br like opcodes */ #define BIG_BRANCH_OFFSET 13 static gboolean ip_in_bb (MonoCompile *cfg, MonoBasicBlock *bb, const guint8* ip) { MonoBasicBlock *b = cfg->cil_offset_to_bb [ip - cfg->cil_start]; return b == NULL || b == bb; } static int get_basic_blocks (MonoCompile *cfg, MonoMethodHeader* header, guint real_offset, guchar *start, guchar *end, guchar **pos) { guchar *ip = start; guchar *target; int i; guint cli_addr; MonoBasicBlock *bblock; const MonoOpcode *opcode; while (ip < end) { cli_addr = ip - start; i = mono_opcode_value ((const guint8 **)&ip, end); if (i < 0) UNVERIFIED; opcode = &mono_opcodes [i]; switch (opcode->argument) { case MonoInlineNone: ip++; break; case MonoInlineString: case MonoInlineType: case MonoInlineField: case MonoInlineMethod: case MonoInlineTok: case MonoInlineSig: case MonoShortInlineR: case MonoInlineI: ip += 5; break; case MonoInlineVar: ip += 3; break; case MonoShortInlineVar: case MonoShortInlineI: ip += 2; break; case MonoShortInlineBrTarget: target = start + cli_addr + 2 + (signed char)ip [1]; GET_BBLOCK (cfg, bblock, target); ip += 2; if (ip < end) GET_BBLOCK (cfg, bblock, ip); break; case MonoInlineBrTarget: target = start + cli_addr + 5 + (gint32)read32 (ip + 1); GET_BBLOCK (cfg, bblock, target); ip += 5; if (ip < end) GET_BBLOCK (cfg, bblock, ip); break; case MonoInlineSwitch: { guint32 n = read32 (ip + 1); guint32 j; ip += 5; cli_addr += 5 + 4 * n; target = start + cli_addr; GET_BBLOCK (cfg, bblock, target); for (j = 0; j < n; ++j) { target = start + cli_addr + (gint32)read32 (ip); GET_BBLOCK (cfg, bblock, target); ip += 4; } break; } case MonoInlineR: case MonoInlineI8: ip += 9; break; default: g_assert_not_reached (); } if (i == CEE_THROW) { guchar *bb_start = ip - 1; /* Find the start of the bblock containing the throw */ bblock = NULL; while ((bb_start >= start) && !bblock) { bblock = cfg->cil_offset_to_bb [(bb_start) - start]; bb_start --; } if (bblock) bblock->out_of_line = 1; } } return 0; unverified: exception_exit: *pos = ip; return 1; } static MonoMethod * mini_get_method_allow_open (MonoMethod *m, guint32 token, MonoClass *klass, MonoGenericContext *context, MonoError *error) { MonoMethod *method; error_init (error); if (m->wrapper_type != MONO_WRAPPER_NONE) { method = (MonoMethod *)mono_method_get_wrapper_data (m, token); if (context) { method = mono_class_inflate_generic_method_checked (method, context, error); } } 
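/* Non-wrapper case: resolve the token against the enclosing image's metadata; the wrapper branch above instead pulls the already-resolved MonoMethod out of the wrapper data stored under the same token. */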
else { method = mono_get_method_checked (m_class_get_image (m->klass), token, klass, context, error); } return method; } static MonoMethod * mini_get_method (MonoCompile *cfg, MonoMethod *m, guint32 token, MonoClass *klass, MonoGenericContext *context) { ERROR_DECL (error); MonoMethod *method = mini_get_method_allow_open (m, token, klass, context, cfg ? cfg->error : error); if (method && cfg && !cfg->gshared && mono_class_is_open_constructed_type (m_class_get_byval_arg (method->klass))) { mono_error_set_bad_image (cfg->error, m_class_get_image (cfg->method->klass), "Method with open type while not compiling gshared"); method = NULL; } if (!method && !cfg) mono_error_cleanup (error); /* FIXME don't swallow the error */ return method; } static MonoMethodSignature* mini_get_signature (MonoMethod *method, guint32 token, MonoGenericContext *context, MonoError *error) { MonoMethodSignature *fsig; error_init (error); if (method->wrapper_type != MONO_WRAPPER_NONE) { fsig = (MonoMethodSignature *)mono_method_get_wrapper_data (method, token); } else { fsig = mono_metadata_parse_signature_checked (m_class_get_image (method->klass), token, error); return_val_if_nok (error, NULL); } if (context) { fsig = mono_inflate_generic_signature(fsig, context, error); } return fsig; } /* * Return the original method if a wrapper is specified. We can only access * the custom attributes from the original method. */ static MonoMethod* get_original_method (MonoMethod *method) { if (method->wrapper_type == MONO_WRAPPER_NONE) return method; /* native code (which is like Critical) can call any managed method XXX FIXME XXX to validate all usages */ if (method->wrapper_type == MONO_WRAPPER_NATIVE_TO_MANAGED) return NULL; /* in other cases we need to find the original method */ return mono_marshal_method_from_wrapper (method); } static guchar* il_read_op (guchar *ip, guchar *end, guchar first_byte, MonoOpcodeEnum desired_il_op) // If ip is desired_il_op, return the next ip, else NULL. { if (G_LIKELY (ip < end) && G_UNLIKELY (*ip == first_byte)) { MonoOpcodeEnum il_op = MonoOpcodeEnum_Invalid; // mono_opcode_value_and_size updates ip, but not in the expected way. const guchar *temp_ip = ip; const int size = mono_opcode_value_and_size (&temp_ip, end, &il_op); return (G_LIKELY (size > 0) && G_UNLIKELY (il_op == desired_il_op)) ? (ip + size) : NULL; } return NULL; } static guchar* il_read_op_and_token (guchar *ip, guchar *end, guchar first_byte, MonoOpcodeEnum desired_il_op, guint32 *token) { ip = il_read_op (ip, end, first_byte, desired_il_op); if (ip) *token = read32 (ip - 4); // could be +1 or +2 from start return ip; } static guchar* il_read_branch_and_target (guchar *ip, guchar *end, guchar first_byte, MonoOpcodeEnum desired_il_op, int size, guchar **target) { ip = il_read_op (ip, end, first_byte, desired_il_op); if (ip) { gint32 delta = 0; switch (size) { case 1: delta = (signed char)ip [-1]; break; case 4: delta = (gint32)read32 (ip - 4); break; } // FIXME verify it is within the function and start of an instruction. 
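/* Illustrative example (hypothetical offsets, not taken from any real method): for a two-byte brtrue.s at IL offset 0x10, ip already points one past the instruction here (offset 0x12) and delta is the signed operand byte, so an operand of 0x05 yields *target == start of the IL body + 0x17. The four-byte forms work the same way with a 32-bit delta. */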
*target = ip + delta; return ip; } return NULL; } #define il_read_brtrue(ip, end, target) (il_read_branch_and_target (ip, end, CEE_BRTRUE, MONO_CEE_BRTRUE, 4, target)) #define il_read_brtrue_s(ip, end, target) (il_read_branch_and_target (ip, end, CEE_BRTRUE_S, MONO_CEE_BRTRUE_S, 1, target)) #define il_read_brfalse(ip, end, target) (il_read_branch_and_target (ip, end, CEE_BRFALSE, MONO_CEE_BRFALSE, 4, target)) #define il_read_brfalse_s(ip, end, target) (il_read_branch_and_target (ip, end, CEE_BRFALSE_S, MONO_CEE_BRFALSE_S, 1, target)) #define il_read_dup(ip, end) (il_read_op (ip, end, CEE_DUP, MONO_CEE_DUP)) #define il_read_newobj(ip, end, token) (il_read_op_and_token (ip, end, CEE_NEW_OBJ, MONO_CEE_NEWOBJ, token)) #define il_read_ldtoken(ip, end, token) (il_read_op_and_token (ip, end, CEE_LDTOKEN, MONO_CEE_LDTOKEN, token)) #define il_read_call(ip, end, token) (il_read_op_and_token (ip, end, CEE_CALL, MONO_CEE_CALL, token)) #define il_read_callvirt(ip, end, token) (il_read_op_and_token (ip, end, CEE_CALLVIRT, MONO_CEE_CALLVIRT, token)) #define il_read_initobj(ip, end, token) (il_read_op_and_token (ip, end, CEE_PREFIX1, MONO_CEE_INITOBJ, token)) #define il_read_constrained(ip, end, token) (il_read_op_and_token (ip, end, CEE_PREFIX1, MONO_CEE_CONSTRAINED_, token)) #define il_read_unbox_any(ip, end, token) (il_read_op_and_token (ip, end, CEE_UNBOX_ANY, MONO_CEE_UNBOX_ANY, token)) /* * Check that the IL instructions at ip are the array initialization * sequence and return the pointer to the data and the size. */ static const char* initialize_array_data (MonoCompile *cfg, MonoMethod *method, gboolean aot, guchar *ip, guchar *end, MonoClass *klass, guint32 len, int *out_size, guint32 *out_field_token, MonoOpcodeEnum *il_op, guchar **next_ip) { /* * newarr[System.Int32] * dup * ldtoken field valuetype ... * call void class [mscorlib]System.Runtime.CompilerServices.RuntimeHelpers::InitializeArray(class [mscorlib]System.Array, valuetype [mscorlib]System.RuntimeFieldHandle) */ guint32 token; guint32 field_token; if ((ip = il_read_dup (ip, end)) && ip_in_bb (cfg, cfg->cbb, ip) && (ip = il_read_ldtoken (ip, end, &field_token)) && IS_FIELD_DEF (field_token) && ip_in_bb (cfg, cfg->cbb, ip) && (ip = il_read_call (ip, end, &token))) { ERROR_DECL (error); guint32 rva; const char *data_ptr; int size = 0; MonoMethod *cmethod; MonoClass *dummy_class; MonoClassField *field = mono_field_from_token_checked (m_class_get_image (method->klass), field_token, &dummy_class, NULL, error); int dummy_align; if (!field) { mono_error_cleanup (error); /* FIXME don't swallow the error */ return NULL; } *out_field_token = field_token; cmethod = mini_get_method (NULL, method, token, NULL, NULL); if (!cmethod) return NULL; if (strcmp (cmethod->name, "InitializeArray") || strcmp (m_class_get_name (cmethod->klass), "RuntimeHelpers") || m_class_get_image (cmethod->klass) != mono_defaults.corlib) return NULL; switch (mini_get_underlying_type (m_class_get_byval_arg (klass))->type) { case MONO_TYPE_I1: case MONO_TYPE_U1: size = 1; break; /* we need to swap on big endian, so punt. Should we handle R4 and R8 as well? 
*/ #if TARGET_BYTE_ORDER == G_LITTLE_ENDIAN case MONO_TYPE_I2: case MONO_TYPE_U2: size = 2; break; case MONO_TYPE_I4: case MONO_TYPE_U4: case MONO_TYPE_R4: size = 4; break; case MONO_TYPE_R8: case MONO_TYPE_I8: case MONO_TYPE_U8: size = 8; break; #endif default: return NULL; } size *= len; if (size > mono_type_size (field->type, &dummy_align)) return NULL; *out_size = size; /*g_print ("optimized in %s: size: %d, numelems: %d\n", method->name, size, newarr->inst_newa_len->inst_c0);*/ MonoImage *method_klass_image = m_class_get_image (method->klass); if (!image_is_dynamic (method_klass_image)) { guint32 field_index = mono_metadata_token_index (field_token); mono_metadata_field_info (method_klass_image, field_index - 1, NULL, &rva, NULL); data_ptr = mono_image_rva_map (method_klass_image, rva); /*g_print ("field: 0x%08x, rva: %d, rva_ptr: %p\n", read32 (ip + 2), rva, data_ptr);*/ /* for aot code we do the lookup on load */ if (aot && data_ptr) data_ptr = (const char *)GUINT_TO_POINTER (rva); } else { /*FIXME is it possible to AOT a SRE assembly not meant to be saved? */ g_assert (!aot); data_ptr = mono_field_get_data (field); } if (!data_ptr) return NULL; *il_op = MONO_CEE_CALL; *next_ip = ip; return data_ptr; } return NULL; } static void set_exception_type_from_invalid_il (MonoCompile *cfg, MonoMethod *method, guchar *ip) { ERROR_DECL (error); char *method_fname = mono_method_full_name (method, TRUE); char *method_code; MonoMethodHeader *header = mono_method_get_header_checked (method, error); if (!header) { method_code = g_strdup_printf ("could not parse method body due to %s", mono_error_get_message (error)); mono_error_cleanup (error); } else if (header->code_size == 0) method_code = g_strdup ("method body is empty."); else method_code = mono_disasm_code_one (NULL, method, ip, NULL); mono_cfg_set_exception_invalid_program (cfg, g_strdup_printf ("Invalid IL code in %s: %s\n", method_fname, method_code)); g_free (method_fname); g_free (method_code); cfg->headers_to_free = g_slist_prepend_mempool (cfg->mempool, cfg->headers_to_free, header); } guint32 mono_type_to_stloc_coerce (MonoType *type) { if (m_type_is_byref (type)) return 0; type = mini_get_underlying_type (type); handle_enum: switch (type->type) { case MONO_TYPE_I1: return OP_ICONV_TO_I1; case MONO_TYPE_U1: return OP_ICONV_TO_U1; case MONO_TYPE_I2: return OP_ICONV_TO_I2; case MONO_TYPE_U2: return OP_ICONV_TO_U2; case MONO_TYPE_I4: case MONO_TYPE_U4: case MONO_TYPE_I: case MONO_TYPE_U: case MONO_TYPE_PTR: case MONO_TYPE_FNPTR: case MONO_TYPE_CLASS: case MONO_TYPE_STRING: case MONO_TYPE_OBJECT: case MONO_TYPE_SZARRAY: case MONO_TYPE_ARRAY: case MONO_TYPE_I8: case MONO_TYPE_U8: case MONO_TYPE_R4: case MONO_TYPE_R8: case MONO_TYPE_TYPEDBYREF: case MONO_TYPE_GENERICINST: return 0; case MONO_TYPE_VALUETYPE: if (m_class_is_enumtype (type->data.klass)) { type = mono_class_enum_basetype_internal (type->data.klass); goto handle_enum; } return 0; case MONO_TYPE_VAR: case MONO_TYPE_MVAR: //TODO I believe we don't need to handle gsharedvt as there won't be match and, for example, u1 is not covariant to u32 return 0; default: g_error ("unknown type 0x%02x in mono_type_to_stloc_coerce", type->type); } return -1; } static void emit_stloc_ir (MonoCompile *cfg, MonoInst **sp, MonoMethodHeader *header, int n) { MonoInst *ins; guint32 coerce_op = mono_type_to_stloc_coerce (header->locals [n]); if (coerce_op) { if (cfg->cbb->last_ins == sp [0] && sp [0]->opcode == coerce_op) { if (cfg->verbose_level > 2) printf ("Found existing coercing is enough for 
stloc\n"); } else { MONO_INST_NEW (cfg, ins, coerce_op); ins->dreg = alloc_ireg (cfg); ins->sreg1 = sp [0]->dreg; ins->type = STACK_I4; ins->klass = mono_class_from_mono_type_internal (header->locals [n]); MONO_ADD_INS (cfg->cbb, ins); *sp = mono_decompose_opcode (cfg, ins); } } guint32 opcode = mono_type_to_regmove (cfg, header->locals [n]); if (!cfg->deopt && (opcode == OP_MOVE) && cfg->cbb->last_ins == sp [0] && ((sp [0]->opcode == OP_ICONST) || (sp [0]->opcode == OP_I8CONST))) { /* Optimize reg-reg moves away */ /* * Can't optimize other opcodes, since sp[0] might point to * the last ins of a decomposed opcode. */ sp [0]->dreg = (cfg)->locals [n]->dreg; } else { EMIT_NEW_LOCSTORE (cfg, ins, n, *sp); } } static void emit_starg_ir (MonoCompile *cfg, MonoInst **sp, int n) { MonoInst *ins; guint32 coerce_op = mono_type_to_stloc_coerce (cfg->arg_types [n]); if (coerce_op) { if (cfg->cbb->last_ins == sp [0] && sp [0]->opcode == coerce_op) { if (cfg->verbose_level > 2) printf ("Found existing coercing is enough for starg\n"); } else { MONO_INST_NEW (cfg, ins, coerce_op); ins->dreg = alloc_ireg (cfg); ins->sreg1 = sp [0]->dreg; ins->type = STACK_I4; ins->klass = mono_class_from_mono_type_internal (cfg->arg_types [n]); MONO_ADD_INS (cfg->cbb, ins); *sp = mono_decompose_opcode (cfg, ins); } } EMIT_NEW_ARGSTORE (cfg, ins, n, *sp); } /* * ldloca inhibits many optimizations so try to get rid of it in common * cases. */ static guchar * emit_optimized_ldloca_ir (MonoCompile *cfg, guchar *ip, guchar *end, int local) { guint32 token; MonoClass *klass; MonoType *type; guchar *start = ip; if ((ip = il_read_initobj (ip, end, &token)) && ip_in_bb (cfg, cfg->cbb, start + 1)) { /* From the INITOBJ case */ klass = mini_get_class (cfg->current_method, token, cfg->generic_context); CHECK_TYPELOAD (klass); type = mini_get_underlying_type (m_class_get_byval_arg (klass)); emit_init_local (cfg, local, type, TRUE); return ip; } exception_exit: return NULL; } static MonoInst* handle_call_res_devirt (MonoCompile *cfg, MonoMethod *cmethod, MonoInst *call_res) { /* * Devirt EqualityComparer.Default.Equals () calls for some types. * The corefx code expects these calls to be devirtualized. * This depends on the implementation of EqualityComparer.Default, which is * in mcs/class/referencesource/mscorlib/system/collections/generic/equalitycomparer.cs */ if (m_class_get_image (cmethod->klass) == mono_defaults.corlib && !strcmp (m_class_get_name (cmethod->klass), "EqualityComparer`1") && !strcmp (cmethod->name, "get_Default")) { MonoType *param_type = mono_class_get_generic_class (cmethod->klass)->context.class_inst->type_argv [0]; MonoClass *inst; MonoGenericContext ctx; ERROR_DECL (error); memset (&ctx, 0, sizeof (ctx)); MonoType *args [ ] = { param_type }; ctx.class_inst = mono_metadata_get_generic_inst (1, args); inst = mono_class_inflate_generic_class_checked (mono_class_get_iequatable_class (), &ctx, error); mono_error_assert_ok (error); /* EqualityComparer<T>.Default returns specific types depending on T */ // FIXME: Add more /* 1. 
Implements IEquatable<T> */ /* * Can't use this for string/byte as it might use a different comparer: * * // Specialize type byte for performance reasons * if (t == typeof(byte)) { * return (EqualityComparer<T>)(object)(new ByteEqualityComparer()); * } * #if MOBILE * // Breaks .net serialization compatibility * if (t == typeof (string)) * return (EqualityComparer<T>)(object)new InternalStringComparer (); * #endif */ if (mono_class_is_assignable_from_internal (inst, mono_class_from_mono_type_internal (param_type)) && param_type->type != MONO_TYPE_U1 && param_type->type != MONO_TYPE_STRING) { MonoInst *typed_objref; MonoClass *gcomparer_inst; memset (&ctx, 0, sizeof (ctx)); args [0] = param_type; ctx.class_inst = mono_metadata_get_generic_inst (1, args); MonoClass *gcomparer = mono_class_get_geqcomparer_class (); g_assert (gcomparer); gcomparer_inst = mono_class_inflate_generic_class_checked (gcomparer, &ctx, error); if (is_ok (error)) { MONO_INST_NEW (cfg, typed_objref, OP_TYPED_OBJREF); typed_objref->type = STACK_OBJ; typed_objref->dreg = alloc_ireg_ref (cfg); typed_objref->sreg1 = call_res->dreg; typed_objref->klass = gcomparer_inst; MONO_ADD_INS (cfg->cbb, typed_objref); call_res = typed_objref; /* Force decompose */ cfg->flags |= MONO_CFG_NEEDS_DECOMPOSE; cfg->cbb->needs_decompose = TRUE; } } } return call_res; } static gboolean is_exception_class (MonoClass *klass) { if (G_LIKELY (m_class_get_supertypes (klass))) return mono_class_has_parent_fast (klass, mono_defaults.exception_class); while (klass) { if (klass == mono_defaults.exception_class) return TRUE; klass = m_class_get_parent (klass); } return FALSE; } /* * is_jit_optimizer_disabled: * * Determine whether M's assembly has a DebuggableAttribute with the * IsJITOptimizerDisabled flag set. */ static gboolean is_jit_optimizer_disabled (MonoMethod *m) { MonoAssembly *ass = m_class_get_image (m->klass)->assembly; g_assert (ass); if (ass->jit_optimizer_disabled_inited) return ass->jit_optimizer_disabled; return mono_assembly_is_jit_optimizer_disabled (ass); } gboolean mono_is_supported_tailcall_helper (gboolean value, const char *svalue) { if (!value) mono_tailcall_print ("%s %s\n", __func__, svalue); return value; } static gboolean mono_is_not_supported_tailcall_helper (gboolean value, const char *svalue, MonoMethod *method, MonoMethod *cmethod) { // Return value, printing if it inhibits tailcall. if (value && mono_tailcall_print_enabled ()) { const char *lparen = strchr (svalue, ' ') ? "(" : ""; const char *rparen = *lparen ? ")" : ""; mono_tailcall_print ("%s %s -> %s %s%s%s:%d\n", __func__, method->name, cmethod->name, lparen, svalue, rparen, value); } return value; } #define IS_NOT_SUPPORTED_TAILCALL(x) (mono_is_not_supported_tailcall_helper((x), #x, method, cmethod)) static gboolean is_supported_tailcall (MonoCompile *cfg, const guint8 *ip, MonoMethod *method, MonoMethod *cmethod, MonoMethodSignature *fsig, gboolean virtual_, gboolean extra_arg, gboolean *ptailcall_calli) { // Some checks apply to "regular", some to "calli", some to both. // To ease burden on caller, always compute regular and calli. 
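// Sketch of the expected call shape (caller-side variable names hypothetical): gboolean tc_calli; gboolean tc = is_supported_tailcall (cfg, ip, method, cmethod, fsig, virtual_, extra_arg, &tc_calli); the caller then consults tc when emitting a regular call/callvirt and tc_calli when emitting a calli.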
gboolean tailcall = TRUE; gboolean tailcall_calli = TRUE; if (IS_NOT_SUPPORTED_TAILCALL (virtual_ && !cfg->backend->have_op_tailcall_membase)) tailcall = FALSE; if (IS_NOT_SUPPORTED_TAILCALL (!cfg->backend->have_op_tailcall_reg)) tailcall_calli = FALSE; if (!tailcall && !tailcall_calli) goto exit; // FIXME in calli, there is no type for the this parameter, // so we assume it might be valuetype; in future we should issue a range // check, so rule out pointing to frame (for other reference parameters also) if ( IS_NOT_SUPPORTED_TAILCALL (cmethod && fsig->hasthis && m_class_is_valuetype (cmethod->klass)) // This might point to the current method's stack. Emit range check? || IS_NOT_SUPPORTED_TAILCALL (cmethod && (cmethod->flags & METHOD_ATTRIBUTE_PINVOKE_IMPL)) || IS_NOT_SUPPORTED_TAILCALL (fsig->pinvoke) // i.e. if !cmethod (calli) || IS_NOT_SUPPORTED_TAILCALL (cfg->method->save_lmf) || IS_NOT_SUPPORTED_TAILCALL (!cmethod && fsig->hasthis) // FIXME could be valuetype to current frame; range check || IS_NOT_SUPPORTED_TAILCALL (cmethod && cmethod->wrapper_type && cmethod->wrapper_type != MONO_WRAPPER_DYNAMIC_METHOD) // http://www.mono-project.com/docs/advanced/runtime/docs/generic-sharing/ // // 1. Non-generic non-static methods of reference types have access to the // RGCTX via the "this" argument (this->vtable->rgctx). // 2. (a) non-generic static methods of reference types and (b) non-generic methods // of value types need to be passed a pointer to the caller's class's VTable in the MONO_ARCH_RGCTX_REG register. // 3. Generic methods need to be passed a pointer to the MRGCTX in the MONO_ARCH_RGCTX_REG register // // That is what vtable_arg is here (always?). // // Passing vtable_arg uses (requires?) a volatile non-parameter register, // such as AMD64 rax, r10, r11, or the return register on many architectures. // ARM32 does not always clearly have such a register. ARM32's return register // is a parameter register. // iPhone could use r9 except on old systems. iPhone/ARM32 is not particularly // important. Linux/arm32 is less clear. // ARM32's scratch r12 might work but only with much collateral change. // // Imagine F1 calls F2, and F2 tailcalls F3. // F2 and F3 are managed. F1 is native. // Without a tailcall, F2 can save and restore everything needed for F1. // However if the extra parameter were in a non-volatile, such as ARM32 V5/R8, // F3 cannot easily restore it for F1 in the current scheme, where the extra // parameter is not merely an extra parameter but is passed "outside of the ABI". // // If all native to managed transitions are intercepted and wrapped (w/o tailcall), // then they can preserve this register and the rest of the managed callgraph can // treat it as volatile. // // Interface method dispatch has the same problem (imt_arg). || IS_NOT_SUPPORTED_TAILCALL (extra_arg && !cfg->backend->have_volatile_non_param_register) || IS_NOT_SUPPORTED_TAILCALL (cfg->gsharedvt) ) { tailcall_calli = FALSE; tailcall = FALSE; goto exit; } for (int i = 0; i < fsig->param_count; ++i) { if (IS_NOT_SUPPORTED_TAILCALL (m_type_is_byref (fsig->params [i]) || fsig->params [i]->type == MONO_TYPE_PTR || fsig->params [i]->type == MONO_TYPE_FNPTR)) { tailcall_calli = FALSE; tailcall = FALSE; // These can point to the current method's stack. Emit range check? goto exit; } } MonoMethodSignature *caller_signature; MonoMethodSignature *callee_signature; caller_signature = mono_method_signature_internal (method); callee_signature = cmethod ? 
mono_method_signature_internal (cmethod) : fsig; g_assert (caller_signature); g_assert (callee_signature); // Require an exact match on return type due to various conversions in emit_move_return_value that would be skipped. // The main troublesome conversions are double <=> float. // CoreCLR allows some conversions here, such as integer truncation. // As well I <=> I[48] and U <=> U[48] would be ok, for matching size. if (IS_NOT_SUPPORTED_TAILCALL (mini_get_underlying_type (caller_signature->ret)->type != mini_get_underlying_type (callee_signature->ret)->type) || IS_NOT_SUPPORTED_TAILCALL (!mono_arch_tailcall_supported (cfg, caller_signature, callee_signature, virtual_))) { tailcall_calli = FALSE; tailcall = FALSE; goto exit; } /* Debugging support */ #if 0 if (!mono_debug_count ()) { tailcall_calli = FALSE; tailcall = FALSE; goto exit; } #endif // See check_sp in mini_emit_calli_full. if (tailcall_calli && IS_NOT_SUPPORTED_TAILCALL (mini_should_check_stack_pointer (cfg))) tailcall_calli = FALSE; exit: mono_tailcall_print ("tail.%s %s -> %s tailcall:%d tailcall_calli:%d gshared:%d extra_arg:%d virtual_:%d\n", mono_opcode_name (*ip), method->name, cmethod ? cmethod->name : "calli", tailcall, tailcall_calli, cfg->gshared, extra_arg, virtual_); *ptailcall_calli = tailcall_calli; return tailcall; } /* * is_addressable_valuetype_load * * Returns true if a previous load can be done without doing an extra copy, given the new instruction ip and the type of the object being loaded ldtype */ static gboolean is_addressable_valuetype_load (MonoCompile* cfg, guint8* ip, MonoType* ldtype) { /* Avoid loading a struct just to load one of its fields */ gboolean is_load_instruction = (*ip == CEE_LDFLD); gboolean is_in_previous_bb = ip_in_bb(cfg, cfg->cbb, ip); gboolean is_struct = MONO_TYPE_ISSTRUCT(ldtype); return is_load_instruction && is_in_previous_bb && is_struct; } /* * handle_ctor_call: * * Handle calls made to ctors from NEWOBJ opcodes. 
*/ static void handle_ctor_call (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, int context_used, MonoInst **sp, guint8 *ip, int *inline_costs) { MonoInst *vtable_arg = NULL, *callvirt_this_arg = NULL, *ins; if (cmethod && (ins = mini_emit_inst_for_ctor (cfg, cmethod, fsig, sp))) { g_assert (MONO_TYPE_IS_VOID (fsig->ret)); CHECK_CFG_EXCEPTION; return; } if (mono_class_generic_sharing_enabled (cmethod->klass) && mono_method_is_generic_sharable (cmethod, TRUE)) { MonoRgctxAccess access = mini_get_rgctx_access_for_method (cmethod); if (access == MONO_RGCTX_ACCESS_MRGCTX) { mono_class_vtable_checked (cmethod->klass, cfg->error); CHECK_CFG_ERROR; CHECK_TYPELOAD (cmethod->klass); vtable_arg = emit_get_rgctx_method (cfg, context_used, cmethod, MONO_RGCTX_INFO_METHOD_RGCTX); } else if (access == MONO_RGCTX_ACCESS_VTABLE) { vtable_arg = mini_emit_get_rgctx_klass (cfg, context_used, cmethod->klass, MONO_RGCTX_INFO_VTABLE); CHECK_CFG_ERROR; CHECK_TYPELOAD (cmethod->klass); } else { g_assert (access == MONO_RGCTX_ACCESS_THIS); } } /* Avoid virtual calls to ctors if possible */ if ((cfg->opt & MONO_OPT_INLINE) && cmethod && !context_used && !vtable_arg && mono_method_check_inlining (cfg, cmethod) && !mono_class_is_subclass_of_internal (cmethod->klass, mono_defaults.exception_class, FALSE)) { int costs; if ((costs = inline_method (cfg, cmethod, fsig, sp, ip, cfg->real_offset, FALSE, NULL))) { cfg->real_offset += 5; *inline_costs += costs - 5; } else { INLINE_FAILURE ("inline failure"); // FIXME-VT: Clean this up if (cfg->gsharedvt && mini_is_gsharedvt_signature (fsig)) GSHAREDVT_FAILURE(*ip); mini_emit_method_call_full (cfg, cmethod, fsig, FALSE, sp, callvirt_this_arg, NULL, NULL); } } else if (cfg->gsharedvt && mini_is_gsharedvt_signature (fsig)) { MonoInst *addr; addr = emit_get_rgctx_gsharedvt_call (cfg, context_used, fsig, cmethod, MONO_RGCTX_INFO_METHOD_GSHAREDVT_OUT_TRAMPOLINE); if (cfg->llvm_only) { // FIXME: Avoid initializing vtable_arg mini_emit_llvmonly_calli (cfg, fsig, sp, addr); } else { mini_emit_calli (cfg, fsig, sp, addr, NULL, vtable_arg); } } else if (context_used && ((!mono_method_is_generic_sharable_full (cmethod, TRUE, FALSE, FALSE) || !mono_class_generic_sharing_enabled (cmethod->klass)) || cfg->gsharedvt)) { MonoInst *cmethod_addr; /* Generic calls made out of gsharedvt methods cannot be patched, so use an indirect call */ if (cfg->llvm_only) { MonoInst *addr = emit_get_rgctx_method (cfg, context_used, cmethod, MONO_RGCTX_INFO_METHOD_FTNDESC); mini_emit_llvmonly_calli (cfg, fsig, sp, addr); } else { cmethod_addr = emit_get_rgctx_method (cfg, context_used, cmethod, MONO_RGCTX_INFO_GENERIC_METHOD_CODE); mini_emit_calli (cfg, fsig, sp, cmethod_addr, NULL, vtable_arg); } } else { INLINE_FAILURE ("ctor call"); ins = mini_emit_method_call_full (cfg, cmethod, fsig, FALSE, sp, callvirt_this_arg, NULL, vtable_arg); } exception_exit: mono_error_exit: return; } typedef struct { MonoMethod *method; gboolean inst_tailcall; } HandleCallData; /* * handle_constrained_call: * * Handle constrained calls. Return a MonoInst* representing the call or NULL. * May overwrite sp [0] and modify the ref_... parameters. 
*/ static MonoInst* handle_constrained_call (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoClass *constrained_class, MonoInst **sp, HandleCallData *cdata, MonoMethod **ref_cmethod, gboolean *ref_virtual, gboolean *ref_emit_widen) { MonoInst *ins, *addr; MonoMethod *method = cdata->method; gboolean constrained_partial_call = FALSE; gboolean constrained_is_generic_param = m_class_get_byval_arg (constrained_class)->type == MONO_TYPE_VAR || m_class_get_byval_arg (constrained_class)->type == MONO_TYPE_MVAR; MonoType *gshared_constraint = NULL; if (constrained_is_generic_param && cfg->gshared) { if (!mini_is_gsharedvt_klass (constrained_class)) { g_assert (!m_class_is_valuetype (cmethod->klass)); if (!mini_type_is_reference (m_class_get_byval_arg (constrained_class))) constrained_partial_call = TRUE; MonoType *t = m_class_get_byval_arg (constrained_class); MonoGenericParam *gparam = t->data.generic_param; gshared_constraint = gparam->gshared_constraint; } } if (mini_is_gsharedvt_klass (constrained_class)) { if ((cmethod->klass != mono_defaults.object_class) && m_class_is_valuetype (constrained_class) && m_class_is_valuetype (cmethod->klass)) { /* The 'Own method' case below */ } else if (m_class_get_image (cmethod->klass) != mono_defaults.corlib && !mono_class_is_interface (cmethod->klass) && !m_class_is_valuetype (cmethod->klass)) { /* 'The type parameter is instantiated as a reference type' case below. */ } else { ins = handle_constrained_gsharedvt_call (cfg, cmethod, fsig, sp, constrained_class, ref_emit_widen); CHECK_CFG_EXCEPTION; g_assert (ins); if (cdata->inst_tailcall) // FIXME mono_tailcall_print ("missed tailcall constrained_class %s -> %s\n", method->name, cmethod->name); return ins; } } if (m_method_is_static (cmethod)) { /* Call to an abstract static method, handled normally */ return NULL; } else if (constrained_partial_call) { gboolean need_box = TRUE; /* * The receiver is a valuetype, but the exact type is not known at compile time. This means the * called method is not known at compile time either. The called method could end up being * one of the methods on the parent classes (object/valuetype/enum), in which case we need * to box the receiver. * A simple solution would be to box always and make a normal virtual call, but that would * be bad performance wise. */ if (mono_class_is_interface (cmethod->klass) && mono_class_is_ginst (cmethod->klass) && (cmethod->flags & METHOD_ATTRIBUTE_ABSTRACT)) { /* * The parent classes implement no generic interfaces, so the called method will be a vtype method, so no boxing necessary. */ /* If the method is not abstract, it's a default interface method, and we need to box */ need_box = FALSE; } if (gshared_constraint && MONO_TYPE_IS_PRIMITIVE (gshared_constraint) && cmethod->klass == mono_defaults.object_class && !strcmp (cmethod->name, "GetHashCode")) { /* * The receiver is constrained to a primitive type or an enum with the same basetype. * Enum.GetHashCode () returns the hash code of the underlying type (see comments in Enum.cs), * so the constrained call can be replaced with a normal call to the basetype GetHashCode () * method. 
MonoClass *gshared_constraint_class = mono_class_from_mono_type_internal (gshared_constraint); cmethod = get_method_nofail (gshared_constraint_class, cmethod->name, 0, 0); g_assert (cmethod); *ref_cmethod = cmethod; *ref_virtual = FALSE; if (cfg->verbose_level) printf (" -> %s\n", mono_method_get_full_name (cmethod)); return NULL; } if (!(cmethod->flags & METHOD_ATTRIBUTE_VIRTUAL) && (cmethod->klass == mono_defaults.object_class || cmethod->klass == m_class_get_parent (mono_defaults.enum_class) || cmethod->klass == mono_defaults.enum_class)) { /* The called method is not virtual, i.e. Object:GetType (), and the receiver is a vtype, so it has to be boxed */ EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (constrained_class), sp [0]->dreg, 0); ins->klass = constrained_class; sp [0] = mini_emit_box (cfg, ins, constrained_class, mono_class_check_context_used (constrained_class)); CHECK_CFG_EXCEPTION; } else if (need_box) { MonoInst *box_type; MonoBasicBlock *is_ref_bb, *end_bb; MonoInst *nonbox_call, *addr; /* * Determine at runtime whether the called method is defined on object/valuetype/enum, and emit a boxing call * if needed. * FIXME: It is possible to inline the called method in a lot of cases, i.e. for T_INT, * the no-box case goes to a method in Int32, while the box case goes to a method in Enum. */ addr = emit_get_rgctx_virt_method (cfg, mono_class_check_context_used (constrained_class), constrained_class, cmethod, MONO_RGCTX_INFO_VIRT_METHOD_CODE); NEW_BBLOCK (cfg, is_ref_bb); NEW_BBLOCK (cfg, end_bb); box_type = emit_get_rgctx_virt_method (cfg, mono_class_check_context_used (constrained_class), constrained_class, cmethod, MONO_RGCTX_INFO_VIRT_METHOD_BOX_TYPE); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, box_type->dreg, MONO_GSHAREDVT_BOX_TYPE_REF); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_IBEQ, is_ref_bb); /* Non-ref case */ if (cfg->llvm_only) /* addr is an ftndesc in this case */ nonbox_call = mini_emit_llvmonly_calli (cfg, fsig, sp, addr); else nonbox_call = (MonoInst*)mini_emit_calli (cfg, fsig, sp, addr, NULL, NULL); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb); /* Ref case */ MONO_START_BB (cfg, is_ref_bb); EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (constrained_class), sp [0]->dreg, 0); ins->klass = constrained_class; sp [0] = mini_emit_box (cfg, ins, constrained_class, mono_class_check_context_used (constrained_class)); CHECK_CFG_EXCEPTION; if (cfg->llvm_only) ins = mini_emit_llvmonly_calli (cfg, fsig, sp, addr); else ins = (MonoInst*)mini_emit_calli (cfg, fsig, sp, addr, NULL, NULL); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb); MONO_START_BB (cfg, end_bb); cfg->cbb = end_bb; nonbox_call->dreg = ins->dreg; if (cdata->inst_tailcall) // FIXME mono_tailcall_print ("missed tailcall constrained_partial_need_box %s -> %s\n", method->name, cmethod->name); return ins; } else { g_assert (mono_class_is_interface (cmethod->klass)); addr = emit_get_rgctx_virt_method (cfg, mono_class_check_context_used (constrained_class), constrained_class, cmethod, MONO_RGCTX_INFO_VIRT_METHOD_CODE); if (cfg->llvm_only) ins = mini_emit_llvmonly_calli (cfg, fsig, sp, addr); else ins = (MonoInst*)mini_emit_calli (cfg, fsig, sp, addr, NULL, NULL); if (cdata->inst_tailcall) // FIXME mono_tailcall_print ("missed tailcall constrained_partial %s -> %s\n", method->name, cmethod->name); return ins; } } else if (!m_class_is_valuetype (constrained_class)) { int dreg = alloc_ireg_ref (cfg); /* * The type parameter is instantiated as a reference * type. 
We have a managed pointer on the stack, so * we need to dereference it here. */ EMIT_NEW_LOAD_MEMBASE (cfg, ins, OP_LOAD_MEMBASE, dreg, sp [0]->dreg, 0); ins->type = STACK_OBJ; sp [0] = ins; } else if (cmethod->klass == mono_defaults.object_class || cmethod->klass == m_class_get_parent (mono_defaults.enum_class) || cmethod->klass == mono_defaults.enum_class) { /* * The type parameter is instantiated as a valuetype, * but that type doesn't override the method we're * calling, so we need to box `this'. */ EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (constrained_class), sp [0]->dreg, 0); ins->klass = constrained_class; sp [0] = mini_emit_box (cfg, ins, constrained_class, mono_class_check_context_used (constrained_class)); CHECK_CFG_EXCEPTION; } else { if (cmethod->klass != constrained_class) { /* Enums/default interface methods */ EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (constrained_class), sp [0]->dreg, 0); ins->klass = constrained_class; sp [0] = mini_emit_box (cfg, ins, constrained_class, mono_class_check_context_used (constrained_class)); CHECK_CFG_EXCEPTION; } *ref_virtual = FALSE; } exception_exit: return NULL; } static void emit_setret (MonoCompile *cfg, MonoInst *val) { MonoType *ret_type = mini_get_underlying_type (mono_method_signature_internal (cfg->method)->ret); MonoInst *ins; if (mini_type_to_stind (cfg, ret_type) == CEE_STOBJ) { MonoInst *ret_addr; if (!cfg->vret_addr) { EMIT_NEW_VARSTORE (cfg, ins, cfg->ret, ret_type, val); } else { EMIT_NEW_RETLOADA (cfg, ret_addr); MonoClass *ret_class = mono_class_from_mono_type_internal (ret_type); if (MONO_CLASS_IS_SIMD (cfg, ret_class)) EMIT_NEW_STORE_MEMBASE (cfg, ins, OP_STOREX_MEMBASE, ret_addr->dreg, 0, val->dreg); else EMIT_NEW_STORE_MEMBASE (cfg, ins, OP_STOREV_MEMBASE, ret_addr->dreg, 0, val->dreg); ins->klass = ret_class; } } else { #ifdef MONO_ARCH_SOFT_FLOAT_FALLBACK if (COMPILE_SOFT_FLOAT (cfg) && !m_type_is_byref (ret_type) && ret_type->type == MONO_TYPE_R4) { MonoInst *conv; MonoInst *iargs [ ] = { val }; conv = mono_emit_jit_icall (cfg, mono_fload_r4_arg, iargs); mono_arch_emit_setret (cfg, cfg->method, conv); } else { mono_arch_emit_setret (cfg, cfg->method, val); } #else mono_arch_emit_setret (cfg, cfg->method, val); #endif } } /* * Emit a call to enter the interpreter for methods with filter clauses. */ static void emit_llvmonly_interp_entry (MonoCompile *cfg, MonoMethodHeader *header) { MonoInst *ins; MonoInst **iargs; MonoMethodSignature *sig = mono_method_signature_internal (cfg->method); MonoInst *ftndesc; cfg->interp_in_signatures = g_slist_prepend_mempool (cfg->mempool, cfg->interp_in_signatures, sig); /* * Emit a call to the interp entry function. We emit it here instead of the llvm backend since * calling conventions etc. are easier to handle here. The LLVM backend will only emit the * entry/exit bblocks. */ g_assert (cfg->cbb == cfg->bb_init); if (cfg->gsharedvt && mini_is_gsharedvt_variable_signature (sig)) { /* * Would have to generate a gsharedvt out wrapper which calls the interp entry wrapper, but * the gsharedvt out wrapper might not exist if the caller is also a gsharedvt method since * the concrete signature of the call might not exist in the program. * So transition directly to the interpreter without the wrappers. 
*/ MonoInst *args_ins; MONO_INST_NEW (cfg, ins, OP_LOCALLOC_IMM); ins->dreg = alloc_preg (cfg); ins->inst_imm = sig->param_count * sizeof (target_mgreg_t); MONO_ADD_INS (cfg->cbb, ins); args_ins = ins; for (int i = 0; i < sig->hasthis + sig->param_count; ++i) { MonoInst *arg_addr_ins; EMIT_NEW_VARLOADA ((cfg), arg_addr_ins, cfg->args [i], cfg->arg_types [i]); EMIT_NEW_STORE_MEMBASE (cfg, ins, OP_STORE_MEMBASE_REG, args_ins->dreg, i * sizeof (target_mgreg_t), arg_addr_ins->dreg); } MonoInst *ret_var = NULL; MonoInst *ret_arg_ins; if (!MONO_TYPE_IS_VOID (sig->ret)) { ret_var = mono_compile_create_var (cfg, sig->ret, OP_LOCAL); EMIT_NEW_VARLOADA (cfg, ret_arg_ins, ret_var, sig->ret); } else { EMIT_NEW_PCONST (cfg, ret_arg_ins, NULL); } iargs = g_newa (MonoInst*, 3); iargs [0] = emit_get_rgctx_method (cfg, -1, cfg->method, MONO_RGCTX_INFO_INTERP_METHOD); iargs [1] = ret_arg_ins; iargs [2] = args_ins; mono_emit_jit_icall_id (cfg, MONO_JIT_ICALL_mini_llvmonly_interp_entry_gsharedvt, iargs); if (!MONO_TYPE_IS_VOID (sig->ret)) EMIT_NEW_VARLOAD (cfg, ins, ret_var, sig->ret); else ins = NULL; } else { /* Obtain the interp entry function */ ftndesc = emit_get_rgctx_method (cfg, -1, cfg->method, MONO_RGCTX_INFO_LLVMONLY_INTERP_ENTRY); /* Call it */ iargs = g_newa (MonoInst*, sig->param_count + 1); for (int i = 0; i < sig->param_count + sig->hasthis; ++i) EMIT_NEW_ARGLOAD (cfg, iargs [i], i); ins = mini_emit_llvmonly_calli (cfg, sig, iargs, ftndesc); } /* Do a normal return */ if (cfg->ret) { emit_setret (cfg, ins); /* * Since only bb_entry/bb_exit is emitted if interp_entry_only is set, * it's possible that the return value becomes an OP_PHI node whose inputs * are not emitted. Make it volatile to prevent that. */ cfg->ret->flags |= MONO_INST_VOLATILE; } MONO_INST_NEW (cfg, ins, OP_BR); ins->inst_target_bb = cfg->bb_exit; MONO_ADD_INS (cfg->cbb, ins); link_bblock (cfg, cfg->cbb, cfg->bb_exit); } typedef union _MonoOpcodeParameter { gint32 i32; gint64 i64; float f; double d; guchar *branch_target; } MonoOpcodeParameter; typedef struct _MonoOpcodeInfo { guint constant : 4; // private gint pops : 3; // public -1 means variable gint pushes : 3; // public -1 means variable } MonoOpcodeInfo; static const MonoOpcodeInfo* mono_opcode_decode (guchar *ip, guint op_size, MonoOpcodeEnum il_op, MonoOpcodeParameter *parameter) { #define Push0 (0) #define Pop0 (0) #define Push1 (1) #define Pop1 (1) #define PushI (1) #define PopI (1) #define PushI8 (1) #define PopI8 (1) #define PushRef (1) #define PopRef (1) #define PushR4 (1) #define PopR4 (1) #define PushR8 (1) #define PopR8 (1) #define VarPush (-1) #define VarPop (-1) static const MonoOpcodeInfo mono_opcode_info [ ] = { #define OPDEF(name, str, pops, pushes, param, param_constant, a, b, c, flow) {param_constant + 1, pops, pushes }, #include "mono/cil/opcode.def" #undef OPDEF }; #undef Push0 #undef Pop0 #undef Push1 #undef Pop1 #undef PushI #undef PopI #undef PushI8 #undef PopI8 #undef PushRef #undef PopRef #undef PushR4 #undef PopR4 #undef PushR8 #undef PopR8 #undef VarPush #undef VarPop gint32 delta; guchar *next_ip = ip + op_size; const MonoOpcodeInfo *info = &mono_opcode_info [il_op]; switch (mono_opcodes [il_op].argument) { case MonoInlineNone: parameter->i32 = (int)info->constant - 1; break; case MonoInlineString: case MonoInlineType: case MonoInlineField: case MonoInlineMethod: case MonoInlineTok: case MonoInlineSig: case MonoShortInlineR: case MonoInlineI: parameter->i32 = read32 (next_ip - 4); // FIXME check token type? 
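// All of the 32-bit immediates and tokens handled above sit in the last four bytes of the instruction, which is why they are read back from next_ip (one past the instruction) rather than forward from ip.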
break; case MonoShortInlineI: parameter->i32 = (signed char)next_ip [-1]; break; case MonoInlineVar: parameter->i32 = read16 (next_ip - 2); break; case MonoShortInlineVar: parameter->i32 = next_ip [-1]; break; case MonoInlineR: case MonoInlineI8: parameter->i64 = read64 (next_ip - 8); break; case MonoShortInlineBrTarget: delta = (signed char)next_ip [-1]; goto branch_target; case MonoInlineBrTarget: delta = (gint32)read32 (next_ip - 4); branch_target: parameter->branch_target = delta + next_ip; break; case MonoInlineSwitch: // complicated break; default: g_error ("%s %d %d\n", __func__, il_op, mono_opcodes [il_op].argument); } return info; } /* * mono_method_to_ir: * * Translate the .net IL into linear IR. * * @start_bblock: if not NULL, the starting basic block, used during inlining. * @end_bblock: if not NULL, the ending basic block, used during inlining. * @return_var: if not NULL, the place where the return value is stored, used during inlining. * @inline_args: if not NULL, contains the arguments to the inline call * @inline_offset: if not zero, the real offset of the inline call site; zero when compiling a method directly. * @is_virtual_call: whether this method is being called as a result of a call to callvirt * * This method is used to turn ECMA IL into Mono's internal Linear IR * representation. It is used both for entire methods and for * inlining existing methods. In the former case, the @start_bblock, * @end_bblock, @return_var, @inline_args are all set to NULL, and the * inline_offset is set to zero. * * Returns: the inline cost, or -1 if there was an error processing this method. */ int mono_method_to_ir (MonoCompile *cfg, MonoMethod *method, MonoBasicBlock *start_bblock, MonoBasicBlock *end_bblock, MonoInst *return_var, MonoInst **inline_args, guint inline_offset, gboolean is_virtual_call) { ERROR_DECL (error); // Buffer to hold parameters to mono_new_array, instead of varargs. 
MonoInst *array_new_localalloc_ins = NULL; MonoInst *ins, **sp, **stack_start; MonoBasicBlock *tblock = NULL; MonoBasicBlock *init_localsbb = NULL, *init_localsbb2 = NULL; MonoSimpleBasicBlock *bb = NULL, *original_bb = NULL; MonoMethod *method_definition; MonoInst **arg_array; MonoMethodHeader *header; MonoImage *image; guint32 token, ins_flag; MonoClass *klass; MonoClass *constrained_class = NULL; gboolean save_last_error = FALSE; guchar *ip, *end, *target, *err_pos; MonoMethodSignature *sig; MonoGenericContext *generic_context = NULL; MonoGenericContainer *generic_container = NULL; MonoType **param_types; int i, n, start_new_bblock, dreg; int num_calls = 0, inline_costs = 0; guint num_args; GSList *class_inits = NULL; gboolean dont_verify, dont_verify_stloc, readonly = FALSE; int context_used; gboolean init_locals, seq_points, skip_dead_blocks; gboolean sym_seq_points = FALSE; MonoDebugMethodInfo *minfo; MonoBitSet *seq_point_locs = NULL; MonoBitSet *seq_point_set_locs = NULL; const char *ovf_exc = NULL; gboolean emitted_funccall_seq_point = FALSE; gboolean detached_before_ret = FALSE; gboolean ins_has_side_effect; if (!cfg->disable_inline) cfg->disable_inline = (method->iflags & METHOD_IMPL_ATTRIBUTE_NOOPTIMIZATION) || is_jit_optimizer_disabled (method); cfg->current_method = method; image = m_class_get_image (method->klass); /* serialization and xdomain stuff may need access to private fields and methods */ dont_verify = FALSE; dont_verify |= method->wrapper_type == MONO_WRAPPER_MANAGED_TO_NATIVE; /* bug #77896 */ dont_verify |= method->wrapper_type == MONO_WRAPPER_COMINTEROP; dont_verify |= method->wrapper_type == MONO_WRAPPER_COMINTEROP_INVOKE; /* still some type unsafety issues in marshal wrappers... (unknown is PtrToStructure) */ dont_verify_stloc = method->wrapper_type == MONO_WRAPPER_MANAGED_TO_NATIVE; dont_verify_stloc |= method->wrapper_type == MONO_WRAPPER_OTHER; dont_verify_stloc |= method->wrapper_type == MONO_WRAPPER_NATIVE_TO_MANAGED; dont_verify_stloc |= method->wrapper_type == MONO_WRAPPER_STELEMREF; header = mono_method_get_header_checked (method, cfg->error); if (!header) { mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR); goto exception_exit; } else { cfg->headers_to_free = g_slist_prepend_mempool (cfg->mempool, cfg->headers_to_free, header); } generic_container = mono_method_get_generic_container (method); sig = mono_method_signature_internal (method); num_args = sig->hasthis + sig->param_count; ip = (guchar*)header->code; cfg->cil_start = ip; end = ip + header->code_size; cfg->stat_cil_code_size += header->code_size; seq_points = cfg->gen_seq_points && cfg->method == method; if (method->wrapper_type == MONO_WRAPPER_NATIVE_TO_MANAGED) { /* We could hit a seq point before attaching to the JIT (#8338) */ seq_points = FALSE; } if (method->wrapper_type == MONO_WRAPPER_OTHER) { WrapperInfo *info = mono_marshal_get_wrapper_info (method); if (info->subtype == WRAPPER_SUBTYPE_INTERP_IN) { /* We could hit a seq point before attaching to the JIT (#8338) */ seq_points = FALSE; } } if (cfg->prof_coverage) { if (cfg->compile_aot) g_error ("Coverage profiling is not supported with AOT."); INLINE_FAILURE ("coverage profiling"); cfg->coverage_info = mono_profiler_coverage_alloc (cfg->method, header->code_size); } if ((cfg->gen_sdb_seq_points && cfg->method == method) || cfg->prof_coverage) { minfo = mono_debug_lookup_method (method); if (minfo) { MonoSymSeqPoint *sps; int i, n_il_offsets; mono_debug_get_seq_points (minfo, NULL, NULL, NULL, &sps, &n_il_offsets); 
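/* Two bitsets over the IL body are allocated below: seq_point_locs marks the IL offsets that are eligible for sequence points, and seq_point_set_locs (judging by its use elsewhere in this pass) records which of those offsets actually had a sequence point emitted. */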
seq_point_locs = mono_bitset_mem_new (mono_mempool_alloc0 (cfg->mempool, mono_bitset_alloc_size (header->code_size, 0)), header->code_size, 0); seq_point_set_locs = mono_bitset_mem_new (mono_mempool_alloc0 (cfg->mempool, mono_bitset_alloc_size (header->code_size, 0)), header->code_size, 0); sym_seq_points = TRUE; for (i = 0; i < n_il_offsets; ++i) { if (sps [i].il_offset < header->code_size) mono_bitset_set_fast (seq_point_locs, sps [i].il_offset); } g_free (sps); MonoDebugMethodAsyncInfo* asyncMethod = mono_debug_lookup_method_async_debug_info (method); if (asyncMethod) { for (i = 0; asyncMethod != NULL && i < asyncMethod->num_awaits; i++) { mono_bitset_set_fast (seq_point_locs, asyncMethod->resume_offsets[i]); mono_bitset_set_fast (seq_point_locs, asyncMethod->yield_offsets[i]); } mono_debug_free_method_async_debug_info (asyncMethod); } } else if (!method->wrapper_type && !method->dynamic && mono_debug_image_has_debug_info (m_class_get_image (method->klass))) { /* Methods without line number info like auto-generated property accessors */ seq_point_locs = mono_bitset_mem_new (mono_mempool_alloc0 (cfg->mempool, mono_bitset_alloc_size (header->code_size, 0)), header->code_size, 0); seq_point_set_locs = mono_bitset_mem_new (mono_mempool_alloc0 (cfg->mempool, mono_bitset_alloc_size (header->code_size, 0)), header->code_size, 0); sym_seq_points = TRUE; } } /* * Methods without init_locals set could cause asserts in various passes * (#497220). To work around this, we emit dummy initialization opcodes * (OP_DUMMY_ICONST etc.) which generate no code. These are only supported * on some platforms. */ if (cfg->opt & MONO_OPT_UNSAFE) init_locals = header->init_locals; else init_locals = TRUE; method_definition = method; while (method_definition->is_inflated) { MonoMethodInflated *imethod = (MonoMethodInflated *) method_definition; method_definition = imethod->declaring; } if (sig->is_inflated) generic_context = mono_method_get_context (method); else if (generic_container) generic_context = &generic_container->context; cfg->generic_context = generic_context; if (!cfg->gshared) g_assert (!sig->has_type_parameters); if (sig->generic_param_count && method->wrapper_type == MONO_WRAPPER_NONE) { g_assert (method->is_inflated); g_assert (mono_method_get_context (method)->method_inst); } if (method->is_inflated && mono_method_get_context (method)->method_inst) g_assert (sig->generic_param_count); if (cfg->method == method) { cfg->real_offset = 0; } else { cfg->real_offset = inline_offset; } cfg->cil_offset_to_bb = (MonoBasicBlock **)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoBasicBlock*) * header->code_size); cfg->cil_offset_to_bb_len = header->code_size; if (cfg->verbose_level > 2) printf ("method to IR %s\n", mono_method_full_name (method, TRUE)); param_types = (MonoType **)mono_mempool_alloc (cfg->mempool, sizeof (MonoType*) * num_args); if (sig->hasthis) param_types [0] = m_class_is_valuetype (method->klass) ? 
m_class_get_this_arg (method->klass) : m_class_get_byval_arg (method->klass); for (n = 0; n < sig->param_count; ++n) param_types [n + sig->hasthis] = sig->params [n]; cfg->arg_types = param_types; cfg->dont_inline = g_list_prepend (cfg->dont_inline, method); if (cfg->method == method) { /* ENTRY BLOCK */ NEW_BBLOCK (cfg, start_bblock); cfg->bb_entry = start_bblock; start_bblock->cil_code = NULL; start_bblock->cil_length = 0; /* EXIT BLOCK */ NEW_BBLOCK (cfg, end_bblock); cfg->bb_exit = end_bblock; end_bblock->cil_code = NULL; end_bblock->cil_length = 0; end_bblock->flags |= BB_INDIRECT_JUMP_TARGET; g_assert (cfg->num_bblocks == 2); arg_array = cfg->args; if (header->num_clauses) { cfg->spvars = g_hash_table_new (NULL, NULL); cfg->exvars = g_hash_table_new (NULL, NULL); } cfg->clause_is_dead = mono_mempool_alloc0 (cfg->mempool, sizeof (gboolean) * header->num_clauses); /* handle exception clauses */ for (i = 0; i < header->num_clauses; ++i) { MonoBasicBlock *try_bb; MonoExceptionClause *clause = &header->clauses [i]; GET_BBLOCK (cfg, try_bb, ip + clause->try_offset); try_bb->real_offset = clause->try_offset; try_bb->try_start = TRUE; GET_BBLOCK (cfg, tblock, ip + clause->handler_offset); tblock->real_offset = clause->handler_offset; tblock->flags |= BB_EXCEPTION_HANDLER; if (clause->flags == MONO_EXCEPTION_CLAUSE_FINALLY) mono_create_exvar_for_offset (cfg, clause->handler_offset); /* * Linking the try block with the EH block hinders inlining as we won't be able to * merge the bblocks from inlining and produce an artificial hole for no good reason. */ if (COMPILE_LLVM (cfg)) link_bblock (cfg, try_bb, tblock); if (*(ip + clause->handler_offset) == CEE_POP) tblock->flags |= BB_EXCEPTION_DEAD_OBJ; if (clause->flags == MONO_EXCEPTION_CLAUSE_FINALLY || clause->flags == MONO_EXCEPTION_CLAUSE_FILTER || clause->flags == MONO_EXCEPTION_CLAUSE_FAULT) { MONO_INST_NEW (cfg, ins, OP_START_HANDLER); MONO_ADD_INS (tblock, ins); if (seq_points && clause->flags != MONO_EXCEPTION_CLAUSE_FINALLY && clause->flags != MONO_EXCEPTION_CLAUSE_FILTER) { /* finally clauses already have a seq point */ /* seq points for filter clauses are emitted below */ NEW_SEQ_POINT (cfg, ins, clause->handler_offset, TRUE); MONO_ADD_INS (tblock, ins); } /* todo: is a fault block unsafe to optimize? 
*/ if (clause->flags == MONO_EXCEPTION_CLAUSE_FAULT) tblock->flags |= BB_EXCEPTION_UNSAFE; } /*printf ("clause try IL_%04x to IL_%04x handler %d at IL_%04x to IL_%04x\n", clause->try_offset, clause->try_offset + clause->try_len, clause->flags, clause->handler_offset, clause->handler_offset + clause->handler_len); while (p < end) { printf ("%s", mono_disasm_code_one (NULL, method, p, &p)); }*/ /* catch and filter blocks get the exception object on the stack */ if (clause->flags == MONO_EXCEPTION_CLAUSE_NONE || clause->flags == MONO_EXCEPTION_CLAUSE_FILTER) { /* mostly like handle_stack_args (), but just sets the input args */ /* printf ("handling clause at IL_%04x\n", clause->handler_offset); */ tblock->in_scount = 1; tblock->in_stack = (MonoInst **)mono_mempool_alloc (cfg->mempool, sizeof (MonoInst*)); tblock->in_stack [0] = mono_create_exvar_for_offset (cfg, clause->handler_offset); cfg->cbb = tblock; #ifdef MONO_CONTEXT_SET_LLVM_EXC_REG /* The EH code passes in the exception in a register to both JITted and LLVM compiled code */ if (!cfg->compile_llvm) { MONO_INST_NEW (cfg, ins, OP_GET_EX_OBJ); ins->dreg = tblock->in_stack [0]->dreg; MONO_ADD_INS (tblock, ins); } #else MonoInst *dummy_use; /* * Add a dummy use for the exvar so its liveness info will be * correct. */ EMIT_NEW_DUMMY_USE (cfg, dummy_use, tblock->in_stack [0]); #endif if (seq_points && clause->flags == MONO_EXCEPTION_CLAUSE_FILTER) { NEW_SEQ_POINT (cfg, ins, clause->handler_offset, TRUE); MONO_ADD_INS (tblock, ins); } if (clause->flags == MONO_EXCEPTION_CLAUSE_FILTER) { GET_BBLOCK (cfg, tblock, ip + clause->data.filter_offset); tblock->flags |= BB_EXCEPTION_HANDLER; tblock->real_offset = clause->data.filter_offset; tblock->in_scount = 1; tblock->in_stack = (MonoInst **)mono_mempool_alloc (cfg->mempool, sizeof (MonoInst*)); /* The filter block shares the exvar with the handler block */ tblock->in_stack [0] = mono_create_exvar_for_offset (cfg, clause->handler_offset); MONO_INST_NEW (cfg, ins, OP_START_HANDLER); MONO_ADD_INS (tblock, ins); } } if (clause->flags != MONO_EXCEPTION_CLAUSE_FILTER && clause->data.catch_class && cfg->gshared && mono_class_check_context_used (clause->data.catch_class)) { /* * In shared generic code with catch * clauses containing type variables * the exception handling code has to * be able to get to the rgctx. * Therefore we have to make sure that * the vtable/mrgctx argument (for * static or generic methods) or the * "this" argument (for non-static * methods) are live. */ if ((method->flags & METHOD_ATTRIBUTE_STATIC) || mini_method_get_context (method)->method_inst || m_class_is_valuetype (method->klass)) { mono_get_vtable_var (cfg); } else { MonoInst *dummy_use; EMIT_NEW_DUMMY_USE (cfg, dummy_use, arg_array [0]); } } } } else { arg_array = g_newa (MonoInst*, num_args); cfg->cbb = start_bblock; cfg->args = arg_array; mono_save_args (cfg, sig, inline_args); } if (cfg->method == method && cfg->self_init && cfg->compile_aot && !COMPILE_LLVM (cfg)) { MonoMethod *wrapper; MonoInst *args [2]; int idx; /* * Emit code to initialize this method by calling the init wrapper emitted by LLVM. * This is not efficient right now, but its only used for the methods which fail * LLVM compilation. 
* FIXME: Optimize this */ g_assert (!cfg->gshared); wrapper = mono_marshal_get_aot_init_wrapper (AOT_INIT_METHOD); /* Emit this into the entry bb so it comes before the GC safe point which depends on an inited GOT */ cfg->cbb = cfg->bb_entry; idx = mono_aot_get_method_index (cfg->method); EMIT_NEW_ICONST (cfg, args [0], idx); /* Dummy */ EMIT_NEW_ICONST (cfg, args [1], 0); mono_emit_method_call (cfg, wrapper, args, NULL); } if (cfg->llvm_only && cfg->interp && cfg->method == method && !cfg->deopt) { if (header->num_clauses) { for (int i = 0; i < header->num_clauses; ++i) { MonoExceptionClause *clause = &header->clauses [i]; /* Finally clauses are checked after the remove_finally pass */ if (clause->flags != MONO_EXCEPTION_CLAUSE_FINALLY) cfg->interp_entry_only = TRUE; } } } /* we use a separate basic block for the initialization code */ NEW_BBLOCK (cfg, init_localsbb); if (cfg->method == method) cfg->bb_init = init_localsbb; init_localsbb->real_offset = cfg->real_offset; start_bblock->next_bb = init_localsbb; link_bblock (cfg, start_bblock, init_localsbb); init_localsbb2 = init_localsbb; cfg->cbb = init_localsbb; if (cfg->gsharedvt && cfg->method == method) { MonoGSharedVtMethodInfo *info; MonoInst *var, *locals_var; int dreg; info = (MonoGSharedVtMethodInfo *)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoGSharedVtMethodInfo)); info->method = cfg->method; info->count_entries = 16; info->entries = (MonoRuntimeGenericContextInfoTemplate *)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoRuntimeGenericContextInfoTemplate) * info->count_entries); cfg->gsharedvt_info = info; var = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL); /* prevent it from being register allocated */ //var->flags |= MONO_INST_VOLATILE; cfg->gsharedvt_info_var = var; ins = emit_get_rgctx_gsharedvt_method (cfg, mini_method_check_context_used (cfg, method), method, info); MONO_EMIT_NEW_UNALU (cfg, OP_MOVE, var->dreg, ins->dreg); /* Allocate locals */ locals_var = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL); /* prevent it from being register allocated */ //locals_var->flags |= MONO_INST_VOLATILE; cfg->gsharedvt_locals_var = locals_var; dreg = alloc_ireg (cfg); MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADI4_MEMBASE, dreg, var->dreg, MONO_STRUCT_OFFSET (MonoGSharedVtMethodRuntimeInfo, locals_size)); MONO_INST_NEW (cfg, ins, OP_LOCALLOC); ins->dreg = locals_var->dreg; ins->sreg1 = dreg; MONO_ADD_INS (cfg->cbb, ins); cfg->gsharedvt_locals_var_ins = ins; cfg->flags |= MONO_CFG_HAS_ALLOCA; /* if (init_locals) ins->flags |= MONO_INST_INIT; */ if (cfg->llvm_only) { init_localsbb = cfg->cbb; init_localsbb2 = cfg->cbb; } } if (cfg->deopt) { /* * Push an LMFExt frame which points to a MonoMethodILState structure. 
*/ emit_push_lmf (cfg); /* The type doesn't matter, the llvm backend will use the correct type */ MonoInst *il_state_var = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL); il_state_var->flags |= MONO_INST_VOLATILE; cfg->il_state_var = il_state_var; EMIT_NEW_VARLOADA (cfg, ins, cfg->il_state_var, NULL); int il_state_addr_reg = ins->dreg; /* il_state->method = method */ MonoInst *method_ins = emit_get_rgctx_method (cfg, -1, cfg->method, MONO_RGCTX_INFO_METHOD); MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, il_state_addr_reg, MONO_STRUCT_OFFSET (MonoMethodILState, method), method_ins->dreg); EMIT_NEW_VARLOADA (cfg, ins, cfg->lmf_var, NULL); int lmf_reg = ins->dreg; /* lmf->kind = MONO_LMFEXT_IL_STATE */ MONO_EMIT_NEW_STORE_MEMBASE_IMM (cfg, OP_STOREI4_MEMBASE_IMM, lmf_reg, MONO_STRUCT_OFFSET (MonoLMFExt, kind), MONO_LMFEXT_IL_STATE); /* lmf->il_state = il_state */ MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, lmf_reg, MONO_STRUCT_OFFSET (MonoLMFExt, il_state), il_state_addr_reg); /* emit_get_rgctx_method () might create new bblocks */ if (cfg->llvm_only) { init_localsbb = cfg->cbb; init_localsbb2 = cfg->cbb; } } if (cfg->llvm_only && cfg->interp && cfg->method == method) { if (cfg->interp_entry_only) emit_llvmonly_interp_entry (cfg, header); } /* FIRST CODE BLOCK */ NEW_BBLOCK (cfg, tblock); tblock->cil_code = ip; cfg->cbb = tblock; cfg->ip = ip; init_localsbb->next_bb = cfg->cbb; link_bblock (cfg, init_localsbb, cfg->cbb); ADD_BBLOCK (cfg, tblock); CHECK_CFG_EXCEPTION; if (header->code_size == 0) UNVERIFIED; if (get_basic_blocks (cfg, header, cfg->real_offset, ip, end, &err_pos)) { ip = err_pos; UNVERIFIED; } if (cfg->method == method) { int breakpoint_id = mono_debugger_method_has_breakpoint (method); if (breakpoint_id) { MONO_INST_NEW (cfg, ins, OP_BREAK); MONO_ADD_INS (cfg->cbb, ins); } mono_debug_init_method (cfg, cfg->cbb, breakpoint_id); } for (n = 0; n < header->num_locals; ++n) { if (header->locals [n]->type == MONO_TYPE_VOID && !m_type_is_byref (header->locals [n])) UNVERIFIED; } class_inits = NULL; /* We force the vtable variable here for all shared methods for the possibility that they might show up in a stack trace where their exact instantiation is needed. */ if (cfg->gshared && method == cfg->method) { if ((method->flags & METHOD_ATTRIBUTE_STATIC) || mini_method_get_context (method)->method_inst || m_class_is_valuetype (method->klass)) { mono_get_vtable_var (cfg); } else { /* FIXME: Is there a better way to do this? We need the variable live for the duration of the whole method. 
*/ cfg->args [0]->flags |= MONO_INST_VOLATILE; } } /* add a check for this != NULL to inlined methods */ if (is_virtual_call) { MonoInst *arg_ins; // // This is just a hack to avoid checks in empty methods which could get inlined // into finally clauses preventing the removal of empty finally clauses, since all // variables in finally clauses are marked volatile so the check can't be removed // if (!(cfg->llvm_only && m_class_is_valuetype (method->klass) && header->code_size == 1 && header->code [0] == CEE_RET)) { NEW_ARGLOAD (cfg, arg_ins, 0); MONO_ADD_INS (cfg->cbb, arg_ins); MONO_EMIT_NEW_CHECK_THIS (cfg, arg_ins->dreg); } } skip_dead_blocks = !dont_verify; if (skip_dead_blocks) { original_bb = bb = mono_basic_block_split (method, cfg->error, header); CHECK_CFG_ERROR; g_assert (bb); } /* we use a spare stack slot in SWITCH and NEWOBJ and others */ stack_start = sp = (MonoInst **)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoInst*) * (header->max_stack + 1)); ins_flag = 0; start_new_bblock = 0; MonoOpcodeEnum il_op; il_op = MonoOpcodeEnum_Invalid; emit_set_deopt_il_offset (cfg, ip - cfg->cil_start); for (guchar *next_ip = ip; ip < end; ip = next_ip) { MonoOpcodeEnum previous_il_op = il_op; const guchar *tmp_ip = ip; const int op_size = mono_opcode_value_and_size (&tmp_ip, end, &il_op); CHECK_OPSIZE (op_size); next_ip += op_size; if (cfg->method == method) cfg->real_offset = ip - header->code; else cfg->real_offset = inline_offset; cfg->ip = ip; context_used = 0; if (start_new_bblock) { cfg->cbb->cil_length = ip - cfg->cbb->cil_code; if (start_new_bblock == 2) { g_assert (ip == tblock->cil_code); } else { GET_BBLOCK (cfg, tblock, ip); } cfg->cbb->next_bb = tblock; cfg->cbb = tblock; start_new_bblock = 0; for (i = 0; i < cfg->cbb->in_scount; ++i) { if (cfg->verbose_level > 3) printf ("loading %d from temp %d\n", i, (int)cfg->cbb->in_stack [i]->inst_c0); EMIT_NEW_TEMPLOAD (cfg, ins, cfg->cbb->in_stack [i]->inst_c0); *sp++ = ins; } if (class_inits) g_slist_free (class_inits); class_inits = NULL; emit_set_deopt_il_offset (cfg, ip - cfg->cil_start); } else { if ((tblock = cfg->cil_offset_to_bb [ip - cfg->cil_start]) && (tblock != cfg->cbb)) { link_bblock (cfg, cfg->cbb, tblock); if (sp != stack_start) { handle_stack_args (cfg, stack_start, sp - stack_start); sp = stack_start; CHECK_UNVERIFIABLE (cfg); } cfg->cbb->next_bb = tblock; cfg->cbb = tblock; for (i = 0; i < cfg->cbb->in_scount; ++i) { if (cfg->verbose_level > 3) printf ("loading %d from temp %d\n", i, (int)cfg->cbb->in_stack [i]->inst_c0); EMIT_NEW_TEMPLOAD (cfg, ins, cfg->cbb->in_stack [i]->inst_c0); *sp++ = ins; } g_slist_free (class_inits); class_inits = NULL; emit_set_deopt_il_offset (cfg, ip - cfg->cil_start); } } /* * Methods with AggressiveInline flag could be inlined even if the class has a cctor. * This might create a branch so emit it in the first code bblock instead of into initlocals_bb. 
*/ if (ip - header->code == 0 && cfg->method != method && cfg->compile_aot && (method->iflags & METHOD_IMPL_ATTRIBUTE_AGGRESSIVE_INLINING) && mono_class_needs_cctor_run (method->klass, method)) { emit_class_init (cfg, method->klass); } if (skip_dead_blocks) { int ip_offset = ip - header->code; if (ip_offset == bb->end) bb = bb->next; if (bb->dead) { g_assert (op_size > 0); /*The BB formation pass must catch all bad ops*/ if (cfg->verbose_level > 3) printf ("SKIPPING DEAD OP at %x\n", ip_offset); if (ip_offset + op_size == bb->end) { MONO_INST_NEW (cfg, ins, OP_NOP); MONO_ADD_INS (cfg->cbb, ins); start_new_bblock = 1; } continue; } } /* * Sequence points are points where the debugger can place a breakpoint. * Currently, we generate these automatically at points where the IL * stack is empty. */ if (seq_points && ((!sym_seq_points && (sp == stack_start)) || (sym_seq_points && mono_bitset_test_fast (seq_point_locs, ip - header->code)))) { /* * Make methods interruptable at the beginning, and at the targets of * backward branches. * Also, do this at the start of every bblock in methods with clauses too, * to be able to handle instructions with inprecise control flow like * throw/endfinally. * Backward branches are handled at the end of method-to-ir (). */ gboolean intr_loc = ip == header->code || (!cfg->cbb->last_ins && cfg->header->num_clauses); gboolean sym_seq_point = sym_seq_points && mono_bitset_test_fast (seq_point_locs, ip - header->code); /* Avoid sequence points on empty IL like .volatile */ // FIXME: Enable this //if (!(cfg->cbb->last_ins && cfg->cbb->last_ins->opcode == OP_SEQ_POINT)) { NEW_SEQ_POINT (cfg, ins, ip - header->code, intr_loc); if ((sp != stack_start) && !sym_seq_point) ins->flags |= MONO_INST_NONEMPTY_STACK; MONO_ADD_INS (cfg->cbb, ins); if (sym_seq_points) mono_bitset_set_fast (seq_point_set_locs, ip - header->code); if (cfg->prof_coverage) { guint32 cil_offset = ip - header->code; gpointer counter = &cfg->coverage_info->data [cil_offset].count; cfg->coverage_info->data [cil_offset].cil_code = ip; if (mono_arch_opcode_supported (OP_ATOMIC_ADD_I4)) { MonoInst *one_ins, *load_ins; EMIT_NEW_PCONST (cfg, load_ins, counter); EMIT_NEW_ICONST (cfg, one_ins, 1); MONO_INST_NEW (cfg, ins, OP_ATOMIC_ADD_I4); ins->dreg = mono_alloc_ireg (cfg); ins->inst_basereg = load_ins->dreg; ins->inst_offset = 0; ins->sreg2 = one_ins->dreg; ins->type = STACK_I4; MONO_ADD_INS (cfg->cbb, ins); } else { EMIT_NEW_PCONST (cfg, ins, counter); MONO_EMIT_NEW_STORE_MEMBASE_IMM (cfg, OP_STORE_MEMBASE_IMM, ins->dreg, 0, 1); } } } cfg->cbb->real_offset = cfg->real_offset; if (cfg->verbose_level > 3) printf ("converting (in B%d: stack: %d) %s", cfg->cbb->block_num, (int)(sp - stack_start), mono_disasm_code_one (NULL, method, ip, NULL)); /* * This is used to compute BB_HAS_SIDE_EFFECTS, which is used for the elimination of * foreach finally clauses, so only IL opcodes which occur in such clauses * need to set this. */ ins_has_side_effect = TRUE; // Variables shared by CEE_CALLI CEE_CALL CEE_CALLVIRT CEE_JMP. // Initialize to either what they all need or zero. gboolean emit_widen = TRUE; gboolean tailcall = FALSE; gboolean common_call = FALSE; MonoInst *keep_this_alive = NULL; MonoMethod *cmethod = NULL; MonoMethodSignature *fsig = NULL; // These are used only in CALL/CALLVIRT but must be initialized also for CALLI, // since it jumps into CALL/CALLVIRT. 
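// (CEE_CALLI transfers control into the shared call_end/calli_end paths below, which read these.)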
gboolean need_seq_point = FALSE; gboolean push_res = TRUE; gboolean skip_ret = FALSE; gboolean tailcall_remove_ret = FALSE; // FIXME split 500 lines load/store field into separate file/function. MonoOpcodeParameter parameter; const MonoOpcodeInfo* info = mono_opcode_decode (ip, op_size, il_op, &parameter); g_assert (info); n = parameter.i32; token = parameter.i32; target = parameter.branch_target; // Check stack size for push/pop except variable cases -- -1 like call/ret/newobj. const int pushes = info->pushes; const int pops = info->pops; if (pushes >= 0 && pops >= 0) { g_assert (pushes - pops <= 1); if (pushes - pops == 1) CHECK_STACK_OVF (); } if (pops >= 0) CHECK_STACK (pops); switch (il_op) { case MONO_CEE_NOP: if (seq_points && !sym_seq_points && sp != stack_start) { /* * The C# compiler uses these nops to notify the JIT that it should * insert seq points. */ NEW_SEQ_POINT (cfg, ins, ip - header->code, FALSE); MONO_ADD_INS (cfg->cbb, ins); } if (cfg->keep_cil_nops) MONO_INST_NEW (cfg, ins, OP_HARD_NOP); else MONO_INST_NEW (cfg, ins, OP_NOP); MONO_ADD_INS (cfg->cbb, ins); emitted_funccall_seq_point = FALSE; ins_has_side_effect = FALSE; break; case MONO_CEE_BREAK: if (mini_should_insert_breakpoint (cfg->method)) { ins = mono_emit_jit_icall (cfg, mono_debugger_agent_user_break, NULL); } else { MONO_INST_NEW (cfg, ins, OP_NOP); MONO_ADD_INS (cfg->cbb, ins); } break; case MONO_CEE_LDARG_0: case MONO_CEE_LDARG_1: case MONO_CEE_LDARG_2: case MONO_CEE_LDARG_3: case MONO_CEE_LDARG_S: case MONO_CEE_LDARG: CHECK_ARG (n); if (next_ip < end && is_addressable_valuetype_load (cfg, next_ip, cfg->arg_types[n])) { EMIT_NEW_ARGLOADA (cfg, ins, n); } else { EMIT_NEW_ARGLOAD (cfg, ins, n); } *sp++ = ins; break; case MONO_CEE_LDLOC_0: case MONO_CEE_LDLOC_1: case MONO_CEE_LDLOC_2: case MONO_CEE_LDLOC_3: case MONO_CEE_LDLOC_S: case MONO_CEE_LDLOC: CHECK_LOCAL (n); if (next_ip < end && is_addressable_valuetype_load (cfg, next_ip, header->locals[n])) { EMIT_NEW_LOCLOADA (cfg, ins, n); } else { EMIT_NEW_LOCLOAD (cfg, ins, n); } *sp++ = ins; break; case MONO_CEE_STLOC_0: case MONO_CEE_STLOC_1: case MONO_CEE_STLOC_2: case MONO_CEE_STLOC_3: case MONO_CEE_STLOC_S: case MONO_CEE_STLOC: CHECK_LOCAL (n); --sp; *sp = convert_value (cfg, header->locals [n], *sp); if (!dont_verify_stloc && target_type_is_incompatible (cfg, header->locals [n], *sp)) UNVERIFIED; emit_stloc_ir (cfg, sp, header, n); inline_costs += 1; break; case MONO_CEE_LDARGA_S: case MONO_CEE_LDARGA: CHECK_ARG (n); NEW_ARGLOADA (cfg, ins, n); MONO_ADD_INS (cfg->cbb, ins); *sp++ = ins; break; case MONO_CEE_STARG_S: case MONO_CEE_STARG: --sp; CHECK_ARG (n); *sp = convert_value (cfg, param_types [n], *sp); if (!dont_verify_stloc && target_type_is_incompatible (cfg, param_types [n], *sp)) UNVERIFIED; emit_starg_ir (cfg, sp, n); break; case MONO_CEE_LDLOCA: case MONO_CEE_LDLOCA_S: { guchar *tmp_ip; CHECK_LOCAL (n); if ((tmp_ip = emit_optimized_ldloca_ir (cfg, next_ip, end, n))) { next_ip = tmp_ip; il_op = MONO_CEE_INITOBJ; inline_costs += 1; break; } ins_has_side_effect = FALSE; EMIT_NEW_LOCLOADA (cfg, ins, n); *sp++ = ins; break; } case MONO_CEE_LDNULL: EMIT_NEW_PCONST (cfg, ins, NULL); ins->type = STACK_OBJ; *sp++ = ins; break; case MONO_CEE_LDC_I4_M1: case MONO_CEE_LDC_I4_0: case MONO_CEE_LDC_I4_1: case MONO_CEE_LDC_I4_2: case MONO_CEE_LDC_I4_3: case MONO_CEE_LDC_I4_4: case MONO_CEE_LDC_I4_5: case MONO_CEE_LDC_I4_6: case MONO_CEE_LDC_I4_7: case MONO_CEE_LDC_I4_8: case MONO_CEE_LDC_I4_S: case MONO_CEE_LDC_I4: EMIT_NEW_ICONST (cfg, ins, n); *sp++ = ins; 
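/* The 32-bit constant (decoded into parameter.i32 as 'n' above) is now on the evaluation stack. */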
break; case MONO_CEE_LDC_I8: MONO_INST_NEW (cfg, ins, OP_I8CONST); ins->type = STACK_I8; ins->dreg = alloc_dreg (cfg, STACK_I8); ins->inst_l = parameter.i64; MONO_ADD_INS (cfg->cbb, ins); *sp++ = ins; break; case MONO_CEE_LDC_R4: { float *f; gboolean use_aotconst = FALSE; #ifdef TARGET_POWERPC /* FIXME: Clean this up */ if (cfg->compile_aot) use_aotconst = TRUE; #endif /* FIXME: we should really allocate this only late in the compilation process */ f = (float *)mono_mem_manager_alloc (cfg->mem_manager, sizeof (float)); if (use_aotconst) { MonoInst *cons; int dreg; EMIT_NEW_AOTCONST (cfg, cons, MONO_PATCH_INFO_R4, f); dreg = alloc_freg (cfg); EMIT_NEW_LOAD_MEMBASE (cfg, ins, OP_LOADR4_MEMBASE, dreg, cons->dreg, 0); ins->type = cfg->r4_stack_type; } else { MONO_INST_NEW (cfg, ins, OP_R4CONST); ins->type = cfg->r4_stack_type; ins->dreg = alloc_dreg (cfg, STACK_R8); ins->inst_p0 = f; MONO_ADD_INS (cfg->cbb, ins); } *f = parameter.f; *sp++ = ins; break; } case MONO_CEE_LDC_R8: { double *d; gboolean use_aotconst = FALSE; #ifdef TARGET_POWERPC /* FIXME: Clean this up */ if (cfg->compile_aot) use_aotconst = TRUE; #endif /* FIXME: we should really allocate this only late in the compilation process */ d = (double *)mono_mem_manager_alloc (cfg->mem_manager, sizeof (double)); if (use_aotconst) { MonoInst *cons; int dreg; EMIT_NEW_AOTCONST (cfg, cons, MONO_PATCH_INFO_R8, d); dreg = alloc_freg (cfg); EMIT_NEW_LOAD_MEMBASE (cfg, ins, OP_LOADR8_MEMBASE, dreg, cons->dreg, 0); ins->type = STACK_R8; } else { MONO_INST_NEW (cfg, ins, OP_R8CONST); ins->type = STACK_R8; ins->dreg = alloc_dreg (cfg, STACK_R8); ins->inst_p0 = d; MONO_ADD_INS (cfg->cbb, ins); } *d = parameter.d; *sp++ = ins; break; } case MONO_CEE_DUP: { MonoInst *temp, *store; MonoClass *klass; sp--; ins = *sp; klass = ins->klass; temp = mono_compile_create_var (cfg, type_from_stack_type (ins), OP_LOCAL); EMIT_NEW_TEMPSTORE (cfg, store, temp->inst_c0, ins); EMIT_NEW_TEMPLOAD (cfg, ins, temp->inst_c0); ins->klass = klass; *sp++ = ins; EMIT_NEW_TEMPLOAD (cfg, ins, temp->inst_c0); ins->klass = klass; *sp++ = ins; inline_costs += 2; break; } case MONO_CEE_POP: --sp; #ifdef TARGET_X86 if (sp [0]->type == STACK_R8) /* we need to pop the value from the x86 FP stack */ MONO_EMIT_NEW_UNALU (cfg, OP_X86_FPOP, -1, sp [0]->dreg); #endif break; case MONO_CEE_JMP: { MonoCallInst *call; int i, n; INLINE_FAILURE ("jmp"); GSHAREDVT_FAILURE (il_op); if (stack_start != sp) UNVERIFIED; /* FIXME: check the signature matches */ cmethod = mini_get_method (cfg, method, token, NULL, generic_context); CHECK_CFG_ERROR; if (cfg->gshared && mono_method_check_context_used (cmethod)) GENERIC_SHARING_FAILURE (CEE_JMP); mini_profiler_emit_tail_call (cfg, cmethod); fsig = mono_method_signature_internal (cmethod); n = fsig->param_count + fsig->hasthis; if (cfg->llvm_only) { MonoInst **args; args = (MonoInst **)mono_mempool_alloc (cfg->mempool, sizeof (MonoInst*) * n); for (i = 0; i < n; ++i) EMIT_NEW_ARGLOAD (cfg, args [i], i); ins = mini_emit_method_call_full (cfg, cmethod, fsig, TRUE, args, NULL, NULL, NULL); /* * The code in mono-basic-block.c treats the rest of the code as dead, but we * have to emit a normal return since llvm expects it. 
*/ if (cfg->ret) emit_setret (cfg, ins); MONO_INST_NEW (cfg, ins, OP_BR); ins->inst_target_bb = end_bblock; MONO_ADD_INS (cfg->cbb, ins); link_bblock (cfg, cfg->cbb, end_bblock); break; } else { /* Handle tailcalls similarly to calls */ DISABLE_AOT (cfg); mini_emit_tailcall_parameters (cfg, fsig); MONO_INST_NEW_CALL (cfg, call, OP_TAILCALL); call->method = cmethod; // FIXME Other initialization of the tailcall field occurs after // it is used. So this is the only "real" use and needs more attention. call->tailcall = TRUE; call->signature = fsig; call->args = (MonoInst **)mono_mempool_alloc (cfg->mempool, sizeof (MonoInst*) * n); call->inst.inst_p0 = cmethod; for (i = 0; i < n; ++i) EMIT_NEW_ARGLOAD (cfg, call->args [i], i); if (mini_type_is_vtype (mini_get_underlying_type (call->signature->ret))) call->vret_var = cfg->vret_addr; mono_arch_emit_call (cfg, call); cfg->param_area = MAX(cfg->param_area, call->stack_usage); MONO_ADD_INS (cfg->cbb, (MonoInst*)call); } start_new_bblock = 1; break; } case MONO_CEE_CALLI: { // FIXME tail.calli is problematic because the this pointer's type // is not in the signature, and we cannot check for a byref valuetype. MonoInst *addr; MonoInst *callee = NULL; // Variables shared by CEE_CALLI and CEE_CALL/CEE_CALLVIRT. common_call = TRUE; // i.e. skip_ret/push_res/seq_point logic cmethod = NULL; gboolean const inst_tailcall = G_UNLIKELY (debug_tailcall_try_all ? (next_ip < end && next_ip [0] == CEE_RET) : ((ins_flag & MONO_INST_TAILCALL) != 0)); ins = NULL; //GSHAREDVT_FAILURE (il_op); CHECK_STACK (1); --sp; addr = *sp; g_assert (addr); fsig = mini_get_signature (method, token, generic_context, cfg->error); CHECK_CFG_ERROR; if (method->dynamic && fsig->pinvoke) { MonoInst *args [3]; /* * This is a call through a function pointer using a pinvoke * signature. Have to create a wrapper and call that instead. * FIXME: This is very slow, need to create a wrapper at JIT time * instead based on the signature. */ EMIT_NEW_IMAGECONST (cfg, args [0], ((MonoDynamicMethod*)method)->assembly->image); EMIT_NEW_PCONST (cfg, args [1], fsig); args [2] = addr; // FIXME tailcall? 
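/* Replace the raw function pointer with a wrapper that knows how to marshal the call. */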
addr = mono_emit_jit_icall (cfg, mono_get_native_calli_wrapper, args); } if (!method->dynamic && fsig->pinvoke && !method->wrapper_type) { /* MONO_WRAPPER_DYNAMIC_METHOD dynamic method handled above in the method->dynamic case; for other wrapper types assume the code knows what it's doing and added its own GC transitions */ gboolean skip_gc_trans = fsig->suppress_gc_transition; if (!skip_gc_trans) { #if 0 fprintf (stderr, "generating wrapper for calli in method %s with wrapper type %s\n", method->name, mono_wrapper_type_to_str (method->wrapper_type)); #endif /* Call the wrapper that will do the GC transition instead */ MonoMethod *wrapper = mono_marshal_get_native_func_wrapper_indirect (method->klass, fsig, cfg->compile_aot); fsig = mono_method_signature_internal (wrapper); n = fsig->param_count - 1; /* wrapper has extra fnptr param */ CHECK_STACK (n); /* move the args to allow room for 'this' in the first position */ while (n--) { --sp; sp [1] = sp [0]; } sp[0] = addr; /* n+1 args, first arg is the address of the indirect method to call */ g_assert (!fsig->hasthis && !fsig->pinvoke); ins = mono_emit_method_call (cfg, wrapper, /*args*/sp, NULL); goto calli_end; } } n = fsig->param_count + fsig->hasthis; CHECK_STACK (n); //g_assert (!virtual_ || fsig->hasthis); sp -= n; if (!(cfg->method->wrapper_type && cfg->method->wrapper_type != MONO_WRAPPER_DYNAMIC_METHOD) && check_call_signature (cfg, fsig, sp)) { if (break_on_unverified ()) check_call_signature (cfg, fsig, sp); // Again, step through it. UNVERIFIED; } inline_costs += CALL_COST * MIN(10, num_calls++); /* * Making generic calls out of gsharedvt methods. * This needs to be used for all generic calls, not just ones with a gsharedvt signature, to avoid * patching gshared method addresses into a gsharedvt method. */ if (cfg->gsharedvt && mini_is_gsharedvt_signature (fsig)) { /* * We pass the address to the gsharedvt trampoline in the rgctx reg */ callee = addr; g_assert (addr); // Doubles as boolean after tailcall check. } inst_tailcall && is_supported_tailcall (cfg, ip, method, NULL, fsig, FALSE/*virtual irrelevant*/, addr != NULL, &tailcall); if (save_last_error) mono_emit_jit_icall (cfg, mono_marshal_clear_last_error, NULL); if (callee) { if (method->wrapper_type != MONO_WRAPPER_DELEGATE_INVOKE) /* Not tested */ GSHAREDVT_FAILURE (il_op); if (cfg->llvm_only) // FIXME: GSHAREDVT_FAILURE (il_op); addr = emit_get_rgctx_sig (cfg, context_used, fsig, MONO_RGCTX_INFO_SIG_GSHAREDVT_OUT_TRAMPOLINE_CALLI); ins = (MonoInst*)mini_emit_calli_full (cfg, fsig, sp, addr, NULL, callee, tailcall); goto calli_end; } /* Prevent inlining of methods with indirect calls */ INLINE_FAILURE ("indirect call"); if (addr->opcode == OP_PCONST || addr->opcode == OP_AOTCONST || addr->opcode == OP_GOT_ENTRY) { MonoJumpInfoType info_type; gpointer info_data; /* * Instead of emitting an indirect call, emit a direct call * with the contents of the aotconst as the patch info. 
*/ if (addr->opcode == OP_PCONST || addr->opcode == OP_AOTCONST) { info_type = (MonoJumpInfoType)addr->inst_c1; info_data = addr->inst_p0; } else { info_type = (MonoJumpInfoType)addr->inst_right->inst_c1; info_data = addr->inst_right->inst_left; } if (info_type == MONO_PATCH_INFO_ICALL_ADDR) { // non-JIT icall, mostly builtin, but also user-extensible tailcall = FALSE; ins = (MonoInst*)mini_emit_abs_call (cfg, MONO_PATCH_INFO_ICALL_ADDR_CALL, info_data, fsig, sp); NULLIFY_INS (addr); goto calli_end; } else if (info_type == MONO_PATCH_INFO_JIT_ICALL_ADDR || info_type == MONO_PATCH_INFO_SPECIFIC_TRAMPOLINE_LAZY_FETCH_ADDR) { tailcall = FALSE; ins = (MonoInst*)mini_emit_abs_call (cfg, info_type, info_data, fsig, sp); NULLIFY_INS (addr); goto calli_end; } } if (cfg->llvm_only && !(cfg->method->wrapper_type && cfg->method->wrapper_type != MONO_WRAPPER_DYNAMIC_METHOD)) ins = mini_emit_llvmonly_calli (cfg, fsig, sp, addr); else ins = (MonoInst*)mini_emit_calli_full (cfg, fsig, sp, addr, NULL, NULL, tailcall); goto calli_end; } case MONO_CEE_CALL: case MONO_CEE_CALLVIRT: { MonoInst *addr; addr = NULL; int array_rank; array_rank = 0; gboolean virtual_; virtual_ = il_op == MONO_CEE_CALLVIRT; gboolean pass_imt_from_rgctx; pass_imt_from_rgctx = FALSE; MonoInst *imt_arg; imt_arg = NULL; gboolean pass_vtable; pass_vtable = FALSE; gboolean pass_mrgctx; pass_mrgctx = FALSE; MonoInst *vtable_arg; vtable_arg = NULL; gboolean check_this; check_this = FALSE; gboolean delegate_invoke; delegate_invoke = FALSE; gboolean direct_icall; direct_icall = FALSE; gboolean tailcall_calli; tailcall_calli = FALSE; gboolean noreturn; noreturn = FALSE; gboolean gshared_static_virtual; gshared_static_virtual = FALSE; #ifdef TARGET_WASM gboolean needs_stack_walk; needs_stack_walk = FALSE; #endif // Variables shared by CEE_CALLI and CEE_CALL/CEE_CALLVIRT. common_call = FALSE; // variables to help in assertions gboolean called_is_supported_tailcall; called_is_supported_tailcall = FALSE; MonoMethod *tailcall_method; tailcall_method = NULL; MonoMethod *tailcall_cmethod; tailcall_cmethod = NULL; MonoMethodSignature *tailcall_fsig; tailcall_fsig = NULL; gboolean tailcall_virtual; tailcall_virtual = FALSE; gboolean tailcall_extra_arg; tailcall_extra_arg = FALSE; gboolean inst_tailcall; inst_tailcall = G_UNLIKELY (debug_tailcall_try_all ? 
(next_ip < end && next_ip [0] == CEE_RET) : ((ins_flag & MONO_INST_TAILCALL) != 0)); ins = NULL; /* Used to pass arguments to called functions */ HandleCallData cdata; memset (&cdata, 0, sizeof (HandleCallData)); cmethod = mini_get_method (cfg, method, token, NULL, generic_context); CHECK_CFG_ERROR; if (cfg->verbose_level > 3) printf ("cmethod = %s\n", mono_method_get_full_name (cmethod)); MonoMethod *cil_method; cil_method = cmethod; if (constrained_class) { if (m_method_is_static (cil_method) && mini_class_check_context_used (cfg, constrained_class)) { /* get_constrained_method () doesn't work on the gparams used by generic sharing */ // FIXME: Other configurations //if (!cfg->gsharedvt) // GENERIC_SHARING_FAILURE (CEE_CALL); gshared_static_virtual = TRUE; } else { cmethod = get_constrained_method (cfg, image, token, cil_method, constrained_class, generic_context); CHECK_CFG_ERROR; if (m_class_is_enumtype (constrained_class) && !strcmp (cmethod->name, "GetHashCode")) { /* Use the corresponding method from the base type to avoid boxing */ MonoType *base_type = mono_class_enum_basetype_internal (constrained_class); g_assert (base_type); constrained_class = mono_class_from_mono_type_internal (base_type); cmethod = get_method_nofail (constrained_class, cmethod->name, 0, 0); g_assert (cmethod); } } } if (!dont_verify && !cfg->skip_visibility) { MonoMethod *target_method = cil_method; if (method->is_inflated) { MonoGenericContainer *container = mono_method_get_generic_container(method_definition); MonoGenericContext *context = (container != NULL ? &container->context : NULL); target_method = mini_get_method_allow_open (method, token, NULL, context, cfg->error); CHECK_CFG_ERROR; } if (!mono_method_can_access_method (method_definition, target_method) && !mono_method_can_access_method (method, cil_method)) emit_method_access_failure (cfg, method, cil_method); } if (cfg->llvm_only && cmethod && method_needs_stack_walk (cfg, cmethod)) { if (cfg->interp && !cfg->interp_entry_only) { /* Use the interpreter instead */ cfg->exception_message = g_strdup ("stack walk"); cfg->disable_llvm = TRUE; } #ifdef TARGET_WASM else { needs_stack_walk = TRUE; } #endif } if (!virtual_ && (cmethod->flags & METHOD_ATTRIBUTE_ABSTRACT) && !gshared_static_virtual) { if (!mono_class_is_interface (method->klass)) emit_bad_image_failure (cfg, method, cil_method); else virtual_ = TRUE; } if (!m_class_is_inited (cmethod->klass)) if (!mono_class_init_internal (cmethod->klass)) TYPE_LOAD_ERROR (cmethod->klass); fsig = mono_method_signature_internal (cmethod); if (!fsig) LOAD_ERROR; if (cmethod->iflags & METHOD_IMPL_ATTRIBUTE_INTERNAL_CALL && mini_class_is_system_array (cmethod->klass)) { array_rank = m_class_get_rank (cmethod->klass); } else if ((cmethod->iflags & METHOD_IMPL_ATTRIBUTE_INTERNAL_CALL) && direct_icalls_enabled (cfg, cmethod)) { direct_icall = TRUE; } else if (fsig->pinvoke) { if (cmethod->flags & METHOD_ATTRIBUTE_PINVOKE_IMPL) { /* * Avoid calling mono_marshal_get_native_wrapper () too early, it might call managed * callbacks on netcore. 
*/ fsig = mono_metadata_signature_dup_mempool (cfg->mempool, fsig); fsig->pinvoke = FALSE; } else { MonoMethod *wrapper = mono_marshal_get_native_wrapper (cmethod, TRUE, cfg->compile_aot); fsig = mono_method_signature_internal (wrapper); } } else if (constrained_class) { } else { fsig = mono_method_get_signature_checked (cmethod, image, token, generic_context, cfg->error); CHECK_CFG_ERROR; } if (cfg->llvm_only && !cfg->method->wrapper_type && (!cmethod || cmethod->is_inflated)) cfg->signatures = g_slist_prepend_mempool (cfg->mempool, cfg->signatures, fsig); /* See code below */ if (cmethod->klass == mono_defaults.monitor_class && !strcmp (cmethod->name, "Enter") && mono_method_signature_internal (cmethod)->param_count == 1) { MonoBasicBlock *tbb; GET_BBLOCK (cfg, tbb, next_ip); if (tbb->try_start && MONO_REGION_FLAGS(tbb->region) == MONO_EXCEPTION_CLAUSE_FINALLY) { /* * We want to extend the try block to cover the call, but we can't do it if the * call is made directly since it's followed by an exception check. */ direct_icall = FALSE; } } mono_save_token_info (cfg, image, token, cil_method); if (!(seq_point_locs && mono_bitset_test_fast (seq_point_locs, next_ip - header->code))) need_seq_point = TRUE; /* Don't support calls made using type arguments for now */ /* if (cfg->gsharedvt) { if (mini_is_gsharedvt_signature (fsig)) GSHAREDVT_FAILURE (il_op); } */ if (cmethod->string_ctor && method->wrapper_type != MONO_WRAPPER_RUNTIME_INVOKE) g_assert_not_reached (); n = fsig->param_count + fsig->hasthis; if (!cfg->gshared && mono_class_is_gtd (cmethod->klass)) UNVERIFIED; if (!cfg->gshared) g_assert (!mono_method_check_context_used (cmethod)); CHECK_STACK (n); //g_assert (!virtual_ || fsig->hasthis); sp -= n; if (virtual_ && cmethod && sp [0] && sp [0]->opcode == OP_TYPED_OBJREF) { ERROR_DECL (error); MonoMethod *new_cmethod = mono_class_get_virtual_method (sp [0]->klass, cmethod, error); if (is_ok (error)) { cmethod = new_cmethod; virtual_ = FALSE; } else { mono_error_cleanup (error); } } if (cmethod && method_does_not_return (cmethod)) { cfg->cbb->out_of_line = TRUE; noreturn = TRUE; } cdata.method = method; cdata.inst_tailcall = inst_tailcall; /* * We have the `constrained.' prefix opcode. */ if (constrained_class) { ins = handle_constrained_call (cfg, cmethod, fsig, constrained_class, sp, &cdata, &cmethod, &virtual_, &emit_widen); CHECK_CFG_EXCEPTION; if (!gshared_static_virtual) constrained_class = NULL; if (ins) goto call_end; } for (int i = 0; i < fsig->param_count; ++i) sp [i + fsig->hasthis] = convert_value (cfg, fsig->params [i], sp [i + fsig->hasthis]); if (check_call_signature (cfg, fsig, sp)) { if (break_on_unverified ()) check_call_signature (cfg, fsig, sp); // Again, step through it. UNVERIFIED; } if ((m_class_get_parent (cmethod->klass) == mono_defaults.multicastdelegate_class) && !strcmp (cmethod->name, "Invoke")) delegate_invoke = TRUE; /* * Implement a workaround for the inherent races involved in locking: * Monitor.Enter () * try { * } finally { * Monitor.Exit () * } * If a thread abort happens between the call to Monitor.Enter () and the start of the * try block, the Exit () won't be executed, see: * http://www.bluebytesoftware.com/blog/2007/01/30/MonitorEnterThreadAbortsAndOrphanedLocks.aspx * To work around this, we extend such try blocks to include the last x bytes * of the Monitor.Enter () call. 
if (cmethod->klass == mono_defaults.monitor_class && !strcmp (cmethod->name, "Enter") && mono_method_signature_internal (cmethod)->param_count == 1) { MonoBasicBlock *tbb; GET_BBLOCK (cfg, tbb, next_ip); /* * Only extend try blocks with a finally, to avoid catching exceptions thrown * from Monitor.Enter like ArgumentNullException. */ if (tbb->try_start && MONO_REGION_FLAGS(tbb->region) == MONO_EXCEPTION_CLAUSE_FINALLY) { /* Mark this bblock as needing to be extended */ tbb->extend_try_block = TRUE; } } /* Conversion to a JIT intrinsic */ gboolean ins_type_initialized; if ((ins = mini_emit_inst_for_method (cfg, cmethod, fsig, sp, &ins_type_initialized))) { if (!MONO_TYPE_IS_VOID (fsig->ret)) { if (!ins_type_initialized) mini_type_to_eval_stack_type ((cfg), fsig->ret, ins); emit_widen = FALSE; } // FIXME This is only missed if in fact the intrinsic involves a call. if (inst_tailcall) // FIXME mono_tailcall_print ("missed tailcall intrins %s -> %s\n", method->name, cmethod->name); goto call_end; } CHECK_CFG_ERROR; /* * If the callee is a shared method, then its static cctor * might not get called after the call was patched. */ if (cfg->gshared && cmethod->klass != method->klass && mono_class_is_ginst (cmethod->klass) && mono_method_is_generic_sharable (cmethod, TRUE) && mono_class_needs_cctor_run (cmethod->klass, method)) { emit_class_init (cfg, cmethod->klass); CHECK_TYPELOAD (cmethod->klass); } /* Inlining */ if ((cfg->opt & MONO_OPT_INLINE) && !inst_tailcall && (!virtual_ || !(cmethod->flags & METHOD_ATTRIBUTE_VIRTUAL) || MONO_METHOD_IS_FINAL (cmethod)) && mono_method_check_inlining (cfg, cmethod)) { int costs; gboolean always = FALSE; gboolean is_empty = FALSE; if (cmethod->iflags & METHOD_IMPL_ATTRIBUTE_INTERNAL_CALL) { /* Prevent inlining of methods that call wrappers */ INLINE_FAILURE ("wrapper call"); // FIXME? Does this write to cmethod impact tailcall_supported? Probably not. // Neither pinvoke nor icall are likely to be tailcalled. cmethod = mono_marshal_get_native_wrapper (cmethod, TRUE, FALSE); always = TRUE; } costs = inline_method (cfg, cmethod, fsig, sp, ip, cfg->real_offset, always, &is_empty); if (costs) { cfg->real_offset += 5; if (!MONO_TYPE_IS_VOID (fsig->ret)) /* *sp is already set by inline_method */ ins = *sp; inline_costs += costs; // FIXME This is missed if the inlinee contains tail calls that // would work, but not once inlined into caller. // This matchingness could be a factor in inlining. // i.e. Do not inline if it hurts tailcall, do inline // if it helps and/or is neutral, and helps performance // using usual heuristics. // Note that inlining will expose multiple tailcall opportunities // so the tradeoff is not obvious. If we can tailcall anything // like desktop, then this factor mostly falls away, except // that inlining can affect tailcall performance due to // signature match/mismatch. 
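// (See is_supported_tailcall () further below for where the actual tailcall decision is made.)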
if (inst_tailcall) // FIXME mono_tailcall_print ("missed tailcall inline %s -> %s\n", method->name, cmethod->name); if (is_empty) ins_has_side_effect = FALSE; goto call_end; } } check_method_sharing (cfg, cmethod, &pass_vtable, &pass_mrgctx); if (cfg->gshared) { MonoGenericContext *cmethod_context = mono_method_get_context (cmethod); context_used = mini_method_check_context_used (cfg, cmethod); if (!context_used && gshared_static_virtual) context_used = mini_class_check_context_used (cfg, constrained_class); if (context_used && mono_class_is_interface (cmethod->klass) && !m_method_is_static (cmethod)) { /* Generic method interface calls are resolved via a helper function and don't need an imt. */ if (!cmethod_context || !cmethod_context->method_inst) pass_imt_from_rgctx = TRUE; } /* * If a shared method calls another * shared method then the caller must * have a generic sharing context * because the magic trampoline * requires it. FIXME: We shouldn't * have to force the vtable/mrgctx * variable here. Instead there * should be a flag in the cfg to * request a generic sharing context. */ if (context_used && ((cfg->method->flags & METHOD_ATTRIBUTE_STATIC) || m_class_is_valuetype (cfg->method->klass))) mono_get_vtable_var (cfg); } if (pass_vtable) { if (context_used) { vtable_arg = mini_emit_get_rgctx_klass (cfg, context_used, cmethod->klass, MONO_RGCTX_INFO_VTABLE); } else { MonoVTable *vtable = mono_class_vtable_checked (cmethod->klass, cfg->error); CHECK_CFG_ERROR; CHECK_TYPELOAD (cmethod->klass); EMIT_NEW_VTABLECONST (cfg, vtable_arg, vtable); } } if (pass_mrgctx) { g_assert (!vtable_arg); if (!cfg->compile_aot) { /* * emit_get_rgctx_method () calls mono_class_vtable () so check * for type load errors before. */ mono_class_setup_vtable (cmethod->klass); CHECK_TYPELOAD (cmethod->klass); } vtable_arg = emit_get_rgctx_method (cfg, context_used, cmethod, MONO_RGCTX_INFO_METHOD_RGCTX); if ((!(cmethod->flags & METHOD_ATTRIBUTE_VIRTUAL) || MONO_METHOD_IS_FINAL (cmethod))) { if (virtual_) check_this = TRUE; virtual_ = FALSE; } } if (pass_imt_from_rgctx) { g_assert (!pass_vtable); imt_arg = emit_get_rgctx_method (cfg, context_used, cmethod, MONO_RGCTX_INFO_METHOD); g_assert (imt_arg); } if (check_this) MONO_EMIT_NEW_CHECK_THIS (cfg, sp [0]->dreg); /* Calling virtual generic methods */ // These temporaries help detangle "pure" computation of // inputs to is_supported_tailcall from side effects, so that // is_supported_tailcall can be computed just once. gboolean virtual_generic; virtual_generic = FALSE; gboolean virtual_generic_imt; virtual_generic_imt = FALSE; if (virtual_ && (cmethod->flags & METHOD_ATTRIBUTE_VIRTUAL) && !MONO_METHOD_IS_FINAL (cmethod) && fsig->generic_param_count && !(cfg->gsharedvt && mini_is_gsharedvt_signature (fsig)) && !cfg->llvm_only) { g_assert (fsig->is_inflated); virtual_generic = TRUE; /* Prevent inlining of methods that contain indirect calls */ INLINE_FAILURE ("virtual generic call"); if (cfg->gsharedvt && mini_is_gsharedvt_signature (fsig)) GSHAREDVT_FAILURE (il_op); if (cfg->backend->have_generalized_imt_trampoline && cfg->backend->gshared_supported && cmethod->wrapper_type == MONO_WRAPPER_NONE) { virtual_generic_imt = TRUE; g_assert (!imt_arg); if (!context_used) g_assert (cmethod->is_inflated); imt_arg = emit_get_rgctx_method (cfg, context_used, cmethod, MONO_RGCTX_INFO_METHOD); g_assert (imt_arg); virtual_ = TRUE; vtable_arg = NULL; } } // Capture some intent before computing tailcall. 
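// will_have_imt_arg records that an imt argument will be materialized later, so the // tailcall decision below sees the final argument shape before imt_arg is actually set.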
gboolean make_generic_call_out_of_gsharedvt_method; gboolean will_have_imt_arg; make_generic_call_out_of_gsharedvt_method = FALSE; will_have_imt_arg = FALSE; /* * Making generic calls out of gsharedvt methods. * This needs to be used for all generic calls, not just ones with a gsharedvt signature, to avoid * patching gshared method addresses into a gsharedvt method. */ if (cfg->gsharedvt && (mini_is_gsharedvt_signature (fsig) || cmethod->is_inflated || mono_class_is_ginst (cmethod->klass)) && !(m_class_get_rank (cmethod->klass) && m_class_get_byval_arg (cmethod->klass)->type != MONO_TYPE_SZARRAY) && (!(cfg->llvm_only && virtual_ && (cmethod->flags & METHOD_ATTRIBUTE_VIRTUAL)))) { make_generic_call_out_of_gsharedvt_method = TRUE; if (virtual_) { if (fsig->generic_param_count) { will_have_imt_arg = TRUE; } else if (mono_class_is_interface (cmethod->klass) && !imt_arg) { will_have_imt_arg = TRUE; } } } /* Tail prefix / tailcall optimization */ /* FIXME: Enabling TAILC breaks some inlining/stack trace/etc tests. Inlining and stack traces are not guaranteed however. */ /* FIXME: runtime generic context pointer for jumps? */ /* FIXME: handle this for generic sharing eventually */ // tailcall means "the backend can and will handle it". // inst_tailcall means the tail. prefix is present. tailcall_extra_arg = vtable_arg || imt_arg || will_have_imt_arg || mono_class_is_interface (cmethod->klass); tailcall = inst_tailcall && is_supported_tailcall (cfg, ip, method, cmethod, fsig, virtual_, tailcall_extra_arg, &tailcall_calli); // Writes to imt_arg, vtable_arg, virtual_, cmethod, must not occur from here (inputs to is_supported_tailcall). // Capture values to later assert they don't change. called_is_supported_tailcall = TRUE; tailcall_method = method; tailcall_cmethod = cmethod; tailcall_fsig = fsig; tailcall_virtual = virtual_; if (virtual_generic) { if (virtual_generic_imt) { if (tailcall) { /* Prevent inlining of methods with tailcalls (the call stack would be altered) */ INLINE_FAILURE ("tailcall"); } common_call = TRUE; goto call_end; } MonoInst *this_temp, *this_arg_temp, *store; MonoInst *iargs [4]; this_temp = mono_compile_create_var (cfg, type_from_stack_type (sp [0]), OP_LOCAL); NEW_TEMPSTORE (cfg, store, this_temp->inst_c0, sp [0]); MONO_ADD_INS (cfg->cbb, store); /* FIXME: This should be a managed pointer */ this_arg_temp = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL); EMIT_NEW_TEMPLOAD (cfg, iargs [0], this_temp->inst_c0); iargs [1] = emit_get_rgctx_method (cfg, context_used, cmethod, MONO_RGCTX_INFO_METHOD); EMIT_NEW_TEMPLOADA (cfg, iargs [2], this_arg_temp->inst_c0); addr = mono_emit_jit_icall (cfg, mono_helper_compile_generic_method, iargs); EMIT_NEW_TEMPLOAD (cfg, sp [0], this_arg_temp->inst_c0); ins = (MonoInst*)mini_emit_calli (cfg, fsig, sp, addr, NULL, NULL); if (inst_tailcall) // FIXME mono_tailcall_print ("missed tailcall virtual generic %s -> %s\n", method->name, cmethod->name); goto call_end; } CHECK_CFG_ERROR; /* Tail recursion elimination */ if (((cfg->opt & MONO_OPT_TAILCALL) || inst_tailcall) && il_op == MONO_CEE_CALL && cmethod == method && next_ip < end && next_ip [0] == CEE_RET && !vtable_arg) { gboolean has_vtargs = FALSE; int i; /* Prevent inlining of methods with tailcalls (the call stack would be altered) */ INLINE_FAILURE ("tailcall"); /* keep it simple */ for (i = fsig->param_count - 1; !has_vtargs && i >= 0; i--) has_vtargs = MONO_TYPE_ISSTRUCT (mono_method_signature_internal (cmethod)->params [i]); if (!has_vtargs) { if (need_seq_point) { 
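/* Emit the pending seq point before the argument stores below overwrite the incoming args. */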
emit_seq_point (cfg, method, ip, FALSE, TRUE); need_seq_point = FALSE; } for (i = 0; i < n; ++i) EMIT_NEW_ARGSTORE (cfg, ins, i, sp [i]); mini_profiler_emit_tail_call (cfg, cmethod); MONO_INST_NEW (cfg, ins, OP_BR); MONO_ADD_INS (cfg->cbb, ins); tblock = start_bblock->out_bb [0]; link_bblock (cfg, cfg->cbb, tblock); ins->inst_target_bb = tblock; start_new_bblock = 1; /* skip the CEE_RET, too */ if (ip_in_bb (cfg, cfg->cbb, next_ip)) skip_ret = TRUE; push_res = FALSE; need_seq_point = FALSE; goto call_end; } } inline_costs += CALL_COST * MIN(10, num_calls++); /* * Synchronized wrappers. * It's hard to determine where to replace a method with its synchronized * wrapper without causing an infinite recursion. The current solution is * to add the synchronized wrapper in the trampolines, and to * change the called method to a dummy wrapper, and resolve that wrapper * to the real method in mono_jit_compile_method (). */ if (cfg->method->wrapper_type == MONO_WRAPPER_SYNCHRONIZED) { MonoMethod *orig = mono_marshal_method_from_wrapper (cfg->method); if (cmethod == orig || (cmethod->is_inflated && mono_method_get_declaring_generic_method (cmethod) == orig)) { // FIXME? Does this write to cmethod impact tailcall_supported? Probably not. cmethod = mono_marshal_get_synchronized_inner_wrapper (cmethod); } } /* * Making generic calls out of gsharedvt methods. * This needs to be used for all generic calls, not just ones with a gsharedvt signature, to avoid * patching gshared method addresses into a gsharedvt method. */ if (make_generic_call_out_of_gsharedvt_method) { if (virtual_) { //if (mono_class_is_interface (cmethod->klass)) //GSHAREDVT_FAILURE (il_op); // disable for possible remoting calls if (fsig->hasthis && method->klass == mono_defaults.object_class) GSHAREDVT_FAILURE (il_op); if (fsig->generic_param_count) { /* virtual generic call */ g_assert (!imt_arg); g_assert (will_have_imt_arg); /* Same as the virtual generic case above */ imt_arg = emit_get_rgctx_method (cfg, context_used, cmethod, MONO_RGCTX_INFO_METHOD); g_assert (imt_arg); } else if (mono_class_is_interface (cmethod->klass) && !imt_arg) { /* This can happen when we call a fully instantiated iface method */ g_assert (will_have_imt_arg); imt_arg = emit_get_rgctx_method (cfg, context_used, cmethod, MONO_RGCTX_INFO_METHOD); g_assert (imt_arg); } /* This is not needed, as the trampoline code will pass one, and it might be passed in the same reg as the imt arg */ vtable_arg = NULL; } if ((m_class_get_parent (cmethod->klass) == mono_defaults.multicastdelegate_class) && (!strcmp (cmethod->name, "Invoke"))) keep_this_alive = sp [0]; MonoRgctxInfoType info_type; if (virtual_ && (cmethod->flags & METHOD_ATTRIBUTE_VIRTUAL)) info_type = MONO_RGCTX_INFO_METHOD_GSHAREDVT_OUT_TRAMPOLINE_VIRT; else info_type = MONO_RGCTX_INFO_METHOD_GSHAREDVT_OUT_TRAMPOLINE; addr = emit_get_rgctx_gsharedvt_call (cfg, context_used, fsig, cmethod, info_type); if (cfg->llvm_only) { // FIXME: Avoid initializing vtable_arg ins = mini_emit_llvmonly_calli (cfg, fsig, sp, addr); if (inst_tailcall) // FIXME mono_tailcall_print ("missed tailcall llvmonly gsharedvt %s -> %s\n", method->name, cmethod->name); } else { tailcall = tailcall_calli; ins = (MonoInst*)mini_emit_calli_full (cfg, fsig, sp, addr, imt_arg, vtable_arg, tailcall); tailcall_remove_ret |= tailcall; } goto call_end; } /* Generic sharing */ /* * Calls to generic methods from shared code cannot go through the trampoline infrastructure * in some cases, because the called method might end up being different on every 
call. * Load the called method address from the rgctx and do an indirect call in these cases. * Use this if the callee is gsharedvt sharable too, since * at runtime we might find an instantiation so the call cannot * be patched (the 'no_patch' code path in mini-trampolines.c). */ gboolean gshared_indirect; gshared_indirect = context_used && !imt_arg && !array_rank && !delegate_invoke; if (gshared_indirect) gshared_indirect = (!mono_method_is_generic_sharable_full (cmethod, TRUE, FALSE, FALSE) || !mono_class_generic_sharing_enabled (cmethod->klass) || gshared_static_virtual); if (gshared_indirect) gshared_indirect = (!virtual_ || MONO_METHOD_IS_FINAL (cmethod) || !(cmethod->flags & METHOD_ATTRIBUTE_VIRTUAL)); if (gshared_indirect) { INLINE_FAILURE ("gshared"); g_assert (cfg->gshared && cmethod); g_assert (!addr); if (fsig->hasthis) MONO_EMIT_NEW_CHECK_THIS (cfg, sp [0]->dreg); if (cfg->llvm_only) { if (cfg->gsharedvt && mini_is_gsharedvt_variable_signature (fsig)) { /* Handled in handle_constrained_gsharedvt_call () */ g_assert (!gshared_static_virtual); addr = emit_get_rgctx_method (cfg, context_used, cmethod, MONO_RGCTX_INFO_GSHAREDVT_OUT_WRAPPER); } else { if (gshared_static_virtual) addr = emit_get_rgctx_virt_method (cfg, -1, constrained_class, cmethod, MONO_RGCTX_INFO_VIRT_METHOD_CODE); else addr = emit_get_rgctx_method (cfg, context_used, cmethod, MONO_RGCTX_INFO_METHOD_FTNDESC); } // FIXME: Avoid initializing imt_arg/vtable_arg ins = mini_emit_llvmonly_calli (cfg, fsig, sp, addr); if (inst_tailcall) // FIXME mono_tailcall_print ("missed tailcall context_used_llvmonly %s -> %s\n", method->name, cmethod->name); } else { if (gshared_static_virtual) { /* * cmethod is a static interface method, the actual called method at runtime * needs to be computed using constrained_class and cmethod. */ addr = emit_get_rgctx_virt_method (cfg, -1, constrained_class, cmethod, MONO_RGCTX_INFO_VIRT_METHOD_CODE); } else { addr = emit_get_rgctx_method (cfg, context_used, cmethod, MONO_RGCTX_INFO_GENERIC_METHOD_CODE); } if (inst_tailcall) mono_tailcall_print ("%s tailcall_calli#2 %s -> %s\n", tailcall_calli ? 
"making" : "missed", method->name, cmethod->name); tailcall = tailcall_calli; ins = (MonoInst*)mini_emit_calli_full (cfg, fsig, sp, addr, imt_arg, vtable_arg, tailcall); tailcall_remove_ret |= tailcall; } goto call_end; } /* Direct calls to icalls */ if (direct_icall) { MonoMethod *wrapper; int costs; /* Inline the wrapper */ wrapper = mono_marshal_get_native_wrapper (cmethod, TRUE, cfg->compile_aot); costs = inline_method (cfg, wrapper, fsig, sp, ip, cfg->real_offset, TRUE, NULL); g_assert (costs > 0); cfg->real_offset += 5; if (!MONO_TYPE_IS_VOID (fsig->ret)) /* *sp is already set by inline_method */ ins = *sp; inline_costs += costs; if (inst_tailcall) // FIXME mono_tailcall_print ("missed tailcall direct_icall %s -> %s\n", method->name, cmethod->name); goto call_end; } /* Array methods */ if (array_rank) { MonoInst *addr; if (strcmp (cmethod->name, "Set") == 0) { /* array Set */ MonoInst *val = sp [fsig->param_count]; if (val->type == STACK_OBJ) { MonoInst *iargs [ ] = { sp [0], val }; mono_emit_jit_icall (cfg, mono_helper_stelem_ref_check, iargs); } addr = mini_emit_ldelema_ins (cfg, cmethod, sp, ip, TRUE); if (!mini_debug_options.weak_memory_model && val->type == STACK_OBJ) mini_emit_memory_barrier (cfg, MONO_MEMORY_BARRIER_REL); EMIT_NEW_STORE_MEMBASE_TYPE (cfg, ins, fsig->params [fsig->param_count - 1], addr->dreg, 0, val->dreg); if (cfg->gen_write_barriers && val->type == STACK_OBJ && !MONO_INS_IS_PCONST_NULL (val)) mini_emit_write_barrier (cfg, addr, val); if (cfg->gen_write_barriers && mini_is_gsharedvt_klass (cmethod->klass)) GSHAREDVT_FAILURE (il_op); } else if (strcmp (cmethod->name, "Get") == 0) { /* array Get */ addr = mini_emit_ldelema_ins (cfg, cmethod, sp, ip, FALSE); EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, fsig->ret, addr->dreg, 0); } else if (strcmp (cmethod->name, "Address") == 0) { /* array Address */ if (!m_class_is_valuetype (m_class_get_element_class (cmethod->klass)) && !readonly) mini_emit_check_array_type (cfg, sp [0], cmethod->klass); CHECK_TYPELOAD (cmethod->klass); readonly = FALSE; addr = mini_emit_ldelema_ins (cfg, cmethod, sp, ip, FALSE); ins = addr; } else { g_assert_not_reached (); } emit_widen = FALSE; if (inst_tailcall) // FIXME mono_tailcall_print ("missed tailcall array_rank %s -> %s\n", method->name, cmethod->name); goto call_end; } ins = mini_redirect_call (cfg, cmethod, fsig, sp, virtual_ ? sp [0] : NULL); if (ins) { if (inst_tailcall) // FIXME mono_tailcall_print ("missed tailcall redirect %s -> %s\n", method->name, cmethod->name); goto call_end; } /* Tail prefix / tailcall optimization */ if (tailcall) { /* Prevent inlining of methods with tailcalls (the call stack would be altered) */ INLINE_FAILURE ("tailcall"); } /* * Virtual calls in llvm-only mode. 
*/ if (cfg->llvm_only && virtual_ && cmethod && (cmethod->flags & METHOD_ATTRIBUTE_VIRTUAL)) { ins = mini_emit_llvmonly_virtual_call (cfg, cmethod, fsig, context_used, sp); goto call_end; } /* Common call */ if (!(cfg->opt & MONO_OPT_AGGRESSIVE_INLINING) && !(method->iflags & METHOD_IMPL_ATTRIBUTE_AGGRESSIVE_INLINING) && !(cmethod->iflags & METHOD_IMPL_ATTRIBUTE_AGGRESSIVE_INLINING) && !method_does_not_return (cmethod)) INLINE_FAILURE ("call"); common_call = TRUE; #ifdef TARGET_WASM /* Push an LMF so these frames can be enumerated during stack walks by mono_arch_unwind_frame () */ if (needs_stack_walk && !cfg->deopt) { MonoInst *method_ins; int lmf_reg; emit_push_lmf (cfg); EMIT_NEW_VARLOADA (cfg, ins, cfg->lmf_var, NULL); lmf_reg = ins->dreg; /* The lmf->method field will be used to look up the MonoJitInfo for this method */ method_ins = emit_get_rgctx_method (cfg, mono_method_check_context_used (cfg->method), cfg->method, MONO_RGCTX_INFO_METHOD); EMIT_NEW_STORE_MEMBASE (cfg, ins, OP_STORE_MEMBASE_REG, lmf_reg, MONO_STRUCT_OFFSET (MonoLMF, method), method_ins->dreg); } #endif call_end: // Check that the decision to tailcall would not have changed. g_assert (!called_is_supported_tailcall || tailcall_method == method); // FIXME? cmethod does change, weaken the assert if we weren't tailcalling anyway. // If this still fails, restructure the code, or call tailcall_supported again and assert no change. g_assert (!called_is_supported_tailcall || !tailcall || tailcall_cmethod == cmethod); g_assert (!called_is_supported_tailcall || tailcall_fsig == fsig); g_assert (!called_is_supported_tailcall || tailcall_virtual == virtual_); g_assert (!called_is_supported_tailcall || tailcall_extra_arg == (vtable_arg || imt_arg || will_have_imt_arg || mono_class_is_interface (cmethod->klass))); if (common_call) // FIXME goto call_end && !common_call often skips tailcall processing. ins = mini_emit_method_call_full (cfg, cmethod, fsig, tailcall, sp, virtual_ ? sp [0] : NULL, imt_arg, vtable_arg); /* * Handle devirt of some A.B.C calls by replacing the result of A.B with a OP_TYPED_OBJREF instruction, so the .C * call can be devirtualized above. */ if (cmethod) ins = handle_call_res_devirt (cfg, cmethod, ins); #ifdef TARGET_WASM if (common_call && needs_stack_walk && !cfg->deopt) /* If an exception is thrown, the LMF is popped by a call to mini_llvmonly_pop_lmf () */ emit_pop_lmf (cfg); #endif if (noreturn) { MONO_INST_NEW (cfg, ins, OP_NOT_REACHED); MONO_ADD_INS (cfg->cbb, ins); } calli_end: if ((tailcall_remove_ret || (common_call && tailcall)) && !cfg->llvm_only) { link_bblock (cfg, cfg->cbb, end_bblock); start_new_bblock = 1; // FIXME: Eliminate unreachable epilogs /* * OP_TAILCALL has no return value, so skip the CEE_RET if it is * only reachable from this call. */ GET_BBLOCK (cfg, tblock, next_ip); if (tblock == cfg->cbb || tblock->in_count == 0) skip_ret = TRUE; push_res = FALSE; need_seq_point = FALSE; } if (ins_flag & MONO_INST_TAILCALL) mini_test_tailcall (cfg, tailcall); /* End of call, INS should contain the result of the call, if any */ if (push_res && !MONO_TYPE_IS_VOID (fsig->ret)) { g_assert (ins); if (emit_widen) *sp++ = mono_emit_widen_call_res (cfg, ins, fsig); else *sp++ = ins; } if (save_last_error) { save_last_error = FALSE; #ifdef TARGET_WIN32 // Making icalls etc could clobber the value so emit inline code // to read last error on Windows. 
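// The value read by OP_GET_LAST_ERROR is handed to mono_marshal_set_last_error_windows below.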
MONO_INST_NEW (cfg, ins, OP_GET_LAST_ERROR); ins->dreg = alloc_dreg (cfg, STACK_I4); ins->type = STACK_I4; MONO_ADD_INS (cfg->cbb, ins); mono_emit_jit_icall (cfg, mono_marshal_set_last_error_windows, &ins); #else mono_emit_jit_icall (cfg, mono_marshal_set_last_error, NULL); #endif } if (keep_this_alive) { MonoInst *dummy_use; /* See mini_emit_method_call_full () */ EMIT_NEW_DUMMY_USE (cfg, dummy_use, keep_this_alive); } if (cfg->llvm_only && cmethod && method_needs_stack_walk (cfg, cmethod)) { /* * Clang can convert these calls to tailcalls which screw up the stack * walk. This happens even when the -fno-optimize-sibling-calls * option is passed to clang. * Work around this by emitting a dummy call. */ mono_emit_jit_icall (cfg, mono_dummy_jit_icall, NULL); } CHECK_CFG_EXCEPTION; if (skip_ret) { // FIXME When not followed by CEE_RET, correct behavior is to raise an exception. g_assert (next_ip [0] == CEE_RET); next_ip += 1; il_op = MonoOpcodeEnum_Invalid; // Call or ret? Unclear. } ins_flag = 0; constrained_class = NULL; if (need_seq_point) { // check if this is a nested call and remove the non_empty_stack of the last call, only for non-native methods if (!(method->flags & METHOD_IMPL_ATTRIBUTE_NATIVE)) { if (emitted_funccall_seq_point) { if (cfg->last_seq_point) cfg->last_seq_point->flags |= MONO_INST_NESTED_CALL; } else emitted_funccall_seq_point = TRUE; } emit_seq_point (cfg, method, next_ip, FALSE, TRUE); } break; } case MONO_CEE_RET: if (!detached_before_ret) mini_profiler_emit_leave (cfg, sig->ret->type != MONO_TYPE_VOID ? sp [-1] : NULL); g_assert (!method_does_not_return (method)); if (cfg->method != method) { /* return from inlined method */ /* * If in_count == 0, that means the ret is unreachable due to * being preceded by a throw. In that case, inline_method () will * handle setting the return value * (test case: test_0_inline_throw ()). */ if (return_var && cfg->cbb->in_count) { MonoType *ret_type = mono_method_signature_internal (method)->ret; MonoInst *store; CHECK_STACK (1); --sp; *sp = convert_value (cfg, ret_type, *sp); if ((method->wrapper_type == MONO_WRAPPER_DYNAMIC_METHOD || method->wrapper_type == MONO_WRAPPER_NONE) && target_type_is_incompatible (cfg, ret_type, *sp)) UNVERIFIED; //g_assert (returnvar != -1); EMIT_NEW_TEMPSTORE (cfg, store, return_var->inst_c0, *sp); cfg->ret_var_set = TRUE; } } else { if (cfg->lmf_var && cfg->cbb->in_count && (!cfg->llvm_only || cfg->deopt)) emit_pop_lmf (cfg); if (cfg->ret) { MonoType *ret_type = mini_get_underlying_type (mono_method_signature_internal (method)->ret); if (seq_points && !sym_seq_points) { /* * Place a seq point here too even though the IL stack is not * empty, so a step over on * call <FOO> * ret * will work correctly. 
*/ NEW_SEQ_POINT (cfg, ins, ip - header->code, TRUE); MONO_ADD_INS (cfg->cbb, ins); } g_assert (!return_var); CHECK_STACK (1); --sp; *sp = convert_value (cfg, ret_type, *sp); if ((method->wrapper_type == MONO_WRAPPER_DYNAMIC_METHOD || method->wrapper_type == MONO_WRAPPER_NONE) && target_type_is_incompatible (cfg, ret_type, *sp)) UNVERIFIED; emit_setret (cfg, *sp); } } if (sp != stack_start) UNVERIFIED; MONO_INST_NEW (cfg, ins, OP_BR); ins->inst_target_bb = end_bblock; MONO_ADD_INS (cfg->cbb, ins); link_bblock (cfg, cfg->cbb, end_bblock); start_new_bblock = 1; break; case MONO_CEE_BR_S: MONO_INST_NEW (cfg, ins, OP_BR); GET_BBLOCK (cfg, tblock, target); link_bblock (cfg, cfg->cbb, tblock); ins->inst_target_bb = tblock; if (sp != stack_start) { handle_stack_args (cfg, stack_start, sp - stack_start); sp = stack_start; CHECK_UNVERIFIABLE (cfg); } MONO_ADD_INS (cfg->cbb, ins); start_new_bblock = 1; inline_costs += BRANCH_COST; break; case MONO_CEE_BEQ_S: case MONO_CEE_BGE_S: case MONO_CEE_BGT_S: case MONO_CEE_BLE_S: case MONO_CEE_BLT_S: case MONO_CEE_BNE_UN_S: case MONO_CEE_BGE_UN_S: case MONO_CEE_BGT_UN_S: case MONO_CEE_BLE_UN_S: case MONO_CEE_BLT_UN_S: MONO_INST_NEW (cfg, ins, il_op + BIG_BRANCH_OFFSET); ADD_BINCOND (NULL); sp = stack_start; inline_costs += BRANCH_COST; break; case MONO_CEE_BR: MONO_INST_NEW (cfg, ins, OP_BR); GET_BBLOCK (cfg, tblock, target); link_bblock (cfg, cfg->cbb, tblock); ins->inst_target_bb = tblock; if (sp != stack_start) { handle_stack_args (cfg, stack_start, sp - stack_start); sp = stack_start; CHECK_UNVERIFIABLE (cfg); } MONO_ADD_INS (cfg->cbb, ins); start_new_bblock = 1; inline_costs += BRANCH_COST; break; case MONO_CEE_BRFALSE_S: case MONO_CEE_BRTRUE_S: case MONO_CEE_BRFALSE: case MONO_CEE_BRTRUE: { MonoInst *cmp; gboolean is_true = il_op == MONO_CEE_BRTRUE_S || il_op == MONO_CEE_BRTRUE; if (sp [-1]->type == STACK_VTYPE || sp [-1]->type == STACK_R8) UNVERIFIED; sp--; GET_BBLOCK (cfg, tblock, target); link_bblock (cfg, cfg->cbb, tblock); GET_BBLOCK (cfg, tblock, next_ip); link_bblock (cfg, cfg->cbb, tblock); if (sp != stack_start) { handle_stack_args (cfg, stack_start, sp - stack_start); CHECK_UNVERIFIABLE (cfg); } MONO_INST_NEW(cfg, cmp, OP_ICOMPARE_IMM); cmp->sreg1 = sp [0]->dreg; type_from_op (cfg, cmp, sp [0], NULL); CHECK_TYPE (cmp); #if SIZEOF_REGISTER == 4 if (cmp->opcode == OP_LCOMPARE_IMM) { /* Convert it to OP_LCOMPARE */ MONO_INST_NEW (cfg, ins, OP_I8CONST); ins->type = STACK_I8; ins->dreg = alloc_dreg (cfg, STACK_I8); ins->inst_l = 0; MONO_ADD_INS (cfg->cbb, ins); cmp->opcode = OP_LCOMPARE; cmp->sreg2 = ins->dreg; } #endif MONO_ADD_INS (cfg->cbb, cmp); MONO_INST_NEW (cfg, ins, is_true ? 
CEE_BNE_UN : CEE_BEQ); type_from_op (cfg, ins, sp [0], NULL); MONO_ADD_INS (cfg->cbb, ins); ins->inst_many_bb = (MonoBasicBlock **)mono_mempool_alloc (cfg->mempool, sizeof (gpointer) * 2); GET_BBLOCK (cfg, tblock, target); ins->inst_true_bb = tblock; GET_BBLOCK (cfg, tblock, next_ip); ins->inst_false_bb = tblock; start_new_bblock = 2; sp = stack_start; inline_costs += BRANCH_COST; break; } case MONO_CEE_BEQ: case MONO_CEE_BGE: case MONO_CEE_BGT: case MONO_CEE_BLE: case MONO_CEE_BLT: case MONO_CEE_BNE_UN: case MONO_CEE_BGE_UN: case MONO_CEE_BGT_UN: case MONO_CEE_BLE_UN: case MONO_CEE_BLT_UN: MONO_INST_NEW (cfg, ins, il_op); ADD_BINCOND (NULL); sp = stack_start; inline_costs += BRANCH_COST; break; case MONO_CEE_SWITCH: { MonoInst *src1; MonoBasicBlock **targets; MonoBasicBlock *default_bblock; MonoJumpInfoBBTable *table; int offset_reg = alloc_preg (cfg); int target_reg = alloc_preg (cfg); int table_reg = alloc_preg (cfg); int sum_reg = alloc_preg (cfg); gboolean use_op_switch; n = read32 (ip + 1); --sp; src1 = sp [0]; if ((src1->type != STACK_I4) && (src1->type != STACK_PTR)) UNVERIFIED; ip += 5; GET_BBLOCK (cfg, default_bblock, next_ip); default_bblock->flags |= BB_INDIRECT_JUMP_TARGET; targets = (MonoBasicBlock **)mono_mempool_alloc (cfg->mempool, sizeof (MonoBasicBlock*) * n); for (i = 0; i < n; ++i) { GET_BBLOCK (cfg, tblock, next_ip + (gint32)read32 (ip)); targets [i] = tblock; targets [i]->flags |= BB_INDIRECT_JUMP_TARGET; ip += 4; } if (sp != stack_start) { /* * Link the current bb with the targets as well, so handle_stack_args * will set their in_stack correctly. */ link_bblock (cfg, cfg->cbb, default_bblock); for (i = 0; i < n; ++i) link_bblock (cfg, cfg->cbb, targets [i]); handle_stack_args (cfg, stack_start, sp - stack_start); sp = stack_start; CHECK_UNVERIFIABLE (cfg); /* Undo the links */ mono_unlink_bblock (cfg, cfg->cbb, default_bblock); for (i = 0; i < n; ++i) mono_unlink_bblock (cfg, cfg->cbb, targets [i]); } MONO_EMIT_NEW_BIALU_IMM (cfg, OP_ICOMPARE_IMM, -1, src1->dreg, n); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_IBGE_UN, default_bblock); for (i = 0; i < n; ++i) link_bblock (cfg, cfg->cbb, targets [i]); table = (MonoJumpInfoBBTable *)mono_mempool_alloc (cfg->mempool, sizeof (MonoJumpInfoBBTable)); table->table = targets; table->table_size = n; use_op_switch = FALSE; #ifdef TARGET_ARM /* ARM implements SWITCH statements differently */ /* FIXME: Make it use the generic implementation */ if (!cfg->compile_aot) use_op_switch = TRUE; #endif if (COMPILE_LLVM (cfg)) use_op_switch = TRUE; cfg->cbb->has_jump_table = 1; if (use_op_switch) { MONO_INST_NEW (cfg, ins, OP_SWITCH); ins->sreg1 = src1->dreg; ins->inst_p0 = table; ins->inst_many_bb = targets; ins->klass = (MonoClass *)GUINT_TO_POINTER (n); MONO_ADD_INS (cfg->cbb, ins); } else { if (TARGET_SIZEOF_VOID_P == 8) MONO_EMIT_NEW_BIALU_IMM (cfg, OP_SHL_IMM, offset_reg, src1->dreg, 3); else MONO_EMIT_NEW_BIALU_IMM (cfg, OP_SHL_IMM, offset_reg, src1->dreg, 2); #if SIZEOF_REGISTER == 8 /* The upper word might not be zero, and we add it to a 64 bit address later */ MONO_EMIT_NEW_UNALU (cfg, OP_ZEXT_I4, offset_reg, offset_reg); #endif if (cfg->compile_aot) { MONO_EMIT_NEW_AOTCONST (cfg, table_reg, table, MONO_PATCH_INFO_SWITCH); } else { MONO_INST_NEW (cfg, ins, OP_JUMP_TABLE); ins->inst_c1 = MONO_PATCH_INFO_SWITCH; ins->inst_p0 = table; ins->dreg = table_reg; MONO_ADD_INS (cfg->cbb, ins); } /* FIXME: Use load_memindex */ MONO_EMIT_NEW_BIALU (cfg, OP_PADD, sum_reg, table_reg, offset_reg); MONO_EMIT_NEW_LOAD_MEMBASE (cfg, target_reg, 
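/* load the target bblock address from the jump table entry at table_reg + offset_reg */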
sum_reg, 0); MONO_EMIT_NEW_UNALU (cfg, OP_BR_REG, -1, target_reg); } start_new_bblock = 1; inline_costs += BRANCH_COST * 2; break; } case MONO_CEE_LDIND_I1: case MONO_CEE_LDIND_U1: case MONO_CEE_LDIND_I2: case MONO_CEE_LDIND_U2: case MONO_CEE_LDIND_I4: case MONO_CEE_LDIND_U4: case MONO_CEE_LDIND_I8: case MONO_CEE_LDIND_I: case MONO_CEE_LDIND_R4: case MONO_CEE_LDIND_R8: case MONO_CEE_LDIND_REF: --sp; if (!(ins_flag & MONO_INST_NONULLCHECK)) MONO_EMIT_NULL_CHECK (cfg, sp [0]->dreg, FALSE); ins = mini_emit_memory_load (cfg, m_class_get_byval_arg (ldind_to_type (il_op)), sp [0], 0, ins_flag); *sp++ = ins; ins_flag = 0; break; case MONO_CEE_STIND_REF: case MONO_CEE_STIND_I1: case MONO_CEE_STIND_I2: case MONO_CEE_STIND_I4: case MONO_CEE_STIND_I8: case MONO_CEE_STIND_R4: case MONO_CEE_STIND_R8: case MONO_CEE_STIND_I: { sp -= 2; if (il_op == MONO_CEE_STIND_REF && sp [1]->type != STACK_OBJ) { /* stind.ref must only be used with object references. */ UNVERIFIED; } if (il_op == MONO_CEE_STIND_R4 && sp [1]->type == STACK_R8) sp [1] = convert_value (cfg, m_class_get_byval_arg (mono_defaults.single_class), sp [1]); mini_emit_memory_store (cfg, m_class_get_byval_arg (stind_to_type (il_op)), sp [0], sp [1], ins_flag); ins_flag = 0; inline_costs += 1; break; } case MONO_CEE_MUL: MONO_INST_NEW (cfg, ins, il_op); sp -= 2; ins->sreg1 = sp [0]->dreg; ins->sreg2 = sp [1]->dreg; type_from_op (cfg, ins, sp [0], sp [1]); CHECK_TYPE (ins); ins->dreg = alloc_dreg ((cfg), (MonoStackType)(ins)->type); /* Use the immediate opcodes if possible */ int imm_opcode; imm_opcode = mono_op_to_op_imm_noemul (ins->opcode); if ((sp [1]->opcode == OP_ICONST) && mono_arch_is_inst_imm (ins->opcode, imm_opcode, sp [1]->inst_c0)) { if (imm_opcode != -1) { ins->opcode = imm_opcode; ins->inst_p1 = (gpointer)(gssize)(sp [1]->inst_c0); ins->sreg2 = -1; NULLIFY_INS (sp [1]); } } MONO_ADD_INS ((cfg)->cbb, (ins)); *sp++ = mono_decompose_opcode (cfg, ins); break; case MONO_CEE_ADD: case MONO_CEE_SUB: case MONO_CEE_DIV: case MONO_CEE_DIV_UN: case MONO_CEE_REM: case MONO_CEE_REM_UN: case MONO_CEE_AND: case MONO_CEE_OR: case MONO_CEE_XOR: case MONO_CEE_SHL: case MONO_CEE_SHR: case MONO_CEE_SHR_UN: { MONO_INST_NEW (cfg, ins, il_op); sp -= 2; ins->sreg1 = sp [0]->dreg; ins->sreg2 = sp [1]->dreg; type_from_op (cfg, ins, sp [0], sp [1]); CHECK_TYPE (ins); add_widen_op (cfg, ins, &sp [0], &sp [1]); ins->dreg = alloc_dreg ((cfg), (MonoStackType)(ins)->type); /* Use the immediate opcodes if possible */ int imm_opcode; imm_opcode = mono_op_to_op_imm_noemul (ins->opcode); if (((sp [1]->opcode == OP_ICONST) || (sp [1]->opcode == OP_I8CONST)) && mono_arch_is_inst_imm (ins->opcode, imm_opcode, sp [1]->opcode == OP_ICONST ? 
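/* read the constant from inst_c0 for a 32-bit ICONST and from inst_l for a 64-bit I8CONST */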
sp [1]->inst_c0 : sp [1]->inst_l)) { if (imm_opcode != -1) { ins->opcode = imm_opcode; if (sp [1]->opcode == OP_I8CONST) { #if SIZEOF_REGISTER == 8 ins->inst_imm = sp [1]->inst_l; #else ins->inst_l = sp [1]->inst_l; #endif } else { ins->inst_imm = (gssize)(sp [1]->inst_c0); } ins->sreg2 = -1; /* Might be followed by an instruction added by add_widen_op */ if (sp [1]->next == NULL) NULLIFY_INS (sp [1]); } } MONO_ADD_INS ((cfg)->cbb, (ins)); *sp++ = mono_decompose_opcode (cfg, ins); break; } case MONO_CEE_NEG: case MONO_CEE_NOT: case MONO_CEE_CONV_I1: case MONO_CEE_CONV_I2: case MONO_CEE_CONV_I4: case MONO_CEE_CONV_R4: case MONO_CEE_CONV_R8: case MONO_CEE_CONV_U4: case MONO_CEE_CONV_I8: case MONO_CEE_CONV_U8: case MONO_CEE_CONV_OVF_I8: case MONO_CEE_CONV_OVF_U8: case MONO_CEE_CONV_R_UN: /* Special case this earlier so we have long constants in the IR */ if ((il_op == MONO_CEE_CONV_I8 || il_op == MONO_CEE_CONV_U8) && (sp [-1]->opcode == OP_ICONST)) { int data = sp [-1]->inst_c0; sp [-1]->opcode = OP_I8CONST; sp [-1]->type = STACK_I8; #if SIZEOF_REGISTER == 8 if (il_op == MONO_CEE_CONV_U8) sp [-1]->inst_c0 = (guint32)data; else sp [-1]->inst_c0 = data; #else if (il_op == MONO_CEE_CONV_U8) sp [-1]->inst_l = (guint32)data; else sp [-1]->inst_l = data; #endif sp [-1]->dreg = alloc_dreg (cfg, STACK_I8); } else { ADD_UNOP (il_op); } break; case MONO_CEE_CONV_OVF_I4: case MONO_CEE_CONV_OVF_I1: case MONO_CEE_CONV_OVF_I2: case MONO_CEE_CONV_OVF_I: case MONO_CEE_CONV_OVF_I1_UN: case MONO_CEE_CONV_OVF_I2_UN: case MONO_CEE_CONV_OVF_I4_UN: case MONO_CEE_CONV_OVF_I8_UN: case MONO_CEE_CONV_OVF_I_UN: if (sp [-1]->type == STACK_R8 || sp [-1]->type == STACK_R4) { /* floats are always signed, _UN has no effect */ ADD_UNOP (CEE_CONV_OVF_I8); if (il_op == MONO_CEE_CONV_OVF_I1_UN) ADD_UNOP (MONO_CEE_CONV_OVF_I1); else if (il_op == MONO_CEE_CONV_OVF_I2_UN) ADD_UNOP (MONO_CEE_CONV_OVF_I2); else if (il_op == MONO_CEE_CONV_OVF_I4_UN) ADD_UNOP (MONO_CEE_CONV_OVF_I4); else if (il_op == MONO_CEE_CONV_OVF_I8_UN) ; else ADD_UNOP (il_op); } else { ADD_UNOP (il_op); } break; case MONO_CEE_CONV_OVF_U1: case MONO_CEE_CONV_OVF_U2: case MONO_CEE_CONV_OVF_U4: case MONO_CEE_CONV_OVF_U: case MONO_CEE_CONV_OVF_U1_UN: case MONO_CEE_CONV_OVF_U2_UN: case MONO_CEE_CONV_OVF_U4_UN: case MONO_CEE_CONV_OVF_U8_UN: case MONO_CEE_CONV_OVF_U_UN: if (sp [-1]->type == STACK_R8 || sp [-1]->type == STACK_R4) { /* floats are always signed, _UN has no effect */ ADD_UNOP (CEE_CONV_OVF_U8); ADD_UNOP (il_op); } else { ADD_UNOP (il_op); } break; case MONO_CEE_CONV_U2: case MONO_CEE_CONV_U1: case MONO_CEE_CONV_U: case MONO_CEE_CONV_I: ADD_UNOP (il_op); CHECK_CFG_EXCEPTION; break; case MONO_CEE_ADD_OVF: case MONO_CEE_ADD_OVF_UN: case MONO_CEE_MUL_OVF: case MONO_CEE_MUL_OVF_UN: case MONO_CEE_SUB_OVF: case MONO_CEE_SUB_OVF_UN: MONO_INST_NEW (cfg, ins, il_op); sp -= 2; ins->sreg1 = sp [0]->dreg; ins->sreg2 = sp [1]->dreg; type_from_op (cfg, ins, sp [0], sp [1]); CHECK_TYPE (ins); if (ovf_exc) ins->inst_exc_name = ovf_exc; else ins->inst_exc_name = "OverflowException"; /* Have to insert a widening op */ add_widen_op (cfg, ins, &sp [0], &sp [1]); ins->dreg = alloc_dreg (cfg, (MonoStackType)(ins)->type); MONO_ADD_INS ((cfg)->cbb, ins); /* The opcode might be emulated, so need to special case this */ if (ovf_exc && mono_find_jit_opcode_emulation (ins->opcode)) { switch (ins->opcode) { case OP_IMUL_OVF_UN: /* This opcode is just a placeholder, it will be emulated also */ ins->opcode = OP_IMUL_OVF_UN_OOM; break; case OP_LMUL_OVF_UN: /* This opcode is just a 
placeholder, it will be emulated also */ ins->opcode = OP_LMUL_OVF_UN_OOM; break; default: g_assert_not_reached (); } } ovf_exc = NULL; *sp++ = mono_decompose_opcode (cfg, ins); break; case MONO_CEE_CPOBJ: GSHAREDVT_FAILURE (il_op); GSHAREDVT_FAILURE (*ip); klass = mini_get_class (method, token, generic_context); CHECK_TYPELOAD (klass); sp -= 2; mini_emit_memory_copy (cfg, sp [0], sp [1], klass, FALSE, ins_flag); ins_flag = 0; break; case MONO_CEE_LDOBJ: { int loc_index = -1; int stloc_len = 0; --sp; klass = mini_get_class (method, token, generic_context); CHECK_TYPELOAD (klass); /* Optimize the common ldobj+stloc combination */ if (next_ip < end) { switch (next_ip [0]) { case MONO_CEE_STLOC_S: CHECK_OPSIZE (7); loc_index = next_ip [1]; stloc_len = 2; break; case MONO_CEE_STLOC_0: case MONO_CEE_STLOC_1: case MONO_CEE_STLOC_2: case MONO_CEE_STLOC_3: loc_index = next_ip [0] - CEE_STLOC_0; stloc_len = 1; break; default: break; } } if ((loc_index != -1) && ip_in_bb (cfg, cfg->cbb, next_ip)) { CHECK_LOCAL (loc_index); EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (klass), sp [0]->dreg, 0); ins->dreg = cfg->locals [loc_index]->dreg; ins->flags |= ins_flag; il_op = (MonoOpcodeEnum)next_ip [0]; next_ip += stloc_len; if (ins_flag & MONO_INST_VOLATILE) { /* Volatile loads have acquire semantics, see 12.6.7 in Ecma 335 */ mini_emit_memory_barrier (cfg, MONO_MEMORY_BARRIER_ACQ); } ins_flag = 0; break; } /* Optimize the ldobj+stobj combination */ if (next_ip + 4 < end && next_ip [0] == CEE_STOBJ && ip_in_bb (cfg, cfg->cbb, next_ip) && read32 (next_ip + 1) == token) { CHECK_STACK (1); sp --; mini_emit_memory_copy (cfg, sp [0], sp [1], klass, FALSE, ins_flag); il_op = (MonoOpcodeEnum)next_ip [0]; next_ip += 5; ins_flag = 0; break; } ins = mini_emit_memory_load (cfg, m_class_get_byval_arg (klass), sp [0], 0, ins_flag); *sp++ = ins; ins_flag = 0; inline_costs += 1; break; } case MONO_CEE_LDSTR: if (method->wrapper_type == MONO_WRAPPER_DYNAMIC_METHOD) { EMIT_NEW_PCONST (cfg, ins, mono_method_get_wrapper_data (method, n)); ins->type = STACK_OBJ; *sp = ins; } else if (method->wrapper_type != MONO_WRAPPER_NONE) { MonoInst *iargs [1]; char *str = (char *)mono_method_get_wrapper_data (method, n); if (cfg->compile_aot) EMIT_NEW_LDSTRLITCONST (cfg, iargs [0], str); else EMIT_NEW_PCONST (cfg, iargs [0], str); *sp = mono_emit_jit_icall (cfg, mono_string_new_wrapper_internal, iargs); } else { { if (cfg->cbb->out_of_line) { MonoInst *iargs [2]; if (image == mono_defaults.corlib) { /* * Avoid relocations in AOT and save some space by using a * version of helper_ldstr specialized to mscorlib. 
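 * The specialized helper only needs the token index, so no image argument has to be materialized.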
*/ EMIT_NEW_ICONST (cfg, iargs [0], mono_metadata_token_index (n)); *sp = mono_emit_jit_icall (cfg, mono_helper_ldstr_mscorlib, iargs); } else { /* Avoid creating the string object */ EMIT_NEW_IMAGECONST (cfg, iargs [0], image); EMIT_NEW_ICONST (cfg, iargs [1], mono_metadata_token_index (n)); *sp = mono_emit_jit_icall (cfg, mono_helper_ldstr, iargs); } } else if (cfg->compile_aot) { NEW_LDSTRCONST (cfg, ins, image, n); *sp = ins; MONO_ADD_INS (cfg->cbb, ins); } else { NEW_PCONST (cfg, ins, NULL); ins->type = STACK_OBJ; ins->inst_p0 = mono_ldstr_checked (image, mono_metadata_token_index (n), cfg->error); CHECK_CFG_ERROR; if (!ins->inst_p0) OUT_OF_MEMORY_FAILURE; *sp = ins; MONO_ADD_INS (cfg->cbb, ins); } } } sp++; break; case MONO_CEE_NEWOBJ: { MonoInst *iargs [2]; MonoMethodSignature *fsig; MonoInst this_ins; MonoInst *alloc; MonoInst *vtable_arg = NULL; cmethod = mini_get_method (cfg, method, token, NULL, generic_context); CHECK_CFG_ERROR; fsig = mono_method_get_signature_checked (cmethod, image, token, generic_context, cfg->error); CHECK_CFG_ERROR; mono_save_token_info (cfg, image, token, cmethod); if (!mono_class_init_internal (cmethod->klass)) TYPE_LOAD_ERROR (cmethod->klass); context_used = mini_method_check_context_used (cfg, cmethod); if (!dont_verify && !cfg->skip_visibility) { MonoMethod *cil_method = cmethod; MonoMethod *target_method = cil_method; if (method->is_inflated) { MonoGenericContainer *container = mono_method_get_generic_container(method_definition); MonoGenericContext *context = (container != NULL ? &container->context : NULL); target_method = mini_get_method_allow_open (method, token, NULL, context, cfg->error); CHECK_CFG_ERROR; } if (!mono_method_can_access_method (method_definition, target_method) && !mono_method_can_access_method (method, cil_method)) emit_method_access_failure (cfg, method, cil_method); } if (cfg->gshared && cmethod && cmethod->klass != method->klass && mono_class_is_ginst (cmethod->klass) && mono_method_is_generic_sharable (cmethod, TRUE) && mono_class_needs_cctor_run (cmethod->klass, method)) { emit_class_init (cfg, cmethod->klass); CHECK_TYPELOAD (cmethod->klass); } /* if (cfg->gsharedvt) { if (mini_is_gsharedvt_variable_signature (sig)) GSHAREDVT_FAILURE (il_op); } */ n = fsig->param_count; CHECK_STACK (n); /* * Generate smaller code for the common newobj <exception> instruction in * argument checking code. 
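 * Instead of a full newobj + ctor call we pass the type token (and up to two string
 * arguments) to one of the mono_create_corlib_exception_* icalls below.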
if (cfg->cbb->out_of_line && m_class_get_image (cmethod->klass) == mono_defaults.corlib && is_exception_class (cmethod->klass) && n <= 2 && ((n < 1) || (!m_type_is_byref (fsig->params [0]) && fsig->params [0]->type == MONO_TYPE_STRING)) && ((n < 2) || (!m_type_is_byref (fsig->params [1]) && fsig->params [1]->type == MONO_TYPE_STRING))) { MonoInst *iargs [3]; sp -= n; EMIT_NEW_ICONST (cfg, iargs [0], m_class_get_type_token (cmethod->klass)); switch (n) { case 0: *sp ++ = mono_emit_jit_icall (cfg, mono_create_corlib_exception_0, iargs); break; case 1: iargs [1] = sp [0]; *sp ++ = mono_emit_jit_icall (cfg, mono_create_corlib_exception_1, iargs); break; case 2: iargs [1] = sp [0]; iargs [2] = sp [1]; *sp ++ = mono_emit_jit_icall (cfg, mono_create_corlib_exception_2, iargs); break; default: g_assert_not_reached (); } inline_costs += 5; break; } /* move the args to allow room for 'this' in the first position */ while (n--) { --sp; sp [1] = sp [0]; } for (int i = 0; i < fsig->param_count; ++i) sp [i + fsig->hasthis] = convert_value (cfg, fsig->params [i], sp [i + fsig->hasthis]); /* check_call_signature () requires sp[0] to be set */ this_ins.type = STACK_OBJ; sp [0] = &this_ins; if (check_call_signature (cfg, fsig, sp)) UNVERIFIED; iargs [0] = NULL; if (mini_class_is_system_array (cmethod->klass)) { *sp = emit_get_rgctx_method (cfg, context_used, cmethod, MONO_RGCTX_INFO_METHOD); MonoJitICallId function = MONO_JIT_ICALL_ZeroIsReserved; int rank = m_class_get_rank (cmethod->klass); int n = fsig->param_count; /* Optimize the common cases, use ctor using length for each rank (no lbound). */ if (n == rank) { switch (n) { case 1: function = MONO_JIT_ICALL_mono_array_new_1; break; case 2: function = MONO_JIT_ICALL_mono_array_new_2; break; case 3: function = MONO_JIT_ICALL_mono_array_new_3; break; case 4: function = MONO_JIT_ICALL_mono_array_new_4; break; default: break; } } /* Regular case, rank > 4 or length, lbound specified per rank. */ if (function == MONO_JIT_ICALL_ZeroIsReserved) { // FIXME Maximum value of param_count? Realistically 64. Fits in imm? if (!array_new_localalloc_ins) { MONO_INST_NEW (cfg, array_new_localalloc_ins, OP_LOCALLOC_IMM); array_new_localalloc_ins->dreg = alloc_preg (cfg); cfg->flags |= MONO_CFG_HAS_ALLOCA; MONO_ADD_INS (init_localsbb, array_new_localalloc_ins); } array_new_localalloc_ins->inst_imm = MAX (array_new_localalloc_ins->inst_imm, n * sizeof (target_mgreg_t)); int dreg = array_new_localalloc_ins->dreg; if (2 * rank == n) { /* [lbound, length, lbound, length, ...] * mono_array_new_n_icall expects a non-interleaved list of * lbounds and lengths, so deinterleave here. */ for (int l = 0; l < 2; ++l) { int src = l; int dst = l * rank; for (int r = 0; r < rank; ++r, src += 2, ++dst) { NEW_STORE_MEMBASE (cfg, ins, OP_STORE_MEMBASE_REG, dreg, dst * sizeof (target_mgreg_t), sp [src + 1]->dreg); MONO_ADD_INS (cfg->cbb, ins); } } } else { /* [length, length, length, ...] */ for (int i = 0; i < n; ++i) { NEW_STORE_MEMBASE (cfg, ins, OP_STORE_MEMBASE_REG, dreg, i * sizeof (target_mgreg_t), sp [i + 1]->dreg); MONO_ADD_INS (cfg->cbb, ins); } } EMIT_NEW_ICONST (cfg, ins, n); sp [1] = ins; EMIT_NEW_UNALU (cfg, ins, OP_MOVE, alloc_preg (cfg), dreg); ins->type = STACK_PTR; sp [2] = ins; // FIXME Adjust sp by n - 3? Attempts failed.
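/* at this point sp [0] = method, sp [1] = number of values written to the buffer (n), sp [2] = pointer to the lengths/lbounds buffer */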
function = MONO_JIT_ICALL_mono_array_new_n_icall; } alloc = mono_emit_jit_icall_id (cfg, function, sp); } else if (cmethod->string_ctor) { g_assert (!context_used); g_assert (!vtable_arg); /* we simply pass a null pointer */ EMIT_NEW_PCONST (cfg, *sp, NULL); /* now call the string ctor */ alloc = mini_emit_method_call_full (cfg, cmethod, fsig, FALSE, sp, NULL, NULL, NULL); } else { if (m_class_is_valuetype (cmethod->klass)) { iargs [0] = mono_compile_create_var (cfg, m_class_get_byval_arg (cmethod->klass), OP_LOCAL); mini_emit_init_rvar (cfg, iargs [0]->dreg, m_class_get_byval_arg (cmethod->klass)); EMIT_NEW_TEMPLOADA (cfg, *sp, iargs [0]->inst_c0); alloc = NULL; /* * The code generated by mini_emit_virtual_call () expects * iargs [0] to be a boxed instance, but luckily the vcall * will be transformed into a normal call there. */ } else if (context_used) { alloc = handle_alloc (cfg, cmethod->klass, FALSE, context_used); *sp = alloc; } else { MonoVTable *vtable = NULL; if (!cfg->compile_aot) vtable = mono_class_vtable_checked (cmethod->klass, cfg->error); CHECK_CFG_ERROR; CHECK_TYPELOAD (cmethod->klass); /* * TypeInitializationExceptions thrown from the mono_runtime_class_init * call in mono_jit_runtime_invoke () can abort the finalizer thread. * As a workaround, we call class cctors before allocating objects. */ if (mini_field_access_needs_cctor_run (cfg, method, cmethod->klass, vtable) && !(g_slist_find (class_inits, cmethod->klass))) { emit_class_init (cfg, cmethod->klass); if (cfg->verbose_level > 2) printf ("class %s.%s needs init call for ctor\n", m_class_get_name_space (cmethod->klass), m_class_get_name (cmethod->klass)); class_inits = g_slist_prepend (class_inits, cmethod->klass); } alloc = handle_alloc (cfg, cmethod->klass, FALSE, 0); *sp = alloc; } CHECK_CFG_EXCEPTION; /* for handle_alloc */ if (alloc) MONO_EMIT_NEW_UNALU (cfg, OP_NOT_NULL, -1, alloc->dreg); /* Now call the actual ctor */ int ctor_inline_costs = 0; handle_ctor_call (cfg, cmethod, fsig, context_used, sp, ip, &ctor_inline_costs); // don't contribute to inline_costs if ctor has [MethodImpl(MethodImplOptions.AggressiveInlining)] if (!COMPILE_LLVM(cfg) || !(cmethod->iflags & METHOD_IMPL_ATTRIBUTE_AGGRESSIVE_INLINING)) inline_costs += ctor_inline_costs; CHECK_CFG_EXCEPTION; } if (alloc == NULL) { /* Valuetype */ EMIT_NEW_TEMPLOAD (cfg, ins, iargs [0]->inst_c0); mini_type_to_eval_stack_type (cfg, m_class_get_byval_arg (ins->klass), ins); *sp++= ins; } else { *sp++ = alloc; } inline_costs += 5; if (!(seq_point_locs && mono_bitset_test_fast (seq_point_locs, next_ip - header->code))) emit_seq_point (cfg, method, next_ip, FALSE, TRUE); break; } case MONO_CEE_CASTCLASS: case MONO_CEE_ISINST: { --sp; klass = mini_get_class (method, token, generic_context); CHECK_TYPELOAD (klass); if (sp [0]->type != STACK_OBJ) UNVERIFIED; MONO_INST_NEW (cfg, ins, (il_op == MONO_CEE_ISINST) ?
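/* isinst pushes NULL when the cast fails, castclass throws InvalidCastException */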
OP_ISINST : OP_CASTCLASS); ins->dreg = alloc_preg (cfg); ins->sreg1 = (*sp)->dreg; ins->klass = klass; ins->type = STACK_OBJ; MONO_ADD_INS (cfg->cbb, ins); CHECK_CFG_EXCEPTION; *sp++ = ins; cfg->flags |= MONO_CFG_HAS_TYPE_CHECK; break; } case MONO_CEE_UNBOX_ANY: { MonoInst *res, *addr; --sp; klass = mini_get_class (method, token, generic_context); CHECK_TYPELOAD (klass); mono_save_token_info (cfg, image, token, klass); context_used = mini_class_check_context_used (cfg, klass); if (mini_is_gsharedvt_klass (klass)) { res = handle_unbox_gsharedvt (cfg, klass, *sp); inline_costs += 2; } else if (mini_class_is_reference (klass)) { if (MONO_INS_IS_PCONST_NULL (*sp)) { EMIT_NEW_PCONST (cfg, res, NULL); res->type = STACK_OBJ; } else { MONO_INST_NEW (cfg, res, OP_CASTCLASS); res->dreg = alloc_preg (cfg); res->sreg1 = (*sp)->dreg; res->klass = klass; res->type = STACK_OBJ; MONO_ADD_INS (cfg->cbb, res); cfg->flags |= MONO_CFG_HAS_TYPE_CHECK; } } else if (mono_class_is_nullable (klass)) { res = handle_unbox_nullable (cfg, *sp, klass, context_used); } else { addr = mini_handle_unbox (cfg, klass, *sp, context_used); /* LDOBJ */ EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (klass), addr->dreg, 0); res = ins; inline_costs += 2; } *sp ++ = res; break; } case MONO_CEE_BOX: { MonoInst *val; MonoClass *enum_class; MonoMethod *has_flag; MonoMethodSignature *has_flag_sig; --sp; val = *sp; klass = mini_get_class (method, token, generic_context); CHECK_TYPELOAD (klass); mono_save_token_info (cfg, image, token, klass); context_used = mini_class_check_context_used (cfg, klass); if (mini_class_is_reference (klass)) { *sp++ = val; break; } val = convert_value (cfg, m_class_get_byval_arg (klass), val); if (klass == mono_defaults.void_class) UNVERIFIED; if (target_type_is_incompatible (cfg, m_class_get_byval_arg (klass), val)) UNVERIFIED; /* frequent check in generic code: box (struct), brtrue */ /* * Look for: * * <push int/long ptr> * <push int/long> * box MyFlags * constrained. MyFlags * callvirt instance bool class [mscorlib] System.Enum::HasFlag (class [mscorlib] System.Enum) * * If we find this sequence and the operand types on box and constrained * are equal, we can emit a specialized instruction sequence instead of * the very slow HasFlag () call. * This code sequence is generated by older mcs/csc, the newer one is handled in * emit_inst_for_method (). */ guint32 constrained_token; guint32 callvirt_token; if ((cfg->opt & MONO_OPT_INTRINS) && // FIXME ip_in_bb as we go? next_ip < end && ip_in_bb (cfg, cfg->cbb, next_ip) && (ip = il_read_constrained (next_ip, end, &constrained_token)) && ip_in_bb (cfg, cfg->cbb, ip) && (ip = il_read_callvirt (ip, end, &callvirt_token)) && ip_in_bb (cfg, cfg->cbb, ip) && m_class_is_enumtype (klass) && (enum_class = mini_get_class (method, constrained_token, generic_context)) && (has_flag = mini_get_method (cfg, method, callvirt_token, NULL, generic_context)) && has_flag->klass == mono_defaults.enum_class && !strcmp (has_flag->name, "HasFlag") && (has_flag_sig = mono_method_signature_internal (has_flag)) && has_flag_sig->hasthis && has_flag_sig->param_count == 1) { CHECK_TYPELOAD (enum_class); if (enum_class == klass) { MonoInst *enum_this, *enum_flag; next_ip = ip; il_op = MONO_CEE_CALLVIRT; --sp; enum_this = sp [0]; enum_flag = sp [1]; *sp++ = mini_handle_enum_has_flag (cfg, klass, enum_this, -1, enum_flag); break; } } guint32 unbox_any_token; /* * Common in generic code: * box T1, unbox.any T2.
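 * When T1 == T2 the two instructions cancel out and the value can be pushed unchanged.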
*/ if ((cfg->opt & MONO_OPT_INTRINS) && next_ip < end && ip_in_bb (cfg, cfg->cbb, next_ip) && (ip = il_read_unbox_any (next_ip, end, &unbox_any_token))) { MonoClass *unbox_klass = mini_get_class (method, unbox_any_token, generic_context); CHECK_TYPELOAD (unbox_klass); if (klass == unbox_klass) { next_ip = ip; *sp++ = val; break; } } // Optimize // // box // call object::GetType() // guint32 gettype_token; if ((ip = il_read_call(next_ip, end, &gettype_token)) && ip_in_bb (cfg, cfg->cbb, ip)) { MonoMethod* gettype_method = mini_get_method (cfg, method, gettype_token, NULL, generic_context); if (!strcmp (gettype_method->name, "GetType") && gettype_method->klass == mono_defaults.object_class) { mono_class_init_internal(klass); if (mono_class_get_checked (m_class_get_image (klass), m_class_get_type_token (klass), error) == klass) { if (cfg->compile_aot) { EMIT_NEW_TYPE_FROM_HANDLE_CONST (cfg, ins, m_class_get_image (klass), m_class_get_type_token (klass), generic_context); } else { MonoType *klass_type = m_class_get_byval_arg (klass); MonoReflectionType* reflection_type = mono_type_get_object_checked (klass_type, cfg->error); EMIT_NEW_PCONST (cfg, ins, reflection_type); } ins->type = STACK_OBJ; ins->klass = mono_defaults.systemtype_class; *sp++ = ins; next_ip = ip; break; } } } // Optimize // // box // ldnull // ceq (or cgt.un) // // to just // // ldc.i4.0 (or 1) guchar* ldnull_ip; if ((ldnull_ip = il_read_op (next_ip, end, CEE_LDNULL, MONO_CEE_LDNULL)) && ip_in_bb (cfg, cfg->cbb, ldnull_ip)) { gboolean is_eq = FALSE, is_neq = FALSE; if ((ip = il_read_op (ldnull_ip, end, CEE_PREFIX1, MONO_CEE_CEQ))) is_eq = TRUE; else if ((ip = il_read_op (ldnull_ip, end, CEE_PREFIX1, MONO_CEE_CGT_UN))) is_neq = TRUE; if ((is_eq || is_neq) && ip_in_bb (cfg, cfg->cbb, ip) && !mono_class_is_nullable (klass) && !mini_is_gsharedvt_klass (klass)) { next_ip = ip; il_op = (MonoOpcodeEnum) (is_eq ? CEE_LDC_I4_0 : CEE_LDC_I4_1); EMIT_NEW_ICONST (cfg, ins, is_eq ? 0 : 1); ins->type = STACK_I4; *sp++ = ins; break; } } guint32 isinst_tk = 0; if ((ip = il_read_op_and_token (next_ip, end, CEE_ISINST, MONO_CEE_ISINST, &isinst_tk)) && ip_in_bb (cfg, cfg->cbb, ip)) { MonoClass *isinst_class = mini_get_class (method, isinst_tk, generic_context); if (!mono_class_is_nullable (klass) && !mono_class_is_nullable (isinst_class) && !mini_is_gsharedvt_variable_klass (klass) && !mini_is_gsharedvt_variable_klass (isinst_class) && !mono_class_is_open_constructed_type (m_class_get_byval_arg (klass)) && !mono_class_is_open_constructed_type (m_class_get_byval_arg (isinst_class))) { // Optimize // // box // isinst [Type] // brfalse/brtrue // // to // // ldc.i4.0 (or 1) // brfalse/brtrue // guchar* br_ip = NULL; if ((br_ip = il_read_brtrue (ip, end, &target)) || (br_ip = il_read_brtrue_s (ip, end, &target)) || (br_ip = il_read_brfalse (ip, end, &target)) || (br_ip = il_read_brfalse_s (ip, end, &target))) { gboolean isinst = mono_class_is_assignable_from_internal (isinst_class, klass); next_ip = ip; il_op = (MonoOpcodeEnum) (isinst ? CEE_LDC_I4_1 : CEE_LDC_I4_0); EMIT_NEW_ICONST (cfg, ins, isinst ? 
1 : 0); ins->type = STACK_I4; *sp++ = ins; break; } // Optimize // // box // isinst [Type] // ldnull // ceq/cgt.un // // to // // ldc.i4.0 (or 1) // guchar* ldnull_ip = NULL; if ((ldnull_ip = il_read_op (ip, end, CEE_LDNULL, MONO_CEE_LDNULL)) && ip_in_bb (cfg, cfg->cbb, ldnull_ip)) { gboolean is_eq = FALSE, is_neq = FALSE; if ((ip = il_read_op (ldnull_ip, end, CEE_PREFIX1, MONO_CEE_CEQ))) is_eq = TRUE; else if ((ip = il_read_op (ldnull_ip, end, CEE_PREFIX1, MONO_CEE_CGT_UN))) is_neq = TRUE; if ((is_eq || is_neq) && ip_in_bb (cfg, cfg->cbb, ip) && !mono_class_is_nullable (klass) && !mini_is_gsharedvt_klass (klass)) { gboolean isinst = mono_class_is_assignable_from_internal (isinst_class, klass); next_ip = ip; if (is_eq) isinst = !isinst; il_op = (MonoOpcodeEnum) (isinst ? CEE_LDC_I4_1 : CEE_LDC_I4_0); EMIT_NEW_ICONST (cfg, ins, isinst ? 1 : 0); ins->type = STACK_I4; *sp++ = ins; break; } } // Optimize // // box // isinst [Type] // unbox.any // // to // // nop // guchar* unbox_ip = NULL; guint32 unbox_token = 0; if ((unbox_ip = il_read_unbox_any (ip, end, &unbox_token)) && ip_in_bb (cfg, cfg->cbb, unbox_ip)) { MonoClass *unbox_klass = mini_get_class (method, unbox_token, generic_context); CHECK_TYPELOAD (unbox_klass); if (!mono_class_is_nullable (unbox_klass) && !mini_is_gsharedvt_klass (unbox_klass) && klass == isinst_class && klass == unbox_klass) { *sp++ = val; next_ip = unbox_ip; break; } } } } gboolean is_true; // FIXME: LLVM can't handle the inconsistent bb linking if (!mono_class_is_nullable (klass) && !mini_is_gsharedvt_klass (klass) && next_ip < end && ip_in_bb (cfg, cfg->cbb, next_ip) && ( (is_true = !!(ip = il_read_brtrue (next_ip, end, &target))) || (is_true = !!(ip = il_read_brtrue_s (next_ip, end, &target))) || (ip = il_read_brfalse (next_ip, end, &target)) || (ip = il_read_brfalse_s (next_ip, end, &target)))) { int dreg; MonoBasicBlock *true_bb, *false_bb; il_op = (MonoOpcodeEnum)next_ip [0]; next_ip = ip; if (cfg->verbose_level > 3) { printf ("converting (in B%d: stack: %d) %s", cfg->cbb->block_num, (int)(sp - stack_start), mono_disasm_code_one (NULL, method, ip, NULL)); printf ("<box+brtrue opt>\n"); } /* * We need to link both bblocks, since it is needed for handling stack * arguments correctly (See test_0_box_brtrue_opt_regress_81102). * Branching to only one of them would lead to inconsistencies, so * generate an ICONST+BRTRUE, the branch opts will get rid of them. */ GET_BBLOCK (cfg, true_bb, target); GET_BBLOCK (cfg, false_bb, next_ip); mono_link_bblock (cfg, cfg->cbb, true_bb); mono_link_bblock (cfg, cfg->cbb, false_bb); if (sp != stack_start) { handle_stack_args (cfg, stack_start, sp - stack_start); sp = stack_start; CHECK_UNVERIFIABLE (cfg); } if (COMPILE_LLVM (cfg)) { dreg = alloc_ireg (cfg); MONO_EMIT_NEW_ICONST (cfg, dreg, 0); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, dreg, is_true ? 0 : 1); MONO_EMIT_NEW_BRANCH_BLOCK2 (cfg, OP_IBEQ, true_bb, false_bb); } else { /* The JIT can't eliminate the iconst+compare */ MONO_INST_NEW (cfg, ins, OP_BR); ins->inst_target_bb = is_true ? 
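/* boxing a non-nullable valuetype never produces null, so brtrue is always taken and brfalse never is */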
true_bb : false_bb; MONO_ADD_INS (cfg->cbb, ins); } start_new_bblock = 1; break; } if (m_class_is_enumtype (klass) && !mini_is_gsharedvt_klass (klass) && !(val->type == STACK_I8 && TARGET_SIZEOF_VOID_P == 4)) { /* Can't do this with 64 bit enums on 32 bit since the vtype decomp pass is run after the long decomp pass */ if (val->opcode == OP_ICONST) { MONO_INST_NEW (cfg, ins, OP_BOX_ICONST); ins->type = STACK_OBJ; ins->klass = klass; ins->inst_c0 = val->inst_c0; ins->dreg = alloc_dreg (cfg, (MonoStackType)val->type); } else { MONO_INST_NEW (cfg, ins, OP_BOX); ins->type = STACK_OBJ; ins->klass = klass; ins->sreg1 = val->dreg; ins->dreg = alloc_dreg (cfg, (MonoStackType)val->type); } MONO_ADD_INS (cfg->cbb, ins); *sp++ = ins; } else { *sp++ = mini_emit_box (cfg, val, klass, context_used); } CHECK_CFG_EXCEPTION; inline_costs += 1; break; } case MONO_CEE_UNBOX: { --sp; klass = mini_get_class (method, token, generic_context); CHECK_TYPELOAD (klass); mono_save_token_info (cfg, image, token, klass); context_used = mini_class_check_context_used (cfg, klass); if (mono_class_is_nullable (klass)) { MonoInst *val; val = handle_unbox_nullable (cfg, *sp, klass, context_used); EMIT_NEW_VARLOADA (cfg, ins, get_vreg_to_inst (cfg, val->dreg), m_class_get_byval_arg (val->klass)); *sp++= ins; } else { ins = mini_handle_unbox (cfg, klass, *sp, context_used); *sp++ = ins; } inline_costs += 2; break; } case MONO_CEE_LDFLD: case MONO_CEE_LDFLDA: case MONO_CEE_STFLD: case MONO_CEE_LDSFLD: case MONO_CEE_LDSFLDA: case MONO_CEE_STSFLD: { MonoClassField *field; guint foffset; gboolean is_instance; gpointer addr = NULL; gboolean is_special_static; MonoType *ftype; MonoInst *store_val = NULL; MonoInst *thread_ins; is_instance = (il_op == MONO_CEE_LDFLD || il_op == MONO_CEE_LDFLDA || il_op == MONO_CEE_STFLD); if (is_instance) { if (il_op == MONO_CEE_STFLD) { sp -= 2; store_val = sp [1]; } else { --sp; } if (sp [0]->type == STACK_I4 || sp [0]->type == STACK_I8 || sp [0]->type == STACK_R8) UNVERIFIED; if (il_op != MONO_CEE_LDFLD && sp [0]->type == STACK_VTYPE) UNVERIFIED; } else { if (il_op == MONO_CEE_STSFLD) { sp--; store_val = sp [0]; } } if (method->wrapper_type != MONO_WRAPPER_NONE) { field = (MonoClassField *)mono_method_get_wrapper_data (method, token); klass = m_field_get_parent (field); } else { klass = NULL; field = mono_field_from_token_checked (image, token, &klass, generic_context, cfg->error); if (!field) CHECK_TYPELOAD (klass); CHECK_CFG_ERROR; } if (!dont_verify && !cfg->skip_visibility && !mono_method_can_access_field (method, field)) FIELD_ACCESS_FAILURE (method, field); mono_class_init_internal (klass); mono_class_setup_fields (klass); ftype = mono_field_get_type_internal (field); /* * LDFLD etc. is usable on static fields as well, so convert those cases to * the static case. */ if (is_instance && ftype->attrs & FIELD_ATTRIBUTE_STATIC) { switch (il_op) { case MONO_CEE_LDFLD: il_op = MONO_CEE_LDSFLD; break; case MONO_CEE_STFLD: il_op = MONO_CEE_STSFLD; break; case MONO_CEE_LDFLDA: il_op = MONO_CEE_LDSFLDA; break; default: g_assert_not_reached (); } is_instance = FALSE; } context_used = mini_class_check_context_used (cfg, klass); if (il_op == MONO_CEE_LDSFLD) { ins = mini_emit_inst_for_field_load (cfg, field); if (ins) { *sp++ = ins; goto field_access_end; } } /* INSTANCE CASE */ if (is_instance) g_assert (field->offset); foffset = m_class_is_valuetype (klass) ?
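/* field offsets in a valuetype include the object header, which an unboxed value does not have */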
field->offset - MONO_ABI_SIZEOF (MonoObject): field->offset; if (il_op == MONO_CEE_STFLD) { sp [1] = convert_value (cfg, field->type, sp [1]); if (target_type_is_incompatible (cfg, field->type, sp [1])) UNVERIFIED; { MonoInst *store; MONO_EMIT_NULL_CHECK (cfg, sp [0]->dreg, foffset > mono_target_pagesize ()); if (ins_flag & MONO_INST_VOLATILE) { /* Volatile stores have release semantics, see 12.6.7 in Ecma 335 */ mini_emit_memory_barrier (cfg, MONO_MEMORY_BARRIER_REL); } if (mini_is_gsharedvt_klass (klass)) { MonoInst *offset_ins; context_used = mini_class_check_context_used (cfg, klass); offset_ins = emit_get_gsharedvt_info (cfg, field, MONO_RGCTX_INFO_FIELD_OFFSET); /* The value is offset by 1 */ EMIT_NEW_BIALU_IMM (cfg, ins, OP_PSUB_IMM, offset_ins->dreg, offset_ins->dreg, 1); dreg = alloc_ireg_mp (cfg); EMIT_NEW_BIALU (cfg, ins, OP_PADD, dreg, sp [0]->dreg, offset_ins->dreg); if (cfg->gen_write_barriers && mini_type_to_stind (cfg, field->type) == CEE_STIND_REF && !MONO_INS_IS_PCONST_NULL (sp [1])) { store = mini_emit_storing_write_barrier (cfg, ins, sp [1]); } else { /* The decomposition will call mini_emit_memory_copy () which will emit a wbarrier if needed */ EMIT_NEW_STORE_MEMBASE_TYPE (cfg, store, field->type, dreg, 0, sp [1]->dreg); } } else { if (cfg->gen_write_barriers && mini_type_to_stind (cfg, field->type) == CEE_STIND_REF && !MONO_INS_IS_PCONST_NULL (sp [1])) { /* insert call to write barrier */ MonoInst *ptr; int dreg; dreg = alloc_ireg_mp (cfg); EMIT_NEW_BIALU_IMM (cfg, ptr, OP_PADD_IMM, dreg, sp [0]->dreg, foffset); store = mini_emit_storing_write_barrier (cfg, ptr, sp [1]); } else { EMIT_NEW_STORE_MEMBASE_TYPE (cfg, store, field->type, sp [0]->dreg, foffset, sp [1]->dreg); } } if (sp [0]->opcode != OP_LDADDR) store->flags |= MONO_INST_FAULT; store->flags |= ins_flag; } goto field_access_end; } if (is_instance) { if (sp [0]->type == STACK_VTYPE) { MonoInst *var; /* Have to compute the address of the variable */ var = get_vreg_to_inst (cfg, sp [0]->dreg); if (!var) var = mono_compile_create_var_for_vreg (cfg, m_class_get_byval_arg (klass), OP_LOCAL, sp [0]->dreg); else g_assert (var->klass == klass); EMIT_NEW_VARLOADA (cfg, ins, var, m_class_get_byval_arg (var->klass)); sp [0] = ins; } if (il_op == MONO_CEE_LDFLDA) { if (sp [0]->type == STACK_OBJ) { MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, sp [0]->dreg, 0); MONO_EMIT_NEW_COND_EXC (cfg, EQ, "NullReferenceException"); } dreg = alloc_ireg_mp (cfg); if (mini_is_gsharedvt_klass (klass)) { MonoInst *offset_ins; offset_ins = emit_get_gsharedvt_info (cfg, field, MONO_RGCTX_INFO_FIELD_OFFSET); /* The value is offset by 1 */ EMIT_NEW_BIALU_IMM (cfg, ins, OP_PSUB_IMM, offset_ins->dreg, offset_ins->dreg, 1); EMIT_NEW_BIALU (cfg, ins, OP_PADD, dreg, sp [0]->dreg, offset_ins->dreg); } else { EMIT_NEW_BIALU_IMM (cfg, ins, OP_PADD_IMM, dreg, sp [0]->dreg, foffset); } ins->klass = mono_class_from_mono_type_internal (field->type); ins->type = STACK_MP; *sp++ = ins; } else { MonoInst *load; MONO_EMIT_NULL_CHECK (cfg, sp [0]->dreg, foffset > mono_target_pagesize ()); #ifdef MONO_ARCH_SIMD_INTRINSICS if (sp [0]->opcode == OP_LDADDR && m_class_is_simd_type (klass) && cfg->opt & MONO_OPT_SIMD) { ins = mono_emit_simd_field_load (cfg, field, sp [0]); if (ins) { *sp++ = ins; goto field_access_end; } } #endif MonoInst *field_add_inst = sp [0]; if (mini_is_gsharedvt_klass (klass)) { MonoInst *offset_ins; offset_ins = emit_get_gsharedvt_info (cfg, field, MONO_RGCTX_INFO_FIELD_OFFSET); /* The value is offset by 1 */ EMIT_NEW_BIALU_IMM (cfg, 
ins, OP_PSUB_IMM, offset_ins->dreg, offset_ins->dreg, 1); EMIT_NEW_BIALU (cfg, field_add_inst, OP_PADD, alloc_ireg_mp (cfg), sp [0]->dreg, offset_ins->dreg); foffset = 0; } load = mini_emit_memory_load (cfg, field->type, field_add_inst, foffset, ins_flag); if (sp [0]->opcode != OP_LDADDR) load->flags |= MONO_INST_FAULT; *sp++ = load; } } if (is_instance) goto field_access_end; /* STATIC CASE */ context_used = mini_class_check_context_used (cfg, klass); if (ftype->attrs & FIELD_ATTRIBUTE_LITERAL) { mono_error_set_field_missing (cfg->error, m_field_get_parent (field), field->name, NULL, "Using static instructions with literal field"); CHECK_CFG_ERROR; } /* The special_static_fields field is init'd in mono_class_vtable, so it needs * to be called here. */ if (!context_used) { mono_class_vtable_checked (klass, cfg->error); CHECK_CFG_ERROR; CHECK_TYPELOAD (klass); } addr = mono_special_static_field_get_offset (field, cfg->error); CHECK_CFG_ERROR; CHECK_TYPELOAD (klass); is_special_static = mono_class_field_is_special_static (field); if (is_special_static && ((gsize)addr & 0x80000000) == 0) thread_ins = mono_create_tls_get (cfg, TLS_KEY_THREAD); else thread_ins = NULL; /* Generate IR to compute the field address */ if (is_special_static && ((gsize)addr & 0x80000000) == 0 && thread_ins && !(context_used && cfg->gsharedvt && mini_is_gsharedvt_klass (klass))) { /* * Fast access to TLS data * Inline version of get_thread_static_data () in * threads.c. */ guint32 offset; int idx, static_data_reg, array_reg, dreg; static_data_reg = alloc_ireg (cfg); MONO_EMIT_NEW_LOAD_MEMBASE (cfg, static_data_reg, thread_ins->dreg, MONO_STRUCT_OFFSET (MonoInternalThread, static_data)); if (cfg->compile_aot || context_used) { int offset_reg, offset2_reg, idx_reg; /* For TLS variables, this will return the TLS offset */ if (context_used) { MonoInst *addr_ins = emit_get_rgctx_field (cfg, context_used, field, MONO_RGCTX_INFO_FIELD_OFFSET); /* The value is offset by 1 */ EMIT_NEW_BIALU_IMM (cfg, ins, OP_PSUB_IMM, addr_ins->dreg, addr_ins->dreg, 1); } else { EMIT_NEW_SFLDACONST (cfg, ins, field); } offset_reg = ins->dreg; MONO_EMIT_NEW_BIALU_IMM (cfg, OP_IAND_IMM, offset_reg, offset_reg, 0x7fffffff); idx_reg = alloc_ireg (cfg); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_IAND_IMM, idx_reg, offset_reg, 0x3f); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_ISHL_IMM, idx_reg, idx_reg, TARGET_SIZEOF_VOID_P == 8 ? 
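/* scale the index by the pointer size: shift by 3 (8 bytes) on 64-bit, by 2 (4 bytes) on 32-bit */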
3 : 2); MONO_EMIT_NEW_BIALU (cfg, OP_PADD, static_data_reg, static_data_reg, idx_reg); array_reg = alloc_ireg (cfg); MONO_EMIT_NEW_LOAD_MEMBASE (cfg, array_reg, static_data_reg, 0); offset2_reg = alloc_ireg (cfg); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_ISHR_UN_IMM, offset2_reg, offset_reg, 6); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_IAND_IMM, offset2_reg, offset2_reg, 0x1ffffff); dreg = alloc_ireg (cfg); EMIT_NEW_BIALU (cfg, ins, OP_PADD, dreg, array_reg, offset2_reg); } else { offset = (gsize)addr & 0x7fffffff; idx = offset & 0x3f; array_reg = alloc_ireg (cfg); MONO_EMIT_NEW_LOAD_MEMBASE (cfg, array_reg, static_data_reg, idx * TARGET_SIZEOF_VOID_P); dreg = alloc_ireg (cfg); EMIT_NEW_BIALU_IMM (cfg, ins, OP_ADD_IMM, dreg, array_reg, ((offset >> 6) & 0x1ffffff)); } } else if ((cfg->compile_aot && is_special_static) || (context_used && is_special_static)) { MonoInst *iargs [1]; g_assert (m_field_get_parent (field)); if (context_used) { iargs [0] = emit_get_rgctx_field (cfg, context_used, field, MONO_RGCTX_INFO_CLASS_FIELD); } else { EMIT_NEW_FIELDCONST (cfg, iargs [0], field); } ins = mono_emit_jit_icall (cfg, mono_class_static_field_address, iargs); } else if (context_used) { MonoInst *static_data; /* g_print ("sharing static field access in %s.%s.%s - depth %d offset %d\n", method->klass->name_space, method->klass->name, method->name, depth, field->offset); */ if (mono_class_needs_cctor_run (klass, method)) emit_class_init (cfg, klass); /* * The pointer we're computing here is * * super_info.static_data + field->offset */ static_data = mini_emit_get_rgctx_klass (cfg, context_used, klass, MONO_RGCTX_INFO_STATIC_DATA); if (mini_is_gsharedvt_klass (klass)) { MonoInst *offset_ins; offset_ins = emit_get_rgctx_field (cfg, context_used, field, MONO_RGCTX_INFO_FIELD_OFFSET); /* The value is offset by 1 */ EMIT_NEW_BIALU_IMM (cfg, ins, OP_PSUB_IMM, offset_ins->dreg, offset_ins->dreg, 1); dreg = alloc_ireg_mp (cfg); EMIT_NEW_BIALU (cfg, ins, OP_PADD, dreg, static_data->dreg, offset_ins->dreg); } else if (field->offset == 0) { ins = static_data; } else { int addr_reg = mono_alloc_preg (cfg); EMIT_NEW_BIALU_IMM (cfg, ins, OP_PADD_IMM, addr_reg, static_data->dreg, field->offset); } } else if (cfg->compile_aot && addr) { MonoInst *iargs [1]; g_assert (m_field_get_parent (field)); EMIT_NEW_FIELDCONST (cfg, iargs [0], field); ins = mono_emit_jit_icall (cfg, mono_class_static_field_address, iargs); } else { MonoVTable *vtable = NULL; if (!cfg->compile_aot) vtable = mono_class_vtable_checked (klass, cfg->error); CHECK_CFG_ERROR; CHECK_TYPELOAD (klass); if (!addr) { if (mini_field_access_needs_cctor_run (cfg, method, klass, vtable)) { if (!(g_slist_find (class_inits, klass))) { emit_class_init (cfg, klass); if (cfg->verbose_level > 2) printf ("class %s.%s needs init call for %s\n", m_class_get_name_space (klass), m_class_get_name (klass), mono_field_get_name (field)); class_inits = g_slist_prepend (class_inits, klass); } } else { if (cfg->run_cctors) { /* This makes it so that inlining cannot trigger */ /* .cctors: too many apps depend on them */ /* running with a specific order...
*/ g_assert (vtable); if (!vtable->initialized && m_class_has_cctor (vtable->klass)) INLINE_FAILURE ("class init"); if (!mono_runtime_class_init_full (vtable, cfg->error)) { mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR); goto exception_exit; } } } if (cfg->compile_aot) EMIT_NEW_SFLDACONST (cfg, ins, field); else { g_assert (vtable); addr = mono_static_field_get_addr (vtable, field); g_assert (addr); EMIT_NEW_PCONST (cfg, ins, addr); } } else { MonoInst *iargs [1]; EMIT_NEW_ICONST (cfg, iargs [0], GPOINTER_TO_UINT (addr)); ins = mono_emit_jit_icall (cfg, mono_get_special_static_data, iargs); } } /* Generate IR to do the actual load/store operation */ if ((il_op == MONO_CEE_STFLD || il_op == MONO_CEE_STSFLD)) { if (ins_flag & MONO_INST_VOLATILE) { /* Volatile stores have release semantics, see 12.6.7 in Ecma 335 */ mini_emit_memory_barrier (cfg, MONO_MEMORY_BARRIER_REL); } else if (!mini_debug_options.weak_memory_model && mini_type_is_reference (ftype)) { mini_emit_memory_barrier (cfg, MONO_MEMORY_BARRIER_REL); } } if (il_op == MONO_CEE_LDSFLDA) { ins->klass = mono_class_from_mono_type_internal (ftype); ins->type = STACK_PTR; *sp++ = ins; } else if (il_op == MONO_CEE_STSFLD) { MonoInst *store; EMIT_NEW_STORE_MEMBASE_TYPE (cfg, store, ftype, ins->dreg, 0, store_val->dreg); store->flags |= ins_flag; } else { gboolean is_const = FALSE; MonoVTable *vtable = NULL; gpointer addr = NULL; if (!context_used) { vtable = mono_class_vtable_checked (klass, cfg->error); CHECK_CFG_ERROR; CHECK_TYPELOAD (klass); } if ((ftype->attrs & FIELD_ATTRIBUTE_INIT_ONLY) && (((addr = mono_aot_readonly_field_override (field)) != NULL) || (!context_used && !cfg->compile_aot && vtable->initialized))) { int ro_type = ftype->type; if (!addr) addr = mono_static_field_get_addr (vtable, field); if (ro_type == MONO_TYPE_VALUETYPE && m_class_is_enumtype (ftype->data.klass)) { ro_type = mono_class_enum_basetype_internal (ftype->data.klass)->type; } GSHAREDVT_FAILURE (il_op); /* printf ("RO-FIELD %s.%s:%s\n", klass->name_space, klass->name, mono_field_get_name (field));*/ is_const = TRUE; switch (ro_type) { case MONO_TYPE_BOOLEAN: case MONO_TYPE_U1: EMIT_NEW_ICONST (cfg, *sp, *((guint8 *)addr)); sp++; break; case MONO_TYPE_I1: EMIT_NEW_ICONST (cfg, *sp, *((gint8 *)addr)); sp++; break; case MONO_TYPE_CHAR: case MONO_TYPE_U2: EMIT_NEW_ICONST (cfg, *sp, *((guint16 *)addr)); sp++; break; case MONO_TYPE_I2: EMIT_NEW_ICONST (cfg, *sp, *((gint16 *)addr)); sp++; break; case MONO_TYPE_I4: EMIT_NEW_ICONST (cfg, *sp, *((gint32 *)addr)); sp++; break; case MONO_TYPE_U4: EMIT_NEW_ICONST (cfg, *sp, *((guint32 *)addr)); sp++; break; case MONO_TYPE_I: case MONO_TYPE_U: case MONO_TYPE_PTR: case MONO_TYPE_FNPTR: EMIT_NEW_PCONST (cfg, *sp, *((gpointer *)addr)); mini_type_to_eval_stack_type ((cfg), field->type, *sp); sp++; break; case MONO_TYPE_STRING: case MONO_TYPE_OBJECT: case MONO_TYPE_CLASS: case MONO_TYPE_SZARRAY: case MONO_TYPE_ARRAY: if (!mono_gc_is_moving ()) { EMIT_NEW_PCONST (cfg, *sp, *((gpointer *)addr)); mini_type_to_eval_stack_type ((cfg), field->type, *sp); sp++; } else { is_const = FALSE; } break; case MONO_TYPE_I8: case MONO_TYPE_U8: EMIT_NEW_I8CONST (cfg, *sp, *((gint64 *)addr)); sp++; break; case MONO_TYPE_R4: case MONO_TYPE_R8: case MONO_TYPE_VALUETYPE: default: is_const = FALSE; break; } } if (!is_const) { MonoInst *load; EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, load, field->type, ins->dreg, 0); load->flags |= ins_flag; *sp++ = load; } } field_access_end: if ((il_op == MONO_CEE_LDFLD || il_op == MONO_CEE_LDSFLD) &&
(ins_flag & MONO_INST_VOLATILE)) { /* Volatile loads have acquire semantics, see 12.6.7 in Ecma 335 */ mini_emit_memory_barrier (cfg, MONO_MEMORY_BARRIER_ACQ); } ins_flag = 0; break; } case MONO_CEE_STOBJ: sp -= 2; klass = mini_get_class (method, token, generic_context); CHECK_TYPELOAD (klass); /* FIXME: should check item at sp [1] is compatible with the type of the store. */ mini_emit_memory_store (cfg, m_class_get_byval_arg (klass), sp [0], sp [1], ins_flag); ins_flag = 0; inline_costs += 1; break; /* * Array opcodes */ case MONO_CEE_NEWARR: { MonoInst *len_ins; const char *data_ptr; int data_size = 0; guint32 field_token; --sp; klass = mini_get_class (method, token, generic_context); CHECK_TYPELOAD (klass); if (m_class_get_byval_arg (klass)->type == MONO_TYPE_VOID) UNVERIFIED; context_used = mini_class_check_context_used (cfg, klass); #ifndef TARGET_S390X if (sp [0]->type == STACK_I8 && TARGET_SIZEOF_VOID_P == 4) { MONO_INST_NEW (cfg, ins, OP_LCONV_TO_OVF_U4); ins->sreg1 = sp [0]->dreg; ins->type = STACK_I4; ins->dreg = alloc_ireg (cfg); MONO_ADD_INS (cfg->cbb, ins); *sp = mono_decompose_opcode (cfg, ins); } #else /* The array allocator expects a 64-bit input, and we cannot rely on the high bits of a 32-bit result, so we have to extend. */ if (sp [0]->type == STACK_I4 && TARGET_SIZEOF_VOID_P == 8) { MONO_INST_NEW (cfg, ins, OP_ICONV_TO_I8); ins->sreg1 = sp [0]->dreg; ins->type = STACK_I8; ins->dreg = alloc_ireg (cfg); MONO_ADD_INS (cfg->cbb, ins); *sp = mono_decompose_opcode (cfg, ins); } #endif if (context_used) { MonoInst *args [3]; MonoClass *array_class = mono_class_create_array (klass, 1); MonoMethod *managed_alloc = mono_gc_get_managed_array_allocator (array_class); /* FIXME: Use OP_NEWARR and decompose later to help abcrem */ /* vtable */ args [0] = mini_emit_get_rgctx_klass (cfg, context_used, array_class, MONO_RGCTX_INFO_VTABLE); /* array len */ args [1] = sp [0]; if (managed_alloc) ins = mono_emit_method_call (cfg, managed_alloc, args, NULL); else ins = mono_emit_jit_icall (cfg, ves_icall_array_new_specific, args); } else { /* Decompose later since it is needed by abcrem */ MonoClass *array_type = mono_class_create_array (klass, 1); mono_class_vtable_checked (array_type, cfg->error); CHECK_CFG_ERROR; CHECK_TYPELOAD (array_type); MONO_INST_NEW (cfg, ins, OP_NEWARR); ins->dreg = alloc_ireg_ref (cfg); ins->sreg1 = sp [0]->dreg; ins->inst_newa_class = klass; ins->type = STACK_OBJ; ins->klass = array_type; MONO_ADD_INS (cfg->cbb, ins); cfg->flags |= MONO_CFG_NEEDS_DECOMPOSE; cfg->cbb->needs_decompose = TRUE; /* Needed so mono_emit_load_get_addr () gets called */ mono_get_got_var (cfg); } len_ins = sp [0]; ip += 5; *sp++ = ins; inline_costs += 1; /* * we inline/optimize the initialization sequence if possible. 
* we should also allocate the array as not cleared, since we spend as much time clearing to 0 as initializing * for small sizes open code the memcpy * ensure the rva field is big enough */ if ((cfg->opt & MONO_OPT_INTRINS) && next_ip < end && ip_in_bb (cfg, cfg->cbb, next_ip) && (len_ins->opcode == OP_ICONST) && (data_ptr = initialize_array_data (cfg, method, cfg->compile_aot, next_ip, end, klass, len_ins->inst_c0, &data_size, &field_token, &il_op, &next_ip))) { MonoMethod *memcpy_method = mini_get_memcpy_method (); MonoInst *iargs [3]; int add_reg = alloc_ireg_mp (cfg); EMIT_NEW_BIALU_IMM (cfg, iargs [0], OP_PADD_IMM, add_reg, ins->dreg, MONO_STRUCT_OFFSET (MonoArray, vector)); if (cfg->compile_aot) { EMIT_NEW_AOTCONST_TOKEN (cfg, iargs [1], MONO_PATCH_INFO_RVA, m_class_get_image (method->klass), GPOINTER_TO_UINT(field_token), STACK_PTR, NULL); } else { EMIT_NEW_PCONST (cfg, iargs [1], (char*)data_ptr); } EMIT_NEW_ICONST (cfg, iargs [2], data_size); mono_emit_method_call (cfg, memcpy_method, iargs, NULL); } break; } case MONO_CEE_LDLEN: --sp; if (sp [0]->type != STACK_OBJ) UNVERIFIED; MONO_INST_NEW (cfg, ins, OP_LDLEN); ins->dreg = alloc_preg (cfg); ins->sreg1 = sp [0]->dreg; ins->inst_imm = MONO_STRUCT_OFFSET (MonoArray, max_length); ins->type = STACK_I4; /* This flag will be inherited by the decomposition */ ins->flags |= MONO_INST_FAULT | MONO_INST_INVARIANT_LOAD; MONO_ADD_INS (cfg->cbb, ins); cfg->flags |= MONO_CFG_NEEDS_DECOMPOSE; cfg->cbb->needs_decompose = TRUE; MONO_EMIT_NEW_UNALU (cfg, OP_NOT_NULL, -1, sp [0]->dreg); *sp++ = ins; break; case MONO_CEE_LDELEMA: sp -= 2; if (sp [0]->type != STACK_OBJ) UNVERIFIED; cfg->flags |= MONO_CFG_HAS_LDELEMA; klass = mini_get_class (method, token, generic_context); CHECK_TYPELOAD (klass); /* we need to make sure that this array is exactly the type it needs * to be for correctness. 
the wrappers are lax with their usage * so we need to ignore them here */ if (!m_class_is_valuetype (klass) && method->wrapper_type == MONO_WRAPPER_NONE && !readonly) { MonoClass *array_class = mono_class_create_array (klass, 1); mini_emit_check_array_type (cfg, sp [0], array_class); CHECK_TYPELOAD (array_class); } readonly = FALSE; ins = mini_emit_ldelema_1_ins (cfg, klass, sp [0], sp [1], TRUE, FALSE); *sp++ = ins; break; case MONO_CEE_LDELEM: case MONO_CEE_LDELEM_I1: case MONO_CEE_LDELEM_U1: case MONO_CEE_LDELEM_I2: case MONO_CEE_LDELEM_U2: case MONO_CEE_LDELEM_I4: case MONO_CEE_LDELEM_U4: case MONO_CEE_LDELEM_I8: case MONO_CEE_LDELEM_I: case MONO_CEE_LDELEM_R4: case MONO_CEE_LDELEM_R8: case MONO_CEE_LDELEM_REF: { MonoInst *addr; sp -= 2; if (il_op == MONO_CEE_LDELEM) { klass = mini_get_class (method, token, generic_context); CHECK_TYPELOAD (klass); mono_class_init_internal (klass); } else klass = array_access_to_klass (il_op); if (sp [0]->type != STACK_OBJ) UNVERIFIED; cfg->flags |= MONO_CFG_HAS_LDELEMA; if (mini_is_gsharedvt_variable_klass (klass)) { // FIXME-VT: OP_ICONST optimization addr = mini_emit_ldelema_1_ins (cfg, klass, sp [0], sp [1], TRUE, FALSE); EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (klass), addr->dreg, 0); ins->opcode = OP_LOADV_MEMBASE; } else if (sp [1]->opcode == OP_ICONST) { int array_reg = sp [0]->dreg; int index_reg = sp [1]->dreg; int offset = (mono_class_array_element_size (klass) * sp [1]->inst_c0) + MONO_STRUCT_OFFSET (MonoArray, vector); if (SIZEOF_REGISTER == 8 && COMPILE_LLVM (cfg)) MONO_EMIT_NEW_UNALU (cfg, OP_ZEXT_I4, index_reg, index_reg); MONO_EMIT_BOUNDS_CHECK (cfg, array_reg, MonoArray, max_length, index_reg); EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (klass), array_reg, offset); } else { addr = mini_emit_ldelema_1_ins (cfg, klass, sp [0], sp [1], TRUE, FALSE); EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (klass), addr->dreg, 0); } *sp++ = ins; break; } case MONO_CEE_STELEM_I: case MONO_CEE_STELEM_I1: case MONO_CEE_STELEM_I2: case MONO_CEE_STELEM_I4: case MONO_CEE_STELEM_I8: case MONO_CEE_STELEM_R4: case MONO_CEE_STELEM_R8: case MONO_CEE_STELEM_REF: case MONO_CEE_STELEM: { sp -= 3; cfg->flags |= MONO_CFG_HAS_LDELEMA; if (il_op == MONO_CEE_STELEM) { klass = mini_get_class (method, token, generic_context); CHECK_TYPELOAD (klass); mono_class_init_internal (klass); } else klass = array_access_to_klass (il_op); if (sp [0]->type != STACK_OBJ) UNVERIFIED; sp [2] = convert_value (cfg, m_class_get_byval_arg (klass), sp [2]); mini_emit_array_store (cfg, klass, sp, TRUE); inline_costs += 1; break; } case MONO_CEE_CKFINITE: { --sp; if (cfg->llvm_only) { MonoInst *iargs [1]; iargs [0] = sp [0]; *sp++ = mono_emit_jit_icall (cfg, mono_ckfinite, iargs); } else { sp [0] = convert_value (cfg, m_class_get_byval_arg (mono_defaults.double_class), sp [0]); MONO_INST_NEW (cfg, ins, OP_CKFINITE); ins->sreg1 = sp [0]->dreg; ins->dreg = alloc_freg (cfg); ins->type = STACK_R8; MONO_ADD_INS (cfg->cbb, ins); *sp++ = mono_decompose_opcode (cfg, ins); } break; } case MONO_CEE_REFANYVAL: { MonoInst *src_var, *src; int klass_reg = alloc_preg (cfg); int dreg = alloc_preg (cfg); GSHAREDVT_FAILURE (il_op); MONO_INST_NEW (cfg, ins, il_op); --sp; klass = mini_get_class (method, token, generic_context); CHECK_TYPELOAD (klass); context_used = mini_class_check_context_used (cfg, klass); // FIXME: src_var = get_vreg_to_inst (cfg, sp [0]->dreg); if (!src_var) src_var = mono_compile_create_var_for_vreg (cfg, m_class_get_byval_arg 
(mono_defaults.typed_reference_class), OP_LOCAL, sp [0]->dreg); EMIT_NEW_VARLOADA (cfg, src, src_var, src_var->inst_vtype); MONO_EMIT_NEW_LOAD_MEMBASE (cfg, klass_reg, src->dreg, MONO_STRUCT_OFFSET (MonoTypedRef, klass)); if (context_used) { MonoInst *klass_ins; klass_ins = mini_emit_get_rgctx_klass (cfg, context_used, klass, MONO_RGCTX_INFO_KLASS); // FIXME: MONO_EMIT_NEW_BIALU (cfg, OP_COMPARE, -1, klass_reg, klass_ins->dreg); MONO_EMIT_NEW_COND_EXC (cfg, NE_UN, "InvalidCastException"); } else { mini_emit_class_check (cfg, klass_reg, klass); } EMIT_NEW_LOAD_MEMBASE (cfg, ins, OP_LOAD_MEMBASE, dreg, src->dreg, MONO_STRUCT_OFFSET (MonoTypedRef, value)); ins->type = STACK_MP; ins->klass = klass; *sp++ = ins; break; } case MONO_CEE_MKREFANY: { MonoInst *loc, *addr; GSHAREDVT_FAILURE (il_op); MONO_INST_NEW (cfg, ins, il_op); --sp; klass = mini_get_class (method, token, generic_context); CHECK_TYPELOAD (klass); context_used = mini_class_check_context_used (cfg, klass); loc = mono_compile_create_var (cfg, m_class_get_byval_arg (mono_defaults.typed_reference_class), OP_LOCAL); EMIT_NEW_TEMPLOADA (cfg, addr, loc->inst_c0); MonoInst *const_ins = mini_emit_get_rgctx_klass (cfg, context_used, klass, MONO_RGCTX_INFO_KLASS); int type_reg = alloc_preg (cfg); MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STOREP_MEMBASE_REG, addr->dreg, MONO_STRUCT_OFFSET (MonoTypedRef, klass), const_ins->dreg); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_ADD_IMM, type_reg, const_ins->dreg, m_class_offsetof_byval_arg ()); MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STOREP_MEMBASE_REG, addr->dreg, MONO_STRUCT_OFFSET (MonoTypedRef, type), type_reg); MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STOREP_MEMBASE_REG, addr->dreg, MONO_STRUCT_OFFSET (MonoTypedRef, value), sp [0]->dreg); EMIT_NEW_TEMPLOAD (cfg, ins, loc->inst_c0); ins->type = STACK_VTYPE; ins->klass = mono_defaults.typed_reference_class; *sp++ = ins; break; } case MONO_CEE_LDTOKEN: { gpointer handle; MonoClass *handle_class; if (method->wrapper_type == MONO_WRAPPER_DYNAMIC_METHOD || method->wrapper_type == MONO_WRAPPER_SYNCHRONIZED) { handle = mono_method_get_wrapper_data (method, n); handle_class = (MonoClass *)mono_method_get_wrapper_data (method, n + 1); if (handle_class == mono_defaults.typehandle_class) handle = m_class_get_byval_arg ((MonoClass*)handle); } else { handle = mono_ldtoken_checked (image, n, &handle_class, generic_context, cfg->error); CHECK_CFG_ERROR; } if (!handle) LOAD_ERROR; mono_class_init_internal (handle_class); if (cfg->gshared) { if (mono_metadata_token_table (n) == MONO_TABLE_TYPEDEF || mono_metadata_token_table (n) == MONO_TABLE_TYPEREF) { /* This case handles ldtoken of an open type, like for typeof(Gen<>). 
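 * An open type has no instantiation, so there is no runtime generic context to consult.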
*/ context_used = 0; } else if (handle_class == mono_defaults.typehandle_class) { context_used = mini_class_check_context_used (cfg, mono_class_from_mono_type_internal ((MonoType *)handle)); } else if (handle_class == mono_defaults.fieldhandle_class) context_used = mini_class_check_context_used (cfg, m_field_get_parent (((MonoClassField*)handle))); else if (handle_class == mono_defaults.methodhandle_class) context_used = mini_method_check_context_used (cfg, (MonoMethod *)handle); else g_assert_not_reached (); } { if ((next_ip + 4 < end) && ip_in_bb (cfg, cfg->cbb, next_ip) && ((next_ip [0] == CEE_CALL) || (next_ip [0] == CEE_CALLVIRT)) && (cmethod = mini_get_method (cfg, method, read32 (next_ip + 1), NULL, generic_context)) && (cmethod->klass == mono_defaults.systemtype_class) && (strcmp (cmethod->name, "GetTypeFromHandle") == 0)) { MonoClass *tclass = mono_class_from_mono_type_internal ((MonoType *)handle); mono_class_init_internal (tclass); // Optimize to true/false if next instruction is `call instance bool Type::get_IsValueType()` guchar *is_vt_ip; guint32 is_vt_token; if ((is_vt_ip = il_read_call (next_ip + 5, end, &is_vt_token)) && ip_in_bb (cfg, cfg->cbb, is_vt_ip)) { MonoMethod *is_vt_method = mini_get_method (cfg, method, is_vt_token, NULL, generic_context); if (is_vt_method->klass == mono_defaults.systemtype_class && !mini_is_gsharedvt_variable_klass (tclass) && !mono_class_is_open_constructed_type (m_class_get_byval_arg (tclass)) && !strcmp ("get_IsValueType", is_vt_method->name)) { next_ip = is_vt_ip; EMIT_NEW_ICONST (cfg, ins, m_class_is_valuetype (tclass) ? 1 : 0); ins->type = STACK_I4; *sp++ = ins; break; } } if (context_used) { MONO_INST_NEW (cfg, ins, OP_RTTYPE); ins->dreg = alloc_ireg_ref (cfg); ins->inst_p0 = tclass; ins->type = STACK_OBJ; MONO_ADD_INS (cfg->cbb, ins); cfg->flags |= MONO_CFG_NEEDS_DECOMPOSE; cfg->cbb->needs_decompose = TRUE; } else if (cfg->compile_aot) { if (method->wrapper_type) { error_init (error); //got to do it since there are multiple conditionals below if (mono_class_get_checked (m_class_get_image (tclass), m_class_get_type_token (tclass), error) == tclass && !generic_context) { /* Special case for static synchronized wrappers */ EMIT_NEW_TYPE_FROM_HANDLE_CONST (cfg, ins, m_class_get_image (tclass), m_class_get_type_token (tclass), generic_context); } else { mono_error_cleanup (error); /* FIXME don't swallow the error */ /* FIXME: n is not a normal token */ DISABLE_AOT (cfg); EMIT_NEW_PCONST (cfg, ins, NULL); } } else { EMIT_NEW_TYPE_FROM_HANDLE_CONST (cfg, ins, image, n, generic_context); } } else { MonoReflectionType *rt = mono_type_get_object_checked ((MonoType *)handle, cfg->error); CHECK_CFG_ERROR; EMIT_NEW_PCONST (cfg, ins, rt); } ins->type = STACK_OBJ; ins->klass = mono_defaults.runtimetype_class; il_op = (MonoOpcodeEnum)next_ip [0]; next_ip += 5; } else { MonoInst *addr, *vtvar; vtvar = mono_compile_create_var (cfg, m_class_get_byval_arg (handle_class), OP_LOCAL); if (context_used) { if (handle_class == mono_defaults.typehandle_class) { ins = mini_emit_get_rgctx_klass (cfg, context_used, mono_class_from_mono_type_internal ((MonoType *)handle), MONO_RGCTX_INFO_TYPE); } else if (handle_class == mono_defaults.methodhandle_class) { ins = emit_get_rgctx_method (cfg, context_used, (MonoMethod *)handle, MONO_RGCTX_INFO_METHOD); } else if (handle_class == mono_defaults.fieldhandle_class) { ins = emit_get_rgctx_field (cfg, context_used, (MonoClassField *)handle, MONO_RGCTX_INFO_CLASS_FIELD); } else { g_assert_not_reached (); } } else if 
(cfg->compile_aot) { EMIT_NEW_LDTOKENCONST (cfg, ins, image, n, generic_context); } else { EMIT_NEW_PCONST (cfg, ins, handle); } EMIT_NEW_TEMPLOADA (cfg, addr, vtvar->inst_c0); MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, addr->dreg, 0, ins->dreg); EMIT_NEW_TEMPLOAD (cfg, ins, vtvar->inst_c0); } } *sp++ = ins; break; } case MONO_CEE_THROW: if (sp [-1]->type != STACK_OBJ) UNVERIFIED; MONO_INST_NEW (cfg, ins, OP_THROW); --sp; ins->sreg1 = sp [0]->dreg; cfg->cbb->out_of_line = TRUE; MONO_ADD_INS (cfg->cbb, ins); MONO_INST_NEW (cfg, ins, OP_NOT_REACHED); MONO_ADD_INS (cfg->cbb, ins); sp = stack_start; link_bblock (cfg, cfg->cbb, end_bblock); start_new_bblock = 1; /* This can complicate code generation for llvm since the return value might not be defined */ if (COMPILE_LLVM (cfg)) INLINE_FAILURE ("throw"); break; case MONO_CEE_ENDFINALLY: if (!ip_in_finally_clause (cfg, ip - header->code)) UNVERIFIED; /* mono_save_seq_point_info () depends on this */ if (sp != stack_start) emit_seq_point (cfg, method, ip, FALSE, FALSE); MONO_INST_NEW (cfg, ins, OP_ENDFINALLY); MONO_ADD_INS (cfg->cbb, ins); start_new_bblock = 1; ins_has_side_effect = FALSE; /* * Control will leave the method so empty the stack, otherwise * the next basic block will start with a nonempty stack. */ while (sp != stack_start) { sp--; } break; case MONO_CEE_LEAVE: case MONO_CEE_LEAVE_S: { GList *handlers; /* empty the stack */ g_assert (sp >= stack_start); sp = stack_start; /* * If this leave statement is in a catch block, check for a * pending exception, and rethrow it if necessary. * We avoid doing this in runtime invoke wrappers, since those are called * by native code which expects the wrapper to catch all exceptions. */ for (i = 0; i < header->num_clauses; ++i) { MonoExceptionClause *clause = &header->clauses [i]; /* * Use <= in the final comparison to handle clauses with multiple * leave statements, like in bug #78024. * The ordering of the exception clauses guarantees that we find the * innermost clause. */ if (MONO_OFFSET_IN_HANDLER (clause, ip - header->code) && (clause->flags == MONO_EXCEPTION_CLAUSE_NONE) && (ip - header->code + ((il_op == MONO_CEE_LEAVE) ? 5 : 2)) <= (clause->handler_offset + clause->handler_len) && method->wrapper_type != MONO_WRAPPER_RUNTIME_INVOKE) { MonoInst *exc_ins; MonoBasicBlock *dont_throw; /* MonoInst *load; NEW_TEMPLOAD (cfg, load, mono_find_exvar_for_offset (cfg, clause->handler_offset)->inst_c0); */ exc_ins = mono_emit_jit_icall (cfg, mono_thread_get_undeniable_exception, NULL); NEW_BBLOCK (cfg, dont_throw); /* * Currently, we always rethrow the abort exception, despite the * fact that this is not correct. See thread6.cs for an example. * But propagating the abort exception is more important than * getting the semantics right. */ MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, exc_ins->dreg, 0); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_PBEQ, dont_throw); MONO_EMIT_NEW_UNALU (cfg, OP_THROW, -1, exc_ins->dreg); MONO_START_BB (cfg, dont_throw); } } #ifdef ENABLE_LLVM cfg->cbb->try_end = (intptr_t)(ip - header->code); #endif if ((handlers = mono_find_leave_clauses (cfg, ip, target))) { GList *tmp; /* * For each finally clause that we exit we need to invoke the finally block. * After each invocation we need to add try holes for all the clauses that * we already exited. 
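* The holes exclude the OP_CALL_HANDLER code from the protected range of the clauses we already left, so an exception raised while running a finally is not caught by a clause we have exited.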
*/ for (tmp = handlers; tmp; tmp = tmp->next) { MonoLeaveClause *leave = (MonoLeaveClause *) tmp->data; MonoExceptionClause *clause = leave->clause; if (clause->flags != MONO_EXCEPTION_CLAUSE_FINALLY) continue; MonoInst *abort_exc = (MonoInst *)mono_find_exvar_for_offset (cfg, clause->handler_offset); MonoBasicBlock *dont_throw; /* * Emit instrumentation code before linking the basic blocks below as this * will alter cfg->cbb. */ mini_profiler_emit_call_finally (cfg, header, ip, leave->index, clause); tblock = cfg->cil_offset_to_bb [clause->handler_offset]; g_assert (tblock); link_bblock (cfg, cfg->cbb, tblock); MONO_EMIT_NEW_PCONST (cfg, abort_exc->dreg, 0); MONO_INST_NEW (cfg, ins, OP_CALL_HANDLER); ins->inst_target_bb = tblock; ins->inst_eh_blocks = tmp; MONO_ADD_INS (cfg->cbb, ins); cfg->cbb->has_call_handler = 1; /* Throw exception if exvar is set */ /* FIXME Do we need this for calls from catch/filter ? */ NEW_BBLOCK (cfg, dont_throw); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, abort_exc->dreg, 0); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_PBEQ, dont_throw); mono_emit_jit_icall (cfg, ves_icall_thread_finish_async_abort, NULL); cfg->cbb->clause_holes = tmp; MONO_START_BB (cfg, dont_throw); cfg->cbb->clause_holes = tmp; if (COMPILE_LLVM (cfg)) { MonoBasicBlock *target_bb; /* * Link the finally bblock with the target, since it will * conceptually branch there. */ GET_BBLOCK (cfg, tblock, cfg->cil_start + clause->handler_offset + clause->handler_len - 1); GET_BBLOCK (cfg, target_bb, target); link_bblock (cfg, tblock, target_bb); } } } MONO_INST_NEW (cfg, ins, OP_BR); MONO_ADD_INS (cfg->cbb, ins); GET_BBLOCK (cfg, tblock, target); link_bblock (cfg, cfg->cbb, tblock); ins->inst_target_bb = tblock; start_new_bblock = 1; break; } /* * Mono specific opcodes */ case MONO_CEE_MONO_ICALL: { g_assert (method->wrapper_type != MONO_WRAPPER_NONE); const MonoJitICallId jit_icall_id = (MonoJitICallId)token; MonoJitICallInfo * const info = mono_find_jit_icall_info (jit_icall_id); CHECK_STACK (info->sig->param_count); sp -= info->sig->param_count; if (token == MONO_JIT_ICALL_mono_threads_attach_coop) { MonoInst *addr; MonoBasicBlock *next_bb; if (cfg->compile_aot) { /* * This is called on unattached threads, so it cannot go through the trampoline * infrastructure. Use an indirect call through a got slot initialized at load time * instead. */ EMIT_NEW_AOTCONST (cfg, addr, MONO_PATCH_INFO_JIT_ICALL_ADDR_NOCALL, GUINT_TO_POINTER (jit_icall_id)); ins = mini_emit_calli (cfg, info->sig, sp, addr, NULL, NULL); } else { ins = mono_emit_jit_icall_id (cfg, jit_icall_id, sp); } /* * Parts of the initlocals code needs to come after this, since it might call methods like memset. * Also profiling needs to be after attach. 
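* init_localsbb2 records the bblock after the attach call; struct locals and the profiler enter code are emitted there (see the end of this function).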
*/ init_localsbb2 = cfg->cbb; NEW_BBLOCK (cfg, next_bb); MONO_START_BB (cfg, next_bb); } else { if (token == MONO_JIT_ICALL_mono_threads_detach_coop) { /* can't emit profiling code after a detach, so emit it now */ mini_profiler_emit_leave (cfg, NULL); detached_before_ret = TRUE; } ins = mono_emit_jit_icall_id (cfg, jit_icall_id, sp); } if (!MONO_TYPE_IS_VOID (info->sig->ret)) *sp++ = ins; inline_costs += CALL_COST * MIN(10, num_calls++); break; } MonoJumpInfoType ldptr_type; case MONO_CEE_MONO_LDPTR_CARD_TABLE: ldptr_type = MONO_PATCH_INFO_GC_CARD_TABLE_ADDR; goto mono_ldptr; case MONO_CEE_MONO_LDPTR_NURSERY_START: ldptr_type = MONO_PATCH_INFO_GC_NURSERY_START; goto mono_ldptr; case MONO_CEE_MONO_LDPTR_NURSERY_BITS: ldptr_type = MONO_PATCH_INFO_GC_NURSERY_BITS; goto mono_ldptr; case MONO_CEE_MONO_LDPTR_INT_REQ_FLAG: ldptr_type = MONO_PATCH_INFO_INTERRUPTION_REQUEST_FLAG; goto mono_ldptr; case MONO_CEE_MONO_LDPTR_PROFILER_ALLOCATION_COUNT: ldptr_type = MONO_PATCH_INFO_PROFILER_ALLOCATION_COUNT; mono_ldptr: g_assert (method->wrapper_type != MONO_WRAPPER_NONE); ins = mini_emit_runtime_constant (cfg, ldptr_type, NULL); *sp++ = ins; inline_costs += CALL_COST * MIN(10, num_calls++); break; case MONO_CEE_MONO_LDPTR: { gpointer ptr; g_assert (method->wrapper_type != MONO_WRAPPER_NONE); ptr = mono_method_get_wrapper_data (method, token); EMIT_NEW_PCONST (cfg, ins, ptr); *sp++ = ins; inline_costs += CALL_COST * MIN(10, num_calls++); /* Can't embed random pointers into AOT code */ DISABLE_AOT (cfg); break; } case MONO_CEE_MONO_JIT_ICALL_ADDR: g_assert (method->wrapper_type != MONO_WRAPPER_NONE); EMIT_NEW_JIT_ICALL_ADDRCONST (cfg, ins, GUINT_TO_POINTER (token)); *sp++ = ins; inline_costs += CALL_COST * MIN(10, num_calls++); break; case MONO_CEE_MONO_ICALL_ADDR: { MonoMethod *cmethod; gpointer ptr; g_assert (method->wrapper_type != MONO_WRAPPER_NONE); cmethod = (MonoMethod *)mono_method_get_wrapper_data (method, token); if (cfg->compile_aot) { if (cfg->direct_pinvoke && ip + 6 < end && (ip [6] == CEE_POP)) { /* * This is generated by emit_native_wrapper () to resolve the pinvoke address * before the call, it's not needed when using direct pinvoke. * This is not an optimization, but it's used to avoid looking up pinvokes * on platforms which don't support dlopen (). 
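* With direct pinvokes the call site binds to the target directly, so a NULL placeholder (popped by the following CEE_POP) is emitted instead.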
*/ EMIT_NEW_PCONST (cfg, ins, NULL); } else { EMIT_NEW_AOTCONST (cfg, ins, MONO_PATCH_INFO_ICALL_ADDR, cmethod); } } else { ptr = mono_lookup_internal_call (cmethod); g_assert (ptr); EMIT_NEW_PCONST (cfg, ins, ptr); } *sp++ = ins; break; } case MONO_CEE_MONO_VTADDR: { g_assert (method->wrapper_type != MONO_WRAPPER_NONE); MonoInst *src_var, *src; --sp; // FIXME: src_var = get_vreg_to_inst (cfg, sp [0]->dreg); EMIT_NEW_VARLOADA ((cfg), (src), src_var, src_var->inst_vtype); *sp++ = src; break; } case MONO_CEE_MONO_NEWOBJ: { g_assert (method->wrapper_type != MONO_WRAPPER_NONE); MonoInst *iargs [2]; klass = (MonoClass *)mono_method_get_wrapper_data (method, token); mono_class_init_internal (klass); NEW_CLASSCONST (cfg, iargs [0], klass); MONO_ADD_INS (cfg->cbb, iargs [0]); *sp++ = mono_emit_jit_icall (cfg, ves_icall_object_new, iargs); inline_costs += CALL_COST * MIN(10, num_calls++); break; } case MONO_CEE_MONO_OBJADDR: g_assert (method->wrapper_type != MONO_WRAPPER_NONE); --sp; MONO_INST_NEW (cfg, ins, OP_MOVE); ins->dreg = alloc_ireg_mp (cfg); ins->sreg1 = sp [0]->dreg; ins->type = STACK_MP; MONO_ADD_INS (cfg->cbb, ins); *sp++ = ins; break; case MONO_CEE_MONO_LDNATIVEOBJ: /* * Similar to LDOBJ, but instead load the unmanaged * representation of the vtype to the stack. */ g_assert (method->wrapper_type != MONO_WRAPPER_NONE); --sp; klass = (MonoClass *)mono_method_get_wrapper_data (method, token); g_assert (m_class_is_valuetype (klass)); mono_class_init_internal (klass); { MonoInst *src, *dest, *temp; src = sp [0]; temp = mono_compile_create_var (cfg, m_class_get_byval_arg (klass), OP_LOCAL); temp->backend.is_pinvoke = 1; EMIT_NEW_TEMPLOADA (cfg, dest, temp->inst_c0); mini_emit_memory_copy (cfg, dest, src, klass, TRUE, 0); EMIT_NEW_TEMPLOAD (cfg, dest, temp->inst_c0); dest->type = STACK_VTYPE; dest->klass = klass; *sp ++ = dest; } break; case MONO_CEE_MONO_RETOBJ: { /* * Same as RET, but return the native representation of a vtype * to the caller. 
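* Only valid in pinvoke wrappers: the marshalled value on the stack is copied into the return buffer below.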
*/ g_assert (method->wrapper_type != MONO_WRAPPER_NONE); g_assert (cfg->ret); g_assert (mono_method_signature_internal (method)->pinvoke); --sp; klass = (MonoClass *)mono_method_get_wrapper_data (method, token); if (!cfg->vret_addr) { g_assert (cfg->ret_var_is_local); EMIT_NEW_VARLOADA (cfg, ins, cfg->ret, cfg->ret->inst_vtype); } else { EMIT_NEW_RETLOADA (cfg, ins); } mini_emit_memory_copy (cfg, ins, sp [0], klass, TRUE, 0); if (sp != stack_start) UNVERIFIED; if (!detached_before_ret) mini_profiler_emit_leave (cfg, sp [0]); MONO_INST_NEW (cfg, ins, OP_BR); ins->inst_target_bb = end_bblock; MONO_ADD_INS (cfg->cbb, ins); link_bblock (cfg, cfg->cbb, end_bblock); start_new_bblock = 1; break; } case MONO_CEE_MONO_SAVE_LMF: case MONO_CEE_MONO_RESTORE_LMF: g_assert (method->wrapper_type != MONO_WRAPPER_NONE); break; case MONO_CEE_MONO_CLASSCONST: g_assert (method->wrapper_type != MONO_WRAPPER_NONE); EMIT_NEW_CLASSCONST (cfg, ins, mono_method_get_wrapper_data (method, token)); *sp++ = ins; inline_costs += CALL_COST * MIN(10, num_calls++); break; case MONO_CEE_MONO_METHODCONST: g_assert (method->wrapper_type != MONO_WRAPPER_NONE); EMIT_NEW_METHODCONST (cfg, ins, mono_method_get_wrapper_data (method, token)); *sp++ = ins; break; case MONO_CEE_MONO_PINVOKE_ADDR_CACHE: { g_assert (method->wrapper_type != MONO_WRAPPER_NONE); MonoMethod *pinvoke_method = (MonoMethod*)mono_method_get_wrapper_data (method, token); /* This is a memory slot used by the wrapper */ if (cfg->compile_aot) { EMIT_NEW_AOTCONST (cfg, ins, MONO_PATCH_INFO_METHOD_PINVOKE_ADDR_CACHE, pinvoke_method); } else { gpointer addr = mono_mem_manager_alloc0 (cfg->mem_manager, sizeof (gpointer)); EMIT_NEW_PCONST (cfg, ins, addr); } *sp++ = ins; break; } case MONO_CEE_MONO_NOT_TAKEN: g_assert (method->wrapper_type != MONO_WRAPPER_NONE); cfg->cbb->out_of_line = TRUE; break; case MONO_CEE_MONO_TLS: { MonoTlsKey key; g_assert (method->wrapper_type != MONO_WRAPPER_NONE); key = (MonoTlsKey)n; g_assert (key < TLS_KEY_NUM); ins = mono_create_tls_get (cfg, key); g_assert (ins); ins->type = STACK_PTR; *sp++ = ins; break; } case MONO_CEE_MONO_DYN_CALL: { MonoCallInst *call; /* It would be easier to call a trampoline, but that would put an * extra frame on the stack, confusing exception handling. So * implement it inline using an opcode for now. 
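* The two values popped below become the opcode's sregs; the backend expands OP_DYN_CALL itself.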
*/ g_assert (method->wrapper_type != MONO_WRAPPER_NONE); if (!cfg->dyn_call_var) { cfg->dyn_call_var = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL); /* prevent it from being register allocated */ cfg->dyn_call_var->flags |= MONO_INST_VOLATILE; } /* Has to use a call inst since local regalloc expects it */ MONO_INST_NEW_CALL (cfg, call, OP_DYN_CALL); ins = (MonoInst*)call; sp -= 2; ins->sreg1 = sp [0]->dreg; ins->sreg2 = sp [1]->dreg; MONO_ADD_INS (cfg->cbb, ins); cfg->param_area = MAX (cfg->param_area, cfg->backend->dyn_call_param_area); /* OP_DYN_CALL might need to allocate a dynamically sized param area */ cfg->flags |= MONO_CFG_HAS_ALLOCA; inline_costs += CALL_COST * MIN(10, num_calls++); break; } case MONO_CEE_MONO_MEMORY_BARRIER: { g_assert (method->wrapper_type != MONO_WRAPPER_NONE); mini_emit_memory_barrier (cfg, (int)n); break; } case MONO_CEE_MONO_ATOMIC_STORE_I4: { g_assert (method->wrapper_type != MONO_WRAPPER_NONE); g_assert (mono_arch_opcode_supported (OP_ATOMIC_STORE_I4)); sp -= 2; MONO_INST_NEW (cfg, ins, OP_ATOMIC_STORE_I4); ins->dreg = sp [0]->dreg; ins->sreg1 = sp [1]->dreg; ins->backend.memory_barrier_kind = (int)n; MONO_ADD_INS (cfg->cbb, ins); break; } case MONO_CEE_MONO_LD_DELEGATE_METHOD_PTR: { CHECK_STACK (1); --sp; dreg = alloc_preg (cfg); EMIT_NEW_LOAD_MEMBASE (cfg, ins, OP_LOAD_MEMBASE, dreg, sp [0]->dreg, MONO_STRUCT_OFFSET (MonoDelegate, method_ptr)); *sp++ = ins; break; } case MONO_CEE_MONO_CALLI_EXTRA_ARG: { MonoInst *addr; MonoMethodSignature *fsig; MonoInst *arg; /* * This is the same as CEE_CALLI, but passes an additional argument * to the called method in llvmonly mode. * This is only used by delegate invoke wrappers to call the * actual delegate method. */ g_assert (method->wrapper_type == MONO_WRAPPER_DELEGATE_INVOKE); ins = NULL; cmethod = NULL; CHECK_STACK (1); --sp; addr = *sp; fsig = mini_get_signature (method, token, generic_context, cfg->error); CHECK_CFG_ERROR; if (cfg->llvm_only) cfg->signatures = g_slist_prepend_mempool (cfg->mempool, cfg->signatures, fsig); n = fsig->param_count + fsig->hasthis + 1; CHECK_STACK (n); sp -= n; arg = sp [n - 1]; if (cfg->llvm_only) { /* * The lowest bit of 'arg' determines whether the callee uses the gsharedvt * cconv. This is set by mono_init_delegate (). */ if (cfg->gsharedvt && mini_is_gsharedvt_variable_signature (fsig)) { MonoInst *callee = addr; MonoInst *call, *localloc_ins; MonoBasicBlock *is_gsharedvt_bb, *end_bb; int low_bit_reg = alloc_preg (cfg); NEW_BBLOCK (cfg, is_gsharedvt_bb); NEW_BBLOCK (cfg, end_bb); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_PAND_IMM, low_bit_reg, arg->dreg, 1); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, low_bit_reg, 0); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_PBNE_UN, is_gsharedvt_bb); /* Normal case: callee uses a normal cconv, have to add an out wrapper */ addr = emit_get_rgctx_sig (cfg, context_used, fsig, MONO_RGCTX_INFO_SIG_GSHAREDVT_OUT_TRAMPOLINE_CALLI); /* * ADDR points to a gsharedvt-out wrapper, have to pass <callee, arg> as an extra arg. 
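* The pair is packed into a two-slot localloc'ed buffer whose address becomes the extra argument.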
*/ MONO_INST_NEW (cfg, ins, OP_LOCALLOC_IMM); ins->dreg = alloc_preg (cfg); ins->inst_imm = 2 * TARGET_SIZEOF_VOID_P; MONO_ADD_INS (cfg->cbb, ins); localloc_ins = ins; cfg->flags |= MONO_CFG_HAS_ALLOCA; MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, localloc_ins->dreg, 0, callee->dreg); MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, localloc_ins->dreg, TARGET_SIZEOF_VOID_P, arg->dreg); call = mini_emit_extra_arg_calli (cfg, fsig, sp, localloc_ins->dreg, addr); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb); /* Gsharedvt case: callee uses a gsharedvt cconv, no conversion is needed */ MONO_START_BB (cfg, is_gsharedvt_bb); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_PXOR_IMM, arg->dreg, arg->dreg, 1); ins = mini_emit_extra_arg_calli (cfg, fsig, sp, arg->dreg, callee); ins->dreg = call->dreg; MONO_START_BB (cfg, end_bb); } else { /* Caller uses a normal calling conv */ MonoInst *callee = addr; MonoInst *call, *localloc_ins; MonoBasicBlock *is_gsharedvt_bb, *end_bb; int low_bit_reg = alloc_preg (cfg); NEW_BBLOCK (cfg, is_gsharedvt_bb); NEW_BBLOCK (cfg, end_bb); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_PAND_IMM, low_bit_reg, arg->dreg, 1); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, low_bit_reg, 0); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_PBNE_UN, is_gsharedvt_bb); /* Normal case: callee uses a normal cconv, no conversion is needed */ call = mini_emit_extra_arg_calli (cfg, fsig, sp, arg->dreg, callee); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb); /* Gsharedvt case: callee uses a gsharedvt cconv, have to add an in wrapper */ MONO_START_BB (cfg, is_gsharedvt_bb); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_PXOR_IMM, arg->dreg, arg->dreg, 1); NEW_AOTCONST (cfg, addr, MONO_PATCH_INFO_GSHAREDVT_IN_WRAPPER, fsig); MONO_ADD_INS (cfg->cbb, addr); /* * ADDR points to a gsharedvt-in wrapper, have to pass <callee, arg> as an extra arg. */ MONO_INST_NEW (cfg, ins, OP_LOCALLOC_IMM); ins->dreg = alloc_preg (cfg); ins->inst_imm = 2 * TARGET_SIZEOF_VOID_P; MONO_ADD_INS (cfg->cbb, ins); localloc_ins = ins; cfg->flags |= MONO_CFG_HAS_ALLOCA; MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, localloc_ins->dreg, 0, callee->dreg); MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, localloc_ins->dreg, TARGET_SIZEOF_VOID_P, arg->dreg); ins = mini_emit_extra_arg_calli (cfg, fsig, sp, localloc_ins->dreg, addr); ins->dreg = call->dreg; MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb); MONO_START_BB (cfg, end_bb); } } else { /* Same as CEE_CALLI */ if (cfg->gsharedvt && mini_is_gsharedvt_signature (fsig)) { /* * We pass the address to the gsharedvt trampoline in the rgctx reg */ MonoInst *callee = addr; addr = emit_get_rgctx_sig (cfg, context_used, fsig, MONO_RGCTX_INFO_SIG_GSHAREDVT_OUT_TRAMPOLINE_CALLI); ins = (MonoInst*)mini_emit_calli (cfg, fsig, sp, addr, NULL, callee); } else { ins = (MonoInst*)mini_emit_calli (cfg, fsig, sp, addr, NULL, NULL); } } if (!MONO_TYPE_IS_VOID (fsig->ret)) *sp++ = mono_emit_widen_call_res (cfg, ins, fsig); CHECK_CFG_EXCEPTION; ins_flag = 0; constrained_class = NULL; break; } case MONO_CEE_MONO_LDDOMAIN: { MonoDomain *domain = mono_get_root_domain (); g_assert (method->wrapper_type != MONO_WRAPPER_NONE); EMIT_NEW_PCONST (cfg, ins, cfg->compile_aot ? NULL : domain); *sp++ = ins; break; } case MONO_CEE_MONO_SAVE_LAST_ERROR: g_assert (method->wrapper_type != MONO_WRAPPER_NONE); // Just an IL prefix, setting this flag, picked up by call instructions. 
save_last_error = TRUE; break; case MONO_CEE_MONO_GET_RGCTX_ARG: g_assert (method->wrapper_type != MONO_WRAPPER_NONE); mono_create_rgctx_var (cfg); MONO_INST_NEW (cfg, ins, OP_MOVE); ins->dreg = alloc_dreg (cfg, STACK_PTR); ins->sreg1 = cfg->rgctx_var->dreg; ins->type = STACK_PTR; MONO_ADD_INS (cfg->cbb, ins); *sp++ = ins; break; case MONO_CEE_MONO_GET_SP: { /* Used by COOP only, so this is good enough */ MonoInst *var = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL); EMIT_NEW_VARLOADA (cfg, ins, var, NULL); *sp++ = ins; break; } case MONO_CEE_MONO_REMAP_OVF_EXC: /* Remap the exception thrown by the next _OVF opcode */ g_assert (method->wrapper_type != MONO_WRAPPER_NONE); ovf_exc = (const char*)mono_method_get_wrapper_data (method, token); break; case MONO_CEE_ARGLIST: { /* somewhat similar to LDTOKEN */ MonoInst *addr, *vtvar; vtvar = mono_compile_create_var (cfg, m_class_get_byval_arg (mono_defaults.argumenthandle_class), OP_LOCAL); EMIT_NEW_TEMPLOADA (cfg, addr, vtvar->inst_c0); EMIT_NEW_UNALU (cfg, ins, OP_ARGLIST, -1, addr->dreg); EMIT_NEW_TEMPLOAD (cfg, ins, vtvar->inst_c0); ins->type = STACK_VTYPE; ins->klass = mono_defaults.argumenthandle_class; *sp++ = ins; break; } case MONO_CEE_CEQ: case MONO_CEE_CGT: case MONO_CEE_CGT_UN: case MONO_CEE_CLT: case MONO_CEE_CLT_UN: { MonoInst *cmp, *arg1, *arg2; sp -= 2; arg1 = sp [0]; arg2 = sp [1]; /* * The following transforms: * CEE_CEQ into OP_CEQ * CEE_CGT into OP_CGT * CEE_CGT_UN into OP_CGT_UN * CEE_CLT into OP_CLT * CEE_CLT_UN into OP_CLT_UN */ MONO_INST_NEW (cfg, cmp, (OP_CEQ - CEE_CEQ) + ip [1]); MONO_INST_NEW (cfg, ins, cmp->opcode); cmp->sreg1 = arg1->dreg; cmp->sreg2 = arg2->dreg; type_from_op (cfg, cmp, arg1, arg2); CHECK_TYPE (cmp); add_widen_op (cfg, cmp, &arg1, &arg2); if ((arg1->type == STACK_I8) || ((TARGET_SIZEOF_VOID_P == 8) && ((arg1->type == STACK_PTR) || (arg1->type == STACK_OBJ) || (arg1->type == STACK_MP)))) cmp->opcode = OP_LCOMPARE; else if (arg1->type == STACK_R4) cmp->opcode = OP_RCOMPARE; else if (arg1->type == STACK_R8) cmp->opcode = OP_FCOMPARE; else cmp->opcode = OP_ICOMPARE; MONO_ADD_INS (cfg->cbb, cmp); ins->type = STACK_I4; ins->dreg = alloc_dreg (cfg, (MonoStackType)ins->type); type_from_op (cfg, ins, arg1, arg2); if (cmp->opcode == OP_FCOMPARE || cmp->opcode == OP_RCOMPARE) { /* * The backends expect the fceq opcodes to do the * comparison too. 
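* So the compare's sregs are copied onto the setcc instruction and the separate compare is nullified.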
*/ ins->sreg1 = cmp->sreg1; ins->sreg2 = cmp->sreg2; NULLIFY_INS (cmp); } MONO_ADD_INS (cfg->cbb, ins); *sp++ = ins; break; } case MONO_CEE_LDFTN: { MonoInst *argconst; MonoMethod *cil_method; cmethod = mini_get_method (cfg, method, n, NULL, generic_context); CHECK_CFG_ERROR; if (constrained_class) { if (m_method_is_static (cmethod) && mini_class_check_context_used (cfg, constrained_class)) // FIXME: GENERIC_SHARING_FAILURE (CEE_LDFTN); cmethod = get_constrained_method (cfg, image, n, cmethod, constrained_class, generic_context); constrained_class = NULL; CHECK_CFG_ERROR; } mono_class_init_internal (cmethod->klass); mono_save_token_info (cfg, image, n, cmethod); context_used = mini_method_check_context_used (cfg, cmethod); cil_method = cmethod; if (!dont_verify && !cfg->skip_visibility && !mono_method_can_access_method (method, cmethod)) emit_method_access_failure (cfg, method, cil_method); const gboolean has_unmanaged_callers_only = cmethod->wrapper_type == MONO_WRAPPER_NONE && mono_method_has_unmanaged_callers_only_attribute (cmethod); /* * Optimize the common case of ldftn+delegate creation */ if ((sp > stack_start) && (next_ip + 4 < end) && ip_in_bb (cfg, cfg->cbb, next_ip) && (next_ip [0] == CEE_NEWOBJ)) { MonoMethod *ctor_method = mini_get_method (cfg, method, read32 (next_ip + 1), NULL, generic_context); if (ctor_method && (m_class_get_parent (ctor_method->klass) == mono_defaults.multicastdelegate_class)) { MonoInst *target_ins, *handle_ins; MonoMethod *invoke; int invoke_context_used; if (G_UNLIKELY (has_unmanaged_callers_only)) { mono_error_set_not_supported (cfg->error, "Cannot create delegate from method with UnmanagedCallersOnlyAttribute"); CHECK_CFG_ERROR; } invoke = mono_get_delegate_invoke_internal (ctor_method->klass); if (!invoke || !mono_method_signature_internal (invoke)) LOAD_ERROR; invoke_context_used = mini_method_check_context_used (cfg, invoke); target_ins = sp [-1]; if (!(cmethod->flags & METHOD_ATTRIBUTE_STATIC)) { /*BAD IMPL: We must not add a null check for virtual invoke delegates.*/ if (mono_method_signature_internal (invoke)->param_count == mono_method_signature_internal (cmethod)->param_count) { MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, target_ins->dreg, 0); MONO_EMIT_NEW_COND_EXC (cfg, EQ, "ArgumentException"); } } if ((invoke_context_used == 0 || !cfg->gsharedvt) || cfg->llvm_only) { if (cfg->verbose_level > 3) g_print ("converting (in B%d: stack: %d) %s", cfg->cbb->block_num, (int)(sp - stack_start), mono_disasm_code_one (NULL, method, ip + 6, NULL)); if ((handle_ins = handle_delegate_ctor (cfg, ctor_method->klass, target_ins, cmethod, context_used, invoke_context_used, FALSE))) { sp --; *sp = handle_ins; CHECK_CFG_EXCEPTION; sp ++; next_ip += 5; il_op = MONO_CEE_NEWOBJ; break; } else { CHECK_CFG_ERROR; } } } } /* UnmanagedCallersOnlyAttribute means ldftn should return a method callable from native */ if (G_UNLIKELY (has_unmanaged_callers_only)) { if (G_UNLIKELY (cmethod->flags & METHOD_ATTRIBUTE_PINVOKE_IMPL)) { // Follow CoreCLR, disallow [UnmanagedCallersOnly] and [DllImport] to be used // together emit_not_supported_failure (cfg); EMIT_NEW_PCONST (cfg, ins, NULL); *sp++ = ins; inline_costs += CALL_COST * MIN(10, num_calls++); break; } MonoClass *delegate_klass = NULL; MonoGCHandle target_handle = 0; ERROR_DECL (wrapper_error); MonoMethod *wrapped_cmethod; wrapped_cmethod = mono_marshal_get_managed_wrapper (cmethod, delegate_klass, target_handle, wrapper_error); if (!is_ok (wrapper_error)) { /* if we couldn't create a wrapper because cmethod 
isn't supposed to have an UnmanagedCallersOnly attribute, follow CoreCLR behavior and throw when the method with the ldftn is executing, not when it is being compiled. */ emit_invalid_program_with_msg (cfg, wrapper_error, method, cmethod); mono_error_cleanup (wrapper_error); EMIT_NEW_PCONST (cfg, ins, NULL); *sp++ = ins; inline_costs += CALL_COST * MIN(10, num_calls++); break; } else { cmethod = wrapped_cmethod; } } argconst = emit_get_rgctx_method (cfg, context_used, cmethod, MONO_RGCTX_INFO_METHOD); ins = mono_emit_jit_icall (cfg, mono_ldftn, &argconst); *sp++ = ins; inline_costs += CALL_COST * MIN(10, num_calls++); break; } case MONO_CEE_LDVIRTFTN: { MonoInst *args [2]; cmethod = mini_get_method (cfg, method, n, NULL, generic_context); CHECK_CFG_ERROR; mono_class_init_internal (cmethod->klass); context_used = mini_method_check_context_used (cfg, cmethod); /* * Optimize the common case of ldvirtftn+delegate creation */ if (previous_il_op == MONO_CEE_DUP && (sp > stack_start) && (next_ip + 4 < end) && ip_in_bb (cfg, cfg->cbb, next_ip) && (next_ip [0] == CEE_NEWOBJ)) { MonoMethod *ctor_method = mini_get_method (cfg, method, read32 (next_ip + 1), NULL, generic_context); if (ctor_method && (m_class_get_parent (ctor_method->klass) == mono_defaults.multicastdelegate_class)) { MonoInst *target_ins, *handle_ins; MonoMethod *invoke; int invoke_context_used; const gboolean is_virtual = (cmethod->flags & METHOD_ATTRIBUTE_VIRTUAL) != 0; invoke = mono_get_delegate_invoke_internal (ctor_method->klass); if (!invoke || !mono_method_signature_internal (invoke)) LOAD_ERROR; invoke_context_used = mini_method_check_context_used (cfg, invoke); target_ins = sp [-1]; if (invoke_context_used == 0 || !cfg->gsharedvt || cfg->llvm_only) { if (cfg->verbose_level > 3) g_print ("converting (in B%d: stack: %d) %s", cfg->cbb->block_num, (int)(sp - stack_start), mono_disasm_code_one (NULL, method, ip + 6, NULL)); if ((handle_ins = handle_delegate_ctor (cfg, ctor_method->klass, target_ins, cmethod, context_used, invoke_context_used, is_virtual))) { sp -= 2; *sp = handle_ins; CHECK_CFG_EXCEPTION; next_ip += 5; previous_il_op = MONO_CEE_NEWOBJ; sp ++; break; } else { CHECK_CFG_ERROR; } } } } --sp; args [0] = *sp; args [1] = emit_get_rgctx_method (cfg, context_used, cmethod, MONO_RGCTX_INFO_METHOD); if (context_used) *sp++ = mono_emit_jit_icall (cfg, mono_ldvirtfn_gshared, args); else *sp++ = mono_emit_jit_icall (cfg, mono_ldvirtfn, args); inline_costs += CALL_COST * MIN(10, num_calls++); break; } case MONO_CEE_LOCALLOC: { MonoBasicBlock *non_zero_bb, *end_bb; int alloc_ptr = alloc_preg (cfg); --sp; if (sp != stack_start) UNVERIFIED; if (cfg->method != method) /* * Inlining this into a loop in a parent could lead to * stack overflows which is different behavior than the * non-inlined case, thus disable inlining in this case. 
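* (Inlined localloc memory is only reclaimed when the caller returns, so each loop iteration would grow the caller's frame.)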
*/ INLINE_FAILURE("localloc"); NEW_BBLOCK (cfg, non_zero_bb); NEW_BBLOCK (cfg, end_bb); /* if size != zero */ MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, sp [0]->dreg, 0); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_PBNE_UN, non_zero_bb); //size is zero, so result is NULL MONO_EMIT_NEW_PCONST (cfg, alloc_ptr, NULL); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb); MONO_START_BB (cfg, non_zero_bb); MONO_INST_NEW (cfg, ins, OP_LOCALLOC); ins->dreg = alloc_ptr; ins->sreg1 = sp [0]->dreg; ins->type = STACK_PTR; MONO_ADD_INS (cfg->cbb, ins); cfg->flags |= MONO_CFG_HAS_ALLOCA; if (header->init_locals) ins->flags |= MONO_INST_INIT; MONO_START_BB (cfg, end_bb); EMIT_NEW_UNALU (cfg, ins, OP_MOVE, alloc_preg (cfg), alloc_ptr); ins->type = STACK_PTR; *sp++ = ins; break; } case MONO_CEE_ENDFILTER: { MonoExceptionClause *clause, *nearest; int cc; --sp; if ((sp != stack_start) || (sp [0]->type != STACK_I4)) UNVERIFIED; MONO_INST_NEW (cfg, ins, OP_ENDFILTER); ins->sreg1 = (*sp)->dreg; MONO_ADD_INS (cfg->cbb, ins); start_new_bblock = 1; nearest = NULL; for (cc = 0; cc < header->num_clauses; ++cc) { clause = &header->clauses [cc]; if ((clause->flags & MONO_EXCEPTION_CLAUSE_FILTER) && ((next_ip - header->code) > clause->data.filter_offset && (next_ip - header->code) <= clause->handler_offset) && (!nearest || (clause->data.filter_offset < nearest->data.filter_offset))) nearest = clause; } g_assert (nearest); if ((next_ip - header->code) != nearest->handler_offset) UNVERIFIED; break; } case MONO_CEE_UNALIGNED_: ins_flag |= MONO_INST_UNALIGNED; /* FIXME: record alignment? we can assume 1 for now */ break; case MONO_CEE_VOLATILE_: ins_flag |= MONO_INST_VOLATILE; break; case MONO_CEE_TAIL_: ins_flag |= MONO_INST_TAILCALL; cfg->flags |= MONO_CFG_HAS_TAILCALL; /* Can't inline tailcalls at this time */ inline_costs += 100000; break; case MONO_CEE_INITOBJ: --sp; klass = mini_get_class (method, token, generic_context); CHECK_TYPELOAD (klass); if (mini_class_is_reference (klass)) MONO_EMIT_NEW_STORE_MEMBASE_IMM (cfg, OP_STORE_MEMBASE_IMM, sp [0]->dreg, 0, 0); else mini_emit_initobj (cfg, *sp, NULL, klass); inline_costs += 1; break; case MONO_CEE_CONSTRAINED_: constrained_class = mini_get_class (method, token, generic_context); CHECK_TYPELOAD (constrained_class); ins_has_side_effect = FALSE; break; case MONO_CEE_CPBLK: sp -= 3; mini_emit_memory_copy_bytes (cfg, sp [0], sp [1], sp [2], ins_flag); ins_flag = 0; inline_costs += 1; break; case MONO_CEE_INITBLK: sp -= 3; mini_emit_memory_init_bytes (cfg, sp [0], sp [1], sp [2], ins_flag); ins_flag = 0; inline_costs += 1; break; case MONO_CEE_NO_: if (ip [2] & CEE_NO_TYPECHECK) ins_flag |= MONO_INST_NOTYPECHECK; if (ip [2] & CEE_NO_RANGECHECK) ins_flag |= MONO_INST_NORANGECHECK; if (ip [2] & CEE_NO_NULLCHECK) ins_flag |= MONO_INST_NONULLCHECK; break; case MONO_CEE_RETHROW: { MonoInst *load; int handler_offset = -1; for (i = 0; i < header->num_clauses; ++i) { MonoExceptionClause *clause = &header->clauses [i]; if (MONO_OFFSET_IN_HANDLER (clause, ip - header->code) && !(clause->flags & MONO_EXCEPTION_CLAUSE_FINALLY)) { handler_offset = clause->handler_offset; break; } } cfg->cbb->flags |= BB_EXCEPTION_UNSAFE; if (handler_offset == -1) UNVERIFIED; EMIT_NEW_TEMPLOAD (cfg, load, mono_find_exvar_for_offset (cfg, handler_offset)->inst_c0); MONO_INST_NEW (cfg, ins, OP_RETHROW); ins->sreg1 = load->dreg; MONO_ADD_INS (cfg->cbb, ins); MONO_INST_NEW (cfg, ins, OP_NOT_REACHED); MONO_ADD_INS (cfg->cbb, ins); sp = stack_start; link_bblock (cfg, cfg->cbb, end_bblock); start_new_bblock = 1; 
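/* Nothing is reachable after the rethrow, hence the OP_NOT_REACHED marker and the fresh bblock. */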
break; } case MONO_CEE_MONO_RETHROW: { if (sp [-1]->type != STACK_OBJ) UNVERIFIED; MONO_INST_NEW (cfg, ins, OP_RETHROW); --sp; ins->sreg1 = sp [0]->dreg; cfg->cbb->out_of_line = TRUE; MONO_ADD_INS (cfg->cbb, ins); MONO_INST_NEW (cfg, ins, OP_NOT_REACHED); MONO_ADD_INS (cfg->cbb, ins); sp = stack_start; link_bblock (cfg, cfg->cbb, end_bblock); start_new_bblock = 1; /* This can complicate code generation for llvm since the return value might not be defined */ if (COMPILE_LLVM (cfg)) INLINE_FAILURE ("mono_rethrow"); break; } case MONO_CEE_SIZEOF: { guint32 val; int ialign; if (mono_metadata_token_table (token) == MONO_TABLE_TYPESPEC && !image_is_dynamic (m_class_get_image (method->klass)) && !generic_context) { MonoType *type = mono_type_create_from_typespec_checked (image, token, cfg->error); CHECK_CFG_ERROR; val = mono_type_size (type, &ialign); EMIT_NEW_ICONST (cfg, ins, val); } else { MonoClass *klass = mini_get_class (method, token, generic_context); CHECK_TYPELOAD (klass); if (mini_is_gsharedvt_klass (klass)) { ins = mini_emit_get_gsharedvt_info_klass (cfg, klass, MONO_RGCTX_INFO_CLASS_SIZEOF); ins->type = STACK_I4; } else { val = mono_type_size (m_class_get_byval_arg (klass), &ialign); EMIT_NEW_ICONST (cfg, ins, val); } } *sp++ = ins; break; } case MONO_CEE_REFANYTYPE: { MonoInst *src_var, *src; GSHAREDVT_FAILURE (il_op); --sp; // FIXME: src_var = get_vreg_to_inst (cfg, sp [0]->dreg); if (!src_var) src_var = mono_compile_create_var_for_vreg (cfg, m_class_get_byval_arg (mono_defaults.typed_reference_class), OP_LOCAL, sp [0]->dreg); EMIT_NEW_VARLOADA (cfg, src, src_var, src_var->inst_vtype); EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (mono_defaults.typehandle_class), src->dreg, MONO_STRUCT_OFFSET (MonoTypedRef, type)); *sp++ = ins; break; } case MONO_CEE_READONLY_: readonly = TRUE; break; case MONO_CEE_UNUSED56: case MONO_CEE_UNUSED57: case MONO_CEE_UNUSED70: case MONO_CEE_UNUSED: case MONO_CEE_UNUSED99: case MONO_CEE_UNUSED58: case MONO_CEE_UNUSED1: UNVERIFIED; default: g_warning ("opcode 0x%02x not handled", il_op); UNVERIFIED; } if (ins_has_side_effect) cfg->cbb->flags |= BB_HAS_SIDE_EFFECTS; } if (start_new_bblock != 1) UNVERIFIED; cfg->cbb->cil_length = ip - cfg->cbb->cil_code; if (cfg->cbb->next_bb) { /* This could already be set because of inlining, #693905 */ MonoBasicBlock *bb = cfg->cbb; while (bb->next_bb) bb = bb->next_bb; bb->next_bb = end_bblock; } else { cfg->cbb->next_bb = end_bblock; } #if defined(TARGET_POWERPC) || defined(TARGET_X86) if (cfg->compile_aot) /* FIXME: The plt slots require a GOT var even if the method doesn't use it */ mono_get_got_var (cfg); #endif #ifdef TARGET_WASM if (cfg->lmf_var && !cfg->deopt) { // mini_llvmonly_pop_lmf () might be called before emit_push_lmf () so initialize the LMF cfg->cbb = init_localsbb; EMIT_NEW_VARLOADA (cfg, ins, cfg->lmf_var, NULL); int lmf_reg = ins->dreg; EMIT_NEW_STORE_MEMBASE (cfg, ins, OP_STORE_MEMBASE_IMM, lmf_reg, MONO_STRUCT_OFFSET (MonoLMF, previous_lmf), 0); } #endif if (cfg->method == method && cfg->got_var) mono_emit_load_got_addr (cfg); if (init_localsbb) { cfg->cbb = init_localsbb; cfg->ip = NULL; for (i = 0; i < header->num_locals; ++i) { /* * Vtype initialization might need to be done after CEE_JIT_ATTACH, since it can make calls to memset (), * which need the trampoline code to work. 
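* Scalar and reference locals are cheap to zero inline, so they stay in the earlier init_localsbb.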
*/ if (MONO_TYPE_ISSTRUCT (header->locals [i])) cfg->cbb = init_localsbb2; else cfg->cbb = init_localsbb; emit_init_local (cfg, i, header->locals [i], init_locals); } } if (cfg->init_ref_vars && cfg->method == method) { /* Emit initialization for ref vars */ // FIXME: Avoid duplicate initialization for IL locals. for (i = 0; i < cfg->num_varinfo; ++i) { MonoInst *ins = cfg->varinfo [i]; if (ins->opcode == OP_LOCAL && ins->type == STACK_OBJ) MONO_EMIT_NEW_PCONST (cfg, ins->dreg, NULL); } } if (cfg->lmf_var && cfg->method == method && !cfg->llvm_only) { cfg->cbb = init_localsbb; emit_push_lmf (cfg); } /* emit profiler enter code after a jit attach if there is one */ cfg->cbb = init_localsbb2; mini_profiler_emit_enter (cfg); cfg->cbb = init_localsbb; if (seq_points) { MonoBasicBlock *bb; /* * Make seq points at backward branch targets interruptible. */ for (bb = cfg->bb_entry; bb; bb = bb->next_bb) if (bb->code && bb->in_count > 1 && bb->code->opcode == OP_SEQ_POINT) bb->code->flags |= MONO_INST_SINGLE_STEP_LOC; } /* Add a sequence point for method entry/exit events */ if (seq_points && cfg->gen_sdb_seq_points) { NEW_SEQ_POINT (cfg, ins, METHOD_ENTRY_IL_OFFSET, FALSE); MONO_ADD_INS (init_localsbb, ins); NEW_SEQ_POINT (cfg, ins, METHOD_EXIT_IL_OFFSET, FALSE); MONO_ADD_INS (cfg->bb_exit, ins); } /* * Add seq points for IL offsets which have line number info, but for which no seq point was generated during JITting because * the code they refer to was dead (#11880). */ if (sym_seq_points) { for (i = 0; i < header->code_size; ++i) { if (mono_bitset_test_fast (seq_point_locs, i) && !mono_bitset_test_fast (seq_point_set_locs, i)) { MonoInst *ins; NEW_SEQ_POINT (cfg, ins, i, FALSE); mono_add_seq_point (cfg, NULL, ins, SEQ_POINT_NATIVE_OFFSET_DEAD_CODE); } } } cfg->ip = NULL; if (cfg->method == method) { compute_bb_regions (cfg); } else { MonoBasicBlock *bb; /* get_most_deep_clause () in mini-llvm.c depends on this for inlined bblocks */ for (bb = start_bblock; bb != end_bblock; bb = bb->next_bb) { bb->real_offset = inline_offset; } } if (inline_costs < 0) { char *mname; /* Method is too large */ mname = mono_method_full_name (method, TRUE); mono_cfg_set_exception_invalid_program (cfg, g_strdup_printf ("Method %s is too complex.", mname)); g_free (mname); } if ((cfg->verbose_level > 2) && (cfg->method == method)) mono_print_code (cfg, "AFTER METHOD-TO-IR"); goto cleanup; mono_error_exit: if (cfg->verbose_level > 3) g_print ("exiting due to error"); g_assert (!is_ok (cfg->error)); goto cleanup; exception_exit: if (cfg->verbose_level > 3) g_print ("exiting due to exception"); g_assert (cfg->exception_type != MONO_EXCEPTION_NONE); goto cleanup; unverified: if (cfg->verbose_level > 3) g_print ("exiting due to invalid il"); set_exception_type_from_invalid_il (cfg, method, ip); goto cleanup; cleanup: g_slist_free (class_inits); mono_basic_block_free (original_bb); cfg->dont_inline = g_list_remove (cfg->dont_inline, method); if (cfg->exception_type) return -1; else return inline_costs; } static int store_membase_reg_to_store_membase_imm (int opcode) { switch (opcode) { case OP_STORE_MEMBASE_REG: return OP_STORE_MEMBASE_IMM; case OP_STOREI1_MEMBASE_REG: return OP_STOREI1_MEMBASE_IMM; case OP_STOREI2_MEMBASE_REG: return OP_STOREI2_MEMBASE_IMM; case OP_STOREI4_MEMBASE_REG: return OP_STOREI4_MEMBASE_IMM; case OP_STOREI8_MEMBASE_REG: return OP_STOREI8_MEMBASE_IMM; default: g_assert_not_reached (); } return -1; } int mono_op_to_op_imm (int opcode) { switch (opcode) { case OP_IADD: return OP_IADD_IMM; case OP_ISUB: 
return OP_ISUB_IMM; case OP_IDIV: return OP_IDIV_IMM; case OP_IDIV_UN: return OP_IDIV_UN_IMM; case OP_IREM: return OP_IREM_IMM; case OP_IREM_UN: return OP_IREM_UN_IMM; case OP_IMUL: return OP_IMUL_IMM; case OP_IAND: return OP_IAND_IMM; case OP_IOR: return OP_IOR_IMM; case OP_IXOR: return OP_IXOR_IMM; case OP_ISHL: return OP_ISHL_IMM; case OP_ISHR: return OP_ISHR_IMM; case OP_ISHR_UN: return OP_ISHR_UN_IMM; case OP_LADD: return OP_LADD_IMM; case OP_LSUB: return OP_LSUB_IMM; case OP_LAND: return OP_LAND_IMM; case OP_LOR: return OP_LOR_IMM; case OP_LXOR: return OP_LXOR_IMM; case OP_LSHL: return OP_LSHL_IMM; case OP_LSHR: return OP_LSHR_IMM; case OP_LSHR_UN: return OP_LSHR_UN_IMM; #if SIZEOF_REGISTER == 8 case OP_LMUL: return OP_LMUL_IMM; case OP_LREM: return OP_LREM_IMM; #endif case OP_COMPARE: return OP_COMPARE_IMM; case OP_ICOMPARE: return OP_ICOMPARE_IMM; case OP_LCOMPARE: return OP_LCOMPARE_IMM; case OP_STORE_MEMBASE_REG: return OP_STORE_MEMBASE_IMM; case OP_STOREI1_MEMBASE_REG: return OP_STOREI1_MEMBASE_IMM; case OP_STOREI2_MEMBASE_REG: return OP_STOREI2_MEMBASE_IMM; case OP_STOREI4_MEMBASE_REG: return OP_STOREI4_MEMBASE_IMM; #if defined(TARGET_X86) || defined (TARGET_AMD64) case OP_X86_PUSH: return OP_X86_PUSH_IMM; case OP_X86_COMPARE_MEMBASE_REG: return OP_X86_COMPARE_MEMBASE_IMM; #endif #if defined(TARGET_AMD64) case OP_AMD64_ICOMPARE_MEMBASE_REG: return OP_AMD64_ICOMPARE_MEMBASE_IMM; #endif case OP_VOIDCALL_REG: return OP_VOIDCALL; case OP_CALL_REG: return OP_CALL; case OP_LCALL_REG: return OP_LCALL; case OP_FCALL_REG: return OP_FCALL; case OP_LOCALLOC: return OP_LOCALLOC_IMM; } return -1; } int mono_load_membase_to_load_mem (int opcode) { // FIXME: Add a MONO_ARCH_HAVE_LOAD_MEM macro #if defined(TARGET_X86) || defined(TARGET_AMD64) switch (opcode) { case OP_LOAD_MEMBASE: return OP_LOAD_MEM; case OP_LOADU1_MEMBASE: return OP_LOADU1_MEM; case OP_LOADU2_MEMBASE: return OP_LOADU2_MEM; case OP_LOADI4_MEMBASE: return OP_LOADI4_MEM; case OP_LOADU4_MEMBASE: return OP_LOADU4_MEM; #if SIZEOF_REGISTER == 8 case OP_LOADI8_MEMBASE: return OP_LOADI8_MEM; #endif } #endif return -1; } static int op_to_op_dest_membase (int store_opcode, int opcode) { #if defined(TARGET_X86) if (!((store_opcode == OP_STORE_MEMBASE_REG) || (store_opcode == OP_STOREI4_MEMBASE_REG))) return -1; switch (opcode) { case OP_IADD: return OP_X86_ADD_MEMBASE_REG; case OP_ISUB: return OP_X86_SUB_MEMBASE_REG; case OP_IAND: return OP_X86_AND_MEMBASE_REG; case OP_IOR: return OP_X86_OR_MEMBASE_REG; case OP_IXOR: return OP_X86_XOR_MEMBASE_REG; case OP_ADD_IMM: case OP_IADD_IMM: return OP_X86_ADD_MEMBASE_IMM; case OP_SUB_IMM: case OP_ISUB_IMM: return OP_X86_SUB_MEMBASE_IMM; case OP_AND_IMM: case OP_IAND_IMM: return OP_X86_AND_MEMBASE_IMM; case OP_OR_IMM: case OP_IOR_IMM: return OP_X86_OR_MEMBASE_IMM; case OP_XOR_IMM: case OP_IXOR_IMM: return OP_X86_XOR_MEMBASE_IMM; case OP_MOVE: return OP_NOP; } #endif #if defined(TARGET_AMD64) if (!((store_opcode == OP_STORE_MEMBASE_REG) || (store_opcode == OP_STOREI4_MEMBASE_REG) || (store_opcode == OP_STOREI8_MEMBASE_REG))) return -1; switch (opcode) { case OP_IADD: return OP_X86_ADD_MEMBASE_REG; case OP_ISUB: return OP_X86_SUB_MEMBASE_REG; case OP_IAND: return OP_X86_AND_MEMBASE_REG; case OP_IOR: return OP_X86_OR_MEMBASE_REG; case OP_IXOR: return OP_X86_XOR_MEMBASE_REG; case OP_IADD_IMM: return OP_X86_ADD_MEMBASE_IMM; case OP_ISUB_IMM: return OP_X86_SUB_MEMBASE_IMM; case OP_IAND_IMM: return OP_X86_AND_MEMBASE_IMM; case OP_IOR_IMM: return OP_X86_OR_MEMBASE_IMM; case OP_IXOR_IMM: return 
OP_X86_XOR_MEMBASE_IMM; case OP_LADD: return OP_AMD64_ADD_MEMBASE_REG; case OP_LSUB: return OP_AMD64_SUB_MEMBASE_REG; case OP_LAND: return OP_AMD64_AND_MEMBASE_REG; case OP_LOR: return OP_AMD64_OR_MEMBASE_REG; case OP_LXOR: return OP_AMD64_XOR_MEMBASE_REG; case OP_ADD_IMM: case OP_LADD_IMM: return OP_AMD64_ADD_MEMBASE_IMM; case OP_SUB_IMM: case OP_LSUB_IMM: return OP_AMD64_SUB_MEMBASE_IMM; case OP_AND_IMM: case OP_LAND_IMM: return OP_AMD64_AND_MEMBASE_IMM; case OP_OR_IMM: case OP_LOR_IMM: return OP_AMD64_OR_MEMBASE_IMM; case OP_XOR_IMM: case OP_LXOR_IMM: return OP_AMD64_XOR_MEMBASE_IMM; case OP_MOVE: return OP_NOP; } #endif return -1; } static int op_to_op_store_membase (int store_opcode, int opcode) { #if defined(TARGET_X86) || defined(TARGET_AMD64) switch (opcode) { case OP_ICEQ: if (store_opcode == OP_STOREI1_MEMBASE_REG) return OP_X86_SETEQ_MEMBASE; case OP_CNE: if (store_opcode == OP_STOREI1_MEMBASE_REG) return OP_X86_SETNE_MEMBASE; } #endif return -1; } static int op_to_op_src1_membase (MonoCompile *cfg, int load_opcode, int opcode) { #ifdef TARGET_X86 /* FIXME: This has sign extension issues */ /* if ((opcode == OP_ICOMPARE_IMM) && (load_opcode == OP_LOADU1_MEMBASE)) return OP_X86_COMPARE_MEMBASE8_IMM; */ if (!((load_opcode == OP_LOAD_MEMBASE) || (load_opcode == OP_LOADI4_MEMBASE) || (load_opcode == OP_LOADU4_MEMBASE))) return -1; switch (opcode) { case OP_X86_PUSH: return OP_X86_PUSH_MEMBASE; case OP_COMPARE_IMM: case OP_ICOMPARE_IMM: return OP_X86_COMPARE_MEMBASE_IMM; case OP_COMPARE: case OP_ICOMPARE: return OP_X86_COMPARE_MEMBASE_REG; } #endif #ifdef TARGET_AMD64 /* FIXME: This has sign extension issues */ /* if ((opcode == OP_ICOMPARE_IMM) && (load_opcode == OP_LOADU1_MEMBASE)) return OP_X86_COMPARE_MEMBASE8_IMM; */ switch (opcode) { case OP_X86_PUSH: if ((load_opcode == OP_LOAD_MEMBASE && !cfg->backend->ilp32) || (load_opcode == OP_LOADI8_MEMBASE)) return OP_X86_PUSH_MEMBASE; break; /* FIXME: This only works for 32 bit immediates case OP_COMPARE_IMM: case OP_LCOMPARE_IMM: if ((load_opcode == OP_LOAD_MEMBASE) || (load_opcode == OP_LOADI8_MEMBASE)) return OP_AMD64_COMPARE_MEMBASE_IMM; */ case OP_ICOMPARE_IMM: if ((load_opcode == OP_LOADI4_MEMBASE) || (load_opcode == OP_LOADU4_MEMBASE)) return OP_AMD64_ICOMPARE_MEMBASE_IMM; break; case OP_COMPARE: case OP_LCOMPARE: if (cfg->backend->ilp32 && load_opcode == OP_LOAD_MEMBASE) return OP_AMD64_ICOMPARE_MEMBASE_REG; if ((load_opcode == OP_LOAD_MEMBASE && !cfg->backend->ilp32) || (load_opcode == OP_LOADI8_MEMBASE)) return OP_AMD64_COMPARE_MEMBASE_REG; break; case OP_ICOMPARE: if ((load_opcode == OP_LOADI4_MEMBASE) || (load_opcode == OP_LOADU4_MEMBASE)) return OP_AMD64_ICOMPARE_MEMBASE_REG; break; } #endif return -1; } static int op_to_op_src2_membase (MonoCompile *cfg, int load_opcode, int opcode) { #ifdef TARGET_X86 if (!((load_opcode == OP_LOAD_MEMBASE) || (load_opcode == OP_LOADI4_MEMBASE) || (load_opcode == OP_LOADU4_MEMBASE))) return -1; switch (opcode) { case OP_COMPARE: case OP_ICOMPARE: return OP_X86_COMPARE_REG_MEMBASE; case OP_IADD: return OP_X86_ADD_REG_MEMBASE; case OP_ISUB: return OP_X86_SUB_REG_MEMBASE; case OP_IAND: return OP_X86_AND_REG_MEMBASE; case OP_IOR: return OP_X86_OR_REG_MEMBASE; case OP_IXOR: return OP_X86_XOR_REG_MEMBASE; } #endif #ifdef TARGET_AMD64 if ((load_opcode == OP_LOADI4_MEMBASE) || (load_opcode == OP_LOADU4_MEMBASE) || (load_opcode == OP_LOAD_MEMBASE && cfg->backend->ilp32)) { switch (opcode) { case OP_ICOMPARE: return OP_AMD64_ICOMPARE_REG_MEMBASE; case OP_IADD: return OP_X86_ADD_REG_MEMBASE; case 
OP_ISUB: return OP_X86_SUB_REG_MEMBASE; case OP_IAND: return OP_X86_AND_REG_MEMBASE; case OP_IOR: return OP_X86_OR_REG_MEMBASE; case OP_IXOR: return OP_X86_XOR_REG_MEMBASE; } } else if ((load_opcode == OP_LOADI8_MEMBASE) || (load_opcode == OP_LOAD_MEMBASE && !cfg->backend->ilp32)) { switch (opcode) { case OP_COMPARE: case OP_LCOMPARE: return OP_AMD64_COMPARE_REG_MEMBASE; case OP_LADD: return OP_AMD64_ADD_REG_MEMBASE; case OP_LSUB: return OP_AMD64_SUB_REG_MEMBASE; case OP_LAND: return OP_AMD64_AND_REG_MEMBASE; case OP_LOR: return OP_AMD64_OR_REG_MEMBASE; case OP_LXOR: return OP_AMD64_XOR_REG_MEMBASE; } } #endif return -1; } int mono_op_to_op_imm_noemul (int opcode) { MONO_DISABLE_WARNING(4065) // switch with default but no case switch (opcode) { #if SIZEOF_REGISTER == 4 && !defined(MONO_ARCH_NO_EMULATE_LONG_SHIFT_OPS) case OP_LSHR: case OP_LSHL: case OP_LSHR_UN: return -1; #endif #if defined(MONO_ARCH_EMULATE_MUL_DIV) || defined(MONO_ARCH_EMULATE_DIV) case OP_IDIV: case OP_IDIV_UN: case OP_IREM: case OP_IREM_UN: return -1; #endif #if defined(MONO_ARCH_EMULATE_MUL_DIV) case OP_IMUL: return -1; #endif default: return mono_op_to_op_imm (opcode); } MONO_RESTORE_WARNING } gboolean mono_op_no_side_effects (int opcode) { /* FIXME: Add more instructions */ /* INEG sets the condition codes, and the OP_LNEG decomposition depends on this on x86 */ switch (opcode) { case OP_MOVE: case OP_FMOVE: case OP_VMOVE: case OP_XMOVE: case OP_RMOVE: case OP_VZERO: case OP_XZERO: case OP_ICONST: case OP_I8CONST: case OP_ADD_IMM: case OP_R8CONST: case OP_LADD_IMM: case OP_ISUB_IMM: case OP_IADD_IMM: case OP_LNEG: case OP_ISUB: case OP_CMOV_IGE: case OP_ISHL_IMM: case OP_ISHR_IMM: case OP_ISHR_UN_IMM: case OP_IAND_IMM: case OP_ICONV_TO_U1: case OP_ICONV_TO_I1: case OP_SEXT_I4: case OP_LCONV_TO_U1: case OP_ICONV_TO_U2: case OP_ICONV_TO_I2: case OP_LCONV_TO_I2: case OP_LDADDR: case OP_PHI: case OP_NOP: case OP_ZEXT_I4: case OP_NOT_NULL: case OP_IL_SEQ_POINT: case OP_RTTYPE: return TRUE; default: return FALSE; } } gboolean mono_ins_no_side_effects (MonoInst *ins) { if (mono_op_no_side_effects (ins->opcode)) return TRUE; if (ins->opcode == OP_AOTCONST) { MonoJumpInfoType type = (MonoJumpInfoType)(intptr_t)ins->inst_p1; // Some AOTCONSTs have side effects switch (type) { case MONO_PATCH_INFO_TYPE_FROM_HANDLE: case MONO_PATCH_INFO_LDSTR: case MONO_PATCH_INFO_VTABLE: case MONO_PATCH_INFO_METHOD_RGCTX: return TRUE; } } return FALSE; } /** * mono_handle_global_vregs: * * Make vregs used in more than one bblock 'global', i.e. allocate a variable * for them. 
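* Vregs confined to a single bblock stay local and are handled by the local register allocator.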
*/ void mono_handle_global_vregs (MonoCompile *cfg) { gint32 *vreg_to_bb; MonoBasicBlock *bb; int i, pos; vreg_to_bb = (gint32 *)mono_mempool_alloc0 (cfg->mempool, sizeof (gint32*) * cfg->next_vreg + 1); #ifdef MONO_ARCH_SIMD_INTRINSICS if (cfg->uses_simd_intrinsics & MONO_CFG_USES_SIMD_INTRINSICS_SIMPLIFY_INDIRECTION) mono_simd_simplify_indirection (cfg); #endif /* Find local vregs used in more than one bb */ for (bb = cfg->bb_entry; bb; bb = bb->next_bb) { MonoInst *ins = bb->code; int block_num = bb->block_num; if (cfg->verbose_level > 2) printf ("\nHANDLE-GLOBAL-VREGS BLOCK %d:\n", bb->block_num); cfg->cbb = bb; for (; ins; ins = ins->next) { const char *spec = INS_INFO (ins->opcode); int regtype = 0, regindex; gint32 prev_bb; if (G_UNLIKELY (cfg->verbose_level > 2)) mono_print_ins (ins); g_assert (ins->opcode >= MONO_CEE_LAST); for (regindex = 0; regindex < 4; regindex ++) { int vreg = 0; if (regindex == 0) { regtype = spec [MONO_INST_DEST]; if (regtype == ' ') continue; vreg = ins->dreg; } else if (regindex == 1) { regtype = spec [MONO_INST_SRC1]; if (regtype == ' ') continue; vreg = ins->sreg1; } else if (regindex == 2) { regtype = spec [MONO_INST_SRC2]; if (regtype == ' ') continue; vreg = ins->sreg2; } else if (regindex == 3) { regtype = spec [MONO_INST_SRC3]; if (regtype == ' ') continue; vreg = ins->sreg3; } #if SIZEOF_REGISTER == 4 /* In the LLVM case, the long opcodes are not decomposed */ if (regtype == 'l' && !COMPILE_LLVM (cfg)) { /* * Since some instructions reference the original long vreg, * and some reference the two component vregs, it is quite hard * to determine when it needs to be global. So be conservative. */ if (!get_vreg_to_inst (cfg, vreg)) { mono_compile_create_var_for_vreg (cfg, m_class_get_byval_arg (mono_defaults.int64_class), OP_LOCAL, vreg); if (cfg->verbose_level > 2) printf ("LONG VREG R%d made global.\n", vreg); } /* * Make the component vregs volatile since the optimizations can * get confused otherwise. 
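* MONO_INST_VOLATILE also keeps them out of the vreg-to-lvreg optimization in mono_spill_global_vars ().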
*/ get_vreg_to_inst (cfg, MONO_LVREG_LS (vreg))->flags |= MONO_INST_VOLATILE; get_vreg_to_inst (cfg, MONO_LVREG_MS (vreg))->flags |= MONO_INST_VOLATILE; } #endif g_assert (vreg != -1); prev_bb = vreg_to_bb [vreg]; if (prev_bb == 0) { /* 0 is a valid block num */ vreg_to_bb [vreg] = block_num + 1; } else if ((prev_bb != block_num + 1) && (prev_bb != -1)) { if (((regtype == 'i' && (vreg < MONO_MAX_IREGS))) || (regtype == 'f' && (vreg < MONO_MAX_FREGS))) continue; if (!get_vreg_to_inst (cfg, vreg)) { if (G_UNLIKELY (cfg->verbose_level > 2)) printf ("VREG R%d used in BB%d and BB%d made global.\n", vreg, vreg_to_bb [vreg], block_num); switch (regtype) { case 'i': if (vreg_is_ref (cfg, vreg)) mono_compile_create_var_for_vreg (cfg, mono_get_object_type (), OP_LOCAL, vreg); else mono_compile_create_var_for_vreg (cfg, mono_get_int_type (), OP_LOCAL, vreg); break; case 'l': mono_compile_create_var_for_vreg (cfg, m_class_get_byval_arg (mono_defaults.int64_class), OP_LOCAL, vreg); break; case 'f': mono_compile_create_var_for_vreg (cfg, m_class_get_byval_arg (mono_defaults.double_class), OP_LOCAL, vreg); break; case 'v': case 'x': mono_compile_create_var_for_vreg (cfg, m_class_get_byval_arg (ins->klass), OP_LOCAL, vreg); break; default: g_assert_not_reached (); } } /* Flag as having been used in more than one bb */ vreg_to_bb [vreg] = -1; } } } } /* If a variable is used in only one bblock, convert it into a local vreg */ for (i = 0; i < cfg->num_varinfo; i++) { MonoInst *var = cfg->varinfo [i]; MonoMethodVar *vmv = MONO_VARINFO (cfg, i); switch (var->type) { case STACK_I4: case STACK_OBJ: case STACK_PTR: case STACK_MP: case STACK_VTYPE: #if SIZEOF_REGISTER == 8 case STACK_I8: #endif #if !defined(TARGET_X86) /* Enabling this screws up the fp stack on x86 */ case STACK_R8: #endif if (mono_arch_is_soft_float ()) break; /* if (var->type == STACK_VTYPE && cfg->gsharedvt && mini_is_gsharedvt_variable_type (var->inst_vtype)) break; */ /* Arguments are implicitly global */ /* Putting R4 vars into registers doesn't work currently */ /* The gsharedvt vars are implicitly referenced by ldaddr opcodes, but those opcodes are only generated later */ if ((var->opcode != OP_ARG) && (var != cfg->ret) && !(var->flags & (MONO_INST_VOLATILE|MONO_INST_INDIRECT)) && (vreg_to_bb [var->dreg] != -1) && (m_class_get_byval_arg (var->klass)->type != MONO_TYPE_R4) && !cfg->disable_vreg_to_lvreg && var != cfg->gsharedvt_info_var && var != cfg->gsharedvt_locals_var && var != cfg->lmf_addr_var) { /* * Make sure that the variable's liveness interval doesn't contain a call, since * that would cause the lvreg to be spilled, making the whole optimization * useless. 
*/ /* This is too slow for JIT compilation */ #if 0 if (cfg->compile_aot && vreg_to_bb [var->dreg]) { MonoInst *ins; int def_index, call_index, ins_index; gboolean spilled = FALSE; def_index = -1; call_index = -1; ins_index = 0; for (ins = vreg_to_bb [var->dreg]->code; ins; ins = ins->next) { const char *spec = INS_INFO (ins->opcode); if ((spec [MONO_INST_DEST] != ' ') && (ins->dreg == var->dreg)) def_index = ins_index; if (((spec [MONO_INST_SRC1] != ' ') && (ins->sreg1 == var->dreg)) || ((spec [MONO_INST_SRC1] != ' ') && (ins->sreg1 == var->dreg))) { if (call_index > def_index) { spilled = TRUE; break; } } if (MONO_IS_CALL (ins)) call_index = ins_index; ins_index ++; } if (spilled) break; } #endif if (G_UNLIKELY (cfg->verbose_level > 2)) printf ("CONVERTED R%d(%d) TO VREG.\n", var->dreg, vmv->idx); var->flags |= MONO_INST_IS_DEAD; cfg->vreg_to_inst [var->dreg] = NULL; } break; } } /* * Compress the varinfo and vars tables so the liveness computation is faster and * takes up less space. */ pos = 0; for (i = 0; i < cfg->num_varinfo; ++i) { MonoInst *var = cfg->varinfo [i]; if (pos < i && cfg->locals_start == i) cfg->locals_start = pos; if (!(var->flags & MONO_INST_IS_DEAD)) { if (pos < i) { cfg->varinfo [pos] = cfg->varinfo [i]; cfg->varinfo [pos]->inst_c0 = pos; memcpy (&cfg->vars [pos], &cfg->vars [i], sizeof (MonoMethodVar)); cfg->vars [pos].idx = pos; #if SIZEOF_REGISTER == 4 if (cfg->varinfo [pos]->type == STACK_I8) { /* Modify the two component vars too */ MonoInst *var1; var1 = get_vreg_to_inst (cfg, MONO_LVREG_LS (cfg->varinfo [pos]->dreg)); var1->inst_c0 = pos; var1 = get_vreg_to_inst (cfg, MONO_LVREG_MS (cfg->varinfo [pos]->dreg)); var1->inst_c0 = pos; } #endif } pos ++; } } cfg->num_varinfo = pos; if (cfg->locals_start > cfg->num_varinfo) cfg->locals_start = cfg->num_varinfo; } /* * mono_allocate_gsharedvt_vars: * * Allocate variables with gsharedvt types to entries in the MonoGSharedVtMethodRuntimeInfo.entries array. * Initialize cfg->gsharedvt_vreg_to_idx with the mapping between vregs and indexes. */ void mono_allocate_gsharedvt_vars (MonoCompile *cfg) { int i; cfg->gsharedvt_vreg_to_idx = (int *)mono_mempool_alloc0 (cfg->mempool, sizeof (int) * cfg->next_vreg); for (i = 0; i < cfg->num_varinfo; ++i) { MonoInst *ins = cfg->varinfo [i]; int idx; if (mini_is_gsharedvt_variable_type (ins->inst_vtype)) { if (i >= cfg->locals_start) { /* Local */ idx = get_gsharedvt_info_slot (cfg, ins->inst_vtype, MONO_RGCTX_INFO_LOCAL_OFFSET); cfg->gsharedvt_vreg_to_idx [ins->dreg] = idx + 1; ins->opcode = OP_GSHAREDVT_LOCAL; ins->inst_imm = idx; } else { /* Arg */ cfg->gsharedvt_vreg_to_idx [ins->dreg] = -1; ins->opcode = OP_GSHAREDVT_ARG_REGOFFSET; } } } } /** * mono_spill_global_vars: * * Generate spill code for variables which are not allocated to registers, * and replace vregs with their allocated hregs. *need_local_opts is set to TRUE if * code is generated which could be optimized by the local optimization passes. 
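* It also records instruction-precise live ranges for the variables, used for debug info.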
*/ void mono_spill_global_vars (MonoCompile *cfg, gboolean *need_local_opts) { MonoBasicBlock *bb; char spec2 [16]; int orig_next_vreg; guint32 *vreg_to_lvreg; guint32 *lvregs; guint32 i, lvregs_len, lvregs_size; gboolean dest_has_lvreg = FALSE; MonoStackType stacktypes [128]; MonoInst **live_range_start, **live_range_end; MonoBasicBlock **live_range_start_bb, **live_range_end_bb; *need_local_opts = FALSE; memset (spec2, 0, sizeof (spec2)); /* FIXME: Move this function to mini.c */ stacktypes [(int)'i'] = STACK_PTR; stacktypes [(int)'l'] = STACK_I8; stacktypes [(int)'f'] = STACK_R8; #ifdef MONO_ARCH_SIMD_INTRINSICS stacktypes [(int)'x'] = STACK_VTYPE; #endif #if SIZEOF_REGISTER == 4 /* Create MonoInsts for longs */ for (i = 0; i < cfg->num_varinfo; i++) { MonoInst *ins = cfg->varinfo [i]; if ((ins->opcode != OP_REGVAR) && !(ins->flags & MONO_INST_IS_DEAD)) { switch (ins->type) { case STACK_R8: case STACK_I8: { MonoInst *tree; if (ins->type == STACK_R8 && !COMPILE_SOFT_FLOAT (cfg)) break; g_assert (ins->opcode == OP_REGOFFSET); tree = get_vreg_to_inst (cfg, MONO_LVREG_LS (ins->dreg)); g_assert (tree); tree->opcode = OP_REGOFFSET; tree->inst_basereg = ins->inst_basereg; tree->inst_offset = ins->inst_offset + MINI_LS_WORD_OFFSET; tree = get_vreg_to_inst (cfg, MONO_LVREG_MS (ins->dreg)); g_assert (tree); tree->opcode = OP_REGOFFSET; tree->inst_basereg = ins->inst_basereg; tree->inst_offset = ins->inst_offset + MINI_MS_WORD_OFFSET; break; } default: break; } } } #endif if (cfg->compute_gc_maps) { /* registers need liveness info even for !non refs */ for (i = 0; i < cfg->num_varinfo; i++) { MonoInst *ins = cfg->varinfo [i]; if (ins->opcode == OP_REGVAR) ins->flags |= MONO_INST_GC_TRACK; } } /* FIXME: widening and truncation */ /* * As an optimization, when a variable allocated to the stack is first loaded into * an lvreg, we will remember the lvreg and use it the next time instead of loading * the variable again. */ orig_next_vreg = cfg->next_vreg; vreg_to_lvreg = (guint32 *)mono_mempool_alloc0 (cfg->mempool, sizeof (guint32) * cfg->next_vreg); lvregs_size = 1024; lvregs = (guint32 *)mono_mempool_alloc (cfg->mempool, sizeof (guint32) * lvregs_size); lvregs_len = 0; /* * These arrays contain the first and last instructions accessing a given * variable. * Since we emit bblocks in the same order we process them here, and we * don't split live ranges, these will precisely describe the live range of * the variable, i.e. the instruction range where a valid value can be found * in the variables location. * The live range is computed using the liveness info computed by the liveness pass. * We can't use vmv->range, since that is an abstract live range, and we need * one which is instruction precise. * FIXME: Variables used in out-of-line bblocks have a hole in their live range. 
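	 * (Clarification: "instruction precise" means the endpoints are the actual
	 * defining and last-using MonoInst pointers recorded while rewriting, not
	 * the abstract position numbers the liveness pass works with.)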
*/ /* FIXME: Only do this if debugging info is requested */ live_range_start = g_new0 (MonoInst*, cfg->next_vreg); live_range_end = g_new0 (MonoInst*, cfg->next_vreg); live_range_start_bb = g_new (MonoBasicBlock*, cfg->next_vreg); live_range_end_bb = g_new (MonoBasicBlock*, cfg->next_vreg); /* Add spill loads/stores */ for (bb = cfg->bb_entry; bb; bb = bb->next_bb) { MonoInst *ins; if (cfg->verbose_level > 2) printf ("\nSPILL BLOCK %d:\n", bb->block_num); /* Clear vreg_to_lvreg array */ for (i = 0; i < lvregs_len; i++) vreg_to_lvreg [lvregs [i]] = 0; lvregs_len = 0; cfg->cbb = bb; MONO_BB_FOR_EACH_INS (bb, ins) { const char *spec = INS_INFO (ins->opcode); int regtype, srcindex, sreg, tmp_reg, prev_dreg, num_sregs; gboolean store, no_lvreg; int sregs [MONO_MAX_SRC_REGS]; if (G_UNLIKELY (cfg->verbose_level > 2)) mono_print_ins (ins); if (ins->opcode == OP_NOP) continue; /* * We handle LDADDR here as well, since it can only be decomposed * when variable addresses are known. */ if (ins->opcode == OP_LDADDR) { MonoInst *var = (MonoInst *)ins->inst_p0; if (var->opcode == OP_VTARG_ADDR) { /* Happens on SPARC/S390 where vtypes are passed by reference */ MonoInst *vtaddr = var->inst_left; if (vtaddr->opcode == OP_REGVAR) { ins->opcode = OP_MOVE; ins->sreg1 = vtaddr->dreg; } else if (var->inst_left->opcode == OP_REGOFFSET) { ins->opcode = OP_LOAD_MEMBASE; ins->inst_basereg = vtaddr->inst_basereg; ins->inst_offset = vtaddr->inst_offset; } else NOT_IMPLEMENTED; } else if (cfg->gsharedvt && cfg->gsharedvt_vreg_to_idx [var->dreg] < 0) { /* gsharedvt arg passed by ref */ g_assert (var->opcode == OP_GSHAREDVT_ARG_REGOFFSET); ins->opcode = OP_LOAD_MEMBASE; ins->inst_basereg = var->inst_basereg; ins->inst_offset = var->inst_offset; } else if (cfg->gsharedvt && cfg->gsharedvt_vreg_to_idx [var->dreg]) { MonoInst *load, *load2, *load3; int idx = cfg->gsharedvt_vreg_to_idx [var->dreg] - 1; int reg1, reg2, reg3; MonoInst *info_var = cfg->gsharedvt_info_var; MonoInst *locals_var = cfg->gsharedvt_locals_var; /* * gsharedvt local. * Compute the address of the local as gsharedvt_locals_var + gsharedvt_info_var->locals_offsets [idx]. 
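				 * Roughly, with illustrative vreg names, the emitted sequence is:
				 *   reg1 <- info_var                                                  (runtime info pointer)
				 *   reg2 <- [reg1 + offsetof (..., entries) + idx * sizeof (void*)]   (offset of this local)
				 *   reg3 <- locals_var                                                (base of the locals area)
				 *   dreg <- reg3 + reg2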
*/ g_assert (var->opcode == OP_GSHAREDVT_LOCAL); g_assert (info_var); g_assert (locals_var); /* Mark the instruction used to compute the locals var as used */ cfg->gsharedvt_locals_var_ins = NULL; /* Load the offset */ if (info_var->opcode == OP_REGOFFSET) { reg1 = alloc_ireg (cfg); NEW_LOAD_MEMBASE (cfg, load, OP_LOAD_MEMBASE, reg1, info_var->inst_basereg, info_var->inst_offset); } else if (info_var->opcode == OP_REGVAR) { load = NULL; reg1 = info_var->dreg; } else { g_assert_not_reached (); } reg2 = alloc_ireg (cfg); NEW_LOAD_MEMBASE (cfg, load2, OP_LOADI4_MEMBASE, reg2, reg1, MONO_STRUCT_OFFSET (MonoGSharedVtMethodRuntimeInfo, entries) + (idx * TARGET_SIZEOF_VOID_P)); /* Load the locals area address */ reg3 = alloc_ireg (cfg); if (locals_var->opcode == OP_REGOFFSET) { NEW_LOAD_MEMBASE (cfg, load3, OP_LOAD_MEMBASE, reg3, locals_var->inst_basereg, locals_var->inst_offset); } else if (locals_var->opcode == OP_REGVAR) { NEW_UNALU (cfg, load3, OP_MOVE, reg3, locals_var->dreg); } else { g_assert_not_reached (); } /* Compute the address */ ins->opcode = OP_PADD; ins->sreg1 = reg3; ins->sreg2 = reg2; mono_bblock_insert_before_ins (bb, ins, load3); mono_bblock_insert_before_ins (bb, load3, load2); if (load) mono_bblock_insert_before_ins (bb, load2, load); } else { g_assert (var->opcode == OP_REGOFFSET); ins->opcode = OP_ADD_IMM; ins->sreg1 = var->inst_basereg; ins->inst_imm = var->inst_offset; } *need_local_opts = TRUE; spec = INS_INFO (ins->opcode); } if (ins->opcode < MONO_CEE_LAST) { mono_print_ins (ins); g_assert_not_reached (); } /* * Store opcodes have destbasereg in the dreg, but in reality, it is an * src register. * FIXME: */ if (MONO_IS_STORE_MEMBASE (ins)) { tmp_reg = ins->dreg; ins->dreg = ins->sreg2; ins->sreg2 = tmp_reg; store = TRUE; spec2 [MONO_INST_DEST] = ' '; spec2 [MONO_INST_SRC1] = spec [MONO_INST_SRC1]; spec2 [MONO_INST_SRC2] = spec [MONO_INST_DEST]; spec2 [MONO_INST_SRC3] = ' '; spec = spec2; } else if (MONO_IS_STORE_MEMINDEX (ins)) g_assert_not_reached (); else store = FALSE; no_lvreg = FALSE; if (G_UNLIKELY (cfg->verbose_level > 2)) { printf ("\t %.3s %d", spec, ins->dreg); num_sregs = mono_inst_get_src_registers (ins, sregs); for (srcindex = 0; srcindex < num_sregs; ++srcindex) printf (" %d", sregs [srcindex]); printf ("\n"); } /***************/ /* DREG */ /***************/ regtype = spec [MONO_INST_DEST]; g_assert (((ins->dreg == -1) && (regtype == ' ')) || ((ins->dreg != -1) && (regtype != ' '))); prev_dreg = -1; int dreg_using_dest_to_membase_op = -1; if ((ins->dreg != -1) && get_vreg_to_inst (cfg, ins->dreg)) { MonoInst *var = get_vreg_to_inst (cfg, ins->dreg); MonoInst *store_ins; int store_opcode; MonoInst *def_ins = ins; int dreg = ins->dreg; /* The original vreg */ store_opcode = mono_type_to_store_membase (cfg, var->inst_vtype); if (var->opcode == OP_REGVAR) { ins->dreg = var->dreg; } else if ((ins->dreg == ins->sreg1) && (spec [MONO_INST_DEST] == 'i') && (spec [MONO_INST_SRC1] == 'i') && !vreg_to_lvreg [ins->dreg] && (op_to_op_dest_membase (store_opcode, ins->opcode) != -1)) { /* * Instead of emitting a load+store, use a _membase opcode. 
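				 * (Illustrative example: "int_add R10 <- R10 R11" with R10 kept at
				 * [basereg + offset] can become a single "add_membase [basereg + offset], R11"
				 * on targets whose op_to_op_dest_membase () supports it, avoiding the
				 * load/add/store triple.)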
*/ g_assert (var->opcode == OP_REGOFFSET); if (ins->opcode == OP_MOVE) { NULLIFY_INS (ins); def_ins = NULL; } else { dreg_using_dest_to_membase_op = ins->dreg; ins->opcode = op_to_op_dest_membase (store_opcode, ins->opcode); ins->inst_basereg = var->inst_basereg; ins->inst_offset = var->inst_offset; ins->dreg = -1; } spec = INS_INFO (ins->opcode); } else { guint32 lvreg; g_assert (var->opcode == OP_REGOFFSET); prev_dreg = ins->dreg; /* Invalidate any previous lvreg for this vreg */ vreg_to_lvreg [ins->dreg] = 0; lvreg = 0; if (COMPILE_SOFT_FLOAT (cfg) && store_opcode == OP_STORER8_MEMBASE_REG) { regtype = 'l'; store_opcode = OP_STOREI8_MEMBASE_REG; } ins->dreg = alloc_dreg (cfg, stacktypes [regtype]); #if SIZEOF_REGISTER != 8 if (regtype == 'l') { NEW_STORE_MEMBASE (cfg, store_ins, OP_STOREI4_MEMBASE_REG, var->inst_basereg, var->inst_offset + MINI_LS_WORD_OFFSET, MONO_LVREG_LS (ins->dreg)); mono_bblock_insert_after_ins (bb, ins, store_ins); NEW_STORE_MEMBASE (cfg, store_ins, OP_STOREI4_MEMBASE_REG, var->inst_basereg, var->inst_offset + MINI_MS_WORD_OFFSET, MONO_LVREG_MS (ins->dreg)); mono_bblock_insert_after_ins (bb, ins, store_ins); def_ins = store_ins; } else #endif { g_assert (store_opcode != OP_STOREV_MEMBASE); /* Try to fuse the store into the instruction itself */ /* FIXME: Add more instructions */ if (!lvreg && ((ins->opcode == OP_ICONST) || ((ins->opcode == OP_I8CONST) && (ins->inst_c0 == 0)))) { ins->opcode = store_membase_reg_to_store_membase_imm (store_opcode); ins->inst_imm = ins->inst_c0; ins->inst_destbasereg = var->inst_basereg; ins->inst_offset = var->inst_offset; spec = INS_INFO (ins->opcode); } else if (!lvreg && ((ins->opcode == OP_MOVE) || (ins->opcode == OP_FMOVE) || (ins->opcode == OP_LMOVE) || (ins->opcode == OP_RMOVE))) { ins->opcode = store_opcode; ins->inst_destbasereg = var->inst_basereg; ins->inst_offset = var->inst_offset; no_lvreg = TRUE; tmp_reg = ins->dreg; ins->dreg = ins->sreg2; ins->sreg2 = tmp_reg; store = TRUE; spec2 [MONO_INST_DEST] = ' '; spec2 [MONO_INST_SRC1] = spec [MONO_INST_SRC1]; spec2 [MONO_INST_SRC2] = spec [MONO_INST_DEST]; spec2 [MONO_INST_SRC3] = ' '; spec = spec2; } else if (!lvreg && (op_to_op_store_membase (store_opcode, ins->opcode) != -1)) { // FIXME: The backends expect the base reg to be in inst_basereg ins->opcode = op_to_op_store_membase (store_opcode, ins->opcode); ins->dreg = -1; ins->inst_basereg = var->inst_basereg; ins->inst_offset = var->inst_offset; spec = INS_INFO (ins->opcode); } else { /* printf ("INS: "); mono_print_ins (ins); */ /* Create a store instruction */ NEW_STORE_MEMBASE (cfg, store_ins, store_opcode, var->inst_basereg, var->inst_offset, ins->dreg); /* Insert it after the instruction */ mono_bblock_insert_after_ins (bb, ins, store_ins); def_ins = store_ins; /* * We can't assign ins->dreg to var->dreg here, since the * sregs could use it. So set a flag, and do it after * the sregs. 
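						 * (Clarification: vreg_to_lvreg [] must not be updated before the
						 * source registers below are processed, otherwise a source equal to
						 * the old dreg would be rewritten to read the value this instruction
						 * is about to store instead of the value it was meant to consume;
						 * dest_has_lvreg defers the update until after the sreg loop.)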
*/ if ((!cfg->backend->use_fpstack || ((store_opcode != OP_STORER8_MEMBASE_REG) && (store_opcode != OP_STORER4_MEMBASE_REG))) && !((var)->flags & (MONO_INST_VOLATILE|MONO_INST_INDIRECT))) dest_has_lvreg = TRUE; } } } if (def_ins && !live_range_start [dreg]) { live_range_start [dreg] = def_ins; live_range_start_bb [dreg] = bb; } if (cfg->compute_gc_maps && def_ins && (var->flags & MONO_INST_GC_TRACK)) { MonoInst *tmp; MONO_INST_NEW (cfg, tmp, OP_GC_LIVENESS_DEF); tmp->inst_c1 = dreg; mono_bblock_insert_after_ins (bb, def_ins, tmp); } } /************/ /* SREGS */ /************/ num_sregs = mono_inst_get_src_registers (ins, sregs); for (srcindex = 0; srcindex < 3; ++srcindex) { regtype = spec [MONO_INST_SRC1 + srcindex]; sreg = sregs [srcindex]; g_assert (((sreg == -1) && (regtype == ' ')) || ((sreg != -1) && (regtype != ' '))); if ((sreg != -1) && get_vreg_to_inst (cfg, sreg)) { MonoInst *var = get_vreg_to_inst (cfg, sreg); MonoInst *use_ins = ins; MonoInst *load_ins; guint32 load_opcode; if (var->opcode == OP_REGVAR) { sregs [srcindex] = var->dreg; //mono_inst_set_src_registers (ins, sregs); live_range_end [sreg] = use_ins; live_range_end_bb [sreg] = bb; if (cfg->compute_gc_maps && var->dreg < orig_next_vreg && (var->flags & MONO_INST_GC_TRACK)) { MonoInst *tmp; MONO_INST_NEW (cfg, tmp, OP_GC_LIVENESS_USE); /* var->dreg is a hreg */ tmp->inst_c1 = sreg; mono_bblock_insert_after_ins (bb, ins, tmp); } continue; } g_assert (var->opcode == OP_REGOFFSET); load_opcode = mono_type_to_load_membase (cfg, var->inst_vtype); g_assert (load_opcode != OP_LOADV_MEMBASE); if (vreg_to_lvreg [sreg]) { g_assert (vreg_to_lvreg [sreg] != -1); /* The variable is already loaded to an lvreg */ if (G_UNLIKELY (cfg->verbose_level > 2)) printf ("\t\tUse lvreg R%d for R%d.\n", vreg_to_lvreg [sreg], sreg); sregs [srcindex] = vreg_to_lvreg [sreg]; //mono_inst_set_src_registers (ins, sregs); continue; } /* Try to fuse the load into the instruction */ if ((srcindex == 0) && (op_to_op_src1_membase (cfg, load_opcode, ins->opcode) != -1)) { ins->opcode = op_to_op_src1_membase (cfg, load_opcode, ins->opcode); sregs [0] = var->inst_basereg; //mono_inst_set_src_registers (ins, sregs); ins->inst_offset = var->inst_offset; } else if ((srcindex == 1) && (op_to_op_src2_membase (cfg, load_opcode, ins->opcode) != -1)) { ins->opcode = op_to_op_src2_membase (cfg, load_opcode, ins->opcode); sregs [1] = var->inst_basereg; //mono_inst_set_src_registers (ins, sregs); ins->inst_offset = var->inst_offset; } else { if (MONO_IS_REAL_MOVE (ins)) { ins->opcode = OP_NOP; sreg = ins->dreg; } else { //printf ("%d ", srcindex); mono_print_ins (ins); sreg = alloc_dreg (cfg, stacktypes [regtype]); if ((!cfg->backend->use_fpstack || ((load_opcode != OP_LOADR8_MEMBASE) && (load_opcode != OP_LOADR4_MEMBASE))) && !((var)->flags & (MONO_INST_VOLATILE|MONO_INST_INDIRECT)) && !no_lvreg) { if (var->dreg == prev_dreg) { /* * sreg refers to the value loaded by the load * emitted below, but we need to use ins->dreg * since it refers to the store emitted earlier. 
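							 * (In other words, when this instruction both defined the variable
							 * and reads it again, the value is already available in ins->dreg,
							 * so it is reused instead of emitting a reload from the stack slot.)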
*/ sreg = ins->dreg; } g_assert (sreg != -1); if (var->dreg == dreg_using_dest_to_membase_op) { if (cfg->verbose_level > 2) printf ("\tCan't cache R%d because it's part of a dreg dest_membase optimization\n", var->dreg); } else { vreg_to_lvreg [var->dreg] = sreg; } if (lvregs_len >= lvregs_size) { guint32 *new_lvregs = mono_mempool_alloc0 (cfg->mempool, sizeof (guint32) * lvregs_size * 2); memcpy (new_lvregs, lvregs, sizeof (guint32) * lvregs_size); lvregs = new_lvregs; lvregs_size *= 2; } lvregs [lvregs_len ++] = var->dreg; } } sregs [srcindex] = sreg; //mono_inst_set_src_registers (ins, sregs); #if SIZEOF_REGISTER != 8 if (regtype == 'l') { NEW_LOAD_MEMBASE (cfg, load_ins, OP_LOADI4_MEMBASE, MONO_LVREG_MS (sreg), var->inst_basereg, var->inst_offset + MINI_MS_WORD_OFFSET); mono_bblock_insert_before_ins (bb, ins, load_ins); NEW_LOAD_MEMBASE (cfg, load_ins, OP_LOADI4_MEMBASE, MONO_LVREG_LS (sreg), var->inst_basereg, var->inst_offset + MINI_LS_WORD_OFFSET); mono_bblock_insert_before_ins (bb, ins, load_ins); use_ins = load_ins; } else #endif { #if SIZEOF_REGISTER == 4 g_assert (load_opcode != OP_LOADI8_MEMBASE); #endif NEW_LOAD_MEMBASE (cfg, load_ins, load_opcode, sreg, var->inst_basereg, var->inst_offset); mono_bblock_insert_before_ins (bb, ins, load_ins); use_ins = load_ins; } if (cfg->verbose_level > 2) mono_print_ins_index (0, use_ins); } if (var->dreg < orig_next_vreg) { live_range_end [var->dreg] = use_ins; live_range_end_bb [var->dreg] = bb; } if (cfg->compute_gc_maps && var->dreg < orig_next_vreg && (var->flags & MONO_INST_GC_TRACK)) { MonoInst *tmp; MONO_INST_NEW (cfg, tmp, OP_GC_LIVENESS_USE); tmp->inst_c1 = var->dreg; mono_bblock_insert_after_ins (bb, ins, tmp); } } } mono_inst_set_src_registers (ins, sregs); if (dest_has_lvreg) { g_assert (ins->dreg != -1); vreg_to_lvreg [prev_dreg] = ins->dreg; if (lvregs_len >= lvregs_size) { guint32 *new_lvregs = mono_mempool_alloc0 (cfg->mempool, sizeof (guint32) * lvregs_size * 2); memcpy (new_lvregs, lvregs, sizeof (guint32) * lvregs_size); lvregs = new_lvregs; lvregs_size *= 2; } lvregs [lvregs_len ++] = prev_dreg; dest_has_lvreg = FALSE; } if (store) { tmp_reg = ins->dreg; ins->dreg = ins->sreg2; ins->sreg2 = tmp_reg; } if (MONO_IS_CALL (ins)) { /* Clear vreg_to_lvreg array */ for (i = 0; i < lvregs_len; i++) vreg_to_lvreg [lvregs [i]] = 0; lvregs_len = 0; } else if (ins->opcode == OP_NOP) { ins->dreg = -1; MONO_INST_NULLIFY_SREGS (ins); } if (cfg->verbose_level > 2) mono_print_ins_index (1, ins); } /* Extend the live range based on the liveness info */ if (cfg->compute_precise_live_ranges && bb->live_out_set && bb->code) { for (i = 0; i < cfg->num_varinfo; i ++) { MonoMethodVar *vi = MONO_VARINFO (cfg, i); if (vreg_is_volatile (cfg, vi->vreg)) /* The liveness info is incomplete */ continue; if (mono_bitset_test_fast (bb->live_in_set, i) && !live_range_start [vi->vreg]) { /* Live from at least the first ins of this bb */ live_range_start [vi->vreg] = bb->code; live_range_start_bb [vi->vreg] = bb; } if (mono_bitset_test_fast (bb->live_out_set, i)) { /* Live at least until the last ins of this bb */ live_range_end [vi->vreg] = bb->last_ins; live_range_end_bb [vi->vreg] = bb; } } } } /* * Emit LIVERANGE_START/LIVERANGE_END opcodes, the backend will implement them * by storing the current native offset into MonoMethodVar->live_range_start/end. 
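	 * (Note: the pseudo opcodes carry the variable index in inst_c0 and the vreg
	 * in inst_c1; they only drive native-offset bookkeeping in the backend and
	 * emit no machine code themselves.)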
*/ if (cfg->compute_precise_live_ranges && cfg->comp_done & MONO_COMP_LIVENESS) { for (i = 0; i < cfg->num_varinfo; ++i) { int vreg = MONO_VARINFO (cfg, i)->vreg; MonoInst *ins; if (live_range_start [vreg]) { MONO_INST_NEW (cfg, ins, OP_LIVERANGE_START); ins->inst_c0 = i; ins->inst_c1 = vreg; mono_bblock_insert_after_ins (live_range_start_bb [vreg], live_range_start [vreg], ins); } if (live_range_end [vreg]) { MONO_INST_NEW (cfg, ins, OP_LIVERANGE_END); ins->inst_c0 = i; ins->inst_c1 = vreg; if (live_range_end [vreg] == live_range_end_bb [vreg]->last_ins) mono_add_ins_to_end (live_range_end_bb [vreg], ins); else mono_bblock_insert_after_ins (live_range_end_bb [vreg], live_range_end [vreg], ins); } } } if (cfg->gsharedvt_locals_var_ins) { /* Nullify if unused */ cfg->gsharedvt_locals_var_ins->opcode = OP_PCONST; cfg->gsharedvt_locals_var_ins->inst_imm = 0; } g_free (live_range_start); g_free (live_range_end); g_free (live_range_start_bb); g_free (live_range_end_bb); } /** * FIXME: * - use 'iadd' instead of 'int_add' * - handling ovf opcodes: decompose in method_to_ir. * - unify iregs/fregs * -> partly done, the missing parts are: * - a more complete unification would involve unifying the hregs as well, so * code wouldn't need if (fp) all over the place. but that would mean the hregs * would no longer map to the machine hregs, so the code generators would need to * be modified. Also, on ia64 for example, niregs + nfregs > 256 -> bitmasks * wouldn't work any more. Duplicating the code in mono_local_regalloc () into * fp/non-fp branches speeds it up by about 15%. * - use sext/zext opcodes instead of shifts * - add OP_ICALL * - get rid of TEMPLOADs if possible and use vregs instead * - clean up usage of OP_P/OP_ opcodes * - cleanup usage of DUMMY_USE * - cleanup the setting of ins->type for MonoInst's which are pushed on the * stack * - set the stack type and allocate a dreg in the EMIT_NEW macros * - get rid of all the <foo>2 stuff when the new JIT is ready. * - make sure handle_stack_args () is called before the branch is emitted * - when the new IR is done, get rid of all unused stuff * - COMPARE/BEQ as separate instructions or unify them ? * - keeping them separate allows specialized compare instructions like * compare_imm, compare_membase * - most back ends unify fp compare+branch, fp compare+ceq * - integrate mono_save_args into inline_method * - get rid of the empty bblocks created by MONO_EMIT_NEW_BRACH_BLOCK2 * - handle long shift opts on 32 bit platforms somehow: they require * 3 sregs (2 for arg1 and 1 for arg2) * - make byref a 'normal' type. * - use vregs for bb->out_stacks if possible, handle_global_vreg will make them a * variable if needed. * - do not start a new IL level bblock when cfg->cbb is changed by a function call * like inline_method. * - remove inlining restrictions * - fix LNEG and enable cfold of INEG * - generalize x86 optimizations like ldelema as a peephole optimization * - add store_mem_imm for amd64 * - optimize the loading of the interruption flag in the managed->native wrappers * - avoid special handling of OP_NOP in passes * - move code inserting instructions into one function/macro. * - try a coalescing phase after liveness analysis * - add float -> vreg conversion + local optimizations on !x86 * - figure out how to handle decomposed branches during optimizations, ie. * compare+branch, op_jump_table+op_br etc. 
 * - promote RuntimeXHandles to vregs
 * - vtype cleanups:
 *   - add a NEW_VARLOADA_VREG macro
 * - the vtype optimizations are blocked by the LDADDR opcodes generated for
 *   accessing vtype fields.
 * - get rid of I8CONST on 64 bit platforms
 * - dealing with the increase in code size due to branches created during opcode
 *   decomposition:
 *   - use extended basic blocks
 *     - all parts of the JIT
 *     - handle_global_vregs () && local regalloc
 *   - avoid introducing global vregs during decomposition, like 'vtable' in isinst
 *   - sources of increase in code size:
 *     - vtypes
 *     - long compares
 *     - isinst and castclass
 *     - lvregs not allocated to global registers even if used multiple times
 * - call cctors outside the JIT, to make -v output more readable and JIT timings more
 *   meaningful.
 * - check for fp stack leakage in other opcodes too. (-> 'exceptions' optimization)
 * - add all micro optimizations from the old JIT
 * - put tree optimizations into the deadce pass
 * - decompose op_start_handler/op_endfilter/op_endfinally earlier using an arch
 *   specific function.
 * - unify the float comparison opcodes with the other comparison opcodes, i.e.
 *   fcompare + branchCC.
 * - create a helper function for allocating a stack slot, taking into account
 *   MONO_CFG_HAS_SPILLUP.
 * - merge r68207.
 * - optimize mono_regstate2_alloc_int/float.
 * - fix the pessimistic handling of variables accessed in exception handler blocks.
 * - need to write a tree optimization pass, but the creation of trees is difficult, i.e.
 *   parts of the tree could be separated by other instructions, killing the tree
 *   arguments, or stores killing loads etc. Also, should we fold loads into other
 *   instructions if the result of the load is used multiple times ?
 * - make the REM_IMM optimization in mini-x86.c arch-independent.
 * - LAST MERGE: 108395.
 * - when returning vtypes in registers, generate IR and append it to the end of the
 *   last bb instead of doing it in the epilog.
 * - change the store opcodes so they use sreg1 instead of dreg to store the base register.
 */

/*

NOTES
-----

- When to decompose opcodes:
  - earlier: this makes some optimizations hard to implement, since the low level IR
    no longer contains the necessary information. But it is easier to do.
  - later: harder to implement, enables more optimizations.
- Branches inside bblocks:
  - created when decomposing complex opcodes.
    - branches to another bblock: harmless, but not tracked by the branch
      optimizations, so need to branch to a label at the start of the bblock.
    - branches to inside the same bblock: very problematic, trips up the local
      reg allocator. Can be fixed by splitting the current bblock, but that is a
      complex operation, since some local vregs can become global vregs etc.
- Local/global vregs:
  - local vregs: temporary vregs used inside one bblock. Assigned to hregs by the
    local register allocator.
  - global vregs: used in more than one bblock. Have an associated MonoMethodVar
    structure, created by mono_create_var (). Assigned to hregs or the stack by
    the global register allocator.
- When to do optimizations like alu->alu_imm:
  - earlier -> saves work later on since the IR will be smaller/simpler
  - later -> can work on more instructions
- Handling of valuetypes:
  - When a vtype is pushed on the stack, a new temporary is created, an
    instruction computing its address (LDADDR) is emitted and pushed on the stack.
    Need to optimize cases when the vtype is used immediately as in argument
    passing, stloc etc.
- Instead of the to_end stuff in the old JIT, simply call the function handling the values on the stack before emitting the last instruction of the bb. */ #else /* !DISABLE_JIT */ MONO_EMPTY_SOURCE_FILE (method_to_ir); #endif /* !DISABLE_JIT */
/**
 * \file
 * Convert CIL to the JIT internal representation
 *
 * Author:
 *   Paolo Molaro (lupus@ximian.com)
 *   Dietmar Maurer (dietmar@ximian.com)
 *
 * (C) 2002 Ximian, Inc.
 * Copyright 2003-2010 Novell, Inc (http://www.novell.com)
 * Copyright 2011 Xamarin, Inc (http://www.xamarin.com)
 * Licensed under the MIT license. See LICENSE file in the project root for full license information.
 */

#include <config.h>
#include <glib.h>
#include <mono/utils/mono-compiler.h>
#include "mini.h"

#ifndef DISABLE_JIT

#include <signal.h>

#ifdef HAVE_UNISTD_H
#include <unistd.h>
#endif

#include <math.h>
#include <string.h>
#include <ctype.h>

#ifdef HAVE_SYS_TIME_H
#include <sys/time.h>
#endif

#ifdef HAVE_ALLOCA_H
#include <alloca.h>
#endif

#include <mono/utils/memcheck.h>
#include <mono/metadata/abi-details.h>
#include <mono/metadata/assembly.h>
#include <mono/metadata/assembly-internals.h>
#include <mono/metadata/attrdefs.h>
#include <mono/metadata/loader.h>
#include <mono/metadata/tabledefs.h>
#include <mono/metadata/class.h>
#include <mono/metadata/class-abi-details.h>
#include <mono/metadata/object.h>
#include <mono/metadata/exception.h>
#include <mono/metadata/exception-internals.h>
#include <mono/metadata/opcodes.h>
#include <mono/metadata/mono-endian.h>
#include <mono/metadata/tokentype.h>
#include <mono/metadata/marshal.h>
#include <mono/metadata/debug-helpers.h>
#include <mono/metadata/debug-internals.h>
#include <mono/metadata/gc-internals.h>
#include <mono/metadata/threads-types.h>
#include <mono/metadata/profiler-private.h>
#include <mono/metadata/profiler.h>
#include <mono/metadata/monitor.h>
#include <mono/utils/mono-memory-model.h>
#include <mono/utils/mono-error-internals.h>
#include <mono/metadata/mono-basic-block.h>
#include <mono/metadata/reflection-internals.h>
#include <mono/utils/mono-threads-coop.h>
#include <mono/utils/mono-utils-debug.h>
#include <mono/utils/mono-logger-internals.h>
#include <mono/metadata/verify-internals.h>
#include <mono/metadata/icall-decl.h>
#include "mono/metadata/icall-signatures.h"

#include "trace.h"
#include "ir-emit.h"
#include "jit-icalls.h"
#include <mono/jit/jit.h>
#include "seq-points.h"
#include "aot-compiler.h"
#include "mini-llvm.h"
#include "mini-runtime.h"
#include "llvmonly-runtime.h"
#include "mono/utils/mono-tls-inline.h"

#define BRANCH_COST 10
#define CALL_COST 10
/* Used for the JIT */
#define INLINE_LENGTH_LIMIT 20
/*
 * The aot and jit inline limits should be different,
 * since aot sees the whole program so we can let opt inline methods for us,
 * while the jit only sees one method, so we have to inline things ourselves.
 */
/* Used by LLVM AOT */
#define LLVM_AOT_INLINE_LENGTH_LIMIT 30
/* Used by LLVM JIT */
#define LLVM_JIT_INLINE_LENGTH_LIMIT 100

static const gboolean debug_tailcall = FALSE;         // logging
static const gboolean debug_tailcall_try_all = FALSE; // consider any call followed by ret

gboolean
mono_tailcall_print_enabled (void)
{
	return debug_tailcall || MONO_TRACE_IS_TRACED (G_LOG_LEVEL_DEBUG, MONO_TRACE_TAILCALL);
}

void
mono_tailcall_print (const char *format, ...)
{ if (!mono_tailcall_print_enabled ()) return; va_list args; va_start (args, format); g_printv (format, args); va_end (args); } /* These have 'cfg' as an implicit argument */ #define INLINE_FAILURE(msg) do { \ if ((cfg->method != cfg->current_method) && (cfg->current_method->wrapper_type == MONO_WRAPPER_NONE)) { \ inline_failure (cfg, msg); \ goto exception_exit; \ } \ } while (0) #define CHECK_CFG_EXCEPTION do {\ if (cfg->exception_type != MONO_EXCEPTION_NONE) \ goto exception_exit; \ } while (0) #define FIELD_ACCESS_FAILURE(method, field) do { \ field_access_failure ((cfg), (method), (field)); \ goto exception_exit; \ } while (0) #define GENERIC_SHARING_FAILURE(opcode) do { \ if (cfg->gshared) { \ gshared_failure (cfg, opcode, __FILE__, __LINE__); \ goto exception_exit; \ } \ } while (0) #define GSHAREDVT_FAILURE(opcode) do { \ if (cfg->gsharedvt) { \ gsharedvt_failure (cfg, opcode, __FILE__, __LINE__); \ goto exception_exit; \ } \ } while (0) #define OUT_OF_MEMORY_FAILURE do { \ mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR); \ mono_error_set_out_of_memory (cfg->error, ""); \ goto exception_exit; \ } while (0) #define DISABLE_AOT(cfg) do { \ if ((cfg)->verbose_level >= 2) \ printf ("AOT disabled: %s:%d\n", __FILE__, __LINE__); \ (cfg)->disable_aot = TRUE; \ } while (0) #define LOAD_ERROR do { \ break_on_unverified (); \ mono_cfg_set_exception (cfg, MONO_EXCEPTION_TYPE_LOAD); \ goto exception_exit; \ } while (0) #define TYPE_LOAD_ERROR(klass) do { \ cfg->exception_ptr = klass; \ LOAD_ERROR; \ } while (0) #define CHECK_CFG_ERROR do {\ if (!is_ok (cfg->error)) { \ mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR); \ goto mono_error_exit; \ } \ } while (0) int mono_op_to_op_imm (int opcode); int mono_op_to_op_imm_noemul (int opcode); static int inline_method (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **sp, guchar *ip, guint real_offset, gboolean inline_always, gboolean *is_empty); static MonoInst* convert_value (MonoCompile *cfg, MonoType *type, MonoInst *ins); /* helper methods signatures */ /* type loading helpers */ static GENERATE_GET_CLASS_WITH_CACHE (iequatable, "System", "IEquatable`1") static GENERATE_GET_CLASS_WITH_CACHE (geqcomparer, "System.Collections.Generic", "GenericEqualityComparer`1"); /* * Instruction metadata */ #ifdef MINI_OP #undef MINI_OP #endif #ifdef MINI_OP3 #undef MINI_OP3 #endif #define MINI_OP(a,b,dest,src1,src2) dest, src1, src2, ' ', #define MINI_OP3(a,b,dest,src1,src2,src3) dest, src1, src2, src3, #define NONE ' ' #define IREG 'i' #define FREG 'f' #define VREG 'v' #define XREG 'x' #if SIZEOF_REGISTER == 8 && SIZEOF_REGISTER == TARGET_SIZEOF_VOID_P #define LREG IREG #else #define LREG 'l' #endif /* keep in sync with the enum in mini.h */ const char mini_ins_info[] = { #include "mini-ops.h" }; #undef MINI_OP #undef MINI_OP3 #define MINI_OP(a,b,dest,src1,src2) ((src2) != NONE ? 2 : ((src1) != NONE ? 1 : 0)), #define MINI_OP3(a,b,dest,src1,src2,src3) ((src3) != NONE ? 3 : ((src2) != NONE ? 2 : ((src1) != NONE ? 1 : 0))), /* * This should contain the index of the last sreg + 1. This is not the same * as the number of sregs for opcodes like IA64_CMP_EQ_IMM. 
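 * (Illustrative example: an opcode whose only source is src2 gets a count of
 * 2 here even though it uses a single sreg, since the table records
 * last-used-slot + 1 rather than a population count.)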
*/ const gint8 mini_ins_sreg_counts[] = { #include "mini-ops.h" }; #undef MINI_OP #undef MINI_OP3 guint32 mono_alloc_ireg (MonoCompile *cfg) { return alloc_ireg (cfg); } guint32 mono_alloc_lreg (MonoCompile *cfg) { return alloc_lreg (cfg); } guint32 mono_alloc_freg (MonoCompile *cfg) { return alloc_freg (cfg); } guint32 mono_alloc_preg (MonoCompile *cfg) { return alloc_preg (cfg); } guint32 mono_alloc_dreg (MonoCompile *cfg, MonoStackType stack_type) { return alloc_dreg (cfg, stack_type); } /* * mono_alloc_ireg_ref: * * Allocate an IREG, and mark it as holding a GC ref. */ guint32 mono_alloc_ireg_ref (MonoCompile *cfg) { return alloc_ireg_ref (cfg); } /* * mono_alloc_ireg_mp: * * Allocate an IREG, and mark it as holding a managed pointer. */ guint32 mono_alloc_ireg_mp (MonoCompile *cfg) { return alloc_ireg_mp (cfg); } /* * mono_alloc_ireg_copy: * * Allocate an IREG with the same GC type as VREG. */ guint32 mono_alloc_ireg_copy (MonoCompile *cfg, guint32 vreg) { if (vreg_is_ref (cfg, vreg)) return alloc_ireg_ref (cfg); else if (vreg_is_mp (cfg, vreg)) return alloc_ireg_mp (cfg); else return alloc_ireg (cfg); } guint mono_type_to_regmove (MonoCompile *cfg, MonoType *type) { if (m_type_is_byref (type)) return OP_MOVE; type = mini_get_underlying_type (type); handle_enum: switch (type->type) { case MONO_TYPE_I1: case MONO_TYPE_U1: return OP_MOVE; case MONO_TYPE_I2: case MONO_TYPE_U2: return OP_MOVE; case MONO_TYPE_I4: case MONO_TYPE_U4: return OP_MOVE; case MONO_TYPE_I: case MONO_TYPE_U: case MONO_TYPE_PTR: case MONO_TYPE_FNPTR: return OP_MOVE; case MONO_TYPE_CLASS: case MONO_TYPE_STRING: case MONO_TYPE_OBJECT: case MONO_TYPE_SZARRAY: case MONO_TYPE_ARRAY: return OP_MOVE; case MONO_TYPE_I8: case MONO_TYPE_U8: #if SIZEOF_REGISTER == 8 return OP_MOVE; #else return OP_LMOVE; #endif case MONO_TYPE_R4: return cfg->r4fp ? 
OP_RMOVE : OP_FMOVE; case MONO_TYPE_R8: return OP_FMOVE; case MONO_TYPE_VALUETYPE: if (m_class_is_enumtype (type->data.klass)) { type = mono_class_enum_basetype_internal (type->data.klass); goto handle_enum; } if (MONO_CLASS_IS_SIMD (cfg, mono_class_from_mono_type_internal (type))) return OP_XMOVE; return OP_VMOVE; case MONO_TYPE_TYPEDBYREF: return OP_VMOVE; case MONO_TYPE_GENERICINST: if (MONO_CLASS_IS_SIMD (cfg, mono_class_from_mono_type_internal (type))) return OP_XMOVE; type = m_class_get_byval_arg (type->data.generic_class->container_class); goto handle_enum; case MONO_TYPE_VAR: case MONO_TYPE_MVAR: g_assert (cfg->gshared); if (mini_type_var_is_vt (type)) return OP_VMOVE; else return mono_type_to_regmove (cfg, mini_get_underlying_type (type)); default: g_error ("unknown type 0x%02x in type_to_regstore", type->type); } return -1; } void mono_print_bb (MonoBasicBlock *bb, const char *msg) { int i; MonoInst *tree; GString *str = g_string_new (""); g_string_append_printf (str, "%s %d: [IN: ", msg, bb->block_num); for (i = 0; i < bb->in_count; ++i) g_string_append_printf (str, " BB%d(%d)", bb->in_bb [i]->block_num, bb->in_bb [i]->dfn); g_string_append_printf (str, ", OUT: "); for (i = 0; i < bb->out_count; ++i) g_string_append_printf (str, " BB%d(%d)", bb->out_bb [i]->block_num, bb->out_bb [i]->dfn); g_string_append_printf (str, " ]\n"); g_print ("%s", str->str); g_string_free (str, TRUE); for (tree = bb->code; tree; tree = tree->next) mono_print_ins_index (-1, tree); } static MONO_NEVER_INLINE gboolean break_on_unverified (void) { if (mini_debug_options.break_on_unverified) { G_BREAKPOINT (); return TRUE; } return FALSE; } static void clear_cfg_error (MonoCompile *cfg) { mono_error_cleanup (cfg->error); error_init (cfg->error); } static MONO_NEVER_INLINE void field_access_failure (MonoCompile *cfg, MonoMethod *method, MonoClassField *field) { char *method_fname = mono_method_full_name (method, TRUE); char *field_fname = mono_field_full_name (field); mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR); mono_error_set_generic_error (cfg->error, "System", "FieldAccessException", "Field `%s' is inaccessible from method `%s'\n", field_fname, method_fname); g_free (method_fname); g_free (field_fname); } static MONO_NEVER_INLINE void inline_failure (MonoCompile *cfg, const char *msg) { if (cfg->verbose_level >= 2) printf ("inline failed: %s\n", msg); mono_cfg_set_exception (cfg, MONO_EXCEPTION_INLINE_FAILED); } static MONO_NEVER_INLINE void gshared_failure (MonoCompile *cfg, int opcode, const char *file, int line) { if (cfg->verbose_level > 2) printf ("sharing failed for method %s.%s.%s/%d opcode %s line %d\n", m_class_get_name_space (cfg->current_method->klass), m_class_get_name (cfg->current_method->klass), cfg->current_method->name, cfg->current_method->signature->param_count, mono_opcode_name (opcode), line); mono_cfg_set_exception (cfg, MONO_EXCEPTION_GENERIC_SHARING_FAILED); } static MONO_NEVER_INLINE void gsharedvt_failure (MonoCompile *cfg, int opcode, const char *file, int line) { cfg->exception_message = g_strdup_printf ("gsharedvt failed for method %s.%s.%s/%d opcode %s %s:%d", m_class_get_name_space (cfg->current_method->klass), m_class_get_name (cfg->current_method->klass), cfg->current_method->name, cfg->current_method->signature->param_count, mono_opcode_name ((opcode)), file, line); if (cfg->verbose_level >= 2) printf ("%s\n", cfg->exception_message); mono_cfg_set_exception (cfg, MONO_EXCEPTION_GENERIC_SHARING_FAILED); } void mini_set_inline_failure (MonoCompile *cfg, const 
char *msg) { if (cfg->verbose_level >= 2) printf ("inline failed: %s\n", msg); mono_cfg_set_exception (cfg, MONO_EXCEPTION_INLINE_FAILED); } /* * When using gsharedvt, some instatiations might be verifiable, and some might be not. i.e. * foo<T> (int i) { ldarg.0; box T; } */ #define UNVERIFIED do { \ if (cfg->gsharedvt) { \ if (cfg->verbose_level > 2) \ printf ("gsharedvt method failed to verify, falling back to instantiation.\n"); \ mono_cfg_set_exception (cfg, MONO_EXCEPTION_GENERIC_SHARING_FAILED); \ goto exception_exit; \ } \ break_on_unverified (); \ goto unverified; \ } while (0) #define GET_BBLOCK(cfg,tblock,ip) do { \ (tblock) = cfg->cil_offset_to_bb [(ip) - cfg->cil_start]; \ if (!(tblock)) { \ if ((ip) >= end || (ip) < header->code) UNVERIFIED; \ NEW_BBLOCK (cfg, (tblock)); \ (tblock)->cil_code = (ip); \ ADD_BBLOCK (cfg, (tblock)); \ } \ } while (0) /* Emit conversions so both operands of a binary opcode are of the same type */ static void add_widen_op (MonoCompile *cfg, MonoInst *ins, MonoInst **arg1_ref, MonoInst **arg2_ref) { MonoInst *arg1 = *arg1_ref; MonoInst *arg2 = *arg2_ref; if (cfg->r4fp && ((arg1->type == STACK_R4 && arg2->type == STACK_R8) || (arg1->type == STACK_R8 && arg2->type == STACK_R4))) { MonoInst *conv; /* Mixing r4/r8 is allowed by the spec */ if (arg1->type == STACK_R4) { int dreg = alloc_freg (cfg); EMIT_NEW_UNALU (cfg, conv, OP_RCONV_TO_R8, dreg, arg1->dreg); conv->type = STACK_R8; ins->sreg1 = dreg; *arg1_ref = conv; } if (arg2->type == STACK_R4) { int dreg = alloc_freg (cfg); EMIT_NEW_UNALU (cfg, conv, OP_RCONV_TO_R8, dreg, arg2->dreg); conv->type = STACK_R8; ins->sreg2 = dreg; *arg2_ref = conv; } } #if SIZEOF_REGISTER == 8 /* FIXME: Need to add many more cases */ if ((arg1)->type == STACK_PTR && (arg2)->type == STACK_I4) { MonoInst *widen; int dr = alloc_preg (cfg); EMIT_NEW_UNALU (cfg, widen, OP_SEXT_I4, dr, (arg2)->dreg); (ins)->sreg2 = widen->dreg; } #endif } #define ADD_UNOP(op) do { \ MONO_INST_NEW (cfg, ins, (op)); \ sp--; \ ins->sreg1 = sp [0]->dreg; \ type_from_op (cfg, ins, sp [0], NULL); \ CHECK_TYPE (ins); \ (ins)->dreg = alloc_dreg ((cfg), (MonoStackType)(ins)->type); \ MONO_ADD_INS ((cfg)->cbb, (ins)); \ *sp++ = mono_decompose_opcode (cfg, ins); \ } while (0) #define ADD_BINCOND(next_block) do { \ MonoInst *cmp; \ sp -= 2; \ MONO_INST_NEW(cfg, cmp, OP_COMPARE); \ cmp->sreg1 = sp [0]->dreg; \ cmp->sreg2 = sp [1]->dreg; \ add_widen_op (cfg, cmp, &sp [0], &sp [1]); \ type_from_op (cfg, cmp, sp [0], sp [1]); \ CHECK_TYPE (cmp); \ type_from_op (cfg, ins, sp [0], sp [1]); \ ins->inst_many_bb = (MonoBasicBlock **)mono_mempool_alloc (cfg->mempool, sizeof(gpointer)*2); \ GET_BBLOCK (cfg, tblock, target); \ link_bblock (cfg, cfg->cbb, tblock); \ ins->inst_true_bb = tblock; \ if ((next_block)) { \ link_bblock (cfg, cfg->cbb, (next_block)); \ ins->inst_false_bb = (next_block); \ start_new_bblock = 1; \ } else { \ GET_BBLOCK (cfg, tblock, next_ip); \ link_bblock (cfg, cfg->cbb, tblock); \ ins->inst_false_bb = tblock; \ start_new_bblock = 2; \ } \ if (sp != stack_start) { \ handle_stack_args (cfg, stack_start, sp - stack_start); \ CHECK_UNVERIFIABLE (cfg); \ } \ MONO_ADD_INS (cfg->cbb, cmp); \ MONO_ADD_INS (cfg->cbb, ins); \ } while (0) /* * * link_bblock: Links two basic blocks * * links two basic blocks in the control flow graph, the 'from' * argument is the starting block and the 'to' argument is the block * the control flow ends to after 'from'. 
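 * Both directions are recorded: 'to' is appended to from->out_bb and 'from'
 * to to->in_bb, each array being re-allocated from the mempool; an edge that
 * is already present is left alone.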
*/ static void link_bblock (MonoCompile *cfg, MonoBasicBlock *from, MonoBasicBlock* to) { MonoBasicBlock **newa; int i, found; #if 0 if (from->cil_code) { if (to->cil_code) printf ("edge from IL%04x to IL_%04x\n", from->cil_code - cfg->cil_code, to->cil_code - cfg->cil_code); else printf ("edge from IL%04x to exit\n", from->cil_code - cfg->cil_code); } else { if (to->cil_code) printf ("edge from entry to IL_%04x\n", to->cil_code - cfg->cil_code); else printf ("edge from entry to exit\n"); } #endif found = FALSE; for (i = 0; i < from->out_count; ++i) { if (to == from->out_bb [i]) { found = TRUE; break; } } if (!found) { newa = (MonoBasicBlock **)mono_mempool_alloc (cfg->mempool, sizeof (gpointer) * (from->out_count + 1)); for (i = 0; i < from->out_count; ++i) { newa [i] = from->out_bb [i]; } newa [i] = to; from->out_count++; from->out_bb = newa; } found = FALSE; for (i = 0; i < to->in_count; ++i) { if (from == to->in_bb [i]) { found = TRUE; break; } } if (!found) { newa = (MonoBasicBlock **)mono_mempool_alloc (cfg->mempool, sizeof (gpointer) * (to->in_count + 1)); for (i = 0; i < to->in_count; ++i) { newa [i] = to->in_bb [i]; } newa [i] = from; to->in_count++; to->in_bb = newa; } } void mono_link_bblock (MonoCompile *cfg, MonoBasicBlock *from, MonoBasicBlock* to) { link_bblock (cfg, from, to); } static void mono_create_spvar_for_region (MonoCompile *cfg, int region); static void mark_bb_in_region (MonoCompile *cfg, guint region, uint32_t start, uint32_t end) { MonoBasicBlock *bb = cfg->cil_offset_to_bb [start]; //start must exist in cil_offset_to_bb as those are il offsets used by EH which should have GET_BBLOCK early. g_assert (bb); if (cfg->verbose_level > 1) g_print ("FIRST BB for %d is BB_%d\n", start, bb->block_num); for (; bb && bb->real_offset < end; bb = bb->next_bb) { //no one claimed this bb, take it. 
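		/* (Note: 'region' packs ((clause index + 1) << 8) | region kind | clause flags; see compute_bb_regions () below.) */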
if (bb->region == -1) { bb->region = region; continue; } //current region is an early handler, bail if ((bb->region & (0xf << 4)) != MONO_REGION_TRY) { continue; } //current region is a try, only overwrite if new region is a handler if ((region & (0xf << 4)) != MONO_REGION_TRY) { bb->region = region; } } if (cfg->spvars) mono_create_spvar_for_region (cfg, region); } static void compute_bb_regions (MonoCompile *cfg) { MonoBasicBlock *bb; MonoMethodHeader *header = cfg->header; int i; for (bb = cfg->bb_entry; bb; bb = bb->next_bb) bb->region = -1; for (i = 0; i < header->num_clauses; ++i) { MonoExceptionClause *clause = &header->clauses [i]; if (clause->flags == MONO_EXCEPTION_CLAUSE_FILTER) mark_bb_in_region (cfg, ((i + 1) << 8) | MONO_REGION_FILTER | clause->flags, clause->data.filter_offset, clause->handler_offset); guint handler_region; if (clause->flags == MONO_EXCEPTION_CLAUSE_FINALLY) handler_region = ((i + 1) << 8) | MONO_REGION_FINALLY | clause->flags; else if (clause->flags == MONO_EXCEPTION_CLAUSE_FAULT) handler_region = ((i + 1) << 8) | MONO_REGION_FAULT | clause->flags; else handler_region = ((i + 1) << 8) | MONO_REGION_CATCH | clause->flags; mark_bb_in_region (cfg, handler_region, clause->handler_offset, clause->handler_offset + clause->handler_len); mark_bb_in_region (cfg, ((i + 1) << 8) | clause->flags, clause->try_offset, clause->try_offset + clause->try_len); } if (cfg->verbose_level > 2) { MonoBasicBlock *bb; for (bb = cfg->bb_entry; bb; bb = bb->next_bb) g_print ("REGION BB%d IL_%04x ID_%08X\n", bb->block_num, bb->real_offset, bb->region); } } static gboolean ip_in_finally_clause (MonoCompile *cfg, int offset) { MonoMethodHeader *header = cfg->header; MonoExceptionClause *clause; int i; for (i = 0; i < header->num_clauses; ++i) { clause = &header->clauses [i]; if (clause->flags != MONO_EXCEPTION_CLAUSE_FINALLY && clause->flags != MONO_EXCEPTION_CLAUSE_FAULT) continue; if (MONO_OFFSET_IN_HANDLER (clause, offset)) return TRUE; } return FALSE; } /* Find clauses between ip and target, from inner to outer */ static GList* mono_find_leave_clauses (MonoCompile *cfg, guchar *ip, guchar *target) { MonoMethodHeader *header = cfg->header; MonoExceptionClause *clause; int i; GList *res = NULL; for (i = 0; i < header->num_clauses; ++i) { clause = &header->clauses [i]; if (MONO_OFFSET_IN_CLAUSE (clause, (ip - header->code)) && (!MONO_OFFSET_IN_CLAUSE (clause, (target - header->code)))) { MonoLeaveClause *leave = mono_mempool_alloc0 (cfg->mempool, sizeof (MonoLeaveClause)); leave->index = i; leave->clause = clause; res = g_list_append_mempool (cfg->mempool, res, leave); } } return res; } static void mono_create_spvar_for_region (MonoCompile *cfg, int region) { MonoInst *var; var = (MonoInst *)g_hash_table_lookup (cfg->spvars, GINT_TO_POINTER (region)); if (var) return; var = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL); /* prevent it from being register allocated */ var->flags |= MONO_INST_VOLATILE; g_hash_table_insert (cfg->spvars, GINT_TO_POINTER (region), var); } MonoInst * mono_find_exvar_for_offset (MonoCompile *cfg, int offset) { return (MonoInst *)g_hash_table_lookup (cfg->exvars, GINT_TO_POINTER (offset)); } static MonoInst* mono_create_exvar_for_offset (MonoCompile *cfg, int offset) { MonoInst *var; var = (MonoInst *)g_hash_table_lookup (cfg->exvars, GINT_TO_POINTER (offset)); if (var) return var; var = mono_compile_create_var (cfg, mono_get_object_type (), OP_LOCAL); /* prevent it from being register allocated */ var->flags |= MONO_INST_VOLATILE; 
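	/* (Note: the variable is cached per IL offset, so later requests for the same handler reuse it.) */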
g_hash_table_insert (cfg->exvars, GINT_TO_POINTER (offset), var); return var; } /* * Returns the type used in the eval stack when @type is loaded. * FIXME: return a MonoType/MonoClass for the byref and VALUETYPE cases. */ void mini_type_to_eval_stack_type (MonoCompile *cfg, MonoType *type, MonoInst *inst) { MonoClass *klass; type = mini_get_underlying_type (type); inst->klass = klass = mono_class_from_mono_type_internal (type); if (m_type_is_byref (type)) { inst->type = STACK_MP; return; } handle_enum: switch (type->type) { case MONO_TYPE_VOID: inst->type = STACK_INV; return; case MONO_TYPE_I1: case MONO_TYPE_U1: case MONO_TYPE_I2: case MONO_TYPE_U2: case MONO_TYPE_I4: case MONO_TYPE_U4: inst->type = STACK_I4; return; case MONO_TYPE_I: case MONO_TYPE_U: case MONO_TYPE_PTR: case MONO_TYPE_FNPTR: inst->type = STACK_PTR; return; case MONO_TYPE_CLASS: case MONO_TYPE_STRING: case MONO_TYPE_OBJECT: case MONO_TYPE_SZARRAY: case MONO_TYPE_ARRAY: inst->type = STACK_OBJ; return; case MONO_TYPE_I8: case MONO_TYPE_U8: inst->type = STACK_I8; return; case MONO_TYPE_R4: inst->type = cfg->r4_stack_type; break; case MONO_TYPE_R8: inst->type = STACK_R8; return; case MONO_TYPE_VALUETYPE: if (m_class_is_enumtype (type->data.klass)) { type = mono_class_enum_basetype_internal (type->data.klass); goto handle_enum; } else { inst->klass = klass; inst->type = STACK_VTYPE; return; } case MONO_TYPE_TYPEDBYREF: inst->klass = mono_defaults.typed_reference_class; inst->type = STACK_VTYPE; return; case MONO_TYPE_GENERICINST: type = m_class_get_byval_arg (type->data.generic_class->container_class); goto handle_enum; case MONO_TYPE_VAR: case MONO_TYPE_MVAR: g_assert (cfg->gshared); if (mini_is_gsharedvt_type (type)) { g_assert (cfg->gsharedvt); inst->type = STACK_VTYPE; } else { mini_type_to_eval_stack_type (cfg, mini_get_underlying_type (type), inst); } return; default: g_error ("unknown type 0x%02x in eval stack type", type->type); } } /* * The following tables are used to quickly validate the IL code in type_from_op (). */ #define IF_P8(v) (SIZEOF_VOID_P == 8 ? 
v : STACK_INV) #define IF_P8_I8 IF_P8(STACK_I8) #define IF_P8_PTR IF_P8(STACK_PTR) static const char bin_num_table [STACK_MAX] [STACK_MAX] = { {STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV}, {STACK_INV, STACK_I4, IF_P8_I8, STACK_PTR, STACK_INV, STACK_MP, STACK_INV, STACK_INV}, {STACK_INV, IF_P8_I8, STACK_I8, IF_P8_PTR, STACK_INV, STACK_INV, STACK_INV, STACK_INV}, {STACK_INV, STACK_PTR, IF_P8_PTR, STACK_PTR, STACK_INV, STACK_MP, STACK_INV, STACK_INV}, {STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_R8, STACK_INV, STACK_INV, STACK_INV, STACK_R8}, {STACK_INV, STACK_MP, STACK_INV, STACK_MP, STACK_INV, STACK_PTR, STACK_INV, STACK_INV}, {STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV}, {STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV}, {STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_R8, STACK_INV, STACK_INV, STACK_INV, STACK_R4} }; static const char neg_table [] = { STACK_INV, STACK_I4, STACK_I8, STACK_PTR, STACK_R8, STACK_INV, STACK_INV, STACK_INV, STACK_R4 }; /* reduce the size of this table */ static const char bin_int_table [STACK_MAX] [STACK_MAX] = { {STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV}, {STACK_INV, STACK_I4, IF_P8_I8, STACK_PTR, STACK_INV, STACK_INV, STACK_INV, STACK_INV}, {STACK_INV, IF_P8_I8, STACK_I8, IF_P8_PTR, STACK_INV, STACK_INV, STACK_INV, STACK_INV}, {STACK_INV, STACK_PTR, IF_P8_PTR, STACK_PTR, STACK_INV, STACK_INV, STACK_INV, STACK_INV}, {STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV}, {STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV}, {STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV}, {STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV} }; #define P1 (SIZEOF_VOID_P == 8) static const char bin_comp_table [STACK_MAX] [STACK_MAX] = { /* Inv i L p F & O vt r4 */ {0}, {0, 1, 0, 1, 0, 0, 0, 0}, /* i, int32 */ {0, 0, 1,P1, 0, 0, 0, 0}, /* L, int64 */ {0, 1,P1, 1, 0, 2, 4, 0}, /* p, ptr */ {0, 0, 0, 0, 1, 0, 0, 0, 1}, /* F, R8 */ {0, 0, 0, 2, 0, 1, 0, 0}, /* &, managed pointer */ {0, 0, 0, 4, 0, 0, 3, 0}, /* O, reference */ {0, 0, 0, 0, 0, 0, 0, 0}, /* vt value type */ {0, 0, 0, 0, 1, 0, 0, 0, 1}, /* r, r4 */ }; #undef P1 /* reduce the size of this table */ static const char shift_table [STACK_MAX] [STACK_MAX] = { {STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV}, {STACK_INV, STACK_I4, STACK_INV, STACK_I4, STACK_INV, STACK_INV, STACK_INV, STACK_INV}, {STACK_INV, STACK_I8, STACK_INV, STACK_I8, STACK_INV, STACK_INV, STACK_INV, STACK_INV}, {STACK_INV, STACK_PTR, STACK_INV, STACK_PTR, STACK_INV, STACK_INV, STACK_INV, STACK_INV}, {STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV}, {STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV}, {STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV}, {STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV, STACK_INV} }; /* * Tables to map from the non-specific opcode to the matching * type-specific opcode. 
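 * (Worked example: for CEE_ADD on two STACK_I4 operands, type_from_op () sets
 * ins->type = STACK_I4 and does ins->opcode += binops_op_map [STACK_I4],
 * i.e. CEE_ADD + (OP_IADD - CEE_ADD) == OP_IADD.)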
*/ /* handles from CEE_ADD to CEE_SHR_UN (CEE_REM_UN for floats) */ static const guint16 binops_op_map [STACK_MAX] = { 0, OP_IADD-CEE_ADD, OP_LADD-CEE_ADD, OP_PADD-CEE_ADD, OP_FADD-CEE_ADD, OP_PADD-CEE_ADD, 0, 0, OP_RADD-CEE_ADD }; /* handles from CEE_NEG to CEE_CONV_U8 */ static const guint16 unops_op_map [STACK_MAX] = { 0, OP_INEG-CEE_NEG, OP_LNEG-CEE_NEG, OP_PNEG-CEE_NEG, OP_FNEG-CEE_NEG, OP_PNEG-CEE_NEG, 0, 0, OP_RNEG-CEE_NEG }; /* handles from CEE_CONV_U2 to CEE_SUB_OVF_UN */ static const guint16 ovfops_op_map [STACK_MAX] = { 0, OP_ICONV_TO_U2-CEE_CONV_U2, OP_LCONV_TO_U2-CEE_CONV_U2, OP_PCONV_TO_U2-CEE_CONV_U2, OP_FCONV_TO_U2-CEE_CONV_U2, OP_PCONV_TO_U2-CEE_CONV_U2, OP_PCONV_TO_U2-CEE_CONV_U2, 0, OP_RCONV_TO_U2-CEE_CONV_U2 }; /* handles from CEE_CONV_OVF_I1_UN to CEE_CONV_OVF_U_UN */ static const guint16 ovf2ops_op_map [STACK_MAX] = { 0, OP_ICONV_TO_OVF_I1_UN-CEE_CONV_OVF_I1_UN, OP_LCONV_TO_OVF_I1_UN-CEE_CONV_OVF_I1_UN, OP_PCONV_TO_OVF_I1_UN-CEE_CONV_OVF_I1_UN, OP_FCONV_TO_OVF_I1_UN-CEE_CONV_OVF_I1_UN, OP_PCONV_TO_OVF_I1_UN-CEE_CONV_OVF_I1_UN, 0, 0, OP_RCONV_TO_OVF_I1_UN-CEE_CONV_OVF_I1_UN }; /* handles from CEE_CONV_OVF_I1 to CEE_CONV_OVF_U8 */ static const guint16 ovf3ops_op_map [STACK_MAX] = { 0, OP_ICONV_TO_OVF_I1-CEE_CONV_OVF_I1, OP_LCONV_TO_OVF_I1-CEE_CONV_OVF_I1, OP_PCONV_TO_OVF_I1-CEE_CONV_OVF_I1, OP_FCONV_TO_OVF_I1-CEE_CONV_OVF_I1, OP_PCONV_TO_OVF_I1-CEE_CONV_OVF_I1, 0, 0, OP_RCONV_TO_OVF_I1-CEE_CONV_OVF_I1 }; /* handles from CEE_BEQ to CEE_BLT_UN */ static const guint16 beqops_op_map [STACK_MAX] = { 0, OP_IBEQ-CEE_BEQ, OP_LBEQ-CEE_BEQ, OP_PBEQ-CEE_BEQ, OP_FBEQ-CEE_BEQ, OP_PBEQ-CEE_BEQ, OP_PBEQ-CEE_BEQ, 0, OP_FBEQ-CEE_BEQ }; /* handles from CEE_CEQ to CEE_CLT_UN */ static const guint16 ceqops_op_map [STACK_MAX] = { 0, OP_ICEQ-OP_CEQ, OP_LCEQ-OP_CEQ, OP_PCEQ-OP_CEQ, OP_FCEQ-OP_CEQ, OP_PCEQ-OP_CEQ, OP_PCEQ-OP_CEQ, 0, OP_RCEQ-OP_CEQ }; /* * Sets ins->type (the type on the eval stack) according to the * type of the opcode and the arguments to it. * Invalid IL code is marked by setting ins->type to the invalid value STACK_INV. * * FIXME: this function sets ins->type unconditionally in some cases, but * it should set it to invalid for some types (a conv.x on an object) */ static void type_from_op (MonoCompile *cfg, MonoInst *ins, MonoInst *src1, MonoInst *src2) { switch (ins->opcode) { /* binops */ case MONO_CEE_ADD: case MONO_CEE_SUB: case MONO_CEE_MUL: case MONO_CEE_DIV: case MONO_CEE_REM: /* FIXME: check unverifiable args for STACK_MP */ ins->type = bin_num_table [src1->type] [src2->type]; ins->opcode += binops_op_map [ins->type]; break; case MONO_CEE_DIV_UN: case MONO_CEE_REM_UN: case MONO_CEE_AND: case MONO_CEE_OR: case MONO_CEE_XOR: ins->type = bin_int_table [src1->type] [src2->type]; ins->opcode += binops_op_map [ins->type]; break; case MONO_CEE_SHL: case MONO_CEE_SHR: case MONO_CEE_SHR_UN: ins->type = shift_table [src1->type] [src2->type]; ins->opcode += binops_op_map [ins->type]; break; case OP_COMPARE: case OP_LCOMPARE: case OP_ICOMPARE: ins->type = bin_comp_table [src1->type] [src2->type] ? STACK_I4: STACK_INV; if ((src1->type == STACK_I8) || ((TARGET_SIZEOF_VOID_P == 8) && ((src1->type == STACK_PTR) || (src1->type == STACK_OBJ) || (src1->type == STACK_MP)))) ins->opcode = OP_LCOMPARE; else if (src1->type == STACK_R4) ins->opcode = OP_RCOMPARE; else if (src1->type == STACK_R8) ins->opcode = OP_FCOMPARE; else ins->opcode = OP_ICOMPARE; break; case OP_ICOMPARE_IMM: ins->type = bin_comp_table [src1->type] [src1->type] ? 
STACK_I4 : STACK_INV; if ((src1->type == STACK_I8) || ((TARGET_SIZEOF_VOID_P == 8) && ((src1->type == STACK_PTR) || (src1->type == STACK_OBJ) || (src1->type == STACK_MP)))) ins->opcode = OP_LCOMPARE_IMM; break; case MONO_CEE_BEQ: case MONO_CEE_BGE: case MONO_CEE_BGT: case MONO_CEE_BLE: case MONO_CEE_BLT: case MONO_CEE_BNE_UN: case MONO_CEE_BGE_UN: case MONO_CEE_BGT_UN: case MONO_CEE_BLE_UN: case MONO_CEE_BLT_UN: ins->opcode += beqops_op_map [src1->type]; break; case OP_CEQ: ins->type = bin_comp_table [src1->type] [src2->type] ? STACK_I4: STACK_INV; ins->opcode += ceqops_op_map [src1->type]; break; case OP_CGT: case OP_CGT_UN: case OP_CLT: case OP_CLT_UN: ins->type = (bin_comp_table [src1->type] [src2->type] & 1) ? STACK_I4: STACK_INV; ins->opcode += ceqops_op_map [src1->type]; break; /* unops */ case MONO_CEE_NEG: ins->type = neg_table [src1->type]; ins->opcode += unops_op_map [ins->type]; break; case MONO_CEE_NOT: if (src1->type >= STACK_I4 && src1->type <= STACK_PTR) ins->type = src1->type; else ins->type = STACK_INV; ins->opcode += unops_op_map [ins->type]; break; case MONO_CEE_CONV_I1: case MONO_CEE_CONV_I2: case MONO_CEE_CONV_I4: case MONO_CEE_CONV_U4: ins->type = STACK_I4; ins->opcode += unops_op_map [src1->type]; break; case MONO_CEE_CONV_R_UN: ins->type = STACK_R8; switch (src1->type) { case STACK_I4: case STACK_PTR: ins->opcode = OP_ICONV_TO_R_UN; break; case STACK_I8: ins->opcode = OP_LCONV_TO_R_UN; break; case STACK_R4: ins->opcode = OP_RCONV_TO_R8; break; case STACK_R8: ins->opcode = OP_FMOVE; break; } break; case MONO_CEE_CONV_OVF_I1: case MONO_CEE_CONV_OVF_U1: case MONO_CEE_CONV_OVF_I2: case MONO_CEE_CONV_OVF_U2: case MONO_CEE_CONV_OVF_I4: case MONO_CEE_CONV_OVF_U4: ins->type = STACK_I4; ins->opcode += ovf3ops_op_map [src1->type]; break; case MONO_CEE_CONV_OVF_I_UN: case MONO_CEE_CONV_OVF_U_UN: ins->type = STACK_PTR; ins->opcode += ovf2ops_op_map [src1->type]; break; case MONO_CEE_CONV_OVF_I1_UN: case MONO_CEE_CONV_OVF_I2_UN: case MONO_CEE_CONV_OVF_I4_UN: case MONO_CEE_CONV_OVF_U1_UN: case MONO_CEE_CONV_OVF_U2_UN: case MONO_CEE_CONV_OVF_U4_UN: ins->type = STACK_I4; ins->opcode += ovf2ops_op_map [src1->type]; break; case MONO_CEE_CONV_U: ins->type = STACK_PTR; switch (src1->type) { case STACK_I4: ins->opcode = OP_ICONV_TO_U; break; case STACK_PTR: case STACK_MP: case STACK_OBJ: #if TARGET_SIZEOF_VOID_P == 8 ins->opcode = OP_LCONV_TO_U; #else ins->opcode = OP_MOVE; #endif break; case STACK_I8: ins->opcode = OP_LCONV_TO_U; break; case STACK_R8: if (TARGET_SIZEOF_VOID_P == 8) ins->opcode = OP_FCONV_TO_U8; else ins->opcode = OP_FCONV_TO_U4; break; case STACK_R4: if (TARGET_SIZEOF_VOID_P == 8) ins->opcode = OP_RCONV_TO_U8; else ins->opcode = OP_RCONV_TO_U4; break; } break; case MONO_CEE_CONV_I8: case MONO_CEE_CONV_U8: ins->type = STACK_I8; ins->opcode += unops_op_map [src1->type]; break; case MONO_CEE_CONV_OVF_I8: case MONO_CEE_CONV_OVF_U8: ins->type = STACK_I8; ins->opcode += ovf3ops_op_map [src1->type]; break; case MONO_CEE_CONV_OVF_U8_UN: case MONO_CEE_CONV_OVF_I8_UN: ins->type = STACK_I8; ins->opcode += ovf2ops_op_map [src1->type]; break; case MONO_CEE_CONV_R4: ins->type = cfg->r4_stack_type; ins->opcode += unops_op_map [src1->type]; break; case MONO_CEE_CONV_R8: ins->type = STACK_R8; ins->opcode += unops_op_map [src1->type]; break; case OP_CKFINITE: ins->type = STACK_R8; break; case MONO_CEE_CONV_U2: case MONO_CEE_CONV_U1: ins->type = STACK_I4; ins->opcode += ovfops_op_map [src1->type]; break; case MONO_CEE_CONV_I: case MONO_CEE_CONV_OVF_I: case MONO_CEE_CONV_OVF_U: ins->type 
= STACK_PTR; ins->opcode += ovfops_op_map [src1->type]; switch (ins->opcode) { case OP_FCONV_TO_I: ins->opcode = TARGET_SIZEOF_VOID_P == 4 ? OP_FCONV_TO_I4 : OP_FCONV_TO_I8; break; case OP_RCONV_TO_I: ins->opcode = TARGET_SIZEOF_VOID_P == 4 ? OP_RCONV_TO_I4 : OP_RCONV_TO_I8; break; default: break; } break; case MONO_CEE_ADD_OVF: case MONO_CEE_ADD_OVF_UN: case MONO_CEE_MUL_OVF: case MONO_CEE_MUL_OVF_UN: case MONO_CEE_SUB_OVF: case MONO_CEE_SUB_OVF_UN: ins->type = bin_num_table [src1->type] [src2->type]; ins->opcode += ovfops_op_map [src1->type]; if (ins->type == STACK_R8) ins->type = STACK_INV; break; case OP_LOAD_MEMBASE: ins->type = STACK_PTR; break; case OP_LOADI1_MEMBASE: case OP_LOADU1_MEMBASE: case OP_LOADI2_MEMBASE: case OP_LOADU2_MEMBASE: case OP_LOADI4_MEMBASE: case OP_LOADU4_MEMBASE: ins->type = STACK_PTR; break; case OP_LOADI8_MEMBASE: ins->type = STACK_I8; break; case OP_LOADR4_MEMBASE: ins->type = cfg->r4_stack_type; break; case OP_LOADR8_MEMBASE: ins->type = STACK_R8; break; default: g_error ("opcode 0x%04x not handled in type from op", ins->opcode); break; } if (ins->type == STACK_MP) { if (src1->type == STACK_MP) ins->klass = src1->klass; else ins->klass = mono_defaults.object_class; } } void mini_type_from_op (MonoCompile *cfg, MonoInst *ins, MonoInst *src1, MonoInst *src2) { type_from_op (cfg, ins, src1, src2); } static MonoClass* ldind_to_type (int op) { switch (op) { case MONO_CEE_LDIND_I1: return mono_defaults.sbyte_class; case MONO_CEE_LDIND_U1: return mono_defaults.byte_class; case MONO_CEE_LDIND_I2: return mono_defaults.int16_class; case MONO_CEE_LDIND_U2: return mono_defaults.uint16_class; case MONO_CEE_LDIND_I4: return mono_defaults.int32_class; case MONO_CEE_LDIND_U4: return mono_defaults.uint32_class; case MONO_CEE_LDIND_I8: return mono_defaults.int64_class; case MONO_CEE_LDIND_I: return mono_defaults.int_class; case MONO_CEE_LDIND_R4: return mono_defaults.single_class; case MONO_CEE_LDIND_R8: return mono_defaults.double_class; case MONO_CEE_LDIND_REF: return mono_defaults.object_class; //FIXME we should try to return a more specific type default: g_error ("Unknown ldind type %d", op); } } static MonoClass* stind_to_type (int op) { switch (op) { case MONO_CEE_STIND_I1: return mono_defaults.sbyte_class; case MONO_CEE_STIND_I2: return mono_defaults.int16_class; case MONO_CEE_STIND_I4: return mono_defaults.int32_class; case MONO_CEE_STIND_I8: return mono_defaults.int64_class; case MONO_CEE_STIND_I: return mono_defaults.int_class; case MONO_CEE_STIND_R4: return mono_defaults.single_class; case MONO_CEE_STIND_R8: return mono_defaults.double_class; case MONO_CEE_STIND_REF: return mono_defaults.object_class; default: g_error ("Unknown stind type %d", op); } }
#if 0
static const char param_table [STACK_MAX] [STACK_MAX] = { {0}, }; static int check_values_to_signature (MonoInst *args, MonoType *this_ins, MonoMethodSignature *sig) { int i; if (sig->hasthis) { switch (args->type) { case STACK_I4: case STACK_I8: case STACK_R8: case STACK_VTYPE: case STACK_INV: return 0; } args++; } for (i = 0; i < sig->param_count; ++i) { switch (args [i].type) { case STACK_INV: return 0; case STACK_MP: if (!m_type_is_byref (sig->params [i])) return 0; continue; case STACK_OBJ: if (m_type_is_byref (sig->params [i])) return 0; switch (sig->params [i]->type) { case MONO_TYPE_CLASS: case MONO_TYPE_STRING: case MONO_TYPE_OBJECT: case MONO_TYPE_SZARRAY: case MONO_TYPE_ARRAY: break; default: return 0; } continue; case STACK_R8: if (m_type_is_byref (sig->params [i])) return 0; if
(sig->params [i]->type != MONO_TYPE_R4 && sig->params [i]->type != MONO_TYPE_R8) return 0; continue; case STACK_PTR: case STACK_I4: case STACK_I8: case STACK_VTYPE: break; } /*if (!param_table [args [i].type] [sig->params [i]->type]) return 0;*/ } return 1; }
#endif
/* * The got_var contains the address of the Global Offset Table when AOT * compiling. */ MonoInst * mono_get_got_var (MonoCompile *cfg) { if (!cfg->compile_aot || !cfg->backend->need_got_var || cfg->llvm_only) return NULL; if (!cfg->got_var) { cfg->got_var = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL); } return cfg->got_var; } static void mono_create_rgctx_var (MonoCompile *cfg) { if (!cfg->rgctx_var) { cfg->rgctx_var = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL); /* force the var to be stack allocated */ if (!cfg->llvm_only) cfg->rgctx_var->flags |= MONO_INST_VOLATILE; } } static MonoInst * mono_get_mrgctx_var (MonoCompile *cfg) { g_assert (cfg->gshared); mono_create_rgctx_var (cfg); return cfg->rgctx_var; } static MonoInst * mono_get_vtable_var (MonoCompile *cfg) { g_assert (cfg->gshared); /* The mrgctx and the vtable are stored in the same var */ mono_create_rgctx_var (cfg); return cfg->rgctx_var; } static MonoType* type_from_stack_type (MonoInst *ins) { switch (ins->type) { case STACK_I4: return mono_get_int32_type (); case STACK_I8: return m_class_get_byval_arg (mono_defaults.int64_class); case STACK_PTR: return mono_get_int_type (); case STACK_R4: return m_class_get_byval_arg (mono_defaults.single_class); case STACK_R8: return m_class_get_byval_arg (mono_defaults.double_class); case STACK_MP: return m_class_get_this_arg (ins->klass); case STACK_OBJ: return mono_get_object_type (); case STACK_VTYPE: return m_class_get_byval_arg (ins->klass); default: g_error ("stack type %d to monotype not handled\n", ins->type); } return NULL; } MonoStackType mini_type_to_stack_type (MonoCompile *cfg, MonoType *t) { t = mini_type_get_underlying_type (t); switch (t->type) { case MONO_TYPE_I1: case MONO_TYPE_U1: case MONO_TYPE_I2: case MONO_TYPE_U2: case MONO_TYPE_I4: case MONO_TYPE_U4: return STACK_I4; case MONO_TYPE_I: case MONO_TYPE_U: case MONO_TYPE_PTR: case MONO_TYPE_FNPTR: return STACK_PTR; case MONO_TYPE_CLASS: case MONO_TYPE_STRING: case MONO_TYPE_OBJECT: case MONO_TYPE_SZARRAY: case MONO_TYPE_ARRAY: return STACK_OBJ; case MONO_TYPE_I8: case MONO_TYPE_U8: return STACK_I8; case MONO_TYPE_R4: return (MonoStackType)cfg->r4_stack_type; case MONO_TYPE_R8: return STACK_R8; case MONO_TYPE_VALUETYPE: case MONO_TYPE_TYPEDBYREF: return STACK_VTYPE; case MONO_TYPE_GENERICINST: if (mono_type_generic_inst_is_valuetype (t)) return STACK_VTYPE; else return STACK_OBJ; break; default: g_assert_not_reached (); } return (MonoStackType)-1; } static MonoClass* array_access_to_klass (int opcode) { switch (opcode) { case MONO_CEE_LDELEM_U1: return mono_defaults.byte_class; case MONO_CEE_LDELEM_U2: return mono_defaults.uint16_class; case MONO_CEE_LDELEM_I: case MONO_CEE_STELEM_I: return mono_defaults.int_class; case MONO_CEE_LDELEM_I1: case MONO_CEE_STELEM_I1: return mono_defaults.sbyte_class; case MONO_CEE_LDELEM_I2: case MONO_CEE_STELEM_I2: return mono_defaults.int16_class; case MONO_CEE_LDELEM_I4: case MONO_CEE_STELEM_I4: return mono_defaults.int32_class; case MONO_CEE_LDELEM_U4: return mono_defaults.uint32_class; case MONO_CEE_LDELEM_I8: case MONO_CEE_STELEM_I8: return mono_defaults.int64_class; case MONO_CEE_LDELEM_R4: case MONO_CEE_STELEM_R4: return mono_defaults.single_class; case MONO_CEE_LDELEM_R8: case
MONO_CEE_STELEM_R8: return mono_defaults.double_class; case MONO_CEE_LDELEM_REF: case MONO_CEE_STELEM_REF: return mono_defaults.object_class; default: g_assert_not_reached (); } return NULL; } /* * We try to share variables when possible */ static MonoInst * mono_compile_get_interface_var (MonoCompile *cfg, int slot, MonoInst *ins) { MonoInst *res; int pos, vnum; MonoType *type; type = type_from_stack_type (ins); /* inlining can result in deeper stacks */ if (cfg->inline_depth || slot >= cfg->header->max_stack) return mono_compile_create_var (cfg, type, OP_LOCAL); pos = ins->type - 1 + slot * STACK_MAX; switch (ins->type) { case STACK_I4: case STACK_I8: case STACK_R8: case STACK_PTR: case STACK_MP: case STACK_OBJ: if ((vnum = cfg->intvars [pos])) return cfg->varinfo [vnum]; res = mono_compile_create_var (cfg, type, OP_LOCAL); cfg->intvars [pos] = res->inst_c0; break; default: res = mono_compile_create_var (cfg, type, OP_LOCAL); } return res; } static void mono_save_token_info (MonoCompile *cfg, MonoImage *image, guint32 token, gpointer key) { /* * Don't use this if a generic_context is set, since that means AOT can't * look up the method using just the image+token. * table == 0 means this is a reference made from a wrapper. */ if (cfg->compile_aot && !cfg->generic_context && (mono_metadata_token_table (token) > 0)) { MonoJumpInfoToken *jump_info_token = (MonoJumpInfoToken *)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoJumpInfoToken)); jump_info_token->image = image; jump_info_token->token = token; g_hash_table_insert (cfg->token_info_hash, key, jump_info_token); } } /* * This function is called to handle items that are left on the evaluation stack * at basic block boundaries. What happens is that we save the values to local variables * and we reload them later when first entering the target basic block (with the * handle_loaded_temps () function). * A single join point will use the same variables (stored in the array bb->out_stack or * bb->in_stack, if the basic block is before or after the join point). * * This function needs to be called _before_ emitting the last instruction of * the bb (i.e. before emitting a branch). * If the stack merge fails at a join point, cfg->unverifiable is set. */ static void handle_stack_args (MonoCompile *cfg, MonoInst **sp, int count) { int i, bindex; MonoBasicBlock *bb = cfg->cbb; MonoBasicBlock *outb; MonoInst *inst, **locals; gboolean found; if (!count) return; if (cfg->verbose_level > 3) printf ("%d item(s) on exit from B%d\n", count, bb->block_num); if (!bb->out_scount) { bb->out_scount = count; //printf ("bblock %d has out:", bb->block_num); found = FALSE; for (i = 0; i < bb->out_count; ++i) { outb = bb->out_bb [i]; /* exception handlers are linked, but they should not be considered for stack args */ if (outb->flags & BB_EXCEPTION_HANDLER) continue; //printf (" %d", outb->block_num); if (outb->in_stack) { found = TRUE; bb->out_stack = outb->in_stack; break; } } //printf ("\n"); if (!found) { bb->out_stack = (MonoInst **)mono_mempool_alloc (cfg->mempool, sizeof (MonoInst*) * count); for (i = 0; i < count; ++i) { /* * try to reuse temps already allocated for this purpose, if they occupy the same * stack slot and if they are of the same type. * This won't cause conflicts since if 'local' is used to * store one of the values in the in_stack of a bblock, then * the same variable will be used for the same outgoing stack * slot as well.
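* For example, if two predecessor bblocks each leave one int32 on the stack and
* branch to the same join bblock, both end up storing into the same interface
* variable, and the join bblock reloads it on entry, so no extra moves are needed.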
* This doesn't work when inlining methods, since the bblocks * in the inlined methods do not inherit their in_stack from * the bblock they are inlined to. See bug #58863 for an * example. */ bb->out_stack [i] = mono_compile_get_interface_var (cfg, i, sp [i]); } } } for (i = 0; i < bb->out_count; ++i) { outb = bb->out_bb [i]; /* exception handlers are linked, but they should not be considered for stack args */ if (outb->flags & BB_EXCEPTION_HANDLER) continue; if (outb->in_scount) { if (outb->in_scount != bb->out_scount) { cfg->unverifiable = TRUE; return; } continue; /* check they are the same locals */ } outb->in_scount = count; outb->in_stack = bb->out_stack; } locals = bb->out_stack; cfg->cbb = bb; for (i = 0; i < count; ++i) { sp [i] = convert_value (cfg, locals [i]->inst_vtype, sp [i]); EMIT_NEW_TEMPSTORE (cfg, inst, locals [i]->inst_c0, sp [i]); inst->cil_code = sp [i]->cil_code; sp [i] = locals [i]; if (cfg->verbose_level > 3) printf ("storing %d to temp %d\n", i, (int)locals [i]->inst_c0); } /* * It is possible that the out bblocks already have in_stack assigned, and * the in_stacks differ. In this case, we will store to all the different * in_stacks. */ found = TRUE; bindex = 0; while (found) { /* Find a bblock which has a different in_stack */ found = FALSE; while (bindex < bb->out_count) { outb = bb->out_bb [bindex]; /* exception handlers are linked, but they should not be considered for stack args */ if (outb->flags & BB_EXCEPTION_HANDLER) { bindex++; continue; } if (outb->in_stack != locals) { for (i = 0; i < count; ++i) { sp [i] = convert_value (cfg, outb->in_stack [i]->inst_vtype, sp [i]); EMIT_NEW_TEMPSTORE (cfg, inst, outb->in_stack [i]->inst_c0, sp [i]); inst->cil_code = sp [i]->cil_code; sp [i] = locals [i]; if (cfg->verbose_level > 3) printf ("storing %d to temp %d\n", i, (int)outb->in_stack [i]->inst_c0); } locals = outb->in_stack; found = TRUE; break; } bindex ++; } } } MonoInst* mini_emit_runtime_constant (MonoCompile *cfg, MonoJumpInfoType patch_type, gpointer data) { MonoInst *ins; if (cfg->compile_aot) { MONO_DISABLE_WARNING (4306) // 'type cast': conversion from 'MonoJumpInfoType' to 'MonoInst *' of greater size EMIT_NEW_AOTCONST (cfg, ins, patch_type, data); MONO_RESTORE_WARNING } else { MonoJumpInfo ji; gpointer target; ERROR_DECL (error); ji.type = patch_type; ji.data.target = data; target = mono_resolve_patch_target_ext (cfg->mem_manager, NULL, NULL, &ji, FALSE, error); mono_error_assert_ok (error); EMIT_NEW_PCONST (cfg, ins, target); } return ins; } static MonoInst* mono_create_fast_tls_getter (MonoCompile *cfg, MonoTlsKey key) { int tls_offset = mono_tls_get_tls_offset (key); if (cfg->compile_aot) return NULL; if (tls_offset != -1 && mono_arch_have_fast_tls ()) { MonoInst *ins; MONO_INST_NEW (cfg, ins, OP_TLS_GET); ins->dreg = mono_alloc_preg (cfg); ins->inst_offset = tls_offset; return ins; } return NULL; } static MonoInst* mono_create_tls_get (MonoCompile *cfg, MonoTlsKey key) { MonoInst *fast_tls = NULL; if (!mini_debug_options.use_fallback_tls) fast_tls = mono_create_fast_tls_getter (cfg, key); if (fast_tls) { MONO_ADD_INS (cfg->cbb, fast_tls); return fast_tls; } const MonoJitICallId jit_icall_id = mono_get_tls_key_to_jit_icall_id (key); if (cfg->compile_aot && !cfg->llvm_only) { MonoInst *addr; /* * tls getters are critical pieces of code and we don't want to resolve them * through the standard plt/tramp mechanism since we might expose ourselves * to crashes and infinite recursions. 
* Therefore the NOCALL part of MONO_PATCH_INFO_JIT_ICALL_ADDR_NOCALL, FALSE in is_plt_patch. */ EMIT_NEW_AOTCONST (cfg, addr, MONO_PATCH_INFO_JIT_ICALL_ADDR_NOCALL, GUINT_TO_POINTER (jit_icall_id)); return mini_emit_calli (cfg, mono_icall_sig_ptr, NULL, addr, NULL, NULL); } else { return mono_emit_jit_icall_id (cfg, jit_icall_id, NULL); } } /* * emit_push_lmf: * * Emit IR to push the current LMF onto the LMF stack. */ static void emit_push_lmf (MonoCompile *cfg) { /* * Emit IR to push the LMF: * lmf_addr = <lmf_addr from tls> * lmf->lmf_addr = lmf_addr * lmf->prev_lmf = *lmf_addr * *lmf_addr = lmf */ MonoInst *ins, *lmf_ins; if (!cfg->lmf_ir) return; int lmf_reg, prev_lmf_reg; /* * Store lmf_addr in a variable, so it can be allocated to a global register. */ if (!cfg->lmf_addr_var) cfg->lmf_addr_var = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL); if (!cfg->lmf_var) { MonoInst *lmf_var = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL); lmf_var->flags |= MONO_INST_VOLATILE; lmf_var->flags |= MONO_INST_LMF; cfg->lmf_var = lmf_var; } lmf_ins = mono_create_tls_get (cfg, TLS_KEY_LMF_ADDR); g_assert (lmf_ins); lmf_ins->dreg = cfg->lmf_addr_var->dreg; EMIT_NEW_VARLOADA (cfg, ins, cfg->lmf_var, NULL); lmf_reg = ins->dreg; prev_lmf_reg = alloc_preg (cfg); /* Save previous_lmf */ EMIT_NEW_LOAD_MEMBASE (cfg, ins, OP_LOAD_MEMBASE, prev_lmf_reg, cfg->lmf_addr_var->dreg, 0); if (cfg->deopt) /* Mark this as an LMFExt */ EMIT_NEW_BIALU_IMM (cfg, ins, OP_POR_IMM, prev_lmf_reg, prev_lmf_reg, 2); EMIT_NEW_STORE_MEMBASE (cfg, ins, OP_STORE_MEMBASE_REG, lmf_reg, MONO_STRUCT_OFFSET (MonoLMF, previous_lmf), prev_lmf_reg); /* Set new lmf */ EMIT_NEW_STORE_MEMBASE (cfg, ins, OP_STORE_MEMBASE_REG, cfg->lmf_addr_var->dreg, 0, lmf_reg); } /* * emit_pop_lmf: * * Emit IR to pop the current LMF from the LMF stack. */ static void emit_pop_lmf (MonoCompile *cfg) { int lmf_reg, lmf_addr_reg; MonoInst *ins; if (!cfg->lmf_ir) return; EMIT_NEW_VARLOADA (cfg, ins, cfg->lmf_var, NULL); lmf_reg = ins->dreg; int prev_lmf_reg; /* * Emit IR to pop the LMF: * *(lmf->lmf_addr) = lmf->prev_lmf */ /* This could be called before emit_push_lmf () */ if (!cfg->lmf_addr_var) cfg->lmf_addr_var = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL); lmf_addr_reg = cfg->lmf_addr_var->dreg; prev_lmf_reg = alloc_preg (cfg); EMIT_NEW_LOAD_MEMBASE (cfg, ins, OP_LOAD_MEMBASE, prev_lmf_reg, lmf_reg, MONO_STRUCT_OFFSET (MonoLMF, previous_lmf)); if (cfg->deopt) /* Clear out the bit set by push_lmf () to mark this as LMFExt */ EMIT_NEW_BIALU_IMM (cfg, ins, OP_PXOR_IMM, prev_lmf_reg, prev_lmf_reg, 2); EMIT_NEW_STORE_MEMBASE (cfg, ins, OP_STORE_MEMBASE_REG, lmf_addr_reg, 0, prev_lmf_reg); } /* * target_type_is_incompatible: * @cfg: MonoCompile context * * Check that the item @arg on the evaluation stack can be stored * in the target type (can be a local, or field, etc). * The cfg arg can be used to check if we need verification or just * validity checks. * * Returns: non-0 value if arg can't be stored on a target. */ static int target_type_is_incompatible (MonoCompile *cfg, MonoType *target, MonoInst *arg) { MonoType *simple_type; MonoClass *klass; if (m_type_is_byref (target)) { /* FIXME: check that the pointed to types match */ if (arg->type == STACK_MP) { /* This is needed to handle gshared types + ldaddr. We lower the types so we can handle enums and other typedef-like types. 
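* For example, a byref to an enum whose underlying type is int32 lowers to a
* byref to int32, so it is accepted wherever an int32 byref is expected.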
*/ MonoClass *target_class_lowered = mono_class_from_mono_type_internal (mini_get_underlying_type (m_class_get_byval_arg (mono_class_from_mono_type_internal (target)))); MonoClass *source_class_lowered = mono_class_from_mono_type_internal (mini_get_underlying_type (m_class_get_byval_arg (arg->klass))); /* if the target is native int& or X* or same type */ if (target->type == MONO_TYPE_I || target->type == MONO_TYPE_PTR || target_class_lowered == source_class_lowered) return 0; /* Both are primitive type byrefs and the source points to a larger type than the destination */ if (MONO_TYPE_IS_PRIMITIVE_SCALAR (m_class_get_byval_arg (target_class_lowered)) && MONO_TYPE_IS_PRIMITIVE_SCALAR (m_class_get_byval_arg (source_class_lowered)) && mono_class_instance_size (target_class_lowered) <= mono_class_instance_size (source_class_lowered)) return 0; return 1; } if (arg->type == STACK_PTR) return 0; return 1; } simple_type = mini_get_underlying_type (target); switch (simple_type->type) { case MONO_TYPE_VOID: return 1; case MONO_TYPE_I1: case MONO_TYPE_U1: case MONO_TYPE_I2: case MONO_TYPE_U2: case MONO_TYPE_I4: case MONO_TYPE_U4: if (arg->type != STACK_I4 && arg->type != STACK_PTR) return 1; return 0; case MONO_TYPE_PTR: /* STACK_MP is needed when setting pinned locals */ if (arg->type != STACK_I4 && arg->type != STACK_PTR && arg->type != STACK_MP)
#if SIZEOF_VOID_P == 8
if (arg->type != STACK_I8)
#endif
return 1; return 0; case MONO_TYPE_I: case MONO_TYPE_U: case MONO_TYPE_FNPTR: /* * Some opcodes like ldloca return 'transient pointers' which can be stored * in native int. (#688008). */ if (arg->type != STACK_I4 && arg->type != STACK_PTR && arg->type != STACK_MP) return 1; return 0; case MONO_TYPE_CLASS: case MONO_TYPE_STRING: case MONO_TYPE_OBJECT: case MONO_TYPE_SZARRAY: case MONO_TYPE_ARRAY: if (arg->type != STACK_OBJ) return 1; /* FIXME: check type compatibility */ return 0; case MONO_TYPE_I8: case MONO_TYPE_U8: if (arg->type != STACK_I8)
#if SIZEOF_VOID_P == 8
if (arg->type != STACK_PTR)
#endif
return 1; return 0; case MONO_TYPE_R4: if (arg->type != cfg->r4_stack_type) return 1; return 0; case MONO_TYPE_R8: if (arg->type != STACK_R8) return 1; return 0; case MONO_TYPE_VALUETYPE: if (arg->type != STACK_VTYPE) return 1; klass = mono_class_from_mono_type_internal (simple_type); if (klass != arg->klass) return 1; return 0; case MONO_TYPE_TYPEDBYREF: if (arg->type != STACK_VTYPE) return 1; klass = mono_class_from_mono_type_internal (simple_type); if (klass != arg->klass) return 1; return 0; case MONO_TYPE_GENERICINST: if (mono_type_generic_inst_is_valuetype (simple_type)) { MonoClass *target_class; if (arg->type != STACK_VTYPE) return 1; klass = mono_class_from_mono_type_internal (simple_type); target_class = mono_class_from_mono_type_internal (target); /* The second case is needed when doing partial sharing */ if (klass != arg->klass && target_class != arg->klass && target_class != mono_class_from_mono_type_internal (mini_get_underlying_type (m_class_get_byval_arg (arg->klass)))) return 1; return 0; } else { if (arg->type != STACK_OBJ) return 1; /* FIXME: check type compatibility */ return 0; } case MONO_TYPE_VAR: case MONO_TYPE_MVAR: g_assert (cfg->gshared); if (mini_type_var_is_vt (simple_type)) { if (arg->type != STACK_VTYPE) return 1; } else { if (arg->type != STACK_OBJ) return 1; } return 0; default: g_error ("unknown type 0x%02x in target_type_is_incompatible", simple_type->type); } return 1; } /* * convert_value: * * Emit some implicit conversions which are not part of the .net spec,
but are allowed by MS.NET. */ static MonoInst* convert_value (MonoCompile *cfg, MonoType *type, MonoInst *ins) { if (!cfg->r4fp) return ins; type = mini_get_underlying_type (type); switch (type->type) { case MONO_TYPE_R4: if (ins->type == STACK_R8) { int dreg = alloc_freg (cfg); MonoInst *conv; EMIT_NEW_UNALU (cfg, conv, OP_FCONV_TO_R4, dreg, ins->dreg); conv->type = STACK_R4; return conv; } break; case MONO_TYPE_R8: if (ins->type == STACK_R4) { int dreg = alloc_freg (cfg); MonoInst *conv; EMIT_NEW_UNALU (cfg, conv, OP_RCONV_TO_R8, dreg, ins->dreg); conv->type = STACK_R8; return conv; } break; default: break; } return ins; } /* * Prepare arguments for passing to a function call. * Return a non-zero value if the arguments can't be passed to the given * signature. * The type checks are not yet complete and some conversions may need * casts on 32 or 64 bit architectures. * * FIXME: implement this using target_type_is_incompatible () */ static gboolean check_call_signature (MonoCompile *cfg, MonoMethodSignature *sig, MonoInst **args) { MonoType *simple_type; int i; if (sig->hasthis) { if (args [0]->type != STACK_OBJ && args [0]->type != STACK_MP && args [0]->type != STACK_PTR) return TRUE; args++; } for (i = 0; i < sig->param_count; ++i) { if (m_type_is_byref (sig->params [i])) { if (args [i]->type != STACK_MP && args [i]->type != STACK_PTR) return TRUE; continue; } simple_type = mini_get_underlying_type (sig->params [i]); handle_enum: switch (simple_type->type) { case MONO_TYPE_VOID: return TRUE; case MONO_TYPE_I1: case MONO_TYPE_U1: case MONO_TYPE_I2: case MONO_TYPE_U2: case MONO_TYPE_I4: case MONO_TYPE_U4: if (args [i]->type != STACK_I4 && args [i]->type != STACK_PTR) return TRUE; continue; case MONO_TYPE_I: case MONO_TYPE_U: if (args [i]->type != STACK_I4 && args [i]->type != STACK_PTR && args [i]->type != STACK_MP && args [i]->type != STACK_OBJ) return TRUE; continue; case MONO_TYPE_PTR: case MONO_TYPE_FNPTR: if (args [i]->type != STACK_I4 && !(SIZEOF_VOID_P == 8 && args [i]->type == STACK_I8) && args [i]->type != STACK_PTR && args [i]->type != STACK_MP && args [i]->type != STACK_OBJ) return TRUE; continue; case MONO_TYPE_CLASS: case MONO_TYPE_STRING: case MONO_TYPE_OBJECT: case MONO_TYPE_SZARRAY: case MONO_TYPE_ARRAY: if (args [i]->type != STACK_OBJ) return TRUE; continue; case MONO_TYPE_I8: case MONO_TYPE_U8: if (args [i]->type != STACK_I8 && !(SIZEOF_VOID_P == 8 && (args [i]->type == STACK_I4 || args [i]->type == STACK_PTR))) return TRUE; continue; case MONO_TYPE_R4: if (args [i]->type != cfg->r4_stack_type) return TRUE; continue; case MONO_TYPE_R8: if (args [i]->type != STACK_R8) return TRUE; continue; case MONO_TYPE_VALUETYPE: if (m_class_is_enumtype (simple_type->data.klass)) { simple_type = mono_class_enum_basetype_internal (simple_type->data.klass); goto handle_enum; } if (args [i]->type != STACK_VTYPE) return TRUE; continue; case MONO_TYPE_TYPEDBYREF: if (args [i]->type != STACK_VTYPE) return TRUE; continue; case MONO_TYPE_GENERICINST: simple_type = m_class_get_byval_arg (simple_type->data.generic_class->container_class); goto handle_enum; case MONO_TYPE_VAR: case MONO_TYPE_MVAR: /* gsharedvt */ if (args [i]->type != STACK_VTYPE) return TRUE; continue; default: g_error ("unknown type 0x%02x in check_call_signature", simple_type->type); } } return FALSE; } MonoJumpInfo * mono_patch_info_new (MonoMemPool *mp, int ip, MonoJumpInfoType type, gconstpointer target) { MonoJumpInfo *ji = (MonoJumpInfo *)mono_mempool_alloc (mp, sizeof (MonoJumpInfo)); ji->ip.i = ip; ji->type = type; 
ji->data.target = target; return ji; } int mini_class_check_context_used (MonoCompile *cfg, MonoClass *klass) { if (cfg->gshared) return mono_class_check_context_used (klass); else return 0; } int mini_method_check_context_used (MonoCompile *cfg, MonoMethod *method) { if (cfg->gshared) return mono_method_check_context_used (method); else return 0; } /* * check_method_sharing: * * Check whether the vtable or an mrgctx needs to be passed when calling CMETHOD. */ static void check_method_sharing (MonoCompile *cfg, MonoMethod *cmethod, gboolean *out_pass_vtable, gboolean *out_pass_mrgctx) { gboolean pass_vtable = FALSE; gboolean pass_mrgctx = FALSE; if (((cmethod->flags & METHOD_ATTRIBUTE_STATIC) || m_class_is_valuetype (cmethod->klass)) && (mono_class_is_ginst (cmethod->klass) || mono_class_is_gtd (cmethod->klass))) { gboolean sharable = FALSE; if (mono_method_is_generic_sharable_full (cmethod, TRUE, TRUE, TRUE)) sharable = TRUE; /* * Pass vtable iff target method might * be shared, which means that sharing * is enabled for its class and its * context is sharable (and it's not a * generic method). */ if (sharable && !(mini_method_get_context (cmethod) && mini_method_get_context (cmethod)->method_inst)) pass_vtable = TRUE; } if (mini_method_needs_mrgctx (cmethod)) { if (mini_method_is_default_method (cmethod)) pass_vtable = FALSE; else g_assert (!pass_vtable); if (mono_method_is_generic_sharable_full (cmethod, TRUE, TRUE, TRUE)) { pass_mrgctx = TRUE; } else { if (cfg->gsharedvt && mini_is_gsharedvt_signature (mono_method_signature_internal (cmethod))) pass_mrgctx = TRUE; } } if (out_pass_vtable) *out_pass_vtable = pass_vtable; if (out_pass_mrgctx) *out_pass_mrgctx = pass_mrgctx; } static gboolean direct_icalls_enabled (MonoCompile *cfg, MonoMethod *method) { if (cfg->gen_sdb_seq_points || cfg->disable_direct_icalls) return FALSE; if (method && cfg->compile_aot && mono_aot_direct_icalls_enabled_for_method (cfg, method)) return TRUE; /* LLVM on amd64 can't handle calls to non-32 bit addresses */
#ifdef TARGET_AMD64
if (cfg->compile_llvm && !cfg->llvm_only) return FALSE;
#endif
return FALSE; } MonoInst* mono_emit_jit_icall_by_info (MonoCompile *cfg, int il_offset, MonoJitICallInfo *info, MonoInst **args) { /* * Call the jit icall without a wrapper if possible. * The wrapper is needed to be able to do stack walks for asynchronously suspended * threads when debugging. */ if (direct_icalls_enabled (cfg, NULL)) { int costs; if (!info->wrapper_method) { info->wrapper_method = mono_marshal_get_icall_wrapper (info, TRUE); mono_memory_barrier (); } /* * Inline the wrapper method, which is basically a call to the C icall, and * an exception check. */ costs = inline_method (cfg, info->wrapper_method, NULL, args, NULL, il_offset, TRUE, NULL); g_assert (costs > 0); g_assert (!MONO_TYPE_IS_VOID (info->sig->ret)); return args [0]; } return mono_emit_jit_icall_id (cfg, mono_jit_icall_info_id (info), args); } static MonoInst* mono_emit_widen_call_res (MonoCompile *cfg, MonoInst *ins, MonoMethodSignature *fsig) { if (!MONO_TYPE_IS_VOID (fsig->ret)) { if ((fsig->pinvoke || LLVM_ENABLED) && !m_type_is_byref (fsig->ret)) { int widen_op = -1; /* * Native code might return non register sized integers * without initializing the upper bits.
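* For example, a P/Invoke returning a C 'char' may leave bits 8-31 of the result
* register undefined, so an explicit widening conversion is emitted below.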
*/ switch (mono_type_to_load_membase (cfg, fsig->ret)) { case OP_LOADI1_MEMBASE: widen_op = OP_ICONV_TO_I1; break; case OP_LOADU1_MEMBASE: widen_op = OP_ICONV_TO_U1; break; case OP_LOADI2_MEMBASE: widen_op = OP_ICONV_TO_I2; break; case OP_LOADU2_MEMBASE: widen_op = OP_ICONV_TO_U2; break; default: break; } if (widen_op != -1) { int dreg = alloc_preg (cfg); MonoInst *widen; EMIT_NEW_UNALU (cfg, widen, widen_op, dreg, ins->dreg); widen->type = ins->type; ins = widen; } } } return ins; } static MonoInst* emit_get_rgctx_method (MonoCompile *cfg, int context_used, MonoMethod *cmethod, MonoRgctxInfoType rgctx_type); static void emit_method_access_failure (MonoCompile *cfg, MonoMethod *caller, MonoMethod *callee) { MonoInst *args [2]; args [0] = emit_get_rgctx_method (cfg, mono_method_check_context_used (caller), caller, MONO_RGCTX_INFO_METHOD); args [1] = emit_get_rgctx_method (cfg, mono_method_check_context_used (callee), callee, MONO_RGCTX_INFO_METHOD); mono_emit_jit_icall (cfg, mono_throw_method_access, args); } static void emit_bad_image_failure (MonoCompile *cfg, MonoMethod *caller, MonoMethod *callee) { mono_emit_jit_icall (cfg, mono_throw_bad_image, NULL); } static void emit_not_supported_failure (MonoCompile *cfg) { mono_emit_jit_icall (cfg, mono_throw_not_supported, NULL); } static void emit_invalid_program_with_msg (MonoCompile *cfg, MonoError *error_msg, MonoMethod *caller, MonoMethod *callee) { g_assert (!is_ok (error_msg)); char *str = mono_mem_manager_strdup (cfg->mem_manager, mono_error_get_message (error_msg)); MonoInst *iargs[1]; if (cfg->compile_aot) EMIT_NEW_LDSTRLITCONST (cfg, iargs [0], str); else EMIT_NEW_PCONST (cfg, iargs [0], str); mono_emit_jit_icall (cfg, mono_throw_invalid_program, iargs); } // FIXME Consolidate the multiple functions named get_method_nofail. static MonoMethod* get_method_nofail (MonoClass *klass, const char *method_name, int num_params, int flags) { MonoMethod *method; ERROR_DECL (error); method = mono_class_get_method_from_name_checked (klass, method_name, num_params, flags, error); mono_error_assert_ok (error); g_assertf (method, "Could not lookup method %s in %s", method_name, m_class_get_name (klass)); return method; } MonoMethod* mini_get_memcpy_method (void) { static MonoMethod *memcpy_method = NULL; if (!memcpy_method) { memcpy_method = get_method_nofail (mono_defaults.string_class, "memcpy", 3, 0); if (!memcpy_method) g_error ("Old corlib found. Install a new one"); } return memcpy_method; } MonoInst* mini_emit_storing_write_barrier (MonoCompile *cfg, MonoInst *ptr, MonoInst *value) { MonoInst *store; /* * Add a release memory barrier so the object contents are flushed * to memory before storing the reference into another object. 
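* Without the barrier, a reader racing with this store could observe the new
* reference while the referenced object's fields are still invisible to it.
* Roughly (illustrative pseudo-C, names not from this file):
*
*     obj->field = 42;        // ordinary stores initializing the object
*     membar_release ();      // emitted below unless weak_memory_model is set
*     container->ref = obj;   // the store this helper performs
*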
*/ if (!mini_debug_options.weak_memory_model) mini_emit_memory_barrier (cfg, MONO_MEMORY_BARRIER_REL); EMIT_NEW_STORE_MEMBASE (cfg, store, OP_STORE_MEMBASE_REG, ptr->dreg, 0, value->dreg); mini_emit_write_barrier (cfg, ptr, value); return store; } void mini_emit_write_barrier (MonoCompile *cfg, MonoInst *ptr, MonoInst *value) { int card_table_shift_bits; target_mgreg_t card_table_mask; guint8 *card_table; MonoInst *dummy_use; int nursery_shift_bits; size_t nursery_size; if (!cfg->gen_write_barriers) return; //method->wrapper_type != MONO_WRAPPER_WRITE_BARRIER && !MONO_INS_IS_PCONST_NULL (sp [1]) card_table = mono_gc_get_target_card_table (&card_table_shift_bits, &card_table_mask); mono_gc_get_nursery (&nursery_shift_bits, &nursery_size); if (cfg->backend->have_card_table_wb && !cfg->compile_aot && card_table && nursery_shift_bits > 0 && !COMPILE_LLVM (cfg)) { MonoInst *wbarrier; MONO_INST_NEW (cfg, wbarrier, OP_CARD_TABLE_WBARRIER); wbarrier->sreg1 = ptr->dreg; wbarrier->sreg2 = value->dreg; MONO_ADD_INS (cfg->cbb, wbarrier); } else if (card_table) { int offset_reg = alloc_preg (cfg); int card_reg; MonoInst *ins; /* * We emit a fast light weight write barrier. This always marks cards as in the concurrent * collector case, so, for the serial collector, it might slightly slow down nursery * collections. We also expect that the host system and the target system have the same card * table configuration, which is the case if they have the same pointer size. */ MONO_EMIT_NEW_BIALU_IMM (cfg, OP_SHR_UN_IMM, offset_reg, ptr->dreg, card_table_shift_bits); if (card_table_mask) MONO_EMIT_NEW_BIALU_IMM (cfg, OP_PAND_IMM, offset_reg, offset_reg, card_table_mask); /*We can't use PADD_IMM since the cardtable might end up in high addresses and amd64 doesn't support * IMM's larger than 32bits. */ ins = mini_emit_runtime_constant (cfg, MONO_PATCH_INFO_GC_CARD_TABLE_ADDR, NULL); card_reg = ins->dreg; MONO_EMIT_NEW_BIALU (cfg, OP_PADD, offset_reg, offset_reg, card_reg); MONO_EMIT_NEW_STORE_MEMBASE_IMM (cfg, OP_STOREI1_MEMBASE_IMM, offset_reg, 0, 1); } else { MonoMethod *write_barrier = mono_gc_get_write_barrier (); mono_emit_method_call (cfg, write_barrier, &ptr, NULL); } EMIT_NEW_DUMMY_USE (cfg, dummy_use, value); } MonoMethod* mini_get_memset_method (void) { static MonoMethod *memset_method = NULL; if (!memset_method) { memset_method = get_method_nofail (mono_defaults.string_class, "memset", 3, 0); if (!memset_method) g_error ("Old corlib found. 
Install a new one"); } return memset_method; } void mini_emit_initobj (MonoCompile *cfg, MonoInst *dest, const guchar *ip, MonoClass *klass) { MonoInst *iargs [3]; int n; guint32 align; MonoMethod *memset_method; MonoInst *size_ins = NULL; MonoInst *bzero_ins = NULL; static MonoMethod *bzero_method; /* FIXME: Optimize this for the case when dest is an LDADDR */ mono_class_init_internal (klass); if (mini_is_gsharedvt_klass (klass)) { size_ins = mini_emit_get_gsharedvt_info_klass (cfg, klass, MONO_RGCTX_INFO_VALUE_SIZE); bzero_ins = mini_emit_get_gsharedvt_info_klass (cfg, klass, MONO_RGCTX_INFO_BZERO); if (!bzero_method) bzero_method = get_method_nofail (mono_defaults.string_class, "bzero_aligned_1", 2, 0); g_assert (bzero_method); iargs [0] = dest; iargs [1] = size_ins; mini_emit_calli (cfg, mono_method_signature_internal (bzero_method), iargs, bzero_ins, NULL, NULL); return; } klass = mono_class_from_mono_type_internal (mini_get_underlying_type (m_class_get_byval_arg (klass))); n = mono_class_value_size (klass, &align); if (n <= TARGET_SIZEOF_VOID_P * 8) { mini_emit_memset (cfg, dest->dreg, 0, n, 0, align); } else { memset_method = mini_get_memset_method (); iargs [0] = dest; EMIT_NEW_ICONST (cfg, iargs [1], 0); EMIT_NEW_ICONST (cfg, iargs [2], n); mono_emit_method_call (cfg, memset_method, iargs, NULL); } } static gboolean context_used_is_mrgctx (MonoCompile *cfg, int context_used) { /* gshared dim methods use an mrgctx */ if (mini_method_is_default_method (cfg->method)) return context_used != 0; return context_used & MONO_GENERIC_CONTEXT_USED_METHOD; } /* * emit_get_rgctx: * * Emit IR to return either the vtable or the mrgctx. */ static MonoInst* emit_get_rgctx (MonoCompile *cfg, int context_used) { MonoMethod *method = cfg->method; g_assert (cfg->gshared); /* Data whose context contains method type vars is stored in the mrgctx */ if (context_used_is_mrgctx (cfg, context_used)) { MonoInst *mrgctx_loc, *mrgctx_var; g_assert (cfg->rgctx_access == MONO_RGCTX_ACCESS_MRGCTX); if (!mini_method_is_default_method (method)) g_assert (method->is_inflated && mono_method_get_context (method)->method_inst); if (cfg->llvm_only) { mrgctx_var = mono_get_mrgctx_var (cfg); } else { /* Volatile */ mrgctx_loc = mono_get_mrgctx_var (cfg); g_assert (mrgctx_loc->flags & MONO_INST_VOLATILE); EMIT_NEW_TEMPLOAD (cfg, mrgctx_var, mrgctx_loc->inst_c0); } return mrgctx_var; } /* * The rest of the entries are stored in vtable->runtime_generic_context so * have to return a vtable. 
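* Depending on cfg->rgctx_access, the vtable is either loaded from
* mrgctx->class_vtable, used directly, or recovered from this->vtable,
* matching the three branches below.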
*/ if (cfg->rgctx_access == MONO_RGCTX_ACCESS_MRGCTX) { MonoInst *mrgctx_loc, *mrgctx_var, *vtable_var; int vtable_reg; /* We are passed an mrgctx, return mrgctx->class_vtable */ if (cfg->llvm_only) { mrgctx_var = mono_get_mrgctx_var (cfg); } else { mrgctx_loc = mono_get_mrgctx_var (cfg); g_assert (mrgctx_loc->flags & MONO_INST_VOLATILE); EMIT_NEW_TEMPLOAD (cfg, mrgctx_var, mrgctx_loc->inst_c0); } vtable_reg = alloc_preg (cfg); EMIT_NEW_LOAD_MEMBASE (cfg, vtable_var, OP_LOAD_MEMBASE, vtable_reg, mrgctx_var->dreg, MONO_STRUCT_OFFSET (MonoMethodRuntimeGenericContext, class_vtable)); vtable_var->type = STACK_PTR; return vtable_var; } else if (cfg->rgctx_access == MONO_RGCTX_ACCESS_VTABLE) { MonoInst *vtable_loc, *vtable_var; /* We are passed a vtable, return it */ if (cfg->llvm_only) { vtable_var = mono_get_vtable_var (cfg); } else { vtable_loc = mono_get_vtable_var (cfg); g_assert (vtable_loc->flags & MONO_INST_VOLATILE); EMIT_NEW_TEMPLOAD (cfg, vtable_var, vtable_loc->inst_c0); } vtable_var->type = STACK_PTR; return vtable_var; } else { MonoInst *ins, *this_ins; int vtable_reg; /* We are passed a this pointer, return this->vtable */ EMIT_NEW_VARLOAD (cfg, this_ins, cfg->this_arg, mono_get_object_type ()); vtable_reg = alloc_preg (cfg); EMIT_NEW_LOAD_MEMBASE (cfg, ins, OP_LOAD_MEMBASE, vtable_reg, this_ins->dreg, MONO_STRUCT_OFFSET (MonoObject, vtable)); return ins; } } static MonoJumpInfoRgctxEntry * mono_patch_info_rgctx_entry_new (MonoMemPool *mp, MonoMethod *method, gboolean in_mrgctx, MonoJumpInfoType patch_type, gconstpointer patch_data, MonoRgctxInfoType info_type) { MonoJumpInfoRgctxEntry *res = (MonoJumpInfoRgctxEntry *)mono_mempool_alloc0 (mp, sizeof (MonoJumpInfoRgctxEntry)); if (in_mrgctx) res->d.method = method; else res->d.klass = method->klass; res->in_mrgctx = in_mrgctx; res->data = (MonoJumpInfo *)mono_mempool_alloc0 (mp, sizeof (MonoJumpInfo)); res->data->type = patch_type; res->data->data.target = patch_data; res->info_type = info_type; return res; } static MonoInst* emit_get_gsharedvt_info (MonoCompile *cfg, gpointer data, MonoRgctxInfoType rgctx_type); static MonoInst* emit_rgctx_fetch_inline (MonoCompile *cfg, MonoInst *rgctx, MonoJumpInfoRgctxEntry *entry) { MonoInst *call; MonoInst *slot_ins; EMIT_NEW_AOTCONST (cfg, slot_ins, MONO_PATCH_INFO_RGCTX_SLOT_INDEX, entry); // Can't add basic blocks during interp entry mode if (cfg->disable_inline_rgctx_fetch || cfg->interp_entry_only) { MonoInst *args [2] = { rgctx, slot_ins }; if (entry->in_mrgctx) call = mono_emit_jit_icall (cfg, mono_fill_method_rgctx, args); else call = mono_emit_jit_icall (cfg, mono_fill_class_rgctx, args); return call; } MonoBasicBlock *slowpath_bb, *end_bb; MonoInst *ins, *res; int rgctx_reg, res_reg; /* * rgctx = vtable->runtime_generic_context; * if (rgctx) { * val = rgctx [slot + 1]; * if (val) * return val; * } * <slowpath> */ NEW_BBLOCK (cfg, end_bb); NEW_BBLOCK (cfg, slowpath_bb); if (entry->in_mrgctx) { rgctx_reg = rgctx->dreg; } else { rgctx_reg = alloc_preg (cfg); MONO_EMIT_NEW_LOAD_MEMBASE (cfg, rgctx_reg, rgctx->dreg, MONO_STRUCT_OFFSET (MonoVTable, runtime_generic_context)); // FIXME: Avoid this check by allocating the table when the vtable is created etc. 
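// A null rgctx here means the per-vtable table has not been allocated yet; the
// slow path call below allocates it and fills in the requested slot.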
MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, rgctx_reg, 0); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_PBEQ, slowpath_bb); } int table_size = mono_class_rgctx_get_array_size (0, entry->in_mrgctx); if (entry->in_mrgctx) table_size -= MONO_SIZEOF_METHOD_RUNTIME_GENERIC_CONTEXT / TARGET_SIZEOF_VOID_P; MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, slot_ins->dreg, table_size - 1); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_PBGE, slowpath_bb); int shifted_slot_reg = alloc_ireg (cfg); EMIT_NEW_BIALU_IMM (cfg, ins, OP_ISHL_IMM, shifted_slot_reg, slot_ins->dreg, TARGET_SIZEOF_VOID_P == 8 ? 3 : 2); int addr_reg = alloc_preg (cfg); EMIT_NEW_UNALU (cfg, ins, OP_MOVE, addr_reg, rgctx_reg); EMIT_NEW_BIALU (cfg, ins, OP_PADD, addr_reg, addr_reg, shifted_slot_reg); int val_reg = alloc_preg (cfg); MONO_EMIT_NEW_LOAD_MEMBASE (cfg, val_reg, addr_reg, TARGET_SIZEOF_VOID_P + (entry->in_mrgctx ? MONO_SIZEOF_METHOD_RUNTIME_GENERIC_CONTEXT : 0)); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, val_reg, 0); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_PBEQ, slowpath_bb); res_reg = alloc_preg (cfg); EMIT_NEW_UNALU (cfg, ins, OP_MOVE, res_reg, val_reg); res = ins; MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb); MONO_START_BB (cfg, slowpath_bb); slowpath_bb->out_of_line = TRUE; MonoInst *args[2] = { rgctx, slot_ins }; if (entry->in_mrgctx) call = mono_emit_jit_icall (cfg, mono_fill_method_rgctx, args); else call = mono_emit_jit_icall (cfg, mono_fill_class_rgctx, args); EMIT_NEW_UNALU (cfg, ins, OP_MOVE, res_reg, call->dreg); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb); MONO_START_BB (cfg, end_bb); return res; } /* * emit_rgctx_fetch: * * Emit IR to load the value of the rgctx entry ENTRY from the rgctx. */ static MonoInst* emit_rgctx_fetch (MonoCompile *cfg, int context_used, MonoJumpInfoRgctxEntry *entry) { MonoInst *rgctx = emit_get_rgctx (cfg, context_used); if (cfg->llvm_only) return emit_rgctx_fetch_inline (cfg, rgctx, entry); else return mini_emit_abs_call (cfg, MONO_PATCH_INFO_RGCTX_FETCH, entry, mono_icall_sig_ptr_ptr, &rgctx); } /* * mini_emit_get_rgctx_klass: * * Emit IR to load the property RGCTX_TYPE of KLASS. If context_used is 0, emit * normal constants, else emit a load from the rgctx. 
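* For example, with context_used == 0 a known class becomes an inline MonoClass
* or MonoVTable constant, while shared code has to look the value up at run time.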
*/ MonoInst* mini_emit_get_rgctx_klass (MonoCompile *cfg, int context_used, MonoClass *klass, MonoRgctxInfoType rgctx_type) { if (!context_used) { MonoInst *ins; switch (rgctx_type) { case MONO_RGCTX_INFO_KLASS: EMIT_NEW_CLASSCONST (cfg, ins, klass); return ins; case MONO_RGCTX_INFO_VTABLE: { MonoVTable *vtable = mono_class_vtable_checked (klass, cfg->error); CHECK_CFG_ERROR; EMIT_NEW_VTABLECONST (cfg, ins, vtable); return ins; } default: g_assert_not_reached (); } } // It's cheaper to load these from the gsharedvt info struct if (cfg->llvm_only && cfg->gsharedvt) return mini_emit_get_gsharedvt_info_klass (cfg, klass, rgctx_type); MonoJumpInfoRgctxEntry *entry = mono_patch_info_rgctx_entry_new (cfg->mempool, cfg->method, context_used_is_mrgctx (cfg, context_used), MONO_PATCH_INFO_CLASS, klass, rgctx_type); return emit_rgctx_fetch (cfg, context_used, entry); mono_error_exit: return NULL; } static MonoInst* emit_get_rgctx_sig (MonoCompile *cfg, int context_used, MonoMethodSignature *sig, MonoRgctxInfoType rgctx_type) { MonoJumpInfoRgctxEntry *entry = mono_patch_info_rgctx_entry_new (cfg->mempool, cfg->method, context_used_is_mrgctx (cfg, context_used), MONO_PATCH_INFO_SIGNATURE, sig, rgctx_type); return emit_rgctx_fetch (cfg, context_used, entry); } static MonoInst* emit_get_rgctx_gsharedvt_call (MonoCompile *cfg, int context_used, MonoMethodSignature *sig, MonoMethod *cmethod, MonoRgctxInfoType rgctx_type) { MonoJumpInfoGSharedVtCall *call_info; MonoJumpInfoRgctxEntry *entry; call_info = (MonoJumpInfoGSharedVtCall *)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoJumpInfoGSharedVtCall)); call_info->sig = sig; call_info->method = cmethod; entry = mono_patch_info_rgctx_entry_new (cfg->mempool, cfg->method, context_used_is_mrgctx (cfg, context_used), MONO_PATCH_INFO_GSHAREDVT_CALL, call_info, rgctx_type); return emit_rgctx_fetch (cfg, context_used, entry); } /* * emit_get_rgctx_virt_method: * * Return data for method VIRT_METHOD for a receiver of type KLASS. */ static MonoInst* emit_get_rgctx_virt_method (MonoCompile *cfg, int context_used, MonoClass *klass, MonoMethod *virt_method, MonoRgctxInfoType rgctx_type) { MonoJumpInfoVirtMethod *info; MonoJumpInfoRgctxEntry *entry; if (context_used == -1) context_used = mono_class_check_context_used (klass) | mono_method_check_context_used (virt_method); info = (MonoJumpInfoVirtMethod *)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoJumpInfoVirtMethod)); info->klass = klass; info->method = virt_method; entry = mono_patch_info_rgctx_entry_new (cfg->mempool, cfg->method, context_used_is_mrgctx (cfg, context_used), MONO_PATCH_INFO_VIRT_METHOD, info, rgctx_type); return emit_rgctx_fetch (cfg, context_used, entry); } static MonoInst* emit_get_rgctx_gsharedvt_method (MonoCompile *cfg, int context_used, MonoMethod *cmethod, MonoGSharedVtMethodInfo *info) { MonoJumpInfoRgctxEntry *entry; entry = mono_patch_info_rgctx_entry_new (cfg->mempool, cfg->method, context_used_is_mrgctx (cfg, context_used), MONO_PATCH_INFO_GSHAREDVT_METHOD, info, MONO_RGCTX_INFO_METHOD_GSHAREDVT_INFO); return emit_rgctx_fetch (cfg, context_used, entry); } /* * emit_get_rgctx_method: * * Emit IR to load the property RGCTX_TYPE of CMETHOD. If context_used is 0, emit * normal constants, else emit a load from the rgctx.
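* Besides plain method constants, the method case below can also produce ftndesc
* and interpreter-entry constants (MONO_RGCTX_INFO_METHOD_FTNDESC and
* MONO_RGCTX_INFO_LLVMONLY_INTERP_ENTRY).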
*/ static MonoInst* emit_get_rgctx_method (MonoCompile *cfg, int context_used, MonoMethod *cmethod, MonoRgctxInfoType rgctx_type) { if (context_used == -1) context_used = mono_method_check_context_used (cmethod); if (!context_used) { MonoInst *ins; switch (rgctx_type) { case MONO_RGCTX_INFO_METHOD: EMIT_NEW_METHODCONST (cfg, ins, cmethod); return ins; case MONO_RGCTX_INFO_METHOD_RGCTX: EMIT_NEW_METHOD_RGCTX_CONST (cfg, ins, cmethod); return ins; case MONO_RGCTX_INFO_METHOD_FTNDESC: EMIT_NEW_AOTCONST (cfg, ins, MONO_PATCH_INFO_METHOD_FTNDESC, cmethod); return ins; case MONO_RGCTX_INFO_LLVMONLY_INTERP_ENTRY: EMIT_NEW_AOTCONST (cfg, ins, MONO_PATCH_INFO_LLVMONLY_INTERP_ENTRY, cmethod); return ins; default: g_assert_not_reached (); } } else { // It's cheaper to load these from the gsharedvt info struct if (cfg->llvm_only && cfg->gsharedvt) return emit_get_gsharedvt_info (cfg, cmethod, rgctx_type); MonoJumpInfoRgctxEntry *entry = mono_patch_info_rgctx_entry_new (cfg->mempool, cfg->method, context_used_is_mrgctx (cfg, context_used), MONO_PATCH_INFO_METHODCONST, cmethod, rgctx_type); return emit_rgctx_fetch (cfg, context_used, entry); } } static MonoInst* emit_get_rgctx_field (MonoCompile *cfg, int context_used, MonoClassField *field, MonoRgctxInfoType rgctx_type) { // It's cheaper to load these from the gsharedvt info struct if (cfg->llvm_only && cfg->gsharedvt) return emit_get_gsharedvt_info (cfg, field, rgctx_type); MonoJumpInfoRgctxEntry *entry = mono_patch_info_rgctx_entry_new (cfg->mempool, cfg->method, context_used_is_mrgctx (cfg, context_used), MONO_PATCH_INFO_FIELD, field, rgctx_type); return emit_rgctx_fetch (cfg, context_used, entry); } MonoInst* mini_emit_get_rgctx_method (MonoCompile *cfg, int context_used, MonoMethod *cmethod, MonoRgctxInfoType rgctx_type) { return emit_get_rgctx_method (cfg, context_used, cmethod, rgctx_type); } static int get_gsharedvt_info_slot (MonoCompile *cfg, gpointer data, MonoRgctxInfoType rgctx_type) { MonoGSharedVtMethodInfo *info = cfg->gsharedvt_info; MonoRuntimeGenericContextInfoTemplate *template_; int i, idx; g_assert (info); for (i = 0; i < info->num_entries; ++i) { MonoRuntimeGenericContextInfoTemplate *otemplate = &info->entries [i]; if (otemplate->info_type == rgctx_type && otemplate->data == data && rgctx_type != MONO_RGCTX_INFO_LOCAL_OFFSET) return i; } if (info->num_entries == info->count_entries) { MonoRuntimeGenericContextInfoTemplate *new_entries; int new_count_entries = info->count_entries ? info->count_entries * 2 : 16; new_entries = (MonoRuntimeGenericContextInfoTemplate *)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoRuntimeGenericContextInfoTemplate) * new_count_entries); memcpy (new_entries, info->entries, sizeof (MonoRuntimeGenericContextInfoTemplate) * info->count_entries); info->entries = new_entries; info->count_entries = new_count_entries; } idx = info->num_entries; template_ = &info->entries [idx]; template_->info_type = rgctx_type; template_->data = data; info->num_entries ++; return idx; } /* * emit_get_gsharedvt_info: * * This is similar to emit_get_rgctx_.., but loads the data from the gsharedvt info var instead of calling an rgctx fetch trampoline.
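* The info var points to a MonoGSharedVtMethodRuntimeInfo whose entries [] array
* is populated on method entry, so each lookup below is a single OP_LOAD_MEMBASE
* at a constant offset.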
*/ static MonoInst* emit_get_gsharedvt_info (MonoCompile *cfg, gpointer data, MonoRgctxInfoType rgctx_type) { MonoInst *ins; int idx, dreg; idx = get_gsharedvt_info_slot (cfg, data, rgctx_type); /* Load info->entries [idx] */ dreg = alloc_preg (cfg); EMIT_NEW_LOAD_MEMBASE (cfg, ins, OP_LOAD_MEMBASE, dreg, cfg->gsharedvt_info_var->dreg, MONO_STRUCT_OFFSET (MonoGSharedVtMethodRuntimeInfo, entries) + (idx * TARGET_SIZEOF_VOID_P)); return ins; } MonoInst* mini_emit_get_gsharedvt_info_klass (MonoCompile *cfg, MonoClass *klass, MonoRgctxInfoType rgctx_type) { return emit_get_gsharedvt_info (cfg, m_class_get_byval_arg (klass), rgctx_type); } /* * On return the caller must check @klass for load errors. */ static void emit_class_init (MonoCompile *cfg, MonoClass *klass) { MonoInst *vtable_arg; int context_used; context_used = mini_class_check_context_used (cfg, klass); if (context_used) { vtable_arg = mini_emit_get_rgctx_klass (cfg, context_used, klass, MONO_RGCTX_INFO_VTABLE); } else { MonoVTable *vtable = mono_class_vtable_checked (klass, cfg->error); if (!is_ok (cfg->error)) { mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR); return; } EMIT_NEW_VTABLECONST (cfg, vtable_arg, vtable); } if (!COMPILE_LLVM (cfg) && cfg->backend->have_op_generic_class_init) { MonoInst *ins; /* * Using an opcode instead of emitting IR here allows the hiding of the call inside the opcode, * so this doesn't have to clobber any regs and it doesn't break basic blocks. */ MONO_INST_NEW (cfg, ins, OP_GENERIC_CLASS_INIT); ins->sreg1 = vtable_arg->dreg; MONO_ADD_INS (cfg->cbb, ins); } else { int inited_reg; MonoBasicBlock *inited_bb; inited_reg = alloc_ireg (cfg); MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADU1_MEMBASE, inited_reg, vtable_arg->dreg, MONO_STRUCT_OFFSET (MonoVTable, initialized)); NEW_BBLOCK (cfg, inited_bb); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, inited_reg, 0); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_IBNE_UN, inited_bb); cfg->cbb->out_of_line = TRUE; mono_emit_jit_icall (cfg, mono_generic_class_init, &vtable_arg); MONO_START_BB (cfg, inited_bb); } } static void emit_seq_point (MonoCompile *cfg, MonoMethod *method, guint8* ip, gboolean intr_loc, gboolean nonempty_stack) { MonoInst *ins; if (cfg->gen_seq_points && cfg->method == method) { NEW_SEQ_POINT (cfg, ins, ip - cfg->header->code, intr_loc); if (nonempty_stack) ins->flags |= MONO_INST_NONEMPTY_STACK; MONO_ADD_INS (cfg->cbb, ins); cfg->last_seq_point = ins; } } void mini_save_cast_details (MonoCompile *cfg, MonoClass *klass, int obj_reg, gboolean null_check) { if (mini_debug_options.better_cast_details) { int vtable_reg = alloc_preg (cfg); int klass_reg = alloc_preg (cfg); MonoBasicBlock *is_null_bb = NULL; MonoInst *tls_get; if (null_check) { NEW_BBLOCK (cfg, is_null_bb); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, obj_reg, 0); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_PBEQ, is_null_bb); } tls_get = mono_create_tls_get (cfg, TLS_KEY_JIT_TLS); if (!tls_get) { fprintf (stderr, "error: --debug=casts not supported on this platform.\n"); exit (1); } MONO_EMIT_NEW_LOAD_MEMBASE (cfg, vtable_reg, obj_reg, MONO_STRUCT_OFFSET (MonoObject, vtable)); MONO_EMIT_NEW_LOAD_MEMBASE (cfg, klass_reg, vtable_reg, MONO_STRUCT_OFFSET (MonoVTable, klass)); MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, tls_get->dreg, MONO_STRUCT_OFFSET (MonoJitTlsData, class_cast_from), klass_reg); MonoInst *class_ins = mini_emit_get_rgctx_klass (cfg, mini_class_check_context_used (cfg, klass), klass, MONO_RGCTX_INFO_KLASS); MONO_EMIT_NEW_STORE_MEMBASE (cfg,
OP_STORE_MEMBASE_REG, tls_get->dreg, MONO_STRUCT_OFFSET (MonoJitTlsData, class_cast_to), class_ins->dreg); if (null_check) MONO_START_BB (cfg, is_null_bb); } } void mini_reset_cast_details (MonoCompile *cfg) { /* Reset the variables holding the cast details */ if (mini_debug_options.better_cast_details) { MonoInst *tls_get = mono_create_tls_get (cfg, TLS_KEY_JIT_TLS); /* It is enough to reset the from field */ MONO_EMIT_NEW_STORE_MEMBASE_IMM (cfg, OP_STORE_MEMBASE_IMM, tls_get->dreg, MONO_STRUCT_OFFSET (MonoJitTlsData, class_cast_from), 0); } } /* * On return the caller must check @array_class for load errors */ static void mini_emit_check_array_type (MonoCompile *cfg, MonoInst *obj, MonoClass *array_class) { int vtable_reg = alloc_preg (cfg); int context_used; context_used = mini_class_check_context_used (cfg, array_class); mini_save_cast_details (cfg, array_class, obj->dreg, FALSE); MONO_EMIT_NEW_LOAD_MEMBASE_FAULT (cfg, vtable_reg, obj->dreg, MONO_STRUCT_OFFSET (MonoObject, vtable)); if (context_used) { MonoInst *vtable_ins; vtable_ins = mini_emit_get_rgctx_klass (cfg, context_used, array_class, MONO_RGCTX_INFO_VTABLE); MONO_EMIT_NEW_BIALU (cfg, OP_COMPARE, -1, vtable_reg, vtable_ins->dreg); } else { if (cfg->compile_aot) { int vt_reg; MonoVTable *vtable; if (!(vtable = mono_class_vtable_checked (array_class, cfg->error))) { mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR); return; } vt_reg = alloc_preg (cfg); MONO_EMIT_NEW_VTABLECONST (cfg, vt_reg, vtable); MONO_EMIT_NEW_BIALU (cfg, OP_COMPARE, -1, vtable_reg, vt_reg); } else { MonoVTable *vtable; if (!(vtable = mono_class_vtable_checked (array_class, cfg->error))) { mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR); return; } MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, vtable_reg, (gssize)vtable); } } MONO_EMIT_NEW_COND_EXC (cfg, NE_UN, "ArrayTypeMismatchException"); mini_reset_cast_details (cfg); } /** * Handles unbox of a Nullable<T>. If context_used is non zero, then shared * generic code is generated. */ static MonoInst* handle_unbox_nullable (MonoCompile* cfg, MonoInst* val, MonoClass* klass, int context_used) { MonoMethod* method; if (m_class_is_enumtype (mono_class_get_nullable_param_internal (klass))) method = get_method_nofail (klass, "UnboxExact", 1, 0); else method = get_method_nofail (klass, "Unbox", 1, 0); g_assert (method); if (context_used) { MonoInst *rgctx, *addr; /* FIXME: What if the class is shared? We might not have to get the address of the method from the RGCTX. 
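* For now the address is always fetched through the rgctx: as an ftndesc in
* llvm_only mode, otherwise via MONO_RGCTX_INFO_GENERIC_METHOD_CODE plus an
* indirect call.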
*/ if (cfg->llvm_only) { addr = emit_get_rgctx_method (cfg, context_used, method, MONO_RGCTX_INFO_METHOD_FTNDESC); cfg->signatures = g_slist_prepend_mempool (cfg->mempool, cfg->signatures, mono_method_signature_internal (method)); return mini_emit_llvmonly_calli (cfg, mono_method_signature_internal (method), &val, addr); } else { addr = emit_get_rgctx_method (cfg, context_used, method, MONO_RGCTX_INFO_GENERIC_METHOD_CODE); rgctx = emit_get_rgctx (cfg, context_used); return mini_emit_calli (cfg, mono_method_signature_internal (method), &val, addr, NULL, rgctx); } } else { gboolean pass_vtable, pass_mrgctx; MonoInst *rgctx_arg = NULL; check_method_sharing (cfg, method, &pass_vtable, &pass_mrgctx); g_assert (!pass_mrgctx); if (pass_vtable) { MonoVTable *vtable = mono_class_vtable_checked (method->klass, cfg->error); mono_error_assert_ok (cfg->error); EMIT_NEW_VTABLECONST (cfg, rgctx_arg, vtable); } return mini_emit_method_call_full (cfg, method, NULL, FALSE, &val, NULL, NULL, rgctx_arg); } } MonoInst* mini_handle_unbox (MonoCompile *cfg, MonoClass *klass, MonoInst *val, int context_used) { MonoInst *add; int obj_reg; int vtable_reg = alloc_dreg (cfg ,STACK_PTR); int klass_reg = alloc_dreg (cfg ,STACK_PTR); int eclass_reg = alloc_dreg (cfg ,STACK_PTR); int rank_reg = alloc_dreg (cfg ,STACK_I4); obj_reg = val->dreg; MONO_EMIT_NEW_LOAD_MEMBASE_FAULT (cfg, vtable_reg, obj_reg, MONO_STRUCT_OFFSET (MonoObject, vtable)); MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADU1_MEMBASE, rank_reg, vtable_reg, MONO_STRUCT_OFFSET (MonoVTable, rank)); /* FIXME: generics */ g_assert (m_class_get_rank (klass) == 0); // Check rank == 0 MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, rank_reg, 0); MONO_EMIT_NEW_COND_EXC (cfg, NE_UN, "InvalidCastException"); MONO_EMIT_NEW_LOAD_MEMBASE (cfg, klass_reg, vtable_reg, MONO_STRUCT_OFFSET (MonoVTable, klass)); MONO_EMIT_NEW_LOAD_MEMBASE (cfg, eclass_reg, klass_reg, m_class_offsetof_element_class ()); if (context_used) { MonoInst *element_class; /* This assertion is from the unboxcast insn */ g_assert (m_class_get_rank (klass) == 0); element_class = mini_emit_get_rgctx_klass (cfg, context_used, klass, MONO_RGCTX_INFO_ELEMENT_KLASS); MONO_EMIT_NEW_BIALU (cfg, OP_COMPARE, -1, eclass_reg, element_class->dreg); MONO_EMIT_NEW_COND_EXC (cfg, NE_UN, "InvalidCastException"); } else { mini_save_cast_details (cfg, m_class_get_element_class (klass), obj_reg, FALSE); mini_emit_class_check (cfg, eclass_reg, m_class_get_element_class (klass)); mini_reset_cast_details (cfg); } NEW_BIALU_IMM (cfg, add, OP_ADD_IMM, alloc_dreg (cfg, STACK_MP), obj_reg, MONO_ABI_SIZEOF (MonoObject)); MONO_ADD_INS (cfg->cbb, add); add->type = STACK_MP; add->klass = klass; return add; } static MonoInst* handle_unbox_gsharedvt (MonoCompile *cfg, MonoClass *klass, MonoInst *obj) { MonoInst *addr, *klass_inst, *is_ref, *args[16]; MonoBasicBlock *is_ref_bb, *is_nullable_bb, *end_bb; MonoInst *ins; int dreg, addr_reg; klass_inst = mini_emit_get_gsharedvt_info_klass (cfg, klass, MONO_RGCTX_INFO_KLASS); /* obj */ args [0] = obj; /* klass */ args [1] = klass_inst; /* CASTCLASS */ obj = mono_emit_jit_icall (cfg, mono_object_castclass_unbox, args); NEW_BBLOCK (cfg, is_ref_bb); NEW_BBLOCK (cfg, is_nullable_bb); NEW_BBLOCK (cfg, end_bb); is_ref = mini_emit_get_gsharedvt_info_klass (cfg, klass, MONO_RGCTX_INFO_CLASS_BOX_TYPE); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, is_ref->dreg, MONO_GSHAREDVT_BOX_TYPE_REF); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_IBEQ, is_ref_bb); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, 
is_ref->dreg, MONO_GSHAREDVT_BOX_TYPE_NULLABLE); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_IBEQ, is_nullable_bb); /* This will contain either the address of the unboxed vtype, or an address of the temporary where the ref is stored */ addr_reg = alloc_dreg (cfg, STACK_MP); /* Non-ref case */ /* UNBOX */ NEW_BIALU_IMM (cfg, addr, OP_ADD_IMM, addr_reg, obj->dreg, MONO_ABI_SIZEOF (MonoObject)); MONO_ADD_INS (cfg->cbb, addr); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb); /* Ref case */ MONO_START_BB (cfg, is_ref_bb); /* Save the ref to a temporary */ dreg = alloc_ireg (cfg); EMIT_NEW_VARLOADA_VREG (cfg, addr, dreg, m_class_get_byval_arg (klass)); addr->dreg = addr_reg; MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, addr->dreg, 0, obj->dreg); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb); /* Nullable case */ MONO_START_BB (cfg, is_nullable_bb); { MonoInst *addr = mini_emit_get_gsharedvt_info_klass (cfg, klass, MONO_RGCTX_INFO_NULLABLE_CLASS_UNBOX); MonoInst *unbox_call; MonoMethodSignature *unbox_sig; unbox_sig = (MonoMethodSignature *)mono_mempool_alloc0 (cfg->mempool, MONO_SIZEOF_METHOD_SIGNATURE + (1 * sizeof (MonoType *))); unbox_sig->ret = m_class_get_byval_arg (klass); unbox_sig->param_count = 1; unbox_sig->params [0] = mono_get_object_type (); if (cfg->llvm_only) unbox_call = mini_emit_llvmonly_calli (cfg, unbox_sig, &obj, addr); else unbox_call = mini_emit_calli (cfg, unbox_sig, &obj, addr, NULL, NULL); EMIT_NEW_VARLOADA_VREG (cfg, addr, unbox_call->dreg, m_class_get_byval_arg (klass)); addr->dreg = addr_reg; } MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb); /* End */ MONO_START_BB (cfg, end_bb); /* LDOBJ */ EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (klass), addr_reg, 0); return ins; } /* * Returns NULL and set the cfg exception on error. */ static MonoInst* handle_alloc (MonoCompile *cfg, MonoClass *klass, gboolean for_box, int context_used) { MonoInst *iargs [2]; MonoJitICallId alloc_ftn; if (mono_class_get_flags (klass) & TYPE_ATTRIBUTE_ABSTRACT) { char* full_name = mono_type_get_full_name (klass); mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR); mono_error_set_member_access (cfg->error, "Cannot create an abstract class: %s", full_name); g_free (full_name); return NULL; } if (context_used) { gboolean known_instance_size = !mini_is_gsharedvt_klass (klass); MonoMethod *managed_alloc = mono_gc_get_managed_allocator (klass, for_box, known_instance_size); iargs [0] = mini_emit_get_rgctx_klass (cfg, context_used, klass, MONO_RGCTX_INFO_VTABLE); alloc_ftn = MONO_JIT_ICALL_ves_icall_object_new_specific; if (managed_alloc) { if (known_instance_size) { int size = mono_class_instance_size (klass); if (size < MONO_ABI_SIZEOF (MonoObject)) g_error ("Invalid size %d for class %s", size, mono_type_get_full_name (klass)); EMIT_NEW_ICONST (cfg, iargs [1], size); } return mono_emit_method_call (cfg, managed_alloc, iargs, NULL); } return mono_emit_jit_icall_id (cfg, alloc_ftn, iargs); } if (cfg->compile_aot && cfg->cbb->out_of_line && m_class_get_type_token (klass) && m_class_get_image (klass) == mono_defaults.corlib && !mono_class_is_ginst (klass)) { /* This happens often in argument checking code, eg. throw new FooException... 
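 * Since the allocation sits on a cold (out_of_line) bblock, code size matters more than
 * avoiding the extra helper indirection here.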
*/ /* Avoid relocations and save some space by calling a helper function specialized to mscorlib */ EMIT_NEW_ICONST (cfg, iargs [0], mono_metadata_token_index (m_class_get_type_token (klass))); alloc_ftn = MONO_JIT_ICALL_mono_helper_newobj_mscorlib; } else { MonoVTable *vtable = mono_class_vtable_checked (klass, cfg->error); if (!is_ok (cfg->error)) { mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR); return NULL; } MonoMethod *managed_alloc = mono_gc_get_managed_allocator (klass, for_box, TRUE); if (managed_alloc) { int size = mono_class_instance_size (klass); if (size < MONO_ABI_SIZEOF (MonoObject)) g_error ("Invalid size %d for class %s", size, mono_type_get_full_name (klass)); EMIT_NEW_VTABLECONST (cfg, iargs [0], vtable); EMIT_NEW_ICONST (cfg, iargs [1], size); return mono_emit_method_call (cfg, managed_alloc, iargs, NULL); } alloc_ftn = MONO_JIT_ICALL_ves_icall_object_new_specific; EMIT_NEW_VTABLECONST (cfg, iargs [0], vtable); } return mono_emit_jit_icall_id (cfg, alloc_ftn, iargs); } /* * Returns NULL and set the cfg exception on error. */ MonoInst* mini_emit_box (MonoCompile *cfg, MonoInst *val, MonoClass *klass, int context_used) { MonoInst *alloc, *ins; if (G_UNLIKELY (m_class_is_byreflike (klass))) { mono_error_set_bad_image (cfg->error, m_class_get_image (cfg->method->klass), "Cannot box IsByRefLike type '%s.%s'", m_class_get_name_space (klass), m_class_get_name (klass)); mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR); return NULL; } if (mono_class_is_nullable (klass)) { MonoMethod* method = get_method_nofail (klass, "Box", 1, 0); if (context_used) { if (cfg->llvm_only) { MonoMethodSignature *sig = mono_method_signature_internal (method); MonoInst *addr = emit_get_rgctx_method (cfg, context_used, method, MONO_RGCTX_INFO_METHOD_FTNDESC); cfg->interp_in_signatures = g_slist_prepend_mempool (cfg->mempool, cfg->interp_in_signatures, sig); return mini_emit_llvmonly_calli (cfg, sig, &val, addr); } else { /* FIXME: What if the class is shared? We might not have to get the method address from the RGCTX. 
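 * For now the method code is fetched through MONO_RGCTX_INFO_GENERIC_METHOD_CODE and the
 * rgctx is passed explicitly, mirroring the unbox path in handle_unbox_nullable ().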
*/ MonoInst *addr = emit_get_rgctx_method (cfg, context_used, method, MONO_RGCTX_INFO_GENERIC_METHOD_CODE); MonoInst *rgctx = emit_get_rgctx (cfg, context_used); return mini_emit_calli (cfg, mono_method_signature_internal (method), &val, addr, NULL, rgctx); } } else { gboolean pass_vtable, pass_mrgctx; MonoInst *rgctx_arg = NULL; check_method_sharing (cfg, method, &pass_vtable, &pass_mrgctx); g_assert (!pass_mrgctx); if (pass_vtable) { MonoVTable *vtable = mono_class_vtable_checked (method->klass, cfg->error); mono_error_assert_ok (cfg->error); EMIT_NEW_VTABLECONST (cfg, rgctx_arg, vtable); } return mini_emit_method_call_full (cfg, method, NULL, FALSE, &val, NULL, NULL, rgctx_arg); } } if (mini_is_gsharedvt_klass (klass)) { MonoBasicBlock *is_ref_bb, *is_nullable_bb, *end_bb; MonoInst *res, *is_ref, *src_var, *addr; int dreg; dreg = alloc_ireg (cfg); NEW_BBLOCK (cfg, is_ref_bb); NEW_BBLOCK (cfg, is_nullable_bb); NEW_BBLOCK (cfg, end_bb); is_ref = mini_emit_get_gsharedvt_info_klass (cfg, klass, MONO_RGCTX_INFO_CLASS_BOX_TYPE); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, is_ref->dreg, MONO_GSHAREDVT_BOX_TYPE_REF); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_IBEQ, is_ref_bb); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, is_ref->dreg, MONO_GSHAREDVT_BOX_TYPE_NULLABLE); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_IBEQ, is_nullable_bb); /* Non-ref case */ alloc = handle_alloc (cfg, klass, TRUE, context_used); if (!alloc) return NULL; EMIT_NEW_STORE_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (klass), alloc->dreg, MONO_ABI_SIZEOF (MonoObject), val->dreg); ins->opcode = OP_STOREV_MEMBASE; EMIT_NEW_UNALU (cfg, res, OP_MOVE, dreg, alloc->dreg); res->type = STACK_OBJ; res->klass = klass; MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb); /* Ref case */ MONO_START_BB (cfg, is_ref_bb); /* val is a vtype, so has to load the value manually */ src_var = get_vreg_to_inst (cfg, val->dreg); if (!src_var) src_var = mono_compile_create_var_for_vreg (cfg, m_class_get_byval_arg (klass), OP_LOCAL, val->dreg); EMIT_NEW_VARLOADA (cfg, addr, src_var, src_var->inst_vtype); MONO_EMIT_NEW_LOAD_MEMBASE (cfg, dreg, addr->dreg, 0); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb); /* Nullable case */ MONO_START_BB (cfg, is_nullable_bb); { MonoInst *addr = mini_emit_get_gsharedvt_info_klass (cfg, klass, MONO_RGCTX_INFO_NULLABLE_CLASS_BOX); MonoInst *box_call; MonoMethodSignature *box_sig; /* * klass is Nullable<T>, need to call Nullable<T>.Box () using a gsharedvt signature, but we cannot * construct that method at JIT time, so have to do things by hand. 
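 * The hand-built signature below amounts to (a sketch, not a real metadata signature):
 *   object Box (Nullable<T> val)
 * with the parameter type taken from the gsharedvt klass.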
*/ box_sig = (MonoMethodSignature *)mono_mempool_alloc0 (cfg->mempool, MONO_SIZEOF_METHOD_SIGNATURE + (1 * sizeof (MonoType *))); box_sig->ret = mono_get_object_type (); box_sig->param_count = 1; box_sig->params [0] = m_class_get_byval_arg (klass); if (cfg->llvm_only) box_call = mini_emit_llvmonly_calli (cfg, box_sig, &val, addr); else box_call = mini_emit_calli (cfg, box_sig, &val, addr, NULL, NULL); EMIT_NEW_UNALU (cfg, res, OP_MOVE, dreg, box_call->dreg); res->type = STACK_OBJ; res->klass = klass; } MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb); MONO_START_BB (cfg, end_bb); return res; } alloc = handle_alloc (cfg, klass, TRUE, context_used); if (!alloc) return NULL; EMIT_NEW_STORE_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (klass), alloc->dreg, MONO_ABI_SIZEOF (MonoObject), val->dreg); return alloc; } static gboolean method_needs_stack_walk (MonoCompile *cfg, MonoMethod *cmethod) { if (cmethod->klass == mono_defaults.systemtype_class) { if (!strcmp (cmethod->name, "GetType")) return TRUE; } /* * In corelib code, methods which need to do a stack walk declare a StackCrawlMark local and pass it as an * argument until it reaches an icall. It's hard to detect which methods do that, especially with * StackCrawlMark.LookForMyCallersCaller, so for now, just hardcode the classes which contain the public * methods whose caller is needed. */ if (mono_is_corlib_image (m_class_get_image (cmethod->klass))) { const char *cname = m_class_get_name (cmethod->klass); if (!strcmp (cname, "Assembly") || !strcmp (cname, "AssemblyLoadContext") || (!strcmp (cname, "Activator"))) { if (!strcmp (cmethod->name, "op_Equality")) return FALSE; return TRUE; } } return FALSE; } G_GNUC_UNUSED MonoInst* mini_handle_enum_has_flag (MonoCompile *cfg, MonoClass *klass, MonoInst *enum_this, int enum_val_reg, MonoInst *enum_flag) { MonoType *enum_type = mono_type_get_underlying_type (m_class_get_byval_arg (klass)); guint32 load_opc = mono_type_to_load_membase (cfg, enum_type); gboolean is_i4; switch (enum_type->type) { case MONO_TYPE_I8: case MONO_TYPE_U8: #if SIZEOF_REGISTER == 8 case MONO_TYPE_I: case MONO_TYPE_U: #endif is_i4 = FALSE; break; default: is_i4 = TRUE; break; } { MonoInst *load = NULL, *and_, *cmp, *ceq; int enum_reg = is_i4 ? alloc_ireg (cfg) : alloc_lreg (cfg); int and_reg = is_i4 ? alloc_ireg (cfg) : alloc_lreg (cfg); int dest_reg = alloc_ireg (cfg); if (enum_this) { EMIT_NEW_LOAD_MEMBASE (cfg, load, load_opc, enum_reg, enum_this->dreg, 0); } else { g_assert (enum_val_reg != -1); enum_reg = enum_val_reg; } EMIT_NEW_BIALU (cfg, and_, is_i4 ? OP_IAND : OP_LAND, and_reg, enum_reg, enum_flag->dreg); EMIT_NEW_BIALU (cfg, cmp, is_i4 ? OP_ICOMPARE : OP_LCOMPARE, -1, and_reg, enum_flag->dreg); EMIT_NEW_UNALU (cfg, ceq, is_i4 ? OP_ICEQ : OP_LCEQ, dest_reg, -1); ceq->type = STACK_I4; if (!is_i4) { load = load ?
mono_decompose_opcode (cfg, load) : NULL; and_ = mono_decompose_opcode (cfg, and_); cmp = mono_decompose_opcode (cfg, cmp); ceq = mono_decompose_opcode (cfg, ceq); } return ceq; } } static void emit_set_deopt_il_offset (MonoCompile *cfg, int offset) { MonoInst *ins; if (!(cfg->deopt && cfg->method == cfg->current_method)) return; EMIT_NEW_VARLOADA (cfg, ins, cfg->il_state_var, NULL); MONO_EMIT_NEW_STORE_MEMBASE_IMM (cfg, OP_STOREI4_MEMBASE_IMM, ins->dreg, MONO_STRUCT_OFFSET (MonoMethodILState, il_offset), offset); } static MonoInst* emit_get_rgctx_dele_tramp (MonoCompile *cfg, int context_used, MonoClass *klass, MonoMethod *virt_method, gboolean _virtual, MonoRgctxInfoType rgctx_type) { MonoDelegateClassMethodPair *info; MonoJumpInfoRgctxEntry *entry; info = (MonoDelegateClassMethodPair *)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoDelegateClassMethodPair)); info->klass = klass; info->method = virt_method; info->is_virtual = _virtual; entry = mono_patch_info_rgctx_entry_new (cfg->mempool, cfg->method, context_used_is_mrgctx (cfg, context_used), MONO_PATCH_INFO_DELEGATE_TRAMPOLINE, info, rgctx_type); return emit_rgctx_fetch (cfg, context_used, entry); } /* * Returns NULL and set the cfg exception on error. */ static G_GNUC_UNUSED MonoInst* handle_delegate_ctor (MonoCompile *cfg, MonoClass *klass, MonoInst *target, MonoMethod *method, int target_method_context_used, int invoke_context_used, gboolean virtual_) { MonoInst *ptr; int dreg; gpointer trampoline; MonoInst *obj, *tramp_ins; guint8 **code_slot; if (virtual_ && !cfg->llvm_only) { MonoMethod *invoke = mono_get_delegate_invoke_internal (klass); g_assert (invoke); //FIXME verify & fix any issue with removing invoke_context_used restriction if (invoke_context_used || !mono_get_delegate_virtual_invoke_impl (mono_method_signature_internal (invoke), target_method_context_used ? 
NULL : method)) return NULL; } obj = handle_alloc (cfg, klass, FALSE, invoke_context_used); if (!obj) return NULL; /* Inline the contents of mono_delegate_ctor */ /* Set target field */ /* Optimize away setting of NULL target */ if (!MONO_INS_IS_PCONST_NULL (target)) { if (!(method->flags & METHOD_ATTRIBUTE_STATIC)) { MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, target->dreg, 0); MONO_EMIT_NEW_COND_EXC (cfg, EQ, "NullReferenceException"); } if (!mini_debug_options.weak_memory_model) mini_emit_memory_barrier (cfg, MONO_MEMORY_BARRIER_REL); MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, obj->dreg, MONO_STRUCT_OFFSET (MonoDelegate, target), target->dreg); if (cfg->gen_write_barriers) { dreg = alloc_preg (cfg); EMIT_NEW_BIALU_IMM (cfg, ptr, OP_PADD_IMM, dreg, obj->dreg, MONO_STRUCT_OFFSET (MonoDelegate, target)); mini_emit_write_barrier (cfg, ptr, target); } } /* Set method field */ if (!(target_method_context_used || invoke_context_used) && !cfg->llvm_only) { //If compiling with gsharing enabled, it's faster to load the method from the delegate trampoline info than to use an rgctx slot MonoInst *method_ins = emit_get_rgctx_method (cfg, target_method_context_used, method, MONO_RGCTX_INFO_METHOD); MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, obj->dreg, MONO_STRUCT_OFFSET (MonoDelegate, method), method_ins->dreg); } if (cfg->llvm_only) { if (virtual_) { MonoInst *args [ ] = { obj, target, emit_get_rgctx_method (cfg, target_method_context_used, method, MONO_RGCTX_INFO_METHOD) }; mono_emit_jit_icall (cfg, mini_llvmonly_init_delegate_virtual, args); return obj; } } /* * To avoid looking up the compiled code belonging to the target method * in mono_delegate_trampoline (), we allocate a per-domain memory slot to * store it, and we fill it after the method has been compiled. */ if (!method->dynamic && !cfg->llvm_only) { MonoInst *code_slot_ins; if (target_method_context_used) { code_slot_ins = emit_get_rgctx_method (cfg, target_method_context_used, method, MONO_RGCTX_INFO_METHOD_DELEGATE_CODE); } else { MonoJitMemoryManager *jit_mm = (MonoJitMemoryManager*)cfg->jit_mm; jit_mm_lock (jit_mm); if (!jit_mm->method_code_hash) jit_mm->method_code_hash = g_hash_table_new (NULL, NULL); code_slot = (guint8 **)g_hash_table_lookup (jit_mm->method_code_hash, method); if (!code_slot) { code_slot = (guint8 **)mono_mem_manager_alloc0 (jit_mm->mem_manager, sizeof (gpointer)); g_hash_table_insert (jit_mm->method_code_hash, method, code_slot); } jit_mm_unlock (jit_mm); code_slot_ins = mini_emit_runtime_constant (cfg, MONO_PATCH_INFO_METHOD_CODE_SLOT, method); } MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, obj->dreg, MONO_STRUCT_OFFSET (MonoDelegate, method_code), code_slot_ins->dreg); } if (target_method_context_used || invoke_context_used) { tramp_ins = emit_get_rgctx_dele_tramp (cfg, target_method_context_used | invoke_context_used, klass, method, virtual_, MONO_RGCTX_INFO_DELEGATE_TRAMP_INFO); //This is emitted as a constant store for the non-shared case.
//We copy from the delegate trampoline info as it's faster than an rgctx fetch dreg = alloc_preg (cfg); if (!cfg->llvm_only) { MONO_EMIT_NEW_LOAD_MEMBASE (cfg, dreg, tramp_ins->dreg, MONO_STRUCT_OFFSET (MonoDelegateTrampInfo, method)); MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, obj->dreg, MONO_STRUCT_OFFSET (MonoDelegate, method), dreg); } } else if (cfg->compile_aot) { MonoDelegateClassMethodPair *del_tramp; del_tramp = (MonoDelegateClassMethodPair *)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoDelegateClassMethodPair)); del_tramp->klass = klass; del_tramp->method = method; del_tramp->is_virtual = virtual_; EMIT_NEW_AOTCONST (cfg, tramp_ins, MONO_PATCH_INFO_DELEGATE_TRAMPOLINE, del_tramp); } else { if (virtual_) trampoline = mono_create_delegate_virtual_trampoline (klass, method); else trampoline = mono_create_delegate_trampoline_info (klass, method); EMIT_NEW_PCONST (cfg, tramp_ins, trampoline); } if (cfg->llvm_only) { MonoInst *args [ ] = { obj, tramp_ins }; mono_emit_jit_icall (cfg, mini_llvmonly_init_delegate, args); return obj; } /* Set invoke_impl field */ if (virtual_) { MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, obj->dreg, MONO_STRUCT_OFFSET (MonoDelegate, invoke_impl), tramp_ins->dreg); } else { dreg = alloc_preg (cfg); MONO_EMIT_NEW_LOAD_MEMBASE (cfg, dreg, tramp_ins->dreg, MONO_STRUCT_OFFSET (MonoDelegateTrampInfo, invoke_impl)); MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, obj->dreg, MONO_STRUCT_OFFSET (MonoDelegate, invoke_impl), dreg); dreg = alloc_preg (cfg); MONO_EMIT_NEW_LOAD_MEMBASE (cfg, dreg, tramp_ins->dreg, MONO_STRUCT_OFFSET (MonoDelegateTrampInfo, method_ptr)); MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, obj->dreg, MONO_STRUCT_OFFSET (MonoDelegate, method_ptr), dreg); } dreg = alloc_preg (cfg); MONO_EMIT_NEW_ICONST (cfg, dreg, virtual_ ? 1 : 0); MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STOREI1_MEMBASE_REG, obj->dreg, MONO_STRUCT_OFFSET (MonoDelegate, method_is_virtual), dreg); /* All the checks which are in mono_delegate_ctor () are done by the delegate trampoline */ return obj; } /* * handle_constrained_gsharedvt_call: * * Handle constrained calls where the receiver is a gsharedvt type. * Return the instruction representing the call. Set the cfg exception on failure. */ static MonoInst* handle_constrained_gsharedvt_call (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **sp, MonoClass *constrained_class, gboolean *ref_emit_widen) { MonoInst *ins = NULL; gboolean emit_widen = *ref_emit_widen; gboolean supported; /* * Constrained calls need to behave differently at runtime depending on whether the receiver is instantiated as a ref type or as a vtype. * This is hard to do with the current call code, since we would have to emit a branch and two different calls. So instead, we * pack the arguments into an array, and do the rest of the work in an icall.
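 * The icall, mono_gsharedvt_constrained_call (), receives the receiver address, the method,
 * the constrained class, an optional array of deref flags and the packed argument array
 * (see args [0]..args [4] below), and resolves the ref/vtype distinction at runtime.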
*/ supported = ((cmethod->klass == mono_defaults.object_class) || mono_class_is_interface (cmethod->klass) || (!m_class_is_valuetype (cmethod->klass) && m_class_get_image (cmethod->klass) != mono_defaults.corlib)); if (supported) supported = (MONO_TYPE_IS_VOID (fsig->ret) || MONO_TYPE_IS_PRIMITIVE (fsig->ret) || MONO_TYPE_IS_REFERENCE (fsig->ret) || MONO_TYPE_ISSTRUCT (fsig->ret) || m_class_is_enumtype (mono_class_from_mono_type_internal (fsig->ret)) || mini_is_gsharedvt_type (fsig->ret)); if (supported) { if (fsig->param_count == 0 || (!fsig->hasthis && fsig->param_count == 1)) { supported = TRUE; } else { supported = TRUE; for (int i = 0; i < fsig->param_count; ++i) { if (!(m_type_is_byref (fsig->params [i]) || MONO_TYPE_IS_PRIMITIVE (fsig->params [i]) || MONO_TYPE_IS_REFERENCE (fsig->params [i]) || MONO_TYPE_ISSTRUCT (fsig->params [i]) || mini_is_gsharedvt_type (fsig->params [i]))) supported = FALSE; } } } if (supported) { MonoInst *args [5]; /* * This case handles calls to * - object:ToString()/Equals()/GetHashCode(), * - System.IComparable<T>:CompareTo() * - System.IEquatable<T>:Equals () * plus some simple interface calls enough to support AsyncTaskMethodBuilder. */ if (fsig->hasthis) args [0] = sp [0]; else EMIT_NEW_PCONST (cfg, args [0], NULL); args [1] = emit_get_rgctx_method (cfg, mono_method_check_context_used (cmethod), cmethod, MONO_RGCTX_INFO_METHOD); args [2] = mini_emit_get_rgctx_klass (cfg, mono_class_check_context_used (constrained_class), constrained_class, MONO_RGCTX_INFO_KLASS); /* !fsig->hasthis is for the wrapper for the Object.GetType () icall or static virtual methods */ if ((fsig->hasthis || m_method_is_static (cmethod)) && fsig->param_count) { /* Call mono_gsharedvt_constrained_call (gpointer mp, MonoMethod *cmethod, MonoClass *klass, gboolean *deref_args, gpointer *args) */ gboolean has_gsharedvt = FALSE; for (int i = 0; i < fsig->param_count; ++i) { if (mini_is_gsharedvt_type (fsig->params [i])) has_gsharedvt = TRUE; } /* Pass an array of bools which signal whether the corresponding argument is a gsharedvt ref type */ if (has_gsharedvt) { MONO_INST_NEW (cfg, ins, OP_LOCALLOC_IMM); ins->dreg = alloc_preg (cfg); ins->inst_imm = fsig->param_count; MONO_ADD_INS (cfg->cbb, ins); args [3] = ins; } else { EMIT_NEW_PCONST (cfg, args [3], 0); } /* Pass the arguments using a localloc-ed array using the format expected by runtime_invoke () */ MONO_INST_NEW (cfg, ins, OP_LOCALLOC_IMM); ins->dreg = alloc_preg (cfg); ins->inst_imm = fsig->param_count * sizeof (target_mgreg_t); MONO_ADD_INS (cfg->cbb, ins); args [4] = ins; for (int i = 0; i < fsig->param_count; ++i) { int addr_reg; if (mini_is_gsharedvt_type (fsig->params [i])) { MonoInst *is_deref; int deref_arg_reg; ins = mini_emit_get_gsharedvt_info_klass (cfg, mono_class_from_mono_type_internal (fsig->params [i]), MONO_RGCTX_INFO_CLASS_BOX_TYPE); deref_arg_reg = alloc_preg (cfg); /* deref_arg = BOX_TYPE != MONO_GSHAREDVT_BOX_TYPE_VTYPE */ EMIT_NEW_BIALU_IMM (cfg, is_deref, OP_ISUB_IMM, deref_arg_reg, ins->dreg, 1); MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STOREI1_MEMBASE_REG, args [3]->dreg, i, is_deref->dreg); } else if (has_gsharedvt) { MONO_EMIT_NEW_STORE_MEMBASE_IMM (cfg, OP_STOREI1_MEMBASE_IMM, args [3]->dreg, i, 0); } MonoInst *arg = sp [i + fsig->hasthis]; if (mini_is_gsharedvt_type (fsig->params [i]) || MONO_TYPE_IS_PRIMITIVE (fsig->params [i]) || MONO_TYPE_ISSTRUCT (fsig->params [i])) { EMIT_NEW_VARLOADA_VREG (cfg, ins, arg->dreg, fsig->params [i]); addr_reg = ins->dreg; EMIT_NEW_STORE_MEMBASE (cfg, ins,
OP_STORE_MEMBASE_REG, args [4]->dreg, i * sizeof (target_mgreg_t), addr_reg); } else { EMIT_NEW_STORE_MEMBASE (cfg, ins, OP_STORE_MEMBASE_REG, args [4]->dreg, i * sizeof (target_mgreg_t), arg->dreg); } } } else { EMIT_NEW_ICONST (cfg, args [3], 0); EMIT_NEW_ICONST (cfg, args [4], 0); } ins = mono_emit_jit_icall (cfg, mono_gsharedvt_constrained_call, args); emit_widen = FALSE; if (mini_is_gsharedvt_type (fsig->ret)) { ins = handle_unbox_gsharedvt (cfg, mono_class_from_mono_type_internal (fsig->ret), ins); } else if (MONO_TYPE_IS_PRIMITIVE (fsig->ret) || MONO_TYPE_ISSTRUCT (fsig->ret) || m_class_is_enumtype (mono_class_from_mono_type_internal (fsig->ret))) { MonoInst *add; /* Unbox */ NEW_BIALU_IMM (cfg, add, OP_ADD_IMM, alloc_dreg (cfg, STACK_MP), ins->dreg, MONO_ABI_SIZEOF (MonoObject)); MONO_ADD_INS (cfg->cbb, add); /* Load value */ NEW_LOAD_MEMBASE_TYPE (cfg, ins, fsig->ret, add->dreg, 0); MONO_ADD_INS (cfg->cbb, ins); /* ins represents the call result */ } } else { GSHAREDVT_FAILURE (CEE_CALLVIRT); } *ref_emit_widen = emit_widen; return ins; exception_exit: return NULL; } static void mono_emit_load_got_addr (MonoCompile *cfg) { MonoInst *getaddr, *dummy_use; if (!cfg->got_var || cfg->got_var_allocated) return; MONO_INST_NEW (cfg, getaddr, OP_LOAD_GOTADDR); getaddr->cil_code = cfg->header->code; getaddr->dreg = cfg->got_var->dreg; /* Add it to the start of the first bblock */ if (cfg->bb_entry->code) { getaddr->next = cfg->bb_entry->code; cfg->bb_entry->code = getaddr; } else MONO_ADD_INS (cfg->bb_entry, getaddr); cfg->got_var_allocated = TRUE; /* * Add a dummy use to keep the got_var alive, since real uses might * only be generated by the back ends. * Add it to end_bblock, so the variable's lifetime covers the whole * method. * It would be better to make the usage of the got var explicit in all * cases when the backend needs it (i.e. calls, throw etc.), so this * wouldn't be needed. */ NEW_DUMMY_USE (cfg, dummy_use, cfg->got_var); MONO_ADD_INS (cfg->bb_exit, dummy_use); } static MonoMethod* get_constrained_method (MonoCompile *cfg, MonoImage *image, guint32 token, MonoMethod *cil_method, MonoClass *constrained_class, MonoGenericContext *generic_context) { MonoMethod *cmethod = cil_method; gboolean constrained_is_generic_param = m_class_get_byval_arg (constrained_class)->type == MONO_TYPE_VAR || m_class_get_byval_arg (constrained_class)->type == MONO_TYPE_MVAR; if (cfg->current_method->wrapper_type != MONO_WRAPPER_NONE) { if (cfg->verbose_level > 2) printf ("DM Constrained call to %s\n", mono_type_get_full_name (constrained_class)); if (!(constrained_is_generic_param && cfg->gshared)) { cmethod = mono_get_method_constrained_with_method (image, cil_method, constrained_class, generic_context, cfg->error); CHECK_CFG_ERROR; } } else { if (cfg->verbose_level > 2) printf ("Constrained call to %s\n", mono_type_get_full_name (constrained_class)); if (constrained_is_generic_param && cfg->gshared) { /* * This is needed since get_method_constrained can't find * the method in klass representing a type var. * The type var is guaranteed to be a reference type in this * case. 
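 * (the g_assert below checks exactly that), so the unconstrained cmethod can be used directly.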
*/ if (!mini_is_gsharedvt_klass (constrained_class)) g_assert (!m_class_is_valuetype (cmethod->klass)); } else { cmethod = mono_get_method_constrained_checked (image, token, constrained_class, generic_context, &cil_method, cfg->error); CHECK_CFG_ERROR; } } return cmethod; mono_error_exit: return NULL; } static gboolean method_does_not_return (MonoMethod *method) { // FIXME: Under netcore, these are decorated with the [DoesNotReturn] attribute return m_class_get_image (method->klass) == mono_defaults.corlib && !strcmp (m_class_get_name (method->klass), "ThrowHelper") && strstr (method->name, "Throw") == method->name && !method->is_inflated; } static int inline_limit, llvm_jit_inline_limit, llvm_aot_inline_limit; static gboolean inline_limit_inited; static gboolean mono_method_check_inlining (MonoCompile *cfg, MonoMethod *method) { MonoMethodHeaderSummary header; MonoVTable *vtable; int limit; #ifdef MONO_ARCH_SOFT_FLOAT_FALLBACK MonoMethodSignature *sig = mono_method_signature_internal (method); int i; #endif if (cfg->disable_inline) return FALSE; if (cfg->gsharedvt) return FALSE; if (cfg->inline_depth > 10) return FALSE; if (!mono_method_get_header_summary (method, &header)) return FALSE; /*runtime, icall and pinvoke are checked by summary call*/ if ((method->iflags & METHOD_IMPL_ATTRIBUTE_NOINLINING) || (method->iflags & METHOD_IMPL_ATTRIBUTE_SYNCHRONIZED) || header.has_clauses) return FALSE; if (method->flags & METHOD_ATTRIBUTE_REQSECOBJ) /* Used to mark methods containing StackCrawlMark locals */ return FALSE; /* also consider num_locals? */ /* Do the size check early to avoid creating vtables */ if (!inline_limit_inited) { char *inlinelimit; if ((inlinelimit = g_getenv ("MONO_INLINELIMIT"))) { inline_limit = atoi (inlinelimit); llvm_jit_inline_limit = inline_limit; llvm_aot_inline_limit = inline_limit; g_free (inlinelimit); } else { inline_limit = INLINE_LENGTH_LIMIT; llvm_jit_inline_limit = LLVM_JIT_INLINE_LENGTH_LIMIT; llvm_aot_inline_limit = LLVM_AOT_INLINE_LENGTH_LIMIT; } inline_limit_inited = TRUE; } if (COMPILE_LLVM (cfg)) { if (cfg->compile_aot) limit = llvm_aot_inline_limit; else limit = llvm_jit_inline_limit; } else { limit = inline_limit; } if (header.code_size >= limit && !(method->iflags & METHOD_IMPL_ATTRIBUTE_AGGRESSIVE_INLINING)) return FALSE; /* * if we can initialize the class of the method right away, we do, * otherwise we don't allow inlining if the class needs initialization, * since it would mean inserting a call to mono_runtime_class_init() * inside the inlined code */ if (cfg->gshared && m_class_has_cctor (method->klass) && mini_class_check_context_used (cfg, method->klass)) return FALSE; { /* The AggressiveInlining hint is a good excuse to force that cctor to run. 
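 * Concretely: eagerly create the vtable and, when JITting (not AOT), run the cctor
 * right away rather than refusing to inline.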
*/ if ((cfg->opt & MONO_OPT_AGGRESSIVE_INLINING) || method->iflags & METHOD_IMPL_ATTRIBUTE_AGGRESSIVE_INLINING) { if (m_class_has_cctor (method->klass)) { ERROR_DECL (error); vtable = mono_class_vtable_checked (method->klass, error); if (!is_ok (error)) { mono_error_cleanup (error); return FALSE; } if (!cfg->compile_aot) { if (!mono_runtime_class_init_full (vtable, error)) { mono_error_cleanup (error); return FALSE; } } } } else if (mono_class_is_before_field_init (method->klass)) { if (cfg->run_cctors && m_class_has_cctor (method->klass)) { ERROR_DECL (error); /*FIXME it would be easier and lazier to just use mono_class_try_get_vtable */ if (!m_class_get_runtime_vtable (method->klass)) /* No vtable created yet */ return FALSE; vtable = mono_class_vtable_checked (method->klass, error); if (!is_ok (error)) { mono_error_cleanup (error); return FALSE; } /* This makes it so that inlining cannot trigger */ /* .cctors: too many apps depend on them */ /* running with a specific order... */ if (! vtable->initialized) return FALSE; if (!mono_runtime_class_init_full (vtable, error)) { mono_error_cleanup (error); return FALSE; } } } else if (mono_class_needs_cctor_run (method->klass, NULL)) { ERROR_DECL (error); if (!m_class_get_runtime_vtable (method->klass)) /* No vtable created yet */ return FALSE; vtable = mono_class_vtable_checked (method->klass, error); if (!is_ok (error)) { mono_error_cleanup (error); return FALSE; } if (!vtable->initialized) return FALSE; } } #ifdef MONO_ARCH_SOFT_FLOAT_FALLBACK if (mono_arch_is_soft_float ()) { /* FIXME: */ if (sig->ret && sig->ret->type == MONO_TYPE_R4) return FALSE; for (i = 0; i < sig->param_count; ++i) if (!m_type_is_byref (sig->params [i]) && sig->params [i]->type == MONO_TYPE_R4) return FALSE; } #endif if (g_list_find (cfg->dont_inline, method)) return FALSE; if (mono_profiler_get_call_instrumentation_flags (method)) return FALSE; if (mono_profiler_coverage_instrumentation_enabled (method)) return FALSE; if (method_does_not_return (method)) return FALSE; return TRUE; } static gboolean mini_field_access_needs_cctor_run (MonoCompile *cfg, MonoMethod *method, MonoClass *klass, MonoVTable *vtable) { if (!cfg->compile_aot) { g_assert (vtable); if (vtable->initialized) return FALSE; } if (mono_class_is_before_field_init (klass)) { if (cfg->method == method) return FALSE; } if (!mono_class_needs_cctor_run (klass, method)) return FALSE; if (! (method->flags & METHOD_ATTRIBUTE_STATIC) && (klass == method->klass)) /* The initialization is already done before the method is called */ return FALSE; return TRUE; } int mini_emit_sext_index_reg (MonoCompile *cfg, MonoInst *index) { int index_reg = index->dreg; int index2_reg; #if SIZEOF_REGISTER == 8 /* The array reg is 64 bits but the index reg is only 32 */ if (COMPILE_LLVM (cfg)) { /* * abcrem can't handle the OP_SEXT_I4, so add this after abcrem, * during OP_BOUNDS_CHECK decomposition, and in the implementation * of OP_X86_LEA for llvm.
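 * The non-LLVM path below widens explicitly, roughly index2 = (gint64)(gint32)index, via OP_SEXT_I4.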
*/ index2_reg = index_reg; } else { index2_reg = alloc_preg (cfg); MONO_EMIT_NEW_UNALU (cfg, OP_SEXT_I4, index2_reg, index_reg); } #else if (index->type == STACK_I8) { index2_reg = alloc_preg (cfg); MONO_EMIT_NEW_UNALU (cfg, OP_LCONV_TO_I4, index2_reg, index_reg); } else { index2_reg = index_reg; } #endif return index2_reg; } MonoInst* mini_emit_ldelema_1_ins (MonoCompile *cfg, MonoClass *klass, MonoInst *arr, MonoInst *index, gboolean bcheck, gboolean bounded) { MonoInst *ins; guint32 size; int mult_reg, add_reg, array_reg, index2_reg, bounds_reg, lower_bound_reg, realidx2_reg; int context_used; if (mini_is_gsharedvt_variable_klass (klass)) { size = -1; } else { mono_class_init_internal (klass); size = mono_class_array_element_size (klass); } mult_reg = alloc_preg (cfg); array_reg = arr->dreg; realidx2_reg = index2_reg = mini_emit_sext_index_reg (cfg, index); if (bounded) { bounds_reg = alloc_preg (cfg); lower_bound_reg = alloc_preg (cfg); realidx2_reg = alloc_preg (cfg); MonoBasicBlock *is_null_bb = NULL; NEW_BBLOCK (cfg, is_null_bb); // gint32 lower_bound = 0; // if (arr->bounds) // lower_bound = arr->bounds.lower_bound; // realidx2 = index2 - lower_bound; MONO_EMIT_NEW_PCONST (cfg, lower_bound_reg, NULL); MONO_EMIT_NEW_LOAD_MEMBASE (cfg, bounds_reg, arr->dreg, MONO_STRUCT_OFFSET (MonoArray, bounds)); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, bounds_reg, 0); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_PBEQ, is_null_bb); MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADI4_MEMBASE, lower_bound_reg, bounds_reg, MONO_STRUCT_OFFSET (MonoArrayBounds, lower_bound)); MONO_START_BB (cfg, is_null_bb); MONO_EMIT_NEW_BIALU (cfg, OP_PSUB, realidx2_reg, index2_reg, lower_bound_reg); } if (bcheck) MONO_EMIT_BOUNDS_CHECK (cfg, array_reg, MonoArray, max_length, realidx2_reg); #if defined(TARGET_X86) || defined(TARGET_AMD64) if (size == 1 || size == 2 || size == 4 || size == 8) { static const int fast_log2 [] = { 1, 0, 1, -1, 2, -1, -1, -1, 3 }; EMIT_NEW_X86_LEA (cfg, ins, array_reg, realidx2_reg, fast_log2 [size], MONO_STRUCT_OFFSET (MonoArray, vector)); ins->klass = klass; ins->type = STACK_MP; return ins; } #endif add_reg = alloc_ireg_mp (cfg); if (size == -1) { MonoInst *rgctx_ins; /* gsharedvt */ g_assert (cfg->gshared); context_used = mini_class_check_context_used (cfg, klass); g_assert (context_used); rgctx_ins = mini_emit_get_gsharedvt_info_klass (cfg, klass, MONO_RGCTX_INFO_ARRAY_ELEMENT_SIZE); MONO_EMIT_NEW_BIALU (cfg, OP_IMUL, mult_reg, realidx2_reg, rgctx_ins->dreg); } else { MONO_EMIT_NEW_BIALU_IMM (cfg, OP_MUL_IMM, mult_reg, realidx2_reg, size); } MONO_EMIT_NEW_BIALU (cfg, OP_PADD, add_reg, array_reg, mult_reg); NEW_BIALU_IMM (cfg, ins, OP_PADD_IMM, add_reg, add_reg, MONO_STRUCT_OFFSET (MonoArray, vector)); ins->klass = klass; ins->type = STACK_MP; MONO_ADD_INS (cfg->cbb, ins); return ins; } static MonoInst* mini_emit_ldelema_2_ins (MonoCompile *cfg, MonoClass *klass, MonoInst *arr, MonoInst *index_ins1, MonoInst *index_ins2) { int bounds_reg = alloc_preg (cfg); int add_reg = alloc_ireg_mp (cfg); int mult_reg = alloc_preg (cfg); int mult2_reg = alloc_preg (cfg); int low1_reg = alloc_preg (cfg); int low2_reg = alloc_preg (cfg); int high1_reg = alloc_preg (cfg); int high2_reg = alloc_preg (cfg); int realidx1_reg = alloc_preg (cfg); int realidx2_reg = alloc_preg (cfg); int sum_reg = alloc_preg (cfg); int index1, index2; MonoInst *ins; guint32 size; mono_class_init_internal (klass); size = mono_class_array_element_size (klass); index1 = index_ins1->dreg; index2 = index_ins2->dreg; #if SIZEOF_REGISTER 
== 8 /* The array reg is 64 bits but the index reg is only 32 */ if (COMPILE_LLVM (cfg)) { /* Not needed */ } else { int tmpreg = alloc_preg (cfg); MONO_EMIT_NEW_UNALU (cfg, OP_SEXT_I4, tmpreg, index1); index1 = tmpreg; tmpreg = alloc_preg (cfg); MONO_EMIT_NEW_UNALU (cfg, OP_SEXT_I4, tmpreg, index2); index2 = tmpreg; } #else // FIXME: Do we need to do something here for i8 indexes, like in ldelema_1_ins ? #endif /* range checking */ MONO_EMIT_NEW_LOAD_MEMBASE (cfg, bounds_reg, arr->dreg, MONO_STRUCT_OFFSET (MonoArray, bounds)); MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADI4_MEMBASE, low1_reg, bounds_reg, MONO_STRUCT_OFFSET (MonoArrayBounds, lower_bound)); MONO_EMIT_NEW_BIALU (cfg, OP_PSUB, realidx1_reg, index1, low1_reg); MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADI4_MEMBASE, high1_reg, bounds_reg, MONO_STRUCT_OFFSET (MonoArrayBounds, length)); MONO_EMIT_NEW_BIALU (cfg, OP_COMPARE, -1, high1_reg, realidx1_reg); MONO_EMIT_NEW_COND_EXC (cfg, LE_UN, "IndexOutOfRangeException"); MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADI4_MEMBASE, low2_reg, bounds_reg, sizeof (MonoArrayBounds) + MONO_STRUCT_OFFSET (MonoArrayBounds, lower_bound)); MONO_EMIT_NEW_BIALU (cfg, OP_PSUB, realidx2_reg, index2, low2_reg); MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADI4_MEMBASE, high2_reg, bounds_reg, sizeof (MonoArrayBounds) + MONO_STRUCT_OFFSET (MonoArrayBounds, length)); MONO_EMIT_NEW_BIALU (cfg, OP_COMPARE, -1, high2_reg, realidx2_reg); MONO_EMIT_NEW_COND_EXC (cfg, LE_UN, "IndexOutOfRangeException"); MONO_EMIT_NEW_BIALU (cfg, OP_PMUL, mult_reg, high2_reg, realidx1_reg); MONO_EMIT_NEW_BIALU (cfg, OP_PADD, sum_reg, mult_reg, realidx2_reg); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_PMUL_IMM, mult2_reg, sum_reg, size); MONO_EMIT_NEW_BIALU (cfg, OP_PADD, add_reg, mult2_reg, arr->dreg); NEW_BIALU_IMM (cfg, ins, OP_PADD_IMM, add_reg, add_reg, MONO_STRUCT_OFFSET (MonoArray, vector)); ins->type = STACK_MP; ins->klass = klass; MONO_ADD_INS (cfg->cbb, ins); return ins; } static MonoInst* mini_emit_ldelema_ins (MonoCompile *cfg, MonoMethod *cmethod, MonoInst **sp, guchar *ip, gboolean is_set) { int rank; MonoInst *addr; MonoMethod *addr_method; int element_size; MonoClass *eclass = m_class_get_element_class (cmethod->klass); gboolean bounded = m_class_get_byval_arg (cmethod->klass) ? m_class_get_byval_arg (cmethod->klass)->type == MONO_TYPE_ARRAY : FALSE; rank = mono_method_signature_internal (cmethod)->param_count - (is_set? 
1: 0); if (rank == 1) return mini_emit_ldelema_1_ins (cfg, eclass, sp [0], sp [1], TRUE, bounded); /* emit_ldelema_2 depends on OP_LMUL */ if (!cfg->backend->emulate_mul_div && rank == 2 && (cfg->opt & MONO_OPT_INTRINS) && !mini_is_gsharedvt_variable_klass (eclass)) { return mini_emit_ldelema_2_ins (cfg, eclass, sp [0], sp [1], sp [2]); } if (mini_is_gsharedvt_variable_klass (eclass)) element_size = 0; else element_size = mono_class_array_element_size (eclass); addr_method = mono_marshal_get_array_address (rank, element_size); addr = mono_emit_method_call (cfg, addr_method, sp, NULL); return addr; } static gboolean mini_class_is_reference (MonoClass *klass) { return mini_type_is_reference (m_class_get_byval_arg (klass)); } MonoInst* mini_emit_array_store (MonoCompile *cfg, MonoClass *klass, MonoInst **sp, gboolean safety_checks) { if (safety_checks && mini_class_is_reference (klass) && !(MONO_INS_IS_PCONST_NULL (sp [2]))) { MonoClass *obj_array = mono_array_class_get_cached (mono_defaults.object_class); MonoMethod *helper; MonoInst *iargs [3]; if (sp [0]->type != STACK_OBJ) return NULL; if (sp [2]->type != STACK_OBJ) return NULL; iargs [2] = sp [2]; iargs [1] = sp [1]; iargs [0] = sp [0]; MonoClass *array_class = sp [0]->klass; if (array_class && m_class_get_rank (array_class) == 1) { MonoClass *eclass = m_class_get_element_class (array_class); if (m_class_is_sealed (eclass)) { helper = mono_marshal_get_virtual_stelemref (array_class); /* Make a non-virtual call if possible */ return mono_emit_method_call (cfg, helper, iargs, NULL); } } helper = mono_marshal_get_virtual_stelemref (obj_array); if (!helper->slot) mono_class_setup_vtable (obj_array); g_assert (helper->slot); return mono_emit_method_call (cfg, helper, iargs, sp [0]); } else { MonoInst *ins; if (mini_is_gsharedvt_variable_klass (klass)) { MonoInst *addr; // FIXME-VT: OP_ICONST optimization addr = mini_emit_ldelema_1_ins (cfg, klass, sp [0], sp [1], TRUE, FALSE); EMIT_NEW_STORE_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (klass), addr->dreg, 0, sp [2]->dreg); ins->opcode = OP_STOREV_MEMBASE; } else if (sp [1]->opcode == OP_ICONST) { int array_reg = sp [0]->dreg; int index_reg = sp [1]->dreg; int offset = (mono_class_array_element_size (klass) * sp [1]->inst_c0) + MONO_STRUCT_OFFSET (MonoArray, vector); if (SIZEOF_REGISTER == 8 && COMPILE_LLVM (cfg) && sp [1]->inst_c0 < 0) MONO_EMIT_NEW_UNALU (cfg, OP_ZEXT_I4, index_reg, index_reg); if (safety_checks) MONO_EMIT_BOUNDS_CHECK (cfg, array_reg, MonoArray, max_length, index_reg); EMIT_NEW_STORE_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (klass), array_reg, offset, sp [2]->dreg); } else { MonoInst *addr = mini_emit_ldelema_1_ins (cfg, klass, sp [0], sp [1], safety_checks, FALSE); if (!mini_debug_options.weak_memory_model && mini_class_is_reference (klass)) mini_emit_memory_barrier (cfg, MONO_MEMORY_BARRIER_REL); EMIT_NEW_STORE_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (klass), addr->dreg, 0, sp [2]->dreg); if (mini_class_is_reference (klass)) mini_emit_write_barrier (cfg, addr, sp [2]); } return ins; } } MonoInst* mini_emit_memory_barrier (MonoCompile *cfg, int kind) { MonoInst *ins = NULL; MONO_INST_NEW (cfg, ins, OP_MEMORY_BARRIER); MONO_ADD_INS (cfg->cbb, ins); ins->backend.memory_barrier_kind = kind; return ins; } /* * This entry point could be used later for arbitrary method * redirection. 
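 * Today it only rewrites string.FastAllocateString () into a direct call to the managed
 * GC allocator, when one is available.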
*/ inline static MonoInst* mini_redirect_call (MonoCompile *cfg, MonoMethod *method, MonoMethodSignature *signature, MonoInst **args, MonoInst *this_ins) { if (method->klass == mono_defaults.string_class) { /* managed string allocation support */ if (strcmp (method->name, "FastAllocateString") == 0) { MonoInst *iargs [2]; MonoVTable *vtable = mono_class_vtable_checked (method->klass, cfg->error); MonoMethod *managed_alloc = NULL; mono_error_assert_ok (cfg->error); /*Should not fail since it is System.String*/ #ifndef MONO_CROSS_COMPILE managed_alloc = mono_gc_get_managed_allocator (method->klass, FALSE, FALSE); #endif if (!managed_alloc) return NULL; EMIT_NEW_VTABLECONST (cfg, iargs [0], vtable); iargs [1] = args [0]; return mono_emit_method_call (cfg, managed_alloc, iargs, this_ins); } } return NULL; } static void mono_save_args (MonoCompile *cfg, MonoMethodSignature *sig, MonoInst **sp) { MonoInst *store, *temp; int i; for (i = 0; i < sig->param_count + sig->hasthis; ++i) { MonoType *argtype = (sig->hasthis && (i == 0)) ? type_from_stack_type (*sp) : sig->params [i - sig->hasthis]; /* * FIXME: We should use *args++ = sp [0], but that would mean the arg * would be different than the MonoInst's used to represent arguments, and * the ldelema implementation can't deal with that. * Solution: When ldelema is used on an inline argument, create a var for * it, emit ldelema on that var, and emit the saving code below in * inline_method () if needed. */ temp = mono_compile_create_var (cfg, argtype, OP_LOCAL); cfg->args [i] = temp; /* This uses cfg->args [i] which is set by the preceding line */ EMIT_NEW_ARGSTORE (cfg, store, i, *sp); store->cil_code = sp [0]->cil_code; sp++; } } #define MONO_INLINE_CALLED_LIMITED_METHODS 1 #define MONO_INLINE_CALLER_LIMITED_METHODS 1 #if (MONO_INLINE_CALLED_LIMITED_METHODS) static gboolean check_inline_called_method_name_limit (MonoMethod *called_method) { int strncmp_result; static const char *limit = NULL; if (limit == NULL) { const char *limit_string = g_getenv ("MONO_INLINE_CALLED_METHOD_NAME_LIMIT"); if (limit_string != NULL) limit = limit_string; else limit = ""; } if (limit [0] != '\0') { char *called_method_name = mono_method_full_name (called_method, TRUE); strncmp_result = strncmp (called_method_name, limit, strlen (limit)); g_free (called_method_name); //return (strncmp_result <= 0); return (strncmp_result == 0); } else { return TRUE; } } #endif #if (MONO_INLINE_CALLER_LIMITED_METHODS) static gboolean check_inline_caller_method_name_limit (MonoMethod *caller_method) { int strncmp_result; static const char *limit = NULL; if (limit == NULL) { const char *limit_string = g_getenv ("MONO_INLINE_CALLER_METHOD_NAME_LIMIT"); if (limit_string != NULL) { limit = limit_string; } else { limit = ""; } } if (limit [0] != '\0') { char *caller_method_name = mono_method_full_name (caller_method, TRUE); strncmp_result = strncmp (caller_method_name, limit, strlen (limit)); g_free (caller_method_name); //return (strncmp_result <= 0); return (strncmp_result == 0); } else { return TRUE; } } #endif void mini_emit_init_rvar (MonoCompile *cfg, int dreg, MonoType *rtype) { static double r8_0 = 0.0; static float r4_0 = 0.0; MonoInst *ins; int t; rtype = mini_get_underlying_type (rtype); t = rtype->type; if (m_type_is_byref (rtype)) { MONO_EMIT_NEW_PCONST (cfg, dreg, NULL); } else if (t >= MONO_TYPE_BOOLEAN && t <= MONO_TYPE_U4) { MONO_EMIT_NEW_ICONST (cfg, dreg, 0); } else if (t == MONO_TYPE_I8 || t == MONO_TYPE_U8) { MONO_EMIT_NEW_I8CONST (cfg, dreg, 0); } else if (cfg->r4fp && t
== MONO_TYPE_R4) { MONO_INST_NEW (cfg, ins, OP_R4CONST); ins->type = STACK_R4; ins->inst_p0 = (void*)&r4_0; ins->dreg = dreg; MONO_ADD_INS (cfg->cbb, ins); } else if (t == MONO_TYPE_R4 || t == MONO_TYPE_R8) { MONO_INST_NEW (cfg, ins, OP_R8CONST); ins->type = STACK_R8; ins->inst_p0 = (void*)&r8_0; ins->dreg = dreg; MONO_ADD_INS (cfg->cbb, ins); } else if ((t == MONO_TYPE_VALUETYPE) || (t == MONO_TYPE_TYPEDBYREF) || ((t == MONO_TYPE_GENERICINST) && mono_type_generic_inst_is_valuetype (rtype))) { MONO_EMIT_NEW_VZERO (cfg, dreg, mono_class_from_mono_type_internal (rtype)); } else if (((t == MONO_TYPE_VAR) || (t == MONO_TYPE_MVAR)) && mini_type_var_is_vt (rtype)) { MONO_EMIT_NEW_VZERO (cfg, dreg, mono_class_from_mono_type_internal (rtype)); } else { MONO_EMIT_NEW_PCONST (cfg, dreg, NULL); } } static void emit_dummy_init_rvar (MonoCompile *cfg, int dreg, MonoType *rtype) { int t; rtype = mini_get_underlying_type (rtype); t = rtype->type; if (m_type_is_byref (rtype)) { MONO_EMIT_NEW_DUMMY_INIT (cfg, dreg, OP_DUMMY_PCONST); } else if (t >= MONO_TYPE_BOOLEAN && t <= MONO_TYPE_U4) { MONO_EMIT_NEW_DUMMY_INIT (cfg, dreg, OP_DUMMY_ICONST); } else if (t == MONO_TYPE_I8 || t == MONO_TYPE_U8) { MONO_EMIT_NEW_DUMMY_INIT (cfg, dreg, OP_DUMMY_I8CONST); } else if (cfg->r4fp && t == MONO_TYPE_R4) { MONO_EMIT_NEW_DUMMY_INIT (cfg, dreg, OP_DUMMY_R4CONST); } else if (t == MONO_TYPE_R4 || t == MONO_TYPE_R8) { MONO_EMIT_NEW_DUMMY_INIT (cfg, dreg, OP_DUMMY_R8CONST); } else if ((t == MONO_TYPE_VALUETYPE) || (t == MONO_TYPE_TYPEDBYREF) || ((t == MONO_TYPE_GENERICINST) && mono_type_generic_inst_is_valuetype (rtype))) { MONO_EMIT_NEW_DUMMY_INIT (cfg, dreg, OP_DUMMY_VZERO); } else if (((t == MONO_TYPE_VAR) || (t == MONO_TYPE_MVAR)) && mini_type_var_is_vt (rtype)) { MONO_EMIT_NEW_DUMMY_INIT (cfg, dreg, OP_DUMMY_VZERO); } else { mini_emit_init_rvar (cfg, dreg, rtype); } } /* If INIT is FALSE, emit dummy initialization statements to keep the IR valid */ static void emit_init_local (MonoCompile *cfg, int local, MonoType *type, gboolean init) { MonoInst *var = cfg->locals [local]; if (COMPILE_SOFT_FLOAT (cfg)) { MonoInst *store; int reg = alloc_dreg (cfg, (MonoStackType)var->type); mini_emit_init_rvar (cfg, reg, type); EMIT_NEW_LOCSTORE (cfg, store, local, cfg->cbb->last_ins); } else { if (init) mini_emit_init_rvar (cfg, var->dreg, type); else emit_dummy_init_rvar (cfg, var->dreg, type); } } int mini_inline_method (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **sp, guchar *ip, guint real_offset, gboolean inline_always) { return inline_method (cfg, cmethod, fsig, sp, ip, real_offset, inline_always, NULL); } /* * inline_method: * * Return the cost of inlining CMETHOD, or zero if it should not be inlined. 
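 * A cost in the [0, 60) range (see the check after mono_method_to_ir () below) accepts the
 * inline; inline_always and AggressiveInlining bypass that threshold.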
*/ static int inline_method (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **sp, guchar *ip, guint real_offset, gboolean inline_always, gboolean *is_empty) { ERROR_DECL (error); MonoInst *ins, *rvar = NULL; MonoMethodHeader *cheader; MonoBasicBlock *ebblock, *sbblock; int i, costs; MonoInst **prev_locals, **prev_args; MonoType **prev_arg_types; guint prev_real_offset; GHashTable *prev_cbb_hash; MonoBasicBlock **prev_cil_offset_to_bb; MonoBasicBlock *prev_cbb; const guchar *prev_ip; guchar *prev_cil_start; guint32 prev_cil_offset_to_bb_len; MonoMethod *prev_current_method; MonoGenericContext *prev_generic_context; gboolean ret_var_set, prev_ret_var_set, prev_disable_inline, virtual_ = FALSE; g_assert (cfg->exception_type == MONO_EXCEPTION_NONE); #if (MONO_INLINE_CALLED_LIMITED_METHODS) if ((! inline_always) && ! check_inline_called_method_name_limit (cmethod)) return 0; #endif #if (MONO_INLINE_CALLER_LIMITED_METHODS) if ((! inline_always) && ! check_inline_caller_method_name_limit (cfg->method)) return 0; #endif if (!fsig) fsig = mono_method_signature_internal (cmethod); if (cfg->verbose_level > 2) printf ("INLINE START %p %s -> %s\n", cmethod, mono_method_full_name (cfg->method, TRUE), mono_method_full_name (cmethod, TRUE)); if (!cmethod->inline_info) { cfg->stat_inlineable_methods++; cmethod->inline_info = 1; } if (is_empty) *is_empty = FALSE; /* allocate local variables */ cheader = mono_method_get_header_checked (cmethod, error); if (!cheader) { if (inline_always) { mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR); mono_error_move (cfg->error, error); } else { mono_error_cleanup (error); } return 0; } if (is_empty && cheader->code_size == 1 && cheader->code [0] == CEE_RET) *is_empty = TRUE; /* allocate space to store the return value */ if (!MONO_TYPE_IS_VOID (fsig->ret)) { rvar = mono_compile_create_var (cfg, fsig->ret, OP_LOCAL); } prev_locals = cfg->locals; cfg->locals = (MonoInst **)mono_mempool_alloc0 (cfg->mempool, cheader->num_locals * sizeof (MonoInst*)); for (i = 0; i < cheader->num_locals; ++i) cfg->locals [i] = mono_compile_create_var (cfg, cheader->locals [i], OP_LOCAL); /* allocate start and end blocks */ /* This is needed so if the inline is aborted, we can clean up */ NEW_BBLOCK (cfg, sbblock); sbblock->real_offset = real_offset; NEW_BBLOCK (cfg, ebblock); ebblock->block_num = cfg->num_bblocks++; ebblock->real_offset = real_offset; prev_args = cfg->args; prev_arg_types = cfg->arg_types; prev_ret_var_set = cfg->ret_var_set; prev_real_offset = cfg->real_offset; prev_cbb_hash = cfg->cbb_hash; prev_cil_offset_to_bb = cfg->cil_offset_to_bb; prev_cil_offset_to_bb_len = cfg->cil_offset_to_bb_len; prev_cil_start = cfg->cil_start; prev_ip = cfg->ip; prev_cbb = cfg->cbb; prev_current_method = cfg->current_method; prev_generic_context = cfg->generic_context; prev_disable_inline = cfg->disable_inline; cfg->ret_var_set = FALSE; cfg->inline_depth ++; if (ip && *ip == CEE_CALLVIRT && !(cmethod->flags & METHOD_ATTRIBUTE_STATIC)) virtual_ = TRUE; costs = mono_method_to_ir (cfg, cmethod, sbblock, ebblock, rvar, sp, real_offset, virtual_); ret_var_set = cfg->ret_var_set; cfg->real_offset = prev_real_offset; cfg->cbb_hash = prev_cbb_hash; cfg->cil_offset_to_bb = prev_cil_offset_to_bb; cfg->cil_offset_to_bb_len = prev_cil_offset_to_bb_len; cfg->cil_start = prev_cil_start; cfg->ip = prev_ip; cfg->locals = prev_locals; cfg->args = prev_args; cfg->arg_types = prev_arg_types; cfg->current_method = prev_current_method; cfg->generic_context = 
prev_generic_context; cfg->ret_var_set = prev_ret_var_set; cfg->disable_inline = prev_disable_inline; cfg->inline_depth --; if ((costs >= 0 && costs < 60) || inline_always || (costs >= 0 && (cmethod->iflags & METHOD_IMPL_ATTRIBUTE_AGGRESSIVE_INLINING))) { if (cfg->verbose_level > 2) printf ("INLINE END %s -> %s\n", mono_method_full_name (cfg->method, TRUE), mono_method_full_name (cmethod, TRUE)); mono_error_assert_ok (cfg->error); cfg->stat_inlined_methods++; /* always add some code to avoid block split failures */ MONO_INST_NEW (cfg, ins, OP_NOP); MONO_ADD_INS (prev_cbb, ins); prev_cbb->next_bb = sbblock; link_bblock (cfg, prev_cbb, sbblock); /* * Get rid of the begin and end bblocks if possible to aid local * optimizations. */ if (prev_cbb->out_count == 1) mono_merge_basic_blocks (cfg, prev_cbb, sbblock); if ((prev_cbb->out_count == 1) && (prev_cbb->out_bb [0]->in_count == 1) && (prev_cbb->out_bb [0] != ebblock)) mono_merge_basic_blocks (cfg, prev_cbb, prev_cbb->out_bb [0]); if ((ebblock->in_count == 1) && ebblock->in_bb [0]->out_count == 1) { MonoBasicBlock *prev = ebblock->in_bb [0]; if (prev->next_bb == ebblock) { mono_merge_basic_blocks (cfg, prev, ebblock); cfg->cbb = prev; if ((prev_cbb->out_count == 1) && (prev_cbb->out_bb [0]->in_count == 1) && (prev_cbb->out_bb [0] == prev)) { mono_merge_basic_blocks (cfg, prev_cbb, prev); cfg->cbb = prev_cbb; } } else { /* There could be a bblock after 'prev', and making 'prev' the current bb could cause problems */ cfg->cbb = ebblock; } } else { /* * It's possible that the rvar is set in some prev bblock, but not in others. * (#1835). */ if (rvar) { MonoBasicBlock *bb; for (i = 0; i < ebblock->in_count; ++i) { bb = ebblock->in_bb [i]; if (bb->last_ins && bb->last_ins->opcode == OP_NOT_REACHED) { cfg->cbb = bb; mini_emit_init_rvar (cfg, rvar->dreg, fsig->ret); } } } cfg->cbb = ebblock; } if (rvar) { /* * If the inlined method contains only a throw, then the ret var is not * set, so set it to a dummy value. */ if (!ret_var_set) mini_emit_init_rvar (cfg, rvar->dreg, fsig->ret); EMIT_NEW_TEMPLOAD (cfg, ins, rvar->inst_c0); *sp++ = ins; } cfg->headers_to_free = g_slist_prepend_mempool (cfg->mempool, cfg->headers_to_free, cheader); return costs + 1; } else { if (cfg->verbose_level > 2) { const char *msg = mono_error_get_message (cfg->error); printf ("INLINE ABORTED %s (cost %d) %s\n", mono_method_full_name (cmethod, TRUE), costs, msg ? msg : ""); } cfg->exception_type = MONO_EXCEPTION_NONE; clear_cfg_error (cfg); /* This gets rid of the newly added bblocks */ cfg->cbb = prev_cbb; } cfg->headers_to_free = g_slist_prepend_mempool (cfg->mempool, cfg->headers_to_free, cheader); return 0; } /* * Some of these comments may well be out-of-date. * Design decisions: we do a single pass over the IL code (and we do bblock * splitting/merging in the few cases when it's required: a back jump to an IL * address that was not already seen as a bblock starting point). * Code is validated as we go (full verification is still better left to metadata/verify.c). * Complex operations are decomposed into simpler ones right away. We need to let the * arch-specific code peek and poke inside this process somehow (except when the * optimizations can take advantage of the full semantic info of coarse opcodes). * All the opcodes of the form opcode.s are 'normalized' to opcode. * MonoInst->opcode initially is the IL opcode or some simplification of that * (OP_LOAD, OP_STORE). The arch-specific code may rearrange it to an arch-specific * opcode with value bigger than OP_LAST.
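 * (e.g. a CIL add is typically turned into OP_IADD/OP_LADD/OP_FADD here according to the stack types of its operands).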
* At this point the IR can be handed over to an interpreter, a dumb code generator * or to the optimizing code generator that will translate it to SSA form. * * Profiling directed optimizations. * We may compile by default with few or no optimizations and instrument the code * or the user may indicate what methods to optimize the most either in a config file * or through repeated runs where the compiler applies offline the optimizations to * each method and then decides if it was worth it. */ #define CHECK_TYPE(ins) if (!(ins)->type) UNVERIFIED #define CHECK_STACK(num) if ((sp - stack_start) < (num)) UNVERIFIED #define CHECK_STACK_OVF() if (((sp - stack_start) + 1) > header->max_stack) UNVERIFIED #define CHECK_ARG(num) if ((unsigned)(num) >= (unsigned)num_args) UNVERIFIED #define CHECK_LOCAL(num) if ((unsigned)(num) >= (unsigned)header->num_locals) UNVERIFIED #define CHECK_OPSIZE(size) if ((size) < 1 || ip + (size) > end) UNVERIFIED #define CHECK_UNVERIFIABLE(cfg) if (cfg->unverifiable) UNVERIFIED #define CHECK_TYPELOAD(klass) if (!(klass) || mono_class_has_failure (klass)) TYPE_LOAD_ERROR ((klass)) /* offset from br.s -> br like opcodes */ #define BIG_BRANCH_OFFSET 13 static gboolean ip_in_bb (MonoCompile *cfg, MonoBasicBlock *bb, const guint8* ip) { MonoBasicBlock *b = cfg->cil_offset_to_bb [ip - cfg->cil_start]; return b == NULL || b == bb; } static int get_basic_blocks (MonoCompile *cfg, MonoMethodHeader* header, guint real_offset, guchar *start, guchar *end, guchar **pos) { guchar *ip = start; guchar *target; int i; guint cli_addr; MonoBasicBlock *bblock; const MonoOpcode *opcode; while (ip < end) { cli_addr = ip - start; i = mono_opcode_value ((const guint8 **)&ip, end); if (i < 0) UNVERIFIED; opcode = &mono_opcodes [i]; switch (opcode->argument) { case MonoInlineNone: ip++; break; case MonoInlineString: case MonoInlineType: case MonoInlineField: case MonoInlineMethod: case MonoInlineTok: case MonoInlineSig: case MonoShortInlineR: case MonoInlineI: ip += 5; break; case MonoInlineVar: ip += 3; break; case MonoShortInlineVar: case MonoShortInlineI: ip += 2; break; case MonoShortInlineBrTarget: target = start + cli_addr + 2 + (signed char)ip [1]; GET_BBLOCK (cfg, bblock, target); ip += 2; if (ip < end) GET_BBLOCK (cfg, bblock, ip); break; case MonoInlineBrTarget: target = start + cli_addr + 5 + (gint32)read32 (ip + 1); GET_BBLOCK (cfg, bblock, target); ip += 5; if (ip < end) GET_BBLOCK (cfg, bblock, ip); break; case MonoInlineSwitch: { guint32 n = read32 (ip + 1); guint32 j; ip += 5; cli_addr += 5 + 4 * n; target = start + cli_addr; GET_BBLOCK (cfg, bblock, target); for (j = 0; j < n; ++j) { target = start + cli_addr + (gint32)read32 (ip); GET_BBLOCK (cfg, bblock, target); ip += 4; } break; } case MonoInlineR: case MonoInlineI8: ip += 9; break; default: g_assert_not_reached (); } if (i == CEE_THROW) { guchar *bb_start = ip - 1; /* Find the start of the bblock containing the throw */ bblock = NULL; while ((bb_start >= start) && !bblock) { bblock = cfg->cil_offset_to_bb [(bb_start) - start]; bb_start --; } if (bblock) bblock->out_of_line = 1; } } return 0; unverified: exception_exit: *pos = ip; return 1; } static MonoMethod * mini_get_method_allow_open (MonoMethod *m, guint32 token, MonoClass *klass, MonoGenericContext *context, MonoError *error) { MonoMethod *method; error_init (error); if (m->wrapper_type != MONO_WRAPPER_NONE) { method = (MonoMethod *)mono_method_get_wrapper_data (m, token); if (context) { method = mono_class_inflate_generic_method_checked (method, context, error); } } 
	else {
		method = mono_get_method_checked (m_class_get_image (m->klass), token, klass, context, error);
	}

	return method;
}

static MonoMethod *
mini_get_method (MonoCompile *cfg, MonoMethod *m, guint32 token, MonoClass *klass, MonoGenericContext *context)
{
	ERROR_DECL (error);
	MonoMethod *method = mini_get_method_allow_open (m, token, klass, context, cfg ? cfg->error : error);

	if (method && cfg && !cfg->gshared && mono_class_is_open_constructed_type (m_class_get_byval_arg (method->klass))) {
		mono_error_set_bad_image (cfg->error, m_class_get_image (cfg->method->klass), "Method with open type while not compiling gshared");
		method = NULL;
	}

	if (!method && !cfg)
		mono_error_cleanup (error); /* FIXME don't swallow the error */
	return method;
}

static MonoMethodSignature*
mini_get_signature (MonoMethod *method, guint32 token, MonoGenericContext *context, MonoError *error)
{
	MonoMethodSignature *fsig;

	error_init (error);
	if (method->wrapper_type != MONO_WRAPPER_NONE) {
		fsig = (MonoMethodSignature *)mono_method_get_wrapper_data (method, token);
	} else {
		fsig = mono_metadata_parse_signature_checked (m_class_get_image (method->klass), token, error);
		return_val_if_nok (error, NULL);
	}
	if (context) {
		fsig = mono_inflate_generic_signature(fsig, context, error);
	}
	return fsig;
}

/*
 * Return the original method if a wrapper is specified. We can only access
 * the custom attributes from the original method.
 */
static MonoMethod*
get_original_method (MonoMethod *method)
{
	if (method->wrapper_type == MONO_WRAPPER_NONE)
		return method;

	/* native code (which is like Critical) can call any managed method XXX FIXME XXX to validate all usages */
	if (method->wrapper_type == MONO_WRAPPER_NATIVE_TO_MANAGED)
		return NULL;

	/* in other cases we need to find the original method */
	return mono_marshal_method_from_wrapper (method);
}

static guchar*
il_read_op (guchar *ip, guchar *end, guchar first_byte, MonoOpcodeEnum desired_il_op)
// If ip is desired_il_op, return the next ip, else NULL.
{
	if (G_LIKELY (ip < end) && G_UNLIKELY (*ip == first_byte)) {
		MonoOpcodeEnum il_op = MonoOpcodeEnum_Invalid;
		// mono_opcode_value_and_size updates ip, but not in the expected way.
		const guchar *temp_ip = ip;
		const int size = mono_opcode_value_and_size (&temp_ip, end, &il_op);
		return (G_LIKELY (size > 0) && G_UNLIKELY (il_op == desired_il_op)) ? (ip + size) : NULL;
	}
	return NULL;
}

static guchar*
il_read_op_and_token (guchar *ip, guchar *end, guchar first_byte, MonoOpcodeEnum desired_il_op, guint32 *token)
{
	ip = il_read_op (ip, end, first_byte, desired_il_op);
	if (ip)
		*token = read32 (ip - 4); // could be +1 or +2 from start
	return ip;
}

static guchar*
il_read_branch_and_target (guchar *ip, guchar *end, guchar first_byte, MonoOpcodeEnum desired_il_op, int size, guchar **target)
{
	ip = il_read_op (ip, end, first_byte, desired_il_op);
	if (ip) {
		gint32 delta = 0;
		switch (size) {
		case 1:
			delta = (signed char)ip [-1];
			break;
		case 4:
			delta = (gint32)read32 (ip - 4);
			break;
		}
		// FIXME verify it is within the function and start of an instruction.
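		// Note: an IL branch displacement is relative to the first byte after
		// the branch instruction, so the target below is simply the already
		// advanced ip plus the decoded delta.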
*target = ip + delta; return ip; } return NULL; } #define il_read_brtrue(ip, end, target) (il_read_branch_and_target (ip, end, CEE_BRTRUE, MONO_CEE_BRTRUE, 4, target)) #define il_read_brtrue_s(ip, end, target) (il_read_branch_and_target (ip, end, CEE_BRTRUE_S, MONO_CEE_BRTRUE_S, 1, target)) #define il_read_brfalse(ip, end, target) (il_read_branch_and_target (ip, end, CEE_BRFALSE, MONO_CEE_BRFALSE, 4, target)) #define il_read_brfalse_s(ip, end, target) (il_read_branch_and_target (ip, end, CEE_BRFALSE_S, MONO_CEE_BRFALSE_S, 1, target)) #define il_read_dup(ip, end) (il_read_op (ip, end, CEE_DUP, MONO_CEE_DUP)) #define il_read_newobj(ip, end, token) (il_read_op_and_token (ip, end, CEE_NEW_OBJ, MONO_CEE_NEWOBJ, token)) #define il_read_ldtoken(ip, end, token) (il_read_op_and_token (ip, end, CEE_LDTOKEN, MONO_CEE_LDTOKEN, token)) #define il_read_call(ip, end, token) (il_read_op_and_token (ip, end, CEE_CALL, MONO_CEE_CALL, token)) #define il_read_callvirt(ip, end, token) (il_read_op_and_token (ip, end, CEE_CALLVIRT, MONO_CEE_CALLVIRT, token)) #define il_read_initobj(ip, end, token) (il_read_op_and_token (ip, end, CEE_PREFIX1, MONO_CEE_INITOBJ, token)) #define il_read_constrained(ip, end, token) (il_read_op_and_token (ip, end, CEE_PREFIX1, MONO_CEE_CONSTRAINED_, token)) #define il_read_unbox_any(ip, end, token) (il_read_op_and_token (ip, end, CEE_UNBOX_ANY, MONO_CEE_UNBOX_ANY, token)) /* * Check that the IL instructions at ip are the array initialization * sequence and return the pointer to the data and the size. */ static const char* initialize_array_data (MonoCompile *cfg, MonoMethod *method, gboolean aot, guchar *ip, guchar *end, MonoClass *klass, guint32 len, int *out_size, guint32 *out_field_token, MonoOpcodeEnum *il_op, guchar **next_ip) { /* * newarr[System.Int32] * dup * ldtoken field valuetype ... * call void class [mscorlib]System.Runtime.CompilerServices.RuntimeHelpers::InitializeArray(class [mscorlib]System.Array, valuetype [mscorlib]System.RuntimeFieldHandle) */ guint32 token; guint32 field_token; if ((ip = il_read_dup (ip, end)) && ip_in_bb (cfg, cfg->cbb, ip) && (ip = il_read_ldtoken (ip, end, &field_token)) && IS_FIELD_DEF (field_token) && ip_in_bb (cfg, cfg->cbb, ip) && (ip = il_read_call (ip, end, &token))) { ERROR_DECL (error); guint32 rva; const char *data_ptr; int size = 0; MonoMethod *cmethod; MonoClass *dummy_class; MonoClassField *field = mono_field_from_token_checked (m_class_get_image (method->klass), field_token, &dummy_class, NULL, error); int dummy_align; if (!field) { mono_error_cleanup (error); /* FIXME don't swallow the error */ return NULL; } *out_field_token = field_token; cmethod = mini_get_method (NULL, method, token, NULL, NULL); if (!cmethod) return NULL; if (strcmp (cmethod->name, "InitializeArray") || strcmp (m_class_get_name (cmethod->klass), "RuntimeHelpers") || m_class_get_image (cmethod->klass) != mono_defaults.corlib) return NULL; switch (mini_get_underlying_type (m_class_get_byval_arg (klass))->type) { case MONO_TYPE_I1: case MONO_TYPE_U1: size = 1; break; /* we need to swap on big endian, so punt. Should we handle R4 and R8 as well? 
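	   (The field's RVA data blob is stored little-endian in the image and this
	   fast path memcpys it verbatim, so multi-byte element types are only
	   accepted when the target is little-endian; big-endian targets would need
	   a byte swap, hence the punt below.)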
*/ #if TARGET_BYTE_ORDER == G_LITTLE_ENDIAN case MONO_TYPE_I2: case MONO_TYPE_U2: size = 2; break; case MONO_TYPE_I4: case MONO_TYPE_U4: case MONO_TYPE_R4: size = 4; break; case MONO_TYPE_R8: case MONO_TYPE_I8: case MONO_TYPE_U8: size = 8; break; #endif default: return NULL; } size *= len; if (size > mono_type_size (field->type, &dummy_align)) return NULL; *out_size = size; /*g_print ("optimized in %s: size: %d, numelems: %d\n", method->name, size, newarr->inst_newa_len->inst_c0);*/ MonoImage *method_klass_image = m_class_get_image (method->klass); if (!image_is_dynamic (method_klass_image)) { guint32 field_index = mono_metadata_token_index (field_token); mono_metadata_field_info (method_klass_image, field_index - 1, NULL, &rva, NULL); data_ptr = mono_image_rva_map (method_klass_image, rva); /*g_print ("field: 0x%08x, rva: %d, rva_ptr: %p\n", read32 (ip + 2), rva, data_ptr);*/ /* for aot code we do the lookup on load */ if (aot && data_ptr) data_ptr = (const char *)GUINT_TO_POINTER (rva); } else { /*FIXME is it possible to AOT a SRE assembly not meant to be saved? */ g_assert (!aot); data_ptr = mono_field_get_data (field); } if (!data_ptr) return NULL; *il_op = MONO_CEE_CALL; *next_ip = ip; return data_ptr; } return NULL; } static void set_exception_type_from_invalid_il (MonoCompile *cfg, MonoMethod *method, guchar *ip) { ERROR_DECL (error); char *method_fname = mono_method_full_name (method, TRUE); char *method_code; MonoMethodHeader *header = mono_method_get_header_checked (method, error); if (!header) { method_code = g_strdup_printf ("could not parse method body due to %s", mono_error_get_message (error)); mono_error_cleanup (error); } else if (header->code_size == 0) method_code = g_strdup ("method body is empty."); else method_code = mono_disasm_code_one (NULL, method, ip, NULL); mono_cfg_set_exception_invalid_program (cfg, g_strdup_printf ("Invalid IL code in %s: %s\n", method_fname, method_code)); g_free (method_fname); g_free (method_code); cfg->headers_to_free = g_slist_prepend_mempool (cfg->mempool, cfg->headers_to_free, header); } guint32 mono_type_to_stloc_coerce (MonoType *type) { if (m_type_is_byref (type)) return 0; type = mini_get_underlying_type (type); handle_enum: switch (type->type) { case MONO_TYPE_I1: return OP_ICONV_TO_I1; case MONO_TYPE_U1: return OP_ICONV_TO_U1; case MONO_TYPE_I2: return OP_ICONV_TO_I2; case MONO_TYPE_U2: return OP_ICONV_TO_U2; case MONO_TYPE_I4: case MONO_TYPE_U4: case MONO_TYPE_I: case MONO_TYPE_U: case MONO_TYPE_PTR: case MONO_TYPE_FNPTR: case MONO_TYPE_CLASS: case MONO_TYPE_STRING: case MONO_TYPE_OBJECT: case MONO_TYPE_SZARRAY: case MONO_TYPE_ARRAY: case MONO_TYPE_I8: case MONO_TYPE_U8: case MONO_TYPE_R4: case MONO_TYPE_R8: case MONO_TYPE_TYPEDBYREF: case MONO_TYPE_GENERICINST: return 0; case MONO_TYPE_VALUETYPE: if (m_class_is_enumtype (type->data.klass)) { type = mono_class_enum_basetype_internal (type->data.klass); goto handle_enum; } return 0; case MONO_TYPE_VAR: case MONO_TYPE_MVAR: //TODO I believe we don't need to handle gsharedvt as there won't be match and, for example, u1 is not covariant to u32 return 0; default: g_error ("unknown type 0x%02x in mono_type_to_stloc_coerce", type->type); } return -1; } static void emit_stloc_ir (MonoCompile *cfg, MonoInst **sp, MonoMethodHeader *header, int n) { MonoInst *ins; guint32 coerce_op = mono_type_to_stloc_coerce (header->locals [n]); if (coerce_op) { if (cfg->cbb->last_ins == sp [0] && sp [0]->opcode == coerce_op) { if (cfg->verbose_level > 2) printf ("Found existing coercing is enough for 
stloc\n"); } else { MONO_INST_NEW (cfg, ins, coerce_op); ins->dreg = alloc_ireg (cfg); ins->sreg1 = sp [0]->dreg; ins->type = STACK_I4; ins->klass = mono_class_from_mono_type_internal (header->locals [n]); MONO_ADD_INS (cfg->cbb, ins); *sp = mono_decompose_opcode (cfg, ins); } } guint32 opcode = mono_type_to_regmove (cfg, header->locals [n]); if (!cfg->deopt && (opcode == OP_MOVE) && cfg->cbb->last_ins == sp [0] && ((sp [0]->opcode == OP_ICONST) || (sp [0]->opcode == OP_I8CONST))) { /* Optimize reg-reg moves away */ /* * Can't optimize other opcodes, since sp[0] might point to * the last ins of a decomposed opcode. */ sp [0]->dreg = (cfg)->locals [n]->dreg; } else { EMIT_NEW_LOCSTORE (cfg, ins, n, *sp); } } static void emit_starg_ir (MonoCompile *cfg, MonoInst **sp, int n) { MonoInst *ins; guint32 coerce_op = mono_type_to_stloc_coerce (cfg->arg_types [n]); if (coerce_op) { if (cfg->cbb->last_ins == sp [0] && sp [0]->opcode == coerce_op) { if (cfg->verbose_level > 2) printf ("Found existing coercing is enough for starg\n"); } else { MONO_INST_NEW (cfg, ins, coerce_op); ins->dreg = alloc_ireg (cfg); ins->sreg1 = sp [0]->dreg; ins->type = STACK_I4; ins->klass = mono_class_from_mono_type_internal (cfg->arg_types [n]); MONO_ADD_INS (cfg->cbb, ins); *sp = mono_decompose_opcode (cfg, ins); } } EMIT_NEW_ARGSTORE (cfg, ins, n, *sp); } /* * ldloca inhibits many optimizations so try to get rid of it in common * cases. */ static guchar * emit_optimized_ldloca_ir (MonoCompile *cfg, guchar *ip, guchar *end, int local) { guint32 token; MonoClass *klass; MonoType *type; guchar *start = ip; if ((ip = il_read_initobj (ip, end, &token)) && ip_in_bb (cfg, cfg->cbb, start + 1)) { /* From the INITOBJ case */ klass = mini_get_class (cfg->current_method, token, cfg->generic_context); CHECK_TYPELOAD (klass); type = mini_get_underlying_type (m_class_get_byval_arg (klass)); emit_init_local (cfg, local, type, TRUE); return ip; } exception_exit: return NULL; } static MonoInst* handle_call_res_devirt (MonoCompile *cfg, MonoMethod *cmethod, MonoInst *call_res) { /* * Devirt EqualityComparer.Default.Equals () calls for some types. * The corefx code excepts these calls to be devirtualized. * This depends on the implementation of EqualityComparer.Default, which is * in mcs/class/referencesource/mscorlib/system/collections/generic/equalitycomparer.cs */ if (m_class_get_image (cmethod->klass) == mono_defaults.corlib && !strcmp (m_class_get_name (cmethod->klass), "EqualityComparer`1") && !strcmp (cmethod->name, "get_Default")) { MonoType *param_type = mono_class_get_generic_class (cmethod->klass)->context.class_inst->type_argv [0]; MonoClass *inst; MonoGenericContext ctx; ERROR_DECL (error); memset (&ctx, 0, sizeof (ctx)); MonoType *args [ ] = { param_type }; ctx.class_inst = mono_metadata_get_generic_inst (1, args); inst = mono_class_inflate_generic_class_checked (mono_class_get_iequatable_class (), &ctx, error); mono_error_assert_ok (error); /* EqualityComparer<T>.Default returns specific types depending on T */ // FIXME: Add more /* 1. 
Implements IEquatable<T> */
		/*
		 * Can't use this for string/byte as it might use a different comparer:
		 *
		 * // Specialize type byte for performance reasons
		 * if (t == typeof(byte)) {
		 *     return (EqualityComparer<T>)(object)(new ByteEqualityComparer());
		 * }
		 * #if MOBILE
		 * // Breaks .net serialization compatibility
		 * if (t == typeof (string))
		 *     return (EqualityComparer<T>)(object)new InternalStringComparer ();
		 * #endif
		 */
		if (mono_class_is_assignable_from_internal (inst, mono_class_from_mono_type_internal (param_type)) && param_type->type != MONO_TYPE_U1 && param_type->type != MONO_TYPE_STRING) {
			MonoInst *typed_objref;
			MonoClass *gcomparer_inst;

			memset (&ctx, 0, sizeof (ctx));

			args [0] = param_type;
			ctx.class_inst = mono_metadata_get_generic_inst (1, args);

			MonoClass *gcomparer = mono_class_get_geqcomparer_class ();
			g_assert (gcomparer);
			gcomparer_inst = mono_class_inflate_generic_class_checked (gcomparer, &ctx, error);
			if (is_ok (error)) {
				MONO_INST_NEW (cfg, typed_objref, OP_TYPED_OBJREF);
				typed_objref->type = STACK_OBJ;
				typed_objref->dreg = alloc_ireg_ref (cfg);
				typed_objref->sreg1 = call_res->dreg;
				typed_objref->klass = gcomparer_inst;
				MONO_ADD_INS (cfg->cbb, typed_objref);

				call_res = typed_objref;

				/* Force decompose */
				cfg->flags |= MONO_CFG_NEEDS_DECOMPOSE;
				cfg->cbb->needs_decompose = TRUE;
			}
		}
	}

	return call_res;
}

static gboolean
is_exception_class (MonoClass *klass)
{
	if (G_LIKELY (m_class_get_supertypes (klass)))
		return mono_class_has_parent_fast (klass, mono_defaults.exception_class);
	while (klass) {
		if (klass == mono_defaults.exception_class)
			return TRUE;
		klass = m_class_get_parent (klass);
	}
	return FALSE;
}

/*
 * is_jit_optimizer_disabled:
 *
 * Determine whether M's assembly has a DebuggableAttribute with the
 * IsJITOptimizerDisabled flag set.
 */
static gboolean
is_jit_optimizer_disabled (MonoMethod *m)
{
	MonoAssembly *ass = m_class_get_image (m->klass)->assembly;

	g_assert (ass);
	if (ass->jit_optimizer_disabled_inited)
		return ass->jit_optimizer_disabled;
	return mono_assembly_is_jit_optimizer_disabled (ass);
}

gboolean
mono_is_supported_tailcall_helper (gboolean value, const char *svalue)
{
	if (!value)
		mono_tailcall_print ("%s %s\n", __func__, svalue);
	return value;
}

static gboolean
mono_is_not_supported_tailcall_helper (gboolean value, const char *svalue, MonoMethod *method, MonoMethod *cmethod)
{
	// Return value, printing if it inhibits tailcall.
	if (value && mono_tailcall_print_enabled ()) {
		const char *lparen = strchr (svalue, ' ') ? "(" : "";
		const char *rparen = *lparen ? ")" : "";
		mono_tailcall_print ("%s %s -> %s %s%s%s:%d\n", __func__, method->name, cmethod->name, lparen, svalue, rparen, value);
	}
	return value;
}

#define IS_NOT_SUPPORTED_TAILCALL(x) (mono_is_not_supported_tailcall_helper((x), #x, method, cmethod))

static gboolean
is_supported_tailcall (MonoCompile *cfg, const guint8 *ip, MonoMethod *method, MonoMethod *cmethod, MonoMethodSignature *fsig,
	gboolean virtual_, gboolean extra_arg, gboolean *ptailcall_calli)
{
	// Some checks apply to "regular", some to "calli", some to both.
	// To ease burden on caller, always compute regular and calli.
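	// Illustrative sketch (not in the original source): a caller typically
	// consumes both results along these lines, picking the calli variant for
	// CEE_CALLI sites ('is_calli' is a hypothetical name):
	//
	//   gboolean tailcall_calli;
	//   gboolean tailcall = is_supported_tailcall (cfg, ip, method, cmethod, fsig,
	//                                              virtual_, extra_arg, &tailcall_calli);
	//   if (is_calli)
	//       tailcall = tailcall_calli;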
	gboolean tailcall = TRUE;
	gboolean tailcall_calli = TRUE;

	if (IS_NOT_SUPPORTED_TAILCALL (virtual_ && !cfg->backend->have_op_tailcall_membase))
		tailcall = FALSE;

	if (IS_NOT_SUPPORTED_TAILCALL (!cfg->backend->have_op_tailcall_reg))
		tailcall_calli = FALSE;

	if (!tailcall && !tailcall_calli)
		goto exit;

	// FIXME in calli, there is no type for the this parameter,
	// so we assume it might be valuetype; in future we should issue a range
	// check, so rule out pointing to frame (for other reference parameters also)

	if (IS_NOT_SUPPORTED_TAILCALL (cmethod && fsig->hasthis && m_class_is_valuetype (cmethod->klass)) // This might point to the current method's stack. Emit range check?
		|| IS_NOT_SUPPORTED_TAILCALL (cmethod && (cmethod->flags & METHOD_ATTRIBUTE_PINVOKE_IMPL))
		|| IS_NOT_SUPPORTED_TAILCALL (fsig->pinvoke) // i.e. if !cmethod (calli)
		|| IS_NOT_SUPPORTED_TAILCALL (cfg->method->save_lmf)
		|| IS_NOT_SUPPORTED_TAILCALL (!cmethod && fsig->hasthis) // FIXME could be valuetype to current frame; range check
		|| IS_NOT_SUPPORTED_TAILCALL (cmethod && cmethod->wrapper_type && cmethod->wrapper_type != MONO_WRAPPER_DYNAMIC_METHOD)

		// http://www.mono-project.com/docs/advanced/runtime/docs/generic-sharing/
		//
		// 1. Non-generic non-static methods of reference types have access to the
		// RGCTX via the "this" argument (this->vtable->rgctx).
		// 2. a Non-generic static methods of reference types and b. non-generic methods
		// of value types need to be passed a pointer to the caller's class's VTable in the MONO_ARCH_RGCTX_REG register.
		// 3. Generic methods need to be passed a pointer to the MRGCTX in the MONO_ARCH_RGCTX_REG register
		//
		// That is what vtable_arg is here (always?).
		//
		// Passing vtable_arg uses (requires?) a volatile non-parameter register,
		// such as AMD64 rax, r10, r11, or the return register on many architectures.
		// ARM32 does not always clearly have such a register. ARM32's return register
		// is a parameter register.
		// iPhone could use r9 except on old systems. iPhone/ARM32 is not particularly
		// important. Linux/arm32 is less clear.
		// ARM32's scratch r12 might work but only with much collateral change.
		//
		// Imagine F1 calls F2, and F2 tailcalls F3.
		// F2 and F3 are managed. F1 is native.
		// Without a tailcall, F2 can save and restore everything needed for F1.
		// However if the extra parameter were in a non-volatile, such as ARM32 V5/R8,
		// F3 cannot easily restore it for F1, in the current scheme. The current
		// scheme is one where the extra parameter is not merely an extra parameter,
		// but is passed "outside of the ABI".
		//
		// If all native to managed transitions are intercepted and wrapped (w/o tailcall),
		// then they can preserve this register and the rest of the managed callgraph
		// treat it as volatile.
		//
		// Interface method dispatch has the same problem (imt_arg).

		|| IS_NOT_SUPPORTED_TAILCALL (extra_arg && !cfg->backend->have_volatile_non_param_register)
		|| IS_NOT_SUPPORTED_TAILCALL (cfg->gsharedvt)
		) {
		tailcall_calli = FALSE;
		tailcall = FALSE;
		goto exit;
	}

	for (int i = 0; i < fsig->param_count; ++i) {
		if (IS_NOT_SUPPORTED_TAILCALL (m_type_is_byref (fsig->params [i]) || fsig->params [i]->type == MONO_TYPE_PTR || fsig->params [i]->type == MONO_TYPE_FNPTR)) {
			tailcall_calli = FALSE;
			tailcall = FALSE; // These can point to the current method's stack. Emit range check?
			goto exit;
		}
	}

	MonoMethodSignature *caller_signature;
	MonoMethodSignature *callee_signature;
	caller_signature = mono_method_signature_internal (method);
	callee_signature = cmethod ?
mono_method_signature_internal (cmethod) : fsig; g_assert (caller_signature); g_assert (callee_signature); // Require an exact match on return type due to various conversions in emit_move_return_value that would be skipped. // The main troublesome conversions are double <=> float. // CoreCLR allows some conversions here, such as integer truncation. // As well I <=> I[48] and U <=> U[48] would be ok, for matching size. if (IS_NOT_SUPPORTED_TAILCALL (mini_get_underlying_type (caller_signature->ret)->type != mini_get_underlying_type (callee_signature->ret)->type) || IS_NOT_SUPPORTED_TAILCALL (!mono_arch_tailcall_supported (cfg, caller_signature, callee_signature, virtual_))) { tailcall_calli = FALSE; tailcall = FALSE; goto exit; } /* Debugging support */ #if 0 if (!mono_debug_count ()) { tailcall_calli = FALSE; tailcall = FALSE; goto exit; } #endif // See check_sp in mini_emit_calli_full. if (tailcall_calli && IS_NOT_SUPPORTED_TAILCALL (mini_should_check_stack_pointer (cfg))) tailcall_calli = FALSE; exit: mono_tailcall_print ("tail.%s %s -> %s tailcall:%d tailcall_calli:%d gshared:%d extra_arg:%d virtual_:%d\n", mono_opcode_name (*ip), method->name, cmethod ? cmethod->name : "calli", tailcall, tailcall_calli, cfg->gshared, extra_arg, virtual_); *ptailcall_calli = tailcall_calli; return tailcall; } /* * is_addressable_valuetype_load * * Returns true if a previous load can be done without doing an extra copy, given the new instruction ip and the type of the object being loaded ldtype */ static gboolean is_addressable_valuetype_load (MonoCompile* cfg, guint8* ip, MonoType* ldtype) { /* Avoid loading a struct just to load one of its fields */ gboolean is_load_instruction = (*ip == CEE_LDFLD); gboolean is_in_previous_bb = ip_in_bb(cfg, cfg->cbb, ip); gboolean is_struct = MONO_TYPE_ISSTRUCT(ldtype); return is_load_instruction && is_in_previous_bb && is_struct; } /* * handle_ctor_call: * * Handle calls made to ctors from NEWOBJ opcodes. 
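 *
 * A typical IL shape reaching this path is (illustrative sketch):
 *
 *   newobj instance void SomeClass::.ctor(int32)
 *
 * where the NEWOBJ case has already allocated the object and the ctor
 * arguments are in sp.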
*/ static void handle_ctor_call (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, int context_used, MonoInst **sp, guint8 *ip, int *inline_costs) { MonoInst *vtable_arg = NULL, *callvirt_this_arg = NULL, *ins; if (cmethod && (ins = mini_emit_inst_for_ctor (cfg, cmethod, fsig, sp))) { g_assert (MONO_TYPE_IS_VOID (fsig->ret)); CHECK_CFG_EXCEPTION; return; } if (mono_class_generic_sharing_enabled (cmethod->klass) && mono_method_is_generic_sharable (cmethod, TRUE)) { MonoRgctxAccess access = mini_get_rgctx_access_for_method (cmethod); if (access == MONO_RGCTX_ACCESS_MRGCTX) { mono_class_vtable_checked (cmethod->klass, cfg->error); CHECK_CFG_ERROR; CHECK_TYPELOAD (cmethod->klass); vtable_arg = emit_get_rgctx_method (cfg, context_used, cmethod, MONO_RGCTX_INFO_METHOD_RGCTX); } else if (access == MONO_RGCTX_ACCESS_VTABLE) { vtable_arg = mini_emit_get_rgctx_klass (cfg, context_used, cmethod->klass, MONO_RGCTX_INFO_VTABLE); CHECK_CFG_ERROR; CHECK_TYPELOAD (cmethod->klass); } else { g_assert (access == MONO_RGCTX_ACCESS_THIS); } } /* Avoid virtual calls to ctors if possible */ if ((cfg->opt & MONO_OPT_INLINE) && cmethod && !context_used && !vtable_arg && mono_method_check_inlining (cfg, cmethod) && !mono_class_is_subclass_of_internal (cmethod->klass, mono_defaults.exception_class, FALSE)) { int costs; if ((costs = inline_method (cfg, cmethod, fsig, sp, ip, cfg->real_offset, FALSE, NULL))) { cfg->real_offset += 5; *inline_costs += costs - 5; } else { INLINE_FAILURE ("inline failure"); // FIXME-VT: Clean this up if (cfg->gsharedvt && mini_is_gsharedvt_signature (fsig)) GSHAREDVT_FAILURE(*ip); mini_emit_method_call_full (cfg, cmethod, fsig, FALSE, sp, callvirt_this_arg, NULL, NULL); } } else if (cfg->gsharedvt && mini_is_gsharedvt_signature (fsig)) { MonoInst *addr; addr = emit_get_rgctx_gsharedvt_call (cfg, context_used, fsig, cmethod, MONO_RGCTX_INFO_METHOD_GSHAREDVT_OUT_TRAMPOLINE); if (cfg->llvm_only) { // FIXME: Avoid initializing vtable_arg mini_emit_llvmonly_calli (cfg, fsig, sp, addr); } else { mini_emit_calli (cfg, fsig, sp, addr, NULL, vtable_arg); } } else if (context_used && ((!mono_method_is_generic_sharable_full (cmethod, TRUE, FALSE, FALSE) || !mono_class_generic_sharing_enabled (cmethod->klass)) || cfg->gsharedvt)) { MonoInst *cmethod_addr; /* Generic calls made out of gsharedvt methods cannot be patched, so use an indirect call */ if (cfg->llvm_only) { MonoInst *addr = emit_get_rgctx_method (cfg, context_used, cmethod, MONO_RGCTX_INFO_METHOD_FTNDESC); mini_emit_llvmonly_calli (cfg, fsig, sp, addr); } else { cmethod_addr = emit_get_rgctx_method (cfg, context_used, cmethod, MONO_RGCTX_INFO_GENERIC_METHOD_CODE); mini_emit_calli (cfg, fsig, sp, cmethod_addr, NULL, vtable_arg); } } else { INLINE_FAILURE ("ctor call"); ins = mini_emit_method_call_full (cfg, cmethod, fsig, FALSE, sp, callvirt_this_arg, NULL, vtable_arg); } exception_exit: mono_error_exit: return; } typedef struct { MonoMethod *method; gboolean inst_tailcall; } HandleCallData; /* * handle_constrained_call: * * Handle constrained calls. Return a MonoInst* representing the call or NULL. * May overwrite sp [0] and modify the ref_... parameters. 
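 *
 * The IL shape handled here is, illustratively:
 *
 *   constrained. !!T
 *   callvirt instance string object::ToString()
 *
 * with sp [0] holding a managed pointer to the constrained receiver.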
*/ static MonoInst* handle_constrained_call (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoClass *constrained_class, MonoInst **sp, HandleCallData *cdata, MonoMethod **ref_cmethod, gboolean *ref_virtual, gboolean *ref_emit_widen) { MonoInst *ins, *addr; MonoMethod *method = cdata->method; gboolean constrained_partial_call = FALSE; gboolean constrained_is_generic_param = m_class_get_byval_arg (constrained_class)->type == MONO_TYPE_VAR || m_class_get_byval_arg (constrained_class)->type == MONO_TYPE_MVAR; MonoType *gshared_constraint = NULL; if (constrained_is_generic_param && cfg->gshared) { if (!mini_is_gsharedvt_klass (constrained_class)) { g_assert (!m_class_is_valuetype (cmethod->klass)); if (!mini_type_is_reference (m_class_get_byval_arg (constrained_class))) constrained_partial_call = TRUE; MonoType *t = m_class_get_byval_arg (constrained_class); MonoGenericParam *gparam = t->data.generic_param; gshared_constraint = gparam->gshared_constraint; } } if (mini_is_gsharedvt_klass (constrained_class)) { if ((cmethod->klass != mono_defaults.object_class) && m_class_is_valuetype (constrained_class) && m_class_is_valuetype (cmethod->klass)) { /* The 'Own method' case below */ } else if (m_class_get_image (cmethod->klass) != mono_defaults.corlib && !mono_class_is_interface (cmethod->klass) && !m_class_is_valuetype (cmethod->klass)) { /* 'The type parameter is instantiated as a reference type' case below. */ } else { ins = handle_constrained_gsharedvt_call (cfg, cmethod, fsig, sp, constrained_class, ref_emit_widen); CHECK_CFG_EXCEPTION; g_assert (ins); if (cdata->inst_tailcall) // FIXME mono_tailcall_print ("missed tailcall constrained_class %s -> %s\n", method->name, cmethod->name); return ins; } } if (m_method_is_static (cmethod)) { /* Call to an abstract static method, handled normally */ return NULL; } else if (constrained_partial_call) { gboolean need_box = TRUE; /* * The receiver is a valuetype, but the exact type is not known at compile time. This means the * called method is not known at compile time either. The called method could end up being * one of the methods on the parent classes (object/valuetype/enum), in which case we need * to box the receiver. * A simple solution would be to box always and make a normal virtual call, but that would * be bad performance wise. */ if (mono_class_is_interface (cmethod->klass) && mono_class_is_ginst (cmethod->klass) && (cmethod->flags & METHOD_ATTRIBUTE_ABSTRACT)) { /* * The parent classes implement no generic interfaces, so the called method will be a vtype method, so no boxing necessary. */ /* If the method is not abstract, it's a default interface method, and we need to box */ need_box = FALSE; } if (gshared_constraint && MONO_TYPE_IS_PRIMITIVE (gshared_constraint) && cmethod->klass == mono_defaults.object_class && !strcmp (cmethod->name, "GetHashCode")) { /* * The receiver is constrained to a primitive type or an enum with the same basetype. * Enum.GetHashCode () returns the hash code of the underlying type (see comments in Enum.cs), * so the constrained call can be replaced with a normal call to the basetype GetHashCode () * method. 
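	 * For example (illustrative), for an enum type E whose basetype is int32,
	 *
	 *   constrained. E
	 *   callvirt instance int32 object::GetHashCode()
	 *
	 * can be lowered to a direct call to the basetype's GetHashCode ().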
*/
			MonoClass *gshared_constraint_class = mono_class_from_mono_type_internal (gshared_constraint);
			cmethod = get_method_nofail (gshared_constraint_class, cmethod->name, 0, 0);
			g_assert (cmethod);
			*ref_cmethod = cmethod;
			*ref_virtual = FALSE;
			if (cfg->verbose_level)
				printf (" -> %s\n", mono_method_get_full_name (cmethod));
			return NULL;
		}

		if (!(cmethod->flags & METHOD_ATTRIBUTE_VIRTUAL) && (cmethod->klass == mono_defaults.object_class || cmethod->klass == m_class_get_parent (mono_defaults.enum_class) || cmethod->klass == mono_defaults.enum_class)) {
			/* The called method is not virtual, i.e. Object:GetType (), the receiver is a vtype, so it has to be boxed */
			EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (constrained_class), sp [0]->dreg, 0);
			ins->klass = constrained_class;
			sp [0] = mini_emit_box (cfg, ins, constrained_class, mono_class_check_context_used (constrained_class));
			CHECK_CFG_EXCEPTION;
		} else if (need_box) {
			MonoInst *box_type;
			MonoBasicBlock *is_ref_bb, *end_bb;
			MonoInst *nonbox_call, *addr;

			/*
			 * Determine at runtime whether the called method is defined on object/valuetype/enum, and emit a boxing call
			 * if needed.
			 * FIXME: It is possible to inline the called method in a lot of cases, i.e. for T_INT,
			 * the no-box case goes to a method in Int32, while the box case goes to a method in Enum.
			 */
			addr = emit_get_rgctx_virt_method (cfg, mono_class_check_context_used (constrained_class), constrained_class, cmethod, MONO_RGCTX_INFO_VIRT_METHOD_CODE);

			NEW_BBLOCK (cfg, is_ref_bb);
			NEW_BBLOCK (cfg, end_bb);

			box_type = emit_get_rgctx_virt_method (cfg, mono_class_check_context_used (constrained_class), constrained_class, cmethod, MONO_RGCTX_INFO_VIRT_METHOD_BOX_TYPE);
			MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, box_type->dreg, MONO_GSHAREDVT_BOX_TYPE_REF);
			MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_IBEQ, is_ref_bb);

			/* Non-ref case */
			if (cfg->llvm_only)
				/* addr is an ftndesc in this case */
				nonbox_call = mini_emit_llvmonly_calli (cfg, fsig, sp, addr);
			else
				nonbox_call = (MonoInst*)mini_emit_calli (cfg, fsig, sp, addr, NULL, NULL);

			MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb);

			/* Ref case */
			MONO_START_BB (cfg, is_ref_bb);
			EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (constrained_class), sp [0]->dreg, 0);
			ins->klass = constrained_class;
			sp [0] = mini_emit_box (cfg, ins, constrained_class, mono_class_check_context_used (constrained_class));
			CHECK_CFG_EXCEPTION;
			if (cfg->llvm_only)
				ins = mini_emit_llvmonly_calli (cfg, fsig, sp, addr);
			else
				ins = (MonoInst*)mini_emit_calli (cfg, fsig, sp, addr, NULL, NULL);

			MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb);

			MONO_START_BB (cfg, end_bb);
			cfg->cbb = end_bb;

			nonbox_call->dreg = ins->dreg;
			if (cdata->inst_tailcall) // FIXME
				mono_tailcall_print ("missed tailcall constrained_partial_need_box %s -> %s\n", method->name, cmethod->name);
			return ins;
		} else {
			g_assert (mono_class_is_interface (cmethod->klass));
			addr = emit_get_rgctx_virt_method (cfg, mono_class_check_context_used (constrained_class), constrained_class, cmethod, MONO_RGCTX_INFO_VIRT_METHOD_CODE);
			if (cfg->llvm_only)
				ins = mini_emit_llvmonly_calli (cfg, fsig, sp, addr);
			else
				ins = (MonoInst*)mini_emit_calli (cfg, fsig, sp, addr, NULL, NULL);
			if (cdata->inst_tailcall) // FIXME
				mono_tailcall_print ("missed tailcall constrained_partial %s -> %s\n", method->name, cmethod->name);
			return ins;
		}
	} else if (!m_class_is_valuetype (constrained_class)) {
		int dreg = alloc_ireg_ref (cfg);

		/*
		 * The type parameter is instantiated as a reference
		 * type.
We have a managed pointer on the stack, so * we need to dereference it here. */ EMIT_NEW_LOAD_MEMBASE (cfg, ins, OP_LOAD_MEMBASE, dreg, sp [0]->dreg, 0); ins->type = STACK_OBJ; sp [0] = ins; } else if (cmethod->klass == mono_defaults.object_class || cmethod->klass == m_class_get_parent (mono_defaults.enum_class) || cmethod->klass == mono_defaults.enum_class) { /* * The type parameter is instantiated as a valuetype, * but that type doesn't override the method we're * calling, so we need to box `this'. */ EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (constrained_class), sp [0]->dreg, 0); ins->klass = constrained_class; sp [0] = mini_emit_box (cfg, ins, constrained_class, mono_class_check_context_used (constrained_class)); CHECK_CFG_EXCEPTION; } else { if (cmethod->klass != constrained_class) { /* Enums/default interface methods */ EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (constrained_class), sp [0]->dreg, 0); ins->klass = constrained_class; sp [0] = mini_emit_box (cfg, ins, constrained_class, mono_class_check_context_used (constrained_class)); CHECK_CFG_EXCEPTION; } *ref_virtual = FALSE; } exception_exit: return NULL; } static void emit_setret (MonoCompile *cfg, MonoInst *val) { MonoType *ret_type = mini_get_underlying_type (mono_method_signature_internal (cfg->method)->ret); MonoInst *ins; if (mini_type_to_stind (cfg, ret_type) == CEE_STOBJ) { MonoInst *ret_addr; if (!cfg->vret_addr) { EMIT_NEW_VARSTORE (cfg, ins, cfg->ret, ret_type, val); } else { EMIT_NEW_RETLOADA (cfg, ret_addr); MonoClass *ret_class = mono_class_from_mono_type_internal (ret_type); if (MONO_CLASS_IS_SIMD (cfg, ret_class)) EMIT_NEW_STORE_MEMBASE (cfg, ins, OP_STOREX_MEMBASE, ret_addr->dreg, 0, val->dreg); else EMIT_NEW_STORE_MEMBASE (cfg, ins, OP_STOREV_MEMBASE, ret_addr->dreg, 0, val->dreg); ins->klass = ret_class; } } else { #ifdef MONO_ARCH_SOFT_FLOAT_FALLBACK if (COMPILE_SOFT_FLOAT (cfg) && !m_type_is_byref (ret_type) && ret_type->type == MONO_TYPE_R4) { MonoInst *conv; MonoInst *iargs [ ] = { val }; conv = mono_emit_jit_icall (cfg, mono_fload_r4_arg, iargs); mono_arch_emit_setret (cfg, cfg->method, conv); } else { mono_arch_emit_setret (cfg, cfg->method, val); } #else mono_arch_emit_setret (cfg, cfg->method, val); #endif } } /* * Emit a call to enter the interpreter for methods with filter clauses. */ static void emit_llvmonly_interp_entry (MonoCompile *cfg, MonoMethodHeader *header) { MonoInst *ins; MonoInst **iargs; MonoMethodSignature *sig = mono_method_signature_internal (cfg->method); MonoInst *ftndesc; cfg->interp_in_signatures = g_slist_prepend_mempool (cfg->mempool, cfg->interp_in_signatures, sig); /* * Emit a call to the interp entry function. We emit it here instead of the llvm backend since * calling conventions etc. are easier to handle here. The LLVM backend will only emit the * entry/exit bblocks. */ g_assert (cfg->cbb == cfg->bb_init); if (cfg->gsharedvt && mini_is_gsharedvt_variable_signature (sig)) { /* * Would have to generate a gsharedvt out wrapper which calls the interp entry wrapper, but * the gsharedvt out wrapper might not exist if the caller is also a gsharedvt method since * the concrete signature of the call might not exist in the program. * So transition directly to the interpreter without the wrappers. 
*/
		MonoInst *args_ins;
		MONO_INST_NEW (cfg, ins, OP_LOCALLOC_IMM);
		ins->dreg = alloc_preg (cfg);
		ins->inst_imm = sig->param_count * sizeof (target_mgreg_t);
		MONO_ADD_INS (cfg->cbb, ins);
		args_ins = ins;

		for (int i = 0; i < sig->hasthis + sig->param_count; ++i) {
			MonoInst *arg_addr_ins;
			EMIT_NEW_VARLOADA ((cfg), arg_addr_ins, cfg->args [i], cfg->arg_types [i]);
			EMIT_NEW_STORE_MEMBASE (cfg, ins, OP_STORE_MEMBASE_REG, args_ins->dreg, i * sizeof (target_mgreg_t), arg_addr_ins->dreg);
		}
		MonoInst *ret_var = NULL;
		MonoInst *ret_arg_ins;
		if (!MONO_TYPE_IS_VOID (sig->ret)) {
			ret_var = mono_compile_create_var (cfg, sig->ret, OP_LOCAL);
			EMIT_NEW_VARLOADA (cfg, ret_arg_ins, ret_var, sig->ret);
		} else {
			EMIT_NEW_PCONST (cfg, ret_arg_ins, NULL);
		}

		iargs = g_newa (MonoInst*, 3);
		iargs [0] = emit_get_rgctx_method (cfg, -1, cfg->method, MONO_RGCTX_INFO_INTERP_METHOD);
		iargs [1] = ret_arg_ins;
		iargs [2] = args_ins;
		mono_emit_jit_icall_id (cfg, MONO_JIT_ICALL_mini_llvmonly_interp_entry_gsharedvt, iargs);

		if (!MONO_TYPE_IS_VOID (sig->ret))
			EMIT_NEW_VARLOAD (cfg, ins, ret_var, sig->ret);
		else
			ins = NULL;
	} else {
		/* Obtain the interp entry function */
		ftndesc = emit_get_rgctx_method (cfg, -1, cfg->method, MONO_RGCTX_INFO_LLVMONLY_INTERP_ENTRY);

		/* Call it */
		iargs = g_newa (MonoInst*, sig->param_count + 1);
		for (int i = 0; i < sig->param_count + sig->hasthis; ++i)
			EMIT_NEW_ARGLOAD (cfg, iargs [i], i);

		ins = mini_emit_llvmonly_calli (cfg, sig, iargs, ftndesc);
	}

	/* Do a normal return */
	if (cfg->ret) {
		emit_setret (cfg, ins);
		/*
		 * Since only bb_entry/bb_exit is emitted if interp_entry_only is set,
		 * it's possible that the return value becomes an OP_PHI node whose inputs
		 * are not emitted. Make it volatile to prevent that.
		 */
		cfg->ret->flags |= MONO_INST_VOLATILE;
	}

	MONO_INST_NEW (cfg, ins, OP_BR);
	ins->inst_target_bb = cfg->bb_exit;
	MONO_ADD_INS (cfg->cbb, ins);
	link_bblock (cfg, cfg->cbb, cfg->bb_exit);
}

typedef union _MonoOpcodeParameter {
	gint32 i32;
	gint64 i64;
	float f;
	double d;
	guchar *branch_target;
} MonoOpcodeParameter;

typedef struct _MonoOpcodeInfo {
	guint constant : 4; // private
	gint pops : 3;      // public -1 means variable
	gint pushes : 3;    // public -1 means variable
} MonoOpcodeInfo;

static const MonoOpcodeInfo*
mono_opcode_decode (guchar *ip, guint op_size, MonoOpcodeEnum il_op, MonoOpcodeParameter *parameter)
{
#define Push0 (0)
#define Pop0 (0)
#define Push1 (1)
#define Pop1 (1)
#define PushI (1)
#define PopI (1)
#define PushI8 (1)
#define PopI8 (1)
#define PushRef (1)
#define PopRef (1)
#define PushR4 (1)
#define PopR4 (1)
#define PushR8 (1)
#define PopR8 (1)
#define VarPush (-1)
#define VarPop (-1)

	static const MonoOpcodeInfo mono_opcode_info [ ] = {
#define OPDEF(name, str, pops, pushes, param, param_constant, a, b, c, flow) {param_constant + 1, pops, pushes },
#include "mono/cil/opcode.def"
#undef OPDEF
	};

#undef Push0
#undef Pop0
#undef Push1
#undef Pop1
#undef PushI
#undef PopI
#undef PushI8
#undef PopI8
#undef PushRef
#undef PopRef
#undef PushR4
#undef PopR4
#undef PushR8
#undef PopR8
#undef VarPush
#undef VarPop

	gint32 delta;
	guchar *next_ip = ip + op_size;

	const MonoOpcodeInfo *info = &mono_opcode_info [il_op];

	switch (mono_opcodes [il_op].argument) {
	case MonoInlineNone:
		parameter->i32 = (int)info->constant - 1;
		break;
	case MonoInlineString:
	case MonoInlineType:
	case MonoInlineField:
	case MonoInlineMethod:
	case MonoInlineTok:
	case MonoInlineSig:
	case MonoShortInlineR:
	case MonoInlineI:
		parameter->i32 = read32 (next_ip - 4); // FIXME check token type?
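		// All of the formats in the case above carry a 32-bit operand in the
		// last four bytes of the instruction, so reading at (next_ip - 4) is
		// independent of the opcode's total length.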
		break;
	case MonoShortInlineI:
		parameter->i32 = (signed char)next_ip [-1];
		break;
	case MonoInlineVar:
		parameter->i32 = read16 (next_ip - 2);
		break;
	case MonoShortInlineVar:
		parameter->i32 = next_ip [-1];
		break;
	case MonoInlineR:
	case MonoInlineI8:
		parameter->i64 = read64 (next_ip - 8);
		break;
	case MonoShortInlineBrTarget:
		delta = (signed char)next_ip [-1];
		goto branch_target;
	case MonoInlineBrTarget:
		delta = (gint32)read32 (next_ip - 4);
branch_target:
		parameter->branch_target = delta + next_ip;
		break;
	case MonoInlineSwitch: // complicated
		break;
	default:
		g_error ("%s %d %d\n", __func__, il_op, mono_opcodes [il_op].argument);
	}
	return info;
}

/*
 * mono_method_to_ir:
 *
 * Translate the .net IL into linear IR.
 *
 * @start_bblock: if not NULL, the starting basic block, used during inlining.
 * @end_bblock: if not NULL, the ending basic block, used during inlining.
 * @return_var: if not NULL, the place where the return value is stored, used during inlining.
 * @inline_args: if not NULL, contains the arguments to the inline call
 * @inline_offset: if not zero, the real offset from the inline call, or zero otherwise.
 * @is_virtual_call: whether this method is being called as a result of a call to callvirt
 *
 * This method is used to turn ECMA IL into Mono's internal Linear IR
 * representation. It is used both for entire methods and for
 * inlining existing methods. In the former case, the @start_bblock,
 * @end_bblock, @return_var, @inline_args are all set to NULL, and the
 * inline_offset is set to zero.
 *
 * Returns: the inline cost, or -1 if there was an error processing this method.
 */
int
mono_method_to_ir (MonoCompile *cfg, MonoMethod *method, MonoBasicBlock *start_bblock, MonoBasicBlock *end_bblock, MonoInst *return_var, MonoInst **inline_args, guint inline_offset, gboolean is_virtual_call)
{
	ERROR_DECL (error);
	// Buffer to hold parameters to mono_new_array, instead of varargs.
MonoInst *array_new_localalloc_ins = NULL; MonoInst *ins, **sp, **stack_start; MonoBasicBlock *tblock = NULL; MonoBasicBlock *init_localsbb = NULL, *init_localsbb2 = NULL; MonoSimpleBasicBlock *bb = NULL, *original_bb = NULL; MonoMethod *method_definition; MonoInst **arg_array; MonoMethodHeader *header; MonoImage *image; guint32 token, ins_flag; MonoClass *klass; MonoClass *constrained_class = NULL; gboolean save_last_error = FALSE; guchar *ip, *end, *target, *err_pos; MonoMethodSignature *sig; MonoGenericContext *generic_context = NULL; MonoGenericContainer *generic_container = NULL; MonoType **param_types; int i, n, start_new_bblock, dreg; int num_calls = 0, inline_costs = 0; guint num_args; GSList *class_inits = NULL; gboolean dont_verify, dont_verify_stloc, readonly = FALSE; int context_used; gboolean init_locals, seq_points, skip_dead_blocks; gboolean sym_seq_points = FALSE; MonoDebugMethodInfo *minfo; MonoBitSet *seq_point_locs = NULL; MonoBitSet *seq_point_set_locs = NULL; const char *ovf_exc = NULL; gboolean emitted_funccall_seq_point = FALSE; gboolean detached_before_ret = FALSE; gboolean ins_has_side_effect; if (!cfg->disable_inline) cfg->disable_inline = (method->iflags & METHOD_IMPL_ATTRIBUTE_NOOPTIMIZATION) || is_jit_optimizer_disabled (method); cfg->current_method = method; image = m_class_get_image (method->klass); /* serialization and xdomain stuff may need access to private fields and methods */ dont_verify = FALSE; dont_verify |= method->wrapper_type == MONO_WRAPPER_MANAGED_TO_NATIVE; /* bug #77896 */ dont_verify |= method->wrapper_type == MONO_WRAPPER_COMINTEROP; dont_verify |= method->wrapper_type == MONO_WRAPPER_COMINTEROP_INVOKE; /* still some type unsafety issues in marshal wrappers... (unknown is PtrToStructure) */ dont_verify_stloc = method->wrapper_type == MONO_WRAPPER_MANAGED_TO_NATIVE; dont_verify_stloc |= method->wrapper_type == MONO_WRAPPER_OTHER; dont_verify_stloc |= method->wrapper_type == MONO_WRAPPER_NATIVE_TO_MANAGED; dont_verify_stloc |= method->wrapper_type == MONO_WRAPPER_STELEMREF; header = mono_method_get_header_checked (method, cfg->error); if (!header) { mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR); goto exception_exit; } else { cfg->headers_to_free = g_slist_prepend_mempool (cfg->mempool, cfg->headers_to_free, header); } generic_container = mono_method_get_generic_container (method); sig = mono_method_signature_internal (method); num_args = sig->hasthis + sig->param_count; ip = (guchar*)header->code; cfg->cil_start = ip; end = ip + header->code_size; cfg->stat_cil_code_size += header->code_size; seq_points = cfg->gen_seq_points && cfg->method == method; if (method->wrapper_type == MONO_WRAPPER_NATIVE_TO_MANAGED) { /* We could hit a seq point before attaching to the JIT (#8338) */ seq_points = FALSE; } if (method->wrapper_type == MONO_WRAPPER_OTHER) { WrapperInfo *info = mono_marshal_get_wrapper_info (method); if (info->subtype == WRAPPER_SUBTYPE_INTERP_IN) { /* We could hit a seq point before attaching to the JIT (#8338) */ seq_points = FALSE; } } if (cfg->prof_coverage) { if (cfg->compile_aot) g_error ("Coverage profiling is not supported with AOT."); INLINE_FAILURE ("coverage profiling"); cfg->coverage_info = mono_profiler_coverage_alloc (cfg->method, header->code_size); } if ((cfg->gen_sdb_seq_points && cfg->method == method) || cfg->prof_coverage) { minfo = mono_debug_lookup_method (method); if (minfo) { MonoSymSeqPoint *sps; int i, n_il_offsets; mono_debug_get_seq_points (minfo, NULL, NULL, NULL, &sps, &n_il_offsets); 
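			/* Record each sequence point's IL offset in a bitset sized to the
			   method's IL so membership tests while emitting IR are O(1). */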
seq_point_locs = mono_bitset_mem_new (mono_mempool_alloc0 (cfg->mempool, mono_bitset_alloc_size (header->code_size, 0)), header->code_size, 0); seq_point_set_locs = mono_bitset_mem_new (mono_mempool_alloc0 (cfg->mempool, mono_bitset_alloc_size (header->code_size, 0)), header->code_size, 0); sym_seq_points = TRUE; for (i = 0; i < n_il_offsets; ++i) { if (sps [i].il_offset < header->code_size) mono_bitset_set_fast (seq_point_locs, sps [i].il_offset); } g_free (sps); MonoDebugMethodAsyncInfo* asyncMethod = mono_debug_lookup_method_async_debug_info (method); if (asyncMethod) { for (i = 0; asyncMethod != NULL && i < asyncMethod->num_awaits; i++) { mono_bitset_set_fast (seq_point_locs, asyncMethod->resume_offsets[i]); mono_bitset_set_fast (seq_point_locs, asyncMethod->yield_offsets[i]); } mono_debug_free_method_async_debug_info (asyncMethod); } } else if (!method->wrapper_type && !method->dynamic && mono_debug_image_has_debug_info (m_class_get_image (method->klass))) { /* Methods without line number info like auto-generated property accessors */ seq_point_locs = mono_bitset_mem_new (mono_mempool_alloc0 (cfg->mempool, mono_bitset_alloc_size (header->code_size, 0)), header->code_size, 0); seq_point_set_locs = mono_bitset_mem_new (mono_mempool_alloc0 (cfg->mempool, mono_bitset_alloc_size (header->code_size, 0)), header->code_size, 0); sym_seq_points = TRUE; } } /* * Methods without init_locals set could cause asserts in various passes * (#497220). To work around this, we emit dummy initialization opcodes * (OP_DUMMY_ICONST etc.) which generate no code. These are only supported * on some platforms. */ if (cfg->opt & MONO_OPT_UNSAFE) init_locals = header->init_locals; else init_locals = TRUE; method_definition = method; while (method_definition->is_inflated) { MonoMethodInflated *imethod = (MonoMethodInflated *) method_definition; method_definition = imethod->declaring; } if (sig->is_inflated) generic_context = mono_method_get_context (method); else if (generic_container) generic_context = &generic_container->context; cfg->generic_context = generic_context; if (!cfg->gshared) g_assert (!sig->has_type_parameters); if (sig->generic_param_count && method->wrapper_type == MONO_WRAPPER_NONE) { g_assert (method->is_inflated); g_assert (mono_method_get_context (method)->method_inst); } if (method->is_inflated && mono_method_get_context (method)->method_inst) g_assert (sig->generic_param_count); if (cfg->method == method) { cfg->real_offset = 0; } else { cfg->real_offset = inline_offset; } cfg->cil_offset_to_bb = (MonoBasicBlock **)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoBasicBlock*) * header->code_size); cfg->cil_offset_to_bb_len = header->code_size; if (cfg->verbose_level > 2) printf ("method to IR %s\n", mono_method_full_name (method, TRUE)); param_types = (MonoType **)mono_mempool_alloc (cfg->mempool, sizeof (MonoType*) * num_args); if (sig->hasthis) param_types [0] = m_class_is_valuetype (method->klass) ? 
m_class_get_this_arg (method->klass) : m_class_get_byval_arg (method->klass); for (n = 0; n < sig->param_count; ++n) param_types [n + sig->hasthis] = sig->params [n]; cfg->arg_types = param_types; cfg->dont_inline = g_list_prepend (cfg->dont_inline, method); if (cfg->method == method) { /* ENTRY BLOCK */ NEW_BBLOCK (cfg, start_bblock); cfg->bb_entry = start_bblock; start_bblock->cil_code = NULL; start_bblock->cil_length = 0; /* EXIT BLOCK */ NEW_BBLOCK (cfg, end_bblock); cfg->bb_exit = end_bblock; end_bblock->cil_code = NULL; end_bblock->cil_length = 0; end_bblock->flags |= BB_INDIRECT_JUMP_TARGET; g_assert (cfg->num_bblocks == 2); arg_array = cfg->args; if (header->num_clauses) { cfg->spvars = g_hash_table_new (NULL, NULL); cfg->exvars = g_hash_table_new (NULL, NULL); } cfg->clause_is_dead = mono_mempool_alloc0 (cfg->mempool, sizeof (gboolean) * header->num_clauses); /* handle exception clauses */ for (i = 0; i < header->num_clauses; ++i) { MonoBasicBlock *try_bb; MonoExceptionClause *clause = &header->clauses [i]; GET_BBLOCK (cfg, try_bb, ip + clause->try_offset); try_bb->real_offset = clause->try_offset; try_bb->try_start = TRUE; GET_BBLOCK (cfg, tblock, ip + clause->handler_offset); tblock->real_offset = clause->handler_offset; tblock->flags |= BB_EXCEPTION_HANDLER; if (clause->flags == MONO_EXCEPTION_CLAUSE_FINALLY) mono_create_exvar_for_offset (cfg, clause->handler_offset); /* * Linking the try block with the EH block hinders inlining as we won't be able to * merge the bblocks from inlining and produce an artificial hole for no good reason. */ if (COMPILE_LLVM (cfg)) link_bblock (cfg, try_bb, tblock); if (*(ip + clause->handler_offset) == CEE_POP) tblock->flags |= BB_EXCEPTION_DEAD_OBJ; if (clause->flags == MONO_EXCEPTION_CLAUSE_FINALLY || clause->flags == MONO_EXCEPTION_CLAUSE_FILTER || clause->flags == MONO_EXCEPTION_CLAUSE_FAULT) { MONO_INST_NEW (cfg, ins, OP_START_HANDLER); MONO_ADD_INS (tblock, ins); if (seq_points && clause->flags != MONO_EXCEPTION_CLAUSE_FINALLY && clause->flags != MONO_EXCEPTION_CLAUSE_FILTER) { /* finally clauses already have a seq point */ /* seq points for filter clauses are emitted below */ NEW_SEQ_POINT (cfg, ins, clause->handler_offset, TRUE); MONO_ADD_INS (tblock, ins); } /* todo: is a fault block unsafe to optimize? 
*/ if (clause->flags == MONO_EXCEPTION_CLAUSE_FAULT) tblock->flags |= BB_EXCEPTION_UNSAFE; } /*printf ("clause try IL_%04x to IL_%04x handler %d at IL_%04x to IL_%04x\n", clause->try_offset, clause->try_offset + clause->try_len, clause->flags, clause->handler_offset, clause->handler_offset + clause->handler_len); while (p < end) { printf ("%s", mono_disasm_code_one (NULL, method, p, &p)); }*/ /* catch and filter blocks get the exception object on the stack */ if (clause->flags == MONO_EXCEPTION_CLAUSE_NONE || clause->flags == MONO_EXCEPTION_CLAUSE_FILTER) { /* mostly like handle_stack_args (), but just sets the input args */ /* printf ("handling clause at IL_%04x\n", clause->handler_offset); */ tblock->in_scount = 1; tblock->in_stack = (MonoInst **)mono_mempool_alloc (cfg->mempool, sizeof (MonoInst*)); tblock->in_stack [0] = mono_create_exvar_for_offset (cfg, clause->handler_offset); cfg->cbb = tblock; #ifdef MONO_CONTEXT_SET_LLVM_EXC_REG /* The EH code passes in the exception in a register to both JITted and LLVM compiled code */ if (!cfg->compile_llvm) { MONO_INST_NEW (cfg, ins, OP_GET_EX_OBJ); ins->dreg = tblock->in_stack [0]->dreg; MONO_ADD_INS (tblock, ins); } #else MonoInst *dummy_use; /* * Add a dummy use for the exvar so its liveness info will be * correct. */ EMIT_NEW_DUMMY_USE (cfg, dummy_use, tblock->in_stack [0]); #endif if (seq_points && clause->flags == MONO_EXCEPTION_CLAUSE_FILTER) { NEW_SEQ_POINT (cfg, ins, clause->handler_offset, TRUE); MONO_ADD_INS (tblock, ins); } if (clause->flags == MONO_EXCEPTION_CLAUSE_FILTER) { GET_BBLOCK (cfg, tblock, ip + clause->data.filter_offset); tblock->flags |= BB_EXCEPTION_HANDLER; tblock->real_offset = clause->data.filter_offset; tblock->in_scount = 1; tblock->in_stack = (MonoInst **)mono_mempool_alloc (cfg->mempool, sizeof (MonoInst*)); /* The filter block shares the exvar with the handler block */ tblock->in_stack [0] = mono_create_exvar_for_offset (cfg, clause->handler_offset); MONO_INST_NEW (cfg, ins, OP_START_HANDLER); MONO_ADD_INS (tblock, ins); } } if (clause->flags != MONO_EXCEPTION_CLAUSE_FILTER && clause->data.catch_class && cfg->gshared && mono_class_check_context_used (clause->data.catch_class)) { /* * In shared generic code with catch * clauses containing type variables * the exception handling code has to * be able to get to the rgctx. * Therefore we have to make sure that * the vtable/mrgctx argument (for * static or generic methods) or the * "this" argument (for non-static * methods) are live. */ if ((method->flags & METHOD_ATTRIBUTE_STATIC) || mini_method_get_context (method)->method_inst || m_class_is_valuetype (method->klass)) { mono_get_vtable_var (cfg); } else { MonoInst *dummy_use; EMIT_NEW_DUMMY_USE (cfg, dummy_use, arg_array [0]); } } } } else { arg_array = g_newa (MonoInst*, num_args); cfg->cbb = start_bblock; cfg->args = arg_array; mono_save_args (cfg, sig, inline_args); } if (cfg->method == method && cfg->self_init && cfg->compile_aot && !COMPILE_LLVM (cfg)) { MonoMethod *wrapper; MonoInst *args [2]; int idx; /* * Emit code to initialize this method by calling the init wrapper emitted by LLVM. * This is not efficient right now, but its only used for the methods which fail * LLVM compilation. 
* FIXME: Optimize this */ g_assert (!cfg->gshared); wrapper = mono_marshal_get_aot_init_wrapper (AOT_INIT_METHOD); /* Emit this into the entry bb so it comes before the GC safe point which depends on an inited GOT */ cfg->cbb = cfg->bb_entry; idx = mono_aot_get_method_index (cfg->method); EMIT_NEW_ICONST (cfg, args [0], idx); /* Dummy */ EMIT_NEW_ICONST (cfg, args [1], 0); mono_emit_method_call (cfg, wrapper, args, NULL); } if (cfg->llvm_only && cfg->interp && cfg->method == method && !cfg->deopt) { if (header->num_clauses) { for (int i = 0; i < header->num_clauses; ++i) { MonoExceptionClause *clause = &header->clauses [i]; /* Finally clauses are checked after the remove_finally pass */ if (clause->flags != MONO_EXCEPTION_CLAUSE_FINALLY) cfg->interp_entry_only = TRUE; } } } /* we use a separate basic block for the initialization code */ NEW_BBLOCK (cfg, init_localsbb); if (cfg->method == method) cfg->bb_init = init_localsbb; init_localsbb->real_offset = cfg->real_offset; start_bblock->next_bb = init_localsbb; link_bblock (cfg, start_bblock, init_localsbb); init_localsbb2 = init_localsbb; cfg->cbb = init_localsbb; if (cfg->gsharedvt && cfg->method == method) { MonoGSharedVtMethodInfo *info; MonoInst *var, *locals_var; int dreg; info = (MonoGSharedVtMethodInfo *)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoGSharedVtMethodInfo)); info->method = cfg->method; info->count_entries = 16; info->entries = (MonoRuntimeGenericContextInfoTemplate *)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoRuntimeGenericContextInfoTemplate) * info->count_entries); cfg->gsharedvt_info = info; var = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL); /* prevent it from being register allocated */ //var->flags |= MONO_INST_VOLATILE; cfg->gsharedvt_info_var = var; ins = emit_get_rgctx_gsharedvt_method (cfg, mini_method_check_context_used (cfg, method), method, info); MONO_EMIT_NEW_UNALU (cfg, OP_MOVE, var->dreg, ins->dreg); /* Allocate locals */ locals_var = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL); /* prevent it from being register allocated */ //locals_var->flags |= MONO_INST_VOLATILE; cfg->gsharedvt_locals_var = locals_var; dreg = alloc_ireg (cfg); MONO_EMIT_NEW_LOAD_MEMBASE_OP (cfg, OP_LOADI4_MEMBASE, dreg, var->dreg, MONO_STRUCT_OFFSET (MonoGSharedVtMethodRuntimeInfo, locals_size)); MONO_INST_NEW (cfg, ins, OP_LOCALLOC); ins->dreg = locals_var->dreg; ins->sreg1 = dreg; MONO_ADD_INS (cfg->cbb, ins); cfg->gsharedvt_locals_var_ins = ins; cfg->flags |= MONO_CFG_HAS_ALLOCA; /* if (init_locals) ins->flags |= MONO_INST_INIT; */ if (cfg->llvm_only) { init_localsbb = cfg->cbb; init_localsbb2 = cfg->cbb; } } if (cfg->deopt) { /* * Push an LMFExt frame which points to a MonoMethodILState structure. 
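	 * (Illustrative note, not in the original: this records enough IL-level
	 * state that the runtime can later transition this frame to the
	 * interpreter if the method has to deoptimize.)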
*/ emit_push_lmf (cfg); /* The type doesn't matter, the llvm backend will use the correct type */ MonoInst *il_state_var = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL); il_state_var->flags |= MONO_INST_VOLATILE; cfg->il_state_var = il_state_var; EMIT_NEW_VARLOADA (cfg, ins, cfg->il_state_var, NULL); int il_state_addr_reg = ins->dreg; /* il_state->method = method */ MonoInst *method_ins = emit_get_rgctx_method (cfg, -1, cfg->method, MONO_RGCTX_INFO_METHOD); MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, il_state_addr_reg, MONO_STRUCT_OFFSET (MonoMethodILState, method), method_ins->dreg); EMIT_NEW_VARLOADA (cfg, ins, cfg->lmf_var, NULL); int lmf_reg = ins->dreg; /* lmf->kind = MONO_LMFEXT_IL_STATE */ MONO_EMIT_NEW_STORE_MEMBASE_IMM (cfg, OP_STOREI4_MEMBASE_IMM, lmf_reg, MONO_STRUCT_OFFSET (MonoLMFExt, kind), MONO_LMFEXT_IL_STATE); /* lmf->il_state = il_state */ MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, lmf_reg, MONO_STRUCT_OFFSET (MonoLMFExt, il_state), il_state_addr_reg); /* emit_get_rgctx_method () might create new bblocks */ if (cfg->llvm_only) { init_localsbb = cfg->cbb; init_localsbb2 = cfg->cbb; } } if (cfg->llvm_only && cfg->interp && cfg->method == method) { if (cfg->interp_entry_only) emit_llvmonly_interp_entry (cfg, header); } /* FIRST CODE BLOCK */ NEW_BBLOCK (cfg, tblock); tblock->cil_code = ip; cfg->cbb = tblock; cfg->ip = ip; init_localsbb->next_bb = cfg->cbb; link_bblock (cfg, init_localsbb, cfg->cbb); ADD_BBLOCK (cfg, tblock); CHECK_CFG_EXCEPTION; if (header->code_size == 0) UNVERIFIED; if (get_basic_blocks (cfg, header, cfg->real_offset, ip, end, &err_pos)) { ip = err_pos; UNVERIFIED; } if (cfg->method == method) { int breakpoint_id = mono_debugger_method_has_breakpoint (method); if (breakpoint_id) { MONO_INST_NEW (cfg, ins, OP_BREAK); MONO_ADD_INS (cfg->cbb, ins); } mono_debug_init_method (cfg, cfg->cbb, breakpoint_id); } for (n = 0; n < header->num_locals; ++n) { if (header->locals [n]->type == MONO_TYPE_VOID && !m_type_is_byref (header->locals [n])) UNVERIFIED; } class_inits = NULL; /* We force the vtable variable here for all shared methods for the possibility that they might show up in a stack trace where their exact instantiation is needed. */ if (cfg->gshared && method == cfg->method) { if ((method->flags & METHOD_ATTRIBUTE_STATIC) || mini_method_get_context (method)->method_inst || m_class_is_valuetype (method->klass)) { mono_get_vtable_var (cfg); } else { /* FIXME: Is there a better way to do this? We need the variable live for the duration of the whole method. 
*/ cfg->args [0]->flags |= MONO_INST_VOLATILE; } } /* add a check for this != NULL to inlined methods */ if (is_virtual_call) { MonoInst *arg_ins; // // This is just a hack to avoid checks in empty methods which could get inlined // into finally clauses preventing the removal of empty finally clauses, since all // variables in finally clauses are marked volatile so the check can't be removed // if (!(cfg->llvm_only && m_class_is_valuetype (method->klass) && header->code_size == 1 && header->code [0] == CEE_RET)) { NEW_ARGLOAD (cfg, arg_ins, 0); MONO_ADD_INS (cfg->cbb, arg_ins); MONO_EMIT_NEW_CHECK_THIS (cfg, arg_ins->dreg); } } skip_dead_blocks = !dont_verify; if (skip_dead_blocks) { original_bb = bb = mono_basic_block_split (method, cfg->error, header); CHECK_CFG_ERROR; g_assert (bb); } /* we use a spare stack slot in SWITCH and NEWOBJ and others */ stack_start = sp = (MonoInst **)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoInst*) * (header->max_stack + 1)); ins_flag = 0; start_new_bblock = 0; MonoOpcodeEnum il_op; il_op = MonoOpcodeEnum_Invalid; emit_set_deopt_il_offset (cfg, ip - cfg->cil_start); for (guchar *next_ip = ip; ip < end; ip = next_ip) { MonoOpcodeEnum previous_il_op = il_op; const guchar *tmp_ip = ip; const int op_size = mono_opcode_value_and_size (&tmp_ip, end, &il_op); CHECK_OPSIZE (op_size); next_ip += op_size; if (cfg->method == method) cfg->real_offset = ip - header->code; else cfg->real_offset = inline_offset; cfg->ip = ip; context_used = 0; if (start_new_bblock) { cfg->cbb->cil_length = ip - cfg->cbb->cil_code; if (start_new_bblock == 2) { g_assert (ip == tblock->cil_code); } else { GET_BBLOCK (cfg, tblock, ip); } cfg->cbb->next_bb = tblock; cfg->cbb = tblock; start_new_bblock = 0; for (i = 0; i < cfg->cbb->in_scount; ++i) { if (cfg->verbose_level > 3) printf ("loading %d from temp %d\n", i, (int)cfg->cbb->in_stack [i]->inst_c0); EMIT_NEW_TEMPLOAD (cfg, ins, cfg->cbb->in_stack [i]->inst_c0); *sp++ = ins; } if (class_inits) g_slist_free (class_inits); class_inits = NULL; emit_set_deopt_il_offset (cfg, ip - cfg->cil_start); } else { if ((tblock = cfg->cil_offset_to_bb [ip - cfg->cil_start]) && (tblock != cfg->cbb)) { link_bblock (cfg, cfg->cbb, tblock); if (sp != stack_start) { handle_stack_args (cfg, stack_start, sp - stack_start); sp = stack_start; CHECK_UNVERIFIABLE (cfg); } cfg->cbb->next_bb = tblock; cfg->cbb = tblock; for (i = 0; i < cfg->cbb->in_scount; ++i) { if (cfg->verbose_level > 3) printf ("loading %d from temp %d\n", i, (int)cfg->cbb->in_stack [i]->inst_c0); EMIT_NEW_TEMPLOAD (cfg, ins, cfg->cbb->in_stack [i]->inst_c0); *sp++ = ins; } g_slist_free (class_inits); class_inits = NULL; emit_set_deopt_il_offset (cfg, ip - cfg->cil_start); } } /* * Methods with AggressiveInline flag could be inlined even if the class has a cctor. * This might create a branch so emit it in the first code bblock instead of into initlocals_bb. 
*/ if (ip - header->code == 0 && cfg->method != method && cfg->compile_aot && (method->iflags & METHOD_IMPL_ATTRIBUTE_AGGRESSIVE_INLINING) && mono_class_needs_cctor_run (method->klass, method)) { emit_class_init (cfg, method->klass); } if (skip_dead_blocks) { int ip_offset = ip - header->code; if (ip_offset == bb->end) bb = bb->next; if (bb->dead) { g_assert (op_size > 0); /*The BB formation pass must catch all bad ops*/ if (cfg->verbose_level > 3) printf ("SKIPPING DEAD OP at %x\n", ip_offset); if (ip_offset + op_size == bb->end) { MONO_INST_NEW (cfg, ins, OP_NOP); MONO_ADD_INS (cfg->cbb, ins); start_new_bblock = 1; } continue; } } /* * Sequence points are points where the debugger can place a breakpoint. * Currently, we generate these automatically at points where the IL * stack is empty. */ if (seq_points && ((!sym_seq_points && (sp == stack_start)) || (sym_seq_points && mono_bitset_test_fast (seq_point_locs, ip - header->code)))) { /* * Make methods interruptible at the beginning, and at the targets of * backward branches. * Also, do this at the start of every bblock in methods with clauses too, * to be able to handle instructions with imprecise control flow like * throw/endfinally. * Backward branches are handled at the end of method-to-ir (). */ gboolean intr_loc = ip == header->code || (!cfg->cbb->last_ins && cfg->header->num_clauses); gboolean sym_seq_point = sym_seq_points && mono_bitset_test_fast (seq_point_locs, ip - header->code); /* Avoid sequence points on empty IL like .volatile */ // FIXME: Enable this //if (!(cfg->cbb->last_ins && cfg->cbb->last_ins->opcode == OP_SEQ_POINT)) { NEW_SEQ_POINT (cfg, ins, ip - header->code, intr_loc); if ((sp != stack_start) && !sym_seq_point) ins->flags |= MONO_INST_NONEMPTY_STACK; MONO_ADD_INS (cfg->cbb, ins); if (sym_seq_points) mono_bitset_set_fast (seq_point_set_locs, ip - header->code); if (cfg->prof_coverage) { guint32 cil_offset = ip - header->code; gpointer counter = &cfg->coverage_info->data [cil_offset].count; cfg->coverage_info->data [cil_offset].cil_code = ip; if (mono_arch_opcode_supported (OP_ATOMIC_ADD_I4)) { MonoInst *one_ins, *load_ins; EMIT_NEW_PCONST (cfg, load_ins, counter); EMIT_NEW_ICONST (cfg, one_ins, 1); MONO_INST_NEW (cfg, ins, OP_ATOMIC_ADD_I4); ins->dreg = mono_alloc_ireg (cfg); ins->inst_basereg = load_ins->dreg; ins->inst_offset = 0; ins->sreg2 = one_ins->dreg; ins->type = STACK_I4; MONO_ADD_INS (cfg->cbb, ins); } else { EMIT_NEW_PCONST (cfg, ins, counter); MONO_EMIT_NEW_STORE_MEMBASE_IMM (cfg, OP_STORE_MEMBASE_IMM, ins->dreg, 0, 1); } } } cfg->cbb->real_offset = cfg->real_offset; if (cfg->verbose_level > 3) printf ("converting (in B%d: stack: %d) %s", cfg->cbb->block_num, (int)(sp - stack_start), mono_disasm_code_one (NULL, method, ip, NULL)); /* * This is used to compute BB_HAS_SIDE_EFFECTS, which is used for the elimination of * finally clauses generated by foreach statements, so only IL opcodes which occur in such clauses * need to set this. */ ins_has_side_effect = TRUE; // Variables shared by CEE_CALLI CEE_CALL CEE_CALLVIRT CEE_JMP. // Initialize to either what they all need or zero. gboolean emit_widen = TRUE; gboolean tailcall = FALSE; gboolean common_call = FALSE; MonoInst *keep_this_alive = NULL; MonoMethod *cmethod = NULL; MonoMethodSignature *fsig = NULL; // These are used only in CALL/CALLVIRT but must be initialized also for CALLI, // since it jumps into CALL/CALLVIRT.
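/* For illustration: a fixed-arity opcode such as "add" decodes with pops == 2 and pushes == 1 below, so pushes - pops == -1 and only CHECK_STACK (pops) is needed; variable-arity opcodes (call/ret/newobj, ...) carry -1 and do their own checks in their case handlers. */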
gboolean need_seq_point = FALSE; gboolean push_res = TRUE; gboolean skip_ret = FALSE; gboolean tailcall_remove_ret = FALSE; // FIXME split 500 lines load/store field into separate file/function. MonoOpcodeParameter parameter; const MonoOpcodeInfo* info = mono_opcode_decode (ip, op_size, il_op, &parameter); g_assert (info); n = parameter.i32; token = parameter.i32; target = parameter.branch_target; // Check stack size for push/pop except variable cases -- -1 like call/ret/newobj. const int pushes = info->pushes; const int pops = info->pops; if (pushes >= 0 && pops >= 0) { g_assert (pushes - pops <= 1); if (pushes - pops == 1) CHECK_STACK_OVF (); } if (pops >= 0) CHECK_STACK (pops); switch (il_op) { case MONO_CEE_NOP: if (seq_points && !sym_seq_points && sp != stack_start) { /* * The C# compiler uses these nops to notify the JIT that it should * insert seq points. */ NEW_SEQ_POINT (cfg, ins, ip - header->code, FALSE); MONO_ADD_INS (cfg->cbb, ins); } if (cfg->keep_cil_nops) MONO_INST_NEW (cfg, ins, OP_HARD_NOP); else MONO_INST_NEW (cfg, ins, OP_NOP); MONO_ADD_INS (cfg->cbb, ins); emitted_funccall_seq_point = FALSE; ins_has_side_effect = FALSE; break; case MONO_CEE_BREAK: if (mini_should_insert_breakpoint (cfg->method)) { ins = mono_emit_jit_icall (cfg, mono_debugger_agent_user_break, NULL); } else { MONO_INST_NEW (cfg, ins, OP_NOP); MONO_ADD_INS (cfg->cbb, ins); } break; case MONO_CEE_LDARG_0: case MONO_CEE_LDARG_1: case MONO_CEE_LDARG_2: case MONO_CEE_LDARG_3: case MONO_CEE_LDARG_S: case MONO_CEE_LDARG: CHECK_ARG (n); if (next_ip < end && is_addressable_valuetype_load (cfg, next_ip, cfg->arg_types[n])) { EMIT_NEW_ARGLOADA (cfg, ins, n); } else { EMIT_NEW_ARGLOAD (cfg, ins, n); } *sp++ = ins; break; case MONO_CEE_LDLOC_0: case MONO_CEE_LDLOC_1: case MONO_CEE_LDLOC_2: case MONO_CEE_LDLOC_3: case MONO_CEE_LDLOC_S: case MONO_CEE_LDLOC: CHECK_LOCAL (n); if (next_ip < end && is_addressable_valuetype_load (cfg, next_ip, header->locals[n])) { EMIT_NEW_LOCLOADA (cfg, ins, n); } else { EMIT_NEW_LOCLOAD (cfg, ins, n); } *sp++ = ins; break; case MONO_CEE_STLOC_0: case MONO_CEE_STLOC_1: case MONO_CEE_STLOC_2: case MONO_CEE_STLOC_3: case MONO_CEE_STLOC_S: case MONO_CEE_STLOC: CHECK_LOCAL (n); --sp; *sp = convert_value (cfg, header->locals [n], *sp); if (!dont_verify_stloc && target_type_is_incompatible (cfg, header->locals [n], *sp)) UNVERIFIED; emit_stloc_ir (cfg, sp, header, n); inline_costs += 1; break; case MONO_CEE_LDARGA_S: case MONO_CEE_LDARGA: CHECK_ARG (n); NEW_ARGLOADA (cfg, ins, n); MONO_ADD_INS (cfg->cbb, ins); *sp++ = ins; break; case MONO_CEE_STARG_S: case MONO_CEE_STARG: --sp; CHECK_ARG (n); *sp = convert_value (cfg, param_types [n], *sp); if (!dont_verify_stloc && target_type_is_incompatible (cfg, param_types [n], *sp)) UNVERIFIED; emit_starg_ir (cfg, sp, n); break; case MONO_CEE_LDLOCA: case MONO_CEE_LDLOCA_S: { guchar *tmp_ip; CHECK_LOCAL (n); if ((tmp_ip = emit_optimized_ldloca_ir (cfg, next_ip, end, n))) { next_ip = tmp_ip; il_op = MONO_CEE_INITOBJ; inline_costs += 1; break; } ins_has_side_effect = FALSE; EMIT_NEW_LOCLOADA (cfg, ins, n); *sp++ = ins; break; } case MONO_CEE_LDNULL: EMIT_NEW_PCONST (cfg, ins, NULL); ins->type = STACK_OBJ; *sp++ = ins; break; case MONO_CEE_LDC_I4_M1: case MONO_CEE_LDC_I4_0: case MONO_CEE_LDC_I4_1: case MONO_CEE_LDC_I4_2: case MONO_CEE_LDC_I4_3: case MONO_CEE_LDC_I4_4: case MONO_CEE_LDC_I4_5: case MONO_CEE_LDC_I4_6: case MONO_CEE_LDC_I4_7: case MONO_CEE_LDC_I4_8: case MONO_CEE_LDC_I4_S: case MONO_CEE_LDC_I4: EMIT_NEW_ICONST (cfg, ins, n); *sp++ = ins; 
break; case MONO_CEE_LDC_I8: MONO_INST_NEW (cfg, ins, OP_I8CONST); ins->type = STACK_I8; ins->dreg = alloc_dreg (cfg, STACK_I8); ins->inst_l = parameter.i64; MONO_ADD_INS (cfg->cbb, ins); *sp++ = ins; break; case MONO_CEE_LDC_R4: { float *f; gboolean use_aotconst = FALSE; #ifdef TARGET_POWERPC /* FIXME: Clean this up */ if (cfg->compile_aot) use_aotconst = TRUE; #endif /* FIXME: we should really allocate this only late in the compilation process */ f = (float *)mono_mem_manager_alloc (cfg->mem_manager, sizeof (float)); if (use_aotconst) { MonoInst *cons; int dreg; EMIT_NEW_AOTCONST (cfg, cons, MONO_PATCH_INFO_R4, f); dreg = alloc_freg (cfg); EMIT_NEW_LOAD_MEMBASE (cfg, ins, OP_LOADR4_MEMBASE, dreg, cons->dreg, 0); ins->type = cfg->r4_stack_type; } else { MONO_INST_NEW (cfg, ins, OP_R4CONST); ins->type = cfg->r4_stack_type; ins->dreg = alloc_dreg (cfg, STACK_R8); ins->inst_p0 = f; MONO_ADD_INS (cfg->cbb, ins); } *f = parameter.f; *sp++ = ins; break; } case MONO_CEE_LDC_R8: { double *d; gboolean use_aotconst = FALSE; #ifdef TARGET_POWERPC /* FIXME: Clean this up */ if (cfg->compile_aot) use_aotconst = TRUE; #endif /* FIXME: we should really allocate this only late in the compilation process */ d = (double *)mono_mem_manager_alloc (cfg->mem_manager, sizeof (double)); if (use_aotconst) { MonoInst *cons; int dreg; EMIT_NEW_AOTCONST (cfg, cons, MONO_PATCH_INFO_R8, d); dreg = alloc_freg (cfg); EMIT_NEW_LOAD_MEMBASE (cfg, ins, OP_LOADR8_MEMBASE, dreg, cons->dreg, 0); ins->type = STACK_R8; } else { MONO_INST_NEW (cfg, ins, OP_R8CONST); ins->type = STACK_R8; ins->dreg = alloc_dreg (cfg, STACK_R8); ins->inst_p0 = d; MONO_ADD_INS (cfg->cbb, ins); } *d = parameter.d; *sp++ = ins; break; } case MONO_CEE_DUP: { MonoInst *temp, *store; MonoClass *klass; sp--; ins = *sp; klass = ins->klass; temp = mono_compile_create_var (cfg, type_from_stack_type (ins), OP_LOCAL); EMIT_NEW_TEMPSTORE (cfg, store, temp->inst_c0, ins); EMIT_NEW_TEMPLOAD (cfg, ins, temp->inst_c0); ins->klass = klass; *sp++ = ins; EMIT_NEW_TEMPLOAD (cfg, ins, temp->inst_c0); ins->klass = klass; *sp++ = ins; inline_costs += 2; break; } case MONO_CEE_POP: --sp; #ifdef TARGET_X86 if (sp [0]->type == STACK_R8) /* we need to pop the value from the x86 FP stack */ MONO_EMIT_NEW_UNALU (cfg, OP_X86_FPOP, -1, sp [0]->dreg); #endif break; case MONO_CEE_JMP: { MonoCallInst *call; int i, n; INLINE_FAILURE ("jmp"); GSHAREDVT_FAILURE (il_op); if (stack_start != sp) UNVERIFIED; /* FIXME: check the signature matches */ cmethod = mini_get_method (cfg, method, token, NULL, generic_context); CHECK_CFG_ERROR; if (cfg->gshared && mono_method_check_context_used (cmethod)) GENERIC_SHARING_FAILURE (CEE_JMP); mini_profiler_emit_tail_call (cfg, cmethod); fsig = mono_method_signature_internal (cmethod); n = fsig->param_count + fsig->hasthis; if (cfg->llvm_only) { MonoInst **args; args = (MonoInst **)mono_mempool_alloc (cfg->mempool, sizeof (MonoInst*) * n); for (i = 0; i < n; ++i) EMIT_NEW_ARGLOAD (cfg, args [i], i); ins = mini_emit_method_call_full (cfg, cmethod, fsig, TRUE, args, NULL, NULL, NULL); /* * The code in mono-basic-block.c treats the rest of the code as dead, but we * have to emit a normal return since llvm expects it. 
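* For non-void signatures, emit_setret () below stores the call result into the return value before the branch to end_bblock.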
*/ if (cfg->ret) emit_setret (cfg, ins); MONO_INST_NEW (cfg, ins, OP_BR); ins->inst_target_bb = end_bblock; MONO_ADD_INS (cfg->cbb, ins); link_bblock (cfg, cfg->cbb, end_bblock); break; } else { /* Handle tailcalls similarly to calls */ DISABLE_AOT (cfg); mini_emit_tailcall_parameters (cfg, fsig); MONO_INST_NEW_CALL (cfg, call, OP_TAILCALL); call->method = cmethod; // FIXME Other initialization of the tailcall field occurs after // it is used. So this is the only "real" use and needs more attention. call->tailcall = TRUE; call->signature = fsig; call->args = (MonoInst **)mono_mempool_alloc (cfg->mempool, sizeof (MonoInst*) * n); call->inst.inst_p0 = cmethod; for (i = 0; i < n; ++i) EMIT_NEW_ARGLOAD (cfg, call->args [i], i); if (mini_type_is_vtype (mini_get_underlying_type (call->signature->ret))) call->vret_var = cfg->vret_addr; mono_arch_emit_call (cfg, call); cfg->param_area = MAX(cfg->param_area, call->stack_usage); MONO_ADD_INS (cfg->cbb, (MonoInst*)call); } start_new_bblock = 1; break; } case MONO_CEE_CALLI: { // FIXME tail.calli is problematic because the this pointer's type // is not in the signature, and we cannot check for a byref valuetype. MonoInst *addr; MonoInst *callee = NULL; // Variables shared by CEE_CALLI and CEE_CALL/CEE_CALLVIRT. common_call = TRUE; // i.e. skip_ret/push_res/seq_point logic cmethod = NULL; gboolean const inst_tailcall = G_UNLIKELY (debug_tailcall_try_all ? (next_ip < end && next_ip [0] == CEE_RET) : ((ins_flag & MONO_INST_TAILCALL) != 0)); ins = NULL; //GSHAREDVT_FAILURE (il_op); CHECK_STACK (1); --sp; addr = *sp; g_assert (addr); fsig = mini_get_signature (method, token, generic_context, cfg->error); CHECK_CFG_ERROR; if (method->dynamic && fsig->pinvoke) { MonoInst *args [3]; /* * This is a call through a function pointer using a pinvoke * signature. Have to create a wrapper and call that instead. * FIXME: This is very slow, need to create a wrapper at JIT time * instead based on the signature. */ EMIT_NEW_IMAGECONST (cfg, args [0], ((MonoDynamicMethod*)method)->assembly->image); EMIT_NEW_PCONST (cfg, args [1], fsig); args [2] = addr; // FIXME tailcall?
addr = mono_emit_jit_icall (cfg, mono_get_native_calli_wrapper, args); } if (!method->dynamic && fsig->pinvoke && !method->wrapper_type) { /* MONO_WRAPPER_DYNAMIC_METHOD dynamic method handled above in the method->dynamic case; for other wrapper types assume the code knows what it's doing and added its own GC transitions */ gboolean skip_gc_trans = fsig->suppress_gc_transition; if (!skip_gc_trans) { #if 0 fprintf (stderr, "generating wrapper for calli in method %s with wrapper type %s\n", method->name, mono_wrapper_type_to_str (method->wrapper_type)); #endif /* Call the wrapper that will do the GC transition instead */ MonoMethod *wrapper = mono_marshal_get_native_func_wrapper_indirect (method->klass, fsig, cfg->compile_aot); fsig = mono_method_signature_internal (wrapper); n = fsig->param_count - 1; /* wrapper has extra fnptr param */ CHECK_STACK (n); /* move the args to allow room for 'this' in the first position */ while (n--) { --sp; sp [1] = sp [0]; } sp[0] = addr; /* n+1 args, first arg is the address of the indirect method to call */ g_assert (!fsig->hasthis && !fsig->pinvoke); ins = mono_emit_method_call (cfg, wrapper, /*args*/sp, NULL); goto calli_end; } } n = fsig->param_count + fsig->hasthis; CHECK_STACK (n); //g_assert (!virtual_ || fsig->hasthis); sp -= n; if (!(cfg->method->wrapper_type && cfg->method->wrapper_type != MONO_WRAPPER_DYNAMIC_METHOD) && check_call_signature (cfg, fsig, sp)) { if (break_on_unverified ()) check_call_signature (cfg, fsig, sp); // Again, step through it. UNVERIFIED; } inline_costs += CALL_COST * MIN(10, num_calls++); /* * Making generic calls out of gsharedvt methods. * This needs to be used for all generic calls, not just ones with a gsharedvt signature, to avoid * patching gshared method addresses into a gsharedvt method. */ if (cfg->gsharedvt && mini_is_gsharedvt_signature (fsig)) { /* * We pass the address to the gsharedvt trampoline in the rgctx reg */ callee = addr; g_assert (addr); // Doubles as boolean after tailcall check. } inst_tailcall && is_supported_tailcall (cfg, ip, method, NULL, fsig, FALSE/*virtual irrelevant*/, addr != NULL, &tailcall); if (save_last_error) mono_emit_jit_icall (cfg, mono_marshal_clear_last_error, NULL); if (callee) { if (method->wrapper_type != MONO_WRAPPER_DELEGATE_INVOKE) /* Not tested */ GSHAREDVT_FAILURE (il_op); if (cfg->llvm_only) // FIXME: GSHAREDVT_FAILURE (il_op); addr = emit_get_rgctx_sig (cfg, context_used, fsig, MONO_RGCTX_INFO_SIG_GSHAREDVT_OUT_TRAMPOLINE_CALLI); ins = (MonoInst*)mini_emit_calli_full (cfg, fsig, sp, addr, NULL, callee, tailcall); goto calli_end; } /* Prevent inlining of methods with indirect calls */ INLINE_FAILURE ("indirect call"); if (addr->opcode == OP_PCONST || addr->opcode == OP_AOTCONST || addr->opcode == OP_GOT_ENTRY) { MonoJumpInfoType info_type; gpointer info_data; /* * Instead of emitting an indirect call, emit a direct call * with the contents of the aotconst as the patch info.
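* This effectively turns the indirect call into a patchable direct call; for OP_GOT_ENTRY the patch info is recovered from inst_right below.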
*/ if (addr->opcode == OP_PCONST || addr->opcode == OP_AOTCONST) { info_type = (MonoJumpInfoType)addr->inst_c1; info_data = addr->inst_p0; } else { info_type = (MonoJumpInfoType)addr->inst_right->inst_c1; info_data = addr->inst_right->inst_left; } if (info_type == MONO_PATCH_INFO_ICALL_ADDR) { // non-JIT icall, mostly builtin, but also user-extensible tailcall = FALSE; ins = (MonoInst*)mini_emit_abs_call (cfg, MONO_PATCH_INFO_ICALL_ADDR_CALL, info_data, fsig, sp); NULLIFY_INS (addr); goto calli_end; } else if (info_type == MONO_PATCH_INFO_JIT_ICALL_ADDR || info_type == MONO_PATCH_INFO_SPECIFIC_TRAMPOLINE_LAZY_FETCH_ADDR) { tailcall = FALSE; ins = (MonoInst*)mini_emit_abs_call (cfg, info_type, info_data, fsig, sp); NULLIFY_INS (addr); goto calli_end; } } if (cfg->llvm_only && !(cfg->method->wrapper_type && cfg->method->wrapper_type != MONO_WRAPPER_DYNAMIC_METHOD)) ins = mini_emit_llvmonly_calli (cfg, fsig, sp, addr); else ins = (MonoInst*)mini_emit_calli_full (cfg, fsig, sp, addr, NULL, NULL, tailcall); goto calli_end; } case MONO_CEE_CALL: case MONO_CEE_CALLVIRT: { MonoInst *addr; addr = NULL; int array_rank; array_rank = 0; gboolean virtual_; virtual_ = il_op == MONO_CEE_CALLVIRT; gboolean pass_imt_from_rgctx; pass_imt_from_rgctx = FALSE; MonoInst *imt_arg; imt_arg = NULL; gboolean pass_vtable; pass_vtable = FALSE; gboolean pass_mrgctx; pass_mrgctx = FALSE; MonoInst *vtable_arg; vtable_arg = NULL; gboolean check_this; check_this = FALSE; gboolean delegate_invoke; delegate_invoke = FALSE; gboolean direct_icall; direct_icall = FALSE; gboolean tailcall_calli; tailcall_calli = FALSE; gboolean noreturn; noreturn = FALSE; gboolean gshared_static_virtual; gshared_static_virtual = FALSE; #ifdef TARGET_WASM gboolean needs_stack_walk; needs_stack_walk = FALSE; #endif // Variables shared by CEE_CALLI and CEE_CALL/CEE_CALLVIRT. common_call = FALSE; // variables to help in assertions gboolean called_is_supported_tailcall; called_is_supported_tailcall = FALSE; MonoMethod *tailcall_method; tailcall_method = NULL; MonoMethod *tailcall_cmethod; tailcall_cmethod = NULL; MonoMethodSignature *tailcall_fsig; tailcall_fsig = NULL; gboolean tailcall_virtual; tailcall_virtual = FALSE; gboolean tailcall_extra_arg; tailcall_extra_arg = FALSE; gboolean inst_tailcall; inst_tailcall = G_UNLIKELY (debug_tailcall_try_all ? 
(next_ip < end && next_ip [0] == CEE_RET) : ((ins_flag & MONO_INST_TAILCALL) != 0)); ins = NULL; /* Used to pass arguments to called functions */ HandleCallData cdata; memset (&cdata, 0, sizeof (HandleCallData)); cmethod = mini_get_method (cfg, method, token, NULL, generic_context); CHECK_CFG_ERROR; if (cfg->verbose_level > 3) printf ("cmethod = %s\n", mono_method_get_full_name (cmethod)); MonoMethod *cil_method; cil_method = cmethod; if (constrained_class) { if (m_method_is_static (cil_method) && mini_class_check_context_used (cfg, constrained_class)) { /* get_constrained_method () doesn't work on the gparams used by generic sharing */ // FIXME: Other configurations //if (!cfg->gsharedvt) // GENERIC_SHARING_FAILURE (CEE_CALL); gshared_static_virtual = TRUE; } else { cmethod = get_constrained_method (cfg, image, token, cil_method, constrained_class, generic_context); CHECK_CFG_ERROR; if (m_class_is_enumtype (constrained_class) && !strcmp (cmethod->name, "GetHashCode")) { /* Use the corresponding method from the base type to avoid boxing */ MonoType *base_type = mono_class_enum_basetype_internal (constrained_class); g_assert (base_type); constrained_class = mono_class_from_mono_type_internal (base_type); cmethod = get_method_nofail (constrained_class, cmethod->name, 0, 0); g_assert (cmethod); } } } if (!dont_verify && !cfg->skip_visibility) { MonoMethod *target_method = cil_method; if (method->is_inflated) { MonoGenericContainer *container = mono_method_get_generic_container(method_definition); MonoGenericContext *context = (container != NULL ? &container->context : NULL); target_method = mini_get_method_allow_open (method, token, NULL, context, cfg->error); CHECK_CFG_ERROR; } if (!mono_method_can_access_method (method_definition, target_method) && !mono_method_can_access_method (method, cil_method)) emit_method_access_failure (cfg, method, cil_method); } if (cfg->llvm_only && cmethod && method_needs_stack_walk (cfg, cmethod)) { if (cfg->interp && !cfg->interp_entry_only) { /* Use the interpreter instead */ cfg->exception_message = g_strdup ("stack walk"); cfg->disable_llvm = TRUE; } #ifdef TARGET_WASM else { needs_stack_walk = TRUE; } #endif } if (!virtual_ && (cmethod->flags & METHOD_ATTRIBUTE_ABSTRACT) && !gshared_static_virtual) { if (!mono_class_is_interface (method->klass)) emit_bad_image_failure (cfg, method, cil_method); else virtual_ = TRUE; } if (!m_class_is_inited (cmethod->klass)) if (!mono_class_init_internal (cmethod->klass)) TYPE_LOAD_ERROR (cmethod->klass); fsig = mono_method_signature_internal (cmethod); if (!fsig) LOAD_ERROR; if (cmethod->iflags & METHOD_IMPL_ATTRIBUTE_INTERNAL_CALL && mini_class_is_system_array (cmethod->klass)) { array_rank = m_class_get_rank (cmethod->klass); } else if ((cmethod->iflags & METHOD_IMPL_ATTRIBUTE_INTERNAL_CALL) && direct_icalls_enabled (cfg, cmethod)) { direct_icall = TRUE; } else if (fsig->pinvoke) { if (cmethod->flags & METHOD_ATTRIBUTE_PINVOKE_IMPL) { /* * Avoid calling mono_marshal_get_native_wrapper () too early, it might call managed * callbacks on netcore. 
*/ fsig = mono_metadata_signature_dup_mempool (cfg->mempool, fsig); fsig->pinvoke = FALSE; } else { MonoMethod *wrapper = mono_marshal_get_native_wrapper (cmethod, TRUE, cfg->compile_aot); fsig = mono_method_signature_internal (wrapper); } } else if (constrained_class) { } else { fsig = mono_method_get_signature_checked (cmethod, image, token, generic_context, cfg->error); CHECK_CFG_ERROR; } if (cfg->llvm_only && !cfg->method->wrapper_type && (!cmethod || cmethod->is_inflated)) cfg->signatures = g_slist_prepend_mempool (cfg->mempool, cfg->signatures, fsig); /* See code below */ if (cmethod->klass == mono_defaults.monitor_class && !strcmp (cmethod->name, "Enter") && mono_method_signature_internal (cmethod)->param_count == 1) { MonoBasicBlock *tbb; GET_BBLOCK (cfg, tbb, next_ip); if (tbb->try_start && MONO_REGION_FLAGS(tbb->region) == MONO_EXCEPTION_CLAUSE_FINALLY) { /* * We want to extend the try block to cover the call, but we can't do it if the * call is made directly since it's followed by an exception check. */ direct_icall = FALSE; } } mono_save_token_info (cfg, image, token, cil_method); if (!(seq_point_locs && mono_bitset_test_fast (seq_point_locs, next_ip - header->code))) need_seq_point = TRUE; /* Don't support calls made using type arguments for now */ /* if (cfg->gsharedvt) { if (mini_is_gsharedvt_signature (fsig)) GSHAREDVT_FAILURE (il_op); } */ if (cmethod->string_ctor && method->wrapper_type != MONO_WRAPPER_RUNTIME_INVOKE) g_assert_not_reached (); n = fsig->param_count + fsig->hasthis; if (!cfg->gshared && mono_class_is_gtd (cmethod->klass)) UNVERIFIED; if (!cfg->gshared) g_assert (!mono_method_check_context_used (cmethod)); CHECK_STACK (n); //g_assert (!virtual_ || fsig->hasthis); sp -= n; if (virtual_ && cmethod && sp [0] && sp [0]->opcode == OP_TYPED_OBJREF) { ERROR_DECL (error); MonoMethod *new_cmethod = mono_class_get_virtual_method (sp [0]->klass, cmethod, error); if (is_ok (error)) { cmethod = new_cmethod; virtual_ = FALSE; } else { mono_error_cleanup (error); } } if (cmethod && method_does_not_return (cmethod)) { cfg->cbb->out_of_line = TRUE; noreturn = TRUE; } cdata.method = method; cdata.inst_tailcall = inst_tailcall; /* * We have the `constrained.' prefix opcode. */ if (constrained_class) { ins = handle_constrained_call (cfg, cmethod, fsig, constrained_class, sp, &cdata, &cmethod, &virtual_, &emit_widen); CHECK_CFG_EXCEPTION; if (!gshared_static_virtual) constrained_class = NULL; if (ins) goto call_end; } for (int i = 0; i < fsig->param_count; ++i) sp [i + fsig->hasthis] = convert_value (cfg, fsig->params [i], sp [i + fsig->hasthis]); if (check_call_signature (cfg, fsig, sp)) { if (break_on_unverified ()) check_call_signature (cfg, fsig, sp); // Again, step through it. UNVERIFIED; } if ((m_class_get_parent (cmethod->klass) == mono_defaults.multicastdelegate_class) && !strcmp (cmethod->name, "Invoke")) delegate_invoke = TRUE; /* * Implement a workaround for the inherent races involved in locking: * Monitor.Enter () * try { * } finally { * Monitor.Exit () * } * If a thread abort happens between the call to Monitor.Enter () and the start of the * try block, the Exit () won't be executed, see: * http://www.bluebytesoftware.com/blog/2007/01/30/MonitorEnterThreadAbortsAndOrphanedLocks.aspx * To work around this, we extend such try blocks to include the last x bytes * of the Monitor.Enter () call.
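* The actual extension happens later; here the basic block is only marked via the extend_try_block flag set below.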
*/ if (cmethod->klass == mono_defaults.monitor_class && !strcmp (cmethod->name, "Enter") && mono_method_signature_internal (cmethod)->param_count == 1) { MonoBasicBlock *tbb; GET_BBLOCK (cfg, tbb, next_ip); /* * Only extend try blocks with a finally, to avoid catching exceptions thrown * from Monitor.Enter like ArgumentNullException. */ if (tbb->try_start && MONO_REGION_FLAGS(tbb->region) == MONO_EXCEPTION_CLAUSE_FINALLY) { /* Mark this bblock as needing to be extended */ tbb->extend_try_block = TRUE; } } /* Conversion to a JIT intrinsic */ gboolean ins_type_initialized; if ((ins = mini_emit_inst_for_method (cfg, cmethod, fsig, sp, &ins_type_initialized))) { if (!MONO_TYPE_IS_VOID (fsig->ret)) { if (!ins_type_initialized) mini_type_to_eval_stack_type ((cfg), fsig->ret, ins); emit_widen = FALSE; } // FIXME This is only missed if in fact the intrinsic involves a call. if (inst_tailcall) // FIXME mono_tailcall_print ("missed tailcall intrins %s -> %s\n", method->name, cmethod->name); goto call_end; } CHECK_CFG_ERROR; /* * If the callee is a shared method, then its static cctor * might not get called after the call was patched. */ if (cfg->gshared && cmethod->klass != method->klass && mono_class_is_ginst (cmethod->klass) && mono_method_is_generic_sharable (cmethod, TRUE) && mono_class_needs_cctor_run (cmethod->klass, method)) { emit_class_init (cfg, cmethod->klass); CHECK_TYPELOAD (cmethod->klass); } /* Inlining */ if ((cfg->opt & MONO_OPT_INLINE) && !inst_tailcall && (!virtual_ || !(cmethod->flags & METHOD_ATTRIBUTE_VIRTUAL) || MONO_METHOD_IS_FINAL (cmethod)) && mono_method_check_inlining (cfg, cmethod)) { int costs; gboolean always = FALSE; gboolean is_empty = FALSE; if (cmethod->iflags & METHOD_IMPL_ATTRIBUTE_INTERNAL_CALL) { /* Prevent inlining of methods that call wrappers */ INLINE_FAILURE ("wrapper call"); // FIXME? Does this write to cmethod impact tailcall_supported? Probably not. // Neither pinvoke nor icall is likely to be tailcalled. cmethod = mono_marshal_get_native_wrapper (cmethod, TRUE, FALSE); always = TRUE; } costs = inline_method (cfg, cmethod, fsig, sp, ip, cfg->real_offset, always, &is_empty); if (costs) { cfg->real_offset += 5; if (!MONO_TYPE_IS_VOID (fsig->ret)) /* *sp is already set by inline_method */ ins = *sp; inline_costs += costs; // FIXME This is missed if the inlinee contains tail calls that // would work, but not once inlined into the caller. // This matchingness could be a factor in inlining. // i.e. Do not inline if it hurts tailcall, do inline // if it helps and/or is neutral, and helps performance // using usual heuristics. // Note that inlining will expose multiple tailcall opportunities // so the tradeoff is not obvious. If we can tailcall anything // like desktop, then this factor mostly falls away, except // that inlining can affect tailcall performance due to // signature match/mismatch.
if (inst_tailcall) // FIXME mono_tailcall_print ("missed tailcall inline %s -> %s\n", method->name, cmethod->name); if (is_empty) ins_has_side_effect = FALSE; goto call_end; } } check_method_sharing (cfg, cmethod, &pass_vtable, &pass_mrgctx); if (cfg->gshared) { MonoGenericContext *cmethod_context = mono_method_get_context (cmethod); context_used = mini_method_check_context_used (cfg, cmethod); if (!context_used && gshared_static_virtual) context_used = mini_class_check_context_used (cfg, constrained_class); if (context_used && mono_class_is_interface (cmethod->klass) && !m_method_is_static (cmethod)) { /* Generic method interface calls are resolved via a helper function and don't need an imt. */ if (!cmethod_context || !cmethod_context->method_inst) pass_imt_from_rgctx = TRUE; } /* * If a shared method calls another * shared method then the caller must * have a generic sharing context * because the magic trampoline * requires it. FIXME: We shouldn't * have to force the vtable/mrgctx * variable here. Instead there * should be a flag in the cfg to * request a generic sharing context. */ if (context_used && ((cfg->method->flags & METHOD_ATTRIBUTE_STATIC) || m_class_is_valuetype (cfg->method->klass))) mono_get_vtable_var (cfg); } if (pass_vtable) { if (context_used) { vtable_arg = mini_emit_get_rgctx_klass (cfg, context_used, cmethod->klass, MONO_RGCTX_INFO_VTABLE); } else { MonoVTable *vtable = mono_class_vtable_checked (cmethod->klass, cfg->error); CHECK_CFG_ERROR; CHECK_TYPELOAD (cmethod->klass); EMIT_NEW_VTABLECONST (cfg, vtable_arg, vtable); } } if (pass_mrgctx) { g_assert (!vtable_arg); if (!cfg->compile_aot) { /* * emit_get_rgctx_method () calls mono_class_vtable () so check * for type load errors before. */ mono_class_setup_vtable (cmethod->klass); CHECK_TYPELOAD (cmethod->klass); } vtable_arg = emit_get_rgctx_method (cfg, context_used, cmethod, MONO_RGCTX_INFO_METHOD_RGCTX); if ((!(cmethod->flags & METHOD_ATTRIBUTE_VIRTUAL) || MONO_METHOD_IS_FINAL (cmethod))) { if (virtual_) check_this = TRUE; virtual_ = FALSE; } } if (pass_imt_from_rgctx) { g_assert (!pass_vtable); imt_arg = emit_get_rgctx_method (cfg, context_used, cmethod, MONO_RGCTX_INFO_METHOD); g_assert (imt_arg); } if (check_this) MONO_EMIT_NEW_CHECK_THIS (cfg, sp [0]->dreg); /* Calling virtual generic methods */ // These temporaries help detangle "pure" computation of // inputs to is_supported_tailcall from side effects, so that // is_supported_tailcall can be computed just once. gboolean virtual_generic; virtual_generic = FALSE; gboolean virtual_generic_imt; virtual_generic_imt = FALSE; if (virtual_ && (cmethod->flags & METHOD_ATTRIBUTE_VIRTUAL) && !MONO_METHOD_IS_FINAL (cmethod) && fsig->generic_param_count && !(cfg->gsharedvt && mini_is_gsharedvt_signature (fsig)) && !cfg->llvm_only) { g_assert (fsig->is_inflated); virtual_generic = TRUE; /* Prevent inlining of methods that contain indirect calls */ INLINE_FAILURE ("virtual generic call"); if (cfg->gsharedvt && mini_is_gsharedvt_signature (fsig)) GSHAREDVT_FAILURE (il_op); if (cfg->backend->have_generalized_imt_trampoline && cfg->backend->gshared_supported && cmethod->wrapper_type == MONO_WRAPPER_NONE) { virtual_generic_imt = TRUE; g_assert (!imt_arg); if (!context_used) g_assert (cmethod->is_inflated); imt_arg = emit_get_rgctx_method (cfg, context_used, cmethod, MONO_RGCTX_INFO_METHOD); g_assert (imt_arg); virtual_ = TRUE; vtable_arg = NULL; } } // Capture some intent before computing tailcall. 
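/* The two flags below are computed before the tailcall decision so that is_supported_tailcall () already sees whether the call will need an extra IMT/rgctx argument; the asserts at call_end check that these inputs did not change afterwards. */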
gboolean make_generic_call_out_of_gsharedvt_method; gboolean will_have_imt_arg; make_generic_call_out_of_gsharedvt_method = FALSE; will_have_imt_arg = FALSE; /* * Making generic calls out of gsharedvt methods. * This needs to be used for all generic calls, not just ones with a gsharedvt signature, to avoid * patching gshared method addresses into a gsharedvt method. */ if (cfg->gsharedvt && (mini_is_gsharedvt_signature (fsig) || cmethod->is_inflated || mono_class_is_ginst (cmethod->klass)) && !(m_class_get_rank (cmethod->klass) && m_class_get_byval_arg (cmethod->klass)->type != MONO_TYPE_SZARRAY) && (!(cfg->llvm_only && virtual_ && (cmethod->flags & METHOD_ATTRIBUTE_VIRTUAL)))) { make_generic_call_out_of_gsharedvt_method = TRUE; if (virtual_) { if (fsig->generic_param_count) { will_have_imt_arg = TRUE; } else if (mono_class_is_interface (cmethod->klass) && !imt_arg) { will_have_imt_arg = TRUE; } } } /* Tail prefix / tailcall optimization */ /* FIXME: Enabling TAILC breaks some inlining/stack trace/etc tests. Inlining and stack traces are not guaranteed however. */ /* FIXME: runtime generic context pointer for jumps? */ /* FIXME: handle this for generic sharing eventually */ // tailcall means "the backend can and will handle it". // inst_tailcall means the tail. prefix is present. tailcall_extra_arg = vtable_arg || imt_arg || will_have_imt_arg || mono_class_is_interface (cmethod->klass); tailcall = inst_tailcall && is_supported_tailcall (cfg, ip, method, cmethod, fsig, virtual_, tailcall_extra_arg, &tailcall_calli); // Writes to imt_arg, vtable_arg, virtual_, cmethod, must not occur from here (inputs to is_supported_tailcall). // Capture values to later assert they don't change. called_is_supported_tailcall = TRUE; tailcall_method = method; tailcall_cmethod = cmethod; tailcall_fsig = fsig; tailcall_virtual = virtual_; if (virtual_generic) { if (virtual_generic_imt) { if (tailcall) { /* Prevent inlining of methods with tailcalls (the call stack would be altered) */ INLINE_FAILURE ("tailcall"); } common_call = TRUE; goto call_end; } MonoInst *this_temp, *this_arg_temp, *store; MonoInst *iargs [4]; this_temp = mono_compile_create_var (cfg, type_from_stack_type (sp [0]), OP_LOCAL); NEW_TEMPSTORE (cfg, store, this_temp->inst_c0, sp [0]); MONO_ADD_INS (cfg->cbb, store); /* FIXME: This should be a managed pointer */ this_arg_temp = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL); EMIT_NEW_TEMPLOAD (cfg, iargs [0], this_temp->inst_c0); iargs [1] = emit_get_rgctx_method (cfg, context_used, cmethod, MONO_RGCTX_INFO_METHOD); EMIT_NEW_TEMPLOADA (cfg, iargs [2], this_arg_temp->inst_c0); addr = mono_emit_jit_icall (cfg, mono_helper_compile_generic_method, iargs); EMIT_NEW_TEMPLOAD (cfg, sp [0], this_arg_temp->inst_c0); ins = (MonoInst*)mini_emit_calli (cfg, fsig, sp, addr, NULL, NULL); if (inst_tailcall) // FIXME mono_tailcall_print ("missed tailcall virtual generic %s -> %s\n", method->name, cmethod->name); goto call_end; } CHECK_CFG_ERROR; /* Tail recursion elimination */ if (((cfg->opt & MONO_OPT_TAILCALL) || inst_tailcall) && il_op == MONO_CEE_CALL && cmethod == method && next_ip < end && next_ip [0] == CEE_RET && !vtable_arg) { gboolean has_vtargs = FALSE; int i; /* Prevent inlining of methods with tailcalls (the call stack would be altered) */ INLINE_FAILURE ("tailcall"); /* keep it simple */ for (i = fsig->param_count - 1; !has_vtargs && i >= 0; i--) has_vtargs = MONO_TYPE_ISSTRUCT (mono_method_signature_internal (cmethod)->params [i]); if (!has_vtargs) { if (need_seq_point) { 
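/* Emit the pending sequence point before the branch back to the method start, presumably so the eliminated call keeps its sequence point. */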
emit_seq_point (cfg, method, ip, FALSE, TRUE); need_seq_point = FALSE; } for (i = 0; i < n; ++i) EMIT_NEW_ARGSTORE (cfg, ins, i, sp [i]); mini_profiler_emit_tail_call (cfg, cmethod); MONO_INST_NEW (cfg, ins, OP_BR); MONO_ADD_INS (cfg->cbb, ins); tblock = start_bblock->out_bb [0]; link_bblock (cfg, cfg->cbb, tblock); ins->inst_target_bb = tblock; start_new_bblock = 1; /* skip the CEE_RET, too */ if (ip_in_bb (cfg, cfg->cbb, next_ip)) skip_ret = TRUE; push_res = FALSE; need_seq_point = FALSE; goto call_end; } } inline_costs += CALL_COST * MIN(10, num_calls++); /* * Synchronized wrappers. * It's hard to determine where to replace a method with its synchronized * wrapper without causing an infinite recursion. The current solution is * to add the synchronized wrapper in the trampolines, and to * change the called method to a dummy wrapper, and resolve that wrapper * to the real method in mono_jit_compile_method (). */ if (cfg->method->wrapper_type == MONO_WRAPPER_SYNCHRONIZED) { MonoMethod *orig = mono_marshal_method_from_wrapper (cfg->method); if (cmethod == orig || (cmethod->is_inflated && mono_method_get_declaring_generic_method (cmethod) == orig)) { // FIXME? Does this write to cmethod impact tailcall_supported? Probably not. cmethod = mono_marshal_get_synchronized_inner_wrapper (cmethod); } } /* * Making generic calls out of gsharedvt methods. * This needs to be used for all generic calls, not just ones with a gsharedvt signature, to avoid * patching gshared method addresses into a gsharedvt method. */ if (make_generic_call_out_of_gsharedvt_method) { if (virtual_) { //if (mono_class_is_interface (cmethod->klass)) //GSHAREDVT_FAILURE (il_op); // disable for possible remoting calls if (fsig->hasthis && method->klass == mono_defaults.object_class) GSHAREDVT_FAILURE (il_op); if (fsig->generic_param_count) { /* virtual generic call */ g_assert (!imt_arg); g_assert (will_have_imt_arg); /* Same as the virtual generic case above */ imt_arg = emit_get_rgctx_method (cfg, context_used, cmethod, MONO_RGCTX_INFO_METHOD); g_assert (imt_arg); } else if (mono_class_is_interface (cmethod->klass) && !imt_arg) { /* This can happen when we call a fully instantiated iface method */ g_assert (will_have_imt_arg); imt_arg = emit_get_rgctx_method (cfg, context_used, cmethod, MONO_RGCTX_INFO_METHOD); g_assert (imt_arg); } /* This is not needed, as the trampoline code will pass one, and it might be passed in the same reg as the imt arg */ vtable_arg = NULL; } if ((m_class_get_parent (cmethod->klass) == mono_defaults.multicastdelegate_class) && (!strcmp (cmethod->name, "Invoke"))) keep_this_alive = sp [0]; MonoRgctxInfoType info_type; if (virtual_ && (cmethod->flags & METHOD_ATTRIBUTE_VIRTUAL)) info_type = MONO_RGCTX_INFO_METHOD_GSHAREDVT_OUT_TRAMPOLINE_VIRT; else info_type = MONO_RGCTX_INFO_METHOD_GSHAREDVT_OUT_TRAMPOLINE; addr = emit_get_rgctx_gsharedvt_call (cfg, context_used, fsig, cmethod, info_type); if (cfg->llvm_only) { // FIXME: Avoid initializing vtable_arg ins = mini_emit_llvmonly_calli (cfg, fsig, sp, addr); if (inst_tailcall) // FIXME mono_tailcall_print ("missed tailcall llvmonly gsharedvt %s -> %s\n", method->name, cmethod->name); } else { tailcall = tailcall_calli; ins = (MonoInst*)mini_emit_calli_full (cfg, fsig, sp, addr, imt_arg, vtable_arg, tailcall); tailcall_remove_ret |= tailcall; } goto call_end; } /* Generic sharing */ /* * Calls to generic methods from shared code cannot go through the trampoline infrastructure * in some cases, because the called method might end up being different on every
call. * Load the called method address from the rgctx and do an indirect call in these cases. * Use this if the callee is gsharedvt sharable too, since * at runtime we might find an instantiation so the call cannot * be patched (the 'no_patch' code path in mini-trampolines.c). */ gboolean gshared_indirect; gshared_indirect = context_used && !imt_arg && !array_rank && !delegate_invoke; if (gshared_indirect) gshared_indirect = (!mono_method_is_generic_sharable_full (cmethod, TRUE, FALSE, FALSE) || !mono_class_generic_sharing_enabled (cmethod->klass) || gshared_static_virtual); if (gshared_indirect) gshared_indirect = (!virtual_ || MONO_METHOD_IS_FINAL (cmethod) || !(cmethod->flags & METHOD_ATTRIBUTE_VIRTUAL)); if (gshared_indirect) { INLINE_FAILURE ("gshared"); g_assert (cfg->gshared && cmethod); g_assert (!addr); if (fsig->hasthis) MONO_EMIT_NEW_CHECK_THIS (cfg, sp [0]->dreg); if (cfg->llvm_only) { if (cfg->gsharedvt && mini_is_gsharedvt_variable_signature (fsig)) { /* Handled in handle_constrained_gsharedvt_call () */ g_assert (!gshared_static_virtual); addr = emit_get_rgctx_method (cfg, context_used, cmethod, MONO_RGCTX_INFO_GSHAREDVT_OUT_WRAPPER); } else { if (gshared_static_virtual) addr = emit_get_rgctx_virt_method (cfg, -1, constrained_class, cmethod, MONO_RGCTX_INFO_VIRT_METHOD_CODE); else addr = emit_get_rgctx_method (cfg, context_used, cmethod, MONO_RGCTX_INFO_METHOD_FTNDESC); } // FIXME: Avoid initializing imt_arg/vtable_arg ins = mini_emit_llvmonly_calli (cfg, fsig, sp, addr); if (inst_tailcall) // FIXME mono_tailcall_print ("missed tailcall context_used_llvmonly %s -> %s\n", method->name, cmethod->name); } else { if (gshared_static_virtual) { /* * cmethod is a static interface method, the actual called method at runtime * needs to be computed using constrained_class and cmethod. */ addr = emit_get_rgctx_virt_method (cfg, -1, constrained_class, cmethod, MONO_RGCTX_INFO_VIRT_METHOD_CODE); } else { addr = emit_get_rgctx_method (cfg, context_used, cmethod, MONO_RGCTX_INFO_GENERIC_METHOD_CODE); } if (inst_tailcall) mono_tailcall_print ("%s tailcall_calli#2 %s -> %s\n", tailcall_calli ? 
"making" : "missed", method->name, cmethod->name); tailcall = tailcall_calli; ins = (MonoInst*)mini_emit_calli_full (cfg, fsig, sp, addr, imt_arg, vtable_arg, tailcall); tailcall_remove_ret |= tailcall; } goto call_end; } /* Direct calls to icalls */ if (direct_icall) { MonoMethod *wrapper; int costs; /* Inline the wrapper */ wrapper = mono_marshal_get_native_wrapper (cmethod, TRUE, cfg->compile_aot); costs = inline_method (cfg, wrapper, fsig, sp, ip, cfg->real_offset, TRUE, NULL); g_assert (costs > 0); cfg->real_offset += 5; if (!MONO_TYPE_IS_VOID (fsig->ret)) /* *sp is already set by inline_method */ ins = *sp; inline_costs += costs; if (inst_tailcall) // FIXME mono_tailcall_print ("missed tailcall direct_icall %s -> %s\n", method->name, cmethod->name); goto call_end; } /* Array methods */ if (array_rank) { MonoInst *addr; if (strcmp (cmethod->name, "Set") == 0) { /* array Set */ MonoInst *val = sp [fsig->param_count]; if (val->type == STACK_OBJ) { MonoInst *iargs [ ] = { sp [0], val }; mono_emit_jit_icall (cfg, mono_helper_stelem_ref_check, iargs); } addr = mini_emit_ldelema_ins (cfg, cmethod, sp, ip, TRUE); if (!mini_debug_options.weak_memory_model && val->type == STACK_OBJ) mini_emit_memory_barrier (cfg, MONO_MEMORY_BARRIER_REL); EMIT_NEW_STORE_MEMBASE_TYPE (cfg, ins, fsig->params [fsig->param_count - 1], addr->dreg, 0, val->dreg); if (cfg->gen_write_barriers && val->type == STACK_OBJ && !MONO_INS_IS_PCONST_NULL (val)) mini_emit_write_barrier (cfg, addr, val); if (cfg->gen_write_barriers && mini_is_gsharedvt_klass (cmethod->klass)) GSHAREDVT_FAILURE (il_op); } else if (strcmp (cmethod->name, "Get") == 0) { /* array Get */ addr = mini_emit_ldelema_ins (cfg, cmethod, sp, ip, FALSE); EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, fsig->ret, addr->dreg, 0); } else if (strcmp (cmethod->name, "Address") == 0) { /* array Address */ if (!m_class_is_valuetype (m_class_get_element_class (cmethod->klass)) && !readonly) mini_emit_check_array_type (cfg, sp [0], cmethod->klass); CHECK_TYPELOAD (cmethod->klass); readonly = FALSE; addr = mini_emit_ldelema_ins (cfg, cmethod, sp, ip, FALSE); ins = addr; } else { g_assert_not_reached (); } emit_widen = FALSE; if (inst_tailcall) // FIXME mono_tailcall_print ("missed tailcall array_rank %s -> %s\n", method->name, cmethod->name); goto call_end; } ins = mini_redirect_call (cfg, cmethod, fsig, sp, virtual_ ? sp [0] : NULL); if (ins) { if (inst_tailcall) // FIXME mono_tailcall_print ("missed tailcall redirect %s -> %s\n", method->name, cmethod->name); goto call_end; } /* Tail prefix / tailcall optimization */ if (tailcall) { /* Prevent inlining of methods with tailcalls (the call stack would be altered) */ INLINE_FAILURE ("tailcall"); } /* * Virtual calls in llvm-only mode. 
*/ if (cfg->llvm_only && virtual_ && cmethod && (cmethod->flags & METHOD_ATTRIBUTE_VIRTUAL)) { ins = mini_emit_llvmonly_virtual_call (cfg, cmethod, fsig, context_used, sp); goto call_end; } /* Common call */ if (!(cfg->opt & MONO_OPT_AGGRESSIVE_INLINING) && !(method->iflags & METHOD_IMPL_ATTRIBUTE_AGGRESSIVE_INLINING) && !(cmethod->iflags & METHOD_IMPL_ATTRIBUTE_AGGRESSIVE_INLINING) && !method_does_not_return (cmethod)) INLINE_FAILURE ("call"); common_call = TRUE; #ifdef TARGET_WASM /* Push an LMF so these frames can be enumerated during stack walks by mono_arch_unwind_frame () */ if (needs_stack_walk && !cfg->deopt) { MonoInst *method_ins; int lmf_reg; emit_push_lmf (cfg); EMIT_NEW_VARLOADA (cfg, ins, cfg->lmf_var, NULL); lmf_reg = ins->dreg; /* The lmf->method field will be used to look up the MonoJitInfo for this method */ method_ins = emit_get_rgctx_method (cfg, mono_method_check_context_used (cfg->method), cfg->method, MONO_RGCTX_INFO_METHOD); EMIT_NEW_STORE_MEMBASE (cfg, ins, OP_STORE_MEMBASE_REG, lmf_reg, MONO_STRUCT_OFFSET (MonoLMF, method), method_ins->dreg); } #endif call_end: // Check that the decision to tailcall would not have changed. g_assert (!called_is_supported_tailcall || tailcall_method == method); // FIXME? cmethod does change, weaken the assert if we weren't tailcalling anyway. // If this still fails, restructure the code, or call tailcall_supported again and assert no change. g_assert (!called_is_supported_tailcall || !tailcall || tailcall_cmethod == cmethod); g_assert (!called_is_supported_tailcall || tailcall_fsig == fsig); g_assert (!called_is_supported_tailcall || tailcall_virtual == virtual_); g_assert (!called_is_supported_tailcall || tailcall_extra_arg == (vtable_arg || imt_arg || will_have_imt_arg || mono_class_is_interface (cmethod->klass))); if (common_call) // FIXME goto call_end && !common_call often skips tailcall processing. ins = mini_emit_method_call_full (cfg, cmethod, fsig, tailcall, sp, virtual_ ? sp [0] : NULL, imt_arg, vtable_arg); /* * Handle devirt of some A.B.C calls by replacing the result of A.B with an OP_TYPED_OBJREF instruction, so the .C * call can be devirtualized above. */ if (cmethod) ins = handle_call_res_devirt (cfg, cmethod, ins); #ifdef TARGET_WASM if (common_call && needs_stack_walk && !cfg->deopt) /* If an exception is thrown, the LMF is popped by a call to mini_llvmonly_pop_lmf () */ emit_pop_lmf (cfg); #endif if (noreturn) { MONO_INST_NEW (cfg, ins, OP_NOT_REACHED); MONO_ADD_INS (cfg->cbb, ins); } calli_end: if ((tailcall_remove_ret || (common_call && tailcall)) && !cfg->llvm_only) { link_bblock (cfg, cfg->cbb, end_bblock); start_new_bblock = 1; // FIXME: Eliminate unreachable epilogs /* * OP_TAILCALL has no return value, so skip the CEE_RET if it is * only reachable from this call. */ GET_BBLOCK (cfg, tblock, next_ip); if (tblock == cfg->cbb || tblock->in_count == 0) skip_ret = TRUE; push_res = FALSE; need_seq_point = FALSE; } if (ins_flag & MONO_INST_TAILCALL) mini_test_tailcall (cfg, tailcall); /* End of call, INS should contain the result of the call, if any */ if (push_res && !MONO_TYPE_IS_VOID (fsig->ret)) { g_assert (ins); if (emit_widen) *sp++ = mono_emit_widen_call_res (cfg, ins, fsig); else *sp++ = ins; } if (save_last_error) { save_last_error = FALSE; #ifdef TARGET_WIN32 // Making icalls etc. could clobber the value so emit inline code // to read last error on Windows.
MONO_INST_NEW (cfg, ins, OP_GET_LAST_ERROR); ins->dreg = alloc_dreg (cfg, STACK_I4); ins->type = STACK_I4; MONO_ADD_INS (cfg->cbb, ins); mono_emit_jit_icall (cfg, mono_marshal_set_last_error_windows, &ins); #else mono_emit_jit_icall (cfg, mono_marshal_set_last_error, NULL); #endif } if (keep_this_alive) { MonoInst *dummy_use; /* See mini_emit_method_call_full () */ EMIT_NEW_DUMMY_USE (cfg, dummy_use, keep_this_alive); } if (cfg->llvm_only && cmethod && method_needs_stack_walk (cfg, cmethod)) { /* * Clang can convert these calls to tailcalls which screw up the stack * walk. This happens even when the -fno-optimize-sibling-calls * option is passed to clang. * Work around this by emitting a dummy call. */ mono_emit_jit_icall (cfg, mono_dummy_jit_icall, NULL); } CHECK_CFG_EXCEPTION; if (skip_ret) { // FIXME When not followed by CEE_RET, correct behavior is to raise an exception. g_assert (next_ip [0] == CEE_RET); next_ip += 1; il_op = MonoOpcodeEnum_Invalid; // Call or ret? Unclear. } ins_flag = 0; constrained_class = NULL; if (need_seq_point) { // check if this is a nested call and remove the non_empty_stack of the last call, only for non-native methods if (!(method->flags & METHOD_IMPL_ATTRIBUTE_NATIVE)) { if (emitted_funccall_seq_point) { if (cfg->last_seq_point) cfg->last_seq_point->flags |= MONO_INST_NESTED_CALL; } else emitted_funccall_seq_point = TRUE; } emit_seq_point (cfg, method, next_ip, FALSE, TRUE); } break; } case MONO_CEE_RET: if (!detached_before_ret) mini_profiler_emit_leave (cfg, sig->ret->type != MONO_TYPE_VOID ? sp [-1] : NULL); g_assert (!method_does_not_return (method)); if (cfg->method != method) { /* return from inlined method */ /* * If in_count == 0, that means the ret is unreachable due to * being preceded by a throw. In that case, inline_method () will * handle setting the return value * (test case: test_0_inline_throw ()). */ if (return_var && cfg->cbb->in_count) { MonoType *ret_type = mono_method_signature_internal (method)->ret; MonoInst *store; CHECK_STACK (1); --sp; *sp = convert_value (cfg, ret_type, *sp); if ((method->wrapper_type == MONO_WRAPPER_DYNAMIC_METHOD || method->wrapper_type == MONO_WRAPPER_NONE) && target_type_is_incompatible (cfg, ret_type, *sp)) UNVERIFIED; //g_assert (returnvar != -1); EMIT_NEW_TEMPSTORE (cfg, store, return_var->inst_c0, *sp); cfg->ret_var_set = TRUE; } } else { if (cfg->lmf_var && cfg->cbb->in_count && (!cfg->llvm_only || cfg->deopt)) emit_pop_lmf (cfg); if (cfg->ret) { MonoType *ret_type = mini_get_underlying_type (mono_method_signature_internal (method)->ret); if (seq_points && !sym_seq_points) { /* * Place a seq point here too even though the IL stack is not * empty, so a step over on * call <FOO> * ret * will work correctly.
*/ NEW_SEQ_POINT (cfg, ins, ip - header->code, TRUE); MONO_ADD_INS (cfg->cbb, ins); } g_assert (!return_var); CHECK_STACK (1); --sp; *sp = convert_value (cfg, ret_type, *sp); if ((method->wrapper_type == MONO_WRAPPER_DYNAMIC_METHOD || method->wrapper_type == MONO_WRAPPER_NONE) && target_type_is_incompatible (cfg, ret_type, *sp)) UNVERIFIED; emit_setret (cfg, *sp); } } if (sp != stack_start) UNVERIFIED; MONO_INST_NEW (cfg, ins, OP_BR); ins->inst_target_bb = end_bblock; MONO_ADD_INS (cfg->cbb, ins); link_bblock (cfg, cfg->cbb, end_bblock); start_new_bblock = 1; break; case MONO_CEE_BR_S: MONO_INST_NEW (cfg, ins, OP_BR); GET_BBLOCK (cfg, tblock, target); link_bblock (cfg, cfg->cbb, tblock); ins->inst_target_bb = tblock; if (sp != stack_start) { handle_stack_args (cfg, stack_start, sp - stack_start); sp = stack_start; CHECK_UNVERIFIABLE (cfg); } MONO_ADD_INS (cfg->cbb, ins); start_new_bblock = 1; inline_costs += BRANCH_COST; break; case MONO_CEE_BEQ_S: case MONO_CEE_BGE_S: case MONO_CEE_BGT_S: case MONO_CEE_BLE_S: case MONO_CEE_BLT_S: case MONO_CEE_BNE_UN_S: case MONO_CEE_BGE_UN_S: case MONO_CEE_BGT_UN_S: case MONO_CEE_BLE_UN_S: case MONO_CEE_BLT_UN_S: MONO_INST_NEW (cfg, ins, il_op + BIG_BRANCH_OFFSET); ADD_BINCOND (NULL); sp = stack_start; inline_costs += BRANCH_COST; break; case MONO_CEE_BR: MONO_INST_NEW (cfg, ins, OP_BR); GET_BBLOCK (cfg, tblock, target); link_bblock (cfg, cfg->cbb, tblock); ins->inst_target_bb = tblock; if (sp != stack_start) { handle_stack_args (cfg, stack_start, sp - stack_start); sp = stack_start; CHECK_UNVERIFIABLE (cfg); } MONO_ADD_INS (cfg->cbb, ins); start_new_bblock = 1; inline_costs += BRANCH_COST; break; case MONO_CEE_BRFALSE_S: case MONO_CEE_BRTRUE_S: case MONO_CEE_BRFALSE: case MONO_CEE_BRTRUE: { MonoInst *cmp; gboolean is_true = il_op == MONO_CEE_BRTRUE_S || il_op == MONO_CEE_BRTRUE; if (sp [-1]->type == STACK_VTYPE || sp [-1]->type == STACK_R8) UNVERIFIED; sp--; GET_BBLOCK (cfg, tblock, target); link_bblock (cfg, cfg->cbb, tblock); GET_BBLOCK (cfg, tblock, next_ip); link_bblock (cfg, cfg->cbb, tblock); if (sp != stack_start) { handle_stack_args (cfg, stack_start, sp - stack_start); CHECK_UNVERIFIABLE (cfg); } MONO_INST_NEW(cfg, cmp, OP_ICOMPARE_IMM); cmp->sreg1 = sp [0]->dreg; type_from_op (cfg, cmp, sp [0], NULL); CHECK_TYPE (cmp); #if SIZEOF_REGISTER == 4 if (cmp->opcode == OP_LCOMPARE_IMM) { /* Convert it to OP_LCOMPARE */ MONO_INST_NEW (cfg, ins, OP_I8CONST); ins->type = STACK_I8; ins->dreg = alloc_dreg (cfg, STACK_I8); ins->inst_l = 0; MONO_ADD_INS (cfg->cbb, ins); cmp->opcode = OP_LCOMPARE; cmp->sreg2 = ins->dreg; } #endif MONO_ADD_INS (cfg->cbb, cmp); MONO_INST_NEW (cfg, ins, is_true ? 
CEE_BNE_UN : CEE_BEQ); type_from_op (cfg, ins, sp [0], NULL); MONO_ADD_INS (cfg->cbb, ins); ins->inst_many_bb = (MonoBasicBlock **)mono_mempool_alloc (cfg->mempool, sizeof (gpointer) * 2); GET_BBLOCK (cfg, tblock, target); ins->inst_true_bb = tblock; GET_BBLOCK (cfg, tblock, next_ip); ins->inst_false_bb = tblock; start_new_bblock = 2; sp = stack_start; inline_costs += BRANCH_COST; break; } case MONO_CEE_BEQ: case MONO_CEE_BGE: case MONO_CEE_BGT: case MONO_CEE_BLE: case MONO_CEE_BLT: case MONO_CEE_BNE_UN: case MONO_CEE_BGE_UN: case MONO_CEE_BGT_UN: case MONO_CEE_BLE_UN: case MONO_CEE_BLT_UN: MONO_INST_NEW (cfg, ins, il_op); ADD_BINCOND (NULL); sp = stack_start; inline_costs += BRANCH_COST; break; case MONO_CEE_SWITCH: { MonoInst *src1; MonoBasicBlock **targets; MonoBasicBlock *default_bblock; MonoJumpInfoBBTable *table; int offset_reg = alloc_preg (cfg); int target_reg = alloc_preg (cfg); int table_reg = alloc_preg (cfg); int sum_reg = alloc_preg (cfg); gboolean use_op_switch; n = read32 (ip + 1); --sp; src1 = sp [0]; if ((src1->type != STACK_I4) && (src1->type != STACK_PTR)) UNVERIFIED; ip += 5; GET_BBLOCK (cfg, default_bblock, next_ip); default_bblock->flags |= BB_INDIRECT_JUMP_TARGET; targets = (MonoBasicBlock **)mono_mempool_alloc (cfg->mempool, sizeof (MonoBasicBlock*) * n); for (i = 0; i < n; ++i) { GET_BBLOCK (cfg, tblock, next_ip + (gint32)read32 (ip)); targets [i] = tblock; targets [i]->flags |= BB_INDIRECT_JUMP_TARGET; ip += 4; } if (sp != stack_start) { /* * Link the current bb with the targets as well, so handle_stack_args * will set their in_stack correctly. */ link_bblock (cfg, cfg->cbb, default_bblock); for (i = 0; i < n; ++i) link_bblock (cfg, cfg->cbb, targets [i]); handle_stack_args (cfg, stack_start, sp - stack_start); sp = stack_start; CHECK_UNVERIFIABLE (cfg); /* Undo the links */ mono_unlink_bblock (cfg, cfg->cbb, default_bblock); for (i = 0; i < n; ++i) mono_unlink_bblock (cfg, cfg->cbb, targets [i]); } MONO_EMIT_NEW_BIALU_IMM (cfg, OP_ICOMPARE_IMM, -1, src1->dreg, n); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_IBGE_UN, default_bblock); for (i = 0; i < n; ++i) link_bblock (cfg, cfg->cbb, targets [i]); table = (MonoJumpInfoBBTable *)mono_mempool_alloc (cfg->mempool, sizeof (MonoJumpInfoBBTable)); table->table = targets; table->table_size = n; use_op_switch = FALSE; #ifdef TARGET_ARM /* ARM implements SWITCH statements differently */ /* FIXME: Make it use the generic implementation */ if (!cfg->compile_aot) use_op_switch = TRUE; #endif if (COMPILE_LLVM (cfg)) use_op_switch = TRUE; cfg->cbb->has_jump_table = 1; if (use_op_switch) { MONO_INST_NEW (cfg, ins, OP_SWITCH); ins->sreg1 = src1->dreg; ins->inst_p0 = table; ins->inst_many_bb = targets; ins->klass = (MonoClass *)GUINT_TO_POINTER (n); MONO_ADD_INS (cfg->cbb, ins); } else { if (TARGET_SIZEOF_VOID_P == 8) MONO_EMIT_NEW_BIALU_IMM (cfg, OP_SHL_IMM, offset_reg, src1->dreg, 3); else MONO_EMIT_NEW_BIALU_IMM (cfg, OP_SHL_IMM, offset_reg, src1->dreg, 2); #if SIZEOF_REGISTER == 8 /* The upper word might not be zero, and we add it to a 64 bit address later */ MONO_EMIT_NEW_UNALU (cfg, OP_ZEXT_I4, offset_reg, offset_reg); #endif if (cfg->compile_aot) { MONO_EMIT_NEW_AOTCONST (cfg, table_reg, table, MONO_PATCH_INFO_SWITCH); } else { MONO_INST_NEW (cfg, ins, OP_JUMP_TABLE); ins->inst_c1 = MONO_PATCH_INFO_SWITCH; ins->inst_p0 = table; ins->dreg = table_reg; MONO_ADD_INS (cfg->cbb, ins); } /* FIXME: Use load_memindex */ MONO_EMIT_NEW_BIALU (cfg, OP_PADD, sum_reg, table_reg, offset_reg); MONO_EMIT_NEW_LOAD_MEMBASE (cfg, target_reg, 
sum_reg, 0); MONO_EMIT_NEW_UNALU (cfg, OP_BR_REG, -1, target_reg); } start_new_bblock = 1; inline_costs += BRANCH_COST * 2; break; } case MONO_CEE_LDIND_I1: case MONO_CEE_LDIND_U1: case MONO_CEE_LDIND_I2: case MONO_CEE_LDIND_U2: case MONO_CEE_LDIND_I4: case MONO_CEE_LDIND_U4: case MONO_CEE_LDIND_I8: case MONO_CEE_LDIND_I: case MONO_CEE_LDIND_R4: case MONO_CEE_LDIND_R8: case MONO_CEE_LDIND_REF: --sp; if (!(ins_flag & MONO_INST_NONULLCHECK)) MONO_EMIT_NULL_CHECK (cfg, sp [0]->dreg, FALSE); ins = mini_emit_memory_load (cfg, m_class_get_byval_arg (ldind_to_type (il_op)), sp [0], 0, ins_flag); *sp++ = ins; ins_flag = 0; break; case MONO_CEE_STIND_REF: case MONO_CEE_STIND_I1: case MONO_CEE_STIND_I2: case MONO_CEE_STIND_I4: case MONO_CEE_STIND_I8: case MONO_CEE_STIND_R4: case MONO_CEE_STIND_R8: case MONO_CEE_STIND_I: { sp -= 2; if (il_op == MONO_CEE_STIND_REF && sp [1]->type != STACK_OBJ) { /* stind.ref must only be used with object references. */ UNVERIFIED; } if (il_op == MONO_CEE_STIND_R4 && sp [1]->type == STACK_R8) sp [1] = convert_value (cfg, m_class_get_byval_arg (mono_defaults.single_class), sp [1]); mini_emit_memory_store (cfg, m_class_get_byval_arg (stind_to_type (il_op)), sp [0], sp [1], ins_flag); ins_flag = 0; inline_costs += 1; break; } case MONO_CEE_MUL: MONO_INST_NEW (cfg, ins, il_op); sp -= 2; ins->sreg1 = sp [0]->dreg; ins->sreg2 = sp [1]->dreg; type_from_op (cfg, ins, sp [0], sp [1]); CHECK_TYPE (ins); ins->dreg = alloc_dreg ((cfg), (MonoStackType)(ins)->type); /* Use the immediate opcodes if possible */ int imm_opcode; imm_opcode = mono_op_to_op_imm_noemul (ins->opcode); if ((sp [1]->opcode == OP_ICONST) && mono_arch_is_inst_imm (ins->opcode, imm_opcode, sp [1]->inst_c0)) { if (imm_opcode != -1) { ins->opcode = imm_opcode; ins->inst_p1 = (gpointer)(gssize)(sp [1]->inst_c0); ins->sreg2 = -1; NULLIFY_INS (sp [1]); } } MONO_ADD_INS ((cfg)->cbb, (ins)); *sp++ = mono_decompose_opcode (cfg, ins); break; case MONO_CEE_ADD: case MONO_CEE_SUB: case MONO_CEE_DIV: case MONO_CEE_DIV_UN: case MONO_CEE_REM: case MONO_CEE_REM_UN: case MONO_CEE_AND: case MONO_CEE_OR: case MONO_CEE_XOR: case MONO_CEE_SHL: case MONO_CEE_SHR: case MONO_CEE_SHR_UN: { MONO_INST_NEW (cfg, ins, il_op); sp -= 2; ins->sreg1 = sp [0]->dreg; ins->sreg2 = sp [1]->dreg; type_from_op (cfg, ins, sp [0], sp [1]); CHECK_TYPE (ins); add_widen_op (cfg, ins, &sp [0], &sp [1]); ins->dreg = alloc_dreg ((cfg), (MonoStackType)(ins)->type); /* Use the immediate opcodes if possible */ int imm_opcode; imm_opcode = mono_op_to_op_imm_noemul (ins->opcode); if (((sp [1]->opcode == OP_ICONST) || (sp [1]->opcode == OP_I8CONST)) && mono_arch_is_inst_imm (ins->opcode, imm_opcode, sp [1]->opcode == OP_ICONST ? 
sp [1]->inst_c0 : sp [1]->inst_l)) { if (imm_opcode != -1) { ins->opcode = imm_opcode; if (sp [1]->opcode == OP_I8CONST) { #if SIZEOF_REGISTER == 8 ins->inst_imm = sp [1]->inst_l; #else ins->inst_l = sp [1]->inst_l; #endif } else { ins->inst_imm = (gssize)(sp [1]->inst_c0); } ins->sreg2 = -1; /* Might be followed by an instruction added by add_widen_op */ if (sp [1]->next == NULL) NULLIFY_INS (sp [1]); } } MONO_ADD_INS ((cfg)->cbb, (ins)); *sp++ = mono_decompose_opcode (cfg, ins); break; } case MONO_CEE_NEG: case MONO_CEE_NOT: case MONO_CEE_CONV_I1: case MONO_CEE_CONV_I2: case MONO_CEE_CONV_I4: case MONO_CEE_CONV_R4: case MONO_CEE_CONV_R8: case MONO_CEE_CONV_U4: case MONO_CEE_CONV_I8: case MONO_CEE_CONV_U8: case MONO_CEE_CONV_OVF_I8: case MONO_CEE_CONV_OVF_U8: case MONO_CEE_CONV_R_UN: /* Special case this earlier so we have long constants in the IR */ if ((il_op == MONO_CEE_CONV_I8 || il_op == MONO_CEE_CONV_U8) && (sp [-1]->opcode == OP_ICONST)) { int data = sp [-1]->inst_c0; sp [-1]->opcode = OP_I8CONST; sp [-1]->type = STACK_I8; #if SIZEOF_REGISTER == 8 if (il_op == MONO_CEE_CONV_U8) sp [-1]->inst_c0 = (guint32)data; else sp [-1]->inst_c0 = data; #else if (il_op == MONO_CEE_CONV_U8) sp [-1]->inst_l = (guint32)data; else sp [-1]->inst_l = data; #endif sp [-1]->dreg = alloc_dreg (cfg, STACK_I8); } else { ADD_UNOP (il_op); } break; case MONO_CEE_CONV_OVF_I4: case MONO_CEE_CONV_OVF_I1: case MONO_CEE_CONV_OVF_I2: case MONO_CEE_CONV_OVF_I: case MONO_CEE_CONV_OVF_I1_UN: case MONO_CEE_CONV_OVF_I2_UN: case MONO_CEE_CONV_OVF_I4_UN: case MONO_CEE_CONV_OVF_I8_UN: case MONO_CEE_CONV_OVF_I_UN: if (sp [-1]->type == STACK_R8 || sp [-1]->type == STACK_R4) { /* floats are always signed, _UN has no effect */ ADD_UNOP (CEE_CONV_OVF_I8); if (il_op == MONO_CEE_CONV_OVF_I1_UN) ADD_UNOP (MONO_CEE_CONV_OVF_I1); else if (il_op == MONO_CEE_CONV_OVF_I2_UN) ADD_UNOP (MONO_CEE_CONV_OVF_I2); else if (il_op == MONO_CEE_CONV_OVF_I4_UN) ADD_UNOP (MONO_CEE_CONV_OVF_I4); else if (il_op == MONO_CEE_CONV_OVF_I8_UN) ; else ADD_UNOP (il_op); } else { ADD_UNOP (il_op); } break; case MONO_CEE_CONV_OVF_U1: case MONO_CEE_CONV_OVF_U2: case MONO_CEE_CONV_OVF_U4: case MONO_CEE_CONV_OVF_U: case MONO_CEE_CONV_OVF_U1_UN: case MONO_CEE_CONV_OVF_U2_UN: case MONO_CEE_CONV_OVF_U4_UN: case MONO_CEE_CONV_OVF_U8_UN: case MONO_CEE_CONV_OVF_U_UN: if (sp [-1]->type == STACK_R8 || sp [-1]->type == STACK_R4) { /* floats are always signed, _UN has no effect */ ADD_UNOP (CEE_CONV_OVF_U8); ADD_UNOP (il_op); } else { ADD_UNOP (il_op); } break; case MONO_CEE_CONV_U2: case MONO_CEE_CONV_U1: case MONO_CEE_CONV_U: case MONO_CEE_CONV_I: ADD_UNOP (il_op); CHECK_CFG_EXCEPTION; break; case MONO_CEE_ADD_OVF: case MONO_CEE_ADD_OVF_UN: case MONO_CEE_MUL_OVF: case MONO_CEE_MUL_OVF_UN: case MONO_CEE_SUB_OVF: case MONO_CEE_SUB_OVF_UN: MONO_INST_NEW (cfg, ins, il_op); sp -= 2; ins->sreg1 = sp [0]->dreg; ins->sreg2 = sp [1]->dreg; type_from_op (cfg, ins, sp [0], sp [1]); CHECK_TYPE (ins); if (ovf_exc) ins->inst_exc_name = ovf_exc; else ins->inst_exc_name = "OverflowException"; /* Have to insert a widening op */ add_widen_op (cfg, ins, &sp [0], &sp [1]); ins->dreg = alloc_dreg (cfg, (MonoStackType)(ins)->type); MONO_ADD_INS ((cfg)->cbb, ins); /* The opcode might be emulated, so need to special case this */ if (ovf_exc && mono_find_jit_opcode_emulation (ins->opcode)) { switch (ins->opcode) { case OP_IMUL_OVF_UN: /* This opcode is just a placeholder, it will be emulated also */ ins->opcode = OP_IMUL_OVF_UN_OOM; break; case OP_LMUL_OVF_UN: /* This opcode is just a 
placeholder, it will be emulated also */ ins->opcode = OP_LMUL_OVF_UN_OOM; break; default: g_assert_not_reached (); } } ovf_exc = NULL; *sp++ = mono_decompose_opcode (cfg, ins); break; case MONO_CEE_CPOBJ: GSHAREDVT_FAILURE (il_op); GSHAREDVT_FAILURE (*ip); klass = mini_get_class (method, token, generic_context); CHECK_TYPELOAD (klass); sp -= 2; mini_emit_memory_copy (cfg, sp [0], sp [1], klass, FALSE, ins_flag); ins_flag = 0; break; case MONO_CEE_LDOBJ: { int loc_index = -1; int stloc_len = 0; --sp; klass = mini_get_class (method, token, generic_context); CHECK_TYPELOAD (klass); /* Optimize the common ldobj+stloc combination */ if (next_ip < end) { switch (next_ip [0]) { case MONO_CEE_STLOC_S: CHECK_OPSIZE (7); loc_index = next_ip [1]; stloc_len = 2; break; case MONO_CEE_STLOC_0: case MONO_CEE_STLOC_1: case MONO_CEE_STLOC_2: case MONO_CEE_STLOC_3: loc_index = next_ip [0] - CEE_STLOC_0; stloc_len = 1; break; default: break; } } if ((loc_index != -1) && ip_in_bb (cfg, cfg->cbb, next_ip)) { CHECK_LOCAL (loc_index); EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (klass), sp [0]->dreg, 0); ins->dreg = cfg->locals [loc_index]->dreg; ins->flags |= ins_flag; il_op = (MonoOpcodeEnum)next_ip [0]; next_ip += stloc_len; if (ins_flag & MONO_INST_VOLATILE) { /* Volatile loads have acquire semantics, see 12.6.7 in Ecma 335 */ mini_emit_memory_barrier (cfg, MONO_MEMORY_BARRIER_ACQ); } ins_flag = 0; break; } /* Optimize the ldobj+stobj combination */ if (next_ip + 4 < end && next_ip [0] == CEE_STOBJ && ip_in_bb (cfg, cfg->cbb, next_ip) && read32 (next_ip + 1) == token) { CHECK_STACK (1); sp --; mini_emit_memory_copy (cfg, sp [0], sp [1], klass, FALSE, ins_flag); il_op = (MonoOpcodeEnum)next_ip [0]; next_ip += 5; ins_flag = 0; break; } ins = mini_emit_memory_load (cfg, m_class_get_byval_arg (klass), sp [0], 0, ins_flag); *sp++ = ins; ins_flag = 0; inline_costs += 1; break; } case MONO_CEE_LDSTR: if (method->wrapper_type == MONO_WRAPPER_DYNAMIC_METHOD) { EMIT_NEW_PCONST (cfg, ins, mono_method_get_wrapper_data (method, n)); ins->type = STACK_OBJ; *sp = ins; } else if (method->wrapper_type != MONO_WRAPPER_NONE) { MonoInst *iargs [1]; char *str = (char *)mono_method_get_wrapper_data (method, n); if (cfg->compile_aot) EMIT_NEW_LDSTRLITCONST (cfg, iargs [0], str); else EMIT_NEW_PCONST (cfg, iargs [0], str); *sp = mono_emit_jit_icall (cfg, mono_string_new_wrapper_internal, iargs); } else { { if (cfg->cbb->out_of_line) { MonoInst *iargs [2]; if (image == mono_defaults.corlib) { /* * Avoid relocations in AOT and save some space by using a * version of helper_ldstr specialized to mscorlib. 
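* Only the token index needs to be passed, since the image is implicitly mscorlib.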
*/ EMIT_NEW_ICONST (cfg, iargs [0], mono_metadata_token_index (n)); *sp = mono_emit_jit_icall (cfg, mono_helper_ldstr_mscorlib, iargs); } else { /* Avoid creating the string object */ EMIT_NEW_IMAGECONST (cfg, iargs [0], image); EMIT_NEW_ICONST (cfg, iargs [1], mono_metadata_token_index (n)); *sp = mono_emit_jit_icall (cfg, mono_helper_ldstr, iargs); } } else if (cfg->compile_aot) { NEW_LDSTRCONST (cfg, ins, image, n); *sp = ins; MONO_ADD_INS (cfg->cbb, ins); } else { NEW_PCONST (cfg, ins, NULL); ins->type = STACK_OBJ; ins->inst_p0 = mono_ldstr_checked (image, mono_metadata_token_index (n), cfg->error); CHECK_CFG_ERROR; if (!ins->inst_p0) OUT_OF_MEMORY_FAILURE; *sp = ins; MONO_ADD_INS (cfg->cbb, ins); } } } sp++; break; case MONO_CEE_NEWOBJ: { MonoInst *iargs [2]; MonoMethodSignature *fsig; MonoInst this_ins; MonoInst *alloc; MonoInst *vtable_arg = NULL; cmethod = mini_get_method (cfg, method, token, NULL, generic_context); CHECK_CFG_ERROR; fsig = mono_method_get_signature_checked (cmethod, image, token, generic_context, cfg->error); CHECK_CFG_ERROR; mono_save_token_info (cfg, image, token, cmethod); if (!mono_class_init_internal (cmethod->klass)) TYPE_LOAD_ERROR (cmethod->klass); context_used = mini_method_check_context_used (cfg, cmethod); if (!dont_verify && !cfg->skip_visibility) { MonoMethod *cil_method = cmethod; MonoMethod *target_method = cil_method; if (method->is_inflated) { MonoGenericContainer *container = mono_method_get_generic_container(method_definition); MonoGenericContext *context = (container != NULL ? &container->context : NULL); target_method = mini_get_method_allow_open (method, token, NULL, context, cfg->error); CHECK_CFG_ERROR; } if (!mono_method_can_access_method (method_definition, target_method) && !mono_method_can_access_method (method, cil_method)) emit_method_access_failure (cfg, method, cil_method); } if (cfg->gshared && cmethod && cmethod->klass != method->klass && mono_class_is_ginst (cmethod->klass) && mono_method_is_generic_sharable (cmethod, TRUE) && mono_class_needs_cctor_run (cmethod->klass, method)) { emit_class_init (cfg, cmethod->klass); CHECK_TYPELOAD (cmethod->klass); } /* if (cfg->gsharedvt) { if (mini_is_gsharedvt_variable_signature (sig)) GSHAREDVT_FAILURE (il_op); } */ n = fsig->param_count; CHECK_STACK (n); /* * Generate smaller code for the common newobj <exception> instruction in * argument checking code. 
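* The type token and up to two string arguments go straight to a
* specialized icall (mono_create_corlib_exception_0/1/2), bypassing the
* generic allocation-plus-ctor path.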
*/ if (cfg->cbb->out_of_line && m_class_get_image (cmethod->klass) == mono_defaults.corlib && is_exception_class (cmethod->klass) && n <= 2 && ((n < 1) || (!m_type_is_byref (fsig->params [0]) && fsig->params [0]->type == MONO_TYPE_STRING)) && ((n < 2) || (!m_type_is_byref (fsig->params [1]) && fsig->params [1]->type == MONO_TYPE_STRING))) { MonoInst *iargs [3]; sp -= n; EMIT_NEW_ICONST (cfg, iargs [0], m_class_get_type_token (cmethod->klass)); switch (n) { case 0: *sp ++ = mono_emit_jit_icall (cfg, mono_create_corlib_exception_0, iargs); break; case 1: iargs [1] = sp [0]; *sp ++ = mono_emit_jit_icall (cfg, mono_create_corlib_exception_1, iargs); break; case 2: iargs [1] = sp [0]; iargs [2] = sp [1]; *sp ++ = mono_emit_jit_icall (cfg, mono_create_corlib_exception_2, iargs); break; default: g_assert_not_reached (); } inline_costs += 5; break; } /* move the args to allow room for 'this' in the first position */ while (n--) { --sp; sp [1] = sp [0]; } for (int i = 0; i < fsig->param_count; ++i) sp [i + fsig->hasthis] = convert_value (cfg, fsig->params [i], sp [i + fsig->hasthis]); /* check_call_signature () requires sp[0] to be set */ this_ins.type = STACK_OBJ; sp [0] = &this_ins; if (check_call_signature (cfg, fsig, sp)) UNVERIFIED; iargs [0] = NULL; if (mini_class_is_system_array (cmethod->klass)) { *sp = emit_get_rgctx_method (cfg, context_used, cmethod, MONO_RGCTX_INFO_METHOD); MonoJitICallId function = MONO_JIT_ICALL_ZeroIsReserved; int rank = m_class_get_rank (cmethod->klass); int n = fsig->param_count; /* Optimize the common cases: use a ctor taking a length for each rank (no lbound). */ if (n == rank) { switch (n) { case 1: function = MONO_JIT_ICALL_mono_array_new_1; break; case 2: function = MONO_JIT_ICALL_mono_array_new_2; break; case 3: function = MONO_JIT_ICALL_mono_array_new_3; break; case 4: function = MONO_JIT_ICALL_mono_array_new_4; break; default: break; } } /* Regular case: rank > 4, or length and lbound specified per rank. */ if (function == MONO_JIT_ICALL_ZeroIsReserved) { /* FIXME Maximum value of param_count? Realistically 64. Fits in imm? */ if (!array_new_localalloc_ins) { MONO_INST_NEW (cfg, array_new_localalloc_ins, OP_LOCALLOC_IMM); array_new_localalloc_ins->dreg = alloc_preg (cfg); cfg->flags |= MONO_CFG_HAS_ALLOCA; MONO_ADD_INS (init_localsbb, array_new_localalloc_ins); } array_new_localalloc_ins->inst_imm = MAX (array_new_localalloc_ins->inst_imm, n * sizeof (target_mgreg_t)); int dreg = array_new_localalloc_ins->dreg; if (2 * rank == n) { /* [lbound, length, lbound, length, ...] * mono_array_new_n_icall expects a non-interleaved list of * lbounds and lengths, so deinterleave here. */ for (int l = 0; l < 2; ++l) { int src = l; int dst = l * rank; for (int r = 0; r < rank; ++r, src += 2, ++dst) { NEW_STORE_MEMBASE (cfg, ins, OP_STORE_MEMBASE_REG, dreg, dst * sizeof (target_mgreg_t), sp [src + 1]->dreg); MONO_ADD_INS (cfg->cbb, ins); } } } else { /* [length, length, length, ...] */ for (int i = 0; i < n; ++i) { NEW_STORE_MEMBASE (cfg, ins, OP_STORE_MEMBASE_REG, dreg, i * sizeof (target_mgreg_t), sp [i + 1]->dreg); MONO_ADD_INS (cfg->cbb, ins); } } EMIT_NEW_ICONST (cfg, ins, n); sp [1] = ins; EMIT_NEW_UNALU (cfg, ins, OP_MOVE, alloc_preg (cfg), dreg); ins->type = STACK_PTR; sp [2] = ins; // FIXME Adjust sp by n - 3? Attempts failed.
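/* General case: the lengths (and optional lower bounds) were stored into the
 * localloc'd scratch buffer above, so pass the argument count (sp [1]) and the
 * buffer address (sp [2]) to the n-dimensional array allocator icall. */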
function = MONO_JIT_ICALL_mono_array_new_n_icall; } alloc = mono_emit_jit_icall_id (cfg, function, sp); } else if (cmethod->string_ctor) { g_assert (!context_used); g_assert (!vtable_arg); /* we simply pass a null pointer */ EMIT_NEW_PCONST (cfg, *sp, NULL); /* now call the string ctor */ alloc = mini_emit_method_call_full (cfg, cmethod, fsig, FALSE, sp, NULL, NULL, NULL); } else { if (m_class_is_valuetype (cmethod->klass)) { iargs [0] = mono_compile_create_var (cfg, m_class_get_byval_arg (cmethod->klass), OP_LOCAL); mini_emit_init_rvar (cfg, iargs [0]->dreg, m_class_get_byval_arg (cmethod->klass)); EMIT_NEW_TEMPLOADA (cfg, *sp, iargs [0]->inst_c0); alloc = NULL; /* * The code generated by mini_emit_virtual_call () expects * iargs [0] to be a boxed instance, but luckily the vcall * will be transformed into a normal call there. */ } else if (context_used) { alloc = handle_alloc (cfg, cmethod->klass, FALSE, context_used); *sp = alloc; } else { MonoVTable *vtable = NULL; if (!cfg->compile_aot) vtable = mono_class_vtable_checked (cmethod->klass, cfg->error); CHECK_CFG_ERROR; CHECK_TYPELOAD (cmethod->klass); /* * TypeInitializationExceptions thrown from the mono_runtime_class_init * call in mono_jit_runtime_invoke () can abort the finalizer thread. * As a workaround, we call class cctors before allocating objects. */ if (mini_field_access_needs_cctor_run (cfg, method, cmethod->klass, vtable) && !(g_slist_find (class_inits, cmethod->klass))) { emit_class_init (cfg, cmethod->klass); if (cfg->verbose_level > 2) printf ("class %s.%s needs init call for ctor\n", m_class_get_name_space (cmethod->klass), m_class_get_name (cmethod->klass)); class_inits = g_slist_prepend (class_inits, cmethod->klass); } alloc = handle_alloc (cfg, cmethod->klass, FALSE, 0); *sp = alloc; } CHECK_CFG_EXCEPTION; /*for handle_alloc*/ if (alloc) MONO_EMIT_NEW_UNALU (cfg, OP_NOT_NULL, -1, alloc->dreg); /* Now call the actual ctor */ int ctor_inline_costs = 0; handle_ctor_call (cfg, cmethod, fsig, context_used, sp, ip, &ctor_inline_costs); /* don't contribute to inline_costs if the ctor has [MethodImpl(MethodImplOptions.AggressiveInlining)] */ if (!COMPILE_LLVM(cfg) || !(cmethod->iflags & METHOD_IMPL_ATTRIBUTE_AGGRESSIVE_INLINING)) inline_costs += ctor_inline_costs; CHECK_CFG_EXCEPTION; } if (alloc == NULL) { /* Valuetype */ EMIT_NEW_TEMPLOAD (cfg, ins, iargs [0]->inst_c0); mini_type_to_eval_stack_type (cfg, m_class_get_byval_arg (ins->klass), ins); *sp++= ins; } else { *sp++ = alloc; } inline_costs += 5; if (!(seq_point_locs && mono_bitset_test_fast (seq_point_locs, next_ip - header->code))) emit_seq_point (cfg, method, next_ip, FALSE, TRUE); break; } case MONO_CEE_CASTCLASS: case MONO_CEE_ISINST: { --sp; klass = mini_get_class (method, token, generic_context); CHECK_TYPELOAD (klass); if (sp [0]->type != STACK_OBJ) UNVERIFIED; MONO_INST_NEW (cfg, ins, (il_op == MONO_CEE_ISINST) ? 
OP_ISINST : OP_CASTCLASS); ins->dreg = alloc_preg (cfg); ins->sreg1 = (*sp)->dreg; ins->klass = klass; ins->type = STACK_OBJ; MONO_ADD_INS (cfg->cbb, ins); CHECK_CFG_EXCEPTION; *sp++ = ins; cfg->flags |= MONO_CFG_HAS_TYPE_CHECK; break; } case MONO_CEE_UNBOX_ANY: { MonoInst *res, *addr; --sp; klass = mini_get_class (method, token, generic_context); CHECK_TYPELOAD (klass); mono_save_token_info (cfg, image, token, klass); context_used = mini_class_check_context_used (cfg, klass); if (mini_is_gsharedvt_klass (klass)) { res = handle_unbox_gsharedvt (cfg, klass, *sp); inline_costs += 2; } else if (mini_class_is_reference (klass)) { if (MONO_INS_IS_PCONST_NULL (*sp)) { EMIT_NEW_PCONST (cfg, res, NULL); res->type = STACK_OBJ; } else { MONO_INST_NEW (cfg, res, OP_CASTCLASS); res->dreg = alloc_preg (cfg); res->sreg1 = (*sp)->dreg; res->klass = klass; res->type = STACK_OBJ; MONO_ADD_INS (cfg->cbb, res); cfg->flags |= MONO_CFG_HAS_TYPE_CHECK; } } else if (mono_class_is_nullable (klass)) { res = handle_unbox_nullable (cfg, *sp, klass, context_used); } else { addr = mini_handle_unbox (cfg, klass, *sp, context_used); /* LDOBJ */ EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (klass), addr->dreg, 0); res = ins; inline_costs += 2; } *sp ++ = res; break; } case MONO_CEE_BOX: { MonoInst *val; MonoClass *enum_class; MonoMethod *has_flag; MonoMethodSignature *has_flag_sig; --sp; val = *sp; klass = mini_get_class (method, token, generic_context); CHECK_TYPELOAD (klass); mono_save_token_info (cfg, image, token, klass); context_used = mini_class_check_context_used (cfg, klass); if (mini_class_is_reference (klass)) { *sp++ = val; break; } val = convert_value (cfg, m_class_get_byval_arg (klass), val); if (klass == mono_defaults.void_class) UNVERIFIED; if (target_type_is_incompatible (cfg, m_class_get_byval_arg (klass), val)) UNVERIFIED; /* frequent check in generic code: box (struct), brtrue */ /* * Look for: * * <push int/long ptr> * <push int/long> * box MyFlags * constrained. MyFlags * callvirt instance bool class [mscorlib] System.Enum::HasFlag (class [mscorlib] System.Enum) * * If we find this sequence and the operand types on box and constrained * are equal, we can emit a specialized instruction sequence instead of * the very slow HasFlag () call. * This code sequence is generated by older mcs/csc, the newer one is handled in * emit_inst_for_method (). */ guint32 constrained_token; guint32 callvirt_token; if ((cfg->opt & MONO_OPT_INTRINS) && /* FIXME ip_in_bb as we go? */ next_ip < end && ip_in_bb (cfg, cfg->cbb, next_ip) && (ip = il_read_constrained (next_ip, end, &constrained_token)) && ip_in_bb (cfg, cfg->cbb, ip) && (ip = il_read_callvirt (ip, end, &callvirt_token)) && ip_in_bb (cfg, cfg->cbb, ip) && m_class_is_enumtype (klass) && (enum_class = mini_get_class (method, constrained_token, generic_context)) && (has_flag = mini_get_method (cfg, method, callvirt_token, NULL, generic_context)) && has_flag->klass == mono_defaults.enum_class && !strcmp (has_flag->name, "HasFlag") && (has_flag_sig = mono_method_signature_internal (has_flag)) && has_flag_sig->hasthis && has_flag_sig->param_count == 1) { CHECK_TYPELOAD (enum_class); if (enum_class == klass) { MonoInst *enum_this, *enum_flag; next_ip = ip; il_op = MONO_CEE_CALLVIRT; --sp; enum_this = sp [0]; enum_flag = sp [1]; *sp++ = mini_handle_enum_has_flag (cfg, klass, enum_this, -1, enum_flag); break; } } guint32 unbox_any_token; /* * Common in generic code: * box T1, unbox.any T2. 
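* If both tokens resolve to the same class the pair cancels out, so the
* original value can be pushed back unchanged.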
*/ if ((cfg->opt & MONO_OPT_INTRINS) && next_ip < end && ip_in_bb (cfg, cfg->cbb, next_ip) && (ip = il_read_unbox_any (next_ip, end, &unbox_any_token))) { MonoClass *unbox_klass = mini_get_class (method, unbox_any_token, generic_context); CHECK_TYPELOAD (unbox_klass); if (klass == unbox_klass) { next_ip = ip; *sp++ = val; break; } } // Optimize // // box // call object::GetType() // guint32 gettype_token; if ((ip = il_read_call(next_ip, end, &gettype_token)) && ip_in_bb (cfg, cfg->cbb, ip)) { MonoMethod* gettype_method = mini_get_method (cfg, method, gettype_token, NULL, generic_context); if (!strcmp (gettype_method->name, "GetType") && gettype_method->klass == mono_defaults.object_class) { mono_class_init_internal(klass); if (mono_class_get_checked (m_class_get_image (klass), m_class_get_type_token (klass), error) == klass) { if (cfg->compile_aot) { EMIT_NEW_TYPE_FROM_HANDLE_CONST (cfg, ins, m_class_get_image (klass), m_class_get_type_token (klass), generic_context); } else { MonoType *klass_type = m_class_get_byval_arg (klass); MonoReflectionType* reflection_type = mono_type_get_object_checked (klass_type, cfg->error); EMIT_NEW_PCONST (cfg, ins, reflection_type); } ins->type = STACK_OBJ; ins->klass = mono_defaults.systemtype_class; *sp++ = ins; next_ip = ip; break; } } } // Optimize // // box // ldnull // ceq (or cgt.un) // // to just // // ldc.i4.0 (or 1) guchar* ldnull_ip; if ((ldnull_ip = il_read_op (next_ip, end, CEE_LDNULL, MONO_CEE_LDNULL)) && ip_in_bb (cfg, cfg->cbb, ldnull_ip)) { gboolean is_eq = FALSE, is_neq = FALSE; if ((ip = il_read_op (ldnull_ip, end, CEE_PREFIX1, MONO_CEE_CEQ))) is_eq = TRUE; else if ((ip = il_read_op (ldnull_ip, end, CEE_PREFIX1, MONO_CEE_CGT_UN))) is_neq = TRUE; if ((is_eq || is_neq) && ip_in_bb (cfg, cfg->cbb, ip) && !mono_class_is_nullable (klass) && !mini_is_gsharedvt_klass (klass)) { next_ip = ip; il_op = (MonoOpcodeEnum) (is_eq ? CEE_LDC_I4_0 : CEE_LDC_I4_1); EMIT_NEW_ICONST (cfg, ins, is_eq ? 0 : 1); ins->type = STACK_I4; *sp++ = ins; break; } } guint32 isinst_tk = 0; if ((ip = il_read_op_and_token (next_ip, end, CEE_ISINST, MONO_CEE_ISINST, &isinst_tk)) && ip_in_bb (cfg, cfg->cbb, ip)) { MonoClass *isinst_class = mini_get_class (method, isinst_tk, generic_context); if (!mono_class_is_nullable (klass) && !mono_class_is_nullable (isinst_class) && !mini_is_gsharedvt_variable_klass (klass) && !mini_is_gsharedvt_variable_klass (isinst_class) && !mono_class_is_open_constructed_type (m_class_get_byval_arg (klass)) && !mono_class_is_open_constructed_type (m_class_get_byval_arg (isinst_class))) { // Optimize // // box // isinst [Type] // brfalse/brtrue // // to // // ldc.i4.0 (or 1) // brfalse/brtrue // guchar* br_ip = NULL; if ((br_ip = il_read_brtrue (ip, end, &target)) || (br_ip = il_read_brtrue_s (ip, end, &target)) || (br_ip = il_read_brfalse (ip, end, &target)) || (br_ip = il_read_brfalse_s (ip, end, &target))) { gboolean isinst = mono_class_is_assignable_from_internal (isinst_class, klass); next_ip = ip; il_op = (MonoOpcodeEnum) (isinst ? CEE_LDC_I4_1 : CEE_LDC_I4_0); EMIT_NEW_ICONST (cfg, ins, isinst ? 
1 : 0); ins->type = STACK_I4; *sp++ = ins; break; } // Optimize // // box // isinst [Type] // ldnull // ceq/cgt.un // // to // // ldc.i4.0 (or 1) // guchar* ldnull_ip = NULL; if ((ldnull_ip = il_read_op (ip, end, CEE_LDNULL, MONO_CEE_LDNULL)) && ip_in_bb (cfg, cfg->cbb, ldnull_ip)) { gboolean is_eq = FALSE, is_neq = FALSE; if ((ip = il_read_op (ldnull_ip, end, CEE_PREFIX1, MONO_CEE_CEQ))) is_eq = TRUE; else if ((ip = il_read_op (ldnull_ip, end, CEE_PREFIX1, MONO_CEE_CGT_UN))) is_neq = TRUE; if ((is_eq || is_neq) && ip_in_bb (cfg, cfg->cbb, ip) && !mono_class_is_nullable (klass) && !mini_is_gsharedvt_klass (klass)) { gboolean isinst = mono_class_is_assignable_from_internal (isinst_class, klass); next_ip = ip; if (is_eq) isinst = !isinst; il_op = (MonoOpcodeEnum) (isinst ? CEE_LDC_I4_1 : CEE_LDC_I4_0); EMIT_NEW_ICONST (cfg, ins, isinst ? 1 : 0); ins->type = STACK_I4; *sp++ = ins; break; } } // Optimize // // box // isinst [Type] // unbox.any // // to // // nop // guchar* unbox_ip = NULL; guint32 unbox_token = 0; if ((unbox_ip = il_read_unbox_any (ip, end, &unbox_token)) && ip_in_bb (cfg, cfg->cbb, unbox_ip)) { MonoClass *unbox_klass = mini_get_class (method, unbox_token, generic_context); CHECK_TYPELOAD (unbox_klass); if (!mono_class_is_nullable (unbox_klass) && !mini_is_gsharedvt_klass (unbox_klass) && klass == isinst_class && klass == unbox_klass) { *sp++ = val; next_ip = unbox_ip; break; } } } } gboolean is_true; // FIXME: LLVM can't handle the inconsistent bb linking if (!mono_class_is_nullable (klass) && !mini_is_gsharedvt_klass (klass) && next_ip < end && ip_in_bb (cfg, cfg->cbb, next_ip) && ( (is_true = !!(ip = il_read_brtrue (next_ip, end, &target))) || (is_true = !!(ip = il_read_brtrue_s (next_ip, end, &target))) || (ip = il_read_brfalse (next_ip, end, &target)) || (ip = il_read_brfalse_s (next_ip, end, &target)))) { int dreg; MonoBasicBlock *true_bb, *false_bb; il_op = (MonoOpcodeEnum)next_ip [0]; next_ip = ip; if (cfg->verbose_level > 3) { printf ("converting (in B%d: stack: %d) %s", cfg->cbb->block_num, (int)(sp - stack_start), mono_disasm_code_one (NULL, method, ip, NULL)); printf ("<box+brtrue opt>\n"); } /* * We need to link both bblocks, since it is needed for handling stack * arguments correctly (See test_0_box_brtrue_opt_regress_81102). * Branching to only one of them would lead to inconsistencies, so * generate an ICONST+BRTRUE, the branch opts will get rid of them. */ GET_BBLOCK (cfg, true_bb, target); GET_BBLOCK (cfg, false_bb, next_ip); mono_link_bblock (cfg, cfg->cbb, true_bb); mono_link_bblock (cfg, cfg->cbb, false_bb); if (sp != stack_start) { handle_stack_args (cfg, stack_start, sp - stack_start); sp = stack_start; CHECK_UNVERIFIABLE (cfg); } if (COMPILE_LLVM (cfg)) { dreg = alloc_ireg (cfg); MONO_EMIT_NEW_ICONST (cfg, dreg, 0); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, dreg, is_true ? 0 : 1); MONO_EMIT_NEW_BRANCH_BLOCK2 (cfg, OP_IBEQ, true_bb, false_bb); } else { /* The JIT can't eliminate the iconst+compare */ MONO_INST_NEW (cfg, ins, OP_BR); ins->inst_target_bb = is_true ? 
true_bb : false_bb; MONO_ADD_INS (cfg->cbb, ins); } start_new_bblock = 1; break; } if (m_class_is_enumtype (klass) && !mini_is_gsharedvt_klass (klass) && !(val->type == STACK_I8 && TARGET_SIZEOF_VOID_P == 4)) { /* Can't do this with 64 bit enums on 32 bit since the vtype decomp pass is run after the long decomp pass */ if (val->opcode == OP_ICONST) { MONO_INST_NEW (cfg, ins, OP_BOX_ICONST); ins->type = STACK_OBJ; ins->klass = klass; ins->inst_c0 = val->inst_c0; ins->dreg = alloc_dreg (cfg, (MonoStackType)val->type); } else { MONO_INST_NEW (cfg, ins, OP_BOX); ins->type = STACK_OBJ; ins->klass = klass; ins->sreg1 = val->dreg; ins->dreg = alloc_dreg (cfg, (MonoStackType)val->type); } MONO_ADD_INS (cfg->cbb, ins); *sp++ = ins; } else { *sp++ = mini_emit_box (cfg, val, klass, context_used); } CHECK_CFG_EXCEPTION; inline_costs += 1; break; } case MONO_CEE_UNBOX: { --sp; klass = mini_get_class (method, token, generic_context); CHECK_TYPELOAD (klass); mono_save_token_info (cfg, image, token, klass); context_used = mini_class_check_context_used (cfg, klass); if (mono_class_is_nullable (klass)) { MonoInst *val; val = handle_unbox_nullable (cfg, *sp, klass, context_used); EMIT_NEW_VARLOADA (cfg, ins, get_vreg_to_inst (cfg, val->dreg), m_class_get_byval_arg (val->klass)); *sp++= ins; } else { ins = mini_handle_unbox (cfg, klass, *sp, context_used); *sp++ = ins; } inline_costs += 2; break; } case MONO_CEE_LDFLD: case MONO_CEE_LDFLDA: case MONO_CEE_STFLD: case MONO_CEE_LDSFLD: case MONO_CEE_LDSFLDA: case MONO_CEE_STSFLD: { MonoClassField *field; guint foffset; gboolean is_instance; gpointer addr = NULL; gboolean is_special_static; MonoType *ftype; MonoInst *store_val = NULL; MonoInst *thread_ins; is_instance = (il_op == MONO_CEE_LDFLD || il_op == MONO_CEE_LDFLDA || il_op == MONO_CEE_STFLD); if (is_instance) { if (il_op == MONO_CEE_STFLD) { sp -= 2; store_val = sp [1]; } else { --sp; } if (sp [0]->type == STACK_I4 || sp [0]->type == STACK_I8 || sp [0]->type == STACK_R8) UNVERIFIED; if (il_op != MONO_CEE_LDFLD && sp [0]->type == STACK_VTYPE) UNVERIFIED; } else { if (il_op == MONO_CEE_STSFLD) { sp--; store_val = sp [0]; } } if (method->wrapper_type != MONO_WRAPPER_NONE) { field = (MonoClassField *)mono_method_get_wrapper_data (method, token); klass = m_field_get_parent (field); } else { klass = NULL; field = mono_field_from_token_checked (image, token, &klass, generic_context, cfg->error); if (!field) CHECK_TYPELOAD (klass); CHECK_CFG_ERROR; } if (!dont_verify && !cfg->skip_visibility && !mono_method_can_access_field (method, field)) FIELD_ACCESS_FAILURE (method, field); mono_class_init_internal (klass); mono_class_setup_fields (klass); ftype = mono_field_get_type_internal (field); /* * LDFLD etc. is usable on static fields as well, so convert those cases to * the static case. */ if (is_instance && ftype->attrs & FIELD_ATTRIBUTE_STATIC) { switch (il_op) { case MONO_CEE_LDFLD: il_op = MONO_CEE_LDSFLD; break; case MONO_CEE_STFLD: il_op = MONO_CEE_STSFLD; break; case MONO_CEE_LDFLDA: il_op = MONO_CEE_LDSFLDA; break; default: g_assert_not_reached (); } is_instance = FALSE; } context_used = mini_class_check_context_used (cfg, klass); if (il_op == MONO_CEE_LDSFLD) { ins = mini_emit_inst_for_field_load (cfg, field); if (ins) { *sp++ = ins; goto field_access_end; } } /* INSTANCE CASE */ if (is_instance) g_assert (field->offset); foffset = m_class_is_valuetype (klass) ? 
field->offset - MONO_ABI_SIZEOF (MonoObject): field->offset; if (il_op == MONO_CEE_STFLD) { sp [1] = convert_value (cfg, field->type, sp [1]); if (target_type_is_incompatible (cfg, field->type, sp [1])) UNVERIFIED; { MonoInst *store; MONO_EMIT_NULL_CHECK (cfg, sp [0]->dreg, foffset > mono_target_pagesize ()); if (ins_flag & MONO_INST_VOLATILE) { /* Volatile stores have release semantics, see 12.6.7 in Ecma 335 */ mini_emit_memory_barrier (cfg, MONO_MEMORY_BARRIER_REL); } if (mini_is_gsharedvt_klass (klass)) { MonoInst *offset_ins; context_used = mini_class_check_context_used (cfg, klass); offset_ins = emit_get_gsharedvt_info (cfg, field, MONO_RGCTX_INFO_FIELD_OFFSET); /* The value is offset by 1 */ EMIT_NEW_BIALU_IMM (cfg, ins, OP_PSUB_IMM, offset_ins->dreg, offset_ins->dreg, 1); dreg = alloc_ireg_mp (cfg); EMIT_NEW_BIALU (cfg, ins, OP_PADD, dreg, sp [0]->dreg, offset_ins->dreg); if (cfg->gen_write_barriers && mini_type_to_stind (cfg, field->type) == CEE_STIND_REF && !MONO_INS_IS_PCONST_NULL (sp [1])) { store = mini_emit_storing_write_barrier (cfg, ins, sp [1]); } else { /* The decomposition will call mini_emit_memory_copy () which will emit a wbarrier if needed */ EMIT_NEW_STORE_MEMBASE_TYPE (cfg, store, field->type, dreg, 0, sp [1]->dreg); } } else { if (cfg->gen_write_barriers && mini_type_to_stind (cfg, field->type) == CEE_STIND_REF && !MONO_INS_IS_PCONST_NULL (sp [1])) { /* insert call to write barrier */ MonoInst *ptr; int dreg; dreg = alloc_ireg_mp (cfg); EMIT_NEW_BIALU_IMM (cfg, ptr, OP_PADD_IMM, dreg, sp [0]->dreg, foffset); store = mini_emit_storing_write_barrier (cfg, ptr, sp [1]); } else { EMIT_NEW_STORE_MEMBASE_TYPE (cfg, store, field->type, sp [0]->dreg, foffset, sp [1]->dreg); } } if (sp [0]->opcode != OP_LDADDR) store->flags |= MONO_INST_FAULT; store->flags |= ins_flag; } goto field_access_end; } if (is_instance) { if (sp [0]->type == STACK_VTYPE) { MonoInst *var; /* Have to compute the address of the variable */ var = get_vreg_to_inst (cfg, sp [0]->dreg); if (!var) var = mono_compile_create_var_for_vreg (cfg, m_class_get_byval_arg (klass), OP_LOCAL, sp [0]->dreg); else g_assert (var->klass == klass); EMIT_NEW_VARLOADA (cfg, ins, var, m_class_get_byval_arg (var->klass)); sp [0] = ins; } if (il_op == MONO_CEE_LDFLDA) { if (sp [0]->type == STACK_OBJ) { MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, sp [0]->dreg, 0); MONO_EMIT_NEW_COND_EXC (cfg, EQ, "NullReferenceException"); } dreg = alloc_ireg_mp (cfg); if (mini_is_gsharedvt_klass (klass)) { MonoInst *offset_ins; offset_ins = emit_get_gsharedvt_info (cfg, field, MONO_RGCTX_INFO_FIELD_OFFSET); /* The value is offset by 1 */ EMIT_NEW_BIALU_IMM (cfg, ins, OP_PSUB_IMM, offset_ins->dreg, offset_ins->dreg, 1); EMIT_NEW_BIALU (cfg, ins, OP_PADD, dreg, sp [0]->dreg, offset_ins->dreg); } else { EMIT_NEW_BIALU_IMM (cfg, ins, OP_PADD_IMM, dreg, sp [0]->dreg, foffset); } ins->klass = mono_class_from_mono_type_internal (field->type); ins->type = STACK_MP; *sp++ = ins; } else { MonoInst *load; MONO_EMIT_NULL_CHECK (cfg, sp [0]->dreg, foffset > mono_target_pagesize ()); #ifdef MONO_ARCH_SIMD_INTRINSICS if (sp [0]->opcode == OP_LDADDR && m_class_is_simd_type (klass) && cfg->opt & MONO_OPT_SIMD) { ins = mono_emit_simd_field_load (cfg, field, sp [0]); if (ins) { *sp++ = ins; goto field_access_end; } } #endif MonoInst *field_add_inst = sp [0]; if (mini_is_gsharedvt_klass (klass)) { MonoInst *offset_ins; offset_ins = emit_get_gsharedvt_info (cfg, field, MONO_RGCTX_INFO_FIELD_OFFSET); /* The value is offset by 1 */ EMIT_NEW_BIALU_IMM (cfg, 
ins, OP_PSUB_IMM, offset_ins->dreg, offset_ins->dreg, 1); EMIT_NEW_BIALU (cfg, field_add_inst, OP_PADD, alloc_ireg_mp (cfg), sp [0]->dreg, offset_ins->dreg); foffset = 0; } load = mini_emit_memory_load (cfg, field->type, field_add_inst, foffset, ins_flag); if (sp [0]->opcode != OP_LDADDR) load->flags |= MONO_INST_FAULT; *sp++ = load; } } if (is_instance) goto field_access_end; /* STATIC CASE */ context_used = mini_class_check_context_used (cfg, klass); if (ftype->attrs & FIELD_ATTRIBUTE_LITERAL) { mono_error_set_field_missing (cfg->error, m_field_get_parent (field), field->name, NULL, "Using static instructions with literal field"); CHECK_CFG_ERROR; } /* The special_static_fields field is init'd in mono_class_vtable, so it needs * to be called here. */ if (!context_used) { mono_class_vtable_checked (klass, cfg->error); CHECK_CFG_ERROR; CHECK_TYPELOAD (klass); } addr = mono_special_static_field_get_offset (field, cfg->error); CHECK_CFG_ERROR; CHECK_TYPELOAD (klass); is_special_static = mono_class_field_is_special_static (field); if (is_special_static && ((gsize)addr & 0x80000000) == 0) thread_ins = mono_create_tls_get (cfg, TLS_KEY_THREAD); else thread_ins = NULL; /* Generate IR to compute the field address */ if (is_special_static && ((gsize)addr & 0x80000000) == 0 && thread_ins && !(context_used && cfg->gsharedvt && mini_is_gsharedvt_klass (klass))) { /* * Fast access to TLS data * Inline version of get_thread_static_data () in * threads.c. */ guint32 offset; int idx, static_data_reg, array_reg, dreg; static_data_reg = alloc_ireg (cfg); MONO_EMIT_NEW_LOAD_MEMBASE (cfg, static_data_reg, thread_ins->dreg, MONO_STRUCT_OFFSET (MonoInternalThread, static_data)); if (cfg->compile_aot || context_used) { int offset_reg, offset2_reg, idx_reg; /* For TLS variables, this will return the TLS offset */ if (context_used) { MonoInst *addr_ins = emit_get_rgctx_field (cfg, context_used, field, MONO_RGCTX_INFO_FIELD_OFFSET); /* The value is offset by 1 */ EMIT_NEW_BIALU_IMM (cfg, ins, OP_PSUB_IMM, addr_ins->dreg, addr_ins->dreg, 1); } else { EMIT_NEW_SFLDACONST (cfg, ins, field); } offset_reg = ins->dreg; MONO_EMIT_NEW_BIALU_IMM (cfg, OP_IAND_IMM, offset_reg, offset_reg, 0x7fffffff); idx_reg = alloc_ireg (cfg); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_IAND_IMM, idx_reg, offset_reg, 0x3f); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_ISHL_IMM, idx_reg, idx_reg, TARGET_SIZEOF_VOID_P == 8 ? 
3 : 2); MONO_EMIT_NEW_BIALU (cfg, OP_PADD, static_data_reg, static_data_reg, idx_reg); array_reg = alloc_ireg (cfg); MONO_EMIT_NEW_LOAD_MEMBASE (cfg, array_reg, static_data_reg, 0); offset2_reg = alloc_ireg (cfg); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_ISHR_UN_IMM, offset2_reg, offset_reg, 6); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_IAND_IMM, offset2_reg, offset2_reg, 0x1ffffff); dreg = alloc_ireg (cfg); EMIT_NEW_BIALU (cfg, ins, OP_PADD, dreg, array_reg, offset2_reg); } else { offset = (gsize)addr & 0x7fffffff; idx = offset & 0x3f; array_reg = alloc_ireg (cfg); MONO_EMIT_NEW_LOAD_MEMBASE (cfg, array_reg, static_data_reg, idx * TARGET_SIZEOF_VOID_P); dreg = alloc_ireg (cfg); EMIT_NEW_BIALU_IMM (cfg, ins, OP_ADD_IMM, dreg, array_reg, ((offset >> 6) & 0x1ffffff)); } } else if ((cfg->compile_aot && is_special_static) || (context_used && is_special_static)) { MonoInst *iargs [1]; g_assert (m_field_get_parent (field)); if (context_used) { iargs [0] = emit_get_rgctx_field (cfg, context_used, field, MONO_RGCTX_INFO_CLASS_FIELD); } else { EMIT_NEW_FIELDCONST (cfg, iargs [0], field); } ins = mono_emit_jit_icall (cfg, mono_class_static_field_address, iargs); } else if (context_used) { MonoInst *static_data; /* g_print ("sharing static field access in %s.%s.%s - depth %d offset %d\n", method->klass->name_space, method->klass->name, method->name, depth, field->offset); */ if (mono_class_needs_cctor_run (klass, method)) emit_class_init (cfg, klass); /* * The pointer we're computing here is * * super_info.static_data + field->offset */ static_data = mini_emit_get_rgctx_klass (cfg, context_used, klass, MONO_RGCTX_INFO_STATIC_DATA); if (mini_is_gsharedvt_klass (klass)) { MonoInst *offset_ins; offset_ins = emit_get_rgctx_field (cfg, context_used, field, MONO_RGCTX_INFO_FIELD_OFFSET); /* The value is offset by 1 */ EMIT_NEW_BIALU_IMM (cfg, ins, OP_PSUB_IMM, offset_ins->dreg, offset_ins->dreg, 1); dreg = alloc_ireg_mp (cfg); EMIT_NEW_BIALU (cfg, ins, OP_PADD, dreg, static_data->dreg, offset_ins->dreg); } else if (field->offset == 0) { ins = static_data; } else { int addr_reg = mono_alloc_preg (cfg); EMIT_NEW_BIALU_IMM (cfg, ins, OP_PADD_IMM, addr_reg, static_data->dreg, field->offset); } } else if (cfg->compile_aot && addr) { MonoInst *iargs [1]; g_assert (m_field_get_parent (field)); EMIT_NEW_FIELDCONST (cfg, iargs [0], field); ins = mono_emit_jit_icall (cfg, mono_class_static_field_address, iargs); } else { MonoVTable *vtable = NULL; if (!cfg->compile_aot) vtable = mono_class_vtable_checked (klass, cfg->error); CHECK_CFG_ERROR; CHECK_TYPELOAD (klass); if (!addr) { if (mini_field_access_needs_cctor_run (cfg, method, klass, vtable)) { if (!(g_slist_find (class_inits, klass))) { emit_class_init (cfg, klass); if (cfg->verbose_level > 2) printf ("class %s.%s needs init call for %s\n", m_class_get_name_space (klass), m_class_get_name (klass), mono_field_get_name (field)); class_inits = g_slist_prepend (class_inits, klass); } } else { if (cfg->run_cctors) { /* This makes it so that inlining cannot trigger */ /* .cctors: too many apps depend on them */ /* running with a specific order... 
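* Instead the cctor is run eagerly here and inlining of the method is refused.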
*/ g_assert (vtable); if (!vtable->initialized && m_class_has_cctor (vtable->klass)) INLINE_FAILURE ("class init"); if (!mono_runtime_class_init_full (vtable, cfg->error)) { mono_cfg_set_exception (cfg, MONO_EXCEPTION_MONO_ERROR); goto exception_exit; } } } if (cfg->compile_aot) EMIT_NEW_SFLDACONST (cfg, ins, field); else { g_assert (vtable); addr = mono_static_field_get_addr (vtable, field); g_assert (addr); EMIT_NEW_PCONST (cfg, ins, addr); } } else { MonoInst *iargs [1]; EMIT_NEW_ICONST (cfg, iargs [0], GPOINTER_TO_UINT (addr)); ins = mono_emit_jit_icall (cfg, mono_get_special_static_data, iargs); } } /* Generate IR to do the actual load/store operation */ if ((il_op == MONO_CEE_STFLD || il_op == MONO_CEE_STSFLD)) { if (ins_flag & MONO_INST_VOLATILE) { /* Volatile stores have release semantics, see 12.6.7 in Ecma 335 */ mini_emit_memory_barrier (cfg, MONO_MEMORY_BARRIER_REL); } else if (!mini_debug_options.weak_memory_model && mini_type_is_reference (ftype)) { mini_emit_memory_barrier (cfg, MONO_MEMORY_BARRIER_REL); } } if (il_op == MONO_CEE_LDSFLDA) { ins->klass = mono_class_from_mono_type_internal (ftype); ins->type = STACK_PTR; *sp++ = ins; } else if (il_op == MONO_CEE_STSFLD) { MonoInst *store; EMIT_NEW_STORE_MEMBASE_TYPE (cfg, store, ftype, ins->dreg, 0, store_val->dreg); store->flags |= ins_flag; } else { gboolean is_const = FALSE; MonoVTable *vtable = NULL; gpointer addr = NULL; if (!context_used) { vtable = mono_class_vtable_checked (klass, cfg->error); CHECK_CFG_ERROR; CHECK_TYPELOAD (klass); } if ((ftype->attrs & FIELD_ATTRIBUTE_INIT_ONLY) && (((addr = mono_aot_readonly_field_override (field)) != NULL) || (!context_used && !cfg->compile_aot && vtable->initialized))) { int ro_type = ftype->type; if (!addr) addr = mono_static_field_get_addr (vtable, field); if (ro_type == MONO_TYPE_VALUETYPE && m_class_is_enumtype (ftype->data.klass)) { ro_type = mono_class_enum_basetype_internal (ftype->data.klass)->type; } GSHAREDVT_FAILURE (il_op); /* printf ("RO-FIELD %s.%s:%s\n", klass->name_space, klass->name, mono_field_get_name (field));*/ is_const = TRUE; switch (ro_type) { case MONO_TYPE_BOOLEAN: case MONO_TYPE_U1: EMIT_NEW_ICONST (cfg, *sp, *((guint8 *)addr)); sp++; break; case MONO_TYPE_I1: EMIT_NEW_ICONST (cfg, *sp, *((gint8 *)addr)); sp++; break; case MONO_TYPE_CHAR: case MONO_TYPE_U2: EMIT_NEW_ICONST (cfg, *sp, *((guint16 *)addr)); sp++; break; case MONO_TYPE_I2: EMIT_NEW_ICONST (cfg, *sp, *((gint16 *)addr)); sp++; break; case MONO_TYPE_I4: EMIT_NEW_ICONST (cfg, *sp, *((gint32 *)addr)); sp++; break; case MONO_TYPE_U4: EMIT_NEW_ICONST (cfg, *sp, *((guint32 *)addr)); sp++; break; case MONO_TYPE_I: case MONO_TYPE_U: case MONO_TYPE_PTR: case MONO_TYPE_FNPTR: EMIT_NEW_PCONST (cfg, *sp, *((gpointer *)addr)); mini_type_to_eval_stack_type ((cfg), field->type, *sp); sp++; break; case MONO_TYPE_STRING: case MONO_TYPE_OBJECT: case MONO_TYPE_CLASS: case MONO_TYPE_SZARRAY: case MONO_TYPE_ARRAY: if (!mono_gc_is_moving ()) { EMIT_NEW_PCONST (cfg, *sp, *((gpointer *)addr)); mini_type_to_eval_stack_type ((cfg), field->type, *sp); sp++; } else { is_const = FALSE; } break; case MONO_TYPE_I8: case MONO_TYPE_U8: EMIT_NEW_I8CONST (cfg, *sp, *((gint64 *)addr)); sp++; break; case MONO_TYPE_R4: case MONO_TYPE_R8: case MONO_TYPE_VALUETYPE: default: is_const = FALSE; break; } } if (!is_const) { MonoInst *load; EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, load, field->type, ins->dreg, 0); load->flags |= ins_flag; *sp++ = load; } } field_access_end: if ((il_op == MONO_CEE_LDFLD || il_op == MONO_CEE_LDSFLD) && 
(ins_flag & MONO_INST_VOLATILE)) { /* Volatile loads have acquire semantics, see 12.6.7 in Ecma 335 */ mini_emit_memory_barrier (cfg, MONO_MEMORY_BARRIER_ACQ); } ins_flag = 0; break; } case MONO_CEE_STOBJ: sp -= 2; klass = mini_get_class (method, token, generic_context); CHECK_TYPELOAD (klass); /* FIXME: should check item at sp [1] is compatible with the type of the store. */ mini_emit_memory_store (cfg, m_class_get_byval_arg (klass), sp [0], sp [1], ins_flag); ins_flag = 0; inline_costs += 1; break; /* * Array opcodes */ case MONO_CEE_NEWARR: { MonoInst *len_ins; const char *data_ptr; int data_size = 0; guint32 field_token; --sp; klass = mini_get_class (method, token, generic_context); CHECK_TYPELOAD (klass); if (m_class_get_byval_arg (klass)->type == MONO_TYPE_VOID) UNVERIFIED; context_used = mini_class_check_context_used (cfg, klass); #ifndef TARGET_S390X if (sp [0]->type == STACK_I8 && TARGET_SIZEOF_VOID_P == 4) { MONO_INST_NEW (cfg, ins, OP_LCONV_TO_OVF_U4); ins->sreg1 = sp [0]->dreg; ins->type = STACK_I4; ins->dreg = alloc_ireg (cfg); MONO_ADD_INS (cfg->cbb, ins); *sp = mono_decompose_opcode (cfg, ins); } #else /* The array allocator expects a 64-bit input, and we cannot rely on the high bits of a 32-bit result, so we have to extend. */ if (sp [0]->type == STACK_I4 && TARGET_SIZEOF_VOID_P == 8) { MONO_INST_NEW (cfg, ins, OP_ICONV_TO_I8); ins->sreg1 = sp [0]->dreg; ins->type = STACK_I8; ins->dreg = alloc_ireg (cfg); MONO_ADD_INS (cfg->cbb, ins); *sp = mono_decompose_opcode (cfg, ins); } #endif if (context_used) { MonoInst *args [3]; MonoClass *array_class = mono_class_create_array (klass, 1); MonoMethod *managed_alloc = mono_gc_get_managed_array_allocator (array_class); /* FIXME: Use OP_NEWARR and decompose later to help abcrem */ /* vtable */ args [0] = mini_emit_get_rgctx_klass (cfg, context_used, array_class, MONO_RGCTX_INFO_VTABLE); /* array len */ args [1] = sp [0]; if (managed_alloc) ins = mono_emit_method_call (cfg, managed_alloc, args, NULL); else ins = mono_emit_jit_icall (cfg, ves_icall_array_new_specific, args); } else { /* Decompose later since it is needed by abcrem */ MonoClass *array_type = mono_class_create_array (klass, 1); mono_class_vtable_checked (array_type, cfg->error); CHECK_CFG_ERROR; CHECK_TYPELOAD (array_type); MONO_INST_NEW (cfg, ins, OP_NEWARR); ins->dreg = alloc_ireg_ref (cfg); ins->sreg1 = sp [0]->dreg; ins->inst_newa_class = klass; ins->type = STACK_OBJ; ins->klass = array_type; MONO_ADD_INS (cfg->cbb, ins); cfg->flags |= MONO_CFG_NEEDS_DECOMPOSE; cfg->cbb->needs_decompose = TRUE; /* Needed so mono_emit_load_get_addr () gets called */ mono_get_got_var (cfg); } len_ins = sp [0]; ip += 5; *sp++ = ins; inline_costs += 1; /* * we inline/optimize the initialization sequence if possible. 
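* (initialize_array_data () below recognizes the ldtoken + InitializeArray
* pattern and replaces the runtime call with a direct memcpy from the RVA data)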
* we should also allocate the array as not cleared, since we spend as much time clearing to 0 as initializing * for small sizes open code the memcpy * ensure the rva field is big enough */ if ((cfg->opt & MONO_OPT_INTRINS) && next_ip < end && ip_in_bb (cfg, cfg->cbb, next_ip) && (len_ins->opcode == OP_ICONST) && (data_ptr = initialize_array_data (cfg, method, cfg->compile_aot, next_ip, end, klass, len_ins->inst_c0, &data_size, &field_token, &il_op, &next_ip))) { MonoMethod *memcpy_method = mini_get_memcpy_method (); MonoInst *iargs [3]; int add_reg = alloc_ireg_mp (cfg); EMIT_NEW_BIALU_IMM (cfg, iargs [0], OP_PADD_IMM, add_reg, ins->dreg, MONO_STRUCT_OFFSET (MonoArray, vector)); if (cfg->compile_aot) { EMIT_NEW_AOTCONST_TOKEN (cfg, iargs [1], MONO_PATCH_INFO_RVA, m_class_get_image (method->klass), GPOINTER_TO_UINT(field_token), STACK_PTR, NULL); } else { EMIT_NEW_PCONST (cfg, iargs [1], (char*)data_ptr); } EMIT_NEW_ICONST (cfg, iargs [2], data_size); mono_emit_method_call (cfg, memcpy_method, iargs, NULL); } break; } case MONO_CEE_LDLEN: --sp; if (sp [0]->type != STACK_OBJ) UNVERIFIED; MONO_INST_NEW (cfg, ins, OP_LDLEN); ins->dreg = alloc_preg (cfg); ins->sreg1 = sp [0]->dreg; ins->inst_imm = MONO_STRUCT_OFFSET (MonoArray, max_length); ins->type = STACK_I4; /* This flag will be inherited by the decomposition */ ins->flags |= MONO_INST_FAULT | MONO_INST_INVARIANT_LOAD; MONO_ADD_INS (cfg->cbb, ins); cfg->flags |= MONO_CFG_NEEDS_DECOMPOSE; cfg->cbb->needs_decompose = TRUE; MONO_EMIT_NEW_UNALU (cfg, OP_NOT_NULL, -1, sp [0]->dreg); *sp++ = ins; break; case MONO_CEE_LDELEMA: sp -= 2; if (sp [0]->type != STACK_OBJ) UNVERIFIED; cfg->flags |= MONO_CFG_HAS_LDELEMA; klass = mini_get_class (method, token, generic_context); CHECK_TYPELOAD (klass); /* we need to make sure that this array is exactly the type it needs * to be for correctness. 
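* A preceding readonly. prefix sets the readonly flag and skips this check.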
the wrappers are lax with their usage * so we need to ignore them here */ if (!m_class_is_valuetype (klass) && method->wrapper_type == MONO_WRAPPER_NONE && !readonly) { MonoClass *array_class = mono_class_create_array (klass, 1); mini_emit_check_array_type (cfg, sp [0], array_class); CHECK_TYPELOAD (array_class); } readonly = FALSE; ins = mini_emit_ldelema_1_ins (cfg, klass, sp [0], sp [1], TRUE, FALSE); *sp++ = ins; break; case MONO_CEE_LDELEM: case MONO_CEE_LDELEM_I1: case MONO_CEE_LDELEM_U1: case MONO_CEE_LDELEM_I2: case MONO_CEE_LDELEM_U2: case MONO_CEE_LDELEM_I4: case MONO_CEE_LDELEM_U4: case MONO_CEE_LDELEM_I8: case MONO_CEE_LDELEM_I: case MONO_CEE_LDELEM_R4: case MONO_CEE_LDELEM_R8: case MONO_CEE_LDELEM_REF: { MonoInst *addr; sp -= 2; if (il_op == MONO_CEE_LDELEM) { klass = mini_get_class (method, token, generic_context); CHECK_TYPELOAD (klass); mono_class_init_internal (klass); } else klass = array_access_to_klass (il_op); if (sp [0]->type != STACK_OBJ) UNVERIFIED; cfg->flags |= MONO_CFG_HAS_LDELEMA; if (mini_is_gsharedvt_variable_klass (klass)) { // FIXME-VT: OP_ICONST optimization addr = mini_emit_ldelema_1_ins (cfg, klass, sp [0], sp [1], TRUE, FALSE); EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (klass), addr->dreg, 0); ins->opcode = OP_LOADV_MEMBASE; } else if (sp [1]->opcode == OP_ICONST) { int array_reg = sp [0]->dreg; int index_reg = sp [1]->dreg; int offset = (mono_class_array_element_size (klass) * sp [1]->inst_c0) + MONO_STRUCT_OFFSET (MonoArray, vector); if (SIZEOF_REGISTER == 8 && COMPILE_LLVM (cfg)) MONO_EMIT_NEW_UNALU (cfg, OP_ZEXT_I4, index_reg, index_reg); MONO_EMIT_BOUNDS_CHECK (cfg, array_reg, MonoArray, max_length, index_reg); EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (klass), array_reg, offset); } else { addr = mini_emit_ldelema_1_ins (cfg, klass, sp [0], sp [1], TRUE, FALSE); EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (klass), addr->dreg, 0); } *sp++ = ins; break; } case MONO_CEE_STELEM_I: case MONO_CEE_STELEM_I1: case MONO_CEE_STELEM_I2: case MONO_CEE_STELEM_I4: case MONO_CEE_STELEM_I8: case MONO_CEE_STELEM_R4: case MONO_CEE_STELEM_R8: case MONO_CEE_STELEM_REF: case MONO_CEE_STELEM: { sp -= 3; cfg->flags |= MONO_CFG_HAS_LDELEMA; if (il_op == MONO_CEE_STELEM) { klass = mini_get_class (method, token, generic_context); CHECK_TYPELOAD (klass); mono_class_init_internal (klass); } else klass = array_access_to_klass (il_op); if (sp [0]->type != STACK_OBJ) UNVERIFIED; sp [2] = convert_value (cfg, m_class_get_byval_arg (klass), sp [2]); mini_emit_array_store (cfg, klass, sp, TRUE); inline_costs += 1; break; } case MONO_CEE_CKFINITE: { --sp; if (cfg->llvm_only) { MonoInst *iargs [1]; iargs [0] = sp [0]; *sp++ = mono_emit_jit_icall (cfg, mono_ckfinite, iargs); } else { sp [0] = convert_value (cfg, m_class_get_byval_arg (mono_defaults.double_class), sp [0]); MONO_INST_NEW (cfg, ins, OP_CKFINITE); ins->sreg1 = sp [0]->dreg; ins->dreg = alloc_freg (cfg); ins->type = STACK_R8; MONO_ADD_INS (cfg->cbb, ins); *sp++ = mono_decompose_opcode (cfg, ins); } break; } case MONO_CEE_REFANYVAL: { MonoInst *src_var, *src; int klass_reg = alloc_preg (cfg); int dreg = alloc_preg (cfg); GSHAREDVT_FAILURE (il_op); MONO_INST_NEW (cfg, ins, il_op); --sp; klass = mini_get_class (method, token, generic_context); CHECK_TYPELOAD (klass); context_used = mini_class_check_context_used (cfg, klass); // FIXME: src_var = get_vreg_to_inst (cfg, sp [0]->dreg); if (!src_var) src_var = mono_compile_create_var_for_vreg (cfg, m_class_get_byval_arg 
(mono_defaults.typed_reference_class), OP_LOCAL, sp [0]->dreg); EMIT_NEW_VARLOADA (cfg, src, src_var, src_var->inst_vtype); MONO_EMIT_NEW_LOAD_MEMBASE (cfg, klass_reg, src->dreg, MONO_STRUCT_OFFSET (MonoTypedRef, klass)); if (context_used) { MonoInst *klass_ins; klass_ins = mini_emit_get_rgctx_klass (cfg, context_used, klass, MONO_RGCTX_INFO_KLASS); // FIXME: MONO_EMIT_NEW_BIALU (cfg, OP_COMPARE, -1, klass_reg, klass_ins->dreg); MONO_EMIT_NEW_COND_EXC (cfg, NE_UN, "InvalidCastException"); } else { mini_emit_class_check (cfg, klass_reg, klass); } EMIT_NEW_LOAD_MEMBASE (cfg, ins, OP_LOAD_MEMBASE, dreg, src->dreg, MONO_STRUCT_OFFSET (MonoTypedRef, value)); ins->type = STACK_MP; ins->klass = klass; *sp++ = ins; break; } case MONO_CEE_MKREFANY: { MonoInst *loc, *addr; GSHAREDVT_FAILURE (il_op); MONO_INST_NEW (cfg, ins, il_op); --sp; klass = mini_get_class (method, token, generic_context); CHECK_TYPELOAD (klass); context_used = mini_class_check_context_used (cfg, klass); loc = mono_compile_create_var (cfg, m_class_get_byval_arg (mono_defaults.typed_reference_class), OP_LOCAL); EMIT_NEW_TEMPLOADA (cfg, addr, loc->inst_c0); MonoInst *const_ins = mini_emit_get_rgctx_klass (cfg, context_used, klass, MONO_RGCTX_INFO_KLASS); int type_reg = alloc_preg (cfg); MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STOREP_MEMBASE_REG, addr->dreg, MONO_STRUCT_OFFSET (MonoTypedRef, klass), const_ins->dreg); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_ADD_IMM, type_reg, const_ins->dreg, m_class_offsetof_byval_arg ()); MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STOREP_MEMBASE_REG, addr->dreg, MONO_STRUCT_OFFSET (MonoTypedRef, type), type_reg); MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STOREP_MEMBASE_REG, addr->dreg, MONO_STRUCT_OFFSET (MonoTypedRef, value), sp [0]->dreg); EMIT_NEW_TEMPLOAD (cfg, ins, loc->inst_c0); ins->type = STACK_VTYPE; ins->klass = mono_defaults.typed_reference_class; *sp++ = ins; break; } case MONO_CEE_LDTOKEN: { gpointer handle; MonoClass *handle_class; if (method->wrapper_type == MONO_WRAPPER_DYNAMIC_METHOD || method->wrapper_type == MONO_WRAPPER_SYNCHRONIZED) { handle = mono_method_get_wrapper_data (method, n); handle_class = (MonoClass *)mono_method_get_wrapper_data (method, n + 1); if (handle_class == mono_defaults.typehandle_class) handle = m_class_get_byval_arg ((MonoClass*)handle); } else { handle = mono_ldtoken_checked (image, n, &handle_class, generic_context, cfg->error); CHECK_CFG_ERROR; } if (!handle) LOAD_ERROR; mono_class_init_internal (handle_class); if (cfg->gshared) { if (mono_metadata_token_table (n) == MONO_TABLE_TYPEDEF || mono_metadata_token_table (n) == MONO_TABLE_TYPEREF) { /* This case handles ldtoken of an open type, like for typeof(Gen<>). 
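* Open types are never instantiated at runtime, so no rgctx lookup is needed.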
*/ context_used = 0; } else if (handle_class == mono_defaults.typehandle_class) { context_used = mini_class_check_context_used (cfg, mono_class_from_mono_type_internal ((MonoType *)handle)); } else if (handle_class == mono_defaults.fieldhandle_class) context_used = mini_class_check_context_used (cfg, m_field_get_parent (((MonoClassField*)handle))); else if (handle_class == mono_defaults.methodhandle_class) context_used = mini_method_check_context_used (cfg, (MonoMethod *)handle); else g_assert_not_reached (); } { if ((next_ip + 4 < end) && ip_in_bb (cfg, cfg->cbb, next_ip) && ((next_ip [0] == CEE_CALL) || (next_ip [0] == CEE_CALLVIRT)) && (cmethod = mini_get_method (cfg, method, read32 (next_ip + 1), NULL, generic_context)) && (cmethod->klass == mono_defaults.systemtype_class) && (strcmp (cmethod->name, "GetTypeFromHandle") == 0)) { MonoClass *tclass = mono_class_from_mono_type_internal ((MonoType *)handle); mono_class_init_internal (tclass); // Optimize to true/false if next instruction is `call instance bool Type::get_IsValueType()` guchar *is_vt_ip; guint32 is_vt_token; if ((is_vt_ip = il_read_call (next_ip + 5, end, &is_vt_token)) && ip_in_bb (cfg, cfg->cbb, is_vt_ip)) { MonoMethod *is_vt_method = mini_get_method (cfg, method, is_vt_token, NULL, generic_context); if (is_vt_method->klass == mono_defaults.systemtype_class && !mini_is_gsharedvt_variable_klass (tclass) && !mono_class_is_open_constructed_type (m_class_get_byval_arg (tclass)) && !strcmp ("get_IsValueType", is_vt_method->name)) { next_ip = is_vt_ip; EMIT_NEW_ICONST (cfg, ins, m_class_is_valuetype (tclass) ? 1 : 0); ins->type = STACK_I4; *sp++ = ins; break; } } if (context_used) { MONO_INST_NEW (cfg, ins, OP_RTTYPE); ins->dreg = alloc_ireg_ref (cfg); ins->inst_p0 = tclass; ins->type = STACK_OBJ; MONO_ADD_INS (cfg->cbb, ins); cfg->flags |= MONO_CFG_NEEDS_DECOMPOSE; cfg->cbb->needs_decompose = TRUE; } else if (cfg->compile_aot) { if (method->wrapper_type) { error_init (error); //got to do it since there are multiple conditionals below if (mono_class_get_checked (m_class_get_image (tclass), m_class_get_type_token (tclass), error) == tclass && !generic_context) { /* Special case for static synchronized wrappers */ EMIT_NEW_TYPE_FROM_HANDLE_CONST (cfg, ins, m_class_get_image (tclass), m_class_get_type_token (tclass), generic_context); } else { mono_error_cleanup (error); /* FIXME don't swallow the error */ /* FIXME: n is not a normal token */ DISABLE_AOT (cfg); EMIT_NEW_PCONST (cfg, ins, NULL); } } else { EMIT_NEW_TYPE_FROM_HANDLE_CONST (cfg, ins, image, n, generic_context); } } else { MonoReflectionType *rt = mono_type_get_object_checked ((MonoType *)handle, cfg->error); CHECK_CFG_ERROR; EMIT_NEW_PCONST (cfg, ins, rt); } ins->type = STACK_OBJ; ins->klass = mono_defaults.runtimetype_class; il_op = (MonoOpcodeEnum)next_ip [0]; next_ip += 5; } else { MonoInst *addr, *vtvar; vtvar = mono_compile_create_var (cfg, m_class_get_byval_arg (handle_class), OP_LOCAL); if (context_used) { if (handle_class == mono_defaults.typehandle_class) { ins = mini_emit_get_rgctx_klass (cfg, context_used, mono_class_from_mono_type_internal ((MonoType *)handle), MONO_RGCTX_INFO_TYPE); } else if (handle_class == mono_defaults.methodhandle_class) { ins = emit_get_rgctx_method (cfg, context_used, (MonoMethod *)handle, MONO_RGCTX_INFO_METHOD); } else if (handle_class == mono_defaults.fieldhandle_class) { ins = emit_get_rgctx_field (cfg, context_used, (MonoClassField *)handle, MONO_RGCTX_INFO_CLASS_FIELD); } else { g_assert_not_reached (); } } else if 
(cfg->compile_aot) { EMIT_NEW_LDTOKENCONST (cfg, ins, image, n, generic_context); } else { EMIT_NEW_PCONST (cfg, ins, handle); } EMIT_NEW_TEMPLOADA (cfg, addr, vtvar->inst_c0); MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, addr->dreg, 0, ins->dreg); EMIT_NEW_TEMPLOAD (cfg, ins, vtvar->inst_c0); } } *sp++ = ins; break; } case MONO_CEE_THROW: if (sp [-1]->type != STACK_OBJ) UNVERIFIED; MONO_INST_NEW (cfg, ins, OP_THROW); --sp; ins->sreg1 = sp [0]->dreg; cfg->cbb->out_of_line = TRUE; MONO_ADD_INS (cfg->cbb, ins); MONO_INST_NEW (cfg, ins, OP_NOT_REACHED); MONO_ADD_INS (cfg->cbb, ins); sp = stack_start; link_bblock (cfg, cfg->cbb, end_bblock); start_new_bblock = 1; /* This can complicate code generation for llvm since the return value might not be defined */ if (COMPILE_LLVM (cfg)) INLINE_FAILURE ("throw"); break; case MONO_CEE_ENDFINALLY: if (!ip_in_finally_clause (cfg, ip - header->code)) UNVERIFIED; /* mono_save_seq_point_info () depends on this */ if (sp != stack_start) emit_seq_point (cfg, method, ip, FALSE, FALSE); MONO_INST_NEW (cfg, ins, OP_ENDFINALLY); MONO_ADD_INS (cfg->cbb, ins); start_new_bblock = 1; ins_has_side_effect = FALSE; /* * Control will leave the method so empty the stack, otherwise * the next basic block will start with a nonempty stack. */ while (sp != stack_start) { sp--; } break; case MONO_CEE_LEAVE: case MONO_CEE_LEAVE_S: { GList *handlers; /* empty the stack */ g_assert (sp >= stack_start); sp = stack_start; /* * If this leave statement is in a catch block, check for a * pending exception, and rethrow it if necessary. * We avoid doing this in runtime invoke wrappers, since those are called * by native code which expects the wrapper to catch all exceptions. */ for (i = 0; i < header->num_clauses; ++i) { MonoExceptionClause *clause = &header->clauses [i]; /* * Use <= in the final comparison to handle clauses with multiple * leave statements, like in bug #78024. * The ordering of the exception clauses guarantees that we find the * innermost clause. */ if (MONO_OFFSET_IN_HANDLER (clause, ip - header->code) && (clause->flags == MONO_EXCEPTION_CLAUSE_NONE) && (ip - header->code + ((il_op == MONO_CEE_LEAVE) ? 5 : 2)) <= (clause->handler_offset + clause->handler_len) && method->wrapper_type != MONO_WRAPPER_RUNTIME_INVOKE) { MonoInst *exc_ins; MonoBasicBlock *dont_throw; /* MonoInst *load; NEW_TEMPLOAD (cfg, load, mono_find_exvar_for_offset (cfg, clause->handler_offset)->inst_c0); */ exc_ins = mono_emit_jit_icall (cfg, mono_thread_get_undeniable_exception, NULL); NEW_BBLOCK (cfg, dont_throw); /* * Currently, we always rethrow the abort exception, despite the * fact that this is not correct. See thread6.cs for an example. * But propagating the abort exception is more important than * getting the semantics right. */ MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, exc_ins->dreg, 0); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_PBEQ, dont_throw); MONO_EMIT_NEW_UNALU (cfg, OP_THROW, -1, exc_ins->dreg); MONO_START_BB (cfg, dont_throw); } } #ifdef ENABLE_LLVM cfg->cbb->try_end = (intptr_t)(ip - header->code); #endif if ((handlers = mono_find_leave_clauses (cfg, ip, target))) { GList *tmp; /* * For each finally clause that we exit we need to invoke the finally block. * After each invocation we need to add try holes for all the clauses that * we already exited.
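 * Roughly, a leave crossing two finally regions lowers to: OP_CALL_HANDLER <inner finally>; OP_CALL_HANDLER <outer finally>; OP_BR <target>, with a clause hole recorded around each handler invocation.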
*/ for (tmp = handlers; tmp; tmp = tmp->next) { MonoLeaveClause *leave = (MonoLeaveClause *) tmp->data; MonoExceptionClause *clause = leave->clause; if (clause->flags != MONO_EXCEPTION_CLAUSE_FINALLY) continue; MonoInst *abort_exc = (MonoInst *)mono_find_exvar_for_offset (cfg, clause->handler_offset); MonoBasicBlock *dont_throw; /* * Emit instrumentation code before linking the basic blocks below as this * will alter cfg->cbb. */ mini_profiler_emit_call_finally (cfg, header, ip, leave->index, clause); tblock = cfg->cil_offset_to_bb [clause->handler_offset]; g_assert (tblock); link_bblock (cfg, cfg->cbb, tblock); MONO_EMIT_NEW_PCONST (cfg, abort_exc->dreg, 0); MONO_INST_NEW (cfg, ins, OP_CALL_HANDLER); ins->inst_target_bb = tblock; ins->inst_eh_blocks = tmp; MONO_ADD_INS (cfg->cbb, ins); cfg->cbb->has_call_handler = 1; /* Throw exception if exvar is set */ /* FIXME Do we need this for calls from catch/filter ? */ NEW_BBLOCK (cfg, dont_throw); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, abort_exc->dreg, 0); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_PBEQ, dont_throw); mono_emit_jit_icall (cfg, ves_icall_thread_finish_async_abort, NULL); cfg->cbb->clause_holes = tmp; MONO_START_BB (cfg, dont_throw); cfg->cbb->clause_holes = tmp; if (COMPILE_LLVM (cfg)) { MonoBasicBlock *target_bb; /* * Link the finally bblock with the target, since it will * conceptually branch there. */ GET_BBLOCK (cfg, tblock, cfg->cil_start + clause->handler_offset + clause->handler_len - 1); GET_BBLOCK (cfg, target_bb, target); link_bblock (cfg, tblock, target_bb); } } } MONO_INST_NEW (cfg, ins, OP_BR); MONO_ADD_INS (cfg->cbb, ins); GET_BBLOCK (cfg, tblock, target); link_bblock (cfg, cfg->cbb, tblock); ins->inst_target_bb = tblock; start_new_bblock = 1; break; } /* * Mono specific opcodes */ case MONO_CEE_MONO_ICALL: { g_assert (method->wrapper_type != MONO_WRAPPER_NONE); const MonoJitICallId jit_icall_id = (MonoJitICallId)token; MonoJitICallInfo * const info = mono_find_jit_icall_info (jit_icall_id); CHECK_STACK (info->sig->param_count); sp -= info->sig->param_count; if (token == MONO_JIT_ICALL_mono_threads_attach_coop) { MonoInst *addr; MonoBasicBlock *next_bb; if (cfg->compile_aot) { /* * This is called on unattached threads, so it cannot go through the trampoline * infrastructure. Use an indirect call through a got slot initialized at load time * instead. */ EMIT_NEW_AOTCONST (cfg, addr, MONO_PATCH_INFO_JIT_ICALL_ADDR_NOCALL, GUINT_TO_POINTER (jit_icall_id)); ins = mini_emit_calli (cfg, info->sig, sp, addr, NULL, NULL); } else { ins = mono_emit_jit_icall_id (cfg, jit_icall_id, sp); } /* * Parts of the initlocals code needs to come after this, since it might call methods like memset. * Also profiling needs to be after attach. 
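 * That is why init_localsbb2 is pointed at the bblock containing the attach call and a fresh bblock is started right after it below.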
*/ init_localsbb2 = cfg->cbb; NEW_BBLOCK (cfg, next_bb); MONO_START_BB (cfg, next_bb); } else { if (token == MONO_JIT_ICALL_mono_threads_detach_coop) { /* can't emit profiling code after a detach, so emit it now */ mini_profiler_emit_leave (cfg, NULL); detached_before_ret = TRUE; } ins = mono_emit_jit_icall_id (cfg, jit_icall_id, sp); } if (!MONO_TYPE_IS_VOID (info->sig->ret)) *sp++ = ins; inline_costs += CALL_COST * MIN(10, num_calls++); break; } MonoJumpInfoType ldptr_type; case MONO_CEE_MONO_LDPTR_CARD_TABLE: ldptr_type = MONO_PATCH_INFO_GC_CARD_TABLE_ADDR; goto mono_ldptr; case MONO_CEE_MONO_LDPTR_NURSERY_START: ldptr_type = MONO_PATCH_INFO_GC_NURSERY_START; goto mono_ldptr; case MONO_CEE_MONO_LDPTR_NURSERY_BITS: ldptr_type = MONO_PATCH_INFO_GC_NURSERY_BITS; goto mono_ldptr; case MONO_CEE_MONO_LDPTR_INT_REQ_FLAG: ldptr_type = MONO_PATCH_INFO_INTERRUPTION_REQUEST_FLAG; goto mono_ldptr; case MONO_CEE_MONO_LDPTR_PROFILER_ALLOCATION_COUNT: ldptr_type = MONO_PATCH_INFO_PROFILER_ALLOCATION_COUNT; mono_ldptr: g_assert (method->wrapper_type != MONO_WRAPPER_NONE); ins = mini_emit_runtime_constant (cfg, ldptr_type, NULL); *sp++ = ins; inline_costs += CALL_COST * MIN(10, num_calls++); break; case MONO_CEE_MONO_LDPTR: { gpointer ptr; g_assert (method->wrapper_type != MONO_WRAPPER_NONE); ptr = mono_method_get_wrapper_data (method, token); EMIT_NEW_PCONST (cfg, ins, ptr); *sp++ = ins; inline_costs += CALL_COST * MIN(10, num_calls++); /* Can't embed random pointers into AOT code */ DISABLE_AOT (cfg); break; } case MONO_CEE_MONO_JIT_ICALL_ADDR: g_assert (method->wrapper_type != MONO_WRAPPER_NONE); EMIT_NEW_JIT_ICALL_ADDRCONST (cfg, ins, GUINT_TO_POINTER (token)); *sp++ = ins; inline_costs += CALL_COST * MIN(10, num_calls++); break; case MONO_CEE_MONO_ICALL_ADDR: { MonoMethod *cmethod; gpointer ptr; g_assert (method->wrapper_type != MONO_WRAPPER_NONE); cmethod = (MonoMethod *)mono_method_get_wrapper_data (method, token); if (cfg->compile_aot) { if (cfg->direct_pinvoke && ip + 6 < end && (ip [6] == CEE_POP)) { /* * This is generated by emit_native_wrapper () to resolve the pinvoke address * before the call, it's not needed when using direct pinvoke. * This is not an optimization, but it's used to avoid looking up pinvokes * on platforms which don't support dlopen ().
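 * With direct pinvoke the callee address is known at AOT compile time, so a NULL placeholder is emitted instead and consumed by the following CEE_POP.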
*/ EMIT_NEW_PCONST (cfg, ins, NULL); } else { EMIT_NEW_AOTCONST (cfg, ins, MONO_PATCH_INFO_ICALL_ADDR, cmethod); } } else { ptr = mono_lookup_internal_call (cmethod); g_assert (ptr); EMIT_NEW_PCONST (cfg, ins, ptr); } *sp++ = ins; break; } case MONO_CEE_MONO_VTADDR: { g_assert (method->wrapper_type != MONO_WRAPPER_NONE); MonoInst *src_var, *src; --sp; // FIXME: src_var = get_vreg_to_inst (cfg, sp [0]->dreg); EMIT_NEW_VARLOADA ((cfg), (src), src_var, src_var->inst_vtype); *sp++ = src; break; } case MONO_CEE_MONO_NEWOBJ: { g_assert (method->wrapper_type != MONO_WRAPPER_NONE); MonoInst *iargs [2]; klass = (MonoClass *)mono_method_get_wrapper_data (method, token); mono_class_init_internal (klass); NEW_CLASSCONST (cfg, iargs [0], klass); MONO_ADD_INS (cfg->cbb, iargs [0]); *sp++ = mono_emit_jit_icall (cfg, ves_icall_object_new, iargs); inline_costs += CALL_COST * MIN(10, num_calls++); break; } case MONO_CEE_MONO_OBJADDR: g_assert (method->wrapper_type != MONO_WRAPPER_NONE); --sp; MONO_INST_NEW (cfg, ins, OP_MOVE); ins->dreg = alloc_ireg_mp (cfg); ins->sreg1 = sp [0]->dreg; ins->type = STACK_MP; MONO_ADD_INS (cfg->cbb, ins); *sp++ = ins; break; case MONO_CEE_MONO_LDNATIVEOBJ: /* * Similar to LDOBJ, but instead load the unmanaged * representation of the vtype to the stack. */ g_assert (method->wrapper_type != MONO_WRAPPER_NONE); --sp; klass = (MonoClass *)mono_method_get_wrapper_data (method, token); g_assert (m_class_is_valuetype (klass)); mono_class_init_internal (klass); { MonoInst *src, *dest, *temp; src = sp [0]; temp = mono_compile_create_var (cfg, m_class_get_byval_arg (klass), OP_LOCAL); temp->backend.is_pinvoke = 1; EMIT_NEW_TEMPLOADA (cfg, dest, temp->inst_c0); mini_emit_memory_copy (cfg, dest, src, klass, TRUE, 0); EMIT_NEW_TEMPLOAD (cfg, dest, temp->inst_c0); dest->type = STACK_VTYPE; dest->klass = klass; *sp ++ = dest; } break; case MONO_CEE_MONO_RETOBJ: { /* * Same as RET, but return the native representation of a vtype * to the caller. 
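 * I.e. the value is copied into the return buffer (or the local ret var) with its unmanaged layout via mini_emit_memory_copy () below.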
*/ g_assert (method->wrapper_type != MONO_WRAPPER_NONE); g_assert (cfg->ret); g_assert (mono_method_signature_internal (method)->pinvoke); --sp; klass = (MonoClass *)mono_method_get_wrapper_data (method, token); if (!cfg->vret_addr) { g_assert (cfg->ret_var_is_local); EMIT_NEW_VARLOADA (cfg, ins, cfg->ret, cfg->ret->inst_vtype); } else { EMIT_NEW_RETLOADA (cfg, ins); } mini_emit_memory_copy (cfg, ins, sp [0], klass, TRUE, 0); if (sp != stack_start) UNVERIFIED; if (!detached_before_ret) mini_profiler_emit_leave (cfg, sp [0]); MONO_INST_NEW (cfg, ins, OP_BR); ins->inst_target_bb = end_bblock; MONO_ADD_INS (cfg->cbb, ins); link_bblock (cfg, cfg->cbb, end_bblock); start_new_bblock = 1; break; } case MONO_CEE_MONO_SAVE_LMF: case MONO_CEE_MONO_RESTORE_LMF: g_assert (method->wrapper_type != MONO_WRAPPER_NONE); break; case MONO_CEE_MONO_CLASSCONST: g_assert (method->wrapper_type != MONO_WRAPPER_NONE); EMIT_NEW_CLASSCONST (cfg, ins, mono_method_get_wrapper_data (method, token)); *sp++ = ins; inline_costs += CALL_COST * MIN(10, num_calls++); break; case MONO_CEE_MONO_METHODCONST: g_assert (method->wrapper_type != MONO_WRAPPER_NONE); EMIT_NEW_METHODCONST (cfg, ins, mono_method_get_wrapper_data (method, token)); *sp++ = ins; break; case MONO_CEE_MONO_PINVOKE_ADDR_CACHE: { g_assert (method->wrapper_type != MONO_WRAPPER_NONE); MonoMethod *pinvoke_method = (MonoMethod*)mono_method_get_wrapper_data (method, token); /* This is a memory slot used by the wrapper */ if (cfg->compile_aot) { EMIT_NEW_AOTCONST (cfg, ins, MONO_PATCH_INFO_METHOD_PINVOKE_ADDR_CACHE, pinvoke_method); } else { gpointer addr = mono_mem_manager_alloc0 (cfg->mem_manager, sizeof (gpointer)); EMIT_NEW_PCONST (cfg, ins, addr); } *sp++ = ins; break; } case MONO_CEE_MONO_NOT_TAKEN: g_assert (method->wrapper_type != MONO_WRAPPER_NONE); cfg->cbb->out_of_line = TRUE; break; case MONO_CEE_MONO_TLS: { MonoTlsKey key; g_assert (method->wrapper_type != MONO_WRAPPER_NONE); key = (MonoTlsKey)n; g_assert (key < TLS_KEY_NUM); ins = mono_create_tls_get (cfg, key); g_assert (ins); ins->type = STACK_PTR; *sp++ = ins; break; } case MONO_CEE_MONO_DYN_CALL: { MonoCallInst *call; /* It would be easier to call a trampoline, but that would put an * extra frame on the stack, confusing exception handling. So * implement it inline using an opcode for now. 
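 * The two stack values become sreg1/sreg2 of OP_DYN_CALL, and the backend-reported dyn_call_param_area is reserved in the param area.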
g_assert (method->wrapper_type != MONO_WRAPPER_NONE); if (!cfg->dyn_call_var) { cfg->dyn_call_var = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL); /* prevent it from being register allocated */ cfg->dyn_call_var->flags |= MONO_INST_VOLATILE; } /* Has to use a call inst since local regalloc expects it */ MONO_INST_NEW_CALL (cfg, call, OP_DYN_CALL); ins = (MonoInst*)call; sp -= 2; ins->sreg1 = sp [0]->dreg; ins->sreg2 = sp [1]->dreg; MONO_ADD_INS (cfg->cbb, ins); cfg->param_area = MAX (cfg->param_area, cfg->backend->dyn_call_param_area); /* OP_DYN_CALL might need to allocate a dynamically sized param area */ cfg->flags |= MONO_CFG_HAS_ALLOCA; inline_costs += CALL_COST * MIN(10, num_calls++); break; } case MONO_CEE_MONO_MEMORY_BARRIER: { g_assert (method->wrapper_type != MONO_WRAPPER_NONE); mini_emit_memory_barrier (cfg, (int)n); break; } case MONO_CEE_MONO_ATOMIC_STORE_I4: { g_assert (method->wrapper_type != MONO_WRAPPER_NONE); g_assert (mono_arch_opcode_supported (OP_ATOMIC_STORE_I4)); sp -= 2; MONO_INST_NEW (cfg, ins, OP_ATOMIC_STORE_I4); ins->dreg = sp [0]->dreg; ins->sreg1 = sp [1]->dreg; ins->backend.memory_barrier_kind = (int)n; MONO_ADD_INS (cfg->cbb, ins); break; } case MONO_CEE_MONO_LD_DELEGATE_METHOD_PTR: { CHECK_STACK (1); --sp; dreg = alloc_preg (cfg); EMIT_NEW_LOAD_MEMBASE (cfg, ins, OP_LOAD_MEMBASE, dreg, sp [0]->dreg, MONO_STRUCT_OFFSET (MonoDelegate, method_ptr)); *sp++ = ins; break; } case MONO_CEE_MONO_CALLI_EXTRA_ARG: { MonoInst *addr; MonoMethodSignature *fsig; MonoInst *arg; /* * This is the same as CEE_CALLI, but passes an additional argument * to the called method in llvmonly mode. * This is only used by delegate invoke wrappers to call the * actual delegate method. */ g_assert (method->wrapper_type == MONO_WRAPPER_DELEGATE_INVOKE); ins = NULL; cmethod = NULL; CHECK_STACK (1); --sp; addr = *sp; fsig = mini_get_signature (method, token, generic_context, cfg->error); CHECK_CFG_ERROR; if (cfg->llvm_only) cfg->signatures = g_slist_prepend_mempool (cfg->mempool, cfg->signatures, fsig); n = fsig->param_count + fsig->hasthis + 1; CHECK_STACK (n); sp -= n; arg = sp [n - 1]; if (cfg->llvm_only) { /* * The lowest bit of 'arg' determines whether the callee uses the gsharedvt * cconv. This is set by mono_init_delegate (). */ if (cfg->gsharedvt && mini_is_gsharedvt_variable_signature (fsig)) { MonoInst *callee = addr; MonoInst *call, *localloc_ins; MonoBasicBlock *is_gsharedvt_bb, *end_bb; int low_bit_reg = alloc_preg (cfg); NEW_BBLOCK (cfg, is_gsharedvt_bb); NEW_BBLOCK (cfg, end_bb); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_PAND_IMM, low_bit_reg, arg->dreg, 1); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, low_bit_reg, 0); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_PBNE_UN, is_gsharedvt_bb); /* Normal case: callee uses a normal cconv, have to add an out wrapper */ addr = emit_get_rgctx_sig (cfg, context_used, fsig, MONO_RGCTX_INFO_SIG_GSHAREDVT_OUT_TRAMPOLINE_CALLI); /* * ADDR points to a gsharedvt-out wrapper, have to pass <callee, arg> as an extra arg.
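 * The <callee, arg> pair is materialized with OP_LOCALLOC_IMM (2 * sizeof (void*)) right below and its address becomes the extra argument.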
*/ MONO_INST_NEW (cfg, ins, OP_LOCALLOC_IMM); ins->dreg = alloc_preg (cfg); ins->inst_imm = 2 * TARGET_SIZEOF_VOID_P; MONO_ADD_INS (cfg->cbb, ins); localloc_ins = ins; cfg->flags |= MONO_CFG_HAS_ALLOCA; MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, localloc_ins->dreg, 0, callee->dreg); MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, localloc_ins->dreg, TARGET_SIZEOF_VOID_P, arg->dreg); call = mini_emit_extra_arg_calli (cfg, fsig, sp, localloc_ins->dreg, addr); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb); /* Gsharedvt case: callee uses a gsharedvt cconv, no conversion is needed */ MONO_START_BB (cfg, is_gsharedvt_bb); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_PXOR_IMM, arg->dreg, arg->dreg, 1); ins = mini_emit_extra_arg_calli (cfg, fsig, sp, arg->dreg, callee); ins->dreg = call->dreg; MONO_START_BB (cfg, end_bb); } else { /* Caller uses a normal calling conv */ MonoInst *callee = addr; MonoInst *call, *localloc_ins; MonoBasicBlock *is_gsharedvt_bb, *end_bb; int low_bit_reg = alloc_preg (cfg); NEW_BBLOCK (cfg, is_gsharedvt_bb); NEW_BBLOCK (cfg, end_bb); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_PAND_IMM, low_bit_reg, arg->dreg, 1); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, low_bit_reg, 0); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_PBNE_UN, is_gsharedvt_bb); /* Normal case: callee uses a normal cconv, no conversion is needed */ call = mini_emit_extra_arg_calli (cfg, fsig, sp, arg->dreg, callee); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb); /* Gsharedvt case: callee uses a gsharedvt cconv, have to add an in wrapper */ MONO_START_BB (cfg, is_gsharedvt_bb); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_PXOR_IMM, arg->dreg, arg->dreg, 1); NEW_AOTCONST (cfg, addr, MONO_PATCH_INFO_GSHAREDVT_IN_WRAPPER, fsig); MONO_ADD_INS (cfg->cbb, addr); /* * ADDR points to a gsharedvt-in wrapper, have to pass <callee, arg> as an extra arg. */ MONO_INST_NEW (cfg, ins, OP_LOCALLOC_IMM); ins->dreg = alloc_preg (cfg); ins->inst_imm = 2 * TARGET_SIZEOF_VOID_P; MONO_ADD_INS (cfg->cbb, ins); localloc_ins = ins; cfg->flags |= MONO_CFG_HAS_ALLOCA; MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, localloc_ins->dreg, 0, callee->dreg); MONO_EMIT_NEW_STORE_MEMBASE (cfg, OP_STORE_MEMBASE_REG, localloc_ins->dreg, TARGET_SIZEOF_VOID_P, arg->dreg); ins = mini_emit_extra_arg_calli (cfg, fsig, sp, localloc_ins->dreg, addr); ins->dreg = call->dreg; MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb); MONO_START_BB (cfg, end_bb); } } else { /* Same as CEE_CALLI */ if (cfg->gsharedvt && mini_is_gsharedvt_signature (fsig)) { /* * We pass the address to the gsharedvt trampoline in the rgctx reg */ MonoInst *callee = addr; addr = emit_get_rgctx_sig (cfg, context_used, fsig, MONO_RGCTX_INFO_SIG_GSHAREDVT_OUT_TRAMPOLINE_CALLI); ins = (MonoInst*)mini_emit_calli (cfg, fsig, sp, addr, NULL, callee); } else { ins = (MonoInst*)mini_emit_calli (cfg, fsig, sp, addr, NULL, NULL); } } if (!MONO_TYPE_IS_VOID (fsig->ret)) *sp++ = mono_emit_widen_call_res (cfg, ins, fsig); CHECK_CFG_EXCEPTION; ins_flag = 0; constrained_class = NULL; break; } case MONO_CEE_MONO_LDDOMAIN: { MonoDomain *domain = mono_get_root_domain (); g_assert (method->wrapper_type != MONO_WRAPPER_NONE); EMIT_NEW_PCONST (cfg, ins, cfg->compile_aot ? NULL : domain); *sp++ = ins; break; } case MONO_CEE_MONO_SAVE_LAST_ERROR: g_assert (method->wrapper_type != MONO_WRAPPER_NONE); // Just an IL prefix, setting this flag, picked up by call instructions. 
save_last_error = TRUE; break; case MONO_CEE_MONO_GET_RGCTX_ARG: g_assert (method->wrapper_type != MONO_WRAPPER_NONE); mono_create_rgctx_var (cfg); MONO_INST_NEW (cfg, ins, OP_MOVE); ins->dreg = alloc_dreg (cfg, STACK_PTR); ins->sreg1 = cfg->rgctx_var->dreg; ins->type = STACK_PTR; MONO_ADD_INS (cfg->cbb, ins); *sp++ = ins; break; case MONO_CEE_MONO_GET_SP: { /* Used by COOP only, so this is good enough */ MonoInst *var = mono_compile_create_var (cfg, mono_get_int_type (), OP_LOCAL); EMIT_NEW_VARLOADA (cfg, ins, var, NULL); *sp++ = ins; break; } case MONO_CEE_MONO_REMAP_OVF_EXC: /* Remap the exception thrown by the next _OVF opcode */ g_assert (method->wrapper_type != MONO_WRAPPER_NONE); ovf_exc = (const char*)mono_method_get_wrapper_data (method, token); break; case MONO_CEE_ARGLIST: { /* somewhat similar to LDTOKEN */ MonoInst *addr, *vtvar; vtvar = mono_compile_create_var (cfg, m_class_get_byval_arg (mono_defaults.argumenthandle_class), OP_LOCAL); EMIT_NEW_TEMPLOADA (cfg, addr, vtvar->inst_c0); EMIT_NEW_UNALU (cfg, ins, OP_ARGLIST, -1, addr->dreg); EMIT_NEW_TEMPLOAD (cfg, ins, vtvar->inst_c0); ins->type = STACK_VTYPE; ins->klass = mono_defaults.argumenthandle_class; *sp++ = ins; break; } case MONO_CEE_CEQ: case MONO_CEE_CGT: case MONO_CEE_CGT_UN: case MONO_CEE_CLT: case MONO_CEE_CLT_UN: { MonoInst *cmp, *arg1, *arg2; sp -= 2; arg1 = sp [0]; arg2 = sp [1]; /* * The following transforms: * CEE_CEQ into OP_CEQ * CEE_CGT into OP_CGT * CEE_CGT_UN into OP_CGT_UN * CEE_CLT into OP_CLT * CEE_CLT_UN into OP_CLT_UN */ MONO_INST_NEW (cfg, cmp, (OP_CEQ - CEE_CEQ) + ip [1]); MONO_INST_NEW (cfg, ins, cmp->opcode); cmp->sreg1 = arg1->dreg; cmp->sreg2 = arg2->dreg; type_from_op (cfg, cmp, arg1, arg2); CHECK_TYPE (cmp); add_widen_op (cfg, cmp, &arg1, &arg2); if ((arg1->type == STACK_I8) || ((TARGET_SIZEOF_VOID_P == 8) && ((arg1->type == STACK_PTR) || (arg1->type == STACK_OBJ) || (arg1->type == STACK_MP)))) cmp->opcode = OP_LCOMPARE; else if (arg1->type == STACK_R4) cmp->opcode = OP_RCOMPARE; else if (arg1->type == STACK_R8) cmp->opcode = OP_FCOMPARE; else cmp->opcode = OP_ICOMPARE; MONO_ADD_INS (cfg->cbb, cmp); ins->type = STACK_I4; ins->dreg = alloc_dreg (cfg, (MonoStackType)ins->type); type_from_op (cfg, ins, arg1, arg2); if (cmp->opcode == OP_FCOMPARE || cmp->opcode == OP_RCOMPARE) { /* * The backends expect the fceq opcodes to do the * comparison too. 
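 * So the compare is nullified below and its source registers are copied onto the fceq-style opcode instead.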
*/ ins->sreg1 = cmp->sreg1; ins->sreg2 = cmp->sreg2; NULLIFY_INS (cmp); } MONO_ADD_INS (cfg->cbb, ins); *sp++ = ins; break; } case MONO_CEE_LDFTN: { MonoInst *argconst; MonoMethod *cil_method; cmethod = mini_get_method (cfg, method, n, NULL, generic_context); CHECK_CFG_ERROR; if (constrained_class) { if (m_method_is_static (cmethod) && mini_class_check_context_used (cfg, constrained_class)) // FIXME: GENERIC_SHARING_FAILURE (CEE_LDFTN); cmethod = get_constrained_method (cfg, image, n, cmethod, constrained_class, generic_context); constrained_class = NULL; CHECK_CFG_ERROR; } mono_class_init_internal (cmethod->klass); mono_save_token_info (cfg, image, n, cmethod); context_used = mini_method_check_context_used (cfg, cmethod); cil_method = cmethod; if (!dont_verify && !cfg->skip_visibility && !mono_method_can_access_method (method, cmethod)) emit_method_access_failure (cfg, method, cil_method); const gboolean has_unmanaged_callers_only = cmethod->wrapper_type == MONO_WRAPPER_NONE && mono_method_has_unmanaged_callers_only_attribute (cmethod); /* * Optimize the common case of ldftn+delegate creation */ if ((sp > stack_start) && (next_ip + 4 < end) && ip_in_bb (cfg, cfg->cbb, next_ip) && (next_ip [0] == CEE_NEWOBJ)) { MonoMethod *ctor_method = mini_get_method (cfg, method, read32 (next_ip + 1), NULL, generic_context); if (ctor_method && (m_class_get_parent (ctor_method->klass) == mono_defaults.multicastdelegate_class)) { MonoInst *target_ins, *handle_ins; MonoMethod *invoke; int invoke_context_used; if (G_UNLIKELY (has_unmanaged_callers_only)) { mono_error_set_not_supported (cfg->error, "Cannot create delegate from method with UnmanagedCallersOnlyAttribute"); CHECK_CFG_ERROR; } invoke = mono_get_delegate_invoke_internal (ctor_method->klass); if (!invoke || !mono_method_signature_internal (invoke)) LOAD_ERROR; invoke_context_used = mini_method_check_context_used (cfg, invoke); target_ins = sp [-1]; if (!(cmethod->flags & METHOD_ATTRIBUTE_STATIC)) { /*BAD IMPL: We must not add a null check for virtual invoke delegates.*/ if (mono_method_signature_internal (invoke)->param_count == mono_method_signature_internal (cmethod)->param_count) { MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, target_ins->dreg, 0); MONO_EMIT_NEW_COND_EXC (cfg, EQ, "ArgumentException"); } } if ((invoke_context_used == 0 || !cfg->gsharedvt) || cfg->llvm_only) { if (cfg->verbose_level > 3) g_print ("converting (in B%d: stack: %d) %s", cfg->cbb->block_num, (int)(sp - stack_start), mono_disasm_code_one (NULL, method, ip + 6, NULL)); if ((handle_ins = handle_delegate_ctor (cfg, ctor_method->klass, target_ins, cmethod, context_used, invoke_context_used, FALSE))) { sp --; *sp = handle_ins; CHECK_CFG_EXCEPTION; sp ++; next_ip += 5; il_op = MONO_CEE_NEWOBJ; break; } else { CHECK_CFG_ERROR; } } } } /* UnmanagedCallersOnlyAttribute means ldftn should return a method callable from native */ if (G_UNLIKELY (has_unmanaged_callers_only)) { if (G_UNLIKELY (cmethod->flags & METHOD_ATTRIBUTE_PINVOKE_IMPL)) { // Follow CoreCLR, disallow [UnmanagedCallersOnly] and [DllImport] to be used // together emit_not_supported_failure (cfg); EMIT_NEW_PCONST (cfg, ins, NULL); *sp++ = ins; inline_costs += CALL_COST * MIN(10, num_calls++); break; } MonoClass *delegate_klass = NULL; MonoGCHandle target_handle = 0; ERROR_DECL (wrapper_error); MonoMethod *wrapped_cmethod; wrapped_cmethod = mono_marshal_get_managed_wrapper (cmethod, delegate_klass, target_handle, wrapper_error); if (!is_ok (wrapper_error)) { /* if we couldn't create a wrapper because cmethod 
isn't supposed to have an UnmanagedCallersOnly attribute, follow CoreCLR behavior and throw when the method with the ldftn is executing, not when it is being compiled. */ emit_invalid_program_with_msg (cfg, wrapper_error, method, cmethod); mono_error_cleanup (wrapper_error); EMIT_NEW_PCONST (cfg, ins, NULL); *sp++ = ins; inline_costs += CALL_COST * MIN(10, num_calls++); break; } else { cmethod = wrapped_cmethod; } } argconst = emit_get_rgctx_method (cfg, context_used, cmethod, MONO_RGCTX_INFO_METHOD); ins = mono_emit_jit_icall (cfg, mono_ldftn, &argconst); *sp++ = ins; inline_costs += CALL_COST * MIN(10, num_calls++); break; } case MONO_CEE_LDVIRTFTN: { MonoInst *args [2]; cmethod = mini_get_method (cfg, method, n, NULL, generic_context); CHECK_CFG_ERROR; mono_class_init_internal (cmethod->klass); context_used = mini_method_check_context_used (cfg, cmethod); /* * Optimize the common case of ldvirtftn+delegate creation */ if (previous_il_op == MONO_CEE_DUP && (sp > stack_start) && (next_ip + 4 < end) && ip_in_bb (cfg, cfg->cbb, next_ip) && (next_ip [0] == CEE_NEWOBJ)) { MonoMethod *ctor_method = mini_get_method (cfg, method, read32 (next_ip + 1), NULL, generic_context); if (ctor_method && (m_class_get_parent (ctor_method->klass) == mono_defaults.multicastdelegate_class)) { MonoInst *target_ins, *handle_ins; MonoMethod *invoke; int invoke_context_used; const gboolean is_virtual = (cmethod->flags & METHOD_ATTRIBUTE_VIRTUAL) != 0; invoke = mono_get_delegate_invoke_internal (ctor_method->klass); if (!invoke || !mono_method_signature_internal (invoke)) LOAD_ERROR; invoke_context_used = mini_method_check_context_used (cfg, invoke); target_ins = sp [-1]; if (invoke_context_used == 0 || !cfg->gsharedvt || cfg->llvm_only) { if (cfg->verbose_level > 3) g_print ("converting (in B%d: stack: %d) %s", cfg->cbb->block_num, (int)(sp - stack_start), mono_disasm_code_one (NULL, method, ip + 6, NULL)); if ((handle_ins = handle_delegate_ctor (cfg, ctor_method->klass, target_ins, cmethod, context_used, invoke_context_used, is_virtual))) { sp -= 2; *sp = handle_ins; CHECK_CFG_EXCEPTION; next_ip += 5; previous_il_op = MONO_CEE_NEWOBJ; sp ++; break; } else { CHECK_CFG_ERROR; } } } } --sp; args [0] = *sp; args [1] = emit_get_rgctx_method (cfg, context_used, cmethod, MONO_RGCTX_INFO_METHOD); if (context_used) *sp++ = mono_emit_jit_icall (cfg, mono_ldvirtfn_gshared, args); else *sp++ = mono_emit_jit_icall (cfg, mono_ldvirtfn, args); inline_costs += CALL_COST * MIN(10, num_calls++); break; } case MONO_CEE_LOCALLOC: { MonoBasicBlock *non_zero_bb, *end_bb; int alloc_ptr = alloc_preg (cfg); --sp; if (sp != stack_start) UNVERIFIED; if (cfg->method != method) /* * Inlining this into a loop in a parent could lead to * stack overflows which is different behavior than the * non-inlined case, thus disable inlining in this case. 
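 * E.g. a localloc inlined into a caller's loop would grow the frame on every iteration instead of once per call.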
*/ INLINE_FAILURE("localloc"); NEW_BBLOCK (cfg, non_zero_bb); NEW_BBLOCK (cfg, end_bb); /* if size != zero */ MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, sp [0]->dreg, 0); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_PBNE_UN, non_zero_bb); //size is zero, so result is NULL MONO_EMIT_NEW_PCONST (cfg, alloc_ptr, NULL); MONO_EMIT_NEW_BRANCH_BLOCK (cfg, OP_BR, end_bb); MONO_START_BB (cfg, non_zero_bb); MONO_INST_NEW (cfg, ins, OP_LOCALLOC); ins->dreg = alloc_ptr; ins->sreg1 = sp [0]->dreg; ins->type = STACK_PTR; MONO_ADD_INS (cfg->cbb, ins); cfg->flags |= MONO_CFG_HAS_ALLOCA; if (header->init_locals) ins->flags |= MONO_INST_INIT; MONO_START_BB (cfg, end_bb); EMIT_NEW_UNALU (cfg, ins, OP_MOVE, alloc_preg (cfg), alloc_ptr); ins->type = STACK_PTR; *sp++ = ins; break; } case MONO_CEE_ENDFILTER: { MonoExceptionClause *clause, *nearest; int cc; --sp; if ((sp != stack_start) || (sp [0]->type != STACK_I4)) UNVERIFIED; MONO_INST_NEW (cfg, ins, OP_ENDFILTER); ins->sreg1 = (*sp)->dreg; MONO_ADD_INS (cfg->cbb, ins); start_new_bblock = 1; nearest = NULL; for (cc = 0; cc < header->num_clauses; ++cc) { clause = &header->clauses [cc]; if ((clause->flags & MONO_EXCEPTION_CLAUSE_FILTER) && ((next_ip - header->code) > clause->data.filter_offset && (next_ip - header->code) <= clause->handler_offset) && (!nearest || (clause->data.filter_offset < nearest->data.filter_offset))) nearest = clause; } g_assert (nearest); if ((next_ip - header->code) != nearest->handler_offset) UNVERIFIED; break; } case MONO_CEE_UNALIGNED_: ins_flag |= MONO_INST_UNALIGNED; /* FIXME: record alignment? we can assume 1 for now */ break; case MONO_CEE_VOLATILE_: ins_flag |= MONO_INST_VOLATILE; break; case MONO_CEE_TAIL_: ins_flag |= MONO_INST_TAILCALL; cfg->flags |= MONO_CFG_HAS_TAILCALL; /* Can't inline tailcalls at this time */ inline_costs += 100000; break; case MONO_CEE_INITOBJ: --sp; klass = mini_get_class (method, token, generic_context); CHECK_TYPELOAD (klass); if (mini_class_is_reference (klass)) MONO_EMIT_NEW_STORE_MEMBASE_IMM (cfg, OP_STORE_MEMBASE_IMM, sp [0]->dreg, 0, 0); else mini_emit_initobj (cfg, *sp, NULL, klass); inline_costs += 1; break; case MONO_CEE_CONSTRAINED_: constrained_class = mini_get_class (method, token, generic_context); CHECK_TYPELOAD (constrained_class); ins_has_side_effect = FALSE; break; case MONO_CEE_CPBLK: sp -= 3; mini_emit_memory_copy_bytes (cfg, sp [0], sp [1], sp [2], ins_flag); ins_flag = 0; inline_costs += 1; break; case MONO_CEE_INITBLK: sp -= 3; mini_emit_memory_init_bytes (cfg, sp [0], sp [1], sp [2], ins_flag); ins_flag = 0; inline_costs += 1; break; case MONO_CEE_NO_: if (ip [2] & CEE_NO_TYPECHECK) ins_flag |= MONO_INST_NOTYPECHECK; if (ip [2] & CEE_NO_RANGECHECK) ins_flag |= MONO_INST_NORANGECHECK; if (ip [2] & CEE_NO_NULLCHECK) ins_flag |= MONO_INST_NONULLCHECK; break; case MONO_CEE_RETHROW: { MonoInst *load; int handler_offset = -1; for (i = 0; i < header->num_clauses; ++i) { MonoExceptionClause *clause = &header->clauses [i]; if (MONO_OFFSET_IN_HANDLER (clause, ip - header->code) && !(clause->flags & MONO_EXCEPTION_CLAUSE_FINALLY)) { handler_offset = clause->handler_offset; break; } } cfg->cbb->flags |= BB_EXCEPTION_UNSAFE; if (handler_offset == -1) UNVERIFIED; EMIT_NEW_TEMPLOAD (cfg, load, mono_find_exvar_for_offset (cfg, handler_offset)->inst_c0); MONO_INST_NEW (cfg, ins, OP_RETHROW); ins->sreg1 = load->dreg; MONO_ADD_INS (cfg->cbb, ins); MONO_INST_NEW (cfg, ins, OP_NOT_REACHED); MONO_ADD_INS (cfg->cbb, ins); sp = stack_start; link_bblock (cfg, cfg->cbb, end_bblock); start_new_bblock = 1; 
break; } case MONO_CEE_MONO_RETHROW: { if (sp [-1]->type != STACK_OBJ) UNVERIFIED; MONO_INST_NEW (cfg, ins, OP_RETHROW); --sp; ins->sreg1 = sp [0]->dreg; cfg->cbb->out_of_line = TRUE; MONO_ADD_INS (cfg->cbb, ins); MONO_INST_NEW (cfg, ins, OP_NOT_REACHED); MONO_ADD_INS (cfg->cbb, ins); sp = stack_start; link_bblock (cfg, cfg->cbb, end_bblock); start_new_bblock = 1; /* This can complicate code generation for llvm since the return value might not be defined */ if (COMPILE_LLVM (cfg)) INLINE_FAILURE ("mono_rethrow"); break; } case MONO_CEE_SIZEOF: { guint32 val; int ialign; if (mono_metadata_token_table (token) == MONO_TABLE_TYPESPEC && !image_is_dynamic (m_class_get_image (method->klass)) && !generic_context) { MonoType *type = mono_type_create_from_typespec_checked (image, token, cfg->error); CHECK_CFG_ERROR; val = mono_type_size (type, &ialign); EMIT_NEW_ICONST (cfg, ins, val); } else { MonoClass *klass = mini_get_class (method, token, generic_context); CHECK_TYPELOAD (klass); if (mini_is_gsharedvt_klass (klass)) { ins = mini_emit_get_gsharedvt_info_klass (cfg, klass, MONO_RGCTX_INFO_CLASS_SIZEOF); ins->type = STACK_I4; } else { val = mono_type_size (m_class_get_byval_arg (klass), &ialign); EMIT_NEW_ICONST (cfg, ins, val); } } *sp++ = ins; break; } case MONO_CEE_REFANYTYPE: { MonoInst *src_var, *src; GSHAREDVT_FAILURE (il_op); --sp; // FIXME: src_var = get_vreg_to_inst (cfg, sp [0]->dreg); if (!src_var) src_var = mono_compile_create_var_for_vreg (cfg, m_class_get_byval_arg (mono_defaults.typed_reference_class), OP_LOCAL, sp [0]->dreg); EMIT_NEW_VARLOADA (cfg, src, src_var, src_var->inst_vtype); EMIT_NEW_LOAD_MEMBASE_TYPE (cfg, ins, m_class_get_byval_arg (mono_defaults.typehandle_class), src->dreg, MONO_STRUCT_OFFSET (MonoTypedRef, type)); *sp++ = ins; break; } case MONO_CEE_READONLY_: readonly = TRUE; break; case MONO_CEE_UNUSED56: case MONO_CEE_UNUSED57: case MONO_CEE_UNUSED70: case MONO_CEE_UNUSED: case MONO_CEE_UNUSED99: case MONO_CEE_UNUSED58: case MONO_CEE_UNUSED1: UNVERIFIED; default: g_warning ("opcode 0x%02x not handled", il_op); UNVERIFIED; } if (ins_has_side_effect) cfg->cbb->flags |= BB_HAS_SIDE_EFFECTS; } if (start_new_bblock != 1) UNVERIFIED; cfg->cbb->cil_length = ip - cfg->cbb->cil_code; if (cfg->cbb->next_bb) { /* This could already be set because of inlining, #693905 */ MonoBasicBlock *bb = cfg->cbb; while (bb->next_bb) bb = bb->next_bb; bb->next_bb = end_bblock; } else { cfg->cbb->next_bb = end_bblock; } #if defined(TARGET_POWERPC) || defined(TARGET_X86) if (cfg->compile_aot) /* FIXME: The plt slots require a GOT var even if the method doesn't use it */ mono_get_got_var (cfg); #endif #ifdef TARGET_WASM if (cfg->lmf_var && !cfg->deopt) { // mini_llvmonly_pop_lmf () might be called before emit_push_lmf () so initialize the LMF cfg->cbb = init_localsbb; EMIT_NEW_VARLOADA (cfg, ins, cfg->lmf_var, NULL); int lmf_reg = ins->dreg; EMIT_NEW_STORE_MEMBASE (cfg, ins, OP_STORE_MEMBASE_IMM, lmf_reg, MONO_STRUCT_OFFSET (MonoLMF, previous_lmf), 0); } #endif if (cfg->method == method && cfg->got_var) mono_emit_load_got_addr (cfg); if (init_localsbb) { cfg->cbb = init_localsbb; cfg->ip = NULL; for (i = 0; i < header->num_locals; ++i) { /* * Vtype initialization might need to be done after CEE_JIT_ATTACH, since it can make calls to memset (), * which need the trampoline code to work. 
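 * Hence struct locals are initialized in init_localsbb2 (which follows any attach call) and scalar locals in init_localsbb.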
if (MONO_TYPE_ISSTRUCT (header->locals [i])) cfg->cbb = init_localsbb2; else cfg->cbb = init_localsbb; emit_init_local (cfg, i, header->locals [i], init_locals); } } if (cfg->init_ref_vars && cfg->method == method) { /* Emit initialization for ref vars */ // FIXME: Avoid duplicate initialization for IL locals. for (i = 0; i < cfg->num_varinfo; ++i) { MonoInst *ins = cfg->varinfo [i]; if (ins->opcode == OP_LOCAL && ins->type == STACK_OBJ) MONO_EMIT_NEW_PCONST (cfg, ins->dreg, NULL); } } if (cfg->lmf_var && cfg->method == method && !cfg->llvm_only) { cfg->cbb = init_localsbb; emit_push_lmf (cfg); } /* emit profiler enter code after a jit attach if there is one */ cfg->cbb = init_localsbb2; mini_profiler_emit_enter (cfg); cfg->cbb = init_localsbb; if (seq_points) { MonoBasicBlock *bb; /* * Make seq points at backward branch targets interruptible. */ for (bb = cfg->bb_entry; bb; bb = bb->next_bb) if (bb->code && bb->in_count > 1 && bb->code->opcode == OP_SEQ_POINT) bb->code->flags |= MONO_INST_SINGLE_STEP_LOC; } /* Add a sequence point for method entry/exit events */ if (seq_points && cfg->gen_sdb_seq_points) { NEW_SEQ_POINT (cfg, ins, METHOD_ENTRY_IL_OFFSET, FALSE); MONO_ADD_INS (init_localsbb, ins); NEW_SEQ_POINT (cfg, ins, METHOD_EXIT_IL_OFFSET, FALSE); MONO_ADD_INS (cfg->bb_exit, ins); } /* * Add seq points for IL offsets which have line number info, but for which no seq point was generated during JITting because * the code they refer to was dead (#11880). */ if (sym_seq_points) { for (i = 0; i < header->code_size; ++i) { if (mono_bitset_test_fast (seq_point_locs, i) && !mono_bitset_test_fast (seq_point_set_locs, i)) { MonoInst *ins; NEW_SEQ_POINT (cfg, ins, i, FALSE); mono_add_seq_point (cfg, NULL, ins, SEQ_POINT_NATIVE_OFFSET_DEAD_CODE); } } } cfg->ip = NULL; if (cfg->method == method) { compute_bb_regions (cfg); } else { MonoBasicBlock *bb; /* get_most_deep_clause () in mini-llvm.c depends on this for inlined bblocks */ for (bb = start_bblock; bb != end_bblock; bb = bb->next_bb) { bb->real_offset = inline_offset; } } if (inline_costs < 0) { char *mname; /* Method is too large */ mname = mono_method_full_name (method, TRUE); mono_cfg_set_exception_invalid_program (cfg, g_strdup_printf ("Method %s is too complex.", mname)); g_free (mname); } if ((cfg->verbose_level > 2) && (cfg->method == method)) mono_print_code (cfg, "AFTER METHOD-TO-IR"); goto cleanup; mono_error_exit: if (cfg->verbose_level > 3) g_print ("exiting due to error"); g_assert (!is_ok (cfg->error)); goto cleanup; exception_exit: if (cfg->verbose_level > 3) g_print ("exiting due to exception"); g_assert (cfg->exception_type != MONO_EXCEPTION_NONE); goto cleanup; unverified: if (cfg->verbose_level > 3) g_print ("exiting due to invalid il"); set_exception_type_from_invalid_il (cfg, method, ip); goto cleanup; cleanup: g_slist_free (class_inits); mono_basic_block_free (original_bb); cfg->dont_inline = g_list_remove (cfg->dont_inline, method); if (cfg->exception_type) return -1; else return inline_costs; } static int store_membase_reg_to_store_membase_imm (int opcode) { switch (opcode) { case OP_STORE_MEMBASE_REG: return OP_STORE_MEMBASE_IMM; case OP_STOREI1_MEMBASE_REG: return OP_STOREI1_MEMBASE_IMM; case OP_STOREI2_MEMBASE_REG: return OP_STOREI2_MEMBASE_IMM; case OP_STOREI4_MEMBASE_REG: return OP_STOREI4_MEMBASE_IMM; case OP_STOREI8_MEMBASE_REG: return OP_STOREI8_MEMBASE_IMM; default: g_assert_not_reached (); } return -1; } int mono_op_to_op_imm (int opcode) { switch (opcode) { case OP_IADD: return OP_IADD_IMM; case OP_ISUB:
return OP_ISUB_IMM; case OP_IDIV: return OP_IDIV_IMM; case OP_IDIV_UN: return OP_IDIV_UN_IMM; case OP_IREM: return OP_IREM_IMM; case OP_IREM_UN: return OP_IREM_UN_IMM; case OP_IMUL: return OP_IMUL_IMM; case OP_IAND: return OP_IAND_IMM; case OP_IOR: return OP_IOR_IMM; case OP_IXOR: return OP_IXOR_IMM; case OP_ISHL: return OP_ISHL_IMM; case OP_ISHR: return OP_ISHR_IMM; case OP_ISHR_UN: return OP_ISHR_UN_IMM; case OP_LADD: return OP_LADD_IMM; case OP_LSUB: return OP_LSUB_IMM; case OP_LAND: return OP_LAND_IMM; case OP_LOR: return OP_LOR_IMM; case OP_LXOR: return OP_LXOR_IMM; case OP_LSHL: return OP_LSHL_IMM; case OP_LSHR: return OP_LSHR_IMM; case OP_LSHR_UN: return OP_LSHR_UN_IMM; #if SIZEOF_REGISTER == 8 case OP_LMUL: return OP_LMUL_IMM; case OP_LREM: return OP_LREM_IMM; #endif case OP_COMPARE: return OP_COMPARE_IMM; case OP_ICOMPARE: return OP_ICOMPARE_IMM; case OP_LCOMPARE: return OP_LCOMPARE_IMM; case OP_STORE_MEMBASE_REG: return OP_STORE_MEMBASE_IMM; case OP_STOREI1_MEMBASE_REG: return OP_STOREI1_MEMBASE_IMM; case OP_STOREI2_MEMBASE_REG: return OP_STOREI2_MEMBASE_IMM; case OP_STOREI4_MEMBASE_REG: return OP_STOREI4_MEMBASE_IMM; #if defined(TARGET_X86) || defined (TARGET_AMD64) case OP_X86_PUSH: return OP_X86_PUSH_IMM; case OP_X86_COMPARE_MEMBASE_REG: return OP_X86_COMPARE_MEMBASE_IMM; #endif #if defined(TARGET_AMD64) case OP_AMD64_ICOMPARE_MEMBASE_REG: return OP_AMD64_ICOMPARE_MEMBASE_IMM; #endif case OP_VOIDCALL_REG: return OP_VOIDCALL; case OP_CALL_REG: return OP_CALL; case OP_LCALL_REG: return OP_LCALL; case OP_FCALL_REG: return OP_FCALL; case OP_LOCALLOC: return OP_LOCALLOC_IMM; } return -1; } int mono_load_membase_to_load_mem (int opcode) { // FIXME: Add a MONO_ARCH_HAVE_LOAD_MEM macro #if defined(TARGET_X86) || defined(TARGET_AMD64) switch (opcode) { case OP_LOAD_MEMBASE: return OP_LOAD_MEM; case OP_LOADU1_MEMBASE: return OP_LOADU1_MEM; case OP_LOADU2_MEMBASE: return OP_LOADU2_MEM; case OP_LOADI4_MEMBASE: return OP_LOADI4_MEM; case OP_LOADU4_MEMBASE: return OP_LOADU4_MEM; #if SIZEOF_REGISTER == 8 case OP_LOADI8_MEMBASE: return OP_LOADI8_MEM; #endif } #endif return -1; } static int op_to_op_dest_membase (int store_opcode, int opcode) { #if defined(TARGET_X86) if (!((store_opcode == OP_STORE_MEMBASE_REG) || (store_opcode == OP_STOREI4_MEMBASE_REG))) return -1; switch (opcode) { case OP_IADD: return OP_X86_ADD_MEMBASE_REG; case OP_ISUB: return OP_X86_SUB_MEMBASE_REG; case OP_IAND: return OP_X86_AND_MEMBASE_REG; case OP_IOR: return OP_X86_OR_MEMBASE_REG; case OP_IXOR: return OP_X86_XOR_MEMBASE_REG; case OP_ADD_IMM: case OP_IADD_IMM: return OP_X86_ADD_MEMBASE_IMM; case OP_SUB_IMM: case OP_ISUB_IMM: return OP_X86_SUB_MEMBASE_IMM; case OP_AND_IMM: case OP_IAND_IMM: return OP_X86_AND_MEMBASE_IMM; case OP_OR_IMM: case OP_IOR_IMM: return OP_X86_OR_MEMBASE_IMM; case OP_XOR_IMM: case OP_IXOR_IMM: return OP_X86_XOR_MEMBASE_IMM; case OP_MOVE: return OP_NOP; } #endif #if defined(TARGET_AMD64) if (!((store_opcode == OP_STORE_MEMBASE_REG) || (store_opcode == OP_STOREI4_MEMBASE_REG) || (store_opcode == OP_STOREI8_MEMBASE_REG))) return -1; switch (opcode) { case OP_IADD: return OP_X86_ADD_MEMBASE_REG; case OP_ISUB: return OP_X86_SUB_MEMBASE_REG; case OP_IAND: return OP_X86_AND_MEMBASE_REG; case OP_IOR: return OP_X86_OR_MEMBASE_REG; case OP_IXOR: return OP_X86_XOR_MEMBASE_REG; case OP_IADD_IMM: return OP_X86_ADD_MEMBASE_IMM; case OP_ISUB_IMM: return OP_X86_SUB_MEMBASE_IMM; case OP_IAND_IMM: return OP_X86_AND_MEMBASE_IMM; case OP_IOR_IMM: return OP_X86_OR_MEMBASE_IMM; case OP_IXOR_IMM: return 
OP_X86_XOR_MEMBASE_IMM; case OP_LADD: return OP_AMD64_ADD_MEMBASE_REG; case OP_LSUB: return OP_AMD64_SUB_MEMBASE_REG; case OP_LAND: return OP_AMD64_AND_MEMBASE_REG; case OP_LOR: return OP_AMD64_OR_MEMBASE_REG; case OP_LXOR: return OP_AMD64_XOR_MEMBASE_REG; case OP_ADD_IMM: case OP_LADD_IMM: return OP_AMD64_ADD_MEMBASE_IMM; case OP_SUB_IMM: case OP_LSUB_IMM: return OP_AMD64_SUB_MEMBASE_IMM; case OP_AND_IMM: case OP_LAND_IMM: return OP_AMD64_AND_MEMBASE_IMM; case OP_OR_IMM: case OP_LOR_IMM: return OP_AMD64_OR_MEMBASE_IMM; case OP_XOR_IMM: case OP_LXOR_IMM: return OP_AMD64_XOR_MEMBASE_IMM; case OP_MOVE: return OP_NOP; } #endif return -1; } static int op_to_op_store_membase (int store_opcode, int opcode) { #if defined(TARGET_X86) || defined(TARGET_AMD64) switch (opcode) { case OP_ICEQ: if (store_opcode == OP_STOREI1_MEMBASE_REG) return OP_X86_SETEQ_MEMBASE; case OP_CNE: if (store_opcode == OP_STOREI1_MEMBASE_REG) return OP_X86_SETNE_MEMBASE; } #endif return -1; } static int op_to_op_src1_membase (MonoCompile *cfg, int load_opcode, int opcode) { #ifdef TARGET_X86 /* FIXME: This has sign extension issues */ /* if ((opcode == OP_ICOMPARE_IMM) && (load_opcode == OP_LOADU1_MEMBASE)) return OP_X86_COMPARE_MEMBASE8_IMM; */ if (!((load_opcode == OP_LOAD_MEMBASE) || (load_opcode == OP_LOADI4_MEMBASE) || (load_opcode == OP_LOADU4_MEMBASE))) return -1; switch (opcode) { case OP_X86_PUSH: return OP_X86_PUSH_MEMBASE; case OP_COMPARE_IMM: case OP_ICOMPARE_IMM: return OP_X86_COMPARE_MEMBASE_IMM; case OP_COMPARE: case OP_ICOMPARE: return OP_X86_COMPARE_MEMBASE_REG; } #endif #ifdef TARGET_AMD64 /* FIXME: This has sign extension issues */ /* if ((opcode == OP_ICOMPARE_IMM) && (load_opcode == OP_LOADU1_MEMBASE)) return OP_X86_COMPARE_MEMBASE8_IMM; */ switch (opcode) { case OP_X86_PUSH: if ((load_opcode == OP_LOAD_MEMBASE && !cfg->backend->ilp32) || (load_opcode == OP_LOADI8_MEMBASE)) return OP_X86_PUSH_MEMBASE; break; /* FIXME: This only works for 32 bit immediates case OP_COMPARE_IMM: case OP_LCOMPARE_IMM: if ((load_opcode == OP_LOAD_MEMBASE) || (load_opcode == OP_LOADI8_MEMBASE)) return OP_AMD64_COMPARE_MEMBASE_IMM; */ case OP_ICOMPARE_IMM: if ((load_opcode == OP_LOADI4_MEMBASE) || (load_opcode == OP_LOADU4_MEMBASE)) return OP_AMD64_ICOMPARE_MEMBASE_IMM; break; case OP_COMPARE: case OP_LCOMPARE: if (cfg->backend->ilp32 && load_opcode == OP_LOAD_MEMBASE) return OP_AMD64_ICOMPARE_MEMBASE_REG; if ((load_opcode == OP_LOAD_MEMBASE && !cfg->backend->ilp32) || (load_opcode == OP_LOADI8_MEMBASE)) return OP_AMD64_COMPARE_MEMBASE_REG; break; case OP_ICOMPARE: if ((load_opcode == OP_LOADI4_MEMBASE) || (load_opcode == OP_LOADU4_MEMBASE)) return OP_AMD64_ICOMPARE_MEMBASE_REG; break; } #endif return -1; } static int op_to_op_src2_membase (MonoCompile *cfg, int load_opcode, int opcode) { #ifdef TARGET_X86 if (!((load_opcode == OP_LOAD_MEMBASE) || (load_opcode == OP_LOADI4_MEMBASE) || (load_opcode == OP_LOADU4_MEMBASE))) return -1; switch (opcode) { case OP_COMPARE: case OP_ICOMPARE: return OP_X86_COMPARE_REG_MEMBASE; case OP_IADD: return OP_X86_ADD_REG_MEMBASE; case OP_ISUB: return OP_X86_SUB_REG_MEMBASE; case OP_IAND: return OP_X86_AND_REG_MEMBASE; case OP_IOR: return OP_X86_OR_REG_MEMBASE; case OP_IXOR: return OP_X86_XOR_REG_MEMBASE; } #endif #ifdef TARGET_AMD64 if ((load_opcode == OP_LOADI4_MEMBASE) || (load_opcode == OP_LOADU4_MEMBASE) || (load_opcode == OP_LOAD_MEMBASE && cfg->backend->ilp32)) { switch (opcode) { case OP_ICOMPARE: return OP_AMD64_ICOMPARE_REG_MEMBASE; case OP_IADD: return OP_X86_ADD_REG_MEMBASE; case 
OP_ISUB: return OP_X86_SUB_REG_MEMBASE; case OP_IAND: return OP_X86_AND_REG_MEMBASE; case OP_IOR: return OP_X86_OR_REG_MEMBASE; case OP_IXOR: return OP_X86_XOR_REG_MEMBASE; } } else if ((load_opcode == OP_LOADI8_MEMBASE) || (load_opcode == OP_LOAD_MEMBASE && !cfg->backend->ilp32)) { switch (opcode) { case OP_COMPARE: case OP_LCOMPARE: return OP_AMD64_COMPARE_REG_MEMBASE; case OP_LADD: return OP_AMD64_ADD_REG_MEMBASE; case OP_LSUB: return OP_AMD64_SUB_REG_MEMBASE; case OP_LAND: return OP_AMD64_AND_REG_MEMBASE; case OP_LOR: return OP_AMD64_OR_REG_MEMBASE; case OP_LXOR: return OP_AMD64_XOR_REG_MEMBASE; } } #endif return -1; } int mono_op_to_op_imm_noemul (int opcode) { MONO_DISABLE_WARNING(4065) // switch with default but no case switch (opcode) { #if SIZEOF_REGISTER == 4 && !defined(MONO_ARCH_NO_EMULATE_LONG_SHIFT_OPS) case OP_LSHR: case OP_LSHL: case OP_LSHR_UN: return -1; #endif #if defined(MONO_ARCH_EMULATE_MUL_DIV) || defined(MONO_ARCH_EMULATE_DIV) case OP_IDIV: case OP_IDIV_UN: case OP_IREM: case OP_IREM_UN: return -1; #endif #if defined(MONO_ARCH_EMULATE_MUL_DIV) case OP_IMUL: return -1; #endif default: return mono_op_to_op_imm (opcode); } MONO_RESTORE_WARNING } gboolean mono_op_no_side_effects (int opcode) { /* FIXME: Add more instructions */ /* INEG sets the condition codes, and the OP_LNEG decomposition depends on this on x86 */ switch (opcode) { case OP_MOVE: case OP_FMOVE: case OP_VMOVE: case OP_XMOVE: case OP_RMOVE: case OP_VZERO: case OP_XZERO: case OP_XONES: case OP_ICONST: case OP_I8CONST: case OP_ADD_IMM: case OP_R8CONST: case OP_LADD_IMM: case OP_ISUB_IMM: case OP_IADD_IMM: case OP_LNEG: case OP_ISUB: case OP_CMOV_IGE: case OP_ISHL_IMM: case OP_ISHR_IMM: case OP_ISHR_UN_IMM: case OP_IAND_IMM: case OP_ICONV_TO_U1: case OP_ICONV_TO_I1: case OP_SEXT_I4: case OP_LCONV_TO_U1: case OP_ICONV_TO_U2: case OP_ICONV_TO_I2: case OP_LCONV_TO_I2: case OP_LDADDR: case OP_PHI: case OP_NOP: case OP_ZEXT_I4: case OP_NOT_NULL: case OP_IL_SEQ_POINT: case OP_RTTYPE: return TRUE; default: return FALSE; } } gboolean mono_ins_no_side_effects (MonoInst *ins) { if (mono_op_no_side_effects (ins->opcode)) return TRUE; if (ins->opcode == OP_AOTCONST) { MonoJumpInfoType type = (MonoJumpInfoType)(intptr_t)ins->inst_p1; // Some AOTCONSTs have side effects switch (type) { case MONO_PATCH_INFO_TYPE_FROM_HANDLE: case MONO_PATCH_INFO_LDSTR: case MONO_PATCH_INFO_VTABLE: case MONO_PATCH_INFO_METHOD_RGCTX: return TRUE; } } return FALSE; } /** * mono_handle_global_vregs: * * Make vregs used in more than one bblock 'global', i.e. allocate a variable * for them. 
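 * A vreg first seen in one bblock and then in a different one (tracked via the vreg_to_bb array below) is promoted by creating a MonoInst variable of the matching regtype.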
*/ void mono_handle_global_vregs (MonoCompile *cfg) { gint32 *vreg_to_bb; MonoBasicBlock *bb; int i, pos; vreg_to_bb = (gint32 *)mono_mempool_alloc0 (cfg->mempool, sizeof (gint32*) * cfg->next_vreg + 1); #ifdef MONO_ARCH_SIMD_INTRINSICS if (cfg->uses_simd_intrinsics & MONO_CFG_USES_SIMD_INTRINSICS_SIMPLIFY_INDIRECTION) mono_simd_simplify_indirection (cfg); #endif /* Find local vregs used in more than one bb */ for (bb = cfg->bb_entry; bb; bb = bb->next_bb) { MonoInst *ins = bb->code; int block_num = bb->block_num; if (cfg->verbose_level > 2) printf ("\nHANDLE-GLOBAL-VREGS BLOCK %d:\n", bb->block_num); cfg->cbb = bb; for (; ins; ins = ins->next) { const char *spec = INS_INFO (ins->opcode); int regtype = 0, regindex; gint32 prev_bb; if (G_UNLIKELY (cfg->verbose_level > 2)) mono_print_ins (ins); g_assert (ins->opcode >= MONO_CEE_LAST); for (regindex = 0; regindex < 4; regindex ++) { int vreg = 0; if (regindex == 0) { regtype = spec [MONO_INST_DEST]; if (regtype == ' ') continue; vreg = ins->dreg; } else if (regindex == 1) { regtype = spec [MONO_INST_SRC1]; if (regtype == ' ') continue; vreg = ins->sreg1; } else if (regindex == 2) { regtype = spec [MONO_INST_SRC2]; if (regtype == ' ') continue; vreg = ins->sreg2; } else if (regindex == 3) { regtype = spec [MONO_INST_SRC3]; if (regtype == ' ') continue; vreg = ins->sreg3; } #if SIZEOF_REGISTER == 4 /* In the LLVM case, the long opcodes are not decomposed */ if (regtype == 'l' && !COMPILE_LLVM (cfg)) { /* * Since some instructions reference the original long vreg, * and some reference the two component vregs, it is quite hard * to determine when it needs to be global. So be conservative. */ if (!get_vreg_to_inst (cfg, vreg)) { mono_compile_create_var_for_vreg (cfg, m_class_get_byval_arg (mono_defaults.int64_class), OP_LOCAL, vreg); if (cfg->verbose_level > 2) printf ("LONG VREG R%d made global.\n", vreg); } /* * Make the component vregs volatile since the optimizations can * get confused otherwise. 
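 * MONO_LVREG_LS ()/MONO_LVREG_MS () name the least/most significant word vregs of the 64-bit pair.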
get_vreg_to_inst (cfg, MONO_LVREG_LS (vreg))->flags |= MONO_INST_VOLATILE; get_vreg_to_inst (cfg, MONO_LVREG_MS (vreg))->flags |= MONO_INST_VOLATILE; } #endif g_assert (vreg != -1); prev_bb = vreg_to_bb [vreg]; if (prev_bb == 0) { /* 0 is a valid block num */ vreg_to_bb [vreg] = block_num + 1; } else if ((prev_bb != block_num + 1) && (prev_bb != -1)) { if (((regtype == 'i' && (vreg < MONO_MAX_IREGS))) || (regtype == 'f' && (vreg < MONO_MAX_FREGS))) continue; if (!get_vreg_to_inst (cfg, vreg)) { if (G_UNLIKELY (cfg->verbose_level > 2)) printf ("VREG R%d used in BB%d and BB%d made global.\n", vreg, vreg_to_bb [vreg], block_num); switch (regtype) { case 'i': if (vreg_is_ref (cfg, vreg)) mono_compile_create_var_for_vreg (cfg, mono_get_object_type (), OP_LOCAL, vreg); else mono_compile_create_var_for_vreg (cfg, mono_get_int_type (), OP_LOCAL, vreg); break; case 'l': mono_compile_create_var_for_vreg (cfg, m_class_get_byval_arg (mono_defaults.int64_class), OP_LOCAL, vreg); break; case 'f': mono_compile_create_var_for_vreg (cfg, m_class_get_byval_arg (mono_defaults.double_class), OP_LOCAL, vreg); break; case 'v': case 'x': mono_compile_create_var_for_vreg (cfg, m_class_get_byval_arg (ins->klass), OP_LOCAL, vreg); break; default: g_assert_not_reached (); } } /* Flag as having been used in more than one bb */ vreg_to_bb [vreg] = -1; } } } } /* If a variable is used in only one bblock, convert it into a local vreg */ for (i = 0; i < cfg->num_varinfo; i++) { MonoInst *var = cfg->varinfo [i]; MonoMethodVar *vmv = MONO_VARINFO (cfg, i); switch (var->type) { case STACK_I4: case STACK_OBJ: case STACK_PTR: case STACK_MP: case STACK_VTYPE: #if SIZEOF_REGISTER == 8 case STACK_I8: #endif #if !defined(TARGET_X86) /* Enabling this screws up the fp stack on x86 */ case STACK_R8: #endif if (mono_arch_is_soft_float ()) break; /* if (var->type == STACK_VTYPE && cfg->gsharedvt && mini_is_gsharedvt_variable_type (var->inst_vtype)) break; */ /* Arguments are implicitly global */ /* Putting R4 vars into registers doesn't work currently */ /* The gsharedvt vars are implicitly referenced by ldaddr opcodes, but those opcodes are only generated later */ if ((var->opcode != OP_ARG) && (var != cfg->ret) && !(var->flags & (MONO_INST_VOLATILE|MONO_INST_INDIRECT)) && (vreg_to_bb [var->dreg] != -1) && (m_class_get_byval_arg (var->klass)->type != MONO_TYPE_R4) && !cfg->disable_vreg_to_lvreg && var != cfg->gsharedvt_info_var && var != cfg->gsharedvt_locals_var && var != cfg->lmf_addr_var) { /* * Make sure that the variable's liveness interval doesn't contain a call, since * that would cause the lvreg to be spilled, making the whole optimization * useless.
*/ /* This is too slow for JIT compilation */ #if 0 if (cfg->compile_aot && vreg_to_bb [var->dreg]) { MonoInst *ins; int def_index, call_index, ins_index; gboolean spilled = FALSE; def_index = -1; call_index = -1; ins_index = 0; for (ins = vreg_to_bb [var->dreg]->code; ins; ins = ins->next) { const char *spec = INS_INFO (ins->opcode); if ((spec [MONO_INST_DEST] != ' ') && (ins->dreg == var->dreg)) def_index = ins_index; if (((spec [MONO_INST_SRC1] != ' ') && (ins->sreg1 == var->dreg)) || ((spec [MONO_INST_SRC2] != ' ') && (ins->sreg2 == var->dreg))) { if (call_index > def_index) { spilled = TRUE; break; } } if (MONO_IS_CALL (ins)) call_index = ins_index; ins_index ++; } if (spilled) break; } #endif if (G_UNLIKELY (cfg->verbose_level > 2)) printf ("CONVERTED R%d(%d) TO VREG.\n", var->dreg, vmv->idx); var->flags |= MONO_INST_IS_DEAD; cfg->vreg_to_inst [var->dreg] = NULL; } break; } } /* * Compress the varinfo and vars tables so the liveness computation is faster and * takes up less space. */ pos = 0; for (i = 0; i < cfg->num_varinfo; ++i) { MonoInst *var = cfg->varinfo [i]; if (pos < i && cfg->locals_start == i) cfg->locals_start = pos; if (!(var->flags & MONO_INST_IS_DEAD)) { if (pos < i) { cfg->varinfo [pos] = cfg->varinfo [i]; cfg->varinfo [pos]->inst_c0 = pos; memcpy (&cfg->vars [pos], &cfg->vars [i], sizeof (MonoMethodVar)); cfg->vars [pos].idx = pos; #if SIZEOF_REGISTER == 4 if (cfg->varinfo [pos]->type == STACK_I8) { /* Modify the two component vars too */ MonoInst *var1; var1 = get_vreg_to_inst (cfg, MONO_LVREG_LS (cfg->varinfo [pos]->dreg)); var1->inst_c0 = pos; var1 = get_vreg_to_inst (cfg, MONO_LVREG_MS (cfg->varinfo [pos]->dreg)); var1->inst_c0 = pos; } #endif } pos ++; } } cfg->num_varinfo = pos; if (cfg->locals_start > cfg->num_varinfo) cfg->locals_start = cfg->num_varinfo; } /* * mono_allocate_gsharedvt_vars: * * Allocate variables with gsharedvt types to entries in the MonoGSharedVtMethodRuntimeInfo.entries array. * Initialize cfg->gsharedvt_vreg_to_idx with the mapping between vregs and indexes. */ void mono_allocate_gsharedvt_vars (MonoCompile *cfg) { int i; cfg->gsharedvt_vreg_to_idx = (int *)mono_mempool_alloc0 (cfg->mempool, sizeof (int) * cfg->next_vreg); for (i = 0; i < cfg->num_varinfo; ++i) { MonoInst *ins = cfg->varinfo [i]; int idx; if (mini_is_gsharedvt_variable_type (ins->inst_vtype)) { if (i >= cfg->locals_start) { /* Local */ idx = get_gsharedvt_info_slot (cfg, ins->inst_vtype, MONO_RGCTX_INFO_LOCAL_OFFSET); cfg->gsharedvt_vreg_to_idx [ins->dreg] = idx + 1; ins->opcode = OP_GSHAREDVT_LOCAL; ins->inst_imm = idx; } else { /* Arg */ cfg->gsharedvt_vreg_to_idx [ins->dreg] = -1; ins->opcode = OP_GSHAREDVT_ARG_REGOFFSET; } } } } /** * mono_spill_global_vars: * * Generate spill code for variables which are not allocated to registers, * and replace vregs with their allocated hregs. *need_local_opts is set to TRUE if * code is generated which could be optimized by the local optimization passes.
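 * It also lowers OP_LDADDR opcodes, which can only be decomposed once variable addresses are known.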
*/ void mono_spill_global_vars (MonoCompile *cfg, gboolean *need_local_opts) { MonoBasicBlock *bb; char spec2 [16]; int orig_next_vreg; guint32 *vreg_to_lvreg; guint32 *lvregs; guint32 i, lvregs_len, lvregs_size; gboolean dest_has_lvreg = FALSE; MonoStackType stacktypes [128]; MonoInst **live_range_start, **live_range_end; MonoBasicBlock **live_range_start_bb, **live_range_end_bb; *need_local_opts = FALSE; memset (spec2, 0, sizeof (spec2)); /* FIXME: Move this function to mini.c */ stacktypes [(int)'i'] = STACK_PTR; stacktypes [(int)'l'] = STACK_I8; stacktypes [(int)'f'] = STACK_R8; #ifdef MONO_ARCH_SIMD_INTRINSICS stacktypes [(int)'x'] = STACK_VTYPE; #endif #if SIZEOF_REGISTER == 4 /* Create MonoInsts for longs */ for (i = 0; i < cfg->num_varinfo; i++) { MonoInst *ins = cfg->varinfo [i]; if ((ins->opcode != OP_REGVAR) && !(ins->flags & MONO_INST_IS_DEAD)) { switch (ins->type) { case STACK_R8: case STACK_I8: { MonoInst *tree; if (ins->type == STACK_R8 && !COMPILE_SOFT_FLOAT (cfg)) break; g_assert (ins->opcode == OP_REGOFFSET); tree = get_vreg_to_inst (cfg, MONO_LVREG_LS (ins->dreg)); g_assert (tree); tree->opcode = OP_REGOFFSET; tree->inst_basereg = ins->inst_basereg; tree->inst_offset = ins->inst_offset + MINI_LS_WORD_OFFSET; tree = get_vreg_to_inst (cfg, MONO_LVREG_MS (ins->dreg)); g_assert (tree); tree->opcode = OP_REGOFFSET; tree->inst_basereg = ins->inst_basereg; tree->inst_offset = ins->inst_offset + MINI_MS_WORD_OFFSET; break; } default: break; } } } #endif if (cfg->compute_gc_maps) { /* registers need liveness info even for non-refs */ for (i = 0; i < cfg->num_varinfo; i++) { MonoInst *ins = cfg->varinfo [i]; if (ins->opcode == OP_REGVAR) ins->flags |= MONO_INST_GC_TRACK; } } /* FIXME: widening and truncation */ /* * As an optimization, when a variable allocated to the stack is first loaded into * an lvreg, we will remember the lvreg and use it the next time instead of loading * the variable again. */ orig_next_vreg = cfg->next_vreg; vreg_to_lvreg = (guint32 *)mono_mempool_alloc0 (cfg->mempool, sizeof (guint32) * cfg->next_vreg); lvregs_size = 1024; lvregs = (guint32 *)mono_mempool_alloc (cfg->mempool, sizeof (guint32) * lvregs_size); lvregs_len = 0; /* * These arrays contain the first and last instructions accessing a given * variable. * Since we emit bblocks in the same order we process them here, and we * don't split live ranges, these will precisely describe the live range of * the variable, i.e. the instruction range where a valid value can be found * in the variable's location. * The live range is computed using the liveness info computed by the liveness pass. * We can't use vmv->range, since that is an abstract live range, and we need * one which is instruction precise. * FIXME: Variables used in out-of-line bblocks have a hole in their live range.
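 * live_range_start [vreg]/live_range_end [vreg] (and their _bb counterparts) are allocated below and filled in as instructions are visited.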
*/ /* FIXME: Only do this if debugging info is requested */ live_range_start = g_new0 (MonoInst*, cfg->next_vreg); live_range_end = g_new0 (MonoInst*, cfg->next_vreg); live_range_start_bb = g_new (MonoBasicBlock*, cfg->next_vreg); live_range_end_bb = g_new (MonoBasicBlock*, cfg->next_vreg); /* Add spill loads/stores */ for (bb = cfg->bb_entry; bb; bb = bb->next_bb) { MonoInst *ins; if (cfg->verbose_level > 2) printf ("\nSPILL BLOCK %d:\n", bb->block_num); /* Clear vreg_to_lvreg array */ for (i = 0; i < lvregs_len; i++) vreg_to_lvreg [lvregs [i]] = 0; lvregs_len = 0; cfg->cbb = bb; MONO_BB_FOR_EACH_INS (bb, ins) { const char *spec = INS_INFO (ins->opcode); int regtype, srcindex, sreg, tmp_reg, prev_dreg, num_sregs; gboolean store, no_lvreg; int sregs [MONO_MAX_SRC_REGS]; if (G_UNLIKELY (cfg->verbose_level > 2)) mono_print_ins (ins); if (ins->opcode == OP_NOP) continue; /* * We handle LDADDR here as well, since it can only be decomposed * when variable addresses are known. */ if (ins->opcode == OP_LDADDR) { MonoInst *var = (MonoInst *)ins->inst_p0; if (var->opcode == OP_VTARG_ADDR) { /* Happens on SPARC/S390 where vtypes are passed by reference */ MonoInst *vtaddr = var->inst_left; if (vtaddr->opcode == OP_REGVAR) { ins->opcode = OP_MOVE; ins->sreg1 = vtaddr->dreg; } else if (var->inst_left->opcode == OP_REGOFFSET) { ins->opcode = OP_LOAD_MEMBASE; ins->inst_basereg = vtaddr->inst_basereg; ins->inst_offset = vtaddr->inst_offset; } else NOT_IMPLEMENTED; } else if (cfg->gsharedvt && cfg->gsharedvt_vreg_to_idx [var->dreg] < 0) { /* gsharedvt arg passed by ref */ g_assert (var->opcode == OP_GSHAREDVT_ARG_REGOFFSET); ins->opcode = OP_LOAD_MEMBASE; ins->inst_basereg = var->inst_basereg; ins->inst_offset = var->inst_offset; } else if (cfg->gsharedvt && cfg->gsharedvt_vreg_to_idx [var->dreg]) { MonoInst *load, *load2, *load3; int idx = cfg->gsharedvt_vreg_to_idx [var->dreg] - 1; int reg1, reg2, reg3; MonoInst *info_var = cfg->gsharedvt_info_var; MonoInst *locals_var = cfg->gsharedvt_locals_var; /* * gsharedvt local. * Compute the address of the local as gsharedvt_locals_var + gsharedvt_info_var->locals_offsets [idx]. 
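*
* Schematically (illustrative register names r1-r3):
*
*   r1 = info_var                 ; MonoGSharedVtMethodRuntimeInfo *
*   r2 = load.i4 [r1 + MONO_STRUCT_OFFSET (MonoGSharedVtMethodRuntimeInfo, entries) + idx * TARGET_SIZEOF_VOID_P]
*   r3 = locals_var               ; start of the locals area
*   addr = r3 + r2                ; the OP_PADD emitted below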
*/ g_assert (var->opcode == OP_GSHAREDVT_LOCAL); g_assert (info_var); g_assert (locals_var); /* Mark the instruction used to compute the locals var as used */ cfg->gsharedvt_locals_var_ins = NULL; /* Load the offset */ if (info_var->opcode == OP_REGOFFSET) { reg1 = alloc_ireg (cfg); NEW_LOAD_MEMBASE (cfg, load, OP_LOAD_MEMBASE, reg1, info_var->inst_basereg, info_var->inst_offset); } else if (info_var->opcode == OP_REGVAR) { load = NULL; reg1 = info_var->dreg; } else { g_assert_not_reached (); } reg2 = alloc_ireg (cfg); NEW_LOAD_MEMBASE (cfg, load2, OP_LOADI4_MEMBASE, reg2, reg1, MONO_STRUCT_OFFSET (MonoGSharedVtMethodRuntimeInfo, entries) + (idx * TARGET_SIZEOF_VOID_P)); /* Load the locals area address */ reg3 = alloc_ireg (cfg); if (locals_var->opcode == OP_REGOFFSET) { NEW_LOAD_MEMBASE (cfg, load3, OP_LOAD_MEMBASE, reg3, locals_var->inst_basereg, locals_var->inst_offset); } else if (locals_var->opcode == OP_REGVAR) { NEW_UNALU (cfg, load3, OP_MOVE, reg3, locals_var->dreg); } else { g_assert_not_reached (); } /* Compute the address */ ins->opcode = OP_PADD; ins->sreg1 = reg3; ins->sreg2 = reg2; mono_bblock_insert_before_ins (bb, ins, load3); mono_bblock_insert_before_ins (bb, load3, load2); if (load) mono_bblock_insert_before_ins (bb, load2, load); } else { g_assert (var->opcode == OP_REGOFFSET); ins->opcode = OP_ADD_IMM; ins->sreg1 = var->inst_basereg; ins->inst_imm = var->inst_offset; } *need_local_opts = TRUE; spec = INS_INFO (ins->opcode); } if (ins->opcode < MONO_CEE_LAST) { mono_print_ins (ins); g_assert_not_reached (); } /* * Store opcodes have destbasereg in the dreg, but in reality, it is an * src register. * FIXME: */ if (MONO_IS_STORE_MEMBASE (ins)) { tmp_reg = ins->dreg; ins->dreg = ins->sreg2; ins->sreg2 = tmp_reg; store = TRUE; spec2 [MONO_INST_DEST] = ' '; spec2 [MONO_INST_SRC1] = spec [MONO_INST_SRC1]; spec2 [MONO_INST_SRC2] = spec [MONO_INST_DEST]; spec2 [MONO_INST_SRC3] = ' '; spec = spec2; } else if (MONO_IS_STORE_MEMINDEX (ins)) g_assert_not_reached (); else store = FALSE; no_lvreg = FALSE; if (G_UNLIKELY (cfg->verbose_level > 2)) { printf ("\t %.3s %d", spec, ins->dreg); num_sregs = mono_inst_get_src_registers (ins, sregs); for (srcindex = 0; srcindex < num_sregs; ++srcindex) printf (" %d", sregs [srcindex]); printf ("\n"); } /***************/ /* DREG */ /***************/ regtype = spec [MONO_INST_DEST]; g_assert (((ins->dreg == -1) && (regtype == ' ')) || ((ins->dreg != -1) && (regtype != ' '))); prev_dreg = -1; int dreg_using_dest_to_membase_op = -1; if ((ins->dreg != -1) && get_vreg_to_inst (cfg, ins->dreg)) { MonoInst *var = get_vreg_to_inst (cfg, ins->dreg); MonoInst *store_ins; int store_opcode; MonoInst *def_ins = ins; int dreg = ins->dreg; /* The original vreg */ store_opcode = mono_type_to_store_membase (cfg, var->inst_vtype); if (var->opcode == OP_REGVAR) { ins->dreg = var->dreg; } else if ((ins->dreg == ins->sreg1) && (spec [MONO_INST_DEST] == 'i') && (spec [MONO_INST_SRC1] == 'i') && !vreg_to_lvreg [ins->dreg] && (op_to_op_dest_membase (store_opcode, ins->opcode) != -1)) { /* * Instead of emitting a load+store, use a _membase opcode. 
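*
* E.g. (illustrative) for a variable V spilled to [fp + 0x10], the sequence
*
*   load R10 <- [fp + 0x10] ; int_add R10 <- R10 R11 ; store [fp + 0x10] <- R10
*
* collapses into a single
*
*   int_add_membase [fp + 0x10] <- R11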
*/ g_assert (var->opcode == OP_REGOFFSET); if (ins->opcode == OP_MOVE) { NULLIFY_INS (ins); def_ins = NULL; } else { dreg_using_dest_to_membase_op = ins->dreg; ins->opcode = op_to_op_dest_membase (store_opcode, ins->opcode); ins->inst_basereg = var->inst_basereg; ins->inst_offset = var->inst_offset; ins->dreg = -1; } spec = INS_INFO (ins->opcode); } else { guint32 lvreg; g_assert (var->opcode == OP_REGOFFSET); prev_dreg = ins->dreg; /* Invalidate any previous lvreg for this vreg */ vreg_to_lvreg [ins->dreg] = 0; lvreg = 0; if (COMPILE_SOFT_FLOAT (cfg) && store_opcode == OP_STORER8_MEMBASE_REG) { regtype = 'l'; store_opcode = OP_STOREI8_MEMBASE_REG; } ins->dreg = alloc_dreg (cfg, stacktypes [regtype]); #if SIZEOF_REGISTER != 8 if (regtype == 'l') { NEW_STORE_MEMBASE (cfg, store_ins, OP_STOREI4_MEMBASE_REG, var->inst_basereg, var->inst_offset + MINI_LS_WORD_OFFSET, MONO_LVREG_LS (ins->dreg)); mono_bblock_insert_after_ins (bb, ins, store_ins); NEW_STORE_MEMBASE (cfg, store_ins, OP_STOREI4_MEMBASE_REG, var->inst_basereg, var->inst_offset + MINI_MS_WORD_OFFSET, MONO_LVREG_MS (ins->dreg)); mono_bblock_insert_after_ins (bb, ins, store_ins); def_ins = store_ins; } else #endif { g_assert (store_opcode != OP_STOREV_MEMBASE); /* Try to fuse the store into the instruction itself */ /* FIXME: Add more instructions */ if (!lvreg && ((ins->opcode == OP_ICONST) || ((ins->opcode == OP_I8CONST) && (ins->inst_c0 == 0)))) { ins->opcode = store_membase_reg_to_store_membase_imm (store_opcode); ins->inst_imm = ins->inst_c0; ins->inst_destbasereg = var->inst_basereg; ins->inst_offset = var->inst_offset; spec = INS_INFO (ins->opcode); } else if (!lvreg && ((ins->opcode == OP_MOVE) || (ins->opcode == OP_FMOVE) || (ins->opcode == OP_LMOVE) || (ins->opcode == OP_RMOVE))) { ins->opcode = store_opcode; ins->inst_destbasereg = var->inst_basereg; ins->inst_offset = var->inst_offset; no_lvreg = TRUE; tmp_reg = ins->dreg; ins->dreg = ins->sreg2; ins->sreg2 = tmp_reg; store = TRUE; spec2 [MONO_INST_DEST] = ' '; spec2 [MONO_INST_SRC1] = spec [MONO_INST_SRC1]; spec2 [MONO_INST_SRC2] = spec [MONO_INST_DEST]; spec2 [MONO_INST_SRC3] = ' '; spec = spec2; } else if (!lvreg && (op_to_op_store_membase (store_opcode, ins->opcode) != -1)) { // FIXME: The backends expect the base reg to be in inst_basereg ins->opcode = op_to_op_store_membase (store_opcode, ins->opcode); ins->dreg = -1; ins->inst_basereg = var->inst_basereg; ins->inst_offset = var->inst_offset; spec = INS_INFO (ins->opcode); } else { /* printf ("INS: "); mono_print_ins (ins); */ /* Create a store instruction */ NEW_STORE_MEMBASE (cfg, store_ins, store_opcode, var->inst_basereg, var->inst_offset, ins->dreg); /* Insert it after the instruction */ mono_bblock_insert_after_ins (bb, ins, store_ins); def_ins = store_ins; /* * We can't assign ins->dreg to var->dreg here, since the * sregs could use it. So set a flag, and do it after * the sregs. 
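*
* (that flag is dest_has_lvreg, set just below; the vreg_to_lvreg cache is
* updated from prev_dreg once the SREGS section has finished)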
*/ if ((!cfg->backend->use_fpstack || ((store_opcode != OP_STORER8_MEMBASE_REG) && (store_opcode != OP_STORER4_MEMBASE_REG))) && !((var)->flags & (MONO_INST_VOLATILE|MONO_INST_INDIRECT))) dest_has_lvreg = TRUE; } } } if (def_ins && !live_range_start [dreg]) { live_range_start [dreg] = def_ins; live_range_start_bb [dreg] = bb; } if (cfg->compute_gc_maps && def_ins && (var->flags & MONO_INST_GC_TRACK)) { MonoInst *tmp; MONO_INST_NEW (cfg, tmp, OP_GC_LIVENESS_DEF); tmp->inst_c1 = dreg; mono_bblock_insert_after_ins (bb, def_ins, tmp); } } /************/ /* SREGS */ /************/ num_sregs = mono_inst_get_src_registers (ins, sregs); for (srcindex = 0; srcindex < 3; ++srcindex) { regtype = spec [MONO_INST_SRC1 + srcindex]; sreg = sregs [srcindex]; g_assert (((sreg == -1) && (regtype == ' ')) || ((sreg != -1) && (regtype != ' '))); if ((sreg != -1) && get_vreg_to_inst (cfg, sreg)) { MonoInst *var = get_vreg_to_inst (cfg, sreg); MonoInst *use_ins = ins; MonoInst *load_ins; guint32 load_opcode; if (var->opcode == OP_REGVAR) { sregs [srcindex] = var->dreg; //mono_inst_set_src_registers (ins, sregs); live_range_end [sreg] = use_ins; live_range_end_bb [sreg] = bb; if (cfg->compute_gc_maps && var->dreg < orig_next_vreg && (var->flags & MONO_INST_GC_TRACK)) { MonoInst *tmp; MONO_INST_NEW (cfg, tmp, OP_GC_LIVENESS_USE); /* var->dreg is a hreg */ tmp->inst_c1 = sreg; mono_bblock_insert_after_ins (bb, ins, tmp); } continue; } g_assert (var->opcode == OP_REGOFFSET); load_opcode = mono_type_to_load_membase (cfg, var->inst_vtype); g_assert (load_opcode != OP_LOADV_MEMBASE); if (vreg_to_lvreg [sreg]) { g_assert (vreg_to_lvreg [sreg] != -1); /* The variable is already loaded to an lvreg */ if (G_UNLIKELY (cfg->verbose_level > 2)) printf ("\t\tUse lvreg R%d for R%d.\n", vreg_to_lvreg [sreg], sreg); sregs [srcindex] = vreg_to_lvreg [sreg]; //mono_inst_set_src_registers (ins, sregs); continue; } /* Try to fuse the load into the instruction */ if ((srcindex == 0) && (op_to_op_src1_membase (cfg, load_opcode, ins->opcode) != -1)) { ins->opcode = op_to_op_src1_membase (cfg, load_opcode, ins->opcode); sregs [0] = var->inst_basereg; //mono_inst_set_src_registers (ins, sregs); ins->inst_offset = var->inst_offset; } else if ((srcindex == 1) && (op_to_op_src2_membase (cfg, load_opcode, ins->opcode) != -1)) { ins->opcode = op_to_op_src2_membase (cfg, load_opcode, ins->opcode); sregs [1] = var->inst_basereg; //mono_inst_set_src_registers (ins, sregs); ins->inst_offset = var->inst_offset; } else { if (MONO_IS_REAL_MOVE (ins)) { ins->opcode = OP_NOP; sreg = ins->dreg; } else { //printf ("%d ", srcindex); mono_print_ins (ins); sreg = alloc_dreg (cfg, stacktypes [regtype]); if ((!cfg->backend->use_fpstack || ((load_opcode != OP_LOADR8_MEMBASE) && (load_opcode != OP_LOADR4_MEMBASE))) && !((var)->flags & (MONO_INST_VOLATILE|MONO_INST_INDIRECT)) && !no_lvreg) { if (var->dreg == prev_dreg) { /* * sreg refers to the value loaded by the load * emitted below, but we need to use ins->dreg * since it refers to the store emitted earlier. 
*/ sreg = ins->dreg; } g_assert (sreg != -1); if (var->dreg == dreg_using_dest_to_membase_op) { if (cfg->verbose_level > 2) printf ("\tCan't cache R%d because it's part of a dreg dest_membase optimization\n", var->dreg); } else { vreg_to_lvreg [var->dreg] = sreg; } if (lvregs_len >= lvregs_size) { guint32 *new_lvregs = mono_mempool_alloc0 (cfg->mempool, sizeof (guint32) * lvregs_size * 2); memcpy (new_lvregs, lvregs, sizeof (guint32) * lvregs_size); lvregs = new_lvregs; lvregs_size *= 2; } lvregs [lvregs_len ++] = var->dreg; } } sregs [srcindex] = sreg; //mono_inst_set_src_registers (ins, sregs); #if SIZEOF_REGISTER != 8 if (regtype == 'l') { NEW_LOAD_MEMBASE (cfg, load_ins, OP_LOADI4_MEMBASE, MONO_LVREG_MS (sreg), var->inst_basereg, var->inst_offset + MINI_MS_WORD_OFFSET); mono_bblock_insert_before_ins (bb, ins, load_ins); NEW_LOAD_MEMBASE (cfg, load_ins, OP_LOADI4_MEMBASE, MONO_LVREG_LS (sreg), var->inst_basereg, var->inst_offset + MINI_LS_WORD_OFFSET); mono_bblock_insert_before_ins (bb, ins, load_ins); use_ins = load_ins; } else #endif { #if SIZEOF_REGISTER == 4 g_assert (load_opcode != OP_LOADI8_MEMBASE); #endif NEW_LOAD_MEMBASE (cfg, load_ins, load_opcode, sreg, var->inst_basereg, var->inst_offset); mono_bblock_insert_before_ins (bb, ins, load_ins); use_ins = load_ins; } if (cfg->verbose_level > 2) mono_print_ins_index (0, use_ins); } if (var->dreg < orig_next_vreg) { live_range_end [var->dreg] = use_ins; live_range_end_bb [var->dreg] = bb; } if (cfg->compute_gc_maps && var->dreg < orig_next_vreg && (var->flags & MONO_INST_GC_TRACK)) { MonoInst *tmp; MONO_INST_NEW (cfg, tmp, OP_GC_LIVENESS_USE); tmp->inst_c1 = var->dreg; mono_bblock_insert_after_ins (bb, ins, tmp); } } } mono_inst_set_src_registers (ins, sregs); if (dest_has_lvreg) { g_assert (ins->dreg != -1); vreg_to_lvreg [prev_dreg] = ins->dreg; if (lvregs_len >= lvregs_size) { guint32 *new_lvregs = mono_mempool_alloc0 (cfg->mempool, sizeof (guint32) * lvregs_size * 2); memcpy (new_lvregs, lvregs, sizeof (guint32) * lvregs_size); lvregs = new_lvregs; lvregs_size *= 2; } lvregs [lvregs_len ++] = prev_dreg; dest_has_lvreg = FALSE; } if (store) { tmp_reg = ins->dreg; ins->dreg = ins->sreg2; ins->sreg2 = tmp_reg; } if (MONO_IS_CALL (ins)) { /* Clear vreg_to_lvreg array */ for (i = 0; i < lvregs_len; i++) vreg_to_lvreg [lvregs [i]] = 0; lvregs_len = 0; } else if (ins->opcode == OP_NOP) { ins->dreg = -1; MONO_INST_NULLIFY_SREGS (ins); } if (cfg->verbose_level > 2) mono_print_ins_index (1, ins); } /* Extend the live range based on the liveness info */ if (cfg->compute_precise_live_ranges && bb->live_out_set && bb->code) { for (i = 0; i < cfg->num_varinfo; i ++) { MonoMethodVar *vi = MONO_VARINFO (cfg, i); if (vreg_is_volatile (cfg, vi->vreg)) /* The liveness info is incomplete */ continue; if (mono_bitset_test_fast (bb->live_in_set, i) && !live_range_start [vi->vreg]) { /* Live from at least the first ins of this bb */ live_range_start [vi->vreg] = bb->code; live_range_start_bb [vi->vreg] = bb; } if (mono_bitset_test_fast (bb->live_out_set, i)) { /* Live at least until the last ins of this bb */ live_range_end [vi->vreg] = bb->last_ins; live_range_end_bb [vi->vreg] = bb; } } } } /* * Emit LIVERANGE_START/LIVERANGE_END opcodes, the backend will implement them * by storing the current native offset into MonoMethodVar->live_range_start/end. 
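*
* (schematically, OP_LIVERANGE_START is inserted right after the first def
* recorded in live_range_start [] and OP_LIVERANGE_END after the last use in
* live_range_end [], giving debuggers the exact native range where the
* variable's location is valid)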
*/ if (cfg->compute_precise_live_ranges && cfg->comp_done & MONO_COMP_LIVENESS) { for (i = 0; i < cfg->num_varinfo; ++i) { int vreg = MONO_VARINFO (cfg, i)->vreg; MonoInst *ins; if (live_range_start [vreg]) { MONO_INST_NEW (cfg, ins, OP_LIVERANGE_START); ins->inst_c0 = i; ins->inst_c1 = vreg; mono_bblock_insert_after_ins (live_range_start_bb [vreg], live_range_start [vreg], ins); } if (live_range_end [vreg]) { MONO_INST_NEW (cfg, ins, OP_LIVERANGE_END); ins->inst_c0 = i; ins->inst_c1 = vreg; if (live_range_end [vreg] == live_range_end_bb [vreg]->last_ins) mono_add_ins_to_end (live_range_end_bb [vreg], ins); else mono_bblock_insert_after_ins (live_range_end_bb [vreg], live_range_end [vreg], ins); } } } if (cfg->gsharedvt_locals_var_ins) { /* Nullify if unused */ cfg->gsharedvt_locals_var_ins->opcode = OP_PCONST; cfg->gsharedvt_locals_var_ins->inst_imm = 0; } g_free (live_range_start); g_free (live_range_end); g_free (live_range_start_bb); g_free (live_range_end_bb); } /** * FIXME: * - use 'iadd' instead of 'int_add' * - handling ovf opcodes: decompose in method_to_ir. * - unify iregs/fregs * -> partly done, the missing parts are: * - a more complete unification would involve unifying the hregs as well, so * code wouldn't need if (fp) all over the place. but that would mean the hregs * would no longer map to the machine hregs, so the code generators would need to * be modified. Also, on ia64 for example, niregs + nfregs > 256 -> bitmasks * wouldn't work any more. Duplicating the code in mono_local_regalloc () into * fp/non-fp branches speeds it up by about 15%. * - use sext/zext opcodes instead of shifts * - add OP_ICALL * - get rid of TEMPLOADs if possible and use vregs instead * - clean up usage of OP_P/OP_ opcodes * - cleanup usage of DUMMY_USE * - cleanup the setting of ins->type for MonoInst's which are pushed on the * stack * - set the stack type and allocate a dreg in the EMIT_NEW macros * - get rid of all the <foo>2 stuff when the new JIT is ready. * - make sure handle_stack_args () is called before the branch is emitted * - when the new IR is done, get rid of all unused stuff * - COMPARE/BEQ as separate instructions or unify them ? * - keeping them separate allows specialized compare instructions like * compare_imm, compare_membase * - most back ends unify fp compare+branch, fp compare+ceq * - integrate mono_save_args into inline_method * - get rid of the empty bblocks created by MONO_EMIT_NEW_BRACH_BLOCK2 * - handle long shift opts on 32 bit platforms somehow: they require * 3 sregs (2 for arg1 and 1 for arg2) * - make byref a 'normal' type. * - use vregs for bb->out_stacks if possible, handle_global_vreg will make them a * variable if needed. * - do not start a new IL level bblock when cfg->cbb is changed by a function call * like inline_method. * - remove inlining restrictions * - fix LNEG and enable cfold of INEG * - generalize x86 optimizations like ldelema as a peephole optimization * - add store_mem_imm for amd64 * - optimize the loading of the interruption flag in the managed->native wrappers * - avoid special handling of OP_NOP in passes * - move code inserting instructions into one function/macro. * - try a coalescing phase after liveness analysis * - add float -> vreg conversion + local optimizations on !x86 * - figure out how to handle decomposed branches during optimizations, ie. * compare+branch, op_jump_table+op_br etc. 
* - promote RuntimeXHandles to vregs * - vtype cleanups: * - add a NEW_VARLOADA_VREG macro * - the vtype optimizations are blocked by the LDADDR opcodes generated for * accessing vtype fields. * - get rid of I8CONST on 64 bit platforms * - dealing with the increase in code size due to branches created during opcode * decomposition: * - use extended basic blocks * - all parts of the JIT * - handle_global_vregs () && local regalloc * - avoid introducing global vregs during decomposition, like 'vtable' in isinst * - sources of increase in code size: * - vtypes * - long compares * - isinst and castclass * - lvregs not allocated to global registers even if used multiple times * - call cctors outside the JIT, to make -v output more readable and JIT timings more * meaningful. * - check for fp stack leakage in other opcodes too. (-> 'exceptions' optimization) * - add all micro optimizations from the old JIT * - put tree optimizations into the deadce pass * - decompose op_start_handler/op_endfilter/op_endfinally earlier using an arch * specific function. * - unify the float comparison opcodes with the other comparison opcodes, i.e. * fcompare + branchCC. * - create a helper function for allocating a stack slot, taking into account * MONO_CFG_HAS_SPILLUP. * - merge r68207. * - optimize mono_regstate2_alloc_int/float. * - fix the pessimistic handling of variables accessed in exception handler blocks. * - need to write a tree optimization pass, but the creation of trees is difficult, i.e. * parts of the tree could be separated by other instructions, killing the tree * arguments, or stores killing loads etc. Also, should we fold loads into other * instructions if the result of the load is used multiple times ? * - make the REM_IMM optimization in mini-x86.c arch-independent. * - LAST MERGE: 108395. * - when returning vtypes in registers, generate IR and append it to the end of the * last bb instead of doing it in the epilog. * - change the store opcodes so they use sreg1 instead of dreg to store the base register. */ /* NOTES ----- - When to decompose opcodes: - earlier: this makes some optimizations hard to implement, since the low level IR no longer contains the necessary information. But it is easier to do. - later: harder to implement, enables more optimizations. - Branches inside bblocks: - created when decomposing complex opcodes. - branches to another bblock: harmless, but not tracked by the branch optimizations, so need to branch to a label at the start of the bblock. - branches to inside the same bblock: very problematic, trips up the local reg allocator. Can be fixed by splitting the current bblock, but that is a complex operation, since some local vregs can become global vregs etc. - Local/global vregs: - local vregs: temporary vregs used inside one bblock. Assigned to hregs by the local register allocator. - global vregs: used in more than one bblock. Have an associated MonoMethodVar structure, created by mono_create_var (). Assigned to hregs or the stack by the global register allocator. - When to do optimizations like alu->alu_imm: - earlier -> saves work later on since the IR will be smaller/simpler - later -> can work on more instructions - Handling of valuetypes: - When a vtype is pushed on the stack, a new temporary is created, an instruction computing its address (LDADDR) is emitted and pushed on the stack. Need to optimize cases when the vtype is used immediately as in argument passing, stloc etc.
- Instead of the to_end stuff in the old JIT, simply call the function handling the values on the stack before emitting the last instruction of the bb. */ #else /* !DISABLE_JIT */ MONO_EMPTY_SOURCE_FILE (method_to_ir); #endif /* !DISABLE_JIT */
1
dotnet/runtime
65,889
[Mono] Add SIMD intrinsics for Vector64/128 "All" and "Any" variants of GT/GE/LT/LE
Related to #64072 - `GreaterThanAll` - `GreaterThanAny` - `GreaterThanOrEqualAll` - `GreaterThanOrEqualAny` - `LessThanAll` - `LessThanAny` - `LessThanOrEqualAll` - `LessThanOrEqualAny`
simonrozsival
"2022-02-25T12:07:00Z"
"2022-03-09T14:24:14Z"
e32f3b61cd41e6a97ebe8f512ff673b63ff40640
cbcc616cf386b88e49f97f74182ffff241528179
[Mono] Add SIMD intrinsics for Vector64/128 "All" and "Any" variants of GT/GE/LT/LE. Related to #64072 - `GreaterThanAll` - `GreaterThanAny` - `GreaterThanOrEqualAll` - `GreaterThanOrEqualAny` - `LessThanAll` - `LessThanAny` - `LessThanOrEqualAll` - `LessThanOrEqualAny`
./src/mono/mono/mini/mini-llvm.c
/** * \file * llvm "Backend" for the mono JIT * * Copyright 2009-2011 Novell Inc (http://www.novell.com) * Copyright 2011 Xamarin Inc (http://www.xamarin.com) * Licensed under the MIT license. See LICENSE file in the project root for full license information. */ #include "config.h" #include <mono/metadata/debug-helpers.h> #include <mono/metadata/debug-internals.h> #include <mono/metadata/mempool-internals.h> #include <mono/metadata/environment.h> #include <mono/metadata/object-internals.h> #include <mono/metadata/abi-details.h> #include <mono/metadata/tokentype.h> #include <mono/utils/mono-tls.h> #include <mono/utils/mono-dl.h> #include <mono/utils/mono-time.h> #include <mono/utils/freebsd-dwarf.h> #ifndef __STDC_LIMIT_MACROS #define __STDC_LIMIT_MACROS #endif #ifndef __STDC_CONSTANT_MACROS #define __STDC_CONSTANT_MACROS #endif #include "llvm-c/BitWriter.h" #include "llvm-c/Analysis.h" #include "mini-llvm-cpp.h" #include "llvm-jit.h" #include "aot-compiler.h" #include "mini-llvm.h" #include "mini-runtime.h" #include <mono/utils/mono-math.h> #ifndef DISABLE_JIT #if defined(TARGET_AMD64) && defined(TARGET_WIN32) && defined(HOST_WIN32) && defined(_MSC_VER) #define TARGET_X86_64_WIN32_MSVC #endif #if defined(TARGET_X86_64_WIN32_MSVC) #define TARGET_WIN32_MSVC #endif #if LLVM_API_VERSION < 900 #error "The version of the mono llvm repository is too old." #endif /* * Information associated by mono with LLVM modules. */ typedef struct { LLVMModuleRef lmodule; LLVMValueRef throw_icall, rethrow, throw_corlib_exception; GHashTable *llvm_types; LLVMValueRef dummy_got_var; const char *get_method_symbol; const char *get_unbox_tramp_symbol; const char *init_aotconst_symbol; GHashTable *plt_entries; GHashTable *plt_entries_ji; GHashTable *method_to_lmethod; GHashTable *method_to_call_info; GHashTable *lvalue_to_lcalls; GHashTable *direct_callables; /* Maps got slot index -> LLVMValueRef */ GHashTable *aotconst_vars; char **bb_names; int bb_names_len; GPtrArray *used; LLVMTypeRef ptr_type; GPtrArray *subprogram_mds; MonoEERef *mono_ee; LLVMExecutionEngineRef ee; gboolean external_symbols; gboolean emit_dwarf; int max_got_offset; LLVMValueRef personality; gpointer gc_poll_cold_wrapper_compiled; /* For AOT */ MonoAssembly *assembly; char *global_prefix; MonoAotFileInfo aot_info; const char *eh_frame_symbol; LLVMValueRef get_method, get_unbox_tramp, init_aotconst_func; LLVMValueRef init_methods [AOT_INIT_METHOD_NUM]; LLVMValueRef code_start, code_end; LLVMValueRef inited_var; LLVMValueRef unbox_tramp_indexes; LLVMValueRef unbox_trampolines; LLVMValueRef gc_poll_cold_wrapper; LLVMValueRef info_var; LLVMTypeRef *info_var_eltypes; int max_inited_idx, max_method_idx; gboolean has_jitted_code; gboolean static_link; gboolean llvm_only; gboolean interp; GHashTable *idx_to_lmethod; GHashTable *idx_to_unbox_tramp; GPtrArray *callsite_list; LLVMContextRef context; LLVMValueRef sentinel_exception; LLVMValueRef gc_safe_point_flag_var; LLVMValueRef interrupt_flag_var; void *di_builder, *cu; GHashTable *objc_selector_to_var; GPtrArray *cfgs; int unbox_tramp_num, unbox_tramp_elemsize; GHashTable *got_idx_to_type; GHashTable *no_method_table_lmethods; } MonoLLVMModule; /* * Information associated by the backend with mono basic blocks. */ typedef struct { LLVMBasicBlockRef bblock, end_bblock; LLVMValueRef finally_ind; gboolean added, invoke_target; /* * If this bblock is the start of a finally clause, this is a list of bblocks it * needs to branch to in ENDFINALLY. 
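*
* (a single finally body is shared by every CALL_HANDLER site, so ENDFINALLY
* is lowered to a switch over these return bblocks; the switch instructions
* are collected in endfinally_switch_ins_list below)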
*/ GSList *call_handler_return_bbs; /* * If this bblock is the start of a finally clause, this is the bblock that * CALL_HANDLER needs to branch to. */ LLVMBasicBlockRef call_handler_target_bb; /* The list of switch statements generated by ENDFINALLY instructions */ GSList *endfinally_switch_ins_list; GSList *phi_nodes; } BBInfo; /* * Structure containing emit state */ typedef struct { MonoMemPool *mempool; /* Maps method names to the corresponding LLVMValueRef */ GHashTable *emitted_method_decls; MonoCompile *cfg; LLVMValueRef lmethod; MonoLLVMModule *module; LLVMModuleRef lmodule; BBInfo *bblocks; int sindex, default_index, ex_index; LLVMBuilderRef builder; LLVMValueRef *values, *addresses; MonoType **vreg_cli_types; LLVMCallInfo *linfo; MonoMethodSignature *sig; GSList *builders; GHashTable *region_to_handler; GHashTable *clause_to_handler; LLVMBuilderRef alloca_builder; LLVMValueRef last_alloca; LLVMValueRef rgctx_arg; LLVMValueRef this_arg; LLVMTypeRef *vreg_types; gboolean *is_vphi; LLVMTypeRef method_type; LLVMBasicBlockRef init_bb, inited_bb; gboolean *is_dead; gboolean *unreachable; gboolean llvm_only; gboolean has_got_access; gboolean is_linkonce; gboolean emit_dummy_arg; gboolean has_safepoints; gboolean has_catch; int this_arg_pindex, rgctx_arg_pindex; LLVMValueRef imt_rgctx_loc; GHashTable *llvm_types; LLVMValueRef dbg_md; MonoDebugMethodInfo *minfo; /* For every clause, the clauses it is nested in */ GSList **nested_in; LLVMValueRef ex_var; GHashTable *exc_meta; GPtrArray *callsite_list; GPtrArray *phi_values; GPtrArray *bblock_list; char *method_name; GHashTable *jit_callees; LLVMValueRef long_bb_break_var; int *gc_var_indexes; LLVMValueRef gc_pin_area; LLVMValueRef il_state; LLVMValueRef il_state_ret; } EmitContext; typedef struct { MonoBasicBlock *bb; MonoInst *phi; MonoBasicBlock *in_bb; int sreg; } PhiNode; /* * Instruction metadata * This is the same as ins_info, but LREG != IREG. 
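*
* To make the encoding concrete: every opcode owns four consecutive spec
* characters (dest, src1, src2, src3), so an opcode's spec starts at
* (opcode - first_opcode) * 4, which is exactly what LLVM_INS_INFO below
* computes. A standalone sketch with invented names:
*/

#if 0	/* illustrative sketch only, not compiled */
static const char sketch_ins_info [] = {
	'i', 'i', 'i', ' ',	/* e.g. an integer add: int dest, two int srcs */
	'f', 'f', 'f', ' ',	/* e.g. a float add */
};

static const char *
sketch_spec (int opcode, int first_opcode)
{
	return &sketch_ins_info [(opcode - first_opcode) * 4];
}
#endif

/*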
*/ #ifdef MINI_OP #undef MINI_OP #endif #ifdef MINI_OP3 #undef MINI_OP3 #endif #define MINI_OP(a,b,dest,src1,src2) dest, src1, src2, ' ', #define MINI_OP3(a,b,dest,src1,src2,src3) dest, src1, src2, src3, #define NONE ' ' #define IREG 'i' #define FREG 'f' #define VREG 'v' #define XREG 'x' #define LREG 'l' /* keep in sync with the enum in mini.h */ const char mini_llvm_ins_info[] = { #include "mini-ops.h" }; #undef MINI_OP #undef MINI_OP3 #if TARGET_SIZEOF_VOID_P == 4 #define GET_LONG_IMM(ins) ((ins)->inst_l) #else #define GET_LONG_IMM(ins) ((ins)->inst_imm) #endif #define LLVM_INS_INFO(opcode) (&mini_llvm_ins_info [((opcode) - OP_START - 1) * 4]) #if 0 #define TRACE_FAILURE(msg) do { printf ("%s\n", msg); } while (0) #else #define TRACE_FAILURE(msg) #endif #ifdef TARGET_X86 #define IS_TARGET_X86 1 #else #define IS_TARGET_X86 0 #endif #ifdef TARGET_AMD64 #define IS_TARGET_AMD64 1 #else #define IS_TARGET_AMD64 0 #endif #define ctx_ok(ctx) (!(ctx)->cfg->disable_llvm) enum { MAX_VECTOR_ELEMS = 32, // 2 vectors * 128 bits per vector / 8 bits per element ARM64_MAX_VECTOR_ELEMS = 16, }; const int mask_0_incr_1 [] = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, }; static LLVMIntPredicate cond_to_llvm_cond [] = { LLVMIntEQ, LLVMIntNE, LLVMIntSLE, LLVMIntSGE, LLVMIntSLT, LLVMIntSGT, LLVMIntULE, LLVMIntUGE, LLVMIntULT, LLVMIntUGT, }; static LLVMRealPredicate fpcond_to_llvm_cond [] = { LLVMRealOEQ, LLVMRealUNE, LLVMRealOLE, LLVMRealOGE, LLVMRealOLT, LLVMRealOGT, LLVMRealULE, LLVMRealUGE, LLVMRealULT, LLVMRealUGT, LLVMRealORD, LLVMRealUNO }; /* See Table 3-1 ("Comparison Predicate for CMPPD and CMPPS Instructions") in * Vol. 2A of the Intel SDM. */ enum { SSE_eq_ord_nosignal = 0, SSE_lt_ord_signal = 1, SSE_le_ord_signal = 2, SSE_unord_nosignal = 3, SSE_neq_unord_nosignal = 4, SSE_nlt_unord_signal = 5, SSE_nle_unord_signal = 6, SSE_ord_nosignal = 7, }; static MonoLLVMModule aot_module; static GHashTable *intrins_id_to_intrins; static LLVMTypeRef i1_t, i2_t, i4_t, i8_t, r4_t, r8_t; static LLVMTypeRef sse_i1_t, sse_i2_t, sse_i4_t, sse_i8_t, sse_r4_t, sse_r8_t; static LLVMTypeRef v64_i1_t, v64_i2_t, v64_i4_t, v64_i8_t, v64_r4_t, v64_r8_t; static LLVMTypeRef v128_i1_t, v128_i2_t, v128_i4_t, v128_i8_t, v128_r4_t, v128_r8_t; static LLVMTypeRef void_func_t; static MonoLLVMModule *init_jit_module (void); static void emit_dbg_loc (EmitContext *ctx, LLVMBuilderRef builder, const unsigned char *cil_code); static void emit_default_dbg_loc (EmitContext *ctx, LLVMBuilderRef builder); static LLVMValueRef emit_dbg_subprogram (EmitContext *ctx, MonoCompile *cfg, LLVMValueRef method, const char *name); static void emit_dbg_info (MonoLLVMModule *module, const char *filename, const char *cu_name); static void emit_cond_system_exception (EmitContext *ctx, MonoBasicBlock *bb, const char *exc_type, LLVMValueRef cmp, gboolean force_explicit); static LLVMValueRef get_intrins (EmitContext *ctx, int id); static LLVMValueRef get_intrins_from_module (LLVMModuleRef lmodule, int id); static void llvm_jit_finalize_method (EmitContext *ctx); static void mono_llvm_nonnull_state_update (EmitContext *ctx, LLVMValueRef lcall, MonoMethod *call_method, LLVMValueRef *args, int num_params); static void mono_llvm_propagate_nonnull_final (GHashTable *all_specializable, MonoLLVMModule *module); static void create_aot_info_var (MonoLLVMModule *module); static void set_invariant_load_flag (LLVMValueRef v); static void set_nonnull_load_flag (LLVMValueRef v); enum { 
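	/* llvm_ovr_tag_t bit layout (see the ovr_tag_* helpers below): the low
	   3 bits (INTRIN_vectormask) one-hot encode the width class
	   (scalar / 64-bit / 128-bit vector) and the next 6 bits one-hot encode
	   the element type, which is why ovr_tag_smaller_vector () shifts the
	   width bits right by one and ovr_tag_smaller_elements () the element
	   bits. */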
INTRIN_scalar = 1 << 0, INTRIN_vector64 = 1 << 1, INTRIN_vector128 = 1 << 2, INTRIN_vectorwidths = 3, INTRIN_vectormask = 0x7, INTRIN_int8 = 1 << 3, INTRIN_int16 = 1 << 4, INTRIN_int32 = 1 << 5, INTRIN_int64 = 1 << 6, INTRIN_float32 = 1 << 7, INTRIN_float64 = 1 << 8, INTRIN_elementwidths = 6, }; typedef uint16_t llvm_ovr_tag_t; static LLVMTypeRef intrin_types [INTRIN_vectorwidths][INTRIN_elementwidths]; static const llvm_ovr_tag_t intrin_arm64_ovr [] = { #define INTRINS(sym, ...) 0, #define INTRINS_OVR(sym, ...) 0, #define INTRINS_OVR_2_ARG(sym, ...) 0, #define INTRINS_OVR_3_ARG(sym, ...) 0, #define INTRINS_OVR_TAG(sym, _, arch, spec) spec, #define INTRINS_OVR_TAG_KIND(sym, _, kind, arch, spec) spec, #include "llvm-intrinsics.h" }; enum { INTRIN_kind_ftoi = 1, INTRIN_kind_widen, INTRIN_kind_widen_across, INTRIN_kind_across, INTRIN_kind_arm64_dot_prod, }; static const uint8_t intrin_kind [] = { #define INTRINS(sym, ...) 0, #define INTRINS_OVR(sym, ...) 0, #define INTRINS_OVR_2_ARG(sym, ...) 0, #define INTRINS_OVR_3_ARG(sym, ...) 0, #define INTRINS_OVR_TAG(sym, _, arch, spec) 0, #define INTRINS_OVR_TAG_KIND(sym, _, arch, kind, spec) kind, #include "llvm-intrinsics.h" }; static inline llvm_ovr_tag_t ovr_tag_force_scalar (llvm_ovr_tag_t tag) { return (tag & ~INTRIN_vectormask) | INTRIN_scalar; } static inline llvm_ovr_tag_t ovr_tag_smaller_vector (llvm_ovr_tag_t tag) { return (tag & ~INTRIN_vectormask) | ((tag & INTRIN_vectormask) >> 1); } static inline llvm_ovr_tag_t ovr_tag_smaller_elements (llvm_ovr_tag_t tag) { return ((tag & ~INTRIN_vectormask) >> 1) | (tag & INTRIN_vectormask); } static inline llvm_ovr_tag_t ovr_tag_corresponding_integer (llvm_ovr_tag_t tag) { return ((tag & ~INTRIN_vectormask) >> 2) | (tag & INTRIN_vectormask); } static LLVMTypeRef ovr_tag_to_llvm_type (llvm_ovr_tag_t tag) { int vw = 0; int ew = 0; if (tag & INTRIN_vector64) vw = 1; else if (tag & INTRIN_vector128) vw = 2; if (tag & INTRIN_int16) ew = 1; else if (tag & INTRIN_int32) ew = 2; else if (tag & INTRIN_int64) ew = 3; else if (tag & INTRIN_float32) ew = 4; else if (tag & INTRIN_float64) ew = 5; return intrin_types [vw][ew]; } static int key_from_id_and_tag (int id, llvm_ovr_tag_t ovr_tag) { return (((int) ovr_tag) << 23) | id; } static llvm_ovr_tag_t ovr_tag_from_mono_vector_class (MonoClass *klass) { int size = mono_class_value_size (klass, NULL); llvm_ovr_tag_t ret = 0; switch (size) { case 8: ret |= INTRIN_vector64; break; case 16: ret |= INTRIN_vector128; break; } MonoType *etype = mono_class_get_context (klass)->class_inst->type_argv [0]; switch (etype->type) { case MONO_TYPE_I1: case MONO_TYPE_U1: ret |= INTRIN_int8; break; case MONO_TYPE_I2: case MONO_TYPE_U2: ret |= INTRIN_int16; break; case MONO_TYPE_I4: case MONO_TYPE_U4: ret |= INTRIN_int32; break; case MONO_TYPE_I8: case MONO_TYPE_U8: ret |= INTRIN_int64; break; case MONO_TYPE_R4: ret |= INTRIN_float32; break; case MONO_TYPE_R8: ret |= INTRIN_float64; break; } return ret; } static llvm_ovr_tag_t ovr_tag_from_llvm_type (LLVMTypeRef type) { llvm_ovr_tag_t ret = 0; LLVMTypeKind kind = LLVMGetTypeKind (type); LLVMTypeRef elem_t = NULL; switch (kind) { case LLVMVectorTypeKind: { elem_t = LLVMGetElementType (type); unsigned int bits = mono_llvm_get_prim_size_bits (type); switch (bits) { case 64: ret |= INTRIN_vector64; break; case 128: ret |= INTRIN_vector128; break; default: g_assert_not_reached (); } break; } default: g_assert_not_reached (); } if (elem_t == i1_t) ret |= INTRIN_int8; if (elem_t == i2_t) ret |= INTRIN_int16; if (elem_t == i4_t) ret |= 
INTRIN_int32; if (elem_t == i8_t) ret |= INTRIN_int64; if (elem_t == r4_t) ret |= INTRIN_float32; if (elem_t == r8_t) ret |= INTRIN_float64; return ret; } static inline void set_failure (EmitContext *ctx, const char *message) { TRACE_FAILURE (reason); ctx->cfg->exception_message = g_strdup (message); ctx->cfg->disable_llvm = TRUE; } static LLVMValueRef const_int1 (int v) { return LLVMConstInt (LLVMInt1Type (), v ? 1 : 0, FALSE); } static LLVMValueRef const_int8 (int v) { return LLVMConstInt (LLVMInt8Type (), v, FALSE); } static LLVMValueRef const_int32 (int v) { return LLVMConstInt (LLVMInt32Type (), v, FALSE); } static LLVMValueRef const_int64 (int64_t v) { return LLVMConstInt (LLVMInt64Type (), v, FALSE); } /* * IntPtrType: * * The LLVM type with width == TARGET_SIZEOF_VOID_P */ static LLVMTypeRef IntPtrType (void) { return TARGET_SIZEOF_VOID_P == 8 ? LLVMInt64Type () : LLVMInt32Type (); } static LLVMTypeRef ObjRefType (void) { return TARGET_SIZEOF_VOID_P == 8 ? LLVMPointerType (LLVMInt64Type (), 0) : LLVMPointerType (LLVMInt32Type (), 0); } static LLVMTypeRef ThisType (void) { return TARGET_SIZEOF_VOID_P == 8 ? LLVMPointerType (LLVMInt64Type (), 0) : LLVMPointerType (LLVMInt32Type (), 0); } typedef struct { int32_t size; uint32_t align; } MonoSizeAlign; /* * get_vtype_size: * * Return the size of the LLVM representation of the vtype T. */ static MonoSizeAlign get_vtype_size_align (MonoType *t) { uint32_t align = 0; int32_t size = mono_class_value_size (mono_class_from_mono_type_internal (t), &align); /* LLVMArgAsIArgs depends on this since it stores whole words */ while (size < 2 * TARGET_SIZEOF_VOID_P && mono_is_power_of_two (size) == -1) size ++; MonoSizeAlign ret = { size, align }; return ret; } /* * simd_class_to_llvm_type: * * Return the LLVM type corresponding to the Mono.SIMD class KLASS */ static LLVMTypeRef simd_class_to_llvm_type (EmitContext *ctx, MonoClass *klass) { const char *klass_name = m_class_get_name (klass); if (!strcmp (klass_name, "Vector2d")) { return LLVMVectorType (LLVMDoubleType (), 2); } else if (!strcmp (klass_name, "Vector2l")) { return LLVMVectorType (LLVMInt64Type (), 2); } else if (!strcmp (klass_name, "Vector2ul")) { return LLVMVectorType (LLVMInt64Type (), 2); } else if (!strcmp (klass_name, "Vector4i")) { return LLVMVectorType (LLVMInt32Type (), 4); } else if (!strcmp (klass_name, "Vector4ui")) { return LLVMVectorType (LLVMInt32Type (), 4); } else if (!strcmp (klass_name, "Vector4f")) { return LLVMVectorType (LLVMFloatType (), 4); } else if (!strcmp (klass_name, "Vector8s")) { return LLVMVectorType (LLVMInt16Type (), 8); } else if (!strcmp (klass_name, "Vector8us")) { return LLVMVectorType (LLVMInt16Type (), 8); } else if (!strcmp (klass_name, "Vector16sb")) { return LLVMVectorType (LLVMInt8Type (), 16); } else if (!strcmp (klass_name, "Vector16b")) { return LLVMVectorType (LLVMInt8Type (), 16); } else if (!strcmp (klass_name, "Vector2")) { /* System.Numerics */ return LLVMVectorType (LLVMFloatType (), 4); } else if (!strcmp (klass_name, "Vector3")) { return LLVMVectorType (LLVMFloatType (), 4); } else if (!strcmp (klass_name, "Vector4")) { return LLVMVectorType (LLVMFloatType (), 4); } else if (!strcmp (klass_name, "Vector`1") || !strcmp (klass_name, "Vector64`1") || !strcmp (klass_name, "Vector128`1") || !strcmp (klass_name, "Vector256`1")) { MonoType *etype = mono_class_get_generic_class (klass)->context.class_inst->type_argv [0]; int size = mono_class_value_size (klass, NULL); switch (etype->type) { case MONO_TYPE_I1: case MONO_TYPE_U1: return 
LLVMVectorType (LLVMInt8Type (), size); case MONO_TYPE_I2: case MONO_TYPE_U2: return LLVMVectorType (LLVMInt16Type (), size / 2); case MONO_TYPE_I4: case MONO_TYPE_U4: return LLVMVectorType (LLVMInt32Type (), size / 4); case MONO_TYPE_I8: case MONO_TYPE_U8: return LLVMVectorType (LLVMInt64Type (), size / 8); case MONO_TYPE_I: case MONO_TYPE_U: #if TARGET_SIZEOF_VOID_P == 8 return LLVMVectorType (LLVMInt64Type (), size / 8); #else return LLVMVectorType (LLVMInt32Type (), size / 4); #endif case MONO_TYPE_R4: return LLVMVectorType (LLVMFloatType (), size / 4); case MONO_TYPE_R8: return LLVMVectorType (LLVMDoubleType (), size / 8); default: g_assert_not_reached (); return NULL; } } else { printf ("%s\n", klass_name); NOT_IMPLEMENTED; return NULL; } } static LLVMTypeRef simd_valuetuple_to_llvm_type (EmitContext *ctx, MonoClass *klass) { const char *klass_name = m_class_get_name (klass); if (!strcmp (klass_name, "ValueTuple`2")) { MonoType *etype = mono_class_get_generic_class (klass)->context.class_inst->type_argv [0]; if (etype->type != MONO_TYPE_GENERICINST) g_assert_not_reached (); MonoClass *eklass = etype->data.generic_class->cached_class; LLVMTypeRef ltype = simd_class_to_llvm_type (ctx, eklass); return LLVMArrayType (ltype, 2); } g_assert_not_reached (); } /* Return the 128 bit SIMD type corresponding to the mono type TYPE */ static inline G_GNUC_UNUSED LLVMTypeRef type_to_sse_type (int type) { switch (type) { case MONO_TYPE_I1: case MONO_TYPE_U1: return LLVMVectorType (LLVMInt8Type (), 16); case MONO_TYPE_U2: case MONO_TYPE_I2: return LLVMVectorType (LLVMInt16Type (), 8); case MONO_TYPE_U4: case MONO_TYPE_I4: return LLVMVectorType (LLVMInt32Type (), 4); case MONO_TYPE_U8: case MONO_TYPE_I8: return LLVMVectorType (LLVMInt64Type (), 2); case MONO_TYPE_I: case MONO_TYPE_U: #if TARGET_SIZEOF_VOID_P == 8 return LLVMVectorType (LLVMInt64Type (), 2); #else return LLVMVectorType (LLVMInt32Type (), 4); #endif case MONO_TYPE_R8: return LLVMVectorType (LLVMDoubleType (), 2); case MONO_TYPE_R4: return LLVMVectorType (LLVMFloatType (), 4); default: g_assert_not_reached (); return NULL; } } static LLVMTypeRef create_llvm_type_for_type (MonoLLVMModule *module, MonoClass *klass) { int i, size, nfields, esize; LLVMTypeRef *eltypes; char *name; MonoType *t; LLVMTypeRef ltype; t = m_class_get_byval_arg (klass); if (mini_type_is_hfa (t, &nfields, &esize)) { /* * This is needed on arm64 where HFAs are returned in * registers. */ /* SIMD types have size 16 in mono_class_value_size () */ if (m_class_is_simd_type (klass)) nfields = 16/ esize; size = nfields; eltypes = g_new (LLVMTypeRef, size); for (i = 0; i < size; ++i) eltypes [i] = esize == 4 ? LLVMFloatType () : LLVMDoubleType (); } else { MonoSizeAlign size_align = get_vtype_size_align (t); eltypes = g_new (LLVMTypeRef, size_align.size); size = 0; uint32_t bytes = 0; uint32_t chunk = size_align.align < TARGET_SIZEOF_VOID_P ? 
size_align.align : TARGET_SIZEOF_VOID_P; for (; chunk > 0; chunk = chunk >> 1) { for (; (bytes + chunk) <= size_align.size; bytes += chunk) { eltypes [size] = LLVMIntType (chunk * 8); ++size; } } } name = mono_type_full_name (m_class_get_byval_arg (klass)); ltype = LLVMStructCreateNamed (module->context, name); LLVMStructSetBody (ltype, eltypes, size, FALSE); g_free (eltypes); g_free (name); return ltype; } static LLVMTypeRef primitive_type_to_llvm_type (MonoTypeEnum type) { switch (type) { case MONO_TYPE_I1: case MONO_TYPE_U1: return LLVMInt8Type (); case MONO_TYPE_I2: case MONO_TYPE_U2: return LLVMInt16Type (); case MONO_TYPE_I4: case MONO_TYPE_U4: return LLVMInt32Type (); case MONO_TYPE_I8: case MONO_TYPE_U8: return LLVMInt64Type (); case MONO_TYPE_R4: return LLVMFloatType (); case MONO_TYPE_R8: return LLVMDoubleType (); case MONO_TYPE_I: case MONO_TYPE_U: return IntPtrType (); default: return NULL; } } static MonoTypeEnum inst_c1_type (const MonoInst *ins) { return (MonoTypeEnum)ins->inst_c1; } /* * type_to_llvm_type: * * Return the LLVM type corresponding to T. */ static LLVMTypeRef type_to_llvm_type (EmitContext *ctx, MonoType *t) { if (m_type_is_byref (t)) return ThisType (); t = mini_get_underlying_type (t); LLVMTypeRef prim_llvm_type = primitive_type_to_llvm_type (t->type); if (prim_llvm_type != NULL) return prim_llvm_type; switch (t->type) { case MONO_TYPE_VOID: return LLVMVoidType (); case MONO_TYPE_OBJECT: return ObjRefType (); case MONO_TYPE_PTR: case MONO_TYPE_FNPTR: { MonoClass *klass = mono_class_from_mono_type_internal (t); MonoClass *ptr_klass = m_class_get_element_class (klass); MonoType *ptr_type = m_class_get_byval_arg (ptr_klass); /* Handle primitive pointers */ switch (ptr_type->type) { case MONO_TYPE_I1: case MONO_TYPE_I2: case MONO_TYPE_I4: case MONO_TYPE_U1: case MONO_TYPE_U2: case MONO_TYPE_U4: return LLVMPointerType (type_to_llvm_type (ctx, ptr_type), 0); } return ObjRefType (); } case MONO_TYPE_VAR: case MONO_TYPE_MVAR: /* Because of generic sharing */ return ObjRefType (); case MONO_TYPE_GENERICINST: if (!mono_type_generic_inst_is_valuetype (t)) return ObjRefType (); /* Fall through */ case MONO_TYPE_VALUETYPE: case MONO_TYPE_TYPEDBYREF: { MonoClass *klass; LLVMTypeRef ltype; klass = mono_class_from_mono_type_internal (t); if (MONO_CLASS_IS_SIMD (ctx->cfg, klass)) return simd_class_to_llvm_type (ctx, klass); if (m_class_is_enumtype (klass)) return type_to_llvm_type (ctx, mono_class_enum_basetype_internal (klass)); ltype = (LLVMTypeRef)g_hash_table_lookup (ctx->module->llvm_types, klass); if (!ltype) { ltype = create_llvm_type_for_type (ctx->module, klass); g_hash_table_insert (ctx->module->llvm_types, klass, ltype); } return ltype; } default: printf ("X: %d\n", t->type); ctx->cfg->exception_message = g_strdup_printf ("type %s", mono_type_full_name (t)); ctx->cfg->disable_llvm = TRUE; return NULL; } } static gboolean primitive_type_is_unsigned (MonoTypeEnum t) { switch (t) { case MONO_TYPE_U1: case MONO_TYPE_U2: case MONO_TYPE_CHAR: case MONO_TYPE_U4: case MONO_TYPE_U8: case MONO_TYPE_U: return TRUE; default: return FALSE; } } /* * type_is_unsigned: * * Return whenever T is an unsigned int type. */ static gboolean type_is_unsigned (EmitContext *ctx, MonoType *t) { t = mini_get_underlying_type (t); if (m_type_is_byref (t)) return FALSE; return primitive_type_is_unsigned (t->type); } /* * type_to_llvm_arg_type: * * Same as type_to_llvm_type, but treat i8/i16 as i32. 
*/ static LLVMTypeRef type_to_llvm_arg_type (EmitContext *ctx, MonoType *t) { LLVMTypeRef ptype = type_to_llvm_type (ctx, t); if (ctx->cfg->llvm_only) return ptype; /* * This works on all abis except arm64/ios which passes multiple * arguments in one stack slot. */ #ifndef TARGET_ARM64 if (ptype == LLVMInt8Type () || ptype == LLVMInt16Type ()) { /* * LLVM generates code which only sets the lower bits, while JITted * code expects all the bits to be set. */ ptype = LLVMInt32Type (); } #endif return ptype; } /* * llvm_type_to_stack_type: * * Return the LLVM type which needs to be used when a value of type TYPE is pushed * on the IL stack. */ static G_GNUC_UNUSED LLVMTypeRef llvm_type_to_stack_type (MonoCompile *cfg, LLVMTypeRef type) { if (type == NULL) return NULL; if (type == LLVMInt8Type ()) return LLVMInt32Type (); else if (type == LLVMInt16Type ()) return LLVMInt32Type (); else if (!cfg->r4fp && type == LLVMFloatType ()) return LLVMDoubleType (); else return type; } /* * regtype_to_llvm_type: * * Return the LLVM type corresponding to the regtype C used in instruction * descriptions. */ static LLVMTypeRef regtype_to_llvm_type (char c) { switch (c) { case 'i': return LLVMInt32Type (); case 'l': return LLVMInt64Type (); case 'f': return LLVMDoubleType (); default: return NULL; } } /* * op_to_llvm_type: * * Return the LLVM type corresponding to the unary/binary opcode OPCODE. */ static LLVMTypeRef op_to_llvm_type (int opcode) { switch (opcode) { case OP_ICONV_TO_I1: case OP_LCONV_TO_I1: return LLVMInt8Type (); case OP_ICONV_TO_U1: case OP_LCONV_TO_U1: return LLVMInt8Type (); case OP_ICONV_TO_I2: case OP_LCONV_TO_I2: return LLVMInt16Type (); case OP_ICONV_TO_U2: case OP_LCONV_TO_U2: return LLVMInt16Type (); case OP_ICONV_TO_I4: case OP_LCONV_TO_I4: return LLVMInt32Type (); case OP_ICONV_TO_U4: case OP_LCONV_TO_U4: return LLVMInt32Type (); case OP_ICONV_TO_I8: return LLVMInt64Type (); case OP_ICONV_TO_R4: return LLVMFloatType (); case OP_ICONV_TO_R8: return LLVMDoubleType (); case OP_ICONV_TO_U8: return LLVMInt64Type (); case OP_FCONV_TO_I4: return LLVMInt32Type (); case OP_FCONV_TO_I8: return LLVMInt64Type (); case OP_FCONV_TO_I1: case OP_FCONV_TO_U1: case OP_RCONV_TO_I1: case OP_RCONV_TO_U1: return LLVMInt8Type (); case OP_FCONV_TO_I2: case OP_FCONV_TO_U2: case OP_RCONV_TO_I2: case OP_RCONV_TO_U2: return LLVMInt16Type (); case OP_FCONV_TO_U4: case OP_RCONV_TO_U4: return LLVMInt32Type (); case OP_FCONV_TO_U8: case OP_RCONV_TO_U8: return LLVMInt64Type (); case OP_IADD_OVF: case OP_IADD_OVF_UN: case OP_ISUB_OVF: case OP_ISUB_OVF_UN: case OP_IMUL_OVF: case OP_IMUL_OVF_UN: return LLVMInt32Type (); case OP_LADD_OVF: case OP_LADD_OVF_UN: case OP_LSUB_OVF: case OP_LSUB_OVF_UN: case OP_LMUL_OVF: case OP_LMUL_OVF_UN: return LLVMInt64Type (); default: printf ("%s\n", mono_inst_name (opcode)); g_assert_not_reached (); return NULL; } } #define CLAUSE_START(clause) ((clause)->try_offset) #define CLAUSE_END(clause) (((clause))->try_offset + ((clause))->try_len) /* * load_store_to_llvm_type: * * Return the size/sign/zero extension corresponding to the load/store opcode * OPCODE. 
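*
* Illustrative use (hypothetical caller):
*
*   int size; gboolean sext, zext;
*   LLVMTypeRef t = load_store_to_llvm_type (OP_LOADI1_MEMBASE, &size, &sext, &zext);
*   // t == LLVMInt8Type (), size == 1, sext == TRUE: the caller emits an
*   // i8 load and then an LLVMBuildSExt to widen the value to i32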
*/ static LLVMTypeRef load_store_to_llvm_type (int opcode, int *size, gboolean *sext, gboolean *zext) { *sext = FALSE; *zext = FALSE; switch (opcode) { case OP_LOADI1_MEMBASE: case OP_STOREI1_MEMBASE_REG: case OP_STOREI1_MEMBASE_IMM: case OP_ATOMIC_LOAD_I1: case OP_ATOMIC_STORE_I1: *size = 1; *sext = TRUE; return LLVMInt8Type (); case OP_LOADU1_MEMBASE: case OP_LOADU1_MEM: case OP_ATOMIC_LOAD_U1: case OP_ATOMIC_STORE_U1: *size = 1; *zext = TRUE; return LLVMInt8Type (); case OP_LOADI2_MEMBASE: case OP_STOREI2_MEMBASE_REG: case OP_STOREI2_MEMBASE_IMM: case OP_ATOMIC_LOAD_I2: case OP_ATOMIC_STORE_I2: *size = 2; *sext = TRUE; return LLVMInt16Type (); case OP_LOADU2_MEMBASE: case OP_LOADU2_MEM: case OP_ATOMIC_LOAD_U2: case OP_ATOMIC_STORE_U2: *size = 2; *zext = TRUE; return LLVMInt16Type (); case OP_LOADI4_MEMBASE: case OP_LOADU4_MEMBASE: case OP_LOADI4_MEM: case OP_LOADU4_MEM: case OP_STOREI4_MEMBASE_REG: case OP_STOREI4_MEMBASE_IMM: case OP_ATOMIC_LOAD_I4: case OP_ATOMIC_STORE_I4: case OP_ATOMIC_LOAD_U4: case OP_ATOMIC_STORE_U4: *size = 4; return LLVMInt32Type (); case OP_LOADI8_MEMBASE: case OP_LOADI8_MEM: case OP_STOREI8_MEMBASE_REG: case OP_STOREI8_MEMBASE_IMM: case OP_ATOMIC_LOAD_I8: case OP_ATOMIC_STORE_I8: case OP_ATOMIC_LOAD_U8: case OP_ATOMIC_STORE_U8: *size = 8; return LLVMInt64Type (); case OP_LOADR4_MEMBASE: case OP_STORER4_MEMBASE_REG: case OP_ATOMIC_LOAD_R4: case OP_ATOMIC_STORE_R4: *size = 4; return LLVMFloatType (); case OP_LOADR8_MEMBASE: case OP_STORER8_MEMBASE_REG: case OP_ATOMIC_LOAD_R8: case OP_ATOMIC_STORE_R8: *size = 8; return LLVMDoubleType (); case OP_LOAD_MEMBASE: case OP_LOAD_MEM: case OP_STORE_MEMBASE_REG: case OP_STORE_MEMBASE_IMM: *size = TARGET_SIZEOF_VOID_P; return IntPtrType (); default: g_assert_not_reached (); return NULL; } } /* * ovf_op_to_intrins: * * Return the LLVM intrinsics corresponding to the overflow opcode OPCODE. 
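*
* E.g. OP_IADD_OVF maps to INTRINS_SADD_OVF_I32 (llvm.sadd.with.overflow.i32),
* whose result is an {i32, i1} pair; the caller extracts the i1 overflow flag
* and branches to the OverflowException path when it is set.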
*/ static IntrinsicId ovf_op_to_intrins (int opcode) { switch (opcode) { case OP_IADD_OVF: return INTRINS_SADD_OVF_I32; case OP_IADD_OVF_UN: return INTRINS_UADD_OVF_I32; case OP_ISUB_OVF: return INTRINS_SSUB_OVF_I32; case OP_ISUB_OVF_UN: return INTRINS_USUB_OVF_I32; case OP_IMUL_OVF: return INTRINS_SMUL_OVF_I32; case OP_IMUL_OVF_UN: return INTRINS_UMUL_OVF_I32; case OP_LADD_OVF: return INTRINS_SADD_OVF_I64; case OP_LADD_OVF_UN: return INTRINS_UADD_OVF_I64; case OP_LSUB_OVF: return INTRINS_SSUB_OVF_I64; case OP_LSUB_OVF_UN: return INTRINS_USUB_OVF_I64; case OP_LMUL_OVF: return INTRINS_SMUL_OVF_I64; case OP_LMUL_OVF_UN: return INTRINS_UMUL_OVF_I64; default: g_assert_not_reached (); return (IntrinsicId)0; } } static IntrinsicId simd_ins_to_intrins (int opcode) { switch (opcode) { #if defined(TARGET_X86) || defined(TARGET_AMD64) case OP_CVTPD2DQ: return INTRINS_SSE_CVTPD2DQ; case OP_CVTPS2DQ: return INTRINS_SSE_CVTPS2DQ; case OP_CVTPD2PS: return INTRINS_SSE_CVTPD2PS; case OP_CVTTPD2DQ: return INTRINS_SSE_CVTTPD2DQ; case OP_CVTTPS2DQ: return INTRINS_SSE_CVTTPS2DQ; case OP_SSE_SQRTSS: return INTRINS_SSE_SQRT_SS; case OP_SSE2_SQRTSD: return INTRINS_SSE_SQRT_SD; #endif default: g_assert_not_reached (); return (IntrinsicId)0; } } static LLVMTypeRef simd_op_to_llvm_type (int opcode) { #if defined(TARGET_X86) || defined(TARGET_AMD64) switch (opcode) { case OP_EXTRACT_R8: case OP_EXPAND_R8: return sse_r8_t; case OP_EXTRACT_I8: case OP_EXPAND_I8: return sse_i8_t; case OP_EXTRACT_I4: case OP_EXPAND_I4: return sse_i4_t; case OP_EXTRACT_I2: case OP_EXTRACTX_U2: case OP_EXPAND_I2: return sse_i2_t; case OP_EXTRACT_I1: case OP_EXPAND_I1: return sse_i1_t; case OP_EXTRACT_R4: case OP_EXPAND_R4: return sse_r4_t; case OP_CVTPD2DQ: case OP_CVTPD2PS: case OP_CVTTPD2DQ: return sse_r8_t; case OP_CVTPS2DQ: case OP_CVTTPS2DQ: return sse_r4_t; case OP_SQRTPS: case OP_RSQRTPS: case OP_DUPPS_LOW: case OP_DUPPS_HIGH: return sse_r4_t; case OP_SQRTPD: case OP_DUPPD: return sse_r8_t; default: g_assert_not_reached (); return NULL; } #else return NULL; #endif } static void set_cold_cconv (LLVMValueRef func) { /* * xcode10 (watchOS) and ARM/ARM64 doesn't seem to support preserveall, it fails with: * fatal error: error in backend: Unsupported calling convention */ #if !defined(TARGET_WATCHOS) && !defined(TARGET_ARM) && !defined(TARGET_ARM64) LLVMSetFunctionCallConv (func, LLVMColdCallConv); #endif } static void set_call_cold_cconv (LLVMValueRef func) { #if !defined(TARGET_WATCHOS) && !defined(TARGET_ARM) && !defined(TARGET_ARM64) LLVMSetInstructionCallConv (func, LLVMColdCallConv); #endif } /* * get_bb: * * Return the LLVM basic block corresponding to BB. 
*/ static LLVMBasicBlockRef get_bb (EmitContext *ctx, MonoBasicBlock *bb) { char bb_name_buf [128]; char *bb_name; if (ctx->bblocks [bb->block_num].bblock == NULL) { if (bb->flags & BB_EXCEPTION_HANDLER) { int clause_index = (mono_get_block_region_notry (ctx->cfg, bb->region) >> 8) - 1; sprintf (bb_name_buf, "EH_CLAUSE%d_BB%d", clause_index, bb->block_num); bb_name = bb_name_buf; } else if (bb->block_num < 256) { if (!ctx->module->bb_names) { ctx->module->bb_names_len = 256; ctx->module->bb_names = g_new0 (char*, ctx->module->bb_names_len); } if (!ctx->module->bb_names [bb->block_num]) { char *n; n = g_strdup_printf ("BB%d", bb->block_num); mono_memory_barrier (); ctx->module->bb_names [bb->block_num] = n; } bb_name = ctx->module->bb_names [bb->block_num]; } else { sprintf (bb_name_buf, "BB%d", bb->block_num); bb_name = bb_name_buf; } ctx->bblocks [bb->block_num].bblock = LLVMAppendBasicBlock (ctx->lmethod, bb_name); ctx->bblocks [bb->block_num].end_bblock = ctx->bblocks [bb->block_num].bblock; } return ctx->bblocks [bb->block_num].bblock; } /* * get_end_bb: * * Return the last LLVM bblock corresponding to BB. * This might not be equal to the bb returned by get_bb () since we need to generate * multiple LLVM bblocks for a mono bblock to handle throwing exceptions. */ static LLVMBasicBlockRef get_end_bb (EmitContext *ctx, MonoBasicBlock *bb) { get_bb (ctx, bb); return ctx->bblocks [bb->block_num].end_bblock; } static LLVMBasicBlockRef gen_bb (EmitContext *ctx, const char *prefix) { char bb_name [128]; sprintf (bb_name, "%s%d", prefix, ++ ctx->ex_index); return LLVMAppendBasicBlock (ctx->lmethod, bb_name); } /* * resolve_patch: * * Return the target of the patch identified by TYPE and TARGET. */ static gpointer resolve_patch (MonoCompile *cfg, MonoJumpInfoType type, gconstpointer target) { MonoJumpInfo ji; ERROR_DECL (error); gpointer res; memset (&ji, 0, sizeof (ji)); ji.type = type; ji.data.target = target; res = mono_resolve_patch_target (cfg->method, NULL, &ji, FALSE, error); mono_error_assert_ok (error); return res; } /* * convert_full: * * Emit code to convert the LLVM value V to DTYPE. */ static LLVMValueRef convert_full (EmitContext *ctx, LLVMValueRef v, LLVMTypeRef dtype, gboolean is_unsigned) { LLVMTypeRef stype = LLVMTypeOf (v); if (stype != dtype) { gboolean ext = FALSE; /* Extend */ if (dtype == LLVMInt64Type () && (stype == LLVMInt32Type () || stype == LLVMInt16Type () || stype == LLVMInt8Type ())) ext = TRUE; else if (dtype == LLVMInt32Type () && (stype == LLVMInt16Type () || stype == LLVMInt8Type ())) ext = TRUE; else if (dtype == LLVMInt16Type () && (stype == LLVMInt8Type ())) ext = TRUE; if (ext) return is_unsigned ? 
LLVMBuildZExt (ctx->builder, v, dtype, "") : LLVMBuildSExt (ctx->builder, v, dtype, ""); if (dtype == LLVMDoubleType () && stype == LLVMFloatType ()) return LLVMBuildFPExt (ctx->builder, v, dtype, ""); /* Trunc */ if (stype == LLVMInt64Type () && (dtype == LLVMInt32Type () || dtype == LLVMInt16Type () || dtype == LLVMInt8Type ())) return LLVMBuildTrunc (ctx->builder, v, dtype, ""); if (stype == LLVMInt32Type () && (dtype == LLVMInt16Type () || dtype == LLVMInt8Type ())) return LLVMBuildTrunc (ctx->builder, v, dtype, ""); if (stype == LLVMInt16Type () && dtype == LLVMInt8Type ()) return LLVMBuildTrunc (ctx->builder, v, dtype, ""); if (stype == LLVMDoubleType () && dtype == LLVMFloatType ()) return LLVMBuildFPTrunc (ctx->builder, v, dtype, ""); if (LLVMGetTypeKind (stype) == LLVMPointerTypeKind && LLVMGetTypeKind (dtype) == LLVMPointerTypeKind) return LLVMBuildBitCast (ctx->builder, v, dtype, ""); if (LLVMGetTypeKind (dtype) == LLVMPointerTypeKind) return LLVMBuildIntToPtr (ctx->builder, v, dtype, ""); if (LLVMGetTypeKind (stype) == LLVMPointerTypeKind) return LLVMBuildPtrToInt (ctx->builder, v, dtype, ""); if (mono_arch_is_soft_float ()) { if (stype == LLVMInt32Type () && dtype == LLVMFloatType ()) return LLVMBuildBitCast (ctx->builder, v, dtype, ""); if (stype == LLVMInt32Type () && dtype == LLVMDoubleType ()) return LLVMBuildBitCast (ctx->builder, LLVMBuildZExt (ctx->builder, v, LLVMInt64Type (), ""), dtype, ""); } if (LLVMGetTypeKind (stype) == LLVMVectorTypeKind && LLVMGetTypeKind (dtype) == LLVMVectorTypeKind) { if (mono_llvm_get_prim_size_bits (stype) == mono_llvm_get_prim_size_bits (dtype)) return LLVMBuildBitCast (ctx->builder, v, dtype, ""); } mono_llvm_dump_value (v); mono_llvm_dump_type (dtype); printf ("\n"); g_assert_not_reached (); return NULL; } else { return v; } } static LLVMValueRef convert (EmitContext *ctx, LLVMValueRef v, LLVMTypeRef dtype) { return convert_full (ctx, v, dtype, FALSE); } static void emit_memset (EmitContext *ctx, LLVMBuilderRef builder, LLVMValueRef v, LLVMValueRef size, int alignment) { LLVMValueRef args [5]; int aindex = 0; args [aindex ++] = v; args [aindex ++] = LLVMConstInt (LLVMInt8Type (), 0, FALSE); args [aindex ++] = size; args [aindex ++] = LLVMConstInt (LLVMInt1Type (), 0, FALSE); LLVMBuildCall (builder, get_intrins (ctx, INTRINS_MEMSET), args, aindex, ""); } /* * emit_volatile_load: * * If vreg is volatile, emit a load from its address. */ static LLVMValueRef emit_volatile_load (EmitContext *ctx, int vreg) { MonoType *t; LLVMValueRef v; // On arm64, we pass the rgctx in a callee saved // register on arm64 (x15), and llvm might keep the value in that register // even through the register is marked as 'reserved' inside llvm. v = mono_llvm_build_load (ctx->builder, ctx->addresses [vreg], "", TRUE); t = ctx->vreg_cli_types [vreg]; if (t && !m_type_is_byref (t)) { /* * Might have to zero extend since llvm doesn't have * unsigned types. */ if (t->type == MONO_TYPE_U1 || t->type == MONO_TYPE_U2 || t->type == MONO_TYPE_CHAR || t->type == MONO_TYPE_BOOLEAN) v = LLVMBuildZExt (ctx->builder, v, LLVMInt32Type (), ""); else if (t->type == MONO_TYPE_I1 || t->type == MONO_TYPE_I2) v = LLVMBuildSExt (ctx->builder, v, LLVMInt32Type (), ""); else if (t->type == MONO_TYPE_U8) v = LLVMBuildZExt (ctx->builder, v, LLVMInt64Type (), ""); } return v; } /* * emit_volatile_store: * * If VREG is volatile, emit a store from its value to its address. 
*/ static void emit_volatile_store (EmitContext *ctx, int vreg) { MonoInst *var = get_vreg_to_inst (ctx->cfg, vreg); if (var && var->flags & (MONO_INST_VOLATILE|MONO_INST_INDIRECT)) { g_assert (ctx->addresses [vreg]); #ifdef TARGET_WASM /* Need volatile stores otherwise the compiler might move them */ mono_llvm_build_store (ctx->builder, convert (ctx, ctx->values [vreg], type_to_llvm_type (ctx, var->inst_vtype)), ctx->addresses [vreg], TRUE, LLVM_BARRIER_NONE); #else LLVMBuildStore (ctx->builder, convert (ctx, ctx->values [vreg], type_to_llvm_type (ctx, var->inst_vtype)), ctx->addresses [vreg]); #endif } } static LLVMTypeRef sig_to_llvm_sig_no_cinfo (EmitContext *ctx, MonoMethodSignature *sig) { LLVMTypeRef ret_type; LLVMTypeRef *param_types = NULL; LLVMTypeRef res; int i, pindex; ret_type = type_to_llvm_type (ctx, sig->ret); if (!ctx_ok (ctx)) return NULL; param_types = g_new0 (LLVMTypeRef, (sig->param_count * 8) + 3); pindex = 0; if (sig->hasthis) param_types [pindex ++] = ThisType (); for (i = 0; i < sig->param_count; ++i) param_types [pindex ++] = type_to_llvm_arg_type (ctx, sig->params [i]); if (!ctx_ok (ctx)) { g_free (param_types); return NULL; } res = LLVMFunctionType (ret_type, param_types, pindex, FALSE); g_free (param_types); return res; } /* * sig_to_llvm_sig_full: * * Return the LLVM signature corresponding to the mono signature SIG using the * calling convention information in CINFO. Fill out the parameter mapping information in CINFO. */ static LLVMTypeRef sig_to_llvm_sig_full (EmitContext *ctx, MonoMethodSignature *sig, LLVMCallInfo *cinfo) { LLVMTypeRef ret_type; LLVMTypeRef *param_types = NULL; LLVMTypeRef res; int i, j, pindex, vret_arg_pindex = 0; gboolean vretaddr = FALSE; MonoType *rtype; if (!cinfo) return sig_to_llvm_sig_no_cinfo (ctx, sig); ret_type = type_to_llvm_type (ctx, sig->ret); if (!ctx_ok (ctx)) return NULL; rtype = mini_get_underlying_type (sig->ret); switch (cinfo->ret.storage) { case LLVMArgVtypeInReg: /* LLVM models this by returning an aggregate value */ if (cinfo->ret.pair_storage [0] == LLVMArgInIReg && cinfo->ret.pair_storage [1] == LLVMArgNone) { LLVMTypeRef members [2]; members [0] = IntPtrType (); ret_type = LLVMStructType (members, 1, FALSE); } else if (cinfo->ret.pair_storage [0] == LLVMArgNone && cinfo->ret.pair_storage [1] == LLVMArgNone) { /* Empty struct */ ret_type = LLVMVoidType (); } else if (cinfo->ret.pair_storage [0] == LLVMArgInIReg && cinfo->ret.pair_storage [1] == LLVMArgInIReg) { LLVMTypeRef members [2]; members [0] = IntPtrType (); members [1] = IntPtrType (); ret_type = LLVMStructType (members, 2, FALSE); } else { g_assert_not_reached (); } break; case LLVMArgVtypeByVal: /* Vtype returned normally by val */ break; case LLVMArgVtypeAsScalar: { int size = mono_class_value_size (mono_class_from_mono_type_internal (rtype), NULL); /* LLVM models this by returning an int */ if (size < TARGET_SIZEOF_VOID_P) { g_assert (cinfo->ret.nslots == 1); ret_type = LLVMIntType (size * 8); } else { g_assert (cinfo->ret.nslots == 1 || cinfo->ret.nslots == 2); ret_type = LLVMIntType (cinfo->ret.nslots * sizeof (target_mgreg_t) * 8); } break; } case LLVMArgAsIArgs: ret_type = LLVMArrayType (IntPtrType (), cinfo->ret.nslots); break; case LLVMArgFpStruct: { /* Vtype returned as a fp struct */ LLVMTypeRef members [16]; /* Have to create our own structure since we don't map fp structures to LLVM fp structures yet */ for (i = 0; i < cinfo->ret.nslots; ++i) members [i] = cinfo->ret.esize == 8 ? 
LLVMDoubleType () : LLVMFloatType (); ret_type = LLVMStructType (members, cinfo->ret.nslots, FALSE); break; } case LLVMArgVtypeByRef: /* Vtype returned using a hidden argument */ ret_type = LLVMVoidType (); break; case LLVMArgVtypeRetAddr: case LLVMArgGsharedvtFixed: case LLVMArgGsharedvtFixedVtype: case LLVMArgGsharedvtVariable: vretaddr = TRUE; ret_type = LLVMVoidType (); break; case LLVMArgWasmVtypeAsScalar: g_assert (cinfo->ret.esize); ret_type = LLVMIntType (cinfo->ret.esize * 8); break; default: break; } param_types = g_new0 (LLVMTypeRef, (sig->param_count * 8) + 3); pindex = 0; if (cinfo->ret.storage == LLVMArgVtypeByRef) { /* * Has to be the first argument because of the sret argument attribute * FIXME: This might conflict with passing 'this' as the first argument, but * this is only used on arm64 which has a dedicated struct return register. */ cinfo->vret_arg_pindex = pindex; param_types [pindex] = type_to_llvm_arg_type (ctx, sig->ret); if (!ctx_ok (ctx)) { g_free (param_types); return NULL; } param_types [pindex] = LLVMPointerType (param_types [pindex], 0); pindex ++; } if (!ctx->llvm_only && cinfo->rgctx_arg) { cinfo->rgctx_arg_pindex = pindex; param_types [pindex] = ctx->module->ptr_type; pindex ++; } if (cinfo->imt_arg) { cinfo->imt_arg_pindex = pindex; param_types [pindex] = ctx->module->ptr_type; pindex ++; } if (vretaddr) { /* Compute the index in the LLVM signature where the vret arg needs to be passed */ vret_arg_pindex = pindex; if (cinfo->vret_arg_index == 1) { /* Add the slots consumed by the first argument */ LLVMArgInfo *ainfo = &cinfo->args [0]; switch (ainfo->storage) { case LLVMArgVtypeInReg: for (j = 0; j < 2; ++j) { if (ainfo->pair_storage [j] == LLVMArgInIReg) vret_arg_pindex ++; } break; default: vret_arg_pindex ++; } } cinfo->vret_arg_pindex = vret_arg_pindex; } if (vretaddr && vret_arg_pindex == pindex) param_types [pindex ++] = IntPtrType (); if (sig->hasthis) { cinfo->this_arg_pindex = pindex; param_types [pindex ++] = ThisType (); cinfo->args [0].pindex = cinfo->this_arg_pindex; } if (vretaddr && vret_arg_pindex == pindex) param_types [pindex ++] = IntPtrType (); for (i = 0; i < sig->param_count; ++i) { LLVMArgInfo *ainfo = &cinfo->args [i + sig->hasthis]; if (vretaddr && vret_arg_pindex == pindex) param_types [pindex ++] = IntPtrType (); ainfo->pindex = pindex; switch (ainfo->storage) { case LLVMArgVtypeInReg: for (j = 0; j < 2; ++j) { switch (ainfo->pair_storage [j]) { case LLVMArgInIReg: param_types [pindex ++] = LLVMIntType (TARGET_SIZEOF_VOID_P * 8); break; case LLVMArgNone: break; default: g_assert_not_reached (); } } break; case LLVMArgVtypeByVal: param_types [pindex] = type_to_llvm_arg_type (ctx, ainfo->type); if (!ctx_ok (ctx)) break; param_types [pindex] = LLVMPointerType (param_types [pindex], 0); pindex ++; break; case LLVMArgAsIArgs: if (ainfo->esize == 8) param_types [pindex] = LLVMArrayType (LLVMInt64Type (), ainfo->nslots); else param_types [pindex] = LLVMArrayType (IntPtrType (), ainfo->nslots); pindex ++; break; case LLVMArgVtypeAddr: case LLVMArgVtypeByRef: param_types [pindex] = type_to_llvm_arg_type (ctx, ainfo->type); if (!ctx_ok (ctx)) break; param_types [pindex] = LLVMPointerType (param_types [pindex], 0); pindex ++; break; case LLVMArgAsFpArgs: { int j; /* Emit dummy fp arguments if needed so the rest is passed on the stack */ for (j = 0; j < ainfo->ndummy_fpargs; ++j) param_types [pindex ++] = LLVMDoubleType (); for (j = 0; j < ainfo->nslots; ++j) param_types [pindex ++] = ainfo->esize == 8 ? 
LLVMDoubleType () : LLVMFloatType ();
			break;
		}
		case LLVMArgVtypeAsScalar:
			g_assert_not_reached ();
			break;
		case LLVMArgWasmVtypeAsScalar:
			g_assert (ainfo->esize);
			param_types [pindex ++] = LLVMIntType (ainfo->esize * 8);
			break;
		case LLVMArgGsharedvtFixed:
		case LLVMArgGsharedvtFixedVtype:
			param_types [pindex ++] = LLVMPointerType (type_to_llvm_arg_type (ctx, ainfo->type), 0);
			break;
		case LLVMArgGsharedvtVariable:
			param_types [pindex ++] = LLVMPointerType (IntPtrType (), 0);
			break;
		default:
			param_types [pindex ++] = type_to_llvm_arg_type (ctx, ainfo->type);
			break;
		}
	}
	if (!ctx_ok (ctx)) {
		g_free (param_types);
		return NULL;
	}
	if (vretaddr && vret_arg_pindex == pindex)
		param_types [pindex ++] = IntPtrType ();
	if (ctx->llvm_only && cinfo->rgctx_arg) {
		/* Pass the rgctx as the last argument */
		cinfo->rgctx_arg_pindex = pindex;
		param_types [pindex] = ctx->module->ptr_type;
		pindex ++;
	} else if (ctx->llvm_only && cinfo->dummy_arg) {
		/* Pass a dummy arg last */
		cinfo->dummy_arg_pindex = pindex;
		param_types [pindex] = ctx->module->ptr_type;
		pindex ++;
	}

	res = LLVMFunctionType (ret_type, param_types, pindex, FALSE);
	g_free (param_types);

	return res;
}

static LLVMTypeRef
sig_to_llvm_sig (EmitContext *ctx, MonoMethodSignature *sig)
{
	return sig_to_llvm_sig_full (ctx, sig, NULL);
}

/*
 * LLVMFunctionType0:
 *
 * Create an LLVM function type from the arguments.
 */
static G_GNUC_UNUSED LLVMTypeRef
LLVMFunctionType0 (LLVMTypeRef ReturnType, int IsVarArg)
{
	return LLVMFunctionType (ReturnType, NULL, 0, IsVarArg);
}

/*
 * LLVMFunctionType1:
 *
 * Create an LLVM function type from the arguments.
 */
static G_GNUC_UNUSED LLVMTypeRef
LLVMFunctionType1 (LLVMTypeRef ReturnType, LLVMTypeRef ParamType1, int IsVarArg)
{
	LLVMTypeRef param_types [1];

	param_types [0] = ParamType1;

	return LLVMFunctionType (ReturnType, param_types, 1, IsVarArg);
}

/*
 * LLVMFunctionType2:
 *
 * Create an LLVM function type from the arguments.
 */
static G_GNUC_UNUSED LLVMTypeRef
LLVMFunctionType2 (LLVMTypeRef ReturnType, LLVMTypeRef ParamType1, LLVMTypeRef ParamType2, int IsVarArg)
{
	LLVMTypeRef param_types [2];

	param_types [0] = ParamType1;
	param_types [1] = ParamType2;

	return LLVMFunctionType (ReturnType, param_types, 2, IsVarArg);
}

/*
 * LLVMFunctionType3:
 *
 * Create an LLVM function type from the arguments.
*/ static G_GNUC_UNUSED LLVMTypeRef LLVMFunctionType3 (LLVMTypeRef ReturnType, LLVMTypeRef ParamType1, LLVMTypeRef ParamType2, LLVMTypeRef ParamType3, int IsVarArg) { LLVMTypeRef param_types [3]; param_types [0] = ParamType1; param_types [1] = ParamType2; param_types [2] = ParamType3; return LLVMFunctionType (ReturnType, param_types, 3, IsVarArg); } static G_GNUC_UNUSED LLVMTypeRef LLVMFunctionType4 (LLVMTypeRef ReturnType, LLVMTypeRef ParamType1, LLVMTypeRef ParamType2, LLVMTypeRef ParamType3, LLVMTypeRef ParamType4, int IsVarArg) { LLVMTypeRef param_types [4]; param_types [0] = ParamType1; param_types [1] = ParamType2; param_types [2] = ParamType3; param_types [3] = ParamType4; return LLVMFunctionType (ReturnType, param_types, 4, IsVarArg); } static G_GNUC_UNUSED LLVMTypeRef LLVMFunctionType5 (LLVMTypeRef ReturnType, LLVMTypeRef ParamType1, LLVMTypeRef ParamType2, LLVMTypeRef ParamType3, LLVMTypeRef ParamType4, LLVMTypeRef ParamType5, int IsVarArg) { LLVMTypeRef param_types [5]; param_types [0] = ParamType1; param_types [1] = ParamType2; param_types [2] = ParamType3; param_types [3] = ParamType4; param_types [4] = ParamType5; return LLVMFunctionType (ReturnType, param_types, 5, IsVarArg); } /* * create_builder: * * Create an LLVM builder and remember it so it can be freed later. */ static LLVMBuilderRef create_builder (EmitContext *ctx) { LLVMBuilderRef builder = LLVMCreateBuilder (); if (mono_use_fast_math) mono_llvm_set_fast_math (builder); ctx->builders = g_slist_prepend_mempool (ctx->cfg->mempool, ctx->builders, builder); emit_default_dbg_loc (ctx, builder); return builder; } static char* get_aotconst_name (MonoJumpInfoType type, gconstpointer data, int got_offset) { char *name; int len; switch (type) { case MONO_PATCH_INFO_JIT_ICALL_ID: name = g_strdup_printf ("jit_icall_%s", mono_find_jit_icall_info ((MonoJitICallId)(gsize)data)->name); break; case MONO_PATCH_INFO_JIT_ICALL_ADDR_NOCALL: name = g_strdup_printf ("jit_icall_addr_nocall_%s", mono_find_jit_icall_info ((MonoJitICallId)(gsize)data)->name); break; case MONO_PATCH_INFO_RGCTX_SLOT_INDEX: { MonoJumpInfoRgctxEntry *entry = (MonoJumpInfoRgctxEntry*)data; name = g_strdup_printf ("rgctx_slot_index_%s", mono_rgctx_info_type_to_str (entry->info_type)); break; } case MONO_PATCH_INFO_AOT_MODULE: case MONO_PATCH_INFO_GC_SAFE_POINT_FLAG: case MONO_PATCH_INFO_GC_CARD_TABLE_ADDR: case MONO_PATCH_INFO_GC_NURSERY_START: case MONO_PATCH_INFO_GC_NURSERY_BITS: case MONO_PATCH_INFO_INTERRUPTION_REQUEST_FLAG: name = g_strdup_printf ("%s", mono_ji_type_to_string (type)); len = strlen (name); for (int i = 0; i < len; ++i) name [i] = tolower (name [i]); break; default: name = g_strdup_printf ("%s_%d", mono_ji_type_to_string (type), got_offset); len = strlen (name); for (int i = 0; i < len; ++i) name [i] = tolower (name [i]); break; } return name; } static int compute_aot_got_offset (MonoLLVMModule *module, MonoJumpInfo *ji, LLVMTypeRef llvm_type) { guint32 got_offset = mono_aot_get_got_offset (ji); LLVMTypeRef lookup_type = (LLVMTypeRef) g_hash_table_lookup (module->got_idx_to_type, GINT_TO_POINTER (got_offset)); if (!lookup_type) { lookup_type = llvm_type; } else if (llvm_type != lookup_type) { lookup_type = module->ptr_type; } else { return got_offset; } g_hash_table_insert (module->got_idx_to_type, GINT_TO_POINTER (got_offset), lookup_type); return got_offset; } /* Allocate a GOT slot for TYPE/DATA, and emit IR to load it */ static LLVMValueRef get_aotconst_module (MonoLLVMModule *module, LLVMBuilderRef builder, MonoJumpInfoType type, 
gconstpointer data, LLVMTypeRef llvm_type, guint32 *out_got_offset, MonoJumpInfo **out_ji)
{
	guint32 got_offset;
	LLVMValueRef load;

	MonoJumpInfo tmp_ji;
	tmp_ji.type = type;
	tmp_ji.data.target = data;

	MonoJumpInfo *ji = mono_aot_patch_info_dup (&tmp_ji);

	if (out_ji)
		*out_ji = ji;

	got_offset = compute_aot_got_offset (module, ji, llvm_type);
	module->max_got_offset = MAX (module->max_got_offset, got_offset);

	if (out_got_offset)
		*out_got_offset = got_offset;

	if (module->static_link && type == MONO_PATCH_INFO_GC_SAFE_POINT_FLAG) {
		if (!module->gc_safe_point_flag_var) {
			const char *symbol = "mono_polling_required";
			module->gc_safe_point_flag_var = LLVMAddGlobal (module->lmodule, llvm_type, symbol);
			LLVMSetLinkage (module->gc_safe_point_flag_var, LLVMExternalLinkage);
		}
		return module->gc_safe_point_flag_var;
	}
	if (module->static_link && type == MONO_PATCH_INFO_INTERRUPTION_REQUEST_FLAG) {
		if (!module->interrupt_flag_var) {
			const char *symbol = "mono_thread_interruption_request_flag";
			module->interrupt_flag_var = LLVMAddGlobal (module->lmodule, llvm_type, symbol);
			LLVMSetLinkage (module->interrupt_flag_var, LLVMExternalLinkage);
		}
		return module->interrupt_flag_var;
	}

	LLVMValueRef const_var = g_hash_table_lookup (module->aotconst_vars, GINT_TO_POINTER (got_offset));
	if (!const_var) {
		LLVMTypeRef type = llvm_type;
		// FIXME:
		char *name = get_aotconst_name (ji->type, ji->data.target, got_offset);
		char *symbol = g_strdup_printf ("aotconst_%s", name);
		g_free (name);
		LLVMValueRef v = LLVMAddGlobal (module->lmodule, type, symbol);
		LLVMSetVisibility (v, LLVMHiddenVisibility);
		LLVMSetLinkage (v, LLVMInternalLinkage);
		LLVMSetInitializer (v, LLVMConstNull (type));
		// FIXME:
		LLVMSetAlignment (v, 8);

		g_hash_table_insert (module->aotconst_vars, GINT_TO_POINTER (got_offset), v);
		const_var = v;
	}

	load = LLVMBuildLoad (builder, const_var, "");
	if (mono_aot_is_shared_got_offset (got_offset))
		set_invariant_load_flag (load);
	if (type == MONO_PATCH_INFO_LDSTR)
		set_nonnull_load_flag (load);
	load = LLVMBuildBitCast (builder, load, llvm_type, "");

	return load;
}

static LLVMValueRef
get_aotconst (EmitContext *ctx, MonoJumpInfoType type, gconstpointer data, LLVMTypeRef llvm_type)
{
	MonoCompile *cfg;
	guint32 got_offset;
	MonoJumpInfo *ji;
	LLVMValueRef load;

	cfg = ctx->cfg;

	load = get_aotconst_module (ctx->module, ctx->builder, type, data, llvm_type, &got_offset, &ji);

	ji->next = cfg->patch_info;
	cfg->patch_info = ji;

	/*
	 * If the got slot is shared, it means it's initialized when the aot image is loaded, so we don't need to
	 * explicitly initialize it.
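	 *
	 * For reference, the access emitted by get_aotconst_module () above amounts to a
	 * load through the hidden aotconst global (a sketch, not the exact IR):
	 *
	 *   %v = load i8*, i8** @aotconst_<name>   ; flagged invariant.load for shared slots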
*/ if (!mono_aot_is_shared_got_offset (got_offset)) { //mono_print_ji (ji); //printf ("\n"); ctx->cfg->got_access_count ++; } return load; } static LLVMValueRef get_dummy_aotconst (EmitContext *ctx, LLVMTypeRef llvm_type) { LLVMValueRef indexes [2]; LLVMValueRef got_entry_addr, load; LLVMBuilderRef builder = ctx->builder; indexes [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE); indexes [1] = LLVMConstInt (LLVMInt32Type (), 0, FALSE); got_entry_addr = LLVMBuildGEP (builder, ctx->module->dummy_got_var, indexes, 2, ""); load = LLVMBuildLoad (builder, got_entry_addr, ""); load = convert (ctx, load, llvm_type); return load; } typedef struct { MonoJumpInfo *ji; MonoMethod *method; LLVMValueRef load; LLVMTypeRef type; LLVMValueRef lmethod; } CallSite; static LLVMValueRef get_callee_llvmonly (EmitContext *ctx, LLVMTypeRef llvm_sig, MonoJumpInfoType type, gconstpointer data) { LLVMValueRef callee; char *callee_name = NULL; if (ctx->module->static_link && ctx->module->assembly->image != mono_get_corlib ()) { if (type == MONO_PATCH_INFO_JIT_ICALL_ID) { MonoJitICallInfo * const info = mono_find_jit_icall_info ((MonoJitICallId)(gsize)data); g_assert (info); if (info->func != info->wrapper) { type = MONO_PATCH_INFO_METHOD; data = mono_icall_get_wrapper_method (info); callee_name = mono_aot_get_mangled_method_name ((MonoMethod*)data); } } else if (type == MONO_PATCH_INFO_METHOD) { MonoMethod *method = (MonoMethod*)data; if (m_class_get_image (method->klass) != ctx->module->assembly->image && mono_aot_is_externally_callable (method)) callee_name = mono_aot_get_mangled_method_name (method); } } if (!callee_name) callee_name = mono_aot_get_direct_call_symbol (type, data); if (callee_name) { /* Directly callable */ // FIXME: Locking callee = (LLVMValueRef)g_hash_table_lookup (ctx->module->direct_callables, callee_name); if (!callee) { callee = LLVMAddFunction (ctx->lmodule, callee_name, llvm_sig); LLVMSetVisibility (callee, LLVMHiddenVisibility); g_hash_table_insert (ctx->module->direct_callables, (char*)callee_name, callee); } else { /* LLVMTypeRef's are uniqued */ if (LLVMGetElementType (LLVMTypeOf (callee)) != llvm_sig) return LLVMConstBitCast (callee, LLVMPointerType (llvm_sig, 0)); g_free (callee_name); } return callee; } /* * Change references to icalls/pinvokes/jit icalls to their wrappers when in corlib, so * they can be called directly. */ if (ctx->module->assembly->image == mono_get_corlib () && type == MONO_PATCH_INFO_JIT_ICALL_ID) { MonoJitICallInfo * const info = mono_find_jit_icall_info ((MonoJitICallId)(gsize)data); if (info->func != info->wrapper) { type = MONO_PATCH_INFO_METHOD; data = mono_icall_get_wrapper_method (info); } } if (ctx->module->assembly->image == mono_get_corlib () && type == MONO_PATCH_INFO_METHOD) { MonoMethod *method = (MonoMethod*)data; if (m_method_is_icall (method) || m_method_is_pinvoke (method)) data = mono_marshal_get_native_wrapper (method, TRUE, TRUE); } /* * Instead of emitting an indirect call through a got slot, emit a placeholder, and * replace it with a direct call or an indirect call in mono_llvm_fixup_aot_module () * after all methods have been emitted. 
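	 *
	 * The placeholder is the dummy load created by get_dummy_aotconst (); each call site
	 * is recorded in ctx->callsite_list as a CallSite (see the struct above), so the
	 * fixup pass knows which load instruction to rewrite and with what.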
 */
	if (type == MONO_PATCH_INFO_METHOD) {
		MonoMethod *method = (MonoMethod*)data;
		if (m_class_get_image (method->klass)->assembly == ctx->module->assembly) {
			MonoJumpInfo tmp_ji;
			tmp_ji.type = type;
			tmp_ji.data.target = method;
			MonoJumpInfo *ji = mono_aot_patch_info_dup (&tmp_ji);
			ji->next = ctx->cfg->patch_info;
			ctx->cfg->patch_info = ji;
			LLVMTypeRef llvm_type = LLVMPointerType (llvm_sig, 0);

			ctx->cfg->got_access_count ++;

			CallSite *info = g_new0 (CallSite, 1);
			info->method = method;
			info->ji = ji;
			info->type = llvm_type;

			/*
			 * Emit a dummy load to represent the callee, and either replace it with
			 * a reference to the llvm method for the callee, or with a load from the
			 * GOT.
			 */
			LLVMValueRef load = get_dummy_aotconst (ctx, llvm_type);
			info->load = load;
			info->lmethod = ctx->lmethod;

			g_ptr_array_add (ctx->callsite_list, info);

			return load;
		}
	}

	/*
	 * All other calls are made through the GOT.
	 */
	callee = get_aotconst (ctx, type, data, LLVMPointerType (llvm_sig, 0));

	return callee;
}

/*
 * get_callee:
 *
 * Return an llvm value representing the callee given by the arguments.
 */
static LLVMValueRef
get_callee (EmitContext *ctx, LLVMTypeRef llvm_sig, MonoJumpInfoType type, gconstpointer data)
{
	LLVMValueRef callee;
	char *callee_name;
	MonoJumpInfo *ji = NULL;

	if (ctx->llvm_only)
		return get_callee_llvmonly (ctx, llvm_sig, type, data);

	callee_name = NULL;
	/* Cross-assembly direct calls */
	if (type == MONO_PATCH_INFO_METHOD) {
		MonoMethod *cmethod = (MonoMethod*)data;

		if (m_class_get_image (cmethod->klass) != ctx->module->assembly->image) {
			MonoJumpInfo tmp_ji;

			memset (&tmp_ji, 0, sizeof (MonoJumpInfo));
			tmp_ji.type = type;
			tmp_ji.data.target = data;
			if (mono_aot_is_direct_callable (&tmp_ji)) {
				/*
				 * This will add a reference to cmethod's image so it will
				 * be loaded when the current AOT image is loaded, so
				 * the GOT slots used by the init method code are initialized.
*/ tmp_ji.type = MONO_PATCH_INFO_IMAGE; tmp_ji.data.image = m_class_get_image (cmethod->klass); ji = mono_aot_patch_info_dup (&tmp_ji); mono_aot_get_got_offset (ji); callee_name = mono_aot_get_mangled_method_name (cmethod); callee = (LLVMValueRef)g_hash_table_lookup (ctx->module->direct_callables, callee_name); if (!callee) { callee = LLVMAddFunction (ctx->lmodule, callee_name, llvm_sig); LLVMSetLinkage (callee, LLVMExternalLinkage); g_hash_table_insert (ctx->module->direct_callables, callee_name, callee); } else { /* LLVMTypeRef's are uniqued */ if (LLVMGetElementType (LLVMTypeOf (callee)) != llvm_sig) callee = LLVMConstBitCast (callee, LLVMPointerType (llvm_sig, 0)); g_free (callee_name); } return callee; } } } callee_name = mono_aot_get_plt_symbol (type, data); if (!callee_name) return NULL; if (ctx->cfg->compile_aot) /* Add a patch so referenced wrappers can be compiled in full aot mode */ mono_add_patch_info (ctx->cfg, 0, type, data); // FIXME: Locking callee = (LLVMValueRef)g_hash_table_lookup (ctx->module->plt_entries, callee_name); if (!callee) { callee = LLVMAddFunction (ctx->lmodule, callee_name, llvm_sig); LLVMSetVisibility (callee, LLVMHiddenVisibility); g_hash_table_insert (ctx->module->plt_entries, (char*)callee_name, callee); } if (ctx->cfg->compile_aot) { ji = g_new0 (MonoJumpInfo, 1); ji->type = type; ji->data.target = data; g_hash_table_insert (ctx->module->plt_entries_ji, ji, callee); } return callee; } static LLVMValueRef get_jit_callee (EmitContext *ctx, const char *name, LLVMTypeRef llvm_sig, MonoJumpInfoType type, gconstpointer data) { gpointer target; // This won't be patched so compile the wrapper immediately if (type == MONO_PATCH_INFO_JIT_ICALL_ID) { MonoJitICallInfo * const info = mono_find_jit_icall_info ((MonoJitICallId)(gsize)data); target = (gpointer)mono_icall_get_wrapper_full (info, TRUE); } else { target = resolve_patch (ctx->cfg, type, data); } LLVMValueRef tramp_var = LLVMAddGlobal (ctx->lmodule, LLVMPointerType (llvm_sig, 0), name); LLVMSetInitializer (tramp_var, LLVMConstIntToPtr (LLVMConstInt (LLVMInt64Type (), (guint64)(size_t)target, FALSE), LLVMPointerType (llvm_sig, 0))); LLVMSetLinkage (tramp_var, LLVMExternalLinkage); LLVMValueRef callee = LLVMBuildLoad (ctx->builder, tramp_var, ""); return callee; } static int get_handler_clause (MonoCompile *cfg, MonoBasicBlock *bb) { MonoMethodHeader *header = cfg->header; MonoExceptionClause *clause; int i; /* Directly */ if (bb->region != -1 && MONO_BBLOCK_IS_IN_REGION (bb, MONO_REGION_TRY)) return (bb->region >> 8) - 1; /* Indirectly */ for (i = 0; i < header->num_clauses; ++i) { clause = &header->clauses [i]; if (MONO_OFFSET_IN_CLAUSE (clause, bb->real_offset) && clause->flags == MONO_EXCEPTION_CLAUSE_NONE) return i; } return -1; } static MonoExceptionClause * get_most_deep_clause (MonoCompile *cfg, EmitContext *ctx, MonoBasicBlock *bb) { if (bb == cfg->bb_init) return NULL; // Since they're sorted by nesting we just need // the first one that the bb is a member of for (int i = 0; i < cfg->header->num_clauses; i++) { MonoExceptionClause *curr = &cfg->header->clauses [i]; if (MONO_OFFSET_IN_CLAUSE (curr, bb->real_offset)) return curr; } return NULL; } static void set_metadata_flag (LLVMValueRef v, const char *flag_name) { LLVMValueRef md_arg; int md_kind; md_kind = LLVMGetMDKindID (flag_name, strlen (flag_name)); md_arg = LLVMMDString ("mono", 4); LLVMSetMetadata (v, md_kind, LLVMMDNode (&md_arg, 1)); } static void set_nonnull_load_flag (LLVMValueRef v) { LLVMValueRef md_arg; int md_kind; const char 
*flag_name;

	flag_name = "nonnull";
	md_kind = LLVMGetMDKindID (flag_name, strlen (flag_name));
	md_arg = LLVMMDString ("<index>", strlen ("<index>"));
	LLVMSetMetadata (v, md_kind, LLVMMDNode (&md_arg, 1));
}

static void
set_nontemporal_flag (LLVMValueRef v)
{
	LLVMValueRef md_arg;
	int md_kind;
	const char *flag_name;

	// FIXME: Cache this
	flag_name = "nontemporal";
	md_kind = LLVMGetMDKindID (flag_name, strlen (flag_name));
	md_arg = const_int32 (1);
	LLVMSetMetadata (v, md_kind, LLVMMDNode (&md_arg, 1));
}

static void
set_invariant_load_flag (LLVMValueRef v)
{
	LLVMValueRef md_arg;
	int md_kind;
	const char *flag_name;

	// FIXME: Cache this
	flag_name = "invariant.load";
	md_kind = LLVMGetMDKindID (flag_name, strlen (flag_name));
	md_arg = LLVMMDString ("<index>", strlen ("<index>"));
	LLVMSetMetadata (v, md_kind, LLVMMDNode (&md_arg, 1));
}

/*
 * emit_call:
 *
 * Emit an LLVM call or invoke instruction depending on whether the call is inside
 * a try region.
 */
static LLVMValueRef
emit_call (EmitContext *ctx, MonoBasicBlock *bb, LLVMBuilderRef *builder_ref, LLVMValueRef callee, LLVMValueRef *args, int pindex)
{
	MonoCompile *cfg = ctx->cfg;
	LLVMValueRef lcall = NULL;
	LLVMBuilderRef builder = *builder_ref;
	MonoExceptionClause *clause;

	if (ctx->llvm_only) {
		clause = bb ? get_most_deep_clause (cfg, ctx, bb) : NULL;

		// FIXME: Use an invoke only for calls inside try-catch blocks
		if (clause && (!cfg->deopt || ctx->has_catch)) {
			/*
			 * Have to use an invoke instead of a call, branching to the
			 * handler bblock of the clause containing this bblock.
			 */
			intptr_t key = CLAUSE_END (clause);

			LLVMBasicBlockRef lpad_bb = (LLVMBasicBlockRef)g_hash_table_lookup (ctx->exc_meta, (gconstpointer)key);

			// FIXME: Find the one that has the lowest end bound for the right start address
			// FIXME: Finally + nesting

			if (lpad_bb) {
				LLVMBasicBlockRef noex_bb = gen_bb (ctx, "CALL_NOEX_BB");

				/* Use an invoke */
				lcall = LLVMBuildInvoke (builder, callee, args, pindex, noex_bb, lpad_bb, "");

				builder = ctx->builder = create_builder (ctx);
				LLVMPositionBuilderAtEnd (ctx->builder, noex_bb);

				ctx->bblocks [bb->block_num].end_bblock = noex_bb;
			}
		}
	} else {
		int clause_index = get_handler_clause (cfg, bb);

		if (clause_index != -1) {
			MonoMethodHeader *header = cfg->header;
			MonoExceptionClause *ec = &header->clauses [clause_index];
			MonoBasicBlock *tblock;
			LLVMBasicBlockRef ex_bb, noex_bb;

			/*
			 * Have to use an invoke instead of a call, branching to the
			 * handler bblock of the clause containing this bblock.
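			 *
			 * The resulting call has roughly this shape (illustrative only):
			 *
			 *   %r = invoke ... to label %NOEX_BB unwind label %EX_BB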
 */
			g_assert (ec->flags == MONO_EXCEPTION_CLAUSE_NONE || ec->flags == MONO_EXCEPTION_CLAUSE_FINALLY || ec->flags == MONO_EXCEPTION_CLAUSE_FAULT);

			tblock = cfg->cil_offset_to_bb [ec->handler_offset];
			g_assert (tblock);

			ctx->bblocks [tblock->block_num].invoke_target = TRUE;

			ex_bb = get_bb (ctx, tblock);

			noex_bb = gen_bb (ctx, "NOEX_BB");

			/* Use an invoke */
			lcall = LLVMBuildInvoke (builder, callee, args, pindex, noex_bb, ex_bb, "");

			builder = ctx->builder = create_builder (ctx);
			LLVMPositionBuilderAtEnd (ctx->builder, noex_bb);

			ctx->bblocks [bb->block_num].end_bblock = noex_bb;
		}
	}

	if (!lcall) {
		lcall = LLVMBuildCall (builder, callee, args, pindex, "");
		ctx->builder = builder;
	}

	if (builder_ref)
		*builder_ref = ctx->builder;

	return lcall;
}

static LLVMValueRef
emit_load (EmitContext *ctx, MonoBasicBlock *bb, LLVMBuilderRef *builder_ref, int size, LLVMValueRef addr, LLVMValueRef base, const char *name, gboolean is_faulting, gboolean is_volatile, BarrierKind barrier)
{
	LLVMValueRef res;

	/*
	 * We emit volatile loads for loads which can fault, because otherwise
	 * LLVM will generate invalid code when encountering a load from a
	 * NULL address.
	 */
	if (barrier != LLVM_BARRIER_NONE)
		res = mono_llvm_build_atomic_load (*builder_ref, addr, name, is_volatile, size, barrier);
	else
		res = mono_llvm_build_load (*builder_ref, addr, name, is_volatile);

	return res;
}

static void
emit_store_general (EmitContext *ctx, MonoBasicBlock *bb, LLVMBuilderRef *builder_ref, int size, LLVMValueRef value, LLVMValueRef addr, LLVMValueRef base, gboolean is_faulting, gboolean is_volatile, BarrierKind barrier)
{
	if (barrier != LLVM_BARRIER_NONE)
		mono_llvm_build_aligned_store (*builder_ref, value, addr, barrier, size);
	else
		mono_llvm_build_store (*builder_ref, value, addr, is_volatile, barrier);
}

static void
emit_store (EmitContext *ctx, MonoBasicBlock *bb, LLVMBuilderRef *builder_ref, int size, LLVMValueRef value, LLVMValueRef addr, LLVMValueRef base, gboolean is_faulting, gboolean is_volatile)
{
	emit_store_general (ctx, bb, builder_ref, size, value, addr, base, is_faulting, is_volatile, LLVM_BARRIER_NONE);
}

/*
 * emit_cond_system_exception:
 *
 * Emit code to throw the exception EXC_TYPE if the condition CMP is true.
 * Might set the ctx exception.
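 *
 * The emitted control flow looks like (illustrative only):
 *
 *   br i1 CMP, label %EX_BB, label %NOEX_BB
 *   EX_BB:   call the throw_corlib_exception helper (type token [, pc]); unreachable
 *   NOEX_BB: normal path continues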
 */
static void
emit_cond_system_exception (EmitContext *ctx, MonoBasicBlock *bb, const char *exc_type, LLVMValueRef cmp, gboolean force_explicit)
{
	LLVMBasicBlockRef ex_bb, ex2_bb = NULL, noex_bb;
	LLVMBuilderRef builder;
	MonoClass *exc_class;
	LLVMValueRef args [2];
	LLVMValueRef callee;
	gboolean no_pc = FALSE;
	static MonoClass *exc_classes [MONO_EXC_INTRINS_NUM];

	if (IS_TARGET_AMD64)
		/* Some platforms don't require the pc argument */
		no_pc = TRUE;

	int exc_id = mini_exception_id_by_name (exc_type);
	if (!exc_classes [exc_id])
		exc_classes [exc_id] = mono_class_load_from_name (mono_get_corlib (), "System", exc_type);
	exc_class = exc_classes [exc_id];

	ex_bb = gen_bb (ctx, "EX_BB");
	if (ctx->llvm_only)
		ex2_bb = gen_bb (ctx, "EX2_BB");
	noex_bb = gen_bb (ctx, "NOEX_BB");

	LLVMValueRef branch = LLVMBuildCondBr (ctx->builder, cmp, ex_bb, noex_bb);
	if (exc_id == MONO_EXC_NULL_REF && !ctx->cfg->disable_llvm_implicit_null_checks && !force_explicit) {
		mono_llvm_set_implicit_branch (ctx->builder, branch);
	}

	/* Emit exception throwing code */
	ctx->builder = builder = create_builder (ctx);
	LLVMPositionBuilderAtEnd (builder, ex_bb);

	if (ctx->cfg->llvm_only) {
		LLVMBuildBr (builder, ex2_bb);
		ctx->builder = builder = create_builder (ctx);
		LLVMPositionBuilderAtEnd (ctx->builder, ex2_bb);

		if (exc_id == MONO_EXC_NULL_REF) {
			static LLVMTypeRef sig;

			if (!sig)
				sig = LLVMFunctionType0 (LLVMVoidType (), FALSE);
			/* Can't cache this */
			callee = get_callee (ctx, sig, MONO_PATCH_INFO_JIT_ICALL_ADDR, GUINT_TO_POINTER (MONO_JIT_ICALL_mini_llvmonly_throw_nullref_exception));
			emit_call (ctx, bb, &builder, callee, NULL, 0);
		} else {
			static LLVMTypeRef sig;

			if (!sig)
				sig = LLVMFunctionType1 (LLVMVoidType (), LLVMInt32Type (), FALSE);
			callee = get_callee (ctx, sig, MONO_PATCH_INFO_JIT_ICALL_ADDR, GUINT_TO_POINTER (MONO_JIT_ICALL_mini_llvmonly_throw_corlib_exception));
			args [0] = LLVMConstInt (LLVMInt32Type (), m_class_get_type_token (exc_class) - MONO_TOKEN_TYPE_DEF, FALSE);
			emit_call (ctx, bb, &builder, callee, args, 1);
		}

		LLVMBuildUnreachable (builder);

		ctx->builder = builder = create_builder (ctx);
		LLVMPositionBuilderAtEnd (ctx->builder, noex_bb);

		ctx->bblocks [bb->block_num].end_bblock = noex_bb;

		ctx->ex_index ++;
		return;
	}

	callee = ctx->module->throw_corlib_exception;
	if (!callee) {
		LLVMTypeRef sig;

		if (no_pc)
			sig = LLVMFunctionType1 (LLVMVoidType (), LLVMInt32Type (), FALSE);
		else
			sig = LLVMFunctionType2 (LLVMVoidType (), LLVMInt32Type (), LLVMPointerType (LLVMInt8Type (), 0), FALSE);

		const MonoJitICallId icall_id = MONO_JIT_ICALL_mono_llvm_throw_corlib_exception_abs_trampoline;

		if (ctx->cfg->compile_aot) {
			callee = get_callee (ctx, sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (icall_id));
		} else {
			/*
			 * Differences between the LLVM/non-LLVM throw corlib exception trampoline:
			 * - On x86, LLVM generated code doesn't push the arguments
			 * - The trampoline takes the throw address as an argument, not a pc offset.
			 */
			callee = get_jit_callee (ctx, "llvm_throw_corlib_exception_trampoline", sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (icall_id));

			/*
			 * Make sure that ex_bb starts with the invoke, so the block address points to it, and not to the load
			 * added by get_jit_callee ().
*/ ex2_bb = gen_bb (ctx, "EX2_BB"); LLVMBuildBr (builder, ex2_bb); ex_bb = ex2_bb; ctx->builder = builder = create_builder (ctx); LLVMPositionBuilderAtEnd (ctx->builder, ex2_bb); } } args [0] = LLVMConstInt (LLVMInt32Type (), m_class_get_type_token (exc_class) - MONO_TOKEN_TYPE_DEF, FALSE); /* * The LLVM mono branch contains changes so a block address can be passed as an * argument to a call. */ if (no_pc) { emit_call (ctx, bb, &builder, callee, args, 1); } else { args [1] = LLVMBlockAddress (ctx->lmethod, ex_bb); emit_call (ctx, bb, &builder, callee, args, 2); } LLVMBuildUnreachable (builder); ctx->builder = builder = create_builder (ctx); LLVMPositionBuilderAtEnd (ctx->builder, noex_bb); ctx->bblocks [bb->block_num].end_bblock = noex_bb; ctx->ex_index ++; return; } /* * emit_args_to_vtype: * * Emit code to store the vtype in the arguments args to the address ADDRESS. */ static void emit_args_to_vtype (EmitContext *ctx, LLVMBuilderRef builder, MonoType *t, LLVMValueRef address, LLVMArgInfo *ainfo, LLVMValueRef *args) { int j, size, nslots; MonoClass *klass; t = mini_get_underlying_type (t); klass = mono_class_from_mono_type_internal (t); size = mono_class_value_size (klass, NULL); if (MONO_CLASS_IS_SIMD (ctx->cfg, klass)) address = LLVMBuildBitCast (ctx->builder, address, LLVMPointerType (LLVMInt8Type (), 0), ""); if (ainfo->storage == LLVMArgAsFpArgs) nslots = ainfo->nslots; else nslots = 2; for (j = 0; j < nslots; ++j) { LLVMValueRef index [2], addr, daddr; int part_size = size > TARGET_SIZEOF_VOID_P ? TARGET_SIZEOF_VOID_P : size; LLVMTypeRef part_type; while (part_size != 1 && part_size != 2 && part_size != 4 && part_size < 8) part_size ++; if (ainfo->pair_storage [j] == LLVMArgNone) continue; switch (ainfo->pair_storage [j]) { case LLVMArgInIReg: { part_type = LLVMIntType (part_size * 8); if (MONO_CLASS_IS_SIMD (ctx->cfg, klass)) { index [0] = LLVMConstInt (LLVMInt32Type (), j * TARGET_SIZEOF_VOID_P, FALSE); addr = LLVMBuildGEP (builder, address, index, 1, ""); } else { daddr = LLVMBuildBitCast (ctx->builder, address, LLVMPointerType (IntPtrType (), 0), ""); index [0] = LLVMConstInt (LLVMInt32Type (), j, FALSE); addr = LLVMBuildGEP (builder, daddr, index, 1, ""); } LLVMBuildStore (builder, convert (ctx, args [j], part_type), LLVMBuildBitCast (ctx->builder, addr, LLVMPointerType (part_type, 0), "")); break; } case LLVMArgInFPReg: { LLVMTypeRef arg_type; if (ainfo->esize == 8) arg_type = LLVMDoubleType (); else arg_type = LLVMFloatType (); index [0] = LLVMConstInt (LLVMInt32Type (), j, FALSE); daddr = LLVMBuildBitCast (ctx->builder, address, LLVMPointerType (arg_type, 0), ""); addr = LLVMBuildGEP (builder, daddr, index, 1, ""); LLVMBuildStore (builder, args [j], addr); break; } case LLVMArgNone: break; default: g_assert_not_reached (); } size -= TARGET_SIZEOF_VOID_P; } } /* * emit_vtype_to_args: * * Emit code to load a vtype at address ADDRESS into scalar arguments. Store the arguments * into ARGS, and the number of arguments into NARGS. 
*/ static void emit_vtype_to_args (EmitContext *ctx, LLVMBuilderRef builder, MonoType *t, LLVMValueRef address, LLVMArgInfo *ainfo, LLVMValueRef *args, guint32 *nargs) { int pindex = 0; int j, nslots; LLVMTypeRef arg_type; t = mini_get_underlying_type (t); int32_t size = get_vtype_size_align (t).size; if (MONO_CLASS_IS_SIMD (ctx->cfg, mono_class_from_mono_type_internal (t))) address = LLVMBuildBitCast (ctx->builder, address, LLVMPointerType (LLVMInt8Type (), 0), ""); if (ainfo->storage == LLVMArgAsFpArgs) nslots = ainfo->nslots; else nslots = 2; for (j = 0; j < nslots; ++j) { LLVMValueRef index [2], addr, daddr; int partsize = size > TARGET_SIZEOF_VOID_P ? TARGET_SIZEOF_VOID_P : size; if (ainfo->pair_storage [j] == LLVMArgNone) continue; switch (ainfo->pair_storage [j]) { case LLVMArgInIReg: if (MONO_CLASS_IS_SIMD (ctx->cfg, mono_class_from_mono_type_internal (t))) { index [0] = LLVMConstInt (LLVMInt32Type (), j * TARGET_SIZEOF_VOID_P, FALSE); addr = LLVMBuildGEP (builder, address, index, 1, ""); } else { daddr = LLVMBuildBitCast (ctx->builder, address, LLVMPointerType (IntPtrType (), 0), ""); index [0] = LLVMConstInt (LLVMInt32Type (), j, FALSE); addr = LLVMBuildGEP (builder, daddr, index, 1, ""); } args [pindex ++] = convert (ctx, LLVMBuildLoad (builder, LLVMBuildBitCast (ctx->builder, addr, LLVMPointerType (LLVMIntType (partsize * 8), 0), ""), ""), IntPtrType ()); break; case LLVMArgInFPReg: if (ainfo->esize == 8) arg_type = LLVMDoubleType (); else arg_type = LLVMFloatType (); daddr = LLVMBuildBitCast (ctx->builder, address, LLVMPointerType (arg_type, 0), ""); index [0] = LLVMConstInt (LLVMInt32Type (), j, FALSE); addr = LLVMBuildGEP (builder, daddr, index, 1, ""); args [pindex ++] = LLVMBuildLoad (builder, addr, ""); break; case LLVMArgNone: break; default: g_assert_not_reached (); } size -= TARGET_SIZEOF_VOID_P; } *nargs = pindex; } static LLVMValueRef build_alloca_llvm_type_name (EmitContext *ctx, LLVMTypeRef t, int align, const char *name) { /* * Have to place all alloca's at the end of the entry bb, since otherwise they would * get executed every time control reaches them. */ LLVMPositionBuilder (ctx->alloca_builder, get_bb (ctx, ctx->cfg->bb_entry), ctx->last_alloca); ctx->last_alloca = mono_llvm_build_alloca (ctx->alloca_builder, t, NULL, align, name); return ctx->last_alloca; } static LLVMValueRef build_alloca_llvm_type (EmitContext *ctx, LLVMTypeRef t, int align) { return build_alloca_llvm_type_name (ctx, t, align, ""); } static LLVMValueRef build_named_alloca (EmitContext *ctx, MonoType *t, char const *name) { MonoClass *k = mono_class_from_mono_type_internal (t); int align; g_assert (!mini_is_gsharedvt_variable_type (t)); if (MONO_CLASS_IS_SIMD (ctx->cfg, k)) align = mono_class_value_size (k, NULL); else align = mono_class_min_align (k); /* Sometimes align is not a power of 2 */ while (mono_is_power_of_two (align) == -1) align ++; return build_alloca_llvm_type_name (ctx, type_to_llvm_type (ctx, t), align, name); } static LLVMValueRef build_alloca (EmitContext *ctx, MonoType *t) { return build_named_alloca (ctx, t, ""); } static LLVMValueRef emit_gsharedvt_ldaddr (EmitContext *ctx, int vreg) { /* * gsharedvt local. * Compute the address of the local as gsharedvt_locals_var + gsharedvt_info_var->locals_offsets [idx]. 
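	 *
	 * In C-like pseudocode (a sketch of the IR emitted below):
	 *
	 *   offset = *(gint32*)((char*)info + MONO_STRUCT_OFFSET (MonoGSharedVtMethodRuntimeInfo, entries) + idx * sizeof (gpointer));
	 *   addr = (char*)locals + offset;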
 */
	MonoCompile *cfg = ctx->cfg;
	LLVMBuilderRef builder = ctx->builder;
	LLVMValueRef offset, offset_var;
	LLVMValueRef info_var = ctx->values [cfg->gsharedvt_info_var->dreg];
	LLVMValueRef locals_var = ctx->values [cfg->gsharedvt_locals_var->dreg];
	LLVMValueRef ptr;
	char *name;

	g_assert (info_var);
	g_assert (locals_var);

	int idx = cfg->gsharedvt_vreg_to_idx [vreg] - 1;

	offset = LLVMConstInt (LLVMInt32Type (), MONO_STRUCT_OFFSET (MonoGSharedVtMethodRuntimeInfo, entries) + (idx * TARGET_SIZEOF_VOID_P), FALSE);
	ptr = LLVMBuildAdd (builder, convert (ctx, info_var, IntPtrType ()), convert (ctx, offset, IntPtrType ()), "");

	name = g_strdup_printf ("gsharedvt_local_%d_offset", vreg);
	offset_var = LLVMBuildLoad (builder, convert (ctx, ptr, LLVMPointerType (LLVMInt32Type (), 0)), name);

	return LLVMBuildAdd (builder, convert (ctx, locals_var, IntPtrType ()), convert (ctx, offset_var, IntPtrType ()), "");
}

/*
 * Put the global into the 'llvm.used' array to prevent it from being optimized away.
 */
static void
mark_as_used (MonoLLVMModule *module, LLVMValueRef global)
{
	if (!module->used)
		module->used = g_ptr_array_sized_new (16);
	g_ptr_array_add (module->used, global);
}

static void
emit_llvm_used (MonoLLVMModule *module)
{
	LLVMModuleRef lmodule = module->lmodule;
	LLVMTypeRef used_type;
	LLVMValueRef used, *used_elem;
	int i;

	if (!module->used)
		return;

	used_type = LLVMArrayType (LLVMPointerType (LLVMInt8Type (), 0), module->used->len);
	used = LLVMAddGlobal (lmodule, used_type, "llvm.used");
	used_elem = g_new0 (LLVMValueRef, module->used->len);
	for (i = 0; i < module->used->len; ++i)
		used_elem [i] = LLVMConstBitCast ((LLVMValueRef)g_ptr_array_index (module->used, i), LLVMPointerType (LLVMInt8Type (), 0));
	LLVMSetInitializer (used, LLVMConstArray (LLVMPointerType (LLVMInt8Type (), 0), used_elem, module->used->len));
	LLVMSetLinkage (used, LLVMAppendingLinkage);
	LLVMSetSection (used, "llvm.metadata");
}

/*
 * emit_get_method:
 *
 * Emit a function mapping method indexes to their code
 */
static void
emit_get_method (MonoLLVMModule *module)
{
	LLVMModuleRef lmodule = module->lmodule;
	LLVMValueRef func, switch_ins, m;
	LLVMBasicBlockRef entry_bb, fail_bb, bb, code_start_bb, code_end_bb, main_bb;
	LLVMBasicBlockRef *bbs = NULL;
	LLVMTypeRef rtype;
	LLVMBuilderRef builder = LLVMCreateBuilder ();
	LLVMValueRef table = NULL;
	char *name;
	int i;
	gboolean emit_table = FALSE;

#ifdef TARGET_WASM
	/*
	 * Emit a table of functions instead of a switch statement,
	 * it's very efficient on wasm. This might be usable on
	 * other platforms too.
	 */
	emit_table = TRUE;
#endif

	rtype = LLVMPointerType (LLVMInt8Type (), 0);

	int table_len = module->max_method_idx + 1;

	if (emit_table) {
		LLVMTypeRef table_type;
		LLVMValueRef *table_elems;
		char *table_name;

		table_type = LLVMArrayType (rtype, table_len);
		table_name = g_strdup_printf ("%s_method_table", module->global_prefix);
		table = LLVMAddGlobal (lmodule, table_type, table_name);
		table_elems = g_new0 (LLVMValueRef, table_len);
		for (i = 0; i < table_len; ++i) {
			m = (LLVMValueRef)g_hash_table_lookup (module->idx_to_lmethod, GINT_TO_POINTER (i));
			if (m && !g_hash_table_lookup (module->no_method_table_lmethods, m))
				table_elems [i] = LLVMBuildBitCast (builder, m, rtype, "");
			else
				table_elems [i] = LLVMConstNull (rtype);
		}
		LLVMSetInitializer (table, LLVMConstArray (LLVMPointerType (LLVMInt8Type (), 0), table_elems, table_len));
	}

	/*
	 * Emit a switch statement. Emitting a table of function addresses is smaller/faster,
	 * but generating code seems safer.
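	 *
	 * Behaviorally, the generated function is (illustrative only):
	 *
	 *   gpointer get_method (int idx) {
	 *       switch (idx) {
	 *       case -1: return llvm_code_start;
	 *       case -2: return llvm_code_end;
	 *       default: return idx <= max_method_idx ? method [idx] : NULL;
	 *       }
	 *   }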
*/ func = LLVMAddFunction (lmodule, module->get_method_symbol, LLVMFunctionType1 (rtype, LLVMInt32Type (), FALSE)); LLVMSetLinkage (func, LLVMExternalLinkage); LLVMSetVisibility (func, LLVMHiddenVisibility); mono_llvm_add_func_attr (func, LLVM_ATTR_NO_UNWIND); module->get_method = func; entry_bb = LLVMAppendBasicBlock (func, "ENTRY"); /* * Return llvm_code_start/llvm_code_end when called with -1/-2. * Hopefully, the toolchain doesn't reorder these functions. If it does, * then we will have to find another solution. */ name = g_strdup_printf ("BB_CODE_START"); code_start_bb = LLVMAppendBasicBlock (func, name); g_free (name); LLVMPositionBuilderAtEnd (builder, code_start_bb); LLVMBuildRet (builder, LLVMBuildBitCast (builder, module->code_start, rtype, "")); name = g_strdup_printf ("BB_CODE_END"); code_end_bb = LLVMAppendBasicBlock (func, name); g_free (name); LLVMPositionBuilderAtEnd (builder, code_end_bb); LLVMBuildRet (builder, LLVMBuildBitCast (builder, module->code_end, rtype, "")); if (emit_table) { /* * Because table_len is computed using the method indexes available for us, it * might not include methods which are not compiled because of AOT profiles. * So table_len can be smaller than info->nmethods. Add a bounds check because * of that. * switch (index) { * case -1: return code_start; * case -2: return code_end; * default: return index < table_len ? method_table [index] : 0; */ fail_bb = LLVMAppendBasicBlock (func, "FAIL"); LLVMPositionBuilderAtEnd (builder, fail_bb); LLVMBuildRet (builder, LLVMBuildIntToPtr (builder, LLVMConstInt (LLVMInt32Type (), 0, FALSE), rtype, "")); main_bb = LLVMAppendBasicBlock (func, "MAIN"); LLVMPositionBuilderAtEnd (builder, main_bb); LLVMValueRef base = table; LLVMValueRef indexes [2]; indexes [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE); indexes [1] = LLVMGetParam (func, 0); LLVMValueRef addr = LLVMBuildGEP (builder, base, indexes, 2, ""); LLVMValueRef res = mono_llvm_build_load (builder, addr, "", FALSE); LLVMBuildRet (builder, res); LLVMBasicBlockRef default_bb = LLVMAppendBasicBlock (func, "DEFAULT"); LLVMPositionBuilderAtEnd (builder, default_bb); LLVMValueRef cmp = LLVMBuildICmp (builder, LLVMIntSGE, LLVMGetParam (func, 0), LLVMConstInt (LLVMInt32Type (), table_len, FALSE), ""); LLVMBuildCondBr (builder, cmp, fail_bb, main_bb); LLVMPositionBuilderAtEnd (builder, entry_bb); switch_ins = LLVMBuildSwitch (builder, LLVMGetParam (func, 0), default_bb, 0); LLVMAddCase (switch_ins, LLVMConstInt (LLVMInt32Type (), -1, FALSE), code_start_bb); LLVMAddCase (switch_ins, LLVMConstInt (LLVMInt32Type (), -2, FALSE), code_end_bb); } else { bbs = g_new0 (LLVMBasicBlockRef, module->max_method_idx + 1); for (i = 0; i < module->max_method_idx + 1; ++i) { name = g_strdup_printf ("BB_%d", i); bb = LLVMAppendBasicBlock (func, name); g_free (name); bbs [i] = bb; LLVMPositionBuilderAtEnd (builder, bb); m = (LLVMValueRef)g_hash_table_lookup (module->idx_to_lmethod, GINT_TO_POINTER (i)); if (m && !g_hash_table_lookup (module->no_method_table_lmethods, m)) LLVMBuildRet (builder, LLVMBuildBitCast (builder, m, rtype, "")); else LLVMBuildRet (builder, LLVMConstNull (rtype)); } fail_bb = LLVMAppendBasicBlock (func, "FAIL"); LLVMPositionBuilderAtEnd (builder, fail_bb); LLVMBuildRet (builder, LLVMConstNull (rtype)); LLVMPositionBuilderAtEnd (builder, entry_bb); switch_ins = LLVMBuildSwitch (builder, LLVMGetParam (func, 0), fail_bb, 0); LLVMAddCase (switch_ins, LLVMConstInt (LLVMInt32Type (), -1, FALSE), code_start_bb); LLVMAddCase (switch_ins, LLVMConstInt (LLVMInt32Type (), 
-2, FALSE), code_end_bb); for (i = 0; i < module->max_method_idx + 1; ++i) { LLVMAddCase (switch_ins, LLVMConstInt (LLVMInt32Type (), i, FALSE), bbs [i]); } } mark_as_used (module, func); LLVMDisposeBuilder (builder); } /* * emit_get_unbox_tramp: * * Emit a function mapping method indexes to their unbox trampoline */ static void emit_get_unbox_tramp (MonoLLVMModule *module) { LLVMModuleRef lmodule = module->lmodule; LLVMValueRef func, switch_ins, m; LLVMBasicBlockRef entry_bb, fail_bb, bb; LLVMBasicBlockRef *bbs; LLVMTypeRef rtype; LLVMBuilderRef builder = LLVMCreateBuilder (); char *name; int i; gboolean emit_table = FALSE; /* Similar to emit_get_method () */ #ifndef TARGET_WATCHOS emit_table = TRUE; #endif rtype = LLVMPointerType (LLVMInt8Type (), 0); if (emit_table) { // About 10% of methods have an unbox tramp, so emit a table of indexes for them // that the runtime can search using a binary search int len = 0; for (i = 0; i < module->max_method_idx + 1; ++i) { m = (LLVMValueRef)g_hash_table_lookup (module->idx_to_unbox_tramp, GINT_TO_POINTER (i)); if (m) len ++; } LLVMTypeRef table_type, elemtype; LLVMValueRef *table_elems; LLVMValueRef table; char *table_name; int table_len; int elemsize; table_len = len; elemsize = module->max_method_idx < 65000 ? 2 : 4; // The index table elemtype = elemsize == 2 ? LLVMInt16Type () : LLVMInt32Type (); table_type = LLVMArrayType (elemtype, table_len); table_name = g_strdup_printf ("%s_unbox_tramp_indexes", module->global_prefix); table = LLVMAddGlobal (lmodule, table_type, table_name); table_elems = g_new0 (LLVMValueRef, table_len); int idx = 0; for (i = 0; i < module->max_method_idx + 1; ++i) { m = (LLVMValueRef)g_hash_table_lookup (module->idx_to_unbox_tramp, GINT_TO_POINTER (i)); if (m) table_elems [idx ++] = LLVMConstInt (elemtype, i, FALSE); } LLVMSetInitializer (table, LLVMConstArray (elemtype, table_elems, table_len)); module->unbox_tramp_indexes = table; // The trampoline table elemtype = rtype; table_type = LLVMArrayType (elemtype, table_len); table_name = g_strdup_printf ("%s_unbox_trampolines", module->global_prefix); table = LLVMAddGlobal (lmodule, table_type, table_name); table_elems = g_new0 (LLVMValueRef, table_len); idx = 0; for (i = 0; i < module->max_method_idx + 1; ++i) { m = (LLVMValueRef)g_hash_table_lookup (module->idx_to_unbox_tramp, GINT_TO_POINTER (i)); if (m) table_elems [idx ++] = LLVMBuildBitCast (builder, m, rtype, ""); } LLVMSetInitializer (table, LLVMConstArray (elemtype, table_elems, table_len)); module->unbox_trampolines = table; module->unbox_tramp_num = table_len; module->unbox_tramp_elemsize = elemsize; return; } func = LLVMAddFunction (lmodule, module->get_unbox_tramp_symbol, LLVMFunctionType1 (rtype, LLVMInt32Type (), FALSE)); LLVMSetLinkage (func, LLVMExternalLinkage); LLVMSetVisibility (func, LLVMHiddenVisibility); mono_llvm_add_func_attr (func, LLVM_ATTR_NO_UNWIND); module->get_unbox_tramp = func; entry_bb = LLVMAppendBasicBlock (func, "ENTRY"); bbs = g_new0 (LLVMBasicBlockRef, module->max_method_idx + 1); for (i = 0; i < module->max_method_idx + 1; ++i) { m = (LLVMValueRef)g_hash_table_lookup (module->idx_to_unbox_tramp, GINT_TO_POINTER (i)); if (!m) continue; name = g_strdup_printf ("BB_%d", i); bb = LLVMAppendBasicBlock (func, name); g_free (name); bbs [i] = bb; LLVMPositionBuilderAtEnd (builder, bb); LLVMBuildRet (builder, LLVMBuildBitCast (builder, m, rtype, "")); } fail_bb = LLVMAppendBasicBlock (func, "FAIL"); LLVMPositionBuilderAtEnd (builder, fail_bb); LLVMBuildRet (builder, LLVMConstNull (rtype)); 
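	/*
	 * Dispatch on the method index: every index which has an unbox trampoline gets
	 * its own switch case below returning the trampoline address; all other indexes
	 * fall through to FAIL, which returns NULL.
	 */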
LLVMPositionBuilderAtEnd (builder, entry_bb); switch_ins = LLVMBuildSwitch (builder, LLVMGetParam (func, 0), fail_bb, 0); for (i = 0; i < module->max_method_idx + 1; ++i) { m = (LLVMValueRef)g_hash_table_lookup (module->idx_to_unbox_tramp, GINT_TO_POINTER (i)); if (!m) continue; LLVMAddCase (switch_ins, LLVMConstInt (LLVMInt32Type (), i, FALSE), bbs [i]); } mark_as_used (module, func); LLVMDisposeBuilder (builder); } /* * emit_init_aotconst: * * Emit a function to initialize the aotconst_ variables. Called by the runtime. */ static void emit_init_aotconst (MonoLLVMModule *module) { LLVMModuleRef lmodule = module->lmodule; LLVMValueRef func; LLVMBasicBlockRef entry_bb; LLVMBuilderRef builder = LLVMCreateBuilder (); func = LLVMAddFunction (lmodule, module->init_aotconst_symbol, LLVMFunctionType2 (LLVMVoidType (), LLVMInt32Type (), IntPtrType (), FALSE)); LLVMSetLinkage (func, LLVMExternalLinkage); LLVMSetVisibility (func, LLVMHiddenVisibility); mono_llvm_add_func_attr (func, LLVM_ATTR_NO_UNWIND); module->init_aotconst_func = func; entry_bb = LLVMAppendBasicBlock (func, "ENTRY"); LLVMPositionBuilderAtEnd (builder, entry_bb); #ifdef TARGET_WASM /* Emit a table of aotconst addresses instead of a switch statement to save space */ LLVMValueRef aotconsts; LLVMTypeRef aotconst_addr_type = LLVMPointerType (module->ptr_type, 0); int table_size = module->max_got_offset + 1; LLVMTypeRef aotconst_arr_type = LLVMArrayType (aotconst_addr_type, table_size); LLVMValueRef aotconst_dummy = LLVMAddGlobal (module->lmodule, module->ptr_type, "aotconst_dummy"); LLVMSetInitializer (aotconst_dummy, LLVMConstNull (module->ptr_type)); LLVMSetVisibility (aotconst_dummy, LLVMHiddenVisibility); LLVMSetLinkage (aotconst_dummy, LLVMInternalLinkage); aotconsts = LLVMAddGlobal (module->lmodule, aotconst_arr_type, "aotconsts"); LLVMValueRef *aotconst_init = g_new0 (LLVMValueRef, table_size); for (int i = 0; i < table_size; ++i) { LLVMValueRef aotconst = (LLVMValueRef)g_hash_table_lookup (module->aotconst_vars, GINT_TO_POINTER (i)); if (aotconst) aotconst_init [i] = LLVMConstBitCast (aotconst, aotconst_addr_type); else aotconst_init [i] = LLVMConstBitCast (aotconst_dummy, aotconst_addr_type); } LLVMSetInitializer (aotconsts, LLVMConstArray (aotconst_addr_type, aotconst_init, table_size)); LLVMSetVisibility (aotconsts, LLVMHiddenVisibility); LLVMSetLinkage (aotconsts, LLVMInternalLinkage); LLVMBasicBlockRef exit_bb = LLVMAppendBasicBlock (func, "EXIT_BB"); LLVMBasicBlockRef main_bb = LLVMAppendBasicBlock (func, "BB"); LLVMValueRef cmp = LLVMBuildICmp (builder, LLVMIntSGE, LLVMGetParam (func, 0), LLVMConstInt (LLVMInt32Type (), table_size, FALSE), ""); LLVMBuildCondBr (builder, cmp, exit_bb, main_bb); LLVMPositionBuilderAtEnd (builder, main_bb); LLVMValueRef indexes [2]; indexes [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE); indexes [1] = LLVMGetParam (func, 0); LLVMValueRef aotconst_addr = LLVMBuildLoad (builder, LLVMBuildGEP (builder, aotconsts, indexes, 2, ""), ""); LLVMBuildStore (builder, LLVMBuildIntToPtr (builder, LLVMGetParam (func, 1), module->ptr_type, ""), aotconst_addr); LLVMBuildBr (builder, exit_bb); LLVMPositionBuilderAtEnd (builder, exit_bb); LLVMBuildRetVoid (builder); #else LLVMValueRef switch_ins; LLVMBasicBlockRef fail_bb, bb; LLVMBasicBlockRef *bbs = NULL; char *name; bbs = g_new0 (LLVMBasicBlockRef, module->max_got_offset + 1); for (int i = 0; i < module->max_got_offset + 1; ++i) { name = g_strdup_printf ("BB_%d", i); bb = LLVMAppendBasicBlock (func, name); g_free (name); bbs [i] = bb; 
LLVMPositionBuilderAtEnd (builder, bb); LLVMValueRef var = g_hash_table_lookup (module->aotconst_vars, GINT_TO_POINTER (i)); if (var) { LLVMValueRef addr = LLVMBuildBitCast (builder, var, LLVMPointerType (IntPtrType (), 0), ""); LLVMBuildStore (builder, LLVMGetParam (func, 1), addr); } LLVMBuildRetVoid (builder); } fail_bb = LLVMAppendBasicBlock (func, "FAIL"); LLVMPositionBuilderAtEnd (builder, fail_bb); LLVMBuildRetVoid (builder); LLVMPositionBuilderAtEnd (builder, entry_bb); switch_ins = LLVMBuildSwitch (builder, LLVMGetParam (func, 0), fail_bb, 0); for (int i = 0; i < module->max_got_offset + 1; ++i) LLVMAddCase (switch_ins, LLVMConstInt (LLVMInt32Type (), i, FALSE), bbs [i]); #endif LLVMDisposeBuilder (builder); } /* Add a function to mark the beginning of LLVM code */ static void emit_llvm_code_start (MonoLLVMModule *module) { LLVMModuleRef lmodule = module->lmodule; LLVMValueRef func; LLVMBasicBlockRef entry_bb; LLVMBuilderRef builder; func = LLVMAddFunction (lmodule, "llvm_code_start", LLVMFunctionType (LLVMVoidType (), NULL, 0, FALSE)); LLVMSetLinkage (func, LLVMInternalLinkage); mono_llvm_add_func_attr (func, LLVM_ATTR_NO_UNWIND); module->code_start = func; entry_bb = LLVMAppendBasicBlock (func, "ENTRY"); builder = LLVMCreateBuilder (); LLVMPositionBuilderAtEnd (builder, entry_bb); LLVMBuildRetVoid (builder); LLVMDisposeBuilder (builder); } /* * emit_init_func: * * Emit functions to initialize LLVM methods. * These are wrappers around the mini_llvm_init_method () JIT icall. * The wrappers handle adding the 'amodule' argument, loading the vtable from different locations, and they have * a cold calling convention. */ static LLVMValueRef emit_init_func (MonoLLVMModule *module, MonoAotInitSubtype subtype) { LLVMModuleRef lmodule = module->lmodule; LLVMValueRef func, indexes [2], args [16], callee, info_var, index_var, inited_var, cmp; LLVMBasicBlockRef entry_bb, inited_bb, notinited_bb; LLVMBuilderRef builder; LLVMTypeRef icall_sig; const char *wrapper_name = mono_marshal_get_aot_init_wrapper_name (subtype); LLVMTypeRef func_type = NULL; LLVMTypeRef arg_type = module->ptr_type; char *name = g_strdup_printf ("%s_%s", module->global_prefix, wrapper_name); switch (subtype) { case AOT_INIT_METHOD: func_type = LLVMFunctionType1 (LLVMVoidType (), arg_type, FALSE); break; case AOT_INIT_METHOD_GSHARED_MRGCTX: case AOT_INIT_METHOD_GSHARED_VTABLE: func_type = LLVMFunctionType2 (LLVMVoidType (), arg_type, IntPtrType (), FALSE); break; case AOT_INIT_METHOD_GSHARED_THIS: func_type = LLVMFunctionType2 (LLVMVoidType (), arg_type, ObjRefType (), FALSE); break; default: g_assert_not_reached (); } func = LLVMAddFunction (lmodule, name, func_type); info_var = LLVMGetParam (func, 0); LLVMSetLinkage (func, LLVMInternalLinkage); mono_llvm_add_func_attr (func, LLVM_ATTR_NO_INLINE); set_cold_cconv (func); entry_bb = LLVMAppendBasicBlock (func, "ENTRY"); builder = LLVMCreateBuilder (); LLVMPositionBuilderAtEnd (builder, entry_bb); /* Load method_index which is emitted at the start of the method info */ indexes [0] = const_int32 (0); indexes [1] = const_int32 (0); // FIXME: Make sure its aligned index_var = LLVMBuildLoad (builder, LLVMBuildGEP (builder, LLVMBuildBitCast (builder, info_var, LLVMPointerType (LLVMInt32Type (), 0), ""), indexes, 1, ""), "method_index"); /* Check for is_inited here as well, since this can be called from JITted code which might not check it */ indexes [0] = const_int32 (0); indexes [1] = index_var; inited_var = LLVMBuildLoad (builder, LLVMBuildGEP (builder, module->inited_var, 
indexes, 2, ""), "is_inited");

	cmp = LLVMBuildICmp (builder, LLVMIntEQ, inited_var, LLVMConstInt (LLVMTypeOf (inited_var), 0, FALSE), "");

	inited_bb = LLVMAppendBasicBlock (func, "INITED");
	notinited_bb = LLVMAppendBasicBlock (func, "NOT_INITED");

	LLVMBuildCondBr (builder, cmp, notinited_bb, inited_bb);

	LLVMPositionBuilderAtEnd (builder, notinited_bb);

	LLVMValueRef amodule_var = get_aotconst_module (module, builder, MONO_PATCH_INFO_AOT_MODULE, NULL, LLVMPointerType (IntPtrType (), 0), NULL, NULL);

	args [0] = LLVMBuildPtrToInt (builder, module->info_var, IntPtrType (), "");
	args [1] = LLVMBuildPtrToInt (builder, amodule_var, IntPtrType (), "");
	args [2] = info_var;

	switch (subtype) {
	case AOT_INIT_METHOD:
		args [3] = LLVMConstNull (IntPtrType ());
		break;
	case AOT_INIT_METHOD_GSHARED_VTABLE:
		args [3] = LLVMGetParam (func, 1);
		break;
	case AOT_INIT_METHOD_GSHARED_THIS:
		/* Load this->vtable */
		args [3] = LLVMBuildBitCast (builder, LLVMGetParam (func, 1), LLVMPointerType (IntPtrType (), 0), "");
		indexes [0] = const_int32 (MONO_STRUCT_OFFSET (MonoObject, vtable) / SIZEOF_VOID_P);
		args [3] = LLVMBuildLoad (builder, LLVMBuildGEP (builder, args [3], indexes, 1, ""), "vtable");
		break;
	case AOT_INIT_METHOD_GSHARED_MRGCTX:
		/* Load mrgctx->vtable */
		args [3] = LLVMBuildIntToPtr (builder, LLVMGetParam (func, 1), LLVMPointerType (IntPtrType (), 0), "");
		indexes [0] = const_int32 (MONO_STRUCT_OFFSET (MonoMethodRuntimeGenericContext, class_vtable) / SIZEOF_VOID_P);
		args [3] = LLVMBuildLoad (builder, LLVMBuildGEP (builder, args [3], indexes, 1, ""), "vtable");
		break;
	default:
		g_assert_not_reached ();
		break;
	}

	/* Call the mini_llvm_init_method JIT icall */
	icall_sig = LLVMFunctionType4 (LLVMVoidType (), IntPtrType (), IntPtrType (), arg_type, IntPtrType (), FALSE);
	callee = get_aotconst_module (module, builder, MONO_PATCH_INFO_JIT_ICALL_ID, GINT_TO_POINTER (MONO_JIT_ICALL_mini_llvm_init_method), LLVMPointerType (icall_sig, 0), NULL, NULL);
	LLVMBuildCall (builder, callee, args, LLVMCountParamTypes (icall_sig), "");

	/*
	 * Set the inited flag
	 * This is already done by the LLVM methods themselves, but it's needed by JITted methods.
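	 * module->inited_var is a byte array indexed by method_index; the store below
	 * writes 1 into the slot for this method.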
*/ indexes [0] = const_int32 (0); indexes [1] = index_var; LLVMBuildStore (builder, LLVMConstInt (LLVMInt8Type (), 1, FALSE), LLVMBuildGEP (builder, module->inited_var, indexes, 2, "")); LLVMBuildBr (builder, inited_bb); LLVMPositionBuilderAtEnd (builder, inited_bb); LLVMBuildRetVoid (builder); LLVMVerifyFunction (func, LLVMAbortProcessAction); LLVMDisposeBuilder (builder); g_free (name); return func; } /* Emit a wrapper around the parameterless JIT icall ICALL_ID with a cold calling convention */ static LLVMValueRef emit_icall_cold_wrapper (MonoLLVMModule *module, LLVMModuleRef lmodule, MonoJitICallId icall_id, gboolean aot) { LLVMValueRef func, callee; LLVMBasicBlockRef entry_bb; LLVMBuilderRef builder; LLVMTypeRef sig; char *name; name = g_strdup_printf ("%s_icall_cold_wrapper_%d", module->global_prefix, icall_id); func = LLVMAddFunction (lmodule, name, LLVMFunctionType (LLVMVoidType (), NULL, 0, FALSE)); sig = LLVMFunctionType (LLVMVoidType (), NULL, 0, FALSE); LLVMSetLinkage (func, LLVMInternalLinkage); mono_llvm_add_func_attr (func, LLVM_ATTR_NO_INLINE); set_cold_cconv (func); entry_bb = LLVMAppendBasicBlock (func, "ENTRY"); builder = LLVMCreateBuilder (); LLVMPositionBuilderAtEnd (builder, entry_bb); if (aot) { callee = get_aotconst_module (module, builder, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (icall_id), LLVMPointerType (sig, 0), NULL, NULL); } else { MonoJitICallInfo * const info = mono_find_jit_icall_info (icall_id); gpointer target = (gpointer)mono_icall_get_wrapper_full (info, TRUE); LLVMValueRef tramp_var = LLVMAddGlobal (lmodule, LLVMPointerType (sig, 0), name); LLVMSetInitializer (tramp_var, LLVMConstIntToPtr (LLVMConstInt (LLVMInt64Type (), (guint64)(size_t)target, FALSE), LLVMPointerType (sig, 0))); LLVMSetLinkage (tramp_var, LLVMExternalLinkage); callee = LLVMBuildLoad (builder, tramp_var, ""); } LLVMBuildCall (builder, callee, NULL, 0, ""); LLVMBuildRetVoid (builder); LLVMVerifyFunction(func, LLVMAbortProcessAction); LLVMDisposeBuilder (builder); return func; } /* * Emit wrappers around the C icalls used to initialize llvm methods, to * make the calling code smaller and to enable usage of the llvm * cold calling convention. */ static void emit_init_funcs (MonoLLVMModule *module) { for (int i = 0; i < AOT_INIT_METHOD_NUM; ++i) module->init_methods [i] = emit_init_func (module, i); } static LLVMValueRef get_init_func (MonoLLVMModule *module, MonoAotInitSubtype subtype) { return module->init_methods [subtype]; } static void emit_gc_safepoint_poll (MonoLLVMModule *module, LLVMModuleRef lmodule, MonoCompile *cfg) { gboolean is_aot = cfg == NULL || cfg->compile_aot; LLVMValueRef func = mono_llvm_get_or_insert_gc_safepoint_poll (lmodule); mono_llvm_add_func_attr (func, LLVM_ATTR_NO_UNWIND); if (is_aot) { #if TARGET_WIN32 if (module->static_link) { LLVMSetLinkage (func, LLVMInternalLinkage); /* Prevent it from being optimized away, leading to asserts inside 'opt' */ mark_as_used (module, func); } else { LLVMSetLinkage (func, LLVMWeakODRLinkage); } #else LLVMSetLinkage (func, LLVMWeakODRLinkage); #endif } else { mono_llvm_add_func_attr (func, LLVM_ATTR_OPTIMIZE_NONE); // no need to waste time here, the function is already optimized and will be inlined. 
mono_llvm_add_func_attr (func, LLVM_ATTR_NO_INLINE); // optnone attribute requires noinline (but it will be inlined anyway) if (!module->gc_poll_cold_wrapper_compiled) { ERROR_DECL (error); /* Compiling a method here is a bit ugly, but it works */ MonoMethod *wrapper = mono_marshal_get_llvm_func_wrapper (LLVM_FUNC_WRAPPER_GC_POLL); module->gc_poll_cold_wrapper_compiled = mono_jit_compile_method (wrapper, error); mono_error_assert_ok (error); } } LLVMBasicBlockRef entry_bb = LLVMAppendBasicBlock (func, "gc.safepoint_poll.entry"); LLVMBasicBlockRef poll_bb = LLVMAppendBasicBlock (func, "gc.safepoint_poll.poll"); LLVMBasicBlockRef exit_bb = LLVMAppendBasicBlock (func, "gc.safepoint_poll.exit"); LLVMTypeRef ptr_type = LLVMPointerType (IntPtrType (), 0); LLVMBuilderRef builder = LLVMCreateBuilder (); /* entry: */ LLVMPositionBuilderAtEnd (builder, entry_bb); LLVMValueRef poll_val_ptr; if (is_aot) { poll_val_ptr = get_aotconst_module (module, builder, MONO_PATCH_INFO_GC_SAFE_POINT_FLAG, NULL, ptr_type, NULL, NULL); } else { LLVMValueRef poll_val_int = LLVMConstInt (IntPtrType (), (guint64) &mono_polling_required, FALSE); poll_val_ptr = LLVMBuildIntToPtr (builder, poll_val_int, ptr_type, ""); } LLVMValueRef poll_val_ptr_load = LLVMBuildLoad (builder, poll_val_ptr, ""); // probably needs to be volatile LLVMValueRef poll_val = LLVMBuildPtrToInt (builder, poll_val_ptr_load, IntPtrType (), ""); LLVMValueRef poll_val_zero = LLVMConstNull (LLVMTypeOf (poll_val)); LLVMValueRef cmp = LLVMBuildICmp (builder, LLVMIntEQ, poll_val, poll_val_zero, ""); mono_llvm_build_weighted_branch (builder, cmp, exit_bb, poll_bb, 1000 /* weight for exit_bb */, 1 /* weight for poll_bb */); /* poll: */ LLVMPositionBuilderAtEnd (builder, poll_bb); LLVMValueRef call; if (is_aot) { LLVMValueRef icall_wrapper = emit_icall_cold_wrapper (module, lmodule, MONO_JIT_ICALL_mono_threads_state_poll, TRUE); module->gc_poll_cold_wrapper = icall_wrapper; call = LLVMBuildCall (builder, icall_wrapper, NULL, 0, ""); } else { // in JIT mode we have to emit @gc.safepoint_poll function for each method (module) // this function calls gc_poll_cold_wrapper_compiled via a global variable. // @gc.safepoint_poll will be inlined and can be deleted after -place-safepoints pass. 
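	// Roughly, the JIT-mode poll call below looks like this in the generated
	// IR (a hand-written sketch, not verbatim compiler output; the global
	// holds the address of the compiled cold wrapper):
	//
	//   @mono_threads_state_poll = global void ()* <compiled wrapper addr>
	//   ...
	//   gc.safepoint_poll.poll:
	//     %f = load void ()*, void ()** @mono_threads_state_poll
	//     call coldcc void %f ()
	//     br label %gc.safepoint_poll.exit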
LLVMTypeRef poll_sig = LLVMFunctionType0 (LLVMVoidType (), FALSE); LLVMTypeRef poll_sig_ptr = LLVMPointerType (poll_sig, 0); gpointer target = resolve_patch (cfg, MONO_PATCH_INFO_ABS, module->gc_poll_cold_wrapper_compiled); LLVMValueRef tramp_var = LLVMAddGlobal (lmodule, poll_sig_ptr, "mono_threads_state_poll"); LLVMValueRef target_val = LLVMConstInt (LLVMInt64Type (), (guint64) target, FALSE); LLVMSetInitializer (tramp_var, LLVMConstIntToPtr (target_val, poll_sig_ptr)); LLVMSetLinkage (tramp_var, LLVMExternalLinkage); LLVMValueRef callee = LLVMBuildLoad (builder, tramp_var, ""); call = LLVMBuildCall (builder, callee, NULL, 0, ""); } set_call_cold_cconv (call); LLVMBuildBr (builder, exit_bb); /* exit: */ LLVMPositionBuilderAtEnd (builder, exit_bb); LLVMBuildRetVoid (builder); LLVMDisposeBuilder (builder); } static void emit_llvm_code_end (MonoLLVMModule *module) { LLVMModuleRef lmodule = module->lmodule; LLVMValueRef func; LLVMBasicBlockRef entry_bb; LLVMBuilderRef builder; func = LLVMAddFunction (lmodule, "llvm_code_end", LLVMFunctionType (LLVMVoidType (), NULL, 0, FALSE)); LLVMSetLinkage (func, LLVMInternalLinkage); mono_llvm_add_func_attr (func, LLVM_ATTR_NO_UNWIND); module->code_end = func; entry_bb = LLVMAppendBasicBlock (func, "ENTRY"); builder = LLVMCreateBuilder (); LLVMPositionBuilderAtEnd (builder, entry_bb); LLVMBuildRetVoid (builder); LLVMDisposeBuilder (builder); } static void emit_div_check (EmitContext *ctx, LLVMBuilderRef builder, MonoBasicBlock *bb, MonoInst *ins, LLVMValueRef lhs, LLVMValueRef rhs) { gboolean need_div_check = ctx->cfg->backend->need_div_check; if (bb->region) /* LLVM doesn't know that these can throw an exception since they are not called through an intrinsic */ need_div_check = TRUE; if (!need_div_check) return; switch (ins->opcode) { case OP_IDIV: case OP_LDIV: case OP_IREM: case OP_LREM: case OP_IDIV_UN: case OP_LDIV_UN: case OP_IREM_UN: case OP_LREM_UN: case OP_IDIV_IMM: case OP_LDIV_IMM: case OP_IREM_IMM: case OP_LREM_IMM: case OP_IDIV_UN_IMM: case OP_LDIV_UN_IMM: case OP_IREM_UN_IMM: case OP_LREM_UN_IMM: { LLVMValueRef cmp; gboolean is_signed = (ins->opcode == OP_IDIV || ins->opcode == OP_LDIV || ins->opcode == OP_IREM || ins->opcode == OP_LREM || ins->opcode == OP_IDIV_IMM || ins->opcode == OP_LDIV_IMM || ins->opcode == OP_IREM_IMM || ins->opcode == OP_LREM_IMM); cmp = LLVMBuildICmp (builder, LLVMIntEQ, rhs, LLVMConstInt (LLVMTypeOf (rhs), 0, FALSE), ""); emit_cond_system_exception (ctx, bb, "DivideByZeroException", cmp, FALSE); if (!ctx_ok (ctx)) break; builder = ctx->builder; /* b == -1 && a == 0x80000000 */ if (is_signed) { LLVMValueRef c = (LLVMTypeOf (lhs) == LLVMInt32Type ()) ? LLVMConstInt (LLVMTypeOf (lhs), 0x80000000, FALSE) : LLVMConstInt (LLVMTypeOf (lhs), 0x8000000000000000LL, FALSE); LLVMValueRef cond1 = LLVMBuildICmp (builder, LLVMIntEQ, rhs, LLVMConstInt (LLVMTypeOf (rhs), -1, FALSE), ""); LLVMValueRef cond2 = LLVMBuildICmp (builder, LLVMIntEQ, lhs, c, ""); cmp = LLVMBuildICmp (builder, LLVMIntEQ, LLVMBuildAnd (builder, cond1, cond2, ""), LLVMConstInt (LLVMInt1Type (), 1, FALSE), ""); emit_cond_system_exception (ctx, bb, "OverflowException", cmp, FALSE); if (!ctx_ok (ctx)) break; builder = ctx->builder; } break; } default: break; } } /* * emit_method_init: * * Emit code to initialize the GOT slots used by the method. 
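 *
 * Roughly, the emitted check is (a sketch, not verbatim output):
 *
 *   if (G_UNLIKELY (!inited [method_index]))
 *       init_method (info, <this/rgctx/vtable>); // cold call, see emit_init_func ()
 *   // fall through to INITED_BB and the method body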
*/ static void emit_method_init (EmitContext *ctx) { LLVMValueRef indexes [16], args [16]; LLVMValueRef inited_var, cmp, call; LLVMBasicBlockRef inited_bb, notinited_bb; LLVMBuilderRef builder = ctx->builder; MonoCompile *cfg = ctx->cfg; MonoAotInitSubtype subtype; ctx->module->max_inited_idx = MAX (ctx->module->max_inited_idx, cfg->method_index); indexes [0] = const_int32 (0); indexes [1] = const_int32 (cfg->method_index); inited_var = LLVMBuildLoad (builder, LLVMBuildGEP (builder, ctx->module->inited_var, indexes, 2, ""), "is_inited"); args [0] = inited_var; args [1] = LLVMConstInt (LLVMInt8Type (), 1, FALSE); inited_var = LLVMBuildCall (ctx->builder, get_intrins (ctx, INTRINS_EXPECT_I8), args, 2, ""); cmp = LLVMBuildICmp (builder, LLVMIntEQ, inited_var, LLVMConstInt (LLVMTypeOf (inited_var), 0, FALSE), ""); inited_bb = ctx->inited_bb; notinited_bb = gen_bb (ctx, "NOTINITED_BB"); ctx->cfg->llvmonly_init_cond = LLVMBuildCondBr (ctx->builder, cmp, notinited_bb, inited_bb); builder = ctx->builder = create_builder (ctx); LLVMPositionBuilderAtEnd (ctx->builder, notinited_bb); LLVMTypeRef type = LLVMArrayType (LLVMInt8Type (), 0); char *symbol = g_strdup_printf ("info_dummy_%s", cfg->llvm_method_name); LLVMValueRef info_var = LLVMAddGlobal (ctx->lmodule, type, symbol); g_free (symbol); cfg->llvm_dummy_info_var = info_var; int nargs = 0; args [nargs ++] = convert (ctx, info_var, ctx->module->ptr_type); switch (cfg->rgctx_access) { case MONO_RGCTX_ACCESS_MRGCTX: if (ctx->rgctx_arg) { args [nargs ++] = convert (ctx, ctx->rgctx_arg, IntPtrType ()); subtype = AOT_INIT_METHOD_GSHARED_MRGCTX; } else { g_assert (ctx->this_arg); args [nargs ++] = convert (ctx, ctx->this_arg, ObjRefType ()); subtype = AOT_INIT_METHOD_GSHARED_THIS; } break; case MONO_RGCTX_ACCESS_VTABLE: args [nargs ++] = convert (ctx, ctx->rgctx_arg, IntPtrType ()); subtype = AOT_INIT_METHOD_GSHARED_VTABLE; break; case MONO_RGCTX_ACCESS_THIS: args [nargs ++] = convert (ctx, ctx->this_arg, ObjRefType ()); subtype = AOT_INIT_METHOD_GSHARED_THIS; break; case MONO_RGCTX_ACCESS_NONE: subtype = AOT_INIT_METHOD; break; default: g_assert_not_reached (); } call = LLVMBuildCall (builder, ctx->module->init_methods [subtype], args, nargs, ""); /* * This enables llvm to keep arguments in their original registers/ * scratch registers, since the call will not clobber them. */ set_call_cold_cconv (call); // Set the inited flag indexes [0] = const_int32 (0); indexes [1] = const_int32 (cfg->method_index); LLVMBuildStore (builder, LLVMConstInt (LLVMInt8Type (), 1, FALSE), LLVMBuildGEP (builder, ctx->module->inited_var, indexes, 2, "")); LLVMBuildBr (builder, inited_bb); ctx->bblocks [cfg->bb_entry->block_num].end_bblock = inited_bb; builder = ctx->builder = create_builder (ctx); LLVMPositionBuilderAtEnd (ctx->builder, inited_bb); } static void emit_unbox_tramp (EmitContext *ctx, const char *method_name, LLVMTypeRef method_type, LLVMValueRef method, int method_index) { /* * Emit unbox trampoline using a tailcall */ LLVMValueRef tramp, call, *args; LLVMBuilderRef builder; LLVMBasicBlockRef lbb; LLVMCallInfo *linfo; char *tramp_name; int i, nargs; tramp_name = g_strdup_printf ("ut_%s", method_name); tramp = LLVMAddFunction (ctx->module->lmodule, tramp_name, method_type); LLVMSetLinkage (tramp, LLVMInternalLinkage); mono_llvm_add_func_attr (tramp, LLVM_ATTR_OPTIMIZE_FOR_SIZE); //mono_llvm_add_func_attr (tramp, LLVM_ATTR_NO_UNWIND); linfo = ctx->linfo; // FIXME: Reduce code duplication with mono_llvm_compile_method () etc. 
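	/*
	 * Conceptually the emitted trampoline is just (a sketch, with
	 * hypothetical names):
	 *
	 *   ret_t ut_method (MonoObject *this_obj, rest...) {
	 *       return method ((gpointer)((char *)this_obj + MONO_ABI_SIZEOF (MonoObject)), rest...);
	 *   }
	 *
	 * i.e. 'this' is unboxed by skipping the object header and everything
	 * else is forwarded unchanged, ideally as a tailcall.
	 */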
if (!ctx->llvm_only && ctx->rgctx_arg_pindex != -1) mono_llvm_add_param_attr (LLVMGetParam (tramp, ctx->rgctx_arg_pindex), LLVM_ATTR_IN_REG); if (ctx->cfg->vret_addr) { LLVMSetValueName (LLVMGetParam (tramp, linfo->vret_arg_pindex), "vret"); if (linfo->ret.storage == LLVMArgVtypeByRef) { mono_llvm_add_param_attr (LLVMGetParam (tramp, linfo->vret_arg_pindex), LLVM_ATTR_STRUCT_RET); mono_llvm_add_param_attr (LLVMGetParam (tramp, linfo->vret_arg_pindex), LLVM_ATTR_NO_ALIAS); } } lbb = LLVMAppendBasicBlock (tramp, ""); builder = LLVMCreateBuilder (); LLVMPositionBuilderAtEnd (builder, lbb); nargs = LLVMCountParamTypes (method_type); args = g_new0 (LLVMValueRef, nargs); for (i = 0; i < nargs; ++i) { args [i] = LLVMGetParam (tramp, i); if (i == ctx->this_arg_pindex) { LLVMTypeRef arg_type = LLVMTypeOf (args [i]); args [i] = LLVMBuildPtrToInt (builder, args [i], IntPtrType (), ""); args [i] = LLVMBuildAdd (builder, args [i], LLVMConstInt (IntPtrType (), MONO_ABI_SIZEOF (MonoObject), FALSE), ""); args [i] = LLVMBuildIntToPtr (builder, args [i], arg_type, ""); } } call = LLVMBuildCall (builder, method, args, nargs, ""); if (!ctx->llvm_only && ctx->rgctx_arg_pindex != -1) mono_llvm_add_instr_attr (call, 1 + ctx->rgctx_arg_pindex, LLVM_ATTR_IN_REG); if (linfo->ret.storage == LLVMArgVtypeByRef) mono_llvm_add_instr_attr (call, 1 + linfo->vret_arg_pindex, LLVM_ATTR_STRUCT_RET); // FIXME: This causes assertions in clang //mono_llvm_set_must_tailcall (call); if (LLVMGetReturnType (method_type) == LLVMVoidType ()) LLVMBuildRetVoid (builder); else LLVMBuildRet (builder, call); g_hash_table_insert (ctx->module->idx_to_unbox_tramp, GINT_TO_POINTER (method_index), tramp); LLVMDisposeBuilder (builder); } #ifdef TARGET_WASM static void emit_gc_pin (EmitContext *ctx, LLVMBuilderRef builder, int vreg) { LLVMValueRef index0 = LLVMConstInt (LLVMInt32Type (), 0, FALSE); LLVMValueRef index1 = LLVMConstInt (LLVMInt32Type (), ctx->gc_var_indexes [vreg] - 1, FALSE); LLVMValueRef indexes [] = { index0, index1 }; LLVMValueRef addr = LLVMBuildGEP (builder, ctx->gc_pin_area, indexes, 2, ""); mono_llvm_build_store (builder, convert (ctx, ctx->values [vreg], IntPtrType ()), addr, TRUE, LLVM_BARRIER_NONE); } #endif /* * emit_entry_bb: * * Emit code to load/convert arguments. */ static void emit_entry_bb (EmitContext *ctx, LLVMBuilderRef builder) { int i, j, pindex; MonoCompile *cfg = ctx->cfg; MonoMethodSignature *sig = ctx->sig; LLVMCallInfo *linfo = ctx->linfo; MonoBasicBlock *bb; char **names; LLVMBuilderRef old_builder = ctx->builder; ctx->builder = builder; ctx->alloca_builder = create_builder (ctx); #ifdef TARGET_WASM /* * For GC stack scanning to work, allocate an area on the stack and store * every ref vreg into it after its written. Because the stack is scanned * conservatively, the objects will be pinned, so the vregs can directly * reference the objects, there is no need to load them from the stack * on every access. */ ctx->gc_var_indexes = g_new0 (int, cfg->next_vreg); int ngc_vars = 0; for (i = 0; i < cfg->next_vreg; ++i) { if (vreg_is_ref (cfg, i)) { ctx->gc_var_indexes [i] = ngc_vars + 1; ngc_vars ++; } } // FIXME: Count only live vregs ctx->gc_pin_area = build_alloca_llvm_type_name (ctx, LLVMArrayType (IntPtrType (), ngc_vars), 0, "gc_pin"); #endif /* * Handle indirect/volatile variables by allocating memory for them * using 'alloca', and storing their address in a temporary. 
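	 *
	 * E.g. a volatile int32 vreg R conceptually becomes (a sketch):
	 *
	 *   %vreg_loc_R = alloca i32
	 *
	 * and subsequent reads/writes of R go through that address instead of
	 * an SSA value.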
*/ for (i = 0; i < cfg->num_varinfo; ++i) { MonoInst *var = cfg->varinfo [i]; if ((var->opcode == OP_GSHAREDVT_LOCAL || var->opcode == OP_GSHAREDVT_ARG_REGOFFSET)) continue; if (var->flags & (MONO_INST_VOLATILE|MONO_INST_INDIRECT) || (mini_type_is_vtype (var->inst_vtype) && !MONO_CLASS_IS_SIMD (ctx->cfg, var->klass))) { if (!ctx_ok (ctx)) return; /* Could be already created by an OP_VPHI */ if (!ctx->addresses [var->dreg]) { if (var->flags & MONO_INST_LMF) { // FIXME: Allocate a smaller struct in the deopt case int size = cfg->deopt ? MONO_ABI_SIZEOF (MonoLMFExt) : MONO_ABI_SIZEOF (MonoLMF); ctx->addresses [var->dreg] = build_alloca_llvm_type_name (ctx, LLVMArrayType (LLVMInt8Type (), size), sizeof (target_mgreg_t), "lmf"); } else { char *name = g_strdup_printf ("vreg_loc_%d", var->dreg); ctx->addresses [var->dreg] = build_named_alloca (ctx, var->inst_vtype, name); g_free (name); } } ctx->vreg_cli_types [var->dreg] = var->inst_vtype; } } names = g_new (char *, sig->param_count); mono_method_get_param_names (cfg->method, (const char **) names); for (i = 0; i < sig->param_count; ++i) { LLVMArgInfo *ainfo = &linfo->args [i + sig->hasthis]; int reg = cfg->args [i + sig->hasthis]->dreg; char *name; pindex = ainfo->pindex; LLVMValueRef arg = LLVMGetParam (ctx->lmethod, pindex); switch (ainfo->storage) { case LLVMArgVtypeInReg: case LLVMArgAsFpArgs: { LLVMValueRef args [8]; int j; pindex += ainfo->ndummy_fpargs; /* The argument is received as a set of int/fp arguments, store them into the real argument */ memset (args, 0, sizeof (args)); if (ainfo->storage == LLVMArgVtypeInReg) { args [0] = LLVMGetParam (ctx->lmethod, pindex); if (ainfo->pair_storage [1] != LLVMArgNone) args [1] = LLVMGetParam (ctx->lmethod, pindex + 1); } else { g_assert (ainfo->nslots <= 8); for (j = 0; j < ainfo->nslots; ++j) args [j] = LLVMGetParam (ctx->lmethod, pindex + j); } ctx->addresses [reg] = build_alloca (ctx, ainfo->type); emit_args_to_vtype (ctx, builder, ainfo->type, ctx->addresses [reg], ainfo, args); break; } case LLVMArgVtypeByVal: { ctx->addresses [reg] = LLVMGetParam (ctx->lmethod, pindex); break; } case LLVMArgVtypeAddr: case LLVMArgVtypeByRef: { /* The argument is passed by ref */ ctx->addresses [reg] = LLVMGetParam (ctx->lmethod, pindex); break; } case LLVMArgAsIArgs: { LLVMValueRef arg = LLVMGetParam (ctx->lmethod, pindex); int size; MonoType *t = mini_get_underlying_type (ainfo->type); /* The argument is received as an array of ints, store it into the real argument */ ctx->addresses [reg] = build_alloca (ctx, t); size = mono_class_value_size (mono_class_from_mono_type_internal (t), NULL); if (size == 0) { } else if (size < TARGET_SIZEOF_VOID_P) { /* The upper bits of the registers might not be valid */ LLVMValueRef val = LLVMBuildExtractValue (builder, arg, 0, ""); LLVMValueRef dest = convert (ctx, ctx->addresses [reg], LLVMPointerType (LLVMIntType (size * 8), 0)); LLVMBuildStore (ctx->builder, LLVMBuildTrunc (builder, val, LLVMIntType (size * 8), ""), dest); } else { LLVMBuildStore (ctx->builder, arg, convert (ctx, ctx->addresses [reg], LLVMPointerType (LLVMTypeOf (arg), 0))); } break; } case LLVMArgVtypeAsScalar: g_assert_not_reached (); break; case LLVMArgWasmVtypeAsScalar: { MonoType *t = mini_get_underlying_type (ainfo->type); /* The argument is received as a scalar */ ctx->addresses [reg] = build_alloca (ctx, t); LLVMValueRef dest = convert (ctx, ctx->addresses [reg], LLVMPointerType (LLVMIntType (ainfo->esize * 8), 0)); LLVMBuildStore (ctx->builder, arg, dest); break; } case LLVMArgGsharedvtFixed: 
{ /* These are non-gsharedvt arguments passed by ref, the rest of the IR treats them as scalars */ LLVMValueRef arg = LLVMGetParam (ctx->lmethod, pindex); if (names [i]) name = g_strdup_printf ("arg_%s", names [i]); else name = g_strdup_printf ("arg_%d", i); ctx->values [reg] = LLVMBuildLoad (builder, convert (ctx, arg, LLVMPointerType (type_to_llvm_type (ctx, ainfo->type), 0)), name); break; } case LLVMArgGsharedvtFixedVtype: { LLVMValueRef arg = LLVMGetParam (ctx->lmethod, pindex); if (names [i]) name = g_strdup_printf ("vtype_arg_%s", names [i]); else name = g_strdup_printf ("vtype_arg_%d", i); /* Non-gsharedvt vtype argument passed by ref, the rest of the IR treats it as a vtype */ g_assert (ctx->addresses [reg]); LLVMSetValueName (ctx->addresses [reg], name); LLVMBuildStore (builder, LLVMBuildLoad (builder, convert (ctx, arg, LLVMPointerType (type_to_llvm_type (ctx, ainfo->type), 0)), ""), ctx->addresses [reg]); break; } case LLVMArgGsharedvtVariable: /* The IR treats these as variables with addresses */ if (!ctx->addresses [reg]) ctx->addresses [reg] = LLVMGetParam (ctx->lmethod, pindex); break; default: { LLVMTypeRef t; /* Needed to avoid phi argument mismatch errors since operations on pointers produce i32/i64 */ if (m_type_is_byref (ainfo->type)) t = IntPtrType (); else t = type_to_llvm_type (ctx, ainfo->type); ctx->values [reg] = convert_full (ctx, ctx->values [reg], llvm_type_to_stack_type (cfg, t), type_is_unsigned (ctx, ainfo->type)); break; } } switch (ainfo->storage) { case LLVMArgVtypeInReg: case LLVMArgVtypeByVal: case LLVMArgAsIArgs: // FIXME: Enabling this fails on windows case LLVMArgVtypeAddr: case LLVMArgVtypeByRef: { if (MONO_CLASS_IS_SIMD (ctx->cfg, mono_class_from_mono_type_internal (ainfo->type))) /* Treat these as normal values */ ctx->values [reg] = LLVMBuildLoad (builder, ctx->addresses [reg], "simd_vtype"); break; } default: break; } } g_free (names); if (sig->hasthis) { /* Handle this arguments as inputs to phi nodes */ int reg = cfg->args [0]->dreg; if (ctx->vreg_types [reg]) ctx->values [reg] = convert (ctx, ctx->values [reg], ctx->vreg_types [reg]); } if (cfg->vret_addr) emit_volatile_store (ctx, cfg->vret_addr->dreg); if (sig->hasthis) emit_volatile_store (ctx, cfg->args [0]->dreg); for (i = 0; i < sig->param_count; ++i) if (!mini_type_is_vtype (sig->params [i])) emit_volatile_store (ctx, cfg->args [i + sig->hasthis]->dreg); if (sig->hasthis && !cfg->rgctx_var && cfg->gshared && !cfg->llvm_only) { LLVMValueRef this_alloc; /* * The exception handling code needs the location where the this argument was * stored for gshared methods. We create a separate alloca to hold it, and mark it * with the "mono.this" custom metadata to tell llvm that it needs to save its * location into the LSDA. */ this_alloc = mono_llvm_build_alloca (builder, ThisType (), LLVMConstInt (LLVMInt32Type (), 1, FALSE), 0, ""); /* This volatile store will keep the alloca alive */ mono_llvm_build_store (builder, ctx->values [cfg->args [0]->dreg], this_alloc, TRUE, LLVM_BARRIER_NONE); set_metadata_flag (this_alloc, "mono.this"); } if (cfg->rgctx_var) { if (!(cfg->rgctx_var->flags & MONO_INST_VOLATILE)) { /* FIXME: This could be volatile even in llvmonly mode if used inside a clause etc. */ g_assert (!ctx->addresses [cfg->rgctx_var->dreg]); ctx->values [cfg->rgctx_var->dreg] = ctx->rgctx_arg; } else { LLVMValueRef rgctx_alloc, store; /* * We handle the rgctx arg similarly to the this pointer. 
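			 * That is (a sketch): a volatile store into an alloca tagged with
			 * the "mono.this" metadata so the EH code can locate it in the
			 * LSDA:
			 *
			 *   store volatile i8* %rgctx, i8** %rgctx_alloc ; !"mono.this"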
*/ g_assert (ctx->addresses [cfg->rgctx_var->dreg]); rgctx_alloc = ctx->addresses [cfg->rgctx_var->dreg]; /* This volatile store will keep the alloca alive */ store = mono_llvm_build_store (builder, convert (ctx, ctx->rgctx_arg, IntPtrType ()), rgctx_alloc, TRUE, LLVM_BARRIER_NONE); (void)store; /* unused */ set_metadata_flag (rgctx_alloc, "mono.this"); } } #ifdef TARGET_WASM /* * Store ref arguments to the pin area. * FIXME: This might not be needed, since the caller already does it ? */ for (i = 0; i < cfg->num_varinfo; ++i) { MonoInst *var = cfg->varinfo [i]; if (var->opcode == OP_ARG && vreg_is_ref (cfg, var->dreg) && ctx->values [var->dreg]) emit_gc_pin (ctx, builder, var->dreg); } #endif if (cfg->deopt) { LLVMValueRef addr, index [2]; MonoMethodHeader *header = cfg->header; int nfields = (sig->ret->type != MONO_TYPE_VOID ? 1 : 0) + sig->hasthis + sig->param_count + header->num_locals + 2; LLVMTypeRef *types = g_alloca (nfields * sizeof (LLVMTypeRef)); int findex = 0; /* method */ types [findex ++] = IntPtrType (); /* il_offset */ types [findex ++] = LLVMInt32Type (); int data_start = findex; /* data */ if (sig->ret->type != MONO_TYPE_VOID) types [findex ++] = IntPtrType (); if (sig->hasthis) types [findex ++] = IntPtrType (); for (int i = 0; i < sig->param_count; ++i) types [findex ++] = LLVMPointerType (type_to_llvm_type (ctx, sig->params [i]), 0); for (int i = 0; i < header->num_locals; ++i) types [findex ++] = LLVMPointerType (type_to_llvm_type (ctx, header->locals [i]), 0); g_assert (findex == nfields); char *name = g_strdup_printf ("%s_il_state", ctx->method_name); LLVMTypeRef il_state_type = LLVMStructCreateNamed (ctx->module->context, name); LLVMStructSetBody (il_state_type, types, nfields, FALSE); g_free (name); ctx->il_state = build_alloca_llvm_type_name (ctx, il_state_type, 0, "il_state"); g_assert (cfg->il_state_var); ctx->addresses [cfg->il_state_var->dreg] = ctx->il_state; /* Set il_state->il_offset = -1 */ index [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE); index [1] = LLVMConstInt (LLVMInt32Type (), 1, FALSE); addr = LLVMBuildGEP (builder, ctx->il_state, index, 2, ""); LLVMBuildStore (ctx->builder, LLVMConstInt (types [1], -1, FALSE), addr); /* * Set il_state->data [i] to either the address of the arg/local, or NULL. * Because of mono_liveness_handle_exception_clauses (), all locals used/reachable from * clauses are supposed to be volatile, so they have an address. 
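		 *
		 * The il_state structure filled in here is laid out roughly as
		 * (a sketch):
		 *
		 *   struct <method>_il_state {
		 *       gpointer method; gint32 il_offset; // il_offset is set to -1 above
		 *       <ret> *ret;                        // only present for non-void returns
		 *       <arg> *args [];                    // one pointer per this+param, or NULL
		 *       <local> *locals [];                // one pointer per IL local, or NULL
		 *   };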
*/ findex = data_start; if (sig->ret->type != MONO_TYPE_VOID) { LLVMTypeRef ret_type = type_to_llvm_type (ctx, sig->ret); ctx->il_state_ret = build_alloca_llvm_type_name (ctx, ret_type, 0, "il_state_ret"); index [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE); index [1] = LLVMConstInt (LLVMInt32Type (), findex, FALSE); addr = LLVMBuildGEP (builder, ctx->il_state, index, 2, ""); LLVMBuildStore (ctx->builder, ctx->il_state_ret, convert (ctx, addr, LLVMPointerType (LLVMTypeOf (ctx->il_state_ret), 0))); findex ++; } for (int i = 0; i < sig->hasthis + sig->param_count; ++i) { LLVMValueRef var_addr = ctx->addresses [cfg->args [i]->dreg]; index [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE); index [1] = LLVMConstInt (LLVMInt32Type (), findex, FALSE); addr = LLVMBuildGEP (builder, ctx->il_state, index, 2, ""); if (var_addr) LLVMBuildStore (ctx->builder, var_addr, convert (ctx, addr, LLVMPointerType (LLVMTypeOf (var_addr), 0))); else LLVMBuildStore (ctx->builder, LLVMConstNull (types [findex]), addr); findex ++; } for (int i = 0; i < header->num_locals; ++i) { LLVMValueRef var_addr = ctx->addresses [cfg->locals [i]->dreg]; index [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE); index [1] = LLVMConstInt (LLVMInt32Type (), findex, FALSE); addr = LLVMBuildGEP (builder, ctx->il_state, index, 2, ""); if (var_addr) LLVMBuildStore (ctx->builder, LLVMBuildBitCast (builder, var_addr, types [findex], ""), addr); else LLVMBuildStore (ctx->builder, LLVMConstNull (types [findex]), addr); findex ++; } } /* Initialize the method if needed */ if (cfg->compile_aot) { /* Emit a location for the initialization code */ ctx->init_bb = gen_bb (ctx, "INIT_BB"); ctx->inited_bb = gen_bb (ctx, "INITED_BB"); LLVMBuildBr (ctx->builder, ctx->init_bb); builder = ctx->builder = create_builder (ctx); LLVMPositionBuilderAtEnd (ctx->builder, ctx->inited_bb); ctx->bblocks [cfg->bb_entry->block_num].end_bblock = ctx->inited_bb; } /* Compute nesting between clauses */ ctx->nested_in = (GSList**)mono_mempool_alloc0 (cfg->mempool, sizeof (GSList*) * cfg->header->num_clauses); for (i = 0; i < cfg->header->num_clauses; ++i) { for (j = 0; j < cfg->header->num_clauses; ++j) { MonoExceptionClause *clause1 = &cfg->header->clauses [i]; MonoExceptionClause *clause2 = &cfg->header->clauses [j]; if (i != j && clause1->try_offset >= clause2->try_offset && clause1->handler_offset <= clause2->handler_offset) ctx->nested_in [i] = g_slist_prepend_mempool (cfg->mempool, ctx->nested_in [i], GINT_TO_POINTER (j)); } } /* * For finally clauses, create an indicator variable telling OP_ENDFINALLY whenever * it needs to continue normally, or return back to the exception handling system. */ for (bb = cfg->bb_entry; bb; bb = bb->next_bb) { char name [128]; if (!(bb->region != -1 && (bb->flags & BB_EXCEPTION_HANDLER))) continue; if (bb->in_scount == 0) { LLVMValueRef val; sprintf (name, "finally_ind_bb%d", bb->block_num); val = LLVMBuildAlloca (builder, LLVMInt32Type (), name); LLVMBuildStore (builder, LLVMConstInt (LLVMInt32Type (), 0, FALSE), val); ctx->bblocks [bb->block_num].finally_ind = val; } else { /* Create a variable to hold the exception var */ if (!ctx->ex_var) ctx->ex_var = LLVMBuildAlloca (builder, ObjRefType (), "exvar"); } } ctx->builder = old_builder; } static gboolean needs_extra_arg (EmitContext *ctx, MonoMethod *method) { WrapperInfo *info = NULL; /* * When targeting wasm, the caller and callee signature has to match exactly. 
This means * that every method which can be called indirectly need an extra arg since the caller * will call it through an ftnptr and will pass an extra arg. */ if (!ctx->cfg->llvm_only || !ctx->emit_dummy_arg) return FALSE; if (method->wrapper_type) info = mono_marshal_get_wrapper_info (method); switch (method->wrapper_type) { case MONO_WRAPPER_OTHER: if (info->subtype == WRAPPER_SUBTYPE_GSHAREDVT_IN_SIG || info->subtype == WRAPPER_SUBTYPE_GSHAREDVT_OUT_SIG) /* Already have an explicit extra arg */ return FALSE; break; case MONO_WRAPPER_MANAGED_TO_NATIVE: if (strstr (method->name, "icall_wrapper")) /* These are JIT icall wrappers which are only called from JITted code directly */ return FALSE; /* Normal icalls can be virtual methods which need an extra arg */ break; case MONO_WRAPPER_RUNTIME_INVOKE: case MONO_WRAPPER_ALLOC: case MONO_WRAPPER_CASTCLASS: case MONO_WRAPPER_WRITE_BARRIER: case MONO_WRAPPER_NATIVE_TO_MANAGED: return FALSE; case MONO_WRAPPER_STELEMREF: if (info->subtype != WRAPPER_SUBTYPE_VIRTUAL_STELEMREF) return FALSE; break; case MONO_WRAPPER_MANAGED_TO_MANAGED: if (info->subtype == WRAPPER_SUBTYPE_STRING_CTOR) return FALSE; break; default: break; } if (method->string_ctor) return FALSE; /* These are called from gsharedvt code with an indirect call which doesn't pass an extra arg */ if (method->klass == mono_get_string_class () && (strstr (method->name, "memcpy") || strstr (method->name, "bzero"))) return FALSE; return TRUE; } static inline gboolean is_supported_callconv (EmitContext *ctx, MonoCallInst *call) { #if defined(TARGET_WIN32) && defined(TARGET_AMD64) gboolean result = (call->signature->call_convention == MONO_CALL_DEFAULT) || (call->signature->call_convention == MONO_CALL_C) || (call->signature->call_convention == MONO_CALL_STDCALL); #else gboolean result = (call->signature->call_convention == MONO_CALL_DEFAULT) || ((call->signature->call_convention == MONO_CALL_C) && ctx->llvm_only); #endif return result; } static void process_call (EmitContext *ctx, MonoBasicBlock *bb, LLVMBuilderRef *builder_ref, MonoInst *ins) { MonoCompile *cfg = ctx->cfg; LLVMValueRef *values = ctx->values; LLVMValueRef *addresses = ctx->addresses; MonoCallInst *call = (MonoCallInst*)ins; MonoMethodSignature *sig = call->signature; LLVMValueRef callee = NULL, lcall; LLVMValueRef *args; LLVMCallInfo *cinfo; GSList *l; int i, len, nargs; gboolean vretaddr; LLVMTypeRef llvm_sig; gpointer target; gboolean is_virtual, calli; LLVMBuilderRef builder = *builder_ref; /* If both imt and rgctx arg are required, only pass the imt arg, the rgctx trampoline will pass the rgctx */ if (call->imt_arg_reg) call->rgctx_arg_reg = 0; if (!is_supported_callconv (ctx, call)) { set_failure (ctx, "non-default callconv"); return; } cinfo = call->cinfo; g_assert (cinfo); if (call->rgctx_arg_reg) cinfo->rgctx_arg = TRUE; if (call->imt_arg_reg) cinfo->imt_arg = TRUE; if (!call->rgctx_arg_reg && call->method && needs_extra_arg (ctx, call->method)) cinfo->dummy_arg = TRUE; vretaddr = (cinfo->ret.storage == LLVMArgVtypeRetAddr || cinfo->ret.storage == LLVMArgVtypeByRef || cinfo->ret.storage == LLVMArgGsharedvtFixed || cinfo->ret.storage == LLVMArgGsharedvtVariable || cinfo->ret.storage == LLVMArgGsharedvtFixedVtype); llvm_sig = sig_to_llvm_sig_full (ctx, sig, cinfo); if (!ctx_ok (ctx)) return; int const opcode = ins->opcode; is_virtual = opcode == OP_VOIDCALL_MEMBASE || opcode == OP_CALL_MEMBASE || opcode == OP_VCALL_MEMBASE || opcode == OP_LCALL_MEMBASE || opcode == OP_FCALL_MEMBASE || opcode == OP_RCALL_MEMBASE || opcode 
== OP_TAILCALL_MEMBASE; calli = !call->fptr_is_patch && (opcode == OP_VOIDCALL_REG || opcode == OP_CALL_REG || opcode == OP_VCALL_REG || opcode == OP_LCALL_REG || opcode == OP_FCALL_REG || opcode == OP_RCALL_REG || opcode == OP_TAILCALL_REG); /* FIXME: Avoid creating duplicate methods */ if (ins->flags & MONO_INST_HAS_METHOD) { if (is_virtual) { callee = NULL; } else { if (cfg->compile_aot) { callee = get_callee (ctx, llvm_sig, MONO_PATCH_INFO_METHOD, call->method); if (!callee) { set_failure (ctx, "can't encode patch"); return; } } else if (cfg->method == call->method) { callee = ctx->lmethod; } else { ERROR_DECL (error); static int tramp_index; char *name; name = g_strdup_printf ("[tramp_%d] %s", tramp_index, mono_method_full_name (call->method, TRUE)); tramp_index ++; /* * Use our trampoline infrastructure for lazy compilation instead of llvm's. * Make all calls through a global. The address of the global will be saved in * MonoJitDomainInfo.llvm_jit_callees and updated when the method it refers to is * compiled. */ LLVMValueRef tramp_var = (LLVMValueRef)g_hash_table_lookup (ctx->jit_callees, call->method); if (!tramp_var) { target = mono_create_jit_trampoline (call->method, error); if (!is_ok (error)) { set_failure (ctx, mono_error_get_message (error)); mono_error_cleanup (error); return; } tramp_var = LLVMAddGlobal (ctx->lmodule, LLVMPointerType (llvm_sig, 0), name); LLVMSetInitializer (tramp_var, LLVMConstIntToPtr (LLVMConstInt (LLVMInt64Type (), (guint64)(size_t)target, FALSE), LLVMPointerType (llvm_sig, 0))); LLVMSetLinkage (tramp_var, LLVMExternalLinkage); g_hash_table_insert (ctx->jit_callees, call->method, tramp_var); } callee = LLVMBuildLoad (builder, tramp_var, ""); } } if (!cfg->llvm_only && call->method && strstr (m_class_get_name (call->method->klass), "AsyncVoidMethodBuilder")) { /* LLVM miscompiles async methods */ set_failure (ctx, "#13734"); return; } } else if (calli) { } else { const MonoJitICallId jit_icall_id = call->jit_icall_id; if (jit_icall_id) { if (cfg->compile_aot) { callee = get_callee (ctx, llvm_sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (jit_icall_id)); if (!callee) { set_failure (ctx, "can't encode patch"); return; } } else { callee = get_jit_callee (ctx, "", llvm_sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (jit_icall_id)); } } else { if (cfg->compile_aot) { callee = NULL; if (cfg->abs_patches) { MonoJumpInfo *abs_ji = (MonoJumpInfo*)g_hash_table_lookup (cfg->abs_patches, call->fptr); if (abs_ji) { callee = get_callee (ctx, llvm_sig, abs_ji->type, abs_ji->data.target); if (!callee) { set_failure (ctx, "can't encode patch"); return; } } } if (!callee) { set_failure (ctx, "aot"); return; } } else { if (cfg->abs_patches) { MonoJumpInfo *abs_ji = (MonoJumpInfo*)g_hash_table_lookup (cfg->abs_patches, call->fptr); if (abs_ji) { ERROR_DECL (error); target = mono_resolve_patch_target (cfg->method, NULL, abs_ji, FALSE, error); mono_error_assert_ok (error); callee = get_jit_callee (ctx, "", llvm_sig, abs_ji->type, abs_ji->data.target); } else { g_assert_not_reached (); } } else { g_assert_not_reached (); } } } } if (is_virtual) { int size = TARGET_SIZEOF_VOID_P; LLVMValueRef index; g_assert (ins->inst_offset % size == 0); index = LLVMConstInt (LLVMInt32Type (), ins->inst_offset / size, FALSE); callee = convert (ctx, LLVMBuildLoad (builder, LLVMBuildGEP (builder, convert (ctx, values [ins->inst_basereg], LLVMPointerType (LLVMPointerType (IntPtrType (), 0), 0)), &index, 1, ""), ""), LLVMPointerType (llvm_sig, 0)); } else if (calli) { callee = convert 
(ctx, values [ins->sreg1], LLVMPointerType (llvm_sig, 0)); } else { if (ins->flags & MONO_INST_HAS_METHOD) { } } /* * Collect and convert arguments */ nargs = (sig->param_count * 16) + sig->hasthis + vretaddr + call->rgctx_reg + call->imt_arg_reg + call->cinfo->dummy_arg + 1; len = sizeof (LLVMValueRef) * nargs; args = g_newa (LLVMValueRef, nargs); memset (args, 0, len); l = call->out_ireg_args; if (call->rgctx_arg_reg) { g_assert (values [call->rgctx_arg_reg]); g_assert (cinfo->rgctx_arg_pindex < nargs); /* * On ARM, the imt/rgctx argument is passed in a caller save register, but some of our trampolines etc. clobber it, leading to * problems if LLVM moves the arg assignment earlier. To work around this, save the argument into a stack slot and load * it using a volatile load. */ #ifdef TARGET_ARM if (!ctx->imt_rgctx_loc) ctx->imt_rgctx_loc = build_alloca_llvm_type (ctx, ctx->module->ptr_type, TARGET_SIZEOF_VOID_P); LLVMBuildStore (builder, convert (ctx, ctx->values [call->rgctx_arg_reg], ctx->module->ptr_type), ctx->imt_rgctx_loc); args [cinfo->rgctx_arg_pindex] = mono_llvm_build_load (builder, ctx->imt_rgctx_loc, "", TRUE); #else args [cinfo->rgctx_arg_pindex] = convert (ctx, values [call->rgctx_arg_reg], ctx->module->ptr_type); #endif } if (call->imt_arg_reg) { g_assert (!ctx->llvm_only); g_assert (values [call->imt_arg_reg]); g_assert (cinfo->imt_arg_pindex < nargs); #ifdef TARGET_ARM if (!ctx->imt_rgctx_loc) ctx->imt_rgctx_loc = build_alloca_llvm_type (ctx, ctx->module->ptr_type, TARGET_SIZEOF_VOID_P); LLVMBuildStore (builder, convert (ctx, ctx->values [call->imt_arg_reg], ctx->module->ptr_type), ctx->imt_rgctx_loc); args [cinfo->imt_arg_pindex] = mono_llvm_build_load (builder, ctx->imt_rgctx_loc, "", TRUE); #else args [cinfo->imt_arg_pindex] = convert (ctx, values [call->imt_arg_reg], ctx->module->ptr_type); #endif } switch (cinfo->ret.storage) { case LLVMArgGsharedvtVariable: { MonoInst *var = get_vreg_to_inst (cfg, call->inst.dreg); if (var && var->opcode == OP_GSHAREDVT_LOCAL) { args [cinfo->vret_arg_pindex] = convert (ctx, emit_gsharedvt_ldaddr (ctx, var->dreg), IntPtrType ()); } else { g_assert (addresses [call->inst.dreg]); args [cinfo->vret_arg_pindex] = convert (ctx, addresses [call->inst.dreg], IntPtrType ()); } break; } default: if (vretaddr) { if (!addresses [call->inst.dreg]) addresses [call->inst.dreg] = build_alloca (ctx, sig->ret); g_assert (cinfo->vret_arg_pindex < nargs); if (cinfo->ret.storage == LLVMArgVtypeByRef) args [cinfo->vret_arg_pindex] = addresses [call->inst.dreg]; else args [cinfo->vret_arg_pindex] = LLVMBuildPtrToInt (builder, addresses [call->inst.dreg], IntPtrType (), ""); } break; } /* * Sometimes the same method is called with two different signatures (i.e. with and without 'this'), so * use the real callee for argument type conversion.
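	 * (I.e. the conversion below queries LLVMGetParamTypes () on the callee
	 * we are actually calling, instead of trusting the signature-derived
	 * types, so 'this' is cast to whatever the real first parameter type is.)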
*/ LLVMTypeRef callee_type = LLVMGetElementType (LLVMTypeOf (callee)); LLVMTypeRef *param_types = (LLVMTypeRef*)g_alloca (sizeof (LLVMTypeRef) * LLVMCountParamTypes (callee_type)); LLVMGetParamTypes (callee_type, param_types); for (i = 0; i < sig->param_count + sig->hasthis; ++i) { guint32 regpair; int reg, pindex; LLVMArgInfo *ainfo = &call->cinfo->args [i]; pindex = ainfo->pindex; regpair = (guint32)(gssize)(l->data); reg = regpair & 0xffffff; args [pindex] = values [reg]; switch (ainfo->storage) { case LLVMArgVtypeInReg: case LLVMArgAsFpArgs: { guint32 nargs; int j; for (j = 0; j < ainfo->ndummy_fpargs; ++j) args [pindex + j] = LLVMConstNull (LLVMDoubleType ()); pindex += ainfo->ndummy_fpargs; g_assert (addresses [reg]); emit_vtype_to_args (ctx, builder, ainfo->type, addresses [reg], ainfo, args + pindex, &nargs); pindex += nargs; // FIXME: alignment // FIXME: Get rid of the VMOVE break; } case LLVMArgVtypeByVal: g_assert (addresses [reg]); args [pindex] = addresses [reg]; break; case LLVMArgVtypeAddr : case LLVMArgVtypeByRef: { g_assert (addresses [reg]); args [pindex] = convert (ctx, addresses [reg], LLVMPointerType (type_to_llvm_arg_type (ctx, ainfo->type), 0)); break; } case LLVMArgAsIArgs: g_assert (addresses [reg]); if (ainfo->esize == 8) args [pindex] = LLVMBuildLoad (ctx->builder, convert (ctx, addresses [reg], LLVMPointerType (LLVMArrayType (LLVMInt64Type (), ainfo->nslots), 0)), ""); else args [pindex] = LLVMBuildLoad (ctx->builder, convert (ctx, addresses [reg], LLVMPointerType (LLVMArrayType (IntPtrType (), ainfo->nslots), 0)), ""); break; case LLVMArgVtypeAsScalar: g_assert_not_reached (); break; case LLVMArgWasmVtypeAsScalar: g_assert (addresses [reg]); args [pindex] = LLVMBuildLoad (ctx->builder, convert (ctx, addresses [reg], LLVMPointerType (LLVMIntType (ainfo->esize * 8), 0)), ""); break; case LLVMArgGsharedvtFixed: case LLVMArgGsharedvtFixedVtype: g_assert (addresses [reg]); args [pindex] = convert (ctx, addresses [reg], LLVMPointerType (type_to_llvm_arg_type (ctx, ainfo->type), 0)); break; case LLVMArgGsharedvtVariable: g_assert (addresses [reg]); args [pindex] = convert (ctx, addresses [reg], LLVMPointerType (IntPtrType (), 0)); break; default: g_assert (args [pindex]); if (i == 0 && sig->hasthis) args [pindex] = convert (ctx, args [pindex], param_types [pindex]); else args [pindex] = convert (ctx, args [pindex], type_to_llvm_arg_type (ctx, ainfo->type)); break; } g_assert (pindex <= nargs); l = l->next; } if (call->cinfo->dummy_arg) { g_assert (call->cinfo->dummy_arg_pindex < nargs); args [call->cinfo->dummy_arg_pindex] = LLVMConstNull (ctx->module->ptr_type); } // FIXME: Align call sites /* * Emit the call */ lcall = emit_call (ctx, bb, &builder, callee, args, LLVMCountParamTypes (llvm_sig)); mono_llvm_nonnull_state_update (ctx, lcall, call->method, args, LLVMCountParamTypes (llvm_sig)); // If we just allocated an object, it's not null. 
if (call->method && call->method->wrapper_type == MONO_WRAPPER_ALLOC) { mono_llvm_set_call_nonnull_ret (lcall); } if (ins->opcode != OP_TAILCALL && ins->opcode != OP_TAILCALL_MEMBASE && LLVMGetInstructionOpcode (lcall) == LLVMCall) mono_llvm_set_call_notailcall (lcall); // Add original method name we are currently emitting as a custom string metadata (the only way to leave comments in LLVM IR) if (mono_debug_enabled () && call && call->method) mono_llvm_add_string_metadata (lcall, "managed_name", mono_method_full_name (call->method, TRUE)); // As per the LLVM docs, a function has a noalias return value if and only if // it is an allocation function. This is an allocation function. if (call->method && call->method->wrapper_type == MONO_WRAPPER_ALLOC) { mono_llvm_set_call_noalias_ret (lcall); // All objects are expected to be 8-byte aligned (SGEN_ALLOC_ALIGN) mono_llvm_set_alignment_ret (lcall, 8); } /* * Modify cconv and parameter attributes to pass rgctx/imt correctly. */ #if defined(MONO_ARCH_IMT_REG) && defined(MONO_ARCH_RGCTX_REG) g_assert (MONO_ARCH_IMT_REG == MONO_ARCH_RGCTX_REG); #endif /* The two can't be used together, so use only one LLVM calling conv to pass them */ g_assert (!(call->rgctx_arg_reg && call->imt_arg_reg)); if (!sig->pinvoke && !cfg->llvm_only) LLVMSetInstructionCallConv (lcall, LLVMMono1CallConv); if (cinfo->ret.storage == LLVMArgVtypeByRef) mono_llvm_add_instr_attr (lcall, 1 + cinfo->vret_arg_pindex, LLVM_ATTR_STRUCT_RET); if (!ctx->llvm_only && call->rgctx_arg_reg) mono_llvm_add_instr_attr (lcall, 1 + cinfo->rgctx_arg_pindex, LLVM_ATTR_IN_REG); if (call->imt_arg_reg) mono_llvm_add_instr_attr (lcall, 1 + cinfo->imt_arg_pindex, LLVM_ATTR_IN_REG); /* Add byval attributes if needed */ for (i = 0; i < sig->param_count; ++i) { LLVMArgInfo *ainfo = &call->cinfo->args [i + sig->hasthis]; if (ainfo && ainfo->storage == LLVMArgVtypeByVal) mono_llvm_add_instr_attr (lcall, 1 + ainfo->pindex, LLVM_ATTR_BY_VAL); #ifdef TARGET_WASM if (ainfo && ainfo->storage == LLVMArgVtypeByRef) /* This causes llvm to make a copy of the value which is what we need */ mono_llvm_add_instr_byval_attr (lcall, 1 + ainfo->pindex, LLVMGetElementType (param_types [ainfo->pindex])); #endif } gboolean is_simd = MONO_CLASS_IS_SIMD (ctx->cfg, mono_class_from_mono_type_internal (sig->ret)); gboolean should_promote_to_value = FALSE; const char *load_name = NULL; /* * Convert the result. Non-SIMD value types are manipulated via an * indirection. SIMD value types are represented directly as LLVM vector * values, and must have a corresponding LLVM value definition in * `values`. */ switch (cinfo->ret.storage) { case LLVMArgAsIArgs: case LLVMArgFpStruct: if (!addresses [call->inst.dreg]) addresses [call->inst.dreg] = build_alloca (ctx, sig->ret); LLVMBuildStore (builder, lcall, convert_full (ctx, addresses [call->inst.dreg], LLVMPointerType (LLVMTypeOf (lcall), 0), FALSE)); break; case LLVMArgVtypeByVal: /* * Only used by amd64 and x86. Only ever used when passing * arguments; never used for return values. 
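		 * (Hence the assert below: hitting this storage kind for a return
		 * value indicates a bug in the calling-convention logic.)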
*/ g_assert_not_reached (); break; case LLVMArgVtypeInReg: { if (LLVMTypeOf (lcall) == LLVMVoidType ()) /* Empty struct */ break; if (!addresses [ins->dreg]) addresses [ins->dreg] = build_alloca (ctx, sig->ret); LLVMValueRef regs [2] = { 0 }; regs [0] = LLVMBuildExtractValue (builder, lcall, 0, ""); if (cinfo->ret.pair_storage [1] != LLVMArgNone) regs [1] = LLVMBuildExtractValue (builder, lcall, 1, ""); emit_args_to_vtype (ctx, builder, sig->ret, addresses [ins->dreg], &cinfo->ret, regs); load_name = "process_call_vtype_in_reg"; should_promote_to_value = is_simd; break; } case LLVMArgVtypeAsScalar: if (!addresses [call->inst.dreg]) addresses [call->inst.dreg] = build_alloca (ctx, sig->ret); LLVMBuildStore (builder, lcall, convert_full (ctx, addresses [call->inst.dreg], LLVMPointerType (LLVMTypeOf (lcall), 0), FALSE)); load_name = "process_call_vtype_as_scalar"; should_promote_to_value = is_simd; break; case LLVMArgVtypeRetAddr: case LLVMArgVtypeByRef: load_name = "process_call_vtype_ret_addr"; should_promote_to_value = is_simd; break; case LLVMArgGsharedvtVariable: break; case LLVMArgGsharedvtFixed: case LLVMArgGsharedvtFixedVtype: values [ins->dreg] = LLVMBuildLoad (builder, convert_full (ctx, addresses [call->inst.dreg], LLVMPointerType (type_to_llvm_type (ctx, sig->ret), 0), FALSE), ""); break; case LLVMArgWasmVtypeAsScalar: if (!addresses [call->inst.dreg]) addresses [call->inst.dreg] = build_alloca (ctx, sig->ret); LLVMBuildStore (builder, lcall, convert_full (ctx, addresses [call->inst.dreg], LLVMPointerType (LLVMTypeOf (lcall), 0), FALSE)); break; default: if (sig->ret->type != MONO_TYPE_VOID) /* If the method returns an unsigned value, need to zext it */ values [ins->dreg] = convert_full (ctx, lcall, llvm_type_to_stack_type (cfg, type_to_llvm_type (ctx, sig->ret)), type_is_unsigned (ctx, sig->ret)); break; } if (should_promote_to_value) { g_assert (addresses [call->inst.dreg]); LLVMTypeRef addr_type = LLVMPointerType (type_to_llvm_type (ctx, sig->ret), 0); LLVMValueRef addr = convert_full (ctx, addresses [call->inst.dreg], addr_type, FALSE); values [ins->dreg] = LLVMBuildLoad (builder, addr, load_name); } *builder_ref = ctx->builder; } static void emit_llvmonly_throw (EmitContext *ctx, MonoBasicBlock *bb, gboolean rethrow, LLVMValueRef exc) { MonoJitICallId icall_id = rethrow ? MONO_JIT_ICALL_mini_llvmonly_rethrow_exception : MONO_JIT_ICALL_mini_llvmonly_throw_exception; LLVMValueRef callee = rethrow ? ctx->module->rethrow : ctx->module->throw_icall; LLVMTypeRef exc_type = type_to_llvm_type (ctx, m_class_get_byval_arg (mono_get_exception_class ())); if (!callee) { LLVMTypeRef fun_sig = LLVMFunctionType1 (LLVMVoidType (), exc_type, FALSE); g_assert (ctx->cfg->compile_aot); callee = get_callee (ctx, fun_sig, MONO_PATCH_INFO_JIT_ICALL_ADDR, GUINT_TO_POINTER (icall_id)); } LLVMValueRef args [2]; args [0] = convert (ctx, exc, exc_type); emit_call (ctx, bb, &ctx->builder, callee, args, 1); LLVMBuildUnreachable (ctx->builder); ctx->builder = create_builder (ctx); } static void emit_throw (EmitContext *ctx, MonoBasicBlock *bb, gboolean rethrow, LLVMValueRef exc) { MonoMethodSignature *throw_sig; LLVMValueRef * const pcallee = rethrow ? &ctx->module->rethrow : &ctx->module->throw_icall; LLVMValueRef callee = *pcallee; char const * const icall_name = rethrow ? "mono_arch_rethrow_exception" : "mono_arch_throw_exception"; #ifndef TARGET_X86 const #endif MonoJitICallId icall_id = rethrow ? 
MONO_JIT_ICALL_mono_arch_rethrow_exception : MONO_JIT_ICALL_mono_arch_throw_exception; if (!callee) { throw_sig = mono_metadata_signature_alloc (mono_get_corlib (), 1); throw_sig->ret = m_class_get_byval_arg (mono_get_void_class ()); throw_sig->params [0] = m_class_get_byval_arg (mono_get_object_class ()); if (ctx->cfg->compile_aot) { callee = get_callee (ctx, sig_to_llvm_sig (ctx, throw_sig), MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (icall_id)); } else { #ifdef TARGET_X86 /* * LLVM doesn't push the exception argument, so we need a different * trampoline. */ icall_id = rethrow ? MONO_JIT_ICALL_mono_llvm_rethrow_exception_trampoline : MONO_JIT_ICALL_mono_llvm_throw_exception_trampoline; #endif callee = get_jit_callee (ctx, icall_name, sig_to_llvm_sig (ctx, throw_sig), MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (icall_id)); } mono_memory_barrier (); } LLVMValueRef arg; arg = convert (ctx, exc, type_to_llvm_type (ctx, m_class_get_byval_arg (mono_get_object_class ()))); emit_call (ctx, bb, &ctx->builder, callee, &arg, 1); } static void emit_resume_eh (EmitContext *ctx, MonoBasicBlock *bb) { const MonoJitICallId icall_id = MONO_JIT_ICALL_mini_llvmonly_resume_exception; LLVMValueRef callee; LLVMTypeRef fun_sig = LLVMFunctionType0 (LLVMVoidType (), FALSE); g_assert (ctx->cfg->compile_aot); callee = get_callee (ctx, fun_sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (icall_id)); emit_call (ctx, bb, &ctx->builder, callee, NULL, 0); LLVMBuildUnreachable (ctx->builder); ctx->builder = create_builder (ctx); } static LLVMValueRef mono_llvm_emit_clear_exception_call (EmitContext *ctx, LLVMBuilderRef builder) { const MonoJitICallId icall_id = MONO_JIT_ICALL_mini_llvmonly_clear_exception; LLVMTypeRef call_sig = LLVMFunctionType (LLVMVoidType (), NULL, 0, FALSE); LLVMValueRef callee = NULL; if (!callee) { callee = get_callee (ctx, call_sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (icall_id)); } g_assert (builder && callee); return LLVMBuildCall (builder, callee, NULL, 0, ""); } static LLVMValueRef mono_llvm_emit_load_exception_call (EmitContext *ctx, LLVMBuilderRef builder) { const MonoJitICallId icall_id = MONO_JIT_ICALL_mini_llvmonly_load_exception; LLVMTypeRef call_sig = LLVMFunctionType (ObjRefType (), NULL, 0, FALSE); LLVMValueRef callee = NULL; g_assert (ctx->cfg->compile_aot); if (!callee) { callee = get_callee (ctx, call_sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (icall_id)); } g_assert (builder && callee); return LLVMBuildCall (builder, callee, NULL, 0, "load_exception"); } static LLVMValueRef mono_llvm_emit_match_exception_call (EmitContext *ctx, LLVMBuilderRef builder, gint32 region_start, gint32 region_end) { const char *icall_name = "mini_llvmonly_match_exception"; const MonoJitICallId icall_id = MONO_JIT_ICALL_mini_llvmonly_match_exception; ctx->builder = builder; LLVMValueRef args[5]; const int num_args = G_N_ELEMENTS (args); args [0] = convert (ctx, get_aotconst (ctx, MONO_PATCH_INFO_AOT_JIT_INFO, GINT_TO_POINTER (ctx->cfg->method_index), LLVMPointerType (IntPtrType (), 0)), IntPtrType ()); args [1] = LLVMConstInt (LLVMInt32Type (), region_start, 0); args [2] = LLVMConstInt (LLVMInt32Type (), region_end, 0); if (ctx->cfg->rgctx_var) { if (ctx->cfg->llvm_only) { args [3] = convert (ctx, ctx->rgctx_arg, IntPtrType ()); } else { LLVMValueRef rgctx_alloc = ctx->addresses [ctx->cfg->rgctx_var->dreg]; g_assert (rgctx_alloc); args [3] = LLVMBuildLoad (builder, convert (ctx, rgctx_alloc, LLVMPointerType (IntPtrType (), 0)), ""); } } else { args [3] = LLVMConstInt 
(IntPtrType (), 0, 0); } if (ctx->this_arg) args [4] = convert (ctx, ctx->this_arg, IntPtrType ()); else args [4] = LLVMConstInt (IntPtrType (), 0, 0); LLVMTypeRef match_sig = LLVMFunctionType5 (LLVMInt32Type (), IntPtrType (), LLVMInt32Type (), LLVMInt32Type (), IntPtrType (), IntPtrType (), FALSE); LLVMValueRef callee; g_assert (ctx->cfg->compile_aot); ctx->builder = builder; // get_callee expects ctx->builder to be the emitting builder callee = get_callee (ctx, match_sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (icall_id)); g_assert (builder && callee); g_assert (ctx->ex_var); return LLVMBuildCall (builder, callee, args, num_args, icall_name); } // FIXME: This won't work because the code-finding makes this // not a constant. /*#define MONO_PERSONALITY_DEBUG*/ #ifdef MONO_PERSONALITY_DEBUG static const gboolean use_mono_personality_debug = TRUE; static const char *default_personality_name = "mono_debug_personality"; #else static const gboolean use_mono_personality_debug = FALSE; static const char *default_personality_name = "__gxx_personality_v0"; #endif static LLVMTypeRef default_cpp_lpad_exc_signature (void) { static LLVMTypeRef sig; if (!sig) { LLVMTypeRef signature [2]; signature [0] = LLVMPointerType (LLVMInt8Type (), 0); signature [1] = LLVMInt32Type (); sig = LLVMStructType (signature, 2, FALSE); } return sig; } static LLVMValueRef get_mono_personality (EmitContext *ctx) { LLVMValueRef personality = NULL; LLVMTypeRef personality_type = LLVMFunctionType (LLVMInt32Type (), NULL, 0, TRUE); g_assert (ctx->cfg->compile_aot); if (!use_mono_personality_debug) { personality = LLVMGetNamedFunction (ctx->lmodule, default_personality_name); } else { personality = get_callee (ctx, personality_type, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (MONO_JIT_ICALL_mono_debug_personality)); } g_assert (personality); return personality; } static LLVMBasicBlockRef emit_landing_pad (EmitContext *ctx, int group_index, int group_size) { MonoCompile *cfg = ctx->cfg; LLVMBuilderRef old_builder = ctx->builder; MonoExceptionClause *group_start = cfg->header->clauses + group_index; LLVMBuilderRef lpadBuilder = create_builder (ctx); ctx->builder = lpadBuilder; MonoBasicBlock *handler_bb = cfg->cil_offset_to_bb [CLAUSE_START (group_start)]; g_assert (handler_bb); // <resultval> = landingpad <somety> personality <type> <pers_fn> <clause>+ LLVMValueRef personality = get_mono_personality (ctx); g_assert (personality); char *bb_name = g_strdup_printf ("LPAD%d_BB", group_index); LLVMBasicBlockRef lpad_bb = gen_bb (ctx, bb_name); g_free (bb_name); LLVMPositionBuilderAtEnd (lpadBuilder, lpad_bb); LLVMValueRef landing_pad = LLVMBuildLandingPad (lpadBuilder, default_cpp_lpad_exc_signature (), personality, 0, ""); g_assert (landing_pad); LLVMValueRef cast = LLVMBuildBitCast (lpadBuilder, ctx->module->sentinel_exception, LLVMPointerType (LLVMInt8Type (), 0), "int8TypeInfo"); LLVMAddClause (landing_pad, cast); if (ctx->cfg->deopt) { /* * Call mini_llvmonly_resume_exception_il_state (lmf, il_state) * * The call will execute the catch clause and the rest of the method and store the return * value into ctx->il_state_ret. 
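		 *
		 * So for deopt the landing pad reduces to roughly (a sketch):
		 *
		 *   lpad:
		 *     landingpad ... catch <sentinel type info>
		 *     call void @mini_llvmonly_resume_exception_il_state (%lmf, %il_state)
		 *     <load the result from il_state_ret, if any, and return it>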
*/ if (!ctx->has_catch) { /* Unused */ LLVMBuildUnreachable (lpadBuilder); return lpad_bb; } const MonoJitICallId icall_id = MONO_JIT_ICALL_mini_llvmonly_resume_exception_il_state; LLVMValueRef callee; LLVMValueRef args [2]; LLVMTypeRef fun_sig = LLVMFunctionType2 (LLVMVoidType (), IntPtrType (), IntPtrType (), FALSE); callee = get_callee (ctx, fun_sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (icall_id)); g_assert (ctx->cfg->lmf_var); g_assert (ctx->addresses [ctx->cfg->lmf_var->dreg]); args [0] = LLVMBuildPtrToInt (ctx->builder, ctx->addresses [ctx->cfg->lmf_var->dreg], IntPtrType (), ""); args [1] = LLVMBuildPtrToInt (ctx->builder, ctx->il_state, IntPtrType (), ""); emit_call (ctx, NULL, &ctx->builder, callee, args, 2); /* Return the value set in ctx->il_state_ret */ LLVMTypeRef ret_type = LLVMGetReturnType (LLVMGetElementType (LLVMTypeOf (ctx->lmethod))); LLVMBuilderRef builder = ctx->builder; LLVMValueRef addr, retval, gep, indexes [2]; switch (ctx->linfo->ret.storage) { case LLVMArgNone: LLVMBuildRetVoid (builder); break; case LLVMArgNormal: case LLVMArgWasmVtypeAsScalar: case LLVMArgVtypeInReg: { if (ctx->sig->ret->type == MONO_TYPE_VOID) { LLVMBuildRetVoid (builder); break; } addr = ctx->il_state_ret; g_assert (addr); addr = convert (ctx, ctx->il_state_ret, LLVMPointerType (ret_type, 0)); indexes [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE); indexes [1] = LLVMConstInt (LLVMInt32Type (), 0, FALSE); gep = LLVMBuildGEP (builder, addr, indexes, 1, ""); LLVMBuildRet (builder, LLVMBuildLoad (builder, gep, "")); break; } case LLVMArgVtypeRetAddr: { LLVMValueRef ret_addr; g_assert (cfg->vret_addr); ret_addr = ctx->values [cfg->vret_addr->dreg]; addr = ctx->il_state_ret; g_assert (addr); /* The ret value is in il_state_ret, copy it to the memory pointed to by the vret arg */ ret_type = type_to_llvm_type (ctx, ctx->sig->ret); indexes [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE); indexes [1] = LLVMConstInt (LLVMInt32Type (), 0, FALSE); gep = LLVMBuildGEP (builder, addr, indexes, 1, ""); retval = convert (ctx, LLVMBuildLoad (builder, gep, ""), ret_type); LLVMBuildStore (builder, retval, convert (ctx, ret_addr, LLVMPointerType (ret_type, 0))); LLVMBuildRetVoid (builder); break; } default: g_assert_not_reached (); break; } return lpad_bb; } LLVMBasicBlockRef resume_bb = gen_bb (ctx, "RESUME_BB"); LLVMBuilderRef resume_builder = create_builder (ctx); ctx->builder = resume_builder; LLVMPositionBuilderAtEnd (resume_builder, resume_bb); emit_resume_eh (ctx, handler_bb); // Build match ctx->builder = lpadBuilder; LLVMPositionBuilderAtEnd (lpadBuilder, lpad_bb); gboolean finally_only = TRUE; MonoExceptionClause *group_cursor = group_start; for (int i = 0; i < group_size; i ++) { if (!(group_cursor->flags & MONO_EXCEPTION_CLAUSE_FINALLY || group_cursor->flags & MONO_EXCEPTION_CLAUSE_FAULT)) finally_only = FALSE; group_cursor++; } // FIXME: // Handle landing pad inlining if (!finally_only) { // So at each level of the exception stack we will match the exception again. // During that match, we need to compare against the handler types for the current // protected region. We send the try start and end so that we can only check against // handlers for this lexical protected region. 
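		// Conceptually (a sketch):
		//
		//   switch (mini_llvmonly_match_exception (jinfo, try_start, try_end, rgctx, this)) {
		//   default: goto RESUME_BB; // -1: no handler here, keep unwinding
		//   case <clause_index>: goto <call_handler_target_bb for that clause>;
		//   }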
LLVMValueRef match = mono_llvm_emit_match_exception_call (ctx, lpadBuilder, group_start->try_offset, group_start->try_offset + group_start->try_len); // if returns -1, resume LLVMValueRef switch_ins = LLVMBuildSwitch (lpadBuilder, match, resume_bb, group_size); // else move to that target bb for (int i = 0; i < group_size; i++) { MonoExceptionClause *clause = group_start + i; int clause_index = clause - cfg->header->clauses; MonoBasicBlock *handler_bb = (MonoBasicBlock*)g_hash_table_lookup (ctx->clause_to_handler, GINT_TO_POINTER (clause_index)); g_assert (handler_bb); g_assert (ctx->bblocks [handler_bb->block_num].call_handler_target_bb); LLVMAddCase (switch_ins, LLVMConstInt (LLVMInt32Type (), clause_index, FALSE), ctx->bblocks [handler_bb->block_num].call_handler_target_bb); } } else { int clause_index = group_start - cfg->header->clauses; MonoBasicBlock *finally_bb = (MonoBasicBlock*)g_hash_table_lookup (ctx->clause_to_handler, GINT_TO_POINTER (clause_index)); g_assert (finally_bb); LLVMBuildBr (ctx->builder, ctx->bblocks [finally_bb->block_num].call_handler_target_bb); } ctx->builder = old_builder; return lpad_bb; } static LLVMValueRef create_const_vector (LLVMTypeRef t, const int *vals, int count) { g_assert (count <= MAX_VECTOR_ELEMS); LLVMValueRef llvm_vals [MAX_VECTOR_ELEMS]; for (int i = 0; i < count; i++) llvm_vals [i] = LLVMConstInt (t, vals [i], FALSE); return LLVMConstVector (llvm_vals, count); } static LLVMValueRef create_const_vector_i32 (const int *mask, int count) { return create_const_vector (LLVMInt32Type (), mask, count); } static LLVMValueRef create_const_vector_4_i32 (int v0, int v1, int v2, int v3) { LLVMValueRef mask [4]; mask [0] = LLVMConstInt (LLVMInt32Type (), v0, FALSE); mask [1] = LLVMConstInt (LLVMInt32Type (), v1, FALSE); mask [2] = LLVMConstInt (LLVMInt32Type (), v2, FALSE); mask [3] = LLVMConstInt (LLVMInt32Type (), v3, FALSE); return LLVMConstVector (mask, 4); } static LLVMValueRef create_const_vector_2_i32 (int v0, int v1) { LLVMValueRef mask [2]; mask [0] = LLVMConstInt (LLVMInt32Type (), v0, FALSE); mask [1] = LLVMConstInt (LLVMInt32Type (), v1, FALSE); return LLVMConstVector (mask, 2); } static LLVMValueRef broadcast_element (EmitContext *ctx, LLVMValueRef elem, int count) { LLVMTypeRef t = LLVMTypeOf (elem); LLVMTypeRef init_vec_t = LLVMVectorType (t, 1); LLVMValueRef undef = LLVMGetUndef (init_vec_t); LLVMValueRef vec = LLVMBuildInsertElement (ctx->builder, undef, elem, const_int32 (0), ""); LLVMValueRef select_zero = LLVMConstNull (LLVMVectorType (LLVMInt32Type (), count)); return LLVMBuildShuffleVector (ctx->builder, vec, undef, select_zero, "broadcast"); } static LLVMValueRef broadcast_constant (int const_val, LLVMTypeRef elem_t, int count) { int vals [MAX_VECTOR_ELEMS]; for (int i = 0; i < count; ++i) vals [i] = const_val; return create_const_vector (elem_t, vals, count); } static LLVMValueRef create_shift_vector (EmitContext *ctx, LLVMValueRef type_donor, LLVMValueRef shiftamt) { LLVMTypeRef t = LLVMTypeOf (type_donor); unsigned int elems = LLVMGetVectorSize (t); LLVMTypeRef elem_t = LLVMGetElementType (t); shiftamt = convert_full (ctx, shiftamt, elem_t, TRUE); shiftamt = broadcast_element (ctx, shiftamt, elems); return shiftamt; } static LLVMTypeRef to_integral_vector_type (LLVMTypeRef t) { unsigned int elems = LLVMGetVectorSize (t); LLVMTypeRef elem_t = LLVMGetElementType (t); unsigned int bits = mono_llvm_get_prim_size_bits (elem_t); return LLVMVectorType (LLVMIntType (bits), elems); } static LLVMValueRef bitcast_to_integral (EmitContext 
*ctx, LLVMValueRef vec) { LLVMTypeRef src_t = LLVMTypeOf (vec); LLVMTypeRef dst_t = to_integral_vector_type (src_t); if (dst_t != src_t) return LLVMBuildBitCast (ctx->builder, vec, dst_t, "bc2i"); return vec; } static LLVMValueRef extract_high_elements (EmitContext *ctx, LLVMValueRef src_vec) { LLVMTypeRef src_t = LLVMTypeOf (src_vec); unsigned int src_elems = LLVMGetVectorSize (src_t); unsigned int dst_elems = src_elems / 2; int mask [MAX_VECTOR_ELEMS] = { 0 }; for (int i = 0; i < dst_elems; ++i) mask [i] = dst_elems + i; return LLVMBuildShuffleVector (ctx->builder, src_vec, LLVMGetUndef (src_t), create_const_vector_i32 (mask, dst_elems), "extract_high"); } static LLVMValueRef keep_lowest_element (EmitContext *ctx, LLVMTypeRef dst_t, LLVMValueRef vec) { LLVMTypeRef t = LLVMTypeOf (vec); g_assert (LLVMGetElementType (dst_t) == LLVMGetElementType (t)); unsigned int elems = LLVMGetVectorSize (dst_t); unsigned int src_elems = LLVMGetVectorSize (t); int mask [MAX_VECTOR_ELEMS] = { 0 }; mask [0] = 0; for (unsigned int i = 1; i < elems; ++i) mask [i] = src_elems; return LLVMBuildShuffleVector (ctx->builder, vec, LLVMConstNull (t), create_const_vector_i32 (mask, elems), "keep_lowest"); } static LLVMValueRef concatenate_vectors (EmitContext *ctx, LLVMValueRef xs, LLVMValueRef ys) { LLVMTypeRef t = LLVMTypeOf (xs); unsigned int elems = LLVMGetVectorSize (t) * 2; int mask [MAX_VECTOR_ELEMS] = { 0 }; for (int i = 0; i < elems; ++i) mask [i] = i; return LLVMBuildShuffleVector (ctx->builder, xs, ys, create_const_vector_i32 (mask, elems), "concat_vecs"); } static LLVMValueRef scalar_from_vector (EmitContext *ctx, LLVMValueRef xs) { return LLVMBuildExtractElement (ctx->builder, xs, const_int32 (0), "v2s"); } static LLVMValueRef vector_from_scalar (EmitContext *ctx, LLVMTypeRef type, LLVMValueRef x) { return LLVMBuildInsertElement (ctx->builder, LLVMConstNull (type), x, const_int32 (0), "s2v"); } typedef struct { EmitContext *ctx; MonoBasicBlock *bb; LLVMBasicBlockRef continuation; LLVMValueRef phi; LLVMValueRef switch_ins; LLVMBasicBlockRef tmp_block; LLVMBasicBlockRef default_case; LLVMTypeRef switch_index_type; const char *name; int max_cases; int i; } ImmediateUnrollCtx; static ImmediateUnrollCtx immediate_unroll_begin ( EmitContext *ctx, MonoBasicBlock *bb, int max_cases, LLVMValueRef switch_index, LLVMTypeRef return_type, const char *name) { LLVMBasicBlockRef default_case = gen_bb (ctx, name); LLVMBasicBlockRef continuation = gen_bb (ctx, name); LLVMValueRef switch_ins = LLVMBuildSwitch (ctx->builder, switch_index, default_case, max_cases); LLVMPositionBuilderAtEnd (ctx->builder, continuation); LLVMValueRef phi = LLVMBuildPhi (ctx->builder, return_type, name); ImmediateUnrollCtx ictx = { 0 }; ictx.ctx = ctx; ictx.bb = bb; ictx.continuation = continuation; ictx.phi = phi; ictx.switch_ins = switch_ins; ictx.default_case = default_case; ictx.switch_index_type = LLVMTypeOf (switch_index); ictx.name = name; ictx.max_cases = max_cases; return ictx; } static gboolean immediate_unroll_next (ImmediateUnrollCtx *ictx, int *i) { if (ictx->i >= ictx->max_cases) return FALSE; ictx->tmp_block = gen_bb (ictx->ctx, ictx->name); LLVMPositionBuilderAtEnd (ictx->ctx->builder, ictx->tmp_block); *i = ictx->i; ++ictx->i; return TRUE; } static void immediate_unroll_commit (ImmediateUnrollCtx *ictx, int switch_const, LLVMValueRef value) { LLVMBuildBr (ictx->ctx->builder, ictx->continuation); LLVMAddCase (ictx->switch_ins, LLVMConstInt (ictx->switch_index_type, switch_const, FALSE), ictx->tmp_block); LLVMAddIncoming 
(ictx->phi, &value, &ictx->tmp_block, 1); } static void immediate_unroll_default (ImmediateUnrollCtx *ictx) { LLVMPositionBuilderAtEnd (ictx->ctx->builder, ictx->default_case); } static void immediate_unroll_commit_default (ImmediateUnrollCtx *ictx, LLVMValueRef value) { LLVMBuildBr (ictx->ctx->builder, ictx->continuation); LLVMAddIncoming (ictx->phi, &value, &ictx->default_case, 1); } static void immediate_unroll_unreachable_default (ImmediateUnrollCtx *ictx) { immediate_unroll_default (ictx); LLVMBuildUnreachable (ictx->ctx->builder); } static LLVMValueRef immediate_unroll_end (ImmediateUnrollCtx *ictx, LLVMBasicBlockRef *continuation) { EmitContext *ctx = ictx->ctx; LLVMBuilderRef builder = ctx->builder; LLVMPositionBuilderAtEnd (builder, ictx->continuation); *continuation = ictx->continuation; ctx->bblocks [ictx->bb->block_num].end_bblock = ictx->continuation; return ictx->phi; } typedef struct { EmitContext *ctx; LLVMTypeRef intermediate_type; LLVMTypeRef return_type; gboolean needs_fake_scalar_op; llvm_ovr_tag_t ovr_tag; } ScalarOpFromVectorOpCtx; static inline gboolean check_needs_fake_scalar_op (MonoTypeEnum type) { #if defined(TARGET_ARM64) switch (type) { case MONO_TYPE_U1: case MONO_TYPE_I1: case MONO_TYPE_U2: case MONO_TYPE_I2: return TRUE; } #endif return FALSE; } static ScalarOpFromVectorOpCtx scalar_op_from_vector_op (EmitContext *ctx, LLVMTypeRef return_type, MonoInst *ins) { ScalarOpFromVectorOpCtx ret = { 0 }; ret.ctx = ctx; ret.intermediate_type = return_type; ret.return_type = return_type; ret.needs_fake_scalar_op = check_needs_fake_scalar_op (inst_c1_type (ins)); ret.ovr_tag = ovr_tag_from_llvm_type (return_type); if (!ret.needs_fake_scalar_op) { ret.ovr_tag = ovr_tag_force_scalar (ret.ovr_tag); ret.intermediate_type = ovr_tag_to_llvm_type (ret.ovr_tag); } return ret; } static void scalar_op_from_vector_op_process_args (ScalarOpFromVectorOpCtx *sctx, LLVMValueRef *args, int num_args) { if (!sctx->needs_fake_scalar_op) for (int i = 0; i < num_args; ++i) args [i] = scalar_from_vector (sctx->ctx, args [i]); } static LLVMValueRef scalar_op_from_vector_op_process_result (ScalarOpFromVectorOpCtx *sctx, LLVMValueRef result) { if (sctx->needs_fake_scalar_op) return keep_lowest_element (sctx->ctx, LLVMTypeOf (result), result); return vector_from_scalar (sctx->ctx, sctx->return_type, result); } static void emit_llvmonly_handler_start (EmitContext *ctx, MonoBasicBlock *bb, LLVMBasicBlockRef cbb) { int clause_index = MONO_REGION_CLAUSE_INDEX (bb->region); MonoExceptionClause *clause = &ctx->cfg->header->clauses [clause_index]; // Make exception available to catch blocks if (!(clause->flags & MONO_EXCEPTION_CLAUSE_FINALLY || clause->flags & MONO_EXCEPTION_CLAUSE_FAULT)) { LLVMValueRef mono_exc = mono_llvm_emit_load_exception_call (ctx, ctx->builder); g_assert (ctx->ex_var); LLVMBuildStore (ctx->builder, LLVMBuildBitCast (ctx->builder, mono_exc, ObjRefType (), ""), ctx->ex_var); if (bb->in_scount == 1) { MonoInst *exvar = bb->in_stack [0]; g_assert (!ctx->values [exvar->dreg]); g_assert (ctx->ex_var); ctx->values [exvar->dreg] = LLVMBuildLoad (ctx->builder, ctx->ex_var, "save_exception"); emit_volatile_store (ctx, exvar->dreg); } mono_llvm_emit_clear_exception_call (ctx, ctx->builder); } #ifdef TARGET_WASM if (ctx->cfg->lmf_var && !ctx->cfg->deopt) { LLVMValueRef callee; LLVMValueRef args [1]; LLVMTypeRef sig = LLVMFunctionType1 (LLVMVoidType (), ctx->module->ptr_type, FALSE); /* * There might be an LMF on the stack inserted to enable stack walking, see * method_needs_stack_walk (). 
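 * (Clarification, added: the method prologue linked this LMF onto the
 * thread's LMF stack so stack walks can find it; a normal return would
 * unlink it again.)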
If an exception is thrown, the LMF popping code * is not executed, so do it here. */ g_assert (ctx->addresses [ctx->cfg->lmf_var->dreg]); callee = get_callee (ctx, sig, MONO_PATCH_INFO_JIT_ICALL_ADDR, GUINT_TO_POINTER (MONO_JIT_ICALL_mini_llvmonly_pop_lmf)); args [0] = convert (ctx, ctx->addresses [ctx->cfg->lmf_var->dreg], ctx->module->ptr_type); emit_call (ctx, bb, &ctx->builder, callee, args, 1); } #endif LLVMBuilderRef handler_builder = create_builder (ctx); LLVMBasicBlockRef target_bb = ctx->bblocks [bb->block_num].call_handler_target_bb; LLVMPositionBuilderAtEnd (handler_builder, target_bb); // Make the handler code end with a jump to cbb LLVMBuildBr (handler_builder, cbb); } static void emit_handler_start (EmitContext *ctx, MonoBasicBlock *bb, LLVMBuilderRef builder) { MonoCompile *cfg = ctx->cfg; LLVMValueRef *values = ctx->values; LLVMModuleRef lmodule = ctx->lmodule; BBInfo *bblocks = ctx->bblocks; LLVMTypeRef i8ptr; LLVMValueRef personality; LLVMValueRef landing_pad; LLVMBasicBlockRef target_bb; MonoInst *exvar; static int ti_generator; char ti_name [128]; LLVMValueRef type_info; int clause_index; GSList *l; // <resultval> = landingpad <somety> personality <type> <pers_fn> <clause>+ if (cfg->compile_aot) { /* Use a dummy personality function */ personality = LLVMGetNamedFunction (lmodule, "mono_personality"); g_assert (personality); } else { /* Can't cache this as each method is in its own llvm module */ LLVMTypeRef personality_type = LLVMFunctionType (LLVMInt32Type (), NULL, 0, TRUE); personality = LLVMAddFunction (ctx->lmodule, "mono_personality", personality_type); mono_llvm_add_func_attr (personality, LLVM_ATTR_NO_UNWIND); LLVMBasicBlockRef entry_bb = LLVMAppendBasicBlock (personality, "ENTRY"); LLVMBuilderRef builder2 = LLVMCreateBuilder (); LLVMPositionBuilderAtEnd (builder2, entry_bb); LLVMBuildRet (builder2, LLVMConstInt (LLVMInt32Type (), 0, FALSE)); LLVMDisposeBuilder (builder2); } i8ptr = LLVMPointerType (LLVMInt8Type (), 0); clause_index = (mono_get_block_region_notry (cfg, bb->region) >> 8) - 1; /* * Create the type info */ sprintf (ti_name, "type_info_%d", ti_generator); ti_generator ++; if (cfg->compile_aot) { /* decode_eh_frame () in aot-runtime.c will decode this */ type_info = LLVMAddGlobal (lmodule, LLVMInt32Type (), ti_name); LLVMSetInitializer (type_info, LLVMConstInt (LLVMInt32Type (), clause_index, FALSE)); /* * These symbols are not really used, the clause_index is embedded into the EH tables generated by DwarfMonoException in LLVM. */ LLVMSetLinkage (type_info, LLVMInternalLinkage); } else { type_info = LLVMAddGlobal (lmodule, LLVMInt32Type (), ti_name); LLVMSetInitializer (type_info, LLVMConstInt (LLVMInt32Type (), clause_index, FALSE)); } { LLVMTypeRef members [2], ret_type; members [0] = i8ptr; members [1] = LLVMInt32Type (); ret_type = LLVMStructType (members, 2, FALSE); landing_pad = LLVMBuildLandingPad (builder, ret_type, personality, 1, ""); LLVMAddClause (landing_pad, type_info); /* Store the exception into the exvar */ if (ctx->ex_var) LLVMBuildStore (builder, convert (ctx, LLVMBuildExtractValue (builder, landing_pad, 0, "ex_obj"), ObjRefType ()), ctx->ex_var); } /* * LLVM throw sites are associated with one landing pad, and LLVM-generated * code expects control to be transferred to this landing pad even in the * presence of nested clauses. 
The landing pad needs to branch to the landing * pads belonging to nested clauses based on the selector value returned by * the landing pad instruction, which is passed to the landing pad in a * register by the EH code. */ target_bb = bblocks [bb->block_num].call_handler_target_bb; g_assert (target_bb); /* * Branch to the correct landing pad */ LLVMValueRef ex_selector = LLVMBuildExtractValue (builder, landing_pad, 1, "ex_selector"); LLVMValueRef switch_ins = LLVMBuildSwitch (builder, ex_selector, target_bb, 0); for (l = ctx->nested_in [clause_index]; l; l = l->next) { int nesting_clause_index = GPOINTER_TO_INT (l->data); MonoBasicBlock *handler_bb; handler_bb = (MonoBasicBlock*)g_hash_table_lookup (ctx->clause_to_handler, GINT_TO_POINTER (nesting_clause_index)); g_assert (handler_bb); g_assert (ctx->bblocks [handler_bb->block_num].call_handler_target_bb); LLVMAddCase (switch_ins, LLVMConstInt (LLVMInt32Type (), nesting_clause_index, FALSE), ctx->bblocks [handler_bb->block_num].call_handler_target_bb); } /* Start a new bblock which CALL_HANDLER can branch to */ ctx->builder = builder = create_builder (ctx); LLVMPositionBuilderAtEnd (ctx->builder, target_bb); ctx->bblocks [bb->block_num].end_bblock = target_bb; /* Store the exception into the IL level exvar */ if (bb->in_scount == 1) { g_assert (bb->in_scount == 1); exvar = bb->in_stack [0]; // FIXME: This is shared with filter clauses ? g_assert (!values [exvar->dreg]); g_assert (ctx->ex_var); values [exvar->dreg] = LLVMBuildLoad (builder, ctx->ex_var, ""); emit_volatile_store (ctx, exvar->dreg); } /* Make normal branches to the start of the clause branch to the new bblock */ bblocks [bb->block_num].bblock = target_bb; } static LLVMValueRef get_double_const (MonoCompile *cfg, double val) { //#ifdef TARGET_WASM #if 0 //Wasm requires us to canonicalize NaNs. 
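/* (Background, added: 0x7FF8000000000000 is the canonical quiet-NaN bit
   pattern for an IEEE-754 double, all exponent bits set plus the top
   mantissa bit; 0x7FC00000 below is the single-precision equivalent.) */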
if (mono_isnan (val)) *(gint64 *)&val = 0x7FF8000000000000ll; #endif return LLVMConstReal (LLVMDoubleType (), val); } static LLVMValueRef get_float_const (MonoCompile *cfg, float val) { //#ifdef TARGET_WASM #if 0 if (mono_isnan (val)) *(int *)&val = 0x7FC00000; #endif if (cfg->r4fp) return LLVMConstReal (LLVMFloatType (), val); else return LLVMConstFPExt (LLVMConstReal (LLVMFloatType (), val), LLVMDoubleType ()); } static LLVMValueRef call_overloaded_intrins (EmitContext *ctx, int id, llvm_ovr_tag_t ovr_tag, LLVMValueRef *args, const char *name) { int key = key_from_id_and_tag (id, ovr_tag); LLVMValueRef intrins = get_intrins (ctx, key); int nargs = LLVMCountParamTypes (LLVMGetElementType (LLVMTypeOf (intrins))); for (int i = 0; i < nargs; ++i) { LLVMTypeRef t1 = LLVMTypeOf (args [i]); LLVMTypeRef t2 = LLVMTypeOf (LLVMGetParam (intrins, i)); if (t1 != t2) args [i] = convert (ctx, args [i], t2); } return LLVMBuildCall (ctx->builder, intrins, args, nargs, name); } static LLVMValueRef call_intrins (EmitContext *ctx, int id, LLVMValueRef *args, const char *name) { return call_overloaded_intrins (ctx, id, 0, args, name); } static void process_bb (EmitContext *ctx, MonoBasicBlock *bb) { MonoCompile *cfg = ctx->cfg; MonoMethodSignature *sig = ctx->sig; LLVMValueRef method = ctx->lmethod; LLVMValueRef *values = ctx->values; LLVMValueRef *addresses = ctx->addresses; LLVMCallInfo *linfo = ctx->linfo; BBInfo *bblocks = ctx->bblocks; MonoInst *ins; LLVMBasicBlockRef cbb; LLVMBuilderRef builder; gboolean has_terminator; LLVMValueRef v; LLVMValueRef lhs, rhs, arg3; int nins = 0; cbb = get_end_bb (ctx, bb); builder = create_builder (ctx); ctx->builder = builder; LLVMPositionBuilderAtEnd (builder, cbb); if (!ctx_ok (ctx)) return; if (cfg->interp_entry_only && bb != cfg->bb_init && bb != cfg->bb_entry && bb != cfg->bb_exit) { /* The interp entry code is in bb_entry, skip the rest as we might not be able to compile it */ LLVMBuildUnreachable (builder); return; } if (bb->flags & BB_EXCEPTION_HANDLER) { if (!ctx->llvm_only && !bblocks [bb->block_num].invoke_target) { set_failure (ctx, "handler without invokes"); return; } if (ctx->llvm_only) emit_llvmonly_handler_start (ctx, bb, cbb); else emit_handler_start (ctx, bb, builder); if (!ctx_ok (ctx)) return; builder = ctx->builder; } /* Handle PHI nodes first */ /* They should be grouped at the start of the bb */ for (ins = bb->code; ins; ins = ins->next) { emit_dbg_loc (ctx, builder, ins->cil_code); if (ins->opcode == OP_NOP) continue; if (!MONO_IS_PHI (ins)) break; if (cfg->interp_entry_only) break; int i; gboolean empty = TRUE; /* Check that all input bblocks really branch to us */ for (i = 0; i < bb->in_count; ++i) { if (bb->in_bb [i]->last_ins && bb->in_bb [i]->last_ins->opcode == OP_NOT_REACHED) ins->inst_phi_args [i + 1] = -1; else empty = FALSE; } if (empty) { /* LLVM doesn't like phi instructions with zero operands */ ctx->is_dead [ins->dreg] = TRUE; continue; } /* Created earlier, insert it now */ LLVMInsertIntoBuilder (builder, values [ins->dreg]); for (i = 0; i < ins->inst_phi_args [0]; i++) { int sreg1 = ins->inst_phi_args [i + 1]; int count, j; /* * Count the number of times the incoming bblock branches to us, * since llvm requires a separate entry for each. 
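 * (Example, added: if an incoming bblock ends in OP_SWITCH and two of its
 * cases both target this block, the phi needs two identical incoming
 * pairs, e.g. [ %v, %pred ] listed twice, which is what the counting
 * below produces.)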
*/ if (bb->in_bb [i]->last_ins && bb->in_bb [i]->last_ins->opcode == OP_SWITCH) { MonoInst *switch_ins = bb->in_bb [i]->last_ins; count = 0; for (j = 0; j < GPOINTER_TO_UINT (switch_ins->klass); ++j) { if (switch_ins->inst_many_bb [j] == bb) count ++; } } else { count = 1; } /* Remember for later */ for (j = 0; j < count; ++j) { PhiNode *node = (PhiNode*)mono_mempool_alloc0 (ctx->mempool, sizeof (PhiNode)); node->bb = bb; node->phi = ins; node->in_bb = bb->in_bb [i]; node->sreg = sreg1; bblocks [bb->in_bb [i]->block_num].phi_nodes = g_slist_prepend_mempool (ctx->mempool, bblocks [bb->in_bb [i]->block_num].phi_nodes, node); } } } // Add volatile stores for PHI nodes // These need to be emitted after the PHI nodes for (ins = bb->code; ins; ins = ins->next) { const char *spec = LLVM_INS_INFO (ins->opcode); if (ins->opcode == OP_NOP) continue; if (!MONO_IS_PHI (ins)) break; if (spec [MONO_INST_DEST] != 'v') emit_volatile_store (ctx, ins->dreg); } has_terminator = FALSE; for (ins = bb->code; ins; ins = ins->next) { const char *spec = LLVM_INS_INFO (ins->opcode); char *dname = NULL; char dname_buf [128]; emit_dbg_loc (ctx, builder, ins->cil_code); nins ++; if (nins > 1000) { /* * Some steps in llc are non-linear in the size of basic blocks, see #5714. * Start a new bblock. * Prevent the bblocks from being merged by doing a volatile load + cond branch * from localloc-ed memory. */ if (!cfg->llvm_only) ;//set_failure (ctx, "basic block too long"); if (!ctx->long_bb_break_var) { ctx->long_bb_break_var = build_alloca_llvm_type_name (ctx, LLVMInt32Type (), 0, "long_bb_break"); mono_llvm_build_store (ctx->alloca_builder, LLVMConstInt (LLVMInt32Type (), 0, FALSE), ctx->long_bb_break_var, TRUE, LLVM_BARRIER_NONE); } cbb = gen_bb (ctx, "CONT_LONG_BB"); LLVMBasicBlockRef dummy_bb = gen_bb (ctx, "CONT_LONG_BB_DUMMY"); LLVMValueRef load = mono_llvm_build_load (builder, ctx->long_bb_break_var, "", TRUE); /* * The long_bb_break_var is initialized to 0 in the prolog, so this branch will always go to 'cbb' * but llvm doesn't know that, so the branch is not going to be eliminated. 
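 *
 * (Added sketch of the emitted pattern, value names illustrative:
 *
 *   %v = load volatile i32, i32* %long_bb_break
 *   %c = icmp eq i32 %v, 0
 *   br i1 %c, label %CONT_LONG_BB, label %CONT_LONG_BB_DUMMY
 *
 * CONT_LONG_BB_DUMMY stores 1 back through the same pointer and then
 * branches to CONT_LONG_BB, so neither block can be folded away.)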
*/ LLVMValueRef cmp = LLVMBuildICmp (builder, LLVMIntEQ, load, LLVMConstInt (LLVMInt32Type (), 0, FALSE), ""); LLVMBuildCondBr (builder, cmp, cbb, dummy_bb); /* Emit a dummy false bblock which does nothing but contains a volatile store so it cannot be eliminated */ ctx->builder = builder = create_builder (ctx); LLVMPositionBuilderAtEnd (builder, dummy_bb); mono_llvm_build_store (builder, LLVMConstInt (LLVMInt32Type (), 1, FALSE), ctx->long_bb_break_var, TRUE, LLVM_BARRIER_NONE); LLVMBuildBr (builder, cbb); ctx->builder = builder = create_builder (ctx); LLVMPositionBuilderAtEnd (builder, cbb); ctx->bblocks [bb->block_num].end_bblock = cbb; nins = 0; emit_dbg_loc (ctx, builder, ins->cil_code); } if (has_terminator) /* There could be instructions after a terminator, skip them */ break; if (spec [MONO_INST_DEST] != ' ' && !MONO_IS_STORE_MEMBASE (ins)) { sprintf (dname_buf, "t%d", ins->dreg); dname = dname_buf; } if (spec [MONO_INST_SRC1] != ' ' && spec [MONO_INST_SRC1] != 'v') { MonoInst *var = get_vreg_to_inst (cfg, ins->sreg1); if (var && var->flags & (MONO_INST_VOLATILE|MONO_INST_INDIRECT) && var->opcode != OP_GSHAREDVT_ARG_REGOFFSET) { lhs = emit_volatile_load (ctx, ins->sreg1); } else { /* It is ok for SETRET to have an uninitialized argument */ if (!values [ins->sreg1] && ins->opcode != OP_SETRET) { set_failure (ctx, "sreg1"); return; } lhs = values [ins->sreg1]; } } else { lhs = NULL; } if (spec [MONO_INST_SRC2] != ' ' && spec [MONO_INST_SRC2] != 'v') { MonoInst *var = get_vreg_to_inst (cfg, ins->sreg2); if (var && var->flags & (MONO_INST_VOLATILE|MONO_INST_INDIRECT)) { rhs = emit_volatile_load (ctx, ins->sreg2); } else { if (!values [ins->sreg2]) { set_failure (ctx, "sreg2"); return; } rhs = values [ins->sreg2]; } } else { rhs = NULL; } if (spec [MONO_INST_SRC3] != ' ' && spec [MONO_INST_SRC3] != 'v') { MonoInst *var = get_vreg_to_inst (cfg, ins->sreg3); if (var && var->flags & (MONO_INST_VOLATILE|MONO_INST_INDIRECT)) { arg3 = emit_volatile_load (ctx, ins->sreg3); } else { if (!values [ins->sreg3]) { set_failure (ctx, "sreg3"); return; } arg3 = values [ins->sreg3]; } } else { arg3 = NULL; } //mono_print_ins (ins); gboolean skip_volatile_store = FALSE; switch (ins->opcode) { case OP_NOP: case OP_NOT_NULL: case OP_LIVERANGE_START: case OP_LIVERANGE_END: break; case OP_ICONST: values [ins->dreg] = LLVMConstInt (LLVMInt32Type (), ins->inst_c0, FALSE); break; case OP_I8CONST: #if TARGET_SIZEOF_VOID_P == 4 values [ins->dreg] = LLVMConstInt (LLVMInt64Type (), GET_LONG_IMM (ins), FALSE); #else values [ins->dreg] = LLVMConstInt (LLVMInt64Type (), (gint64)ins->inst_c0, FALSE); #endif break; case OP_R8CONST: values [ins->dreg] = get_double_const (cfg, *(double*)ins->inst_p0); break; case OP_R4CONST: values [ins->dreg] = get_float_const (cfg, *(float*)ins->inst_p0); break; case OP_DUMMY_ICONST: values [ins->dreg] = LLVMConstInt (LLVMInt32Type (), 0, FALSE); break; case OP_DUMMY_I8CONST: values [ins->dreg] = LLVMConstInt (LLVMInt64Type (), 0, FALSE); break; case OP_DUMMY_R8CONST: values [ins->dreg] = LLVMConstReal (LLVMDoubleType (), 0.0f); break; case OP_BR: { LLVMBasicBlockRef target_bb = get_bb (ctx, ins->inst_target_bb); LLVMBuildBr (builder, target_bb); has_terminator = TRUE; break; } case OP_SWITCH: { int i; LLVMValueRef v; char bb_name [128]; LLVMBasicBlockRef new_bb; LLVMBuilderRef new_builder; // The default branch is already handled // FIXME: Handle it here /* Start new bblock */ sprintf (bb_name, "SWITCH_DEFAULT_BB%d", ctx->default_index ++); new_bb = LLVMAppendBasicBlock (ctx->lmethod, 
bb_name); lhs = convert (ctx, lhs, LLVMInt32Type ()); v = LLVMBuildSwitch (builder, lhs, new_bb, GPOINTER_TO_UINT (ins->klass)); for (i = 0; i < GPOINTER_TO_UINT (ins->klass); ++i) { MonoBasicBlock *target_bb = ins->inst_many_bb [i]; LLVMAddCase (v, LLVMConstInt (LLVMInt32Type (), i, FALSE), get_bb (ctx, target_bb)); } new_builder = create_builder (ctx); LLVMPositionBuilderAtEnd (new_builder, new_bb); LLVMBuildUnreachable (new_builder); has_terminator = TRUE; g_assert (!ins->next); break; } case OP_SETRET: switch (linfo->ret.storage) { case LLVMArgNormal: case LLVMArgVtypeInReg: case LLVMArgVtypeAsScalar: case LLVMArgWasmVtypeAsScalar: { LLVMTypeRef ret_type = LLVMGetReturnType (LLVMGetElementType (LLVMTypeOf (method))); LLVMValueRef retval = LLVMGetUndef (ret_type); gboolean src_in_reg = FALSE; gboolean is_simd = MONO_CLASS_IS_SIMD (ctx->cfg, mono_class_from_mono_type_internal (sig->ret)); switch (linfo->ret.storage) { case LLVMArgNormal: src_in_reg = TRUE; break; case LLVMArgVtypeInReg: case LLVMArgVtypeAsScalar: src_in_reg = is_simd; break; } if (src_in_reg && (!lhs || ctx->is_dead [ins->sreg1])) { /* * The method did not set its return value, probably because it * ends with a throw. */ LLVMBuildRet (builder, retval); break; } switch (linfo->ret.storage) { case LLVMArgNormal: retval = convert (ctx, lhs, type_to_llvm_type (ctx, sig->ret)); break; case LLVMArgVtypeInReg: if (is_simd) { /* The return type is an LLVM aggregate type, so a bare bitcast cannot be used to do this conversion. */ int width = mono_type_size (sig->ret, NULL); int elems = width / TARGET_SIZEOF_VOID_P; /* The return value might not be set if there is a throw */ LLVMValueRef val = LLVMBuildBitCast (builder, lhs, LLVMVectorType (IntPtrType (), elems), ""); for (int i = 0; i < elems; ++i) { LLVMValueRef element = LLVMBuildExtractElement (builder, val, const_int32 (i), ""); retval = LLVMBuildInsertValue (builder, retval, element, i, "setret_simd_vtype_in_reg"); } } else { LLVMValueRef addr = LLVMBuildBitCast (builder, addresses [ins->sreg1], LLVMPointerType (ret_type, 0), ""); for (int i = 0; i < 2; ++i) { if (linfo->ret.pair_storage [i] == LLVMArgInIReg) { LLVMValueRef indexes [2], part_addr; indexes [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE); indexes [1] = LLVMConstInt (LLVMInt32Type (), i, FALSE); part_addr = LLVMBuildGEP (builder, addr, indexes, 2, ""); retval = LLVMBuildInsertValue (builder, retval, LLVMBuildLoad (builder, part_addr, ""), i, ""); } else { g_assert (linfo->ret.pair_storage [i] == LLVMArgNone); } } } break; case LLVMArgVtypeAsScalar: if (is_simd) { retval = LLVMBuildBitCast (builder, values [ins->sreg1], ret_type, "setret_simd_vtype_as_scalar"); } else { g_assert (addresses [ins->sreg1]); retval = LLVMBuildLoad (builder, LLVMBuildBitCast (builder, addresses [ins->sreg1], LLVMPointerType (ret_type, 0), ""), ""); } break; case LLVMArgWasmVtypeAsScalar: g_assert (addresses [ins->sreg1]); retval = LLVMBuildLoad (builder, LLVMBuildBitCast (builder, addresses [ins->sreg1], LLVMPointerType (ret_type, 0), ""), ""); break; } LLVMBuildRet (builder, retval); break; } case LLVMArgVtypeByRef: { LLVMBuildRetVoid (builder); break; } case LLVMArgGsharedvtFixed: { LLVMTypeRef ret_type = type_to_llvm_type (ctx, sig->ret); /* The return value is in lhs, need to store to the vret argument */ /* sreg1 might not be set */ if (lhs) { g_assert (cfg->vret_addr); g_assert (values [cfg->vret_addr->dreg]); LLVMBuildStore (builder, convert (ctx, lhs, ret_type), convert (ctx, values [cfg->vret_addr->dreg], LLVMPointerType 
(ret_type, 0))); } LLVMBuildRetVoid (builder); break; } case LLVMArgGsharedvtFixedVtype: { /* Already set */ LLVMBuildRetVoid (builder); break; } case LLVMArgGsharedvtVariable: { /* Already set */ LLVMBuildRetVoid (builder); break; } case LLVMArgVtypeRetAddr: { LLVMBuildRetVoid (builder); break; } case LLVMArgAsIArgs: case LLVMArgFpStruct: { LLVMTypeRef ret_type = LLVMGetReturnType (LLVMGetElementType (LLVMTypeOf (method))); LLVMValueRef retval; g_assert (addresses [ins->sreg1]); retval = LLVMBuildLoad (builder, convert (ctx, addresses [ins->sreg1], LLVMPointerType (ret_type, 0)), ""); LLVMBuildRet (builder, retval); break; } case LLVMArgNone: LLVMBuildRetVoid (builder); break; default: g_assert_not_reached (); break; } has_terminator = TRUE; break; case OP_ICOMPARE: case OP_FCOMPARE: case OP_RCOMPARE: case OP_LCOMPARE: case OP_COMPARE: case OP_ICOMPARE_IMM: case OP_LCOMPARE_IMM: case OP_COMPARE_IMM: { CompRelation rel; LLVMValueRef cmp, args [16]; gboolean likely = (ins->flags & MONO_INST_LIKELY) != 0; gboolean unlikely = FALSE; if (MONO_IS_COND_BRANCH_OP (ins->next)) { if (ins->next->inst_false_bb->out_of_line) likely = TRUE; else if (ins->next->inst_true_bb->out_of_line) unlikely = TRUE; } if (ins->next->opcode == OP_NOP) break; if (ins->next->opcode == OP_BR) /* The comparison result is not needed */ continue; rel = mono_opcode_to_cond (ins->next->opcode); if (ins->opcode == OP_ICOMPARE_IMM) { lhs = convert (ctx, lhs, LLVMInt32Type ()); rhs = LLVMConstInt (LLVMInt32Type (), ins->inst_imm, FALSE); } if (ins->opcode == OP_LCOMPARE_IMM) { lhs = convert (ctx, lhs, LLVMInt64Type ()); rhs = LLVMConstInt (LLVMInt64Type (), GET_LONG_IMM (ins), FALSE); } if (ins->opcode == OP_LCOMPARE) { lhs = convert (ctx, lhs, LLVMInt64Type ()); rhs = convert (ctx, rhs, LLVMInt64Type ()); } if (ins->opcode == OP_ICOMPARE) { lhs = convert (ctx, lhs, LLVMInt32Type ()); rhs = convert (ctx, rhs, LLVMInt32Type ()); } if (lhs && rhs) { if (LLVMGetTypeKind (LLVMTypeOf (lhs)) == LLVMPointerTypeKind) rhs = convert (ctx, rhs, LLVMTypeOf (lhs)); else if (LLVMGetTypeKind (LLVMTypeOf (rhs)) == LLVMPointerTypeKind) lhs = convert (ctx, lhs, LLVMTypeOf (rhs)); } /* We use COMPARE+SETcc/Bcc, llvm uses SETcc+br cond */ if (ins->opcode == OP_FCOMPARE) { cmp = LLVMBuildFCmp (builder, fpcond_to_llvm_cond [rel], convert (ctx, lhs, LLVMDoubleType ()), convert (ctx, rhs, LLVMDoubleType ()), ""); } else if (ins->opcode == OP_RCOMPARE) { cmp = LLVMBuildFCmp (builder, fpcond_to_llvm_cond [rel], convert (ctx, lhs, LLVMFloatType ()), convert (ctx, rhs, LLVMFloatType ()), ""); } else if (ins->opcode == OP_COMPARE_IMM) { LLVMIntPredicate llvm_pred = cond_to_llvm_cond [rel]; if (LLVMGetTypeKind (LLVMTypeOf (lhs)) == LLVMPointerTypeKind && ins->inst_imm == 0) { // We are emitting a NULL check for a pointer gboolean nonnull = mono_llvm_is_nonnull (lhs); if (nonnull && llvm_pred == LLVMIntEQ) cmp = LLVMConstInt (LLVMInt1Type (), FALSE, FALSE); else if (nonnull && llvm_pred == LLVMIntNE) cmp = LLVMConstInt (LLVMInt1Type (), TRUE, FALSE); else cmp = LLVMBuildICmp (builder, llvm_pred, lhs, LLVMConstNull (LLVMTypeOf (lhs)), ""); } else { cmp = LLVMBuildICmp (builder, llvm_pred, convert (ctx, lhs, IntPtrType ()), LLVMConstInt (IntPtrType (), ins->inst_imm, FALSE), ""); } } else if (ins->opcode == OP_LCOMPARE_IMM) { cmp = LLVMBuildICmp (builder, cond_to_llvm_cond [rel], lhs, rhs, ""); } else if (ins->opcode == OP_COMPARE) { if (LLVMGetTypeKind (LLVMTypeOf (lhs)) == LLVMPointerTypeKind && LLVMTypeOf (lhs) == LLVMTypeOf (rhs)) cmp = LLVMBuildICmp 
(builder, cond_to_llvm_cond [rel], lhs, rhs, ""); else cmp = LLVMBuildICmp (builder, cond_to_llvm_cond [rel], convert (ctx, lhs, IntPtrType ()), convert (ctx, rhs, IntPtrType ()), ""); } else cmp = LLVMBuildICmp (builder, cond_to_llvm_cond [rel], lhs, rhs, ""); if (likely || unlikely) { args [0] = cmp; args [1] = LLVMConstInt (LLVMInt1Type (), likely ? 1 : 0, FALSE); cmp = call_intrins (ctx, INTRINS_EXPECT_I1, args, ""); } if (MONO_IS_COND_BRANCH_OP (ins->next)) { if (ins->next->inst_true_bb == ins->next->inst_false_bb) { /* * If the target bb contains PHI instructions, LLVM requires * two PHI entries for this bblock, while we only generate one. * So convert this to an unconditional bblock. (bxc #171). */ LLVMBuildBr (builder, get_bb (ctx, ins->next->inst_true_bb)); } else { LLVMBuildCondBr (builder, cmp, get_bb (ctx, ins->next->inst_true_bb), get_bb (ctx, ins->next->inst_false_bb)); } has_terminator = TRUE; } else if (MONO_IS_SETCC (ins->next)) { sprintf (dname_buf, "t%d", ins->next->dreg); dname = dname_buf; values [ins->next->dreg] = LLVMBuildZExt (builder, cmp, LLVMInt32Type (), dname); /* Add stores for volatile variables */ emit_volatile_store (ctx, ins->next->dreg); } else if (MONO_IS_COND_EXC (ins->next)) { gboolean force_explicit_branch = FALSE; if (bb->region != -1) { /* Don't tag null check branches in exception-handling * regions with `make.implicit`. */ force_explicit_branch = TRUE; } emit_cond_system_exception (ctx, bb, (const char*)ins->next->inst_p1, cmp, force_explicit_branch); if (!ctx_ok (ctx)) break; builder = ctx->builder; } else { set_failure (ctx, "next"); break; } ins = ins->next; break; } case OP_FCEQ: case OP_FCNEQ: case OP_FCLT: case OP_FCLT_UN: case OP_FCGT: case OP_FCGT_UN: case OP_FCGE: case OP_FCLE: { CompRelation rel; LLVMValueRef cmp; rel = mono_opcode_to_cond (ins->opcode); cmp = LLVMBuildFCmp (builder, fpcond_to_llvm_cond [rel], convert (ctx, lhs, LLVMDoubleType ()), convert (ctx, rhs, LLVMDoubleType ()), ""); values [ins->dreg] = LLVMBuildZExt (builder, cmp, LLVMInt32Type (), dname); break; } case OP_RCEQ: case OP_RCNEQ: case OP_RCLT: case OP_RCLT_UN: case OP_RCGT: case OP_RCGT_UN: { CompRelation rel; LLVMValueRef cmp; rel = mono_opcode_to_cond (ins->opcode); cmp = LLVMBuildFCmp (builder, fpcond_to_llvm_cond [rel], convert (ctx, lhs, LLVMFloatType ()), convert (ctx, rhs, LLVMFloatType ()), ""); values [ins->dreg] = LLVMBuildZExt (builder, cmp, LLVMInt32Type (), dname); break; } case OP_PHI: case OP_FPHI: case OP_VPHI: case OP_XPHI: { // Handled above skip_volatile_store = TRUE; break; } case OP_MOVE: case OP_LMOVE: case OP_XMOVE: case OP_SETFRET: g_assert (lhs); values [ins->dreg] = lhs; break; case OP_FMOVE: case OP_RMOVE: { MonoInst *var = get_vreg_to_inst (cfg, ins->dreg); g_assert (lhs); values [ins->dreg] = lhs; if (var && m_class_get_byval_arg (var->klass)->type == MONO_TYPE_R4) { /* * This is added by the spilling pass in case of the JIT, * but we have to do it ourselves. 
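 * (That is: in non-r4fp mode an R4 variable is modelled as a double, so
 * the moved value is narrowed back to float here before the volatile
 * store; with cfg->r4fp the convert should be a no-op.)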
*/ values [ins->dreg] = convert (ctx, values [ins->dreg], LLVMFloatType ()); } break; } case OP_MOVE_F_TO_I4: { values [ins->dreg] = LLVMBuildBitCast (builder, LLVMBuildFPTrunc (builder, lhs, LLVMFloatType (), ""), LLVMInt32Type (), ""); break; } case OP_MOVE_I4_TO_F: { values [ins->dreg] = LLVMBuildFPExt (builder, LLVMBuildBitCast (builder, lhs, LLVMFloatType (), ""), LLVMDoubleType (), ""); break; } case OP_MOVE_F_TO_I8: { values [ins->dreg] = LLVMBuildBitCast (builder, lhs, LLVMInt64Type (), ""); break; } case OP_MOVE_I8_TO_F: { values [ins->dreg] = LLVMBuildBitCast (builder, lhs, LLVMDoubleType (), ""); break; } case OP_IADD: case OP_ISUB: case OP_IAND: case OP_IMUL: case OP_IDIV: case OP_IDIV_UN: case OP_IREM: case OP_IREM_UN: case OP_IOR: case OP_IXOR: case OP_ISHL: case OP_ISHR: case OP_ISHR_UN: case OP_FADD: case OP_FSUB: case OP_FMUL: case OP_FDIV: case OP_LADD: case OP_LSUB: case OP_LMUL: case OP_LDIV: case OP_LDIV_UN: case OP_LREM: case OP_LREM_UN: case OP_LAND: case OP_LOR: case OP_LXOR: case OP_LSHL: case OP_LSHR: case OP_LSHR_UN: lhs = convert (ctx, lhs, regtype_to_llvm_type (spec [MONO_INST_DEST])); rhs = convert (ctx, rhs, regtype_to_llvm_type (spec [MONO_INST_DEST])); emit_div_check (ctx, builder, bb, ins, lhs, rhs); if (!ctx_ok (ctx)) break; builder = ctx->builder; switch (ins->opcode) { case OP_IADD: case OP_LADD: values [ins->dreg] = LLVMBuildAdd (builder, lhs, rhs, dname); break; case OP_ISUB: case OP_LSUB: values [ins->dreg] = LLVMBuildSub (builder, lhs, rhs, dname); break; case OP_IMUL: case OP_LMUL: values [ins->dreg] = LLVMBuildMul (builder, lhs, rhs, dname); break; case OP_IREM: case OP_LREM: values [ins->dreg] = LLVMBuildSRem (builder, lhs, rhs, dname); break; case OP_IREM_UN: case OP_LREM_UN: values [ins->dreg] = LLVMBuildURem (builder, lhs, rhs, dname); break; case OP_IDIV: case OP_LDIV: values [ins->dreg] = LLVMBuildSDiv (builder, lhs, rhs, dname); break; case OP_IDIV_UN: case OP_LDIV_UN: values [ins->dreg] = LLVMBuildUDiv (builder, lhs, rhs, dname); break; case OP_FDIV: case OP_RDIV: values [ins->dreg] = LLVMBuildFDiv (builder, lhs, rhs, dname); break; case OP_IAND: case OP_LAND: values [ins->dreg] = LLVMBuildAnd (builder, lhs, rhs, dname); break; case OP_IOR: case OP_LOR: values [ins->dreg] = LLVMBuildOr (builder, lhs, rhs, dname); break; case OP_IXOR: case OP_LXOR: values [ins->dreg] = LLVMBuildXor (builder, lhs, rhs, dname); break; case OP_ISHL: case OP_LSHL: values [ins->dreg] = LLVMBuildShl (builder, lhs, rhs, dname); break; case OP_ISHR: case OP_LSHR: values [ins->dreg] = LLVMBuildAShr (builder, lhs, rhs, dname); break; case OP_ISHR_UN: case OP_LSHR_UN: values [ins->dreg] = LLVMBuildLShr (builder, lhs, rhs, dname); break; case OP_FADD: values [ins->dreg] = LLVMBuildFAdd (builder, lhs, rhs, dname); break; case OP_FSUB: values [ins->dreg] = LLVMBuildFSub (builder, lhs, rhs, dname); break; case OP_FMUL: values [ins->dreg] = LLVMBuildFMul (builder, lhs, rhs, dname); break; default: g_assert_not_reached (); } break; case OP_RADD: case OP_RSUB: case OP_RMUL: case OP_RDIV: { lhs = convert (ctx, lhs, LLVMFloatType ()); rhs = convert (ctx, rhs, LLVMFloatType ()); switch (ins->opcode) { case OP_RADD: values [ins->dreg] = LLVMBuildFAdd (builder, lhs, rhs, dname); break; case OP_RSUB: values [ins->dreg] = LLVMBuildFSub (builder, lhs, rhs, dname); break; case OP_RMUL: values [ins->dreg] = LLVMBuildFMul (builder, lhs, rhs, dname); break; case OP_RDIV: values [ins->dreg] = LLVMBuildFDiv (builder, lhs, rhs, dname); break; default: g_assert_not_reached (); break; } 
break; } case OP_IADD_IMM: case OP_ISUB_IMM: case OP_IMUL_IMM: case OP_IREM_IMM: case OP_IREM_UN_IMM: case OP_IDIV_IMM: case OP_IDIV_UN_IMM: case OP_IAND_IMM: case OP_IOR_IMM: case OP_IXOR_IMM: case OP_ISHL_IMM: case OP_ISHR_IMM: case OP_ISHR_UN_IMM: case OP_LADD_IMM: case OP_LSUB_IMM: case OP_LMUL_IMM: case OP_LREM_IMM: case OP_LAND_IMM: case OP_LOR_IMM: case OP_LXOR_IMM: case OP_LSHL_IMM: case OP_LSHR_IMM: case OP_LSHR_UN_IMM: case OP_ADD_IMM: case OP_AND_IMM: case OP_MUL_IMM: case OP_SHL_IMM: case OP_SHR_IMM: case OP_SHR_UN_IMM: { LLVMValueRef imm; if (spec [MONO_INST_SRC1] == 'l') { imm = LLVMConstInt (LLVMInt64Type (), GET_LONG_IMM (ins), FALSE); } else { imm = LLVMConstInt (LLVMInt32Type (), ins->inst_imm, FALSE); } emit_div_check (ctx, builder, bb, ins, lhs, imm); if (!ctx_ok (ctx)) break; builder = ctx->builder; #if TARGET_SIZEOF_VOID_P == 4 if (ins->opcode == OP_LSHL_IMM || ins->opcode == OP_LSHR_IMM || ins->opcode == OP_LSHR_UN_IMM) imm = LLVMConstInt (LLVMInt32Type (), ins->inst_imm, FALSE); #endif if (LLVMGetTypeKind (LLVMTypeOf (lhs)) == LLVMPointerTypeKind) lhs = convert (ctx, lhs, IntPtrType ()); imm = convert (ctx, imm, LLVMTypeOf (lhs)); switch (ins->opcode) { case OP_IADD_IMM: case OP_LADD_IMM: case OP_ADD_IMM: values [ins->dreg] = LLVMBuildAdd (builder, lhs, imm, dname); break; case OP_ISUB_IMM: case OP_LSUB_IMM: values [ins->dreg] = LLVMBuildSub (builder, lhs, imm, dname); break; case OP_IMUL_IMM: case OP_MUL_IMM: case OP_LMUL_IMM: values [ins->dreg] = LLVMBuildMul (builder, lhs, imm, dname); break; case OP_IDIV_IMM: case OP_LDIV_IMM: values [ins->dreg] = LLVMBuildSDiv (builder, lhs, imm, dname); break; case OP_IDIV_UN_IMM: case OP_LDIV_UN_IMM: values [ins->dreg] = LLVMBuildUDiv (builder, lhs, imm, dname); break; case OP_IREM_IMM: case OP_LREM_IMM: values [ins->dreg] = LLVMBuildSRem (builder, lhs, imm, dname); break; case OP_IREM_UN_IMM: values [ins->dreg] = LLVMBuildURem (builder, lhs, imm, dname); break; case OP_IAND_IMM: case OP_LAND_IMM: case OP_AND_IMM: values [ins->dreg] = LLVMBuildAnd (builder, lhs, imm, dname); break; case OP_IOR_IMM: case OP_LOR_IMM: values [ins->dreg] = LLVMBuildOr (builder, lhs, imm, dname); break; case OP_IXOR_IMM: case OP_LXOR_IMM: values [ins->dreg] = LLVMBuildXor (builder, lhs, imm, dname); break; case OP_ISHL_IMM: case OP_LSHL_IMM: values [ins->dreg] = LLVMBuildShl (builder, lhs, imm, dname); break; case OP_SHL_IMM: if (TARGET_SIZEOF_VOID_P == 8) { /* The IL is not regular */ lhs = convert (ctx, lhs, LLVMInt64Type ()); imm = convert (ctx, imm, LLVMInt64Type ()); } values [ins->dreg] = LLVMBuildShl (builder, lhs, imm, dname); break; case OP_ISHR_IMM: case OP_LSHR_IMM: case OP_SHR_IMM: values [ins->dreg] = LLVMBuildAShr (builder, lhs, imm, dname); break; case OP_ISHR_UN_IMM: /* This is used to implement conv.u4, so the lhs could be an i8 */ lhs = convert (ctx, lhs, LLVMInt32Type ()); imm = convert (ctx, imm, LLVMInt32Type ()); values [ins->dreg] = LLVMBuildLShr (builder, lhs, imm, dname); break; case OP_LSHR_UN_IMM: case OP_SHR_UN_IMM: values [ins->dreg] = LLVMBuildLShr (builder, lhs, imm, dname); break; default: g_assert_not_reached (); } break; } case OP_INEG: values [ins->dreg] = LLVMBuildSub (builder, LLVMConstInt (LLVMInt32Type (), 0, FALSE), convert (ctx, lhs, LLVMInt32Type ()), dname); break; case OP_LNEG: if (LLVMTypeOf (lhs) != LLVMInt64Type ()) lhs = convert (ctx, lhs, LLVMInt64Type ()); values [ins->dreg] = LLVMBuildSub (builder, LLVMConstInt (LLVMInt64Type (), 0, FALSE), lhs, dname); break; case OP_FNEG: lhs = convert (ctx, 
lhs, LLVMDoubleType ()); values [ins->dreg] = LLVMBuildFNeg (builder, lhs, dname); break; case OP_RNEG: lhs = convert (ctx, lhs, LLVMFloatType ()); values [ins->dreg] = LLVMBuildFNeg (builder, lhs, dname); break; case OP_INOT: { guint32 v = 0xffffffff; values [ins->dreg] = LLVMBuildXor (builder, LLVMConstInt (LLVMInt32Type (), v, FALSE), convert (ctx, lhs, LLVMInt32Type ()), dname); break; } case OP_LNOT: { if (LLVMTypeOf (lhs) != LLVMInt64Type ()) lhs = convert (ctx, lhs, LLVMInt64Type ()); guint64 v = 0xffffffffffffffffLL; values [ins->dreg] = LLVMBuildXor (builder, LLVMConstInt (LLVMInt64Type (), v, FALSE), lhs, dname); break; } #if defined(TARGET_X86) || defined(TARGET_AMD64) case OP_X86_LEA: { LLVMValueRef v1, v2; rhs = LLVMBuildSExt (builder, convert (ctx, rhs, LLVMInt32Type ()), LLVMInt64Type (), ""); v1 = LLVMBuildMul (builder, convert (ctx, rhs, IntPtrType ()), LLVMConstInt (IntPtrType (), ((unsigned long long)1 << ins->backend.shift_amount), FALSE), ""); v2 = LLVMBuildAdd (builder, convert (ctx, lhs, IntPtrType ()), v1, ""); values [ins->dreg] = LLVMBuildAdd (builder, v2, LLVMConstInt (IntPtrType (), ins->inst_imm, FALSE), dname); break; } case OP_X86_BSF32: case OP_X86_BSF64: { LLVMValueRef args [] = { lhs, LLVMConstInt (LLVMInt1Type (), 1, TRUE), }; int op = ins->opcode == OP_X86_BSF32 ? INTRINS_CTTZ_I32 : INTRINS_CTTZ_I64; values [ins->dreg] = call_intrins (ctx, op, args, dname); break; } case OP_X86_BSR32: case OP_X86_BSR64: { LLVMValueRef args [] = { lhs, LLVMConstInt (LLVMInt1Type (), 1, TRUE), }; int op = ins->opcode == OP_X86_BSR32 ? INTRINS_CTLZ_I32 : INTRINS_CTLZ_I64; LLVMValueRef width = ins->opcode == OP_X86_BSR32 ? const_int32 (31) : const_int64 (63); LLVMValueRef tz = call_intrins (ctx, op, args, ""); values [ins->dreg] = LLVMBuildXor (builder, tz, width, dname); break; } #endif case OP_ICONV_TO_I1: case OP_ICONV_TO_I2: case OP_ICONV_TO_I4: case OP_ICONV_TO_U1: case OP_ICONV_TO_U2: case OP_ICONV_TO_U4: case OP_LCONV_TO_I1: case OP_LCONV_TO_I2: case OP_LCONV_TO_U1: case OP_LCONV_TO_U2: case OP_LCONV_TO_U4: { gboolean sign; sign = (ins->opcode == OP_ICONV_TO_I1) || (ins->opcode == OP_ICONV_TO_I2) || (ins->opcode == OP_ICONV_TO_I4) || (ins->opcode == OP_LCONV_TO_I1) || (ins->opcode == OP_LCONV_TO_I2); /* Have to do two casts since our vregs have type int */ v = LLVMBuildTrunc (builder, lhs, op_to_llvm_type (ins->opcode), ""); if (sign) values [ins->dreg] = LLVMBuildSExt (builder, v, LLVMInt32Type (), dname); else values [ins->dreg] = LLVMBuildZExt (builder, v, LLVMInt32Type (), dname); break; } case OP_ICONV_TO_I8: values [ins->dreg] = LLVMBuildSExt (builder, lhs, LLVMInt64Type (), dname); break; case OP_ICONV_TO_U8: values [ins->dreg] = LLVMBuildZExt (builder, lhs, LLVMInt64Type (), dname); break; case OP_FCONV_TO_I4: case OP_RCONV_TO_I4: values [ins->dreg] = LLVMBuildFPToSI (builder, lhs, LLVMInt32Type (), dname); break; case OP_FCONV_TO_I1: case OP_RCONV_TO_I1: values [ins->dreg] = LLVMBuildSExt (builder, LLVMBuildFPToSI (builder, lhs, LLVMInt8Type (), dname), LLVMInt32Type (), ""); break; case OP_FCONV_TO_U1: case OP_RCONV_TO_U1: values [ins->dreg] = LLVMBuildZExt (builder, LLVMBuildTrunc (builder, LLVMBuildFPToUI (builder, lhs, IntPtrType (), dname), LLVMInt8Type (), ""), LLVMInt32Type (), ""); break; case OP_FCONV_TO_I2: case OP_RCONV_TO_I2: values [ins->dreg] = LLVMBuildSExt (builder, LLVMBuildFPToSI (builder, lhs, LLVMInt16Type (), dname), LLVMInt32Type (), ""); break; case OP_FCONV_TO_U2: case OP_RCONV_TO_U2: values [ins->dreg] = LLVMBuildZExt (builder, 
LLVMBuildFPToUI (builder, lhs, LLVMInt16Type (), dname), LLVMInt32Type (), ""); break; case OP_FCONV_TO_U4: case OP_RCONV_TO_U4: values [ins->dreg] = LLVMBuildFPToUI (builder, lhs, LLVMInt32Type (), dname); break; case OP_FCONV_TO_U8: case OP_RCONV_TO_U8: values [ins->dreg] = LLVMBuildFPToUI (builder, lhs, LLVMInt64Type (), dname); break; case OP_FCONV_TO_I8: case OP_RCONV_TO_I8: values [ins->dreg] = LLVMBuildFPToSI (builder, lhs, LLVMInt64Type (), dname); break; case OP_ICONV_TO_R8: case OP_LCONV_TO_R8: values [ins->dreg] = LLVMBuildSIToFP (builder, lhs, LLVMDoubleType (), dname); break; case OP_ICONV_TO_R_UN: case OP_LCONV_TO_R_UN: values [ins->dreg] = LLVMBuildUIToFP (builder, lhs, LLVMDoubleType (), dname); break; #if TARGET_SIZEOF_VOID_P == 4 case OP_LCONV_TO_U: #endif case OP_LCONV_TO_I4: values [ins->dreg] = LLVMBuildTrunc (builder, lhs, LLVMInt32Type (), dname); break; case OP_ICONV_TO_R4: case OP_LCONV_TO_R4: v = LLVMBuildSIToFP (builder, lhs, LLVMFloatType (), ""); if (cfg->r4fp) values [ins->dreg] = v; else values [ins->dreg] = LLVMBuildFPExt (builder, v, LLVMDoubleType (), dname); break; case OP_FCONV_TO_R4: v = LLVMBuildFPTrunc (builder, lhs, LLVMFloatType (), ""); if (cfg->r4fp) values [ins->dreg] = v; else values [ins->dreg] = LLVMBuildFPExt (builder, v, LLVMDoubleType (), dname); break; case OP_RCONV_TO_R8: values [ins->dreg] = LLVMBuildFPExt (builder, lhs, LLVMDoubleType (), dname); break; case OP_RCONV_TO_R4: values [ins->dreg] = lhs; break; case OP_SEXT_I4: values [ins->dreg] = LLVMBuildSExt (builder, convert (ctx, lhs, LLVMInt32Type ()), LLVMInt64Type (), dname); break; case OP_ZEXT_I4: values [ins->dreg] = LLVMBuildZExt (builder, convert (ctx, lhs, LLVMInt32Type ()), LLVMInt64Type (), dname); break; case OP_TRUNC_I4: values [ins->dreg] = LLVMBuildTrunc (builder, lhs, LLVMInt32Type (), dname); break; case OP_LOCALLOC_IMM: { LLVMValueRef v; guint32 size = ins->inst_imm; size = (size + (MONO_ARCH_FRAME_ALIGNMENT - 1)) & ~ (MONO_ARCH_FRAME_ALIGNMENT - 1); v = mono_llvm_build_alloca (builder, LLVMInt8Type (), LLVMConstInt (LLVMInt32Type (), size, FALSE), MONO_ARCH_FRAME_ALIGNMENT, ""); if (ins->flags & MONO_INST_INIT) emit_memset (ctx, builder, v, const_int32 (size), MONO_ARCH_FRAME_ALIGNMENT); values [ins->dreg] = v; break; } case OP_LOCALLOC: { LLVMValueRef v, size; size = LLVMBuildAnd (builder, LLVMBuildAdd (builder, convert (ctx, lhs, LLVMInt32Type ()), LLVMConstInt (LLVMInt32Type (), MONO_ARCH_FRAME_ALIGNMENT - 1, FALSE), ""), LLVMConstInt (LLVMInt32Type (), ~ (MONO_ARCH_FRAME_ALIGNMENT - 1), FALSE), ""); v = mono_llvm_build_alloca (builder, LLVMInt8Type (), size, MONO_ARCH_FRAME_ALIGNMENT, ""); if (ins->flags & MONO_INST_INIT) emit_memset (ctx, builder, v, size, MONO_ARCH_FRAME_ALIGNMENT); values [ins->dreg] = v; break; } case OP_LOADI1_MEMBASE: case OP_LOADU1_MEMBASE: case OP_LOADI2_MEMBASE: case OP_LOADU2_MEMBASE: case OP_LOADI4_MEMBASE: case OP_LOADU4_MEMBASE: case OP_LOADI8_MEMBASE: case OP_LOADR4_MEMBASE: case OP_LOADR8_MEMBASE: case OP_LOAD_MEMBASE: case OP_LOADI8_MEM: case OP_LOADU1_MEM: case OP_LOADU2_MEM: case OP_LOADI4_MEM: case OP_LOADU4_MEM: case OP_LOAD_MEM: { int size = 8; LLVMValueRef base, index, addr; LLVMTypeRef t; gboolean sext = FALSE, zext = FALSE; gboolean is_faulting = (ins->flags & MONO_INST_FAULT) != 0; gboolean is_volatile = (ins->flags & MONO_INST_VOLATILE) != 0; gboolean is_unaligned = (ins->flags & MONO_INST_UNALIGNED) != 0; t = load_store_to_llvm_type (ins->opcode, &size, &sext, &zext); if (sext || zext) dname = (char*)""; if 
((ins->opcode == OP_LOADI8_MEM) || (ins->opcode == OP_LOAD_MEM) || (ins->opcode == OP_LOADI4_MEM) || (ins->opcode == OP_LOADU4_MEM) || (ins->opcode == OP_LOADU1_MEM) || (ins->opcode == OP_LOADU2_MEM)) { addr = LLVMConstInt (IntPtrType (), ins->inst_imm, FALSE); base = addr; } else { /* _MEMBASE */ base = lhs; if (ins->inst_offset == 0) { LLVMValueRef gep_base, gep_offset; if (mono_llvm_can_be_gep (base, &gep_base, &gep_offset)) { addr = LLVMBuildGEP (builder, convert (ctx, gep_base, LLVMPointerType (LLVMInt8Type (), 0)), &gep_offset, 1, ""); } else { addr = base; } } else if (ins->inst_offset % size != 0) { /* Unaligned load */ index = LLVMConstInt (LLVMInt32Type (), ins->inst_offset, FALSE); addr = LLVMBuildGEP (builder, convert (ctx, base, LLVMPointerType (LLVMInt8Type (), 0)), &index, 1, ""); } else { index = LLVMConstInt (LLVMInt32Type (), ins->inst_offset / size, FALSE); addr = LLVMBuildGEP (builder, convert (ctx, base, LLVMPointerType (t, 0)), &index, 1, ""); } } addr = convert (ctx, addr, LLVMPointerType (t, 0)); if (is_unaligned) values [ins->dreg] = mono_llvm_build_aligned_load (builder, addr, dname, is_volatile, 1); else values [ins->dreg] = emit_load (ctx, bb, &builder, size, addr, base, dname, is_faulting, is_volatile, LLVM_BARRIER_NONE); if (!(is_faulting || is_volatile) && (ins->flags & MONO_INST_INVARIANT_LOAD)) { /* * These will signal LLVM that these loads do not alias any stores, and * they can't fail, allowing them to be hoisted out of loops. */ set_invariant_load_flag (values [ins->dreg]); } if (sext) values [ins->dreg] = LLVMBuildSExt (builder, values [ins->dreg], LLVMInt32Type (), dname); else if (zext) values [ins->dreg] = LLVMBuildZExt (builder, values [ins->dreg], LLVMInt32Type (), dname); else if (!cfg->r4fp && ins->opcode == OP_LOADR4_MEMBASE) values [ins->dreg] = LLVMBuildFPExt (builder, values [ins->dreg], LLVMDoubleType (), dname); break; } case OP_STOREI1_MEMBASE_REG: case OP_STOREI2_MEMBASE_REG: case OP_STOREI4_MEMBASE_REG: case OP_STOREI8_MEMBASE_REG: case OP_STORER4_MEMBASE_REG: case OP_STORER8_MEMBASE_REG: case OP_STORE_MEMBASE_REG: { int size = 8; LLVMValueRef index, addr, base; LLVMTypeRef t; gboolean sext = FALSE, zext = FALSE; gboolean is_faulting = (ins->flags & MONO_INST_FAULT) != 0; gboolean is_volatile = (ins->flags & MONO_INST_VOLATILE) != 0; gboolean is_unaligned = (ins->flags & MONO_INST_UNALIGNED) != 0; if (!values [ins->inst_destbasereg]) { set_failure (ctx, "inst_destbasereg"); break; } t = load_store_to_llvm_type (ins->opcode, &size, &sext, &zext); base = values [ins->inst_destbasereg]; LLVMValueRef gep_base, gep_offset; if (ins->inst_offset == 0 && mono_llvm_can_be_gep (base, &gep_base, &gep_offset)) { addr = LLVMBuildGEP (builder, convert (ctx, gep_base, LLVMPointerType (LLVMInt8Type (), 0)), &gep_offset, 1, ""); } else if (ins->inst_offset % size != 0) { /* Unaligned store */ index = LLVMConstInt (LLVMInt32Type (), ins->inst_offset, FALSE); addr = LLVMBuildGEP (builder, convert (ctx, base, LLVMPointerType (LLVMInt8Type (), 0)), &index, 1, ""); } else { index = LLVMConstInt (LLVMInt32Type (), ins->inst_offset / size, FALSE); addr = LLVMBuildGEP (builder, convert (ctx, base, LLVMPointerType (t, 0)), &index, 1, ""); } if (is_volatile && LLVMGetInstructionOpcode (base) == LLVMAlloca && !(ins->flags & MONO_INST_VOLATILE)) /* Storing to an alloca cannot fail */ is_volatile = FALSE; LLVMValueRef srcval = convert (ctx, values [ins->sreg1], t); LLVMValueRef ptrdst = convert (ctx, addr, LLVMPointerType (t, 0)); if (is_unaligned) 
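/* an alignment of 1 tells LLVM to assume no natural alignment for this store */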
mono_llvm_build_aligned_store (builder, srcval, ptrdst, is_volatile, 1); else emit_store (ctx, bb, &builder, size, srcval, ptrdst, base, is_faulting, is_volatile); break; } case OP_STOREI1_MEMBASE_IMM: case OP_STOREI2_MEMBASE_IMM: case OP_STOREI4_MEMBASE_IMM: case OP_STOREI8_MEMBASE_IMM: case OP_STORE_MEMBASE_IMM: { int size = 8; LLVMValueRef index, addr, base; LLVMTypeRef t; gboolean sext = FALSE, zext = FALSE; gboolean is_faulting = (ins->flags & MONO_INST_FAULT) != 0; gboolean is_volatile = (ins->flags & MONO_INST_VOLATILE) != 0; gboolean is_unaligned = (ins->flags & MONO_INST_UNALIGNED) != 0; t = load_store_to_llvm_type (ins->opcode, &size, &sext, &zext); base = values [ins->inst_destbasereg]; LLVMValueRef gep_base, gep_offset; if (ins->inst_offset == 0 && mono_llvm_can_be_gep (base, &gep_base, &gep_offset)) { addr = LLVMBuildGEP (builder, convert (ctx, gep_base, LLVMPointerType (LLVMInt8Type (), 0)), &gep_offset, 1, ""); } else if (ins->inst_offset % size != 0) { /* Unaligned store */ index = LLVMConstInt (LLVMInt32Type (), ins->inst_offset, FALSE); addr = LLVMBuildGEP (builder, convert (ctx, base, LLVMPointerType (LLVMInt8Type (), 0)), &index, 1, ""); } else { index = LLVMConstInt (LLVMInt32Type (), ins->inst_offset / size, FALSE); addr = LLVMBuildGEP (builder, convert (ctx, base, LLVMPointerType (t, 0)), &index, 1, ""); } LLVMValueRef srcval = convert (ctx, LLVMConstInt (IntPtrType (), ins->inst_imm, FALSE), t); LLVMValueRef ptrdst = convert (ctx, addr, LLVMPointerType (t, 0)); if (is_unaligned) mono_llvm_build_aligned_store (builder, srcval, ptrdst, is_volatile, 1); else emit_store (ctx, bb, &builder, size, srcval, ptrdst, base, is_faulting, is_volatile); break; } case OP_CHECK_THIS: emit_load (ctx, bb, &builder, TARGET_SIZEOF_VOID_P, convert (ctx, lhs, LLVMPointerType (IntPtrType (), 0)), lhs, "", TRUE, FALSE, LLVM_BARRIER_NONE); break; case OP_OUTARG_VTRETADDR: break; case OP_VOIDCALL: case OP_CALL: case OP_LCALL: case OP_FCALL: case OP_RCALL: case OP_VCALL: case OP_VOIDCALL_MEMBASE: case OP_CALL_MEMBASE: case OP_LCALL_MEMBASE: case OP_FCALL_MEMBASE: case OP_RCALL_MEMBASE: case OP_VCALL_MEMBASE: case OP_VOIDCALL_REG: case OP_CALL_REG: case OP_LCALL_REG: case OP_FCALL_REG: case OP_RCALL_REG: case OP_VCALL_REG: { process_call (ctx, bb, &builder, ins); break; } case OP_AOTCONST: { MonoJumpInfoType ji_type = ins->inst_c1; gpointer ji_data = ins->inst_p0; if (ji_type == MONO_PATCH_INFO_ICALL_ADDR) { char *symbol = mono_aot_get_direct_call_symbol (MONO_PATCH_INFO_ICALL_ADDR_CALL, ji_data); if (symbol) { /* * Avoid emitting a got entry for these since the method is directly called, and it might not be * resolvable at runtime using dlsym (). 
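 * (Consequence, added: dreg is set to a dummy 0 constant below; the
 * icall is reached through the direct-call symbol instead of through
 * this value.)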
*/ g_free (symbol); values [ins->dreg] = LLVMConstInt (IntPtrType (), 0, FALSE); break; } } values [ins->dreg] = get_aotconst (ctx, ji_type, ji_data, LLVMPointerType (IntPtrType (), 0)); break; } case OP_MEMMOVE: { int argn = 0; LLVMValueRef args [5]; args [argn++] = convert (ctx, values [ins->sreg1], LLVMPointerType (LLVMInt8Type (), 0)); args [argn++] = convert (ctx, values [ins->sreg2], LLVMPointerType (LLVMInt8Type (), 0)); args [argn++] = convert (ctx, values [ins->sreg3], LLVMInt64Type ()); args [argn++] = LLVMConstInt (LLVMInt1Type (), 0, FALSE); // is_volatile call_intrins (ctx, INTRINS_MEMMOVE, args, ""); break; } case OP_NOT_REACHED: LLVMBuildUnreachable (builder); has_terminator = TRUE; g_assert (bb->block_num < cfg->max_block_num); ctx->unreachable [bb->block_num] = TRUE; /* Might have instructions after this */ while (ins->next) { MonoInst *next = ins->next; /* * FIXME: If later code uses the regs defined by these instructions, * compilation will fail. */ const char *spec = INS_INFO (next->opcode); if (spec [MONO_INST_DEST] == 'i' && !MONO_IS_STORE_MEMBASE (next)) ctx->values [next->dreg] = LLVMConstNull (LLVMInt32Type ()); MONO_DELETE_INS (bb, next); } break; case OP_LDADDR: { MonoInst *var = ins->inst_i0; MonoClass *klass = var->klass; if (var->opcode == OP_VTARG_ADDR && !MONO_CLASS_IS_SIMD(cfg, klass)) { /* The variable contains the vtype address */ values [ins->dreg] = values [var->dreg]; } else if (var->opcode == OP_GSHAREDVT_LOCAL) { values [ins->dreg] = emit_gsharedvt_ldaddr (ctx, var->dreg); } else { values [ins->dreg] = addresses [var->dreg]; } break; } case OP_SIN: { LLVMValueRef args [1]; args [0] = convert (ctx, lhs, LLVMDoubleType ()); values [ins->dreg] = call_intrins (ctx, INTRINS_SIN, args, dname); break; } case OP_SINF: { LLVMValueRef args [1]; args [0] = convert (ctx, lhs, LLVMFloatType ()); values [ins->dreg] = call_intrins (ctx, INTRINS_SINF, args, dname); break; } case OP_EXP: { LLVMValueRef args [1]; args [0] = convert (ctx, lhs, LLVMDoubleType ()); values [ins->dreg] = call_intrins (ctx, INTRINS_EXP, args, dname); break; } case OP_EXPF: { LLVMValueRef args [1]; args [0] = convert (ctx, lhs, LLVMFloatType ()); values [ins->dreg] = call_intrins (ctx, INTRINS_EXPF, args, dname); break; } case OP_LOG2: { LLVMValueRef args [1]; args [0] = convert (ctx, lhs, LLVMDoubleType ()); values [ins->dreg] = call_intrins (ctx, INTRINS_LOG2, args, dname); break; } case OP_LOG2F: { LLVMValueRef args [1]; args [0] = convert (ctx, lhs, LLVMFloatType ()); values [ins->dreg] = call_intrins (ctx, INTRINS_LOG2F, args, dname); break; } case OP_LOG10: { LLVMValueRef args [1]; args [0] = convert (ctx, lhs, LLVMDoubleType ()); values [ins->dreg] = call_intrins (ctx, INTRINS_LOG10, args, dname); break; } case OP_LOG10F: { LLVMValueRef args [1]; args [0] = convert (ctx, lhs, LLVMFloatType ()); values [ins->dreg] = call_intrins (ctx, INTRINS_LOG10F, args, dname); break; } case OP_LOG: { LLVMValueRef args [1]; args [0] = convert (ctx, lhs, LLVMDoubleType ()); values [ins->dreg] = call_intrins (ctx, INTRINS_LOG, args, dname); break; } case OP_TRUNC: { LLVMValueRef args [1]; args [0] = convert (ctx, lhs, LLVMDoubleType ()); values [ins->dreg] = call_intrins (ctx, INTRINS_TRUNC, args, dname); break; } case OP_TRUNCF: { LLVMValueRef args [1]; args [0] = convert (ctx, lhs, LLVMFloatType ()); values [ins->dreg] = call_intrins (ctx, INTRINS_TRUNCF, args, dname); break; } case OP_COS: { LLVMValueRef args [1]; args [0] = convert (ctx, lhs, LLVMDoubleType ()); values [ins->dreg] = call_intrins 
	case OP_COS: {
		LLVMValueRef args [1];
		args [0] = convert (ctx, lhs, LLVMDoubleType ());
		values [ins->dreg] = call_intrins (ctx, INTRINS_COS, args, dname);
		break;
	}
	case OP_COSF: {
		LLVMValueRef args [1];
		args [0] = convert (ctx, lhs, LLVMFloatType ());
		values [ins->dreg] = call_intrins (ctx, INTRINS_COSF, args, dname);
		break;
	}
	case OP_SQRT: {
		LLVMValueRef args [1];
		args [0] = convert (ctx, lhs, LLVMDoubleType ());
		values [ins->dreg] = call_intrins (ctx, INTRINS_SQRT, args, dname);
		break;
	}
	case OP_SQRTF: {
		LLVMValueRef args [1];
		args [0] = convert (ctx, lhs, LLVMFloatType ());
		values [ins->dreg] = call_intrins (ctx, INTRINS_SQRTF, args, dname);
		break;
	}
	case OP_FLOOR: {
		LLVMValueRef args [1];
		args [0] = convert (ctx, lhs, LLVMDoubleType ());
		values [ins->dreg] = call_intrins (ctx, INTRINS_FLOOR, args, dname);
		break;
	}
	case OP_FLOORF: {
		LLVMValueRef args [1];
		args [0] = convert (ctx, lhs, LLVMFloatType ());
		values [ins->dreg] = call_intrins (ctx, INTRINS_FLOORF, args, dname);
		break;
	}
	case OP_CEIL: {
		LLVMValueRef args [1];
		args [0] = convert (ctx, lhs, LLVMDoubleType ());
		values [ins->dreg] = call_intrins (ctx, INTRINS_CEIL, args, dname);
		break;
	}
	case OP_CEILF: {
		LLVMValueRef args [1];
		args [0] = convert (ctx, lhs, LLVMFloatType ());
		values [ins->dreg] = call_intrins (ctx, INTRINS_CEILF, args, dname);
		break;
	}
	case OP_FMA: {
		LLVMValueRef args [3];
		args [0] = convert (ctx, values [ins->sreg1], LLVMDoubleType ());
		args [1] = convert (ctx, values [ins->sreg2], LLVMDoubleType ());
		args [2] = convert (ctx, values [ins->sreg3], LLVMDoubleType ());
		values [ins->dreg] = call_intrins (ctx, INTRINS_FMA, args, dname);
		break;
	}
	case OP_FMAF: {
		LLVMValueRef args [3];
		args [0] = convert (ctx, values [ins->sreg1], LLVMFloatType ());
		args [1] = convert (ctx, values [ins->sreg2], LLVMFloatType ());
		args [2] = convert (ctx, values [ins->sreg3], LLVMFloatType ());
		values [ins->dreg] = call_intrins (ctx, INTRINS_FMAF, args, dname);
		break;
	}
	case OP_ABS: {
		LLVMValueRef args [1];
		args [0] = convert (ctx, lhs, LLVMDoubleType ());
		values [ins->dreg] = call_intrins (ctx, INTRINS_FABS, args, dname);
		break;
	}
	case OP_ABSF: {
		LLVMValueRef args [1];
#ifdef TARGET_AMD64
		args [0] = convert (ctx, lhs, LLVMFloatType ());
		values [ins->dreg] = call_intrins (ctx, INTRINS_ABSF, args, dname);
#else
		/* llvm.fabs not supported on all platforms */
		args [0] = convert (ctx, lhs, LLVMDoubleType ());
		values [ins->dreg] = call_intrins (ctx, INTRINS_FABS, args, dname);
		values [ins->dreg] = convert (ctx, values [ins->dreg], LLVMFloatType ());
#endif
		break;
	}
	case OP_RPOW: {
		LLVMValueRef args [2];
		args [0] = convert (ctx, lhs, LLVMFloatType ());
		args [1] = convert (ctx, rhs, LLVMFloatType ());
		values [ins->dreg] = call_intrins (ctx, INTRINS_POWF, args, dname);
		break;
	}
	case OP_FPOW: {
		LLVMValueRef args [2];
		args [0] = convert (ctx, lhs, LLVMDoubleType ());
		args [1] = convert (ctx, rhs, LLVMDoubleType ());
		values [ins->dreg] = call_intrins (ctx, INTRINS_POW, args, dname);
		break;
	}
	case OP_FCOPYSIGN: {
		LLVMValueRef args [2];
		args [0] = convert (ctx, lhs, LLVMDoubleType ());
		args [1] = convert (ctx, rhs, LLVMDoubleType ());
		values [ins->dreg] = call_intrins (ctx, INTRINS_COPYSIGN, args, dname);
		break;
	}
	case OP_RCOPYSIGN: {
		LLVMValueRef args [2];
		args [0] = convert (ctx, lhs, LLVMFloatType ());
		args [1] = convert (ctx, rhs, LLVMFloatType ());
		values [ins->dreg] = call_intrins (ctx, INTRINS_COPYSIGNF, args, dname);
		break;
	}
	case OP_IMIN:
	case OP_LMIN:
	case OP_IMAX:
	case OP_LMAX:
	case OP_IMIN_UN:
	case OP_LMIN_UN:
	case OP_IMAX_UN:
	case OP_LMAX_UN:
	case OP_FMIN:
	case OP_FMAX:
	case OP_RMIN:
	case OP_RMAX: {
		LLVMValueRef v;
		lhs = convert (ctx, lhs, regtype_to_llvm_type (spec [MONO_INST_DEST]));
		rhs = convert (ctx, rhs, regtype_to_llvm_type (spec [MONO_INST_DEST]));
		switch (ins->opcode) {
		case OP_IMIN:
		case OP_LMIN:
			v = LLVMBuildICmp (builder, LLVMIntSLE, lhs, rhs, "");
			break;
		case OP_IMAX:
		case OP_LMAX:
			v = LLVMBuildICmp (builder, LLVMIntSGE, lhs, rhs, "");
			break;
		case OP_IMIN_UN:
		case OP_LMIN_UN:
			v = LLVMBuildICmp (builder, LLVMIntULE, lhs, rhs, "");
			break;
		case OP_IMAX_UN:
		case OP_LMAX_UN:
			v = LLVMBuildICmp (builder, LLVMIntUGE, lhs, rhs, "");
			break;
		case OP_FMAX:
		case OP_RMAX:
			v = LLVMBuildFCmp (builder, LLVMRealUGE, lhs, rhs, "");
			break;
		case OP_FMIN:
		case OP_RMIN:
			v = LLVMBuildFCmp (builder, LLVMRealULE, lhs, rhs, "");
			break;
		default:
			g_assert_not_reached ();
			break;
		}
		values [ins->dreg] = LLVMBuildSelect (builder, v, lhs, rhs, dname);
		break;
	}

/*
 * See the ARM64 comment in mono/utils/atomic.h for an explanation of why this
 * hack is necessary (for now).
 */
#ifdef TARGET_ARM64
#define ARM64_ATOMIC_FENCE_FIX mono_llvm_build_fence (builder, LLVM_BARRIER_SEQ)
#else
#define ARM64_ATOMIC_FENCE_FIX
#endif
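	/*
	 * The atomic opcodes below are lowered to LLVM atomicrmw/cmpxchg
	 * instructions and atomic loads/stores; on arm64 each of them is
	 * additionally bracketed with seq-cst fences via ARM64_ATOMIC_FENCE_FIX
	 * (see the comment above the macro).
	 */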
	case OP_ATOMIC_EXCHANGE_I4:
	case OP_ATOMIC_EXCHANGE_I8: {
		LLVMValueRef args [2];
		LLVMTypeRef t;

		if (ins->opcode == OP_ATOMIC_EXCHANGE_I4)
			t = LLVMInt32Type ();
		else
			t = LLVMInt64Type ();

		g_assert (ins->inst_offset == 0);

		args [0] = convert (ctx, lhs, LLVMPointerType (t, 0));
		args [1] = convert (ctx, rhs, t);

		ARM64_ATOMIC_FENCE_FIX;
		values [ins->dreg] = mono_llvm_build_atomic_rmw (builder, LLVM_ATOMICRMW_OP_XCHG, args [0], args [1]);
		ARM64_ATOMIC_FENCE_FIX;
		break;
	}
	case OP_ATOMIC_ADD_I4:
	case OP_ATOMIC_ADD_I8:
	case OP_ATOMIC_AND_I4:
	case OP_ATOMIC_AND_I8:
	case OP_ATOMIC_OR_I4:
	case OP_ATOMIC_OR_I8: {
		LLVMValueRef args [2];
		LLVMTypeRef t;

		if (ins->type == STACK_I4)
			t = LLVMInt32Type ();
		else
			t = LLVMInt64Type ();

		g_assert (ins->inst_offset == 0);

		args [0] = convert (ctx, lhs, LLVMPointerType (t, 0));
		args [1] = convert (ctx, rhs, t);
		ARM64_ATOMIC_FENCE_FIX;
		if (ins->opcode == OP_ATOMIC_ADD_I4 || ins->opcode == OP_ATOMIC_ADD_I8)
			// Interlocked.Add returns new value (that's why we emit additional Add here)
			// see https://github.com/dotnet/runtime/pull/33102
			values [ins->dreg] = LLVMBuildAdd (builder, mono_llvm_build_atomic_rmw (builder, LLVM_ATOMICRMW_OP_ADD, args [0], args [1]), args [1], dname);
		else if (ins->opcode == OP_ATOMIC_AND_I4 || ins->opcode == OP_ATOMIC_AND_I8)
			values [ins->dreg] = mono_llvm_build_atomic_rmw (builder, LLVM_ATOMICRMW_OP_AND, args [0], args [1]);
		else if (ins->opcode == OP_ATOMIC_OR_I4 || ins->opcode == OP_ATOMIC_OR_I8)
			values [ins->dreg] = mono_llvm_build_atomic_rmw (builder, LLVM_ATOMICRMW_OP_OR, args [0], args [1]);
		else
			g_assert_not_reached ();
		ARM64_ATOMIC_FENCE_FIX;
		break;
	}
	case OP_ATOMIC_CAS_I4:
	case OP_ATOMIC_CAS_I8: {
		LLVMValueRef args [3], val;
		LLVMTypeRef t;

		if (ins->opcode == OP_ATOMIC_CAS_I4)
			t = LLVMInt32Type ();
		else
			t = LLVMInt64Type ();

		args [0] = convert (ctx, lhs, LLVMPointerType (t, 0));
		/* comparand */
		args [1] = convert (ctx, values [ins->sreg3], t);
		/* new value */
		args [2] = convert (ctx, values [ins->sreg2], t);
		ARM64_ATOMIC_FENCE_FIX;
		val = mono_llvm_build_cmpxchg (builder, args [0], args [1], args [2]);
		ARM64_ATOMIC_FENCE_FIX;
		/* cmpxchg returns a pair */
		values [ins->dreg] = LLVMBuildExtractValue (builder, val, 0, "");
		break;
	}
	case OP_MEMORY_BARRIER: {
		mono_llvm_build_fence (builder, (BarrierKind) ins->backend.memory_barrier_kind);
		break;
	}
	case OP_ATOMIC_LOAD_I1:
	case OP_ATOMIC_LOAD_I2:
	case OP_ATOMIC_LOAD_I4:
	case OP_ATOMIC_LOAD_I8:
	case OP_ATOMIC_LOAD_U1:
	case OP_ATOMIC_LOAD_U2:
	case OP_ATOMIC_LOAD_U4:
	case OP_ATOMIC_LOAD_U8:
	case OP_ATOMIC_LOAD_R4:
	case OP_ATOMIC_LOAD_R8: {
		int size;
		gboolean sext, zext;
		LLVMTypeRef t;
		gboolean is_faulting = (ins->flags & MONO_INST_FAULT) != 0;
		gboolean is_volatile = (ins->flags & MONO_INST_VOLATILE) != 0;
		BarrierKind barrier = (BarrierKind) ins->backend.memory_barrier_kind;
		LLVMValueRef index, addr;

		t = load_store_to_llvm_type (ins->opcode, &size, &sext, &zext);
		if (sext || zext)
			dname = (char *)"";

		if (ins->inst_offset != 0) {
			index = LLVMConstInt (LLVMInt32Type (), ins->inst_offset / size, FALSE);
			addr = LLVMBuildGEP (builder, convert (ctx, lhs, LLVMPointerType (t, 0)), &index, 1, "");
		} else {
			addr = lhs;
		}

		addr = convert (ctx, addr, LLVMPointerType (t, 0));

		ARM64_ATOMIC_FENCE_FIX;
		values [ins->dreg] = emit_load (ctx, bb, &builder, size, addr, lhs, dname, is_faulting, is_volatile, barrier);
		ARM64_ATOMIC_FENCE_FIX;

		if (sext)
			values [ins->dreg] = LLVMBuildSExt (builder, values [ins->dreg], LLVMInt32Type (), dname);
		else if (zext)
			values [ins->dreg] = LLVMBuildZExt (builder, values [ins->dreg], LLVMInt32Type (), dname);
		break;
	}
	case OP_ATOMIC_STORE_I1:
	case OP_ATOMIC_STORE_I2:
	case OP_ATOMIC_STORE_I4:
	case OP_ATOMIC_STORE_I8:
	case OP_ATOMIC_STORE_U1:
	case OP_ATOMIC_STORE_U2:
	case OP_ATOMIC_STORE_U4:
	case OP_ATOMIC_STORE_U8:
	case OP_ATOMIC_STORE_R4:
	case OP_ATOMIC_STORE_R8: {
		int size;
		gboolean sext, zext;
		LLVMTypeRef t;
		gboolean is_faulting = (ins->flags & MONO_INST_FAULT) != 0;
		gboolean is_volatile = (ins->flags & MONO_INST_VOLATILE) != 0;
		BarrierKind barrier = (BarrierKind) ins->backend.memory_barrier_kind;
		LLVMValueRef index, addr, value, base;

		if (!values [ins->inst_destbasereg]) {
			set_failure (ctx, "inst_destbasereg");
			break;
		}

		t = load_store_to_llvm_type (ins->opcode, &size, &sext, &zext);

		base = values [ins->inst_destbasereg];
		index = LLVMConstInt (LLVMInt32Type (), ins->inst_offset / size, FALSE);
		addr = LLVMBuildGEP (builder, convert (ctx, base, LLVMPointerType (t, 0)), &index, 1, "");
		value = convert (ctx, values [ins->sreg1], t);

		ARM64_ATOMIC_FENCE_FIX;
		emit_store_general (ctx, bb, &builder, size, value, addr, base, is_faulting, is_volatile, barrier);
		ARM64_ATOMIC_FENCE_FIX;
		break;
	}
	case OP_RELAXED_NOP: {
#if defined(TARGET_AMD64) || defined(TARGET_X86)
		call_intrins (ctx, INTRINS_SSE_PAUSE, NULL, "");
		break;
#else
		break;
#endif
	}
	case OP_TLS_GET: {
#if (defined(TARGET_AMD64) || defined(TARGET_X86)) && defined(__linux__)
#ifdef TARGET_AMD64
		// 257 == FS segment register
		LLVMTypeRef ptrtype = LLVMPointerType (IntPtrType (), 257);
#else
		// 256 == GS segment register
		LLVMTypeRef ptrtype = LLVMPointerType (IntPtrType (), 256);
#endif
		// FIXME: XEN
		values [ins->dreg] = LLVMBuildLoad (builder, LLVMBuildIntToPtr (builder, LLVMConstInt (IntPtrType (), ins->inst_offset, TRUE), ptrtype, ""), "");
#elif defined(TARGET_AMD64) && defined(TARGET_OSX)
		/* See mono_amd64_emit_tls_get () */
		int offset = mono_amd64_get_tls_gs_offset () + (ins->inst_offset * 8);

		// 256 == GS segment register
		LLVMTypeRef ptrtype = LLVMPointerType (IntPtrType (), 256);
		values [ins->dreg] = LLVMBuildLoad (builder, LLVMBuildIntToPtr (builder, LLVMConstInt (IntPtrType (), offset, TRUE), ptrtype, ""), "");
#else
		set_failure (ctx, "opcode tls-get");
		break;
#endif
		break;
	}
	case OP_GC_SAFE_POINT: {
		LLVMValueRef val, cmp, callee, call;
		LLVMBasicBlockRef poll_bb, cont_bb;
		LLVMValueRef args [2];
		static LLVMTypeRef sig;
		const char *icall_name = "mono_threads_state_poll";

		/*
		 * Create the cold wrapper around the icall, along with a managed method for it so
		 * unwinding works.
		 */
		if (!cfg->compile_aot && !ctx->module->gc_poll_cold_wrapper_compiled) {
			ERROR_DECL (error);
			/* Compiling a method here is a bit ugly, but it works */
			MonoMethod *wrapper = mono_marshal_get_llvm_func_wrapper (LLVM_FUNC_WRAPPER_GC_POLL);
			ctx->module->gc_poll_cold_wrapper_compiled = mono_jit_compile_method (wrapper, error);
			mono_error_assert_ok (error);
		}

		if (!sig)
			sig = LLVMFunctionType0 (LLVMVoidType (), FALSE);

		/*
		 * if (!*sreg1)
		 *   mono_threads_state_poll ();
		 */
		val = mono_llvm_build_load (builder, convert (ctx, lhs, LLVMPointerType (IntPtrType (), 0)), "", TRUE);
		cmp = LLVMBuildICmp (builder, LLVMIntEQ, val, LLVMConstNull (LLVMTypeOf (val)), "");

		poll_bb = gen_bb (ctx, "POLL_BB");
		cont_bb = gen_bb (ctx, "CONT_BB");

		args [0] = cmp;
		args [1] = LLVMConstInt (LLVMInt1Type (), 1, FALSE);
		cmp = call_intrins (ctx, INTRINS_EXPECT_I1, args, "");

		mono_llvm_build_weighted_branch (builder, cmp, cont_bb, poll_bb, 1000, 1);

		ctx->builder = builder = create_builder (ctx);
		LLVMPositionBuilderAtEnd (builder, poll_bb);

		if (ctx->cfg->compile_aot) {
			callee = get_callee (ctx, sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (MONO_JIT_ICALL_mono_threads_state_poll));
			call = LLVMBuildCall (builder, callee, NULL, 0, "");
		} else {
			callee = get_jit_callee (ctx, icall_name, sig, MONO_PATCH_INFO_ABS, ctx->module->gc_poll_cold_wrapper_compiled);
			call = LLVMBuildCall (builder, callee, NULL, 0, "");
			set_call_cold_cconv (call);
		}
		LLVMBuildBr (builder, cont_bb);

		ctx->builder = builder = create_builder (ctx);
		LLVMPositionBuilderAtEnd (builder, cont_bb);

		ctx->bblocks [bb->block_num].end_bblock = cont_bb;
		break;
	}

	/*
	 * Overflow opcodes.
	 */
	case OP_IADD_OVF:
	case OP_IADD_OVF_UN:
	case OP_ISUB_OVF:
	case OP_ISUB_OVF_UN:
	case OP_IMUL_OVF:
	case OP_IMUL_OVF_UN:
	case OP_LADD_OVF:
	case OP_LADD_OVF_UN:
	case OP_LSUB_OVF:
	case OP_LSUB_OVF_UN:
	case OP_LMUL_OVF:
	case OP_LMUL_OVF_UN: {
		LLVMValueRef args [2], val, ovf;
		IntrinsicId intrins;

		args [0] = convert (ctx, lhs, op_to_llvm_type (ins->opcode));
		args [1] = convert (ctx, rhs, op_to_llvm_type (ins->opcode));
		intrins = ovf_op_to_intrins (ins->opcode);
		val = call_intrins (ctx, intrins, args, "");
		values [ins->dreg] = LLVMBuildExtractValue (builder, val, 0, dname);
		ovf = LLVMBuildExtractValue (builder, val, 1, "");
		emit_cond_system_exception (ctx, bb, ins->inst_exc_name, ovf, FALSE);
		if (!ctx_ok (ctx))
			break;
		builder = ctx->builder;
		break;
	}

	/*
	 * Valuetypes.
	 *   We currently model them using arrays. Promotion to local vregs is
	 * disabled for them in mono_handle_global_vregs () in the LLVM case,
	 * so we always have an entry in cfg->varinfo for them.
	 * FIXME: Is this needed ?
	 */
	case OP_VZERO: {
		MonoClass *klass = ins->klass;

		if (!klass) {
			// FIXME:
			set_failure (ctx, "!klass");
			break;
		}
		if (!addresses [ins->dreg])
			addresses [ins->dreg] = build_named_alloca (ctx, m_class_get_byval_arg (klass), "vzero");
		LLVMValueRef ptr = LLVMBuildBitCast (builder, addresses [ins->dreg], LLVMPointerType (LLVMInt8Type (), 0), "");
		emit_memset (ctx, builder, ptr, const_int32 (mono_class_value_size (klass, NULL)), 0);
		break;
	}
	case OP_DUMMY_VZERO:
		break;
	case OP_STOREV_MEMBASE:
	case OP_LOADV_MEMBASE:
	case OP_VMOVE: {
		MonoClass *klass = ins->klass;
		LLVMValueRef src = NULL, dst, args [5];
		gboolean done = FALSE;
		gboolean is_volatile = FALSE;

		if (!klass) {
			// FIXME:
			set_failure (ctx, "!klass");
			break;
		}
		if (mini_is_gsharedvt_klass (klass)) {
			// FIXME:
			set_failure (ctx, "gsharedvt");
			break;
		}
		switch (ins->opcode) {
		case OP_STOREV_MEMBASE:
			if (cfg->gen_write_barriers && m_class_has_references (klass) && ins->inst_destbasereg != cfg->frame_reg &&
				LLVMGetInstructionOpcode (values [ins->inst_destbasereg]) != LLVMAlloca) {
				/* Decomposed earlier */
				g_assert_not_reached ();
				break;
			}
			if (!addresses [ins->sreg1]) {
				/* SIMD */
				g_assert (values [ins->sreg1]);
				dst = convert (ctx, LLVMBuildAdd (builder, convert (ctx, values [ins->inst_destbasereg], IntPtrType ()), LLVMConstInt (IntPtrType (), ins->inst_offset, FALSE), ""), LLVMPointerType (type_to_llvm_type (ctx, m_class_get_byval_arg (klass)), 0));
				LLVMBuildStore (builder, values [ins->sreg1], dst);
				done = TRUE;
			} else {
				src = LLVMBuildBitCast (builder, addresses [ins->sreg1], LLVMPointerType (LLVMInt8Type (), 0), "");
				dst = convert (ctx, LLVMBuildAdd (builder, convert (ctx, values [ins->inst_destbasereg], IntPtrType ()), LLVMConstInt (IntPtrType (), ins->inst_offset, FALSE), ""), LLVMPointerType (LLVMInt8Type (), 0));
			}
			break;
		case OP_LOADV_MEMBASE:
			if (!addresses [ins->dreg])
				addresses [ins->dreg] = build_alloca (ctx, m_class_get_byval_arg (klass));
			src = convert (ctx, LLVMBuildAdd (builder, convert (ctx, values [ins->inst_basereg], IntPtrType ()), LLVMConstInt (IntPtrType (), ins->inst_offset, FALSE), ""), LLVMPointerType (LLVMInt8Type (), 0));
			dst = LLVMBuildBitCast (builder, addresses [ins->dreg], LLVMPointerType (LLVMInt8Type (), 0), "");
			break;
		case OP_VMOVE:
			if (!addresses [ins->sreg1])
				addresses [ins->sreg1] = build_alloca (ctx, m_class_get_byval_arg (klass));
			if (!addresses [ins->dreg])
				addresses [ins->dreg] = build_alloca (ctx, m_class_get_byval_arg (klass));
			src = LLVMBuildBitCast (builder, addresses [ins->sreg1], LLVMPointerType (LLVMInt8Type (), 0), "");
			dst = LLVMBuildBitCast (builder, addresses [ins->dreg], LLVMPointerType (LLVMInt8Type (), 0), "");
			break;
		default:
			g_assert_not_reached ();
		}
		if (!ctx_ok (ctx))
			break;
		if (done)
			break;

#ifdef TARGET_WASM
		is_volatile = m_class_has_references (klass);
#endif

		int aindex = 0;
		args [aindex ++] = dst;
		args [aindex ++] = src;
		args [aindex ++] = LLVMConstInt (LLVMInt32Type (), mono_class_value_size (klass, NULL), FALSE);
		args [aindex ++] = LLVMConstInt (LLVMInt1Type (), is_volatile ? 1 : 0, FALSE);
		call_intrins (ctx, INTRINS_MEMCPY, args, "");
		break;
	}
	case OP_LLVM_OUTARG_VT: {
		LLVMArgInfo *ainfo = (LLVMArgInfo*)ins->inst_p0;
		MonoType *t = mini_get_underlying_type (ins->inst_vtype);

		if (ainfo->storage == LLVMArgGsharedvtVariable) {
			MonoInst *var = get_vreg_to_inst (cfg, ins->sreg1);

			if (var && var->opcode == OP_GSHAREDVT_LOCAL) {
				addresses [ins->dreg] = convert (ctx, emit_gsharedvt_ldaddr (ctx, var->dreg), LLVMPointerType (IntPtrType (), 0));
			} else {
				g_assert (addresses [ins->sreg1]);
				addresses [ins->dreg] = addresses [ins->sreg1];
			}
		} else if (ainfo->storage == LLVMArgGsharedvtFixed) {
			if (!addresses [ins->sreg1]) {
				addresses [ins->sreg1] = build_alloca (ctx, t);
				g_assert (values [ins->sreg1]);
			}
			LLVMBuildStore (builder, convert (ctx, values [ins->sreg1], LLVMGetElementType (LLVMTypeOf (addresses [ins->sreg1]))), addresses [ins->sreg1]);
			addresses [ins->dreg] = addresses [ins->sreg1];
		} else {
			if (!addresses [ins->sreg1]) {
				addresses [ins->sreg1] = build_named_alloca (ctx, t, "llvm_outarg_vt");
				g_assert (values [ins->sreg1]);
				LLVMBuildStore (builder, convert (ctx, values [ins->sreg1], type_to_llvm_type (ctx, t)), addresses [ins->sreg1]);
				addresses [ins->dreg] = addresses [ins->sreg1];
			} else if (ainfo->storage == LLVMArgVtypeAddr || values [ins->sreg1] == addresses [ins->sreg1]) {
				/* LLVMArgVtypeByRef/LLVMArgVtypeAddr, have to make a copy */
				addresses [ins->dreg] = build_alloca (ctx, t);
				LLVMValueRef v = LLVMBuildLoad (builder, addresses [ins->sreg1], "llvm_outarg_vt_copy");
				LLVMBuildStore (builder, convert (ctx, v, type_to_llvm_type (ctx, t)), addresses [ins->dreg]);
			} else {
				if (values [ins->sreg1]) {
					LLVMTypeRef src_t = LLVMTypeOf (values [ins->sreg1]);
					LLVMValueRef dst = convert (ctx, addresses [ins->sreg1], LLVMPointerType (src_t, 0));
					LLVMBuildStore (builder, values [ins->sreg1], dst);
				}
				addresses [ins->dreg] = addresses [ins->sreg1];
			}
		}
		break;
	}
	case OP_OBJC_GET_SELECTOR: {
		const char *name = (const char*)ins->inst_p0;
		LLVMValueRef var;

		if (!ctx->module->objc_selector_to_var) {
			ctx->module->objc_selector_to_var = g_hash_table_new_full (g_str_hash, g_str_equal, g_free, NULL);

			LLVMValueRef info_var = LLVMAddGlobal (ctx->lmodule, LLVMArrayType (LLVMInt8Type (), 8), "@OBJC_IMAGE_INFO");
			int32_t objc_imageinfo [] = { 0, 16 };
			LLVMSetInitializer (info_var, mono_llvm_create_constant_data_array ((uint8_t *) &objc_imageinfo, 8));
			LLVMSetLinkage (info_var, LLVMPrivateLinkage);
			LLVMSetExternallyInitialized (info_var, TRUE);
			LLVMSetSection (info_var, "__DATA, __objc_imageinfo,regular,no_dead_strip");
			LLVMSetAlignment (info_var, sizeof (target_mgreg_t));
			mark_as_used (ctx->module, info_var);
		}
		var = (LLVMValueRef)g_hash_table_lookup (ctx->module->objc_selector_to_var, name);
		if (!var) {
			LLVMValueRef indexes [16];

			LLVMValueRef name_var = LLVMAddGlobal (ctx->lmodule, LLVMArrayType (LLVMInt8Type (), strlen (name) + 1), "@OBJC_METH_VAR_NAME_");
			LLVMSetInitializer (name_var, mono_llvm_create_constant_data_array ((const uint8_t*)name, strlen (name) + 1));
			LLVMSetLinkage (name_var, LLVMPrivateLinkage);
			LLVMSetSection (name_var, "__TEXT,__objc_methname,cstring_literals");
			mark_as_used (ctx->module, name_var);

			LLVMValueRef ref_var = LLVMAddGlobal (ctx->lmodule, LLVMPointerType (LLVMInt8Type (), 0), "@OBJC_SELECTOR_REFERENCES_");
			indexes [0] = LLVMConstInt (LLVMInt32Type (), 0, 0);
			indexes [1] = LLVMConstInt (LLVMInt32Type (), 0, 0);
			LLVMSetInitializer (ref_var, LLVMConstGEP (name_var, indexes, 2));
			LLVMSetLinkage (ref_var, LLVMPrivateLinkage);
			LLVMSetExternallyInitialized (ref_var, TRUE);
			LLVMSetSection (ref_var, "__DATA, __objc_selrefs, literal_pointers, no_dead_strip");
			LLVMSetAlignment (ref_var, sizeof (target_mgreg_t));
			mark_as_used (ctx->module, ref_var);

			g_hash_table_insert (ctx->module->objc_selector_to_var, g_strdup (name), ref_var);
			var = ref_var;
		}

		values [ins->dreg] = LLVMBuildLoad (builder, var, "");
		break;
	}
#if defined(TARGET_X86) || defined(TARGET_AMD64) || defined(TARGET_ARM64) || defined(TARGET_WASM)
	case OP_EXTRACTX_U2:
	case OP_XEXTRACT_I1:
	case OP_XEXTRACT_I2:
	case OP_XEXTRACT_I4:
	case OP_XEXTRACT_I8:
	case OP_XEXTRACT_R4:
	case OP_XEXTRACT_R8:
	case OP_EXTRACT_I1:
	case OP_EXTRACT_I2:
	case OP_EXTRACT_I4:
	case OP_EXTRACT_I8:
	case OP_EXTRACT_R4:
	case OP_EXTRACT_R8: {
		MonoTypeEnum mono_elt_t = inst_c1_type (ins);
		LLVMTypeRef elt_t = primitive_type_to_llvm_type (mono_elt_t);
		gboolean sext = FALSE;
		gboolean zext = FALSE;
		switch (mono_elt_t) {
		case MONO_TYPE_I1: case MONO_TYPE_I2: sext = TRUE; break;
		case MONO_TYPE_U1: case MONO_TYPE_U2: zext = TRUE; break;
		}
		LLVMValueRef element_ix = NULL;
		switch (ins->opcode) {
		case OP_XEXTRACT_I1:
		case OP_XEXTRACT_I2:
		case OP_XEXTRACT_I4:
		case OP_XEXTRACT_R4:
		case OP_XEXTRACT_R8:
		case OP_XEXTRACT_I8:
			element_ix = rhs;
			break;
		default:
			element_ix = const_int32 (ins->inst_c0);
		}
		LLVMTypeRef lhs_t = LLVMTypeOf (lhs);
		int vec_width = mono_llvm_get_prim_size_bits (lhs_t);
		int elem_width = mono_llvm_get_prim_size_bits (elt_t);
		int elements = vec_width / elem_width;
		element_ix = LLVMBuildAnd (builder, element_ix, const_int32 (elements - 1), "extract");
		LLVMTypeRef ret_t = LLVMVectorType (elt_t, elements);
		LLVMValueRef src = LLVMBuildBitCast (builder, lhs, ret_t, "extract");
		LLVMValueRef result = LLVMBuildExtractElement (builder, src, element_ix, "extract");
		if (zext)
			result = LLVMBuildZExt (builder, result, i4_t, "extract_zext");
		else if (sext)
			result = LLVMBuildSExt (builder, result, i4_t, "extract_sext");
		values [ins->dreg] = result;
		break;
	}
	case OP_XINSERT_I1:
	case OP_XINSERT_I2:
	case OP_XINSERT_I4:
	case OP_XINSERT_I8:
	case OP_XINSERT_R4:
	case OP_XINSERT_R8: {
		MonoTypeEnum primty = inst_c1_type (ins);
		LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
		LLVMTypeRef elem_t = LLVMGetElementType (ret_t);
		int elements = LLVMGetVectorSize (ret_t);
		LLVMValueRef element_ix = LLVMBuildAnd (builder, arg3, const_int32 (elements - 1), "xinsert");
		LLVMValueRef vec = convert (ctx, lhs, ret_t);
		LLVMValueRef val = convert_full (ctx, rhs, elem_t, primitive_type_is_unsigned (primty));
		LLVMValueRef result = LLVMBuildInsertElement (builder, vec, val, element_ix, "xinsert");
		values [ins->dreg] = result;
		break;
	}
	case OP_EXPAND_I1:
	case OP_EXPAND_I2:
	case OP_EXPAND_I4:
	case OP_EXPAND_I8:
	case OP_EXPAND_R4:
	case OP_EXPAND_R8: {
		LLVMTypeRef t;
		LLVMValueRef mask [MAX_VECTOR_ELEMS], v;
		int i;

		t = simd_class_to_llvm_type (ctx, ins->klass);
		for (i = 0; i < MAX_VECTOR_ELEMS; ++i)
			mask [i] = LLVMConstInt (LLVMInt32Type (), 0, FALSE);

		v = convert (ctx, values [ins->sreg1], LLVMGetElementType (t));

		values [ins->dreg] = LLVMBuildInsertElement (builder, LLVMConstNull (t), v, LLVMConstInt (LLVMInt32Type (), 0, FALSE), "");
		values [ins->dreg] = LLVMBuildShuffleVector (builder, values [ins->dreg], LLVMGetUndef (t), LLVMConstVector (mask, LLVMGetVectorSize (t)), "");
		break;
	}
	case OP_XZERO: {
		values [ins->dreg] = LLVMConstNull (type_to_llvm_type (ctx, m_class_get_byval_arg (ins->klass)));
		break;
	}
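	/*
	 * The SIMD membase load/store cases below compute the address as
	 * base + offset and use alignment-1 vector loads/stores, since the
	 * effective address is not known to be naturally aligned.
	 */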
	case OP_LOADX_MEMBASE: {
		LLVMTypeRef t = type_to_llvm_type (ctx, m_class_get_byval_arg (ins->klass));
		LLVMValueRef src;

		src = convert (ctx, LLVMBuildAdd (builder, convert (ctx, values [ins->inst_basereg], IntPtrType ()), LLVMConstInt (IntPtrType (), ins->inst_offset, FALSE), ""), LLVMPointerType (t, 0));
		values [ins->dreg] = mono_llvm_build_aligned_load (builder, src, "", FALSE, 1);
		break;
	}
	case OP_STOREX_MEMBASE: {
		LLVMTypeRef t = LLVMTypeOf (values [ins->sreg1]);
		LLVMValueRef dest;

		dest = convert (ctx, LLVMBuildAdd (builder, convert (ctx, values [ins->inst_destbasereg], IntPtrType ()), LLVMConstInt (IntPtrType (), ins->inst_offset, FALSE), ""), LLVMPointerType (t, 0));
		mono_llvm_build_aligned_store (builder, values [ins->sreg1], dest, FALSE, 1);
		break;
	}
	case OP_XBINOP:
	case OP_XBINOP_SCALAR:
	case OP_XBINOP_BYSCALAR: {
		gboolean scalar = ins->opcode == OP_XBINOP_SCALAR;
		gboolean byscalar = ins->opcode == OP_XBINOP_BYSCALAR;
		LLVMValueRef result = NULL;
		LLVMValueRef args [] = { lhs, rhs };
		if (scalar)
			for (int i = 0; i < 2; ++i)
				args [i] = scalar_from_vector (ctx, args [i]);
		if (byscalar) {
			LLVMTypeRef t = LLVMTypeOf (args [0]);
			unsigned int elems = LLVMGetVectorSize (t);
			args [1] = broadcast_element (ctx, scalar_from_vector (ctx, args [1]), elems);
		}
		LLVMValueRef l = args [0];
		LLVMValueRef r = args [1];
		switch (ins->inst_c0) {
		case OP_IADD:
			result = LLVMBuildAdd (builder, l, r, "");
			break;
		case OP_ISUB:
			result = LLVMBuildSub (builder, l, r, "");
			break;
		case OP_IMUL:
			result = LLVMBuildMul (builder, l, r, "");
			break;
		case OP_IAND:
			result = LLVMBuildAnd (builder, l, r, "");
			break;
		case OP_IOR:
			result = LLVMBuildOr (builder, l, r, "");
			break;
		case OP_IXOR:
			result = LLVMBuildXor (builder, l, r, "");
			break;
		case OP_FADD:
			result = LLVMBuildFAdd (builder, l, r, "");
			break;
		case OP_FSUB:
			result = LLVMBuildFSub (builder, l, r, "");
			break;
		case OP_FMUL:
			result = LLVMBuildFMul (builder, l, r, "");
			break;
		case OP_FDIV:
			result = LLVMBuildFDiv (builder, l, r, "");
			break;
		case OP_FMAX:
		case OP_FMIN: {
#if defined(TARGET_X86) || defined(TARGET_AMD64)
			LLVMValueRef args [] = { l, r };
			LLVMTypeRef t = LLVMTypeOf (l);
			LLVMTypeRef elem_t = LLVMGetElementType (t);
			unsigned int elems = LLVMGetVectorSize (t);
			unsigned int elem_bits = mono_llvm_get_prim_size_bits (elem_t);
			unsigned int v_size = elems * elem_bits;
			if (v_size == 128) {
				gboolean is_r4 = ins->inst_c1 == MONO_TYPE_R4;
				int iid = -1;
				if (ins->inst_c0 == OP_FMAX) {
					if (elems == 1)
						iid = is_r4 ? INTRINS_SSE_MAXSS : INTRINS_SSE_MAXSD;
					else
						iid = is_r4 ? INTRINS_SSE_MAXPS : INTRINS_SSE_MAXPD;
				} else {
					if (elems == 1)
						iid = is_r4 ? INTRINS_SSE_MINSS : INTRINS_SSE_MINSD;
					else
						iid = is_r4 ? INTRINS_SSE_MINPS : INTRINS_SSE_MINPD;
				}
				result = call_intrins (ctx, iid, args, dname);
			} else {
				LLVMRealPredicate op = ins->inst_c0 == OP_FMAX ? LLVMRealUGE : LLVMRealULE;
				LLVMValueRef cmp = LLVMBuildFCmp (builder, op, l, r, "");
				result = LLVMBuildSelect (builder, cmp, l, r, "");
			}
#elif defined(TARGET_ARM64)
			LLVMValueRef args [] = { l, r };
			IntrinsicId iid = ins->inst_c0 == OP_FMAX ? INTRINS_AARCH64_ADV_SIMD_FMAX : INTRINS_AARCH64_ADV_SIMD_FMIN;
			llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass);
			result = call_overloaded_intrins (ctx, iid, ovr_tag, args, "");
#else
			NOT_IMPLEMENTED;
#endif
			break;
		}
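		/*
		 * Integer vector min/max: on arm64 these map to the AdvSIMD
		 * umin/smin/umax/smax intrinsics (except for 64-bit lanes);
		 * elsewhere they are emitted as an icmp followed by a select.
		 */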
		case OP_IMAX:
		case OP_IMIN: {
			gboolean is_unsigned = ins->inst_c1 == MONO_TYPE_U1 || ins->inst_c1 == MONO_TYPE_U2 || ins->inst_c1 == MONO_TYPE_U4 || ins->inst_c1 == MONO_TYPE_U8;
			LLVMIntPredicate op;
			switch (ins->inst_c0) {
			case OP_IMAX:
				op = is_unsigned ? LLVMIntUGT : LLVMIntSGT;
				break;
			case OP_IMIN:
				op = is_unsigned ? LLVMIntULT : LLVMIntSLT;
				break;
			default:
				g_assert_not_reached ();
			}
#if defined(TARGET_ARM64)
			if ((ins->inst_c1 == MONO_TYPE_U8) || (ins->inst_c1 == MONO_TYPE_I8)) {
				LLVMValueRef cmp = LLVMBuildICmp (builder, op, l, r, "");
				result = LLVMBuildSelect (builder, cmp, l, r, "");
			} else {
				IntrinsicId iid;
				switch (ins->inst_c0) {
				case OP_IMAX:
					iid = is_unsigned ? INTRINS_AARCH64_ADV_SIMD_UMAX : INTRINS_AARCH64_ADV_SIMD_SMAX;
					break;
				case OP_IMIN:
					iid = is_unsigned ? INTRINS_AARCH64_ADV_SIMD_UMIN : INTRINS_AARCH64_ADV_SIMD_SMIN;
					break;
				default:
					g_assert_not_reached ();
				}
				LLVMValueRef args [] = { l, r };
				llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass);
				result = call_overloaded_intrins (ctx, iid, ovr_tag, args, "");
			}
#else
			LLVMValueRef cmp = LLVMBuildICmp (builder, op, l, r, "");
			result = LLVMBuildSelect (builder, cmp, l, r, "");
#endif
			break;
		}
		default:
			g_assert_not_reached ();
		}
		if (scalar)
			result = vector_from_scalar (ctx, LLVMTypeOf (lhs), result);
		values [ins->dreg] = result;
		break;
	}
	case OP_XBINOP_FORCEINT: {
		LLVMTypeRef t = LLVMTypeOf (lhs);
		LLVMTypeRef elem_t = LLVMGetElementType (t);
		unsigned int elems = LLVMGetVectorSize (t);
		unsigned int elem_bits = mono_llvm_get_prim_size_bits (elem_t);
		LLVMTypeRef intermediate_elem_t = LLVMIntType (elem_bits);
		LLVMTypeRef intermediate_t = LLVMVectorType (intermediate_elem_t, elems);
		LLVMValueRef lhs_int = convert (ctx, lhs, intermediate_t);
		LLVMValueRef rhs_int = convert (ctx, rhs, intermediate_t);
		LLVMValueRef result = NULL;
		switch (ins->inst_c0) {
		case XBINOP_FORCEINT_AND:
			result = LLVMBuildAnd (builder, lhs_int, rhs_int, "");
			break;
		case XBINOP_FORCEINT_OR:
			result = LLVMBuildOr (builder, lhs_int, rhs_int, "");
			break;
		case XBINOP_FORCEINT_ORNOT:
			result = LLVMBuildNot (builder, rhs_int, "");
			result = LLVMBuildOr (builder, result, lhs_int, "");
			break;
		case XBINOP_FORCEINT_XOR:
			result = LLVMBuildXor (builder, lhs_int, rhs_int, "");
			break;
		}
		values [ins->dreg] = LLVMBuildBitCast (builder, result, t, "");
		break;
	}
	case OP_CREATE_SCALAR:
	case OP_CREATE_SCALAR_UNSAFE: {
		MonoTypeEnum primty = inst_c1_type (ins);
		LLVMTypeRef type = simd_class_to_llvm_type (ctx, ins->klass);
		// use undef vector (most likely empty but may contain garbage values) for OP_CREATE_SCALAR_UNSAFE
		// and zero one for OP_CREATE_SCALAR
		LLVMValueRef vector = (ins->opcode == OP_CREATE_SCALAR) ? LLVMConstNull (type) : LLVMGetUndef (type);
		LLVMValueRef val = convert_full (ctx, lhs, primitive_type_to_llvm_type (primty), primitive_type_is_unsigned (primty));
		values [ins->dreg] = LLVMBuildInsertElement (builder, vector, val, const_int32 (0), "");
		break;
	}
	case OP_INSERT_I1:
		values [ins->dreg] = LLVMBuildInsertElement (builder, values [ins->sreg1], convert (ctx, values [ins->sreg2], LLVMInt8Type ()), LLVMConstInt (LLVMInt32Type (), ins->inst_c0, FALSE), dname);
		break;
	case OP_INSERT_I2:
		values [ins->dreg] = LLVMBuildInsertElement (builder, values [ins->sreg1], convert (ctx, values [ins->sreg2], LLVMInt16Type ()), LLVMConstInt (LLVMInt32Type (), ins->inst_c0, FALSE), dname);
		break;
	case OP_INSERT_I4:
		values [ins->dreg] = LLVMBuildInsertElement (builder, values [ins->sreg1], convert (ctx, values [ins->sreg2], LLVMInt32Type ()), LLVMConstInt (LLVMInt32Type (), ins->inst_c0, FALSE), dname);
		break;
	case OP_INSERT_I8:
		values [ins->dreg] = LLVMBuildInsertElement (builder, values [ins->sreg1], convert (ctx, values [ins->sreg2], LLVMInt64Type ()), LLVMConstInt (LLVMInt32Type (), ins->inst_c0, FALSE), dname);
		break;
	case OP_INSERT_R4:
		values [ins->dreg] = LLVMBuildInsertElement (builder, values [ins->sreg1], convert (ctx, values [ins->sreg2], LLVMFloatType ()), LLVMConstInt (LLVMInt32Type (), ins->inst_c0, FALSE), dname);
		break;
	case OP_INSERT_R8:
		values [ins->dreg] = LLVMBuildInsertElement (builder, values [ins->sreg1], convert (ctx, values [ins->sreg2], LLVMDoubleType ()), LLVMConstInt (LLVMInt32Type (), ins->inst_c0, FALSE), dname);
		break;
	case OP_XCAST: {
		LLVMTypeRef t = simd_class_to_llvm_type (ctx, ins->klass);

		values [ins->dreg] = LLVMBuildBitCast (builder, lhs, t, "");
		break;
	}
	case OP_XCONCAT: {
		values [ins->dreg] = concatenate_vectors (ctx, lhs, rhs);
		break;
	}
	case OP_XINSERT_LOWER:
	case OP_XINSERT_UPPER: {
		const char *oname = ins->opcode == OP_XINSERT_LOWER ? "xinsert_lower" : "xinsert_upper";
		int ix = ins->opcode == OP_XINSERT_LOWER ? 0 : 1;
		LLVMTypeRef src_t = LLVMTypeOf (lhs);
		unsigned int width = mono_llvm_get_prim_size_bits (src_t);
		LLVMTypeRef int_t = LLVMIntType (width / 2);
		LLVMTypeRef intvec_t = LLVMVectorType (int_t, 2);
		LLVMValueRef insval = LLVMBuildBitCast (builder, rhs, int_t, oname);
		LLVMValueRef val = LLVMBuildBitCast (builder, lhs, intvec_t, oname);
		val = LLVMBuildInsertElement (builder, val, insval, const_int32 (ix), oname);
		val = LLVMBuildBitCast (builder, val, src_t, oname);
		values [ins->dreg] = val;
		break;
	}
	case OP_XLOWER:
	case OP_XUPPER: {
		const char *oname = ins->opcode == OP_XLOWER ? "xlower" : "xupper";
		LLVMTypeRef src_t = LLVMTypeOf (lhs);
		unsigned int elems = LLVMGetVectorSize (src_t);
		g_assert (elems >= 2 && elems <= MAX_VECTOR_ELEMS);
		unsigned int ret_elems = elems / 2;
		int startix = ins->opcode == OP_XLOWER ? 0 : ret_elems;
		LLVMValueRef val = LLVMBuildShuffleVector (builder, lhs, LLVMGetUndef (src_t), create_const_vector_i32 (&mask_0_incr_1 [startix], ret_elems), oname);
		values [ins->dreg] = val;
		break;
	}
	case OP_XWIDEN:
	case OP_XWIDEN_UNSAFE: {
		const char *oname = ins->opcode == OP_XWIDEN ? "xwiden" : "xwiden_unsafe";
		LLVMTypeRef src_t = LLVMTypeOf (lhs);
		unsigned int elems = LLVMGetVectorSize (src_t);
		g_assert (elems <= MAX_VECTOR_ELEMS / 2);
		unsigned int ret_elems = elems * 2;
		LLVMValueRef upper = ins->opcode == OP_XWIDEN ? LLVMConstNull (src_t) : LLVMGetUndef (src_t);
		LLVMValueRef val = LLVMBuildShuffleVector (builder, lhs, upper, create_const_vector_i32 (mask_0_incr_1, ret_elems), oname);
		values [ins->dreg] = val;
		break;
	}
#endif // defined(TARGET_X86) || defined(TARGET_AMD64) || defined(TARGET_ARM64) || defined(TARGET_WASM)

#if defined(TARGET_X86) || defined(TARGET_AMD64) || defined(TARGET_WASM)
	case OP_PADDB:
	case OP_PADDW:
	case OP_PADDD:
	case OP_PADDQ:
		values [ins->dreg] = LLVMBuildAdd (builder, lhs, rhs, "");
		break;
	case OP_ADDPD:
	case OP_ADDPS:
		values [ins->dreg] = LLVMBuildFAdd (builder, lhs, rhs, "");
		break;
	case OP_PSUBB:
	case OP_PSUBW:
	case OP_PSUBD:
	case OP_PSUBQ:
		values [ins->dreg] = LLVMBuildSub (builder, lhs, rhs, "");
		break;
	case OP_SUBPD:
	case OP_SUBPS:
		values [ins->dreg] = LLVMBuildFSub (builder, lhs, rhs, "");
		break;
	case OP_MULPD:
	case OP_MULPS:
		values [ins->dreg] = LLVMBuildFMul (builder, lhs, rhs, "");
		break;
	case OP_DIVPD:
	case OP_DIVPS:
		values [ins->dreg] = LLVMBuildFDiv (builder, lhs, rhs, "");
		break;
	case OP_PAND:
		values [ins->dreg] = LLVMBuildAnd (builder, lhs, rhs, "");
		break;
	case OP_POR:
		values [ins->dreg] = LLVMBuildOr (builder, lhs, rhs, "");
		break;
	case OP_PXOR:
		values [ins->dreg] = LLVMBuildXor (builder, lhs, rhs, "");
		break;
	case OP_PMULW:
	case OP_PMULD:
		values [ins->dreg] = LLVMBuildMul (builder, lhs, rhs, "");
		break;
	case OP_ANDPS:
	case OP_ANDNPS:
	case OP_ORPS:
	case OP_XORPS:
	case OP_ANDPD:
	case OP_ANDNPD:
	case OP_ORPD:
	case OP_XORPD: {
		LLVMTypeRef t, rt;
		LLVMValueRef v = NULL;

		switch (ins->opcode) {
		case OP_ANDPS:
		case OP_ANDNPS:
		case OP_ORPS:
		case OP_XORPS:
			t = LLVMVectorType (LLVMInt32Type (), 4);
			rt = LLVMVectorType (LLVMFloatType (), 4);
			break;
		case OP_ANDPD:
		case OP_ANDNPD:
		case OP_ORPD:
		case OP_XORPD:
			t = LLVMVectorType (LLVMInt64Type (), 2);
			rt = LLVMVectorType (LLVMDoubleType (), 2);
			break;
		default:
			t = LLVMInt32Type ();
			rt = LLVMInt32Type ();
			g_assert_not_reached ();
		}

		lhs = LLVMBuildBitCast (builder, lhs, t, "");
		rhs = LLVMBuildBitCast (builder, rhs, t, "");
		switch (ins->opcode) {
		case OP_ANDPS:
		case OP_ANDPD:
			v = LLVMBuildAnd (builder, lhs, rhs, "");
			break;
		case OP_ORPS:
		case OP_ORPD:
			v = LLVMBuildOr (builder, lhs, rhs, "");
			break;
		case OP_XORPS:
		case OP_XORPD:
			v = LLVMBuildXor (builder, lhs, rhs, "");
			break;
		case OP_ANDNPS:
		case OP_ANDNPD:
			v = LLVMBuildAnd (builder, rhs, LLVMBuildNot (builder, lhs, ""), "");
			break;
		}
		values [ins->dreg] = LLVMBuildBitCast (builder, v, rt, "");
		break;
	}
	case OP_PMIND_UN:
	case OP_PMINW_UN:
	case OP_PMINB_UN: {
		LLVMValueRef cmp = LLVMBuildICmp (builder, LLVMIntULT, lhs, rhs, "");
		values [ins->dreg] = LLVMBuildSelect (builder, cmp, lhs, rhs, "");
		break;
	}
	case OP_PMAXD_UN:
	case OP_PMAXW_UN:
	case OP_PMAXB_UN: {
		LLVMValueRef cmp = LLVMBuildICmp (builder, LLVMIntUGT, lhs, rhs, "");
		values [ins->dreg] = LLVMBuildSelect (builder, cmp, lhs, rhs, "");
		break;
	}
	case OP_PMINW: {
		LLVMValueRef cmp = LLVMBuildICmp (builder, LLVMIntSLT, lhs, rhs, "");
		values [ins->dreg] = LLVMBuildSelect (builder, cmp, lhs, rhs, "");
		break;
	}
	case OP_PMAXW: {
		LLVMValueRef cmp = LLVMBuildICmp (builder, LLVMIntSGT, lhs, rhs, "");
		values [ins->dreg] = LLVMBuildSelect (builder, cmp, lhs, rhs, "");
		break;
	}
	case OP_PAVGB_UN:
	case OP_PAVGW_UN: {
		LLVMValueRef ones_vec;
		LLVMValueRef ones [MAX_VECTOR_ELEMS];
		int vector_size = LLVMGetVectorSize (LLVMTypeOf (lhs));
		LLVMTypeRef ext_elem_type = vector_size == 16 ? LLVMInt16Type () : LLVMInt32Type ();

		for (int i = 0; i < MAX_VECTOR_ELEMS; ++i)
			ones [i] = LLVMConstInt (ext_elem_type, 1, FALSE);
		ones_vec = LLVMConstVector (ones, vector_size);

		LLVMValueRef val;
		LLVMTypeRef ext_type = LLVMVectorType (ext_elem_type, vector_size);

		/* Have to increase the vector element size to prevent overflows */
		/* res = trunc ((zext (lhs) + zext (rhs) + 1) >> 1) */
		val = LLVMBuildAdd (builder, LLVMBuildZExt (builder, lhs, ext_type, ""), LLVMBuildZExt (builder, rhs, ext_type, ""), "");
		val = LLVMBuildAdd (builder, val, ones_vec, "");
		val = LLVMBuildLShr (builder, val, ones_vec, "");
		values [ins->dreg] = LLVMBuildTrunc (builder, val, LLVMTypeOf (lhs), "");
		break;
	}
	case OP_PCMPEQB:
	case OP_PCMPEQW:
	case OP_PCMPEQD:
	case OP_PCMPEQQ:
	case OP_PCMPGTB: {
		LLVMValueRef pcmp;
		LLVMTypeRef retType;
		LLVMIntPredicate cmpOp;

		if (ins->opcode == OP_PCMPGTB)
			cmpOp = LLVMIntSGT;
		else
			cmpOp = LLVMIntEQ;

		if (LLVMTypeOf (lhs) == LLVMTypeOf (rhs)) {
			pcmp = LLVMBuildICmp (builder, cmpOp, lhs, rhs, "");
			retType = LLVMTypeOf (lhs);
		} else {
			LLVMTypeRef flatType = LLVMVectorType (LLVMInt8Type (), 16);
			LLVMValueRef flatRHS = convert (ctx, rhs, flatType);
			LLVMValueRef flatLHS = convert (ctx, lhs, flatType);

			pcmp = LLVMBuildICmp (builder, cmpOp, flatLHS, flatRHS, "");
			retType = flatType;
		}
		values [ins->dreg] = LLVMBuildSExt (builder, pcmp, retType, "");
		break;
	}
	case OP_CVTDQ2PS: {
		LLVMValueRef i4 = LLVMBuildBitCast (builder, lhs, sse_i4_t, "");
		values [ins->dreg] = LLVMBuildSIToFP (builder, i4, sse_r4_t, dname);
		break;
	}
	case OP_CVTDQ2PD: {
		LLVMValueRef indexes [16];

		indexes [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE);
		indexes [1] = LLVMConstInt (LLVMInt32Type (), 1, FALSE);
		LLVMValueRef mask = LLVMConstVector (indexes, 2);
		LLVMValueRef shuffle = LLVMBuildShuffleVector (builder, lhs, LLVMConstNull (LLVMTypeOf (lhs)), mask, "");
		values [ins->dreg] = LLVMBuildSIToFP (builder, shuffle, LLVMVectorType (LLVMDoubleType (), 2), dname);
		break;
	}
	case OP_SSE2_CVTSS2SD: {
		LLVMValueRef rhs_elem = LLVMBuildExtractElement (builder, rhs, const_int32 (0), "");
		LLVMValueRef fpext = LLVMBuildFPExt (builder, rhs_elem, LLVMDoubleType (), dname);
		values [ins->dreg] = LLVMBuildInsertElement (builder, lhs, fpext, const_int32 (0), "");
		break;
	}
	case OP_CVTPS2PD: {
		LLVMValueRef indexes [16];

		indexes [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE);
		indexes [1] = LLVMConstInt (LLVMInt32Type (), 1, FALSE);
		LLVMValueRef mask = LLVMConstVector (indexes, 2);
		LLVMValueRef shuffle = LLVMBuildShuffleVector (builder, lhs, LLVMConstNull (LLVMTypeOf (lhs)), mask, "");
		values [ins->dreg] = LLVMBuildFPExt (builder, shuffle, LLVMVectorType (LLVMDoubleType (), 2), dname);
		break;
	}
	case OP_CVTTPS2DQ:
		values [ins->dreg] = LLVMBuildFPToSI (builder, lhs, LLVMVectorType (LLVMInt32Type (), 4), dname);
		break;
	case OP_CVTPD2DQ:
	case OP_CVTPS2DQ:
	case OP_CVTPD2PS:
	case OP_CVTTPD2DQ: {
		LLVMValueRef v;

		v = convert (ctx, values [ins->sreg1], simd_op_to_llvm_type (ins->opcode));
		values [ins->dreg] = call_intrins (ctx, simd_ins_to_intrins (ins->opcode), &v, dname);
		break;
	}
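	/*
	 * Packed float compares: the SIMD_COMP_* condition is translated to the
	 * corresponding LLVM real predicate, and the resulting i1 mask is
	 * sign-extended back to the source vector width before the bitcast.
	 */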
	case OP_COMPPS:
	case OP_COMPPD: {
		LLVMRealPredicate op;

		switch (ins->inst_c0) {
		case SIMD_COMP_EQ:
			op = LLVMRealOEQ;
			break;
		case SIMD_COMP_LT:
			op = LLVMRealOLT;
			break;
		case SIMD_COMP_LE:
			op = LLVMRealOLE;
			break;
		case SIMD_COMP_UNORD:
			op = LLVMRealUNO;
			break;
		case SIMD_COMP_NEQ:
			op = LLVMRealUNE;
			break;
		case SIMD_COMP_NLT:
			op = LLVMRealUGE;
			break;
		case SIMD_COMP_NLE:
			op = LLVMRealUGT;
			break;
		case SIMD_COMP_ORD:
			op = LLVMRealORD;
			break;
		default:
			g_assert_not_reached ();
		}

		LLVMValueRef cmp = LLVMBuildFCmp (builder, op, lhs, rhs, "");
		if (ins->opcode == OP_COMPPD)
			values [ins->dreg] = LLVMBuildBitCast (builder, LLVMBuildSExt (builder, cmp, LLVMVectorType (LLVMInt64Type (), 2), ""), LLVMTypeOf (lhs), "");
		else
			values [ins->dreg] = LLVMBuildBitCast (builder, LLVMBuildSExt (builder, cmp, LLVMVectorType (LLVMInt32Type (), 4), ""), LLVMTypeOf (lhs), "");
		break;
	}
	case OP_ICONV_TO_X:
		/* This is only used for implementing shifts by non-immediate */
		values [ins->dreg] = lhs;
		break;
	case OP_SHUFPS:
	case OP_SHUFPD:
	case OP_PSHUFLED:
	case OP_PSHUFLEW_LOW:
	case OP_PSHUFLEW_HIGH: {
		int mask [16];
		LLVMValueRef v1 = NULL, v2 = NULL, mask_values [16];
		int i, mask_size = 0;
		int imask = ins->inst_c0;

		/* Convert the x86 shuffle mask to LLVM's */
		switch (ins->opcode) {
		case OP_SHUFPS:
			mask_size = 4;
			mask [0] = ((imask >> 0) & 3);
			mask [1] = ((imask >> 2) & 3);
			mask [2] = ((imask >> 4) & 3) + 4;
			mask [3] = ((imask >> 6) & 3) + 4;
			v1 = values [ins->sreg1];
			v2 = values [ins->sreg2];
			break;
		case OP_SHUFPD:
			mask_size = 2;
			mask [0] = ((imask >> 0) & 1);
			mask [1] = ((imask >> 1) & 1) + 2;
			v1 = values [ins->sreg1];
			v2 = values [ins->sreg2];
			break;
		case OP_PSHUFLEW_LOW:
			mask_size = 8;
			mask [0] = ((imask >> 0) & 3);
			mask [1] = ((imask >> 2) & 3);
			mask [2] = ((imask >> 4) & 3);
			mask [3] = ((imask >> 6) & 3);
			mask [4] = 4 + 0;
			mask [5] = 4 + 1;
			mask [6] = 4 + 2;
			mask [7] = 4 + 3;
			v1 = values [ins->sreg1];
			v2 = LLVMGetUndef (LLVMTypeOf (v1));
			break;
		case OP_PSHUFLEW_HIGH:
			mask_size = 8;
			mask [0] = 0;
			mask [1] = 1;
			mask [2] = 2;
			mask [3] = 3;
			mask [4] = 4 + ((imask >> 0) & 3);
			mask [5] = 4 + ((imask >> 2) & 3);
			mask [6] = 4 + ((imask >> 4) & 3);
			mask [7] = 4 + ((imask >> 6) & 3);
			v1 = values [ins->sreg1];
			v2 = LLVMGetUndef (LLVMTypeOf (v1));
			break;
		case OP_PSHUFLED:
			mask_size = 4;
			mask [0] = ((imask >> 0) & 3);
			mask [1] = ((imask >> 2) & 3);
			mask [2] = ((imask >> 4) & 3);
			mask [3] = ((imask >> 6) & 3);
			v1 = values [ins->sreg1];
			v2 = LLVMGetUndef (LLVMTypeOf (v1));
			break;
		default:
			g_assert_not_reached ();
		}
		for (i = 0; i < mask_size; ++i)
			mask_values [i] = LLVMConstInt (LLVMInt32Type (), mask [i], FALSE);

		values [ins->dreg] = LLVMBuildShuffleVector (builder, v1, v2, LLVMConstVector (mask_values, mask_size), dname);
		break;
	}
	case OP_UNPACK_LOWB:
	case OP_UNPACK_LOWW:
	case OP_UNPACK_LOWD:
	case OP_UNPACK_LOWQ:
	case OP_UNPACK_LOWPS:
	case OP_UNPACK_LOWPD:
	case OP_UNPACK_HIGHB:
	case OP_UNPACK_HIGHW:
	case OP_UNPACK_HIGHD:
	case OP_UNPACK_HIGHQ:
	case OP_UNPACK_HIGHPS:
	case OP_UNPACK_HIGHPD: {
		int mask [16];
		LLVMValueRef mask_values [16];
		int i, mask_size = 0;
		gboolean low = FALSE;

		switch (ins->opcode) {
		case OP_UNPACK_LOWB:
			mask_size = 16;
			low = TRUE;
			break;
		case OP_UNPACK_LOWW:
			mask_size = 8;
			low = TRUE;
			break;
		case OP_UNPACK_LOWD:
		case OP_UNPACK_LOWPS:
			mask_size = 4;
			low = TRUE;
			break;
		case OP_UNPACK_LOWQ:
		case OP_UNPACK_LOWPD:
			mask_size = 2;
			low = TRUE;
			break;
		case OP_UNPACK_HIGHB:
			mask_size = 16;
			break;
		case OP_UNPACK_HIGHW:
			mask_size = 8;
			break;
		case OP_UNPACK_HIGHD:
		case OP_UNPACK_HIGHPS:
			mask_size = 4;
			break;
		case OP_UNPACK_HIGHQ:
		case OP_UNPACK_HIGHPD:
			mask_size = 2;
			break;
		default:
			g_assert_not_reached ();
		}

		if (low) {
			for (i = 0; i < (mask_size / 2); ++i) {
				mask [(i * 2)] = i;
				mask [(i * 2) + 1] = mask_size + i;
			}
		} else {
			for (i = 0; i < (mask_size / 2); ++i) {
				mask [(i * 2)] = (mask_size / 2) + i;
				mask [(i * 2) + 1] = mask_size + (mask_size / 2) + i;
			}
		}

		for (i = 0; i < mask_size; ++i)
			mask_values [i] = LLVMConstInt (LLVMInt32Type (), mask [i], FALSE);
		values [ins->dreg] = LLVMBuildShuffleVector (builder, values [ins->sreg1], values [ins->sreg2], LLVMConstVector (mask_values, mask_size), dname);
		break;
	}
	case OP_DUPPD: {
		LLVMTypeRef t = simd_op_to_llvm_type (ins->opcode);
		LLVMValueRef v, val;

		v = LLVMBuildExtractElement (builder, lhs, LLVMConstInt (LLVMInt32Type (), 0, FALSE), "");
		val = LLVMConstNull (t);
		val = LLVMBuildInsertElement (builder, val, v, LLVMConstInt (LLVMInt32Type (), 0, FALSE), "");
		val = LLVMBuildInsertElement (builder, val, v, LLVMConstInt (LLVMInt32Type (), 1, FALSE), dname);

		values [ins->dreg] = val;
		break;
	}
	case OP_DUPPS_LOW:
	case OP_DUPPS_HIGH: {
		LLVMTypeRef t = simd_op_to_llvm_type (ins->opcode);
		LLVMValueRef v1, v2, val;

		if (ins->opcode == OP_DUPPS_LOW) {
			v1 = LLVMBuildExtractElement (builder, lhs, LLVMConstInt (LLVMInt32Type (), 0, FALSE), "");
			v2 = LLVMBuildExtractElement (builder, lhs, LLVMConstInt (LLVMInt32Type (), 2, FALSE), "");
		} else {
			v1 = LLVMBuildExtractElement (builder, lhs, LLVMConstInt (LLVMInt32Type (), 1, FALSE), "");
			v2 = LLVMBuildExtractElement (builder, lhs, LLVMConstInt (LLVMInt32Type (), 3, FALSE), "");
		}
		val = LLVMConstNull (t);
		val = LLVMBuildInsertElement (builder, val, v1, LLVMConstInt (LLVMInt32Type (), 0, FALSE), "");
		val = LLVMBuildInsertElement (builder, val, v1, LLVMConstInt (LLVMInt32Type (), 1, FALSE), "");
		val = LLVMBuildInsertElement (builder, val, v2, LLVMConstInt (LLVMInt32Type (), 2, FALSE), "");
		val = LLVMBuildInsertElement (builder, val, v2, LLVMConstInt (LLVMInt32Type (), 3, FALSE), "");

		values [ins->dreg] = val;
		break;
	}
	case OP_FCONV_TO_R8_X: {
		values [ins->dreg] = LLVMBuildInsertElement (builder, LLVMConstNull (sse_r8_t), lhs, LLVMConstInt (LLVMInt32Type (), 0, FALSE), "");
		break;
	}
	case OP_FCONV_TO_R4_X: {
		values [ins->dreg] = LLVMBuildInsertElement (builder, LLVMConstNull (sse_r4_t), lhs, LLVMConstInt (LLVMInt32Type (), 0, FALSE), "");
		break;
	}
#if defined(TARGET_X86) || defined(TARGET_AMD64)
	case OP_SSE_MOVMSK: {
		LLVMValueRef args [1];
		if (ins->inst_c1 == MONO_TYPE_R4) {
			args [0] = lhs;
			values [ins->dreg] = call_intrins (ctx, INTRINS_SSE_MOVMSK_PS, args, dname);
		} else if (ins->inst_c1 == MONO_TYPE_R8) {
			args [0] = lhs;
			values [ins->dreg] = call_intrins (ctx, INTRINS_SSE_MOVMSK_PD, args, dname);
		} else {
			args [0] = convert (ctx, lhs, sse_i1_t);
			values [ins->dreg] = call_intrins (ctx, INTRINS_SSE_PMOVMSKB, args, dname);
		}
		break;
	}
	case OP_SSE_MOVS:
	case OP_SSE_MOVS2: {
		if (ins->inst_c1 == MONO_TYPE_R4)
			values [ins->dreg] = LLVMBuildShuffleVector (builder, rhs, lhs, create_const_vector_4_i32 (0, 5, 6, 7), "");
		else if (ins->inst_c1 == MONO_TYPE_R8)
			values [ins->dreg] = LLVMBuildShuffleVector (builder, rhs, lhs, create_const_vector_2_i32 (0, 3), "");
		else if (ins->inst_c1 == MONO_TYPE_I8 || ins->inst_c1 == MONO_TYPE_U8)
			values [ins->dreg] = LLVMBuildInsertElement (builder, lhs, LLVMConstInt (LLVMInt64Type (), 0, FALSE), LLVMConstInt (LLVMInt32Type (), 1, FALSE), "");
		else
			g_assert_not_reached (); // will be needed for other types later
		break;
	}
	case OP_SSE_MOVEHL: {
		if (ins->inst_c1 == MONO_TYPE_R4)
			values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs, rhs, create_const_vector_4_i32 (6, 7, 2, 3), "");
		else
			g_assert_not_reached ();
		break;
	}
	case OP_SSE_MOVELH: {
		if (ins->inst_c1 == MONO_TYPE_R4)
			values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs, rhs, create_const_vector_4_i32 (0, 1, 4, 5), "");
		else
			g_assert_not_reached ();
		break;
	}
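	/*
	 * The SSE unpack lo/hi operations below are expressed as shufflevector
	 * instructions whose masks interleave the corresponding halves of the
	 * two source vectors, one mask per element width.
	 */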
	case OP_SSE_UNPACKLO: {
		if (ins->inst_c1 == MONO_TYPE_R8 || ins->inst_c1 == MONO_TYPE_I8 || ins->inst_c1 == MONO_TYPE_U8) {
			values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs, rhs, create_const_vector_2_i32 (0, 2), "");
		} else if (ins->inst_c1 == MONO_TYPE_R4 || ins->inst_c1 == MONO_TYPE_I4 || ins->inst_c1 == MONO_TYPE_U4) {
			values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs, rhs, create_const_vector_4_i32 (0, 4, 1, 5), "");
		} else if (ins->inst_c1 == MONO_TYPE_I2 || ins->inst_c1 == MONO_TYPE_U2) {
			const int mask_values [] = { 0, 8, 1, 9, 2, 10, 3, 11 };
			LLVMValueRef shuffled = LLVMBuildShuffleVector (builder, convert (ctx, lhs, sse_i2_t), convert (ctx, rhs, sse_i2_t), create_const_vector_i32 (mask_values, 8), "");
			values [ins->dreg] = convert (ctx, shuffled, type_to_sse_type (ins->inst_c1));
		} else if (ins->inst_c1 == MONO_TYPE_I1 || ins->inst_c1 == MONO_TYPE_U1) {
			const int mask_values [] = { 0, 16, 1, 17, 2, 18, 3, 19, 4, 20, 5, 21, 6, 22, 7, 23 };
			LLVMValueRef shuffled = LLVMBuildShuffleVector (builder, convert (ctx, lhs, sse_i1_t), convert (ctx, rhs, sse_i1_t), create_const_vector_i32 (mask_values, 16), "");
			values [ins->dreg] = convert (ctx, shuffled, type_to_sse_type (ins->inst_c1));
		} else {
			g_assert_not_reached ();
		}
		break;
	}
	case OP_SSE_UNPACKHI: {
		if (ins->inst_c1 == MONO_TYPE_R8 || ins->inst_c1 == MONO_TYPE_I8 || ins->inst_c1 == MONO_TYPE_U8) {
			values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs, rhs, create_const_vector_2_i32 (1, 3), "");
		} else if (ins->inst_c1 == MONO_TYPE_R4 || ins->inst_c1 == MONO_TYPE_I4 || ins->inst_c1 == MONO_TYPE_U4) {
			values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs, rhs, create_const_vector_4_i32 (2, 6, 3, 7), "");
		} else if (ins->inst_c1 == MONO_TYPE_I2 || ins->inst_c1 == MONO_TYPE_U2) {
			const int mask_values [] = { 4, 12, 5, 13, 6, 14, 7, 15 };
			LLVMValueRef shuffled = LLVMBuildShuffleVector (builder, convert (ctx, lhs, sse_i2_t), convert (ctx, rhs, sse_i2_t), create_const_vector_i32 (mask_values, 8), "");
			values [ins->dreg] = convert (ctx, shuffled, type_to_sse_type (ins->inst_c1));
		} else if (ins->inst_c1 == MONO_TYPE_I1 || ins->inst_c1 == MONO_TYPE_U1) {
			const int mask_values [] = { 8, 24, 9, 25, 10, 26, 11, 27, 12, 28, 13, 29, 14, 30, 15, 31 };
			LLVMValueRef shuffled = LLVMBuildShuffleVector (builder, convert (ctx, lhs, sse_i1_t), convert (ctx, rhs, sse_i1_t), create_const_vector_i32 (mask_values, 16), "");
			values [ins->dreg] = convert (ctx, shuffled, type_to_sse_type (ins->inst_c1));
		} else {
			g_assert_not_reached ();
		}
		break;
	}
	case OP_SSE_LOADU: {
		LLVMValueRef dst_ptr = convert (ctx, lhs, LLVMPointerType (primitive_type_to_llvm_type (inst_c1_type (ins)), 0));
		LLVMValueRef dst_vec = LLVMBuildBitCast (builder, dst_ptr, LLVMPointerType (type_to_sse_type (ins->inst_c1), 0), "");
		values [ins->dreg] = mono_llvm_build_aligned_load (builder, dst_vec, "", FALSE, ins->inst_c0); // inst_c0 is alignment
		break;
	}
	case OP_SSE_MOVSS: {
		LLVMValueRef addr = convert (ctx, lhs, LLVMPointerType (LLVMFloatType (), 0));
		LLVMValueRef val = mono_llvm_build_load (builder, addr, "", FALSE);
		values [ins->dreg] = LLVMBuildInsertElement (builder, LLVMConstNull (type_to_sse_type (ins->inst_c1)), val, LLVMConstInt (LLVMInt32Type (), 0, FALSE), "");
		break;
	}
	case OP_SSE_MOVSS_STORE: {
		LLVMValueRef addr = convert (ctx, lhs, LLVMPointerType (LLVMFloatType (), 0));
		LLVMValueRef val = LLVMBuildExtractElement (builder, rhs, LLVMConstInt (LLVMInt32Type (), 0, FALSE), "");
		mono_llvm_build_store (builder, val, addr, FALSE, LLVM_BARRIER_NONE);
		break;
	}
	case OP_SSE2_MOVD:
	case OP_SSE2_MOVQ:
	case OP_SSE2_MOVUPD: {
		LLVMTypeRef rty = NULL;
		switch (ins->opcode) {
		case OP_SSE2_MOVD:
			rty = sse_i4_t;
			break;
		case OP_SSE2_MOVQ:
			rty = sse_i8_t;
			break;
		case OP_SSE2_MOVUPD:
			rty = sse_r8_t;
			break;
		}
		LLVMTypeRef srcty = LLVMGetElementType (rty);
		LLVMValueRef zero = LLVMConstNull (rty);
		LLVMValueRef addr = convert (ctx, lhs, LLVMPointerType (srcty, 0));
		LLVMValueRef val = mono_llvm_build_aligned_load (builder, addr, "", FALSE, 1);
		values [ins->dreg] = LLVMBuildInsertElement (builder, zero, val, const_int32 (0), dname);
		break;
	}
	case OP_SSE_MOVLPS_LOAD:
	case OP_SSE_MOVHPS_LOAD: {
		LLVMTypeRef t = LLVMFloatType ();
		int size = 4;
		gboolean high = ins->opcode == OP_SSE_MOVHPS_LOAD;
		/* Load two floats from rhs and store them in the low/high part of lhs */
		LLVMValueRef addr = rhs;
		LLVMValueRef addr1 = convert (ctx, addr, LLVMPointerType (t, 0));
		LLVMValueRef addr2 = convert (ctx, LLVMBuildAdd (builder, convert (ctx, addr, IntPtrType ()), convert (ctx, LLVMConstInt (LLVMInt32Type (), size, FALSE), IntPtrType ()), ""), LLVMPointerType (t, 0));
		LLVMValueRef val1 = mono_llvm_build_load (builder, addr1, "", FALSE);
		LLVMValueRef val2 = mono_llvm_build_load (builder, addr2, "", FALSE);
		int index1, index2;

		index1 = high ? 2 : 0;
		index2 = high ? 3 : 1;
		values [ins->dreg] = LLVMBuildInsertElement (builder, LLVMBuildInsertElement (builder, lhs, val1, LLVMConstInt (LLVMInt32Type (), index1, FALSE), ""), val2, LLVMConstInt (LLVMInt32Type (), index2, FALSE), "");
		break;
	}
	case OP_SSE2_MOVLPD_LOAD:
	case OP_SSE2_MOVHPD_LOAD: {
		LLVMTypeRef t = LLVMDoubleType ();
		LLVMValueRef addr = convert (ctx, rhs, LLVMPointerType (t, 0));
		LLVMValueRef val = mono_llvm_build_load (builder, addr, "", FALSE);
		int index = ins->opcode == OP_SSE2_MOVHPD_LOAD ? 1 : 0;
		values [ins->dreg] = LLVMBuildInsertElement (builder, lhs, val, const_int32 (index), "");
		break;
	}
	case OP_SSE_MOVLPS_STORE:
	case OP_SSE_MOVHPS_STORE: {
		/* Store two floats from the low/high part of rhs into lhs */
		LLVMValueRef addr = lhs;
		LLVMValueRef addr1 = convert (ctx, addr, LLVMPointerType (LLVMFloatType (), 0));
		LLVMValueRef addr2 = convert (ctx, LLVMBuildAdd (builder, convert (ctx, addr, IntPtrType ()), convert (ctx, LLVMConstInt (LLVMInt32Type (), 4, FALSE), IntPtrType ()), ""), LLVMPointerType (LLVMFloatType (), 0));
		int index1 = ins->opcode == OP_SSE_MOVLPS_STORE ? 0 : 2;
		int index2 = ins->opcode == OP_SSE_MOVLPS_STORE ? 1 : 3;
		LLVMValueRef val1 = LLVMBuildExtractElement (builder, rhs, LLVMConstInt (LLVMInt32Type (), index1, FALSE), "");
		LLVMValueRef val2 = LLVMBuildExtractElement (builder, rhs, LLVMConstInt (LLVMInt32Type (), index2, FALSE), "");
		mono_llvm_build_store (builder, val1, addr1, FALSE, LLVM_BARRIER_NONE);
		mono_llvm_build_store (builder, val2, addr2, FALSE, LLVM_BARRIER_NONE);
		break;
	}
	case OP_SSE2_MOVLPD_STORE:
	case OP_SSE2_MOVHPD_STORE: {
		LLVMTypeRef t = LLVMDoubleType ();
		LLVMValueRef addr = convert (ctx, lhs, LLVMPointerType (t, 0));
		int index = ins->opcode == OP_SSE2_MOVHPD_STORE ? 1 : 0;
		LLVMValueRef val = LLVMBuildExtractElement (builder, rhs, const_int32 (index), "");
		mono_llvm_build_store (builder, val, addr, FALSE, LLVM_BARRIER_NONE);
		break;
	}
	case OP_SSE_STORE: {
		LLVMValueRef dst_vec = convert (ctx, lhs, LLVMPointerType (LLVMTypeOf (rhs), 0));
		mono_llvm_build_aligned_store (builder, rhs, dst_vec, FALSE, ins->inst_c0);
		break;
	}
	case OP_SSE_STORES: {
		LLVMValueRef first_elem = LLVMBuildExtractElement (builder, rhs, LLVMConstInt (LLVMInt32Type (), 0, FALSE), "");
		LLVMValueRef dst = convert (ctx, lhs, LLVMPointerType (LLVMTypeOf (first_elem), 0));
		mono_llvm_build_aligned_store (builder, first_elem, dst, FALSE, 1);
		break;
	}
	case OP_SSE_MOVNTPS: {
		LLVMValueRef addr = convert (ctx, lhs, LLVMPointerType (LLVMTypeOf (rhs), 0));
		LLVMValueRef store = mono_llvm_build_aligned_store (builder, rhs, addr, FALSE, ins->inst_c0);
		set_nontemporal_flag (store);
		break;
	}
	case OP_SSE_PREFETCHT0: {
		LLVMValueRef addr = convert (ctx, lhs, LLVMPointerType (LLVMInt8Type (), 0));
		LLVMValueRef args [] = { addr, const_int32 (0), const_int32 (3), const_int32 (1) };
		call_intrins (ctx, INTRINS_PREFETCH, args, "");
		break;
	}
	case OP_SSE_PREFETCHT1: {
		LLVMValueRef addr = convert (ctx, lhs, LLVMPointerType (LLVMInt8Type (), 0));
		LLVMValueRef args [] = { addr, const_int32 (0), const_int32 (2), const_int32 (1) };
		call_intrins (ctx, INTRINS_PREFETCH, args, "");
		break;
	}
	case OP_SSE_PREFETCHT2: {
		LLVMValueRef addr = convert (ctx, lhs, LLVMPointerType (LLVMInt8Type (), 0));
		LLVMValueRef args [] = { addr, const_int32 (0), const_int32 (1), const_int32 (1) };
		call_intrins (ctx, INTRINS_PREFETCH, args, "");
		break;
	}
	case OP_SSE_PREFETCHNTA: {
		LLVMValueRef addr = convert (ctx, lhs, LLVMPointerType (LLVMInt8Type (), 0));
		LLVMValueRef args [] = { addr, const_int32 (0), const_int32 (0), const_int32 (1) };
		call_intrins (ctx, INTRINS_PREFETCH, args, "");
		break;
	}
	case OP_SSE_OR: {
		LLVMValueRef vec_lhs_i64 = convert (ctx, lhs, sse_i8_t);
		LLVMValueRef vec_rhs_i64 = convert (ctx, rhs, sse_i8_t);
		LLVMValueRef vec_and = LLVMBuildOr (builder, vec_lhs_i64, vec_rhs_i64, "");
		values [ins->dreg] = LLVMBuildBitCast (builder, vec_and, type_to_sse_type (ins->inst_c1), "");
		break;
	}
	case OP_SSE_XOR: {
		LLVMValueRef vec_lhs_i64 = convert (ctx, lhs, sse_i8_t);
		LLVMValueRef vec_rhs_i64 = convert (ctx, rhs, sse_i8_t);
		LLVMValueRef vec_and = LLVMBuildXor (builder, vec_lhs_i64, vec_rhs_i64, "");
		values [ins->dreg] = LLVMBuildBitCast (builder, vec_and, type_to_sse_type (ins->inst_c1), "");
		break;
	}
	case OP_SSE_AND: {
		LLVMValueRef vec_lhs_i64 = convert (ctx, lhs, sse_i8_t);
		LLVMValueRef vec_rhs_i64 = convert (ctx, rhs, sse_i8_t);
		LLVMValueRef vec_and = LLVMBuildAnd (builder, vec_lhs_i64, vec_rhs_i64, "");
		values [ins->dreg] = LLVMBuildBitCast (builder, vec_and, type_to_sse_type (ins->inst_c1), "");
		break;
	}
	case OP_SSE_ANDN: {
		LLVMValueRef minus_one [2];
		minus_one [0] = LLVMConstInt (LLVMInt64Type (), -1, FALSE);
		minus_one [1] = LLVMConstInt (LLVMInt64Type (), -1, FALSE);
		LLVMValueRef vec_lhs_i64 = convert (ctx, lhs, sse_i8_t);
		LLVMValueRef vec_xor = LLVMBuildXor (builder, vec_lhs_i64, LLVMConstVector (minus_one, 2), "");
		LLVMValueRef vec_rhs_i64 = convert (ctx, rhs, sse_i8_t);
		LLVMValueRef vec_and = LLVMBuildAnd (builder, vec_rhs_i64, vec_xor, "");
		values [ins->dreg] = LLVMBuildBitCast (builder, vec_and, type_to_sse_type (ins->inst_c1), "");
		break;
	}
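	/*
	 * Scalar SSE float ops: extract element 0 from both operands, perform
	 * the operation on the scalars and insert the result back into lane 0
	 * of the first operand, leaving the upper lanes unchanged.
	 */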
	case OP_SSE_ADDSS:
	case OP_SSE_SUBSS:
	case OP_SSE_DIVSS:
	case OP_SSE_MULSS:
	case OP_SSE2_ADDSD:
	case OP_SSE2_SUBSD:
	case OP_SSE2_DIVSD:
	case OP_SSE2_MULSD: {
		LLVMValueRef v1 = LLVMBuildExtractElement (builder, lhs, LLVMConstInt (LLVMInt32Type (), 0, FALSE), "");
		LLVMValueRef v2 = LLVMBuildExtractElement (builder, rhs, LLVMConstInt (LLVMInt32Type (), 0, FALSE), "");
		LLVMValueRef v = NULL;
		switch (ins->opcode) {
		case OP_SSE_ADDSS:
		case OP_SSE2_ADDSD:
			v = LLVMBuildFAdd (builder, v1, v2, "");
			break;
		case OP_SSE_SUBSS:
		case OP_SSE2_SUBSD:
			v = LLVMBuildFSub (builder, v1, v2, "");
			break;
		case OP_SSE_DIVSS:
		case OP_SSE2_DIVSD:
			v = LLVMBuildFDiv (builder, v1, v2, "");
			break;
		case OP_SSE_MULSS:
		case OP_SSE2_MULSD:
			v = LLVMBuildFMul (builder, v1, v2, "");
			break;
		default:
			g_assert_not_reached ();
		}
		values [ins->dreg] = LLVMBuildInsertElement (builder, lhs, v, LLVMConstInt (LLVMInt32Type (), 0, FALSE), "");
		break;
	}
	case OP_SSE_CMPSS:
	case OP_SSE2_CMPSD: {
		int imm = -1;
		gboolean swap = FALSE;
		switch (ins->inst_c0) {
		case CMP_EQ: imm = SSE_eq_ord_nosignal; break;
		case CMP_GT: imm = SSE_lt_ord_signal; swap = TRUE; break;
		case CMP_GE: imm = SSE_le_ord_signal; swap = TRUE; break;
		case CMP_LT: imm = SSE_lt_ord_signal; break;
		case CMP_LE: imm = SSE_le_ord_signal; break;
		case CMP_GT_UN: imm = SSE_nle_unord_signal; break;
		case CMP_GE_UN: imm = SSE_nlt_unord_signal; break;
		case CMP_LT_UN: imm = SSE_nle_unord_signal; swap = TRUE; break;
		case CMP_LE_UN: imm = SSE_nlt_unord_signal; swap = TRUE; break;
		case CMP_NE: imm = SSE_neq_unord_nosignal; break;
		case CMP_ORD: imm = SSE_ord_nosignal; break;
		case CMP_UNORD: imm = SSE_unord_nosignal; break;
		default: g_assert_not_reached (); break;
		}
		LLVMValueRef cmp = LLVMConstInt (LLVMInt8Type (), imm, FALSE);
		LLVMValueRef args [] = { lhs, rhs, cmp };
		if (swap) {
			args [0] = rhs;
			args [1] = lhs;
		}
		IntrinsicId id = (IntrinsicId) 0;
		switch (ins->opcode) {
		case OP_SSE_CMPSS: id = INTRINS_SSE_CMPSS; break;
		case OP_SSE2_CMPSD: id = INTRINS_SSE_CMPSD; break;
		default: g_assert_not_reached (); break;
		}
		int elements = LLVMGetVectorSize (LLVMTypeOf (lhs));
		int mask_values [MAX_VECTOR_ELEMS] = { 0 };
		for (int i = 1; i < elements; ++i) {
			mask_values [i] = elements + i;
		}
		LLVMValueRef result = call_intrins (ctx, id, args, "");
		result = LLVMBuildShuffleVector (builder, result, lhs, create_const_vector_i32 (mask_values, elements), "");
		values [ins->dreg] = result;
		break;
	}
	case OP_SSE_COMISS: {
		LLVMValueRef args [] = { lhs, rhs };
		IntrinsicId id = (IntrinsicId)0;
		switch (ins->inst_c0) {
		case CMP_EQ: id = INTRINS_SSE_COMIEQ_SS; break;
		case CMP_GT: id = INTRINS_SSE_COMIGT_SS; break;
		case CMP_GE: id = INTRINS_SSE_COMIGE_SS; break;
		case CMP_LT: id = INTRINS_SSE_COMILT_SS; break;
		case CMP_LE: id = INTRINS_SSE_COMILE_SS; break;
		case CMP_NE: id = INTRINS_SSE_COMINEQ_SS; break;
		default: g_assert_not_reached (); break;
		}
		values [ins->dreg] = call_intrins (ctx, id, args, "");
		break;
	}
	case OP_SSE_UCOMISS: {
		LLVMValueRef args [] = { lhs, rhs };
		IntrinsicId id = (IntrinsicId)0;
		switch (ins->inst_c0) {
		case CMP_EQ: id = INTRINS_SSE_UCOMIEQ_SS; break;
		case CMP_GT: id = INTRINS_SSE_UCOMIGT_SS; break;
		case CMP_GE: id = INTRINS_SSE_UCOMIGE_SS; break;
		case CMP_LT: id = INTRINS_SSE_UCOMILT_SS; break;
		case CMP_LE: id = INTRINS_SSE_UCOMILE_SS; break;
		case CMP_NE: id = INTRINS_SSE_UCOMINEQ_SS; break;
		default: g_assert_not_reached (); break;
		}
		values [ins->dreg] = call_intrins (ctx, id, args, "");
		break;
	}
	case OP_SSE2_COMISD: {
		LLVMValueRef args [] = { lhs, rhs };
		IntrinsicId id = (IntrinsicId)0;
		switch (ins->inst_c0) {
		case CMP_EQ: id = INTRINS_SSE_COMIEQ_SD; break;
		case CMP_GT: id = INTRINS_SSE_COMIGT_SD; break;
		case CMP_GE: id = INTRINS_SSE_COMIGE_SD; break;
INTRINS_SSE_COMILT_SD; break; case CMP_LE: id = INTRINS_SSE_COMILE_SD; break; case CMP_NE: id = INTRINS_SSE_COMINEQ_SD; break; default: g_assert_not_reached (); break; } values [ins->dreg] = call_intrins (ctx, id, args, ""); break; } case OP_SSE2_UCOMISD: { LLVMValueRef args [] = { lhs, rhs }; IntrinsicId id = (IntrinsicId)0; switch (ins->inst_c0) { case CMP_EQ: id = INTRINS_SSE_UCOMIEQ_SD; break; case CMP_GT: id = INTRINS_SSE_UCOMIGT_SD; break; case CMP_GE: id = INTRINS_SSE_UCOMIGE_SD; break; case CMP_LT: id = INTRINS_SSE_UCOMILT_SD; break; case CMP_LE: id = INTRINS_SSE_UCOMILE_SD; break; case CMP_NE: id = INTRINS_SSE_UCOMINEQ_SD; break; default: g_assert_not_reached (); break; } values [ins->dreg] = call_intrins (ctx, id, args, ""); break; } case OP_SSE_CVTSI2SS: case OP_SSE_CVTSI2SS64: case OP_SSE2_CVTSI2SD: case OP_SSE2_CVTSI2SD64: { LLVMTypeRef ty = LLVMFloatType (); switch (ins->opcode) { case OP_SSE2_CVTSI2SD: case OP_SSE2_CVTSI2SD64: ty = LLVMDoubleType (); break; } LLVMValueRef fp = LLVMBuildSIToFP (builder, rhs, ty, ""); values [ins->dreg] = LLVMBuildInsertElement (builder, lhs, fp, const_int32 (0), dname); break; } case OP_SSE2_PMULUDQ: { LLVMValueRef i32_max = LLVMConstInt (LLVMInt64Type (), UINT32_MAX, FALSE); LLVMValueRef maskvals [] = { i32_max, i32_max }; LLVMValueRef mask = LLVMConstVector (maskvals, 2); LLVMValueRef l = LLVMBuildAnd (builder, convert (ctx, lhs, sse_i8_t), mask, ""); LLVMValueRef r = LLVMBuildAnd (builder, convert (ctx, rhs, sse_i8_t), mask, ""); values [ins->dreg] = LLVMBuildNUWMul (builder, l, r, dname); break; } case OP_SSE_SQRTSS: case OP_SSE2_SQRTSD: { LLVMValueRef upper = values [ins->sreg1]; LLVMValueRef lower = values [ins->sreg2]; LLVMValueRef scalar = LLVMBuildExtractElement (builder, lower, const_int32 (0), ""); LLVMValueRef result = call_intrins (ctx, simd_ins_to_intrins (ins->opcode), &scalar, dname); values [ins->dreg] = LLVMBuildInsertElement (builder, upper, result, const_int32 (0), ""); break; } case OP_SSE_RCPSS: case OP_SSE_RSQRTSS: { IntrinsicId id = (IntrinsicId)0; switch (ins->opcode) { case OP_SSE_RCPSS: id = INTRINS_SSE_RCP_SS; break; case OP_SSE_RSQRTSS: id = INTRINS_SSE_RSQRT_SS; break; default: g_assert_not_reached (); break; }; LLVMValueRef result = call_intrins (ctx, id, &rhs, dname); const int mask[] = { 0, 5, 6, 7 }; LLVMValueRef shufmask = create_const_vector_i32 (mask, 4); values [ins->dreg] = LLVMBuildShuffleVector (builder, result, lhs, shufmask, ""); break; } case OP_XOP: { IntrinsicId id = (IntrinsicId)ins->inst_c0; call_intrins (ctx, id, NULL, ""); break; } case OP_XOP_X_I: case OP_XOP_X_X: case OP_XOP_I4_X: case OP_XOP_I8_X: case OP_XOP_X_X_X: case OP_XOP_X_X_I4: case OP_XOP_X_X_I8: { IntrinsicId id = (IntrinsicId)ins->inst_c0; LLVMValueRef args [] = { lhs, rhs }; values [ins->dreg] = call_intrins (ctx, id, args, ""); break; } case OP_XOP_I4_X_X: { gboolean to_i8_t = FALSE; gboolean ret_bool = FALSE; IntrinsicId id = (IntrinsicId)ins->inst_c0; switch (ins->inst_c0) { case INTRINS_SSE_TESTC: to_i8_t = TRUE; ret_bool = TRUE; break; case INTRINS_SSE_TESTZ: to_i8_t = TRUE; ret_bool = TRUE; break; case INTRINS_SSE_TESTNZ: to_i8_t = TRUE; ret_bool = TRUE; break; default: g_assert_not_reached (); break; } LLVMValueRef args [] = { lhs, rhs }; if (to_i8_t) { args [0] = convert (ctx, args [0], sse_i8_t); args [1] = convert (ctx, args [1], sse_i8_t); } LLVMValueRef call = call_intrins (ctx, id, args, ""); if (ret_bool) { // if return type is bool (it's still i32) we need to normalize it to 1/0 LLVMValueRef cmp_zero = 
LLVMBuildICmp (builder, LLVMIntNE, call, LLVMConstInt (LLVMInt32Type (), 0, FALSE), ""); values [ins->dreg] = LLVMBuildZExt (builder, cmp_zero, LLVMInt8Type (), ""); } else { values [ins->dreg] = call; } break; } case OP_SSE2_MASKMOVDQU: { LLVMTypeRef i8ptr = LLVMPointerType (LLVMInt8Type (), 0); LLVMValueRef dstaddr = convert (ctx, values [ins->sreg3], i8ptr); LLVMValueRef src = convert (ctx, lhs, sse_i1_t); LLVMValueRef mask = convert (ctx, rhs, sse_i1_t); LLVMValueRef args[] = { src, mask, dstaddr }; call_intrins (ctx, INTRINS_SSE_MASKMOVDQU, args, ""); break; } case OP_PADDB_SAT: case OP_PADDW_SAT: case OP_PSUBB_SAT: case OP_PSUBW_SAT: case OP_PADDB_SAT_UN: case OP_PADDW_SAT_UN: case OP_PSUBB_SAT_UN: case OP_PSUBW_SAT_UN: case OP_SSE2_ADDS: case OP_SSE2_SUBS: { IntrinsicId id = (IntrinsicId)0; int type = 0; gboolean is_add = TRUE; switch (ins->opcode) { case OP_PADDB_SAT: type = MONO_TYPE_I1; break; case OP_PADDW_SAT: type = MONO_TYPE_I2; break; case OP_PSUBB_SAT: type = MONO_TYPE_I1; is_add = FALSE; break; case OP_PSUBW_SAT: type = MONO_TYPE_I2; is_add = FALSE; break; case OP_PADDB_SAT_UN: type = MONO_TYPE_U1; break; case OP_PADDW_SAT_UN: type = MONO_TYPE_U2; break; case OP_PSUBB_SAT_UN: type = MONO_TYPE_U1; is_add = FALSE; break; case OP_PSUBW_SAT_UN: type = MONO_TYPE_U2; is_add = FALSE; break; case OP_SSE2_ADDS: type = ins->inst_c1; break; case OP_SSE2_SUBS: type = ins->inst_c1; is_add = FALSE; break; default: g_assert_not_reached (); } if (is_add) { switch (type) { case MONO_TYPE_I1: id = INTRINS_SSE_SADD_SATI8; break; case MONO_TYPE_U1: id = INTRINS_SSE_UADD_SATI8; break; case MONO_TYPE_I2: id = INTRINS_SSE_SADD_SATI16; break; case MONO_TYPE_U2: id = INTRINS_SSE_UADD_SATI16; break; default: g_assert_not_reached (); break; } } else { switch (type) { case MONO_TYPE_I1: id = INTRINS_SSE_SSUB_SATI8; break; case MONO_TYPE_U1: id = INTRINS_SSE_USUB_SATI8; break; case MONO_TYPE_I2: id = INTRINS_SSE_SSUB_SATI16; break; case MONO_TYPE_U2: id = INTRINS_SSE_USUB_SATI16; break; default: g_assert_not_reached (); break; } } LLVMTypeRef vecty = type_to_sse_type (type); LLVMValueRef args [] = { convert (ctx, lhs, vecty), convert (ctx, rhs, vecty) }; LLVMValueRef result = call_intrins (ctx, id, args, dname); values [ins->dreg] = convert (ctx, result, vecty); break; } case OP_SSE2_PACKUS: { LLVMValueRef args [2]; args [0] = convert (ctx, lhs, sse_i2_t); args [1] = convert (ctx, rhs, sse_i2_t); values [ins->dreg] = convert (ctx, call_intrins (ctx, INTRINS_SSE_PACKUSWB, args, dname), type_to_sse_type (ins->inst_c1)); break; } case OP_SSE2_SRLI: { LLVMValueRef args [] = { lhs, rhs }; values [ins->dreg] = convert (ctx, call_intrins (ctx, INTRINS_SSE_PSRLI_W, args, dname), type_to_sse_type (ins->inst_c1)); break; } case OP_SSE2_PSLLDQ: case OP_SSE2_PSRLDQ: { LLVMBasicBlockRef bbs [16 + 1]; LLVMValueRef switch_ins; LLVMValueRef value = lhs; LLVMValueRef index = rhs; LLVMValueRef phi_values [16 + 1]; LLVMTypeRef t = sse_i1_t; int nelems = 16; int i; gboolean shift_right = (ins->opcode == OP_SSE2_PSRLDQ); value = convert (ctx, value, t); /* No corresponding LLVM intrinsics. FIXME: Optimize the constant-count case. */ for (i = 0; i < nelems; ++i) bbs [i] = gen_bb (ctx, "PSLLDQ_CASE_BB"); bbs [nelems] = gen_bb (ctx, "PSLLDQ_DEF_BB"); cbb = gen_bb (ctx, "PSLLDQ_COND_BB"); switch_ins = LLVMBuildSwitch (builder, index, bbs [nelems], 0); for (i = 0; i < nelems; ++i) { LLVMAddCase (switch_ins, LLVMConstInt (LLVMInt32Type (), i, FALSE), bbs [i]); LLVMPositionBuilderAtEnd (builder, bbs [i]); int mask_values [16]; /* Implement the shift using a shuffle */
if (shift_right) { for (int j = 0; j < nelems - i; ++j) mask_values [j] = i + j; for (int j = nelems - i; j < nelems; ++j) mask_values [j] = nelems; } else { for (int j = 0; j < i; ++j) mask_values [j] = nelems; for (int j = 0; j < nelems - i; ++j) mask_values [j + i] = j; } phi_values [i] = LLVMBuildShuffleVector (builder, value, LLVMGetUndef (t), create_const_vector_i32 (mask_values, nelems), ""); LLVMBuildBr (builder, cbb); } /* Default case */ LLVMPositionBuilderAtEnd (builder, bbs [nelems]); phi_values [nelems] = LLVMConstNull (t); LLVMBuildBr (builder, cbb); LLVMPositionBuilderAtEnd (builder, cbb); values [ins->dreg] = LLVMBuildPhi (builder, LLVMTypeOf (phi_values [0]), ""); LLVMAddIncoming (values [ins->dreg], phi_values, bbs, nelems + 1); values [ins->dreg] = convert (ctx, values [ins->dreg], type_to_sse_type (ins->inst_c1)); ctx->bblocks [bb->block_num].end_bblock = cbb; break; } case OP_SSE2_PSRAW_IMM: case OP_SSE2_PSRAD_IMM: case OP_SSE2_PSRLW_IMM: case OP_SSE2_PSRLD_IMM: case OP_SSE2_PSRLQ_IMM: { LLVMValueRef value = lhs; LLVMValueRef index = rhs; IntrinsicId id; /* FIXME: Optimize the constant-index case. Use the non-immediate version. */ switch (ins->opcode) { case OP_SSE2_PSRAW_IMM: id = INTRINS_SSE_PSRA_W; break; case OP_SSE2_PSRAD_IMM: id = INTRINS_SSE_PSRA_D; break; case OP_SSE2_PSRLW_IMM: id = INTRINS_SSE_PSRL_W; break; case OP_SSE2_PSRLD_IMM: id = INTRINS_SSE_PSRL_D; break; case OP_SSE2_PSRLQ_IMM: id = INTRINS_SSE_PSRL_Q; break; default: g_assert_not_reached (); break; } LLVMTypeRef t = LLVMTypeOf (value); LLVMValueRef index_vect = LLVMBuildInsertElement (builder, LLVMConstNull (t), convert (ctx, index, LLVMGetElementType (t)), const_int32 (0), ""); LLVMValueRef args [] = { value, index_vect }; values [ins->dreg] = call_intrins (ctx, id, args, ""); break; } case OP_SSE_SHUFPS: case OP_SSE2_SHUFPD: case OP_SSE2_PSHUFD: case OP_SSE2_PSHUFHW: case OP_SSE2_PSHUFLW: { LLVMTypeRef ret_t = LLVMTypeOf (lhs); LLVMValueRef l = lhs; LLVMValueRef r = rhs; LLVMValueRef ctl = arg3; const char *oname = ""; int ncases = 0; switch (ins->opcode) { case OP_SSE_SHUFPS: ncases = 256; break; case OP_SSE2_SHUFPD: ncases = 4; break; case OP_SSE2_PSHUFD: case OP_SSE2_PSHUFHW: case OP_SSE2_PSHUFLW: ncases = 256; r = lhs; ctl = rhs; break; } switch (ins->opcode) { case OP_SSE_SHUFPS: oname = "sse_shufps"; break; case OP_SSE2_SHUFPD: oname = "sse2_shufpd"; break; case OP_SSE2_PSHUFD: oname = "sse2_pshufd"; break; case OP_SSE2_PSHUFHW: oname = "sse2_pshufhw"; break; case OP_SSE2_PSHUFLW: oname = "sse2_pshuflw"; break; } ctl = LLVMBuildAnd (builder, ctl, const_int32 (ncases - 1), ""); ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, ncases, ctl, ret_t, oname); int mask_values [8]; int mask_len = 0; int i = 0; while (immediate_unroll_next (&ictx, &i)) { switch (ins->opcode) { case OP_SSE_SHUFPS: mask_len = 4; mask_values [0] = ((i >> 0) & 0x3) + 0; /* take two elements from lhs */ mask_values [1] = ((i >> 2) & 0x3) + 0; mask_values [2] = ((i >> 4) & 0x3) + 4; /* and two from rhs */ mask_values [3] = ((i >> 6) & 0x3) + 4; break; case OP_SSE2_SHUFPD: mask_len = 2; mask_values [0] = ((i >> 0) & 0x1) + 0; mask_values [1] = ((i >> 1) & 0x1) + 2; break; case OP_SSE2_PSHUFD: /* * Each 2 bits in mask selects 1 dword from the source and copies it to the * destination. 
*/ mask_len = 4; for (int j = 0; j < 4; ++j) { int windex = (i >> (j * 2)) & 0x3; mask_values [j] = windex; } break; case OP_SSE2_PSHUFHW: /* * Each 2 bits in mask selects 1 word from the high quadword of the source and copies it to the * high quadword of the destination. */ mask_len = 8; /* The low quadword stays the same */ for (int j = 0; j < 4; ++j) mask_values [j] = j; for (int j = 0; j < 4; ++j) { int windex = (i >> (j * 2)) & 0x3; mask_values [j + 4] = 4 + windex; } break; case OP_SSE2_PSHUFLW: mask_len = 8; /* The high quadword stays the same */ for (int j = 0; j < 4; ++j) mask_values [j + 4] = j + 4; for (int j = 0; j < 4; ++j) { int windex = (i >> (j * 2)) & 0x3; mask_values [j] = windex; } break; } LLVMValueRef mask = create_const_vector_i32 (mask_values, mask_len); LLVMValueRef result = LLVMBuildShuffleVector (builder, l, r, mask, oname); immediate_unroll_commit (&ictx, i, result); } immediate_unroll_default (&ictx); immediate_unroll_commit_default (&ictx, LLVMGetUndef (ret_t)); values [ins->dreg] = immediate_unroll_end (&ictx, &cbb); break; } case OP_SSE3_MOVDDUP: { int mask [] = { 0, 0 }; values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs, LLVMGetUndef (LLVMTypeOf (lhs)), create_const_vector_i32 (mask, 2), ""); break; } case OP_SSE3_MOVDDUP_MEM: { LLVMValueRef undef = LLVMGetUndef (v128_r8_t); LLVMValueRef addr = convert (ctx, lhs, LLVMPointerType (r8_t, 0)); LLVMValueRef elem = mono_llvm_build_aligned_load (builder, addr, "sse3_movddup_mem", FALSE, 1); LLVMValueRef val = LLVMBuildInsertElement (builder, undef, elem, const_int32 (0), "sse3_movddup_mem"); values [ins->dreg] = LLVMBuildShuffleVector (builder, val, undef, LLVMConstNull (LLVMVectorType (i4_t, 2)), "sse3_movddup_mem"); break; } case OP_SSE3_MOVSHDUP: { int mask [] = { 1, 1, 3, 3 }; values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs, LLVMConstNull (LLVMTypeOf (lhs)), create_const_vector_i32 (mask, 4), ""); break; } case OP_SSE3_MOVSLDUP: { int mask [] = { 0, 0, 2, 2 }; values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs, LLVMConstNull (LLVMTypeOf (lhs)), create_const_vector_i32 (mask, 4), ""); break; } case OP_SSSE3_SHUFFLE: { LLVMValueRef args [] = { lhs, rhs }; values [ins->dreg] = call_intrins (ctx, INTRINS_SSE_PSHUFB, args, dname); break; } case OP_SSSE3_ABS: { // %sub = sub <16 x i8> zeroinitializer, %arg // %cmp = icmp sgt <16 x i8> %arg, zeroinitializer // %abs = select <16 x i1> %cmp, <16 x i8> %arg, <16 x i8> %sub LLVMTypeRef typ = type_to_sse_type (ins->inst_c1); LLVMValueRef sub = LLVMBuildSub(builder, LLVMConstNull(typ), lhs, ""); LLVMValueRef cmp = LLVMBuildICmp(builder, LLVMIntSGT, lhs, LLVMConstNull(typ), ""); LLVMValueRef abs = LLVMBuildSelect (builder, cmp, lhs, sub, ""); values [ins->dreg] = convert (ctx, abs, typ); break; } case OP_SSSE3_ALIGNR: { LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass); LLVMValueRef zero = LLVMConstNull (v128_i1_t); LLVMValueRef hivec = convert (ctx, lhs, v128_i1_t); LLVMValueRef lovec = convert (ctx, rhs, v128_i1_t); LLVMValueRef rshift_amount = convert (ctx, arg3, i1_t); ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, 32, rshift_amount, v128_i1_t, "ssse3_alignr"); LLVMValueRef mask_values [16]; // 128-bit vector, 8-bit elements, 16 total elements int i = 0; while (immediate_unroll_next (&ictx, &i)) { LLVMValueRef hi = NULL; LLVMValueRef lo = NULL; if (i <= 16) { for (int j = 0; j < 16; j++) mask_values [j] = const_int32 (i + j); lo = lovec; hi = hivec; } else { for (int j = 0; j < 16; j++) mask_values [j] = const_int32 (i + 
j - 16); lo = hivec; hi = zero; } LLVMValueRef shuffled = LLVMBuildShuffleVector (builder, lo, hi, LLVMConstVector (mask_values, 16), "ssse3_alignr"); immediate_unroll_commit (&ictx, i, shuffled); } immediate_unroll_default (&ictx); immediate_unroll_commit_default (&ictx, zero); LLVMValueRef result = immediate_unroll_end (&ictx, &cbb); values [ins->dreg] = convert (ctx, result, ret_t); break; } case OP_SSE41_ROUNDP: { LLVMValueRef args [] = { lhs, LLVMConstInt (LLVMInt32Type (), ins->inst_c0, FALSE) }; values [ins->dreg] = call_intrins (ctx, ins->inst_c1 == MONO_TYPE_R4 ? INTRINS_SSE_ROUNDPS : INTRINS_SSE_ROUNDPD, args, dname); break; } case OP_SSE41_ROUNDS: { LLVMValueRef args [3]; args [0] = lhs; args [1] = rhs; args [2] = LLVMConstInt (LLVMInt32Type (), ins->inst_c0, FALSE); values [ins->dreg] = call_intrins (ctx, ins->inst_c1 == MONO_TYPE_R4 ? INTRINS_SSE_ROUNDSS : INTRINS_SSE_ROUNDSD, args, dname); break; } case OP_SSE41_DPPS: case OP_SSE41_DPPD: { /* Bits 0, 1, 4, 5 are meaningful for the control mask * in dppd; all bits are meaningful for dpps. */ LLVMTypeRef ret_t = NULL; LLVMValueRef mask = NULL; int mask_bits = 0; int high_shift = 0; int low_mask = 0; IntrinsicId iid = (IntrinsicId) 0; const char *oname = ""; switch (ins->opcode) { case OP_SSE41_DPPS: ret_t = v128_r4_t; mask = const_int8 (0xff); // 0b11111111 mask_bits = 8; high_shift = 4; low_mask = 0xf; iid = INTRINS_SSE_DPPS; oname = "sse41_dpps"; break; case OP_SSE41_DPPD: ret_t = v128_r8_t; mask = const_int8 (0x33); // 0b00110011 mask_bits = 4; high_shift = 2; low_mask = 0x3; iid = INTRINS_SSE_DPPD; oname = "sse41_dppd"; break; } LLVMValueRef args [] = { lhs, rhs, NULL }; LLVMValueRef index = LLVMBuildAnd (builder, convert (ctx, arg3, i1_t), mask, oname); ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, 1 << mask_bits, index, ret_t, oname); int i = 0; while (immediate_unroll_next (&ictx, &i)) { int imm = ((i >> high_shift) << 4) | (i & low_mask); args [2] = const_int8 (imm); LLVMValueRef result = call_intrins (ctx, iid, args, dname); immediate_unroll_commit (&ictx, imm, result); } immediate_unroll_default (&ictx); immediate_unroll_commit_default (&ictx, LLVMGetUndef (ret_t)); values [ins->dreg] = immediate_unroll_end (&ictx, &cbb); break; } case OP_SSE41_MPSADBW: { LLVMValueRef args [] = { convert (ctx, lhs, sse_i1_t), convert (ctx, rhs, sse_i1_t), NULL, }; LLVMValueRef ctl = convert (ctx, arg3, i1_t); // Only 3 bits (bits 0-2) are used by mpsadbw and llvm.x86.sse41.mpsadbw int used_bits = 0x7; ctl = LLVMBuildAnd (builder, ctl, const_int8 (used_bits), "sse41_mpsadbw"); ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, used_bits + 1, ctl, v128_i2_t, "sse41_mpsadbw"); int i = 0; while (immediate_unroll_next (&ictx, &i)) { args [2] = const_int8 (i); LLVMValueRef result = call_intrins (ctx, INTRINS_SSE_MPSADBW, args, "sse41_mpsadbw"); immediate_unroll_commit (&ictx, i, result); } immediate_unroll_unreachable_default (&ictx); values [ins->dreg] = immediate_unroll_end (&ictx, &cbb); break; } case OP_SSE41_INSERTPS: { LLVMValueRef ctl = convert (ctx, arg3, i1_t); LLVMValueRef args [] = { lhs, rhs, NULL }; ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, 256, ctl, v128_r4_t, "sse41_insertps"); int i = 0; while (immediate_unroll_next (&ictx, &i)) { args [2] = const_int8 (i); LLVMValueRef result = call_intrins (ctx, INTRINS_SSE_INSERTPS, args, dname); immediate_unroll_commit (&ictx, i, result); } immediate_unroll_unreachable_default (&ictx); values [ins->dreg] = immediate_unroll_end (&ictx, &cbb); break; } 
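/*
 * Note on the SSE4.1 BLEND lowering below: it reuses the immediate-unroll pattern seen
 * above, giving every possible value of the (masked) control byte its own branch with a
 * constant shufflevector mask. Bit k of the control selects lane k from rhs instead of
 * lhs. An illustrative sketch of the mask derivation (not additional code in this file):
 *
 *   for (int lane = 0; lane < nelem; ++lane)
 *       mask [lane] = lane + (((ctl >> lane) & 1) ? nelem : 0);
 */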
case OP_SSE41_BLEND: { LLVMTypeRef ret_t = LLVMTypeOf (lhs); int nelem = LLVMGetVectorSize (ret_t); g_assert (nelem >= 2 && nelem <= 8); // I2, U2, R4, R8 int unique_ctl_patterns = 1 << nelem; int ctlmask = unique_ctl_patterns - 1; LLVMValueRef ctl = convert (ctx, arg3, i1_t); ctl = LLVMBuildAnd (builder, ctl, const_int8 (ctlmask), "sse41_blend"); ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, unique_ctl_patterns, ctl, ret_t, "sse41_blend"); int i = 0; int mask_values [MAX_VECTOR_ELEMS] = { 0 }; while (immediate_unroll_next (&ictx, &i)) { for (int lane = 0; lane < nelem; ++lane) { // n-bit in inst_c0 (control byte) is set to 1 gboolean bit_set = (i & (1 << lane)) >> lane; mask_values [lane] = lane + (bit_set ? nelem : 0); } LLVMValueRef mask = create_const_vector_i32 (mask_values, nelem); LLVMValueRef result = LLVMBuildShuffleVector (builder, lhs, rhs, mask, "sse41_blend"); immediate_unroll_commit (&ictx, i, result); } immediate_unroll_default (&ictx); immediate_unroll_commit_default (&ictx, LLVMGetUndef (ret_t)); values [ins->dreg] = immediate_unroll_end (&ictx, &cbb); break; } case OP_SSE41_BLENDV: { LLVMValueRef args [] = { lhs, rhs, values [ins->sreg3] }; if (ins->inst_c1 == MONO_TYPE_R4) { values [ins->dreg] = call_intrins (ctx, INTRINS_SSE_BLENDVPS, args, dname); } else if (ins->inst_c1 == MONO_TYPE_R8) { values [ins->dreg] = call_intrins (ctx, INTRINS_SSE_BLENDVPD, args, dname); } else { // for other non-fp type just convert to <16 x i8> and pass to @llvm.x86.sse41.pblendvb args [0] = LLVMBuildBitCast (ctx->builder, args [0], sse_i1_t, ""); args [1] = LLVMBuildBitCast (ctx->builder, args [1], sse_i1_t, ""); args [2] = LLVMBuildBitCast (ctx->builder, args [2], sse_i1_t, ""); values [ins->dreg] = call_intrins (ctx, INTRINS_SSE_PBLENDVB, args, dname); } break; } case OP_SSE_CVTII: { gboolean is_signed = (ins->inst_c1 == MONO_TYPE_I1) || (ins->inst_c1 == MONO_TYPE_I2) || (ins->inst_c1 == MONO_TYPE_I4); LLVMTypeRef vec_type; if ((ins->inst_c1 == MONO_TYPE_I1) || (ins->inst_c1 == MONO_TYPE_U1)) vec_type = sse_i1_t; else if ((ins->inst_c1 == MONO_TYPE_I2) || (ins->inst_c1 == MONO_TYPE_U2)) vec_type = sse_i2_t; else vec_type = sse_i4_t; LLVMValueRef value; if (LLVMGetTypeKind (LLVMTypeOf (lhs)) != LLVMVectorTypeKind) { LLVMValueRef bitcasted = LLVMBuildBitCast (ctx->builder, lhs, LLVMPointerType (vec_type, 0), ""); value = mono_llvm_build_aligned_load (builder, bitcasted, "", FALSE, 1); } else { value = LLVMBuildBitCast (ctx->builder, lhs, vec_type, ""); } LLVMValueRef mask_vec; LLVMTypeRef dst_type; if (ins->inst_c0 == MONO_TYPE_I2) { mask_vec = create_const_vector_i32 (mask_0_incr_1, 8); dst_type = sse_i2_t; } else if (ins->inst_c0 == MONO_TYPE_I4) { mask_vec = create_const_vector_i32 (mask_0_incr_1, 4); dst_type = sse_i4_t; } else { g_assert (ins->inst_c0 == MONO_TYPE_I8); mask_vec = create_const_vector_i32 (mask_0_incr_1, 2); dst_type = sse_i8_t; } LLVMValueRef shuffled = LLVMBuildShuffleVector (builder, value, LLVMGetUndef (vec_type), mask_vec, ""); if (is_signed) values [ins->dreg] = LLVMBuildSExt (ctx->builder, shuffled, dst_type, ""); else values [ins->dreg] = LLVMBuildZExt (ctx->builder, shuffled, dst_type, ""); break; } case OP_SSE41_LOADANT: { LLVMValueRef dst_ptr = convert (ctx, lhs, LLVMPointerType (primitive_type_to_llvm_type (inst_c1_type (ins)), 0)); LLVMValueRef dst_vec = LLVMBuildBitCast (builder, dst_ptr, LLVMPointerType (type_to_sse_type (ins->inst_c1), 0), ""); LLVMValueRef load = mono_llvm_build_aligned_load (builder, dst_vec, "", FALSE, 16); 
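/* The flag set below presumably attaches LLVM's !nontemporal metadata to the load,
 * letting the backend pick a streaming load (e.g. MOVNTDQA, which requires the 16-byte
 * alignment requested above) that bypasses the cache hierarchy. */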
set_nontemporal_flag (load); values [ins->dreg] = load; break; } case OP_SSE41_MUL: { const int shift_vals [] = { 32, 32 }; const LLVMValueRef args [] = { convert (ctx, lhs, sse_i8_t), convert (ctx, rhs, sse_i8_t), }; LLVMValueRef mul_args [2] = { 0 }; LLVMValueRef shift_vec = create_const_vector (LLVMInt64Type (), shift_vals, 2); for (int i = 0; i < 2; ++i) { LLVMValueRef padded = LLVMBuildShl (builder, args [i], shift_vec, ""); mul_args[i] = mono_llvm_build_exact_ashr (builder, padded, shift_vec); } values [ins->dreg] = LLVMBuildNSWMul (builder, mul_args [0], mul_args [1], dname); break; } case OP_SSE41_MULLO: { values [ins->dreg] = LLVMBuildMul (ctx->builder, lhs, rhs, ""); break; } case OP_SSE42_CRC32: case OP_SSE42_CRC64: { LLVMValueRef args [2]; args [0] = lhs; args [1] = convert (ctx, rhs, primitive_type_to_llvm_type (ins->inst_c0)); IntrinsicId id; switch (ins->inst_c0) { case MONO_TYPE_U1: id = INTRINS_SSE_CRC32_32_8; break; case MONO_TYPE_U2: id = INTRINS_SSE_CRC32_32_16; break; case MONO_TYPE_U4: id = INTRINS_SSE_CRC32_32_32; break; case MONO_TYPE_U8: id = INTRINS_SSE_CRC32_64_64; break; default: g_assert_not_reached (); break; } values [ins->dreg] = call_intrins (ctx, id, args, ""); break; } case OP_PCLMULQDQ: { LLVMValueRef args [] = { lhs, rhs, NULL }; LLVMValueRef ctl = convert (ctx, arg3, i1_t); // Only bits 0 and 4 of the immediate operand are used by PCLMULQDQ. ctl = LLVMBuildAnd (builder, ctl, const_int8 (0x11), "pclmulqdq"); ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, 1 << 2, ctl, v128_i8_t, "pclmulqdq"); int i = 0; while (immediate_unroll_next (&ictx, &i)) { int imm = ((i & 0x2) << 3) | (i & 0x1); args [2] = const_int8 (imm); LLVMValueRef result = call_intrins (ctx, INTRINS_PCLMULQDQ, args, "pclmulqdq"); immediate_unroll_commit (&ictx, imm, result); } immediate_unroll_unreachable_default (&ictx); values [ins->dreg] = immediate_unroll_end (&ictx, &cbb); break; } case OP_AES_KEYGENASSIST: { LLVMValueRef roundconstant = convert (ctx, rhs, i1_t); LLVMValueRef args [] = { convert (ctx, lhs, v128_i8_t), NULL }; ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, 256, roundconstant, v128_i8_t, "aes_keygenassist"); int i = 0; while (immediate_unroll_next (&ictx, &i)) { args [1] = const_int8 (i); LLVMValueRef result = call_intrins (ctx, INTRINS_AESNI_AESKEYGENASSIST, args, "aes_keygenassist"); immediate_unroll_commit (&ictx, i, result); } immediate_unroll_unreachable_default (&ictx); LLVMValueRef result = immediate_unroll_end (&ictx, &cbb); values [ins->dreg] = convert (ctx, result, v128_i1_t); break; } #endif case OP_XCOMPARE_FP: { LLVMRealPredicate pred = fpcond_to_llvm_cond [ins->inst_c0]; LLVMValueRef cmp = LLVMBuildFCmp (builder, pred, lhs, rhs, ""); int nelems = LLVMGetVectorSize (LLVMTypeOf (cmp)); g_assert (LLVMTypeOf (lhs) == LLVMTypeOf (rhs)); if (ins->inst_c1 == MONO_TYPE_R8) values [ins->dreg] = LLVMBuildBitCast (builder, LLVMBuildSExt (builder, cmp, LLVMVectorType (LLVMInt64Type (), nelems), ""), LLVMTypeOf (lhs), ""); else values [ins->dreg] = LLVMBuildBitCast (builder, LLVMBuildSExt (builder, cmp, LLVMVectorType (LLVMInt32Type (), nelems), ""), LLVMTypeOf (lhs), ""); break; } case OP_XCOMPARE: { LLVMIntPredicate pred = cond_to_llvm_cond [ins->inst_c0]; LLVMValueRef cmp = LLVMBuildICmp (builder, pred, lhs, rhs, ""); g_assert (LLVMTypeOf (lhs) == LLVMTypeOf (rhs)); values [ins->dreg] = LLVMBuildSExt (builder, cmp, LLVMTypeOf (lhs), ""); break; } case OP_POPCNT32: values [ins->dreg] = call_intrins (ctx, INTRINS_CTPOP_I32, &lhs, ""); break; case 
OP_POPCNT64: values [ins->dreg] = call_intrins (ctx, INTRINS_CTPOP_I64, &lhs, ""); break; case OP_CTTZ32: case OP_CTTZ64: { LLVMValueRef args [2]; args [0] = lhs; args [1] = LLVMConstInt (LLVMInt1Type (), 0, FALSE); values [ins->dreg] = call_intrins (ctx, ins->opcode == OP_CTTZ32 ? INTRINS_CTTZ_I32 : INTRINS_CTTZ_I64, args, ""); break; } case OP_BMI1_BEXTR32: case OP_BMI1_BEXTR64: { LLVMValueRef args [2]; args [0] = lhs; args [1] = convert (ctx, rhs, ins->opcode == OP_BMI1_BEXTR32 ? i4_t : i8_t); // cast ushort to u32/u64 values [ins->dreg] = call_intrins (ctx, ins->opcode == OP_BMI1_BEXTR32 ? INTRINS_BEXTR_I32 : INTRINS_BEXTR_I64, args, ""); break; } case OP_BZHI32: case OP_BZHI64: { LLVMValueRef args [2]; args [0] = lhs; args [1] = rhs; values [ins->dreg] = call_intrins (ctx, ins->opcode == OP_BZHI32 ? INTRINS_BZHI_I32 : INTRINS_BZHI_I64, args, ""); break; } case OP_MULX_H32: case OP_MULX_H64: case OP_MULX_HL32: case OP_MULX_HL64: { gboolean is_64 = ins->opcode == OP_MULX_H64 || ins->opcode == OP_MULX_HL64; gboolean only_high = ins->opcode == OP_MULX_H32 || ins->opcode == OP_MULX_H64; LLVMValueRef lx = LLVMBuildZExt (ctx->builder, lhs, LLVMInt128Type (), ""); LLVMValueRef rx = LLVMBuildZExt (ctx->builder, rhs, LLVMInt128Type (), ""); LLVMValueRef mulx = LLVMBuildMul (ctx->builder, lx, rx, ""); if (!only_high) { LLVMValueRef addr = convert (ctx, arg3, LLVMPointerType (is_64 ? i8_t : i4_t, 0)); LLVMValueRef lowx = LLVMBuildTrunc (ctx->builder, mulx, is_64 ? LLVMInt64Type () : LLVMInt32Type (), ""); LLVMBuildStore (ctx->builder, lowx, addr); } LLVMValueRef shift = LLVMConstInt (LLVMInt128Type (), is_64 ? 64 : 32, FALSE); LLVMValueRef highx = LLVMBuildLShr (ctx->builder, mulx, shift, ""); values [ins->dreg] = LLVMBuildTrunc (ctx->builder, highx, is_64 ? LLVMInt64Type () : LLVMInt32Type (), ""); break; } case OP_PEXT32: case OP_PEXT64: { LLVMValueRef args [2]; args [0] = lhs; args [1] = rhs; values [ins->dreg] = call_intrins (ctx, ins->opcode == OP_PEXT32 ? INTRINS_PEXT_I32 : INTRINS_PEXT_I64, args, ""); break; } case OP_PDEP32: case OP_PDEP64: { LLVMValueRef args [2]; args [0] = lhs; args [1] = rhs; values [ins->dreg] = call_intrins (ctx, ins->opcode == OP_PDEP32 ? INTRINS_PDEP_I32 : INTRINS_PDEP_I64, args, ""); break; } #endif /* defined(TARGET_X86) || defined(TARGET_AMD64) */ // Shared between ARM64 and X86 #if defined(TARGET_ARM64) || defined(TARGET_X86) || defined(TARGET_AMD64) case OP_LZCNT32: case OP_LZCNT64: { IntrinsicId iid = ins->opcode == OP_LZCNT32 ? INTRINS_CTLZ_I32 : INTRINS_CTLZ_I64; LLVMValueRef args [] = { lhs, const_int1 (FALSE) }; values [ins->dreg] = call_intrins (ctx, iid, args, ""); break; } #endif #if defined(TARGET_ARM64) || defined(TARGET_X86) || defined(TARGET_AMD64) || defined(TARGET_WASM) case OP_XEQUAL: { LLVMTypeRef t; LLVMValueRef cmp, mask [MAX_VECTOR_ELEMS], shuffle; int nelems; #if defined(TARGET_WASM) /* The wasm code generator doesn't understand the shuffle/and code sequence below */ LLVMValueRef val; if (LLVMIsNull (lhs) || LLVMIsNull (rhs)) { val = LLVMIsNull (lhs) ? 
rhs : lhs; nelems = LLVMGetVectorSize (LLVMTypeOf (lhs)); IntrinsicId intrins = (IntrinsicId)0; switch (nelems) { case 16: intrins = INTRINS_WASM_ANYTRUE_V16; break; case 8: intrins = INTRINS_WASM_ANYTRUE_V8; break; case 4: intrins = INTRINS_WASM_ANYTRUE_V4; break; case 2: intrins = INTRINS_WASM_ANYTRUE_V2; break; default: g_assert_not_reached (); } /* res = !wasm.anytrue (val) */ values [ins->dreg] = call_intrins (ctx, intrins, &val, ""); values [ins->dreg] = LLVMBuildZExt (builder, LLVMBuildICmp (builder, LLVMIntEQ, values [ins->dreg], LLVMConstInt (LLVMInt32Type (), 0, FALSE), ""), LLVMInt32Type (), dname); break; } #endif LLVMTypeRef srcelemt = LLVMGetElementType (LLVMTypeOf (lhs)); /* %c = icmp eq <16 x i8> %a0, %a1 */ if (srcelemt == LLVMDoubleType () || srcelemt == LLVMFloatType ()) cmp = LLVMBuildFCmp (builder, LLVMRealOEQ, lhs, rhs, ""); else cmp = LLVMBuildICmp (builder, LLVMIntEQ, lhs, rhs, ""); nelems = LLVMGetVectorSize (LLVMTypeOf (cmp)); LLVMTypeRef elemt; if (srcelemt == LLVMDoubleType ()) elemt = LLVMInt64Type (); else if (srcelemt == LLVMFloatType ()) elemt = LLVMInt32Type (); else elemt = srcelemt; t = LLVMVectorType (elemt, nelems); cmp = LLVMBuildSExt (builder, cmp, t, ""); /* cmp is a <nelems x elemt> vector, each element is either 0xff... or 0 */ int half = nelems / 2; while (half >= 1) { /* AND the top and bottom halves into the bottom half */ for (int i = 0; i < half; ++i) mask [i] = LLVMConstInt (LLVMInt32Type (), half + i, FALSE); for (int i = half; i < nelems; ++i) mask [i] = LLVMConstInt (LLVMInt32Type (), 0, FALSE); shuffle = LLVMBuildShuffleVector (builder, cmp, LLVMGetUndef (t), LLVMConstVector (mask, LLVMGetVectorSize (t)), ""); cmp = LLVMBuildAnd (builder, cmp, shuffle, ""); half = half / 2; } /* Extract [0] */ LLVMValueRef first_elem = LLVMBuildExtractElement (builder, cmp, LLVMConstInt (LLVMInt32Type (), 0, FALSE), ""); /* convert to 0/1 */ LLVMValueRef cmp_zero = LLVMBuildICmp (builder, LLVMIntNE, first_elem, LLVMConstInt (elemt, 0, FALSE), ""); values [ins->dreg] = LLVMBuildZExt (builder, cmp_zero, LLVMInt8Type (), ""); break; } #endif #if defined(TARGET_ARM64) case OP_XOP_I4_I4: case OP_XOP_I8_I8: { IntrinsicId id = (IntrinsicId)ins->inst_c0; values [ins->dreg] = call_intrins (ctx, id, &lhs, ""); break; } case OP_XOP_X_X_X: case OP_XOP_I4_I4_I4: case OP_XOP_I4_I4_I8: { IntrinsicId id = (IntrinsicId)ins->inst_c0; gboolean zext_last = FALSE, bitcast_result = FALSE, getElement = FALSE; int element_idx = -1; switch (id) { case INTRINS_AARCH64_PMULL64: getElement = TRUE; bitcast_result = TRUE; element_idx = ins->inst_c1; break; case INTRINS_AARCH64_CRC32B: case INTRINS_AARCH64_CRC32H: case INTRINS_AARCH64_CRC32W: case INTRINS_AARCH64_CRC32CB: case INTRINS_AARCH64_CRC32CH: case INTRINS_AARCH64_CRC32CW: zext_last = TRUE; break; default: break; } LLVMValueRef arg1 = rhs; if (zext_last) arg1 = LLVMBuildZExt (ctx->builder, arg1, LLVMInt32Type (), ""); LLVMValueRef args [] = { lhs, arg1 }; if (getElement) { args [0] = LLVMBuildExtractElement (ctx->builder, args [0], const_int32 (element_idx), ""); args [1] = LLVMBuildExtractElement (ctx->builder, args [1], const_int32 (element_idx), ""); } values [ins->dreg] = call_intrins (ctx, id, args, ""); if (bitcast_result) values [ins->dreg] = convert (ctx, values [ins->dreg], LLVMVectorType (LLVMInt64Type (), 2)); break; } case OP_XOP_X_X_X_X: { IntrinsicId id = (IntrinsicId)ins->inst_c0; gboolean getLowerElement = FALSE; int arg_idx = -1; switch (id) { case INTRINS_AARCH64_SHA1C: case INTRINS_AARCH64_SHA1M: case INTRINS_AARCH64_SHA1P: 
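/* The SHA1 C/M/P intrinsics take one operand as an i32 scalar (the working hash value),
 * so the lowest lane of that vector argument is extracted below. */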
getLowerElement = TRUE; arg_idx = 1; break; default: break; } LLVMValueRef args [] = { lhs, rhs, arg3 }; if (getLowerElement) args [arg_idx] = LLVMBuildExtractElement (ctx->builder, args [arg_idx], const_int32 (0), ""); values [ins->dreg] = call_intrins (ctx, id, args, ""); break; } case OP_XOP_X_X: { IntrinsicId id = (IntrinsicId)ins->inst_c0; LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass); gboolean getLowerElement = FALSE; switch (id) { case INTRINS_AARCH64_SHA1H: getLowerElement = TRUE; break; default: break; } LLVMValueRef arg0 = lhs; if (getLowerElement) arg0 = LLVMBuildExtractElement (ctx->builder, arg0, const_int32 (0), ""); LLVMValueRef result = call_intrins (ctx, id, &arg0, ""); if (getLowerElement) result = vector_from_scalar (ctx, ret_t, result); values [ins->dreg] = result; break; } case OP_XCOMPARE_FP_SCALAR: case OP_XCOMPARE_FP: { g_assert (LLVMTypeOf (lhs) == LLVMTypeOf (rhs)); gboolean scalar = ins->opcode == OP_XCOMPARE_FP_SCALAR; LLVMRealPredicate pred = fpcond_to_llvm_cond [ins->inst_c0]; LLVMTypeRef ret_t = LLVMTypeOf (lhs); LLVMTypeRef reti_t = to_integral_vector_type (ret_t); LLVMValueRef args [] = { lhs, rhs }; if (scalar) for (int i = 0; i < 2; ++i) args [i] = scalar_from_vector (ctx, args [i]); LLVMValueRef result = LLVMBuildFCmp (builder, pred, args [0], args [1], "xcompare_fp"); if (scalar) result = vector_from_scalar (ctx, LLVMVectorType (LLVMIntType (1), LLVMGetVectorSize (reti_t)), result); result = LLVMBuildSExt (builder, result, reti_t, ""); result = LLVMBuildBitCast (builder, result, ret_t, ""); values [ins->dreg] = result; break; } case OP_XCOMPARE_SCALAR: case OP_XCOMPARE: { g_assert (LLVMTypeOf (lhs) == LLVMTypeOf (rhs)); gboolean scalar = ins->opcode == OP_XCOMPARE_SCALAR; LLVMIntPredicate pred = cond_to_llvm_cond [ins->inst_c0]; LLVMTypeRef ret_t = LLVMTypeOf (lhs); LLVMValueRef args [] = { lhs, rhs }; if (scalar) for (int i = 0; i < 2; ++i) args [i] = scalar_from_vector (ctx, args [i]); LLVMValueRef result = LLVMBuildICmp (builder, pred, args [0], args [1], "xcompare"); if (scalar) result = vector_from_scalar (ctx, LLVMVectorType (LLVMIntType (1), LLVMGetVectorSize (ret_t)), result); values [ins->dreg] = LLVMBuildSExt (builder, result, ret_t, ""); break; } case OP_ARM64_EXT: { LLVMTypeRef ret_t = LLVMTypeOf (lhs); unsigned int elems = LLVMGetVectorSize (ret_t); g_assert (elems <= ARM64_MAX_VECTOR_ELEMS); LLVMValueRef index = arg3; LLVMValueRef default_value = lhs; ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, elems, index, ret_t, "arm64_ext"); int i = 0; while (immediate_unroll_next (&ictx, &i)) { LLVMValueRef mask = create_const_vector_i32 (&mask_0_incr_1 [i], elems); LLVMValueRef result = LLVMBuildShuffleVector (builder, lhs, rhs, mask, "arm64_ext"); immediate_unroll_commit (&ictx, i, result); } immediate_unroll_default (&ictx); immediate_unroll_commit_default (&ictx, default_value); values [ins->dreg] = immediate_unroll_end (&ictx, &cbb); break; } case OP_ARM64_MVN: { LLVMTypeRef ret_t = LLVMTypeOf (lhs); LLVMValueRef result = bitcast_to_integral (ctx, lhs); result = LLVMBuildNot (builder, result, "arm64_mvn"); result = convert (ctx, result, ret_t); values [ins->dreg] = result; break; } case OP_ARM64_BIC: { LLVMTypeRef ret_t = LLVMTypeOf (lhs); LLVMValueRef result = bitcast_to_integral (ctx, lhs); LLVMValueRef mask = bitcast_to_integral (ctx, rhs); mask = LLVMBuildNot (builder, mask, ""); result = LLVMBuildAnd (builder, mask, result, "arm64_bic"); result = convert (ctx, result, ret_t); values [ins->dreg] = result; break; 
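/* OP_ARM64_BSL below expands the bitwise select without a target intrinsic, using the
 * identity bsl(sel, a, b) = (sel & a) | (~sel & b) on the integral bitcasts. */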
} case OP_ARM64_BSL: { LLVMTypeRef ret_t = LLVMTypeOf (rhs); LLVMValueRef select = bitcast_to_integral (ctx, lhs); LLVMValueRef left = bitcast_to_integral (ctx, rhs); LLVMValueRef right = bitcast_to_integral (ctx, arg3); LLVMValueRef result1 = LLVMBuildAnd (builder, select, left, "arm64_bsl"); LLVMValueRef result2 = LLVMBuildAnd (builder, LLVMBuildNot (builder, select, ""), right, ""); LLVMValueRef result = LLVMBuildOr (builder, result1, result2, ""); result = convert (ctx, result, ret_t); values [ins->dreg] = result; break; } case OP_ARM64_CMTST: { LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass); LLVMValueRef l = bitcast_to_integral (ctx, lhs); LLVMValueRef r = bitcast_to_integral (ctx, rhs); LLVMValueRef result = LLVMBuildAnd (builder, l, r, "arm64_cmtst"); LLVMTypeRef t = LLVMTypeOf (l); result = LLVMBuildICmp (builder, LLVMIntNE, result, LLVMConstNull (t), ""); result = LLVMBuildSExt (builder, result, t, ""); result = convert (ctx, result, ret_t); values [ins->dreg] = result; break; } case OP_ARM64_FCVTL: case OP_ARM64_FCVTL2: { LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass); gboolean high = ins->opcode == OP_ARM64_FCVTL2; LLVMValueRef result = lhs; if (high) result = extract_high_elements (ctx, result); result = LLVMBuildFPExt (builder, result, ret_t, "arm64_fcvtl"); values [ins->dreg] = result; break; } case OP_ARM64_FCVTXN: case OP_ARM64_FCVTXN2: case OP_ARM64_FCVTN: case OP_ARM64_FCVTN2: { gboolean high = FALSE; int iid = 0; switch (ins->opcode) { case OP_ARM64_FCVTXN2: high = TRUE; case OP_ARM64_FCVTXN: iid = INTRINS_AARCH64_ADV_SIMD_FCVTXN; break; case OP_ARM64_FCVTN2: high = TRUE; break; } LLVMValueRef result = lhs; if (high) result = rhs; if (iid) result = call_intrins (ctx, iid, &result, ""); else result = LLVMBuildFPTrunc (builder, result, v64_r4_t, ""); if (high) result = concatenate_vectors (ctx, lhs, result); values [ins->dreg] = result; break; } case OP_ARM64_UCVTF: case OP_ARM64_SCVTF: case OP_ARM64_UCVTF_SCALAR: case OP_ARM64_SCVTF_SCALAR: { LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass); gboolean scalar = FALSE; gboolean is_unsigned = FALSE; switch (ins->opcode) { case OP_ARM64_UCVTF_SCALAR: scalar = TRUE; case OP_ARM64_UCVTF: is_unsigned = TRUE; break; case OP_ARM64_SCVTF_SCALAR: scalar = TRUE; break; } LLVMValueRef result = lhs; LLVMTypeRef cvt_t = ret_t; if (scalar) { result = scalar_from_vector (ctx, result); cvt_t = LLVMGetElementType (ret_t); } if (is_unsigned) result = LLVMBuildUIToFP (builder, result, cvt_t, "arm64_ucvtf"); else result = LLVMBuildSIToFP (builder, result, cvt_t, "arm64_scvtf"); if (scalar) result = vector_from_scalar (ctx, ret_t, result); values [ins->dreg] = result; break; } case OP_ARM64_FCVTZS: case OP_ARM64_FCVTZS_SCALAR: case OP_ARM64_FCVTZU: case OP_ARM64_FCVTZU_SCALAR: { LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass); gboolean scalar = FALSE; gboolean is_unsigned = FALSE; switch (ins->opcode) { case OP_ARM64_FCVTZU_SCALAR: scalar = TRUE; case OP_ARM64_FCVTZU: is_unsigned = TRUE; break; case OP_ARM64_FCVTZS_SCALAR: scalar = TRUE; break; } LLVMValueRef result = lhs; LLVMTypeRef cvt_t = ret_t; if (scalar) { result = scalar_from_vector (ctx, result); cvt_t = LLVMGetElementType (ret_t); } if (is_unsigned) result = LLVMBuildFPToUI (builder, result, cvt_t, "arm64_fcvtzu"); else result = LLVMBuildFPToSI (builder, result, cvt_t, "arm64_fcvtzs"); if (scalar) result = vector_from_scalar (ctx, ret_t, result); values [ins->dreg] = result; break; } case OP_ARM64_SELECT_SCALAR: { LLVMValueRef 
result = LLVMBuildExtractElement (builder, lhs, rhs, ""); LLVMTypeRef elem_t = LLVMTypeOf (result); unsigned int elem_bits = mono_llvm_get_prim_size_bits (elem_t); LLVMTypeRef t = LLVMVectorType (elem_t, 64 / elem_bits); result = vector_from_scalar (ctx, t, result); values [ins->dreg] = result; break; } case OP_ARM64_SELECT_QUAD: { LLVMTypeRef src_type = simd_class_to_llvm_type (ctx, ins->data.op [1].klass); LLVMTypeRef ret_type = simd_class_to_llvm_type (ctx, ins->klass); unsigned int src_type_bits = mono_llvm_get_prim_size_bits (src_type); unsigned int ret_type_bits = mono_llvm_get_prim_size_bits (ret_type); unsigned int src_intermediate_elems = src_type_bits / 32; unsigned int ret_intermediate_elems = ret_type_bits / 32; LLVMTypeRef intermediate_type = LLVMVectorType (i4_t, src_intermediate_elems); LLVMValueRef result = LLVMBuildBitCast (builder, lhs, intermediate_type, "arm64_select_quad"); result = LLVMBuildExtractElement (builder, result, rhs, "arm64_select_quad"); result = broadcast_element (ctx, result, ret_intermediate_elems); result = LLVMBuildBitCast (builder, result, ret_type, "arm64_select_quad"); values [ins->dreg] = result; break; } case OP_LSCNT32: case OP_LSCNT64: { // %shr = ashr i32 %x, 31 // %xor = xor i32 %shr, %x // %mul = shl i32 %xor, 1 // %add = or i32 %mul, 1 // %0 = tail call i32 @llvm.ctlz.i32(i32 %add, i1 false) LLVMValueRef shr = LLVMBuildAShr (builder, lhs, ins->opcode == OP_LSCNT32 ? LLVMConstInt (LLVMInt32Type (), 31, FALSE) : LLVMConstInt (LLVMInt64Type (), 63, FALSE), ""); LLVMValueRef one = ins->opcode == OP_LSCNT32 ? LLVMConstInt (LLVMInt32Type (), 1, FALSE) : LLVMConstInt (LLVMInt64Type (), 1, FALSE); LLVMValueRef xor = LLVMBuildXor (builder, shr, lhs, ""); LLVMValueRef mul = LLVMBuildShl (builder, xor, one, ""); LLVMValueRef add = LLVMBuildOr (builder, mul, one, ""); LLVMValueRef args [2]; args [0] = add; args [1] = LLVMConstInt (LLVMInt1Type (), 0, FALSE); values [ins->dreg] = LLVMBuildCall (builder, get_intrins (ctx, ins->opcode == OP_LSCNT32 ? INTRINS_CTLZ_I32 : INTRINS_CTLZ_I64), args, 2, ""); break; } case OP_ARM64_SQRDMLAH: case OP_ARM64_SQRDMLAH_BYSCALAR: case OP_ARM64_SQRDMLAH_SCALAR: case OP_ARM64_SQRDMLSH: case OP_ARM64_SQRDMLSH_BYSCALAR: case OP_ARM64_SQRDMLSH_SCALAR: { gboolean byscalar = FALSE; gboolean scalar = FALSE; gboolean subtract = FALSE; switch (ins->opcode) { case OP_ARM64_SQRDMLAH_BYSCALAR: byscalar = TRUE; break; case OP_ARM64_SQRDMLAH_SCALAR: scalar = TRUE; break; case OP_ARM64_SQRDMLSH: subtract = TRUE; break; case OP_ARM64_SQRDMLSH_BYSCALAR: subtract = TRUE; byscalar = TRUE; break; case OP_ARM64_SQRDMLSH_SCALAR: subtract = TRUE; scalar = TRUE; break; } int acc_iid = subtract ? 
INTRINS_AARCH64_ADV_SIMD_SQSUB : INTRINS_AARCH64_ADV_SIMD_SQADD; LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass); llvm_ovr_tag_t ovr_tag = ovr_tag_from_llvm_type (ret_t); ScalarOpFromVectorOpCtx sctx = scalar_op_from_vector_op (ctx, ret_t, ins); LLVMValueRef args [] = { lhs, rhs, arg3 }; if (byscalar) { unsigned int elems = LLVMGetVectorSize (ret_t); args [2] = broadcast_element (ctx, scalar_from_vector (ctx, args [2]), elems); } if (scalar) { ovr_tag = sctx.ovr_tag; scalar_op_from_vector_op_process_args (&sctx, args, 3); } LLVMValueRef result = call_overloaded_intrins (ctx, INTRINS_AARCH64_ADV_SIMD_SQRDMULH, ovr_tag, &args [1], "arm64_sqrdmlxh"); args [1] = result; result = call_overloaded_intrins (ctx, acc_iid, ovr_tag, &args [0], "arm64_sqrdmlxh"); if (scalar) result = scalar_op_from_vector_op_process_result (&sctx, result); values [ins->dreg] = result; break; } case OP_ARM64_SMULH: case OP_ARM64_UMULH: { LLVMValueRef op1, op2; if (ins->opcode == OP_ARM64_SMULH) { op1 = LLVMBuildSExt (builder, lhs, LLVMInt128Type (), ""); op2 = LLVMBuildSExt (builder, rhs, LLVMInt128Type (), ""); } else { op1 = LLVMBuildZExt (builder, lhs, LLVMInt128Type (), ""); op2 = LLVMBuildZExt (builder, rhs, LLVMInt128Type (), ""); } LLVMValueRef mul = LLVMBuildMul (builder, op1, op2, ""); LLVMValueRef hi64 = LLVMBuildLShr (builder, mul, LLVMConstInt (LLVMInt128Type (), 64, FALSE), ""); values [ins->dreg] = LLVMBuildTrunc (builder, hi64, LLVMInt64Type (), ""); break; } case OP_ARM64_XNARROW_SCALAR: { // Unfortunately, @llvm.aarch64.neon.scalar.sqxtun isn't available for i8 or i16. LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass); llvm_ovr_tag_t ovr_tag = ovr_tag_from_llvm_type (ret_t); LLVMTypeRef elem_t = LLVMGetElementType (ret_t); LLVMValueRef result = NULL; int iid = ins->inst_c0; int scalar_iid = 0; switch (iid) { case INTRINS_AARCH64_ADV_SIMD_SQXTUN: scalar_iid = INTRINS_AARCH64_ADV_SIMD_SCALAR_SQXTUN; break; case INTRINS_AARCH64_ADV_SIMD_SQXTN: scalar_iid = INTRINS_AARCH64_ADV_SIMD_SCALAR_SQXTN; break; case INTRINS_AARCH64_ADV_SIMD_UQXTN: scalar_iid = INTRINS_AARCH64_ADV_SIMD_SCALAR_UQXTN; break; default: g_assert_not_reached (); } if (elem_t == i4_t) { LLVMValueRef arg = scalar_from_vector (ctx, lhs); result = call_intrins (ctx, scalar_iid, &arg, "arm64_xnarrow_scalar"); result = vector_from_scalar (ctx, ret_t, result); } else { LLVMTypeRef arg_t = LLVMTypeOf (lhs); LLVMTypeRef argelem_t = LLVMGetElementType (arg_t); unsigned int argelems = LLVMGetVectorSize (arg_t); LLVMValueRef arg = keep_lowest_element (ctx, LLVMVectorType (argelem_t, argelems * 2), lhs); result = call_overloaded_intrins (ctx, iid, ovr_tag, &arg, "arm64_xnarrow_scalar"); result = keep_lowest_element (ctx, LLVMTypeOf (result), result); } values [ins->dreg] = result; break; } case OP_ARM64_SQXTUN2: case OP_ARM64_UQXTN2: case OP_ARM64_SQXTN2: case OP_ARM64_XTN: case OP_ARM64_XTN2: { llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass); gboolean high = FALSE; int iid = 0; switch (ins->opcode) { case OP_ARM64_SQXTUN2: high = TRUE; iid = INTRINS_AARCH64_ADV_SIMD_SQXTUN; break; case OP_ARM64_UQXTN2: high = TRUE; iid = INTRINS_AARCH64_ADV_SIMD_UQXTN; break; case OP_ARM64_SQXTN2: high = TRUE; iid = INTRINS_AARCH64_ADV_SIMD_SQXTN; break; case OP_ARM64_XTN2: high = TRUE; break; } LLVMValueRef result = lhs; if (high) { result = rhs; ovr_tag = ovr_tag_smaller_vector (ovr_tag); } LLVMTypeRef t = LLVMTypeOf (result); LLVMTypeRef elem_t = LLVMGetElementType (t); unsigned int elems = LLVMGetVectorSize (t); 
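/* Plain XTN/XTN2 narrowing is a simple truncation to half-width lanes; the saturating
 * variants (SQXTN/UQXTN/SQXTUN) go through the AArch64 intrinsics instead, and the "2"
 * forms write the upper half of the destination via concatenate_vectors. */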
unsigned int elem_bits = mono_llvm_get_prim_size_bits (elem_t); LLVMTypeRef result_t = LLVMVectorType (LLVMIntType (elem_bits / 2), elems); if (iid != 0) result = call_overloaded_intrins (ctx, iid, ovr_tag, &result, ""); else result = LLVMBuildTrunc (builder, result, result_t, "arm64_xtn"); if (high) result = concatenate_vectors (ctx, lhs, result); values [ins->dreg] = result; break; } case OP_ARM64_CLZ: { llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass); LLVMValueRef args [] = { lhs, const_int1 (0) }; LLVMValueRef result = call_overloaded_intrins (ctx, INTRINS_AARCH64_ADV_SIMD_CLZ, ovr_tag, args, ""); values [ins->dreg] = result; break; } case OP_ARM64_FMSUB: case OP_ARM64_FMSUB_BYSCALAR: case OP_ARM64_FMSUB_SCALAR: case OP_ARM64_FNMSUB_SCALAR: case OP_ARM64_FMADD: case OP_ARM64_FMADD_BYSCALAR: case OP_ARM64_FMADD_SCALAR: case OP_ARM64_FNMADD_SCALAR: { llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass); gboolean scalar = FALSE; gboolean negate = FALSE; gboolean subtract = FALSE; gboolean byscalar = FALSE; switch (ins->opcode) { case OP_ARM64_FMSUB: subtract = TRUE; break; case OP_ARM64_FMSUB_BYSCALAR: subtract = TRUE; byscalar = TRUE; break; case OP_ARM64_FMSUB_SCALAR: subtract = TRUE; scalar = TRUE; break; case OP_ARM64_FNMSUB_SCALAR: subtract = TRUE; scalar = TRUE; negate = TRUE; break; case OP_ARM64_FMADD: break; case OP_ARM64_FMADD_BYSCALAR: byscalar = TRUE; break; case OP_ARM64_FMADD_SCALAR: scalar = TRUE; break; case OP_ARM64_FNMADD_SCALAR: scalar = TRUE; negate = TRUE; break; } // llvm.fma argument order: mulop1, mulop2, addend LLVMValueRef args [] = { rhs, arg3, lhs }; if (byscalar) { unsigned int elems = LLVMGetVectorSize (LLVMTypeOf (args [0])); args [1] = broadcast_element (ctx, scalar_from_vector (ctx, args [1]), elems); } if (scalar) { ovr_tag = ovr_tag_force_scalar (ovr_tag); for (int i = 0; i < 3; ++i) args [i] = scalar_from_vector (ctx, args [i]); } if (subtract) args [0] = LLVMBuildFNeg (builder, args [0], "arm64_fma_sub"); if (negate) { args [0] = LLVMBuildFNeg (builder, args [0], "arm64_fma_negate"); args [2] = LLVMBuildFNeg (builder, args [2], "arm64_fma_negate"); } LLVMValueRef result = call_overloaded_intrins (ctx, INTRINS_AARCH64_ADV_SIMD_FMA, ovr_tag, args, "arm64_fma"); if (scalar) result = vector_from_scalar (ctx, LLVMTypeOf (lhs), result); values [ins->dreg] = result; break; } case OP_ARM64_SQDMULL: case OP_ARM64_SQDMULL_BYSCALAR: case OP_ARM64_SQDMULL2: case OP_ARM64_SQDMULL2_BYSCALAR: case OP_ARM64_SQDMLAL: case OP_ARM64_SQDMLAL_BYSCALAR: case OP_ARM64_SQDMLAL2: case OP_ARM64_SQDMLAL2_BYSCALAR: case OP_ARM64_SQDMLSL: case OP_ARM64_SQDMLSL_BYSCALAR: case OP_ARM64_SQDMLSL2: case OP_ARM64_SQDMLSL2_BYSCALAR: { llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass); gboolean scalar = FALSE; gboolean add = FALSE; gboolean subtract = FALSE; gboolean high = FALSE; switch (ins->opcode) { case OP_ARM64_SQDMULL_BYSCALAR: scalar = TRUE; case OP_ARM64_SQDMULL: break; case OP_ARM64_SQDMULL2_BYSCALAR: scalar = TRUE; case OP_ARM64_SQDMULL2: high = TRUE; break; case OP_ARM64_SQDMLAL_BYSCALAR: scalar = TRUE; case OP_ARM64_SQDMLAL: add = TRUE; break; case OP_ARM64_SQDMLAL2_BYSCALAR: scalar = TRUE; case OP_ARM64_SQDMLAL2: high = TRUE; add = TRUE; break; case OP_ARM64_SQDMLSL_BYSCALAR: scalar = TRUE; case OP_ARM64_SQDMLSL: subtract = TRUE; break; case OP_ARM64_SQDMLSL2_BYSCALAR: scalar = TRUE; case OP_ARM64_SQDMLSL2: high = TRUE; subtract = TRUE; break; } int iid = 0; if (add) iid = INTRINS_AARCH64_ADV_SIMD_SQADD; else if 
(subtract) iid = INTRINS_AARCH64_ADV_SIMD_SQSUB; LLVMValueRef mul1 = lhs; LLVMValueRef mul2 = rhs; if (iid != 0) { mul1 = rhs; mul2 = arg3; } if (scalar) { LLVMTypeRef t = LLVMTypeOf (mul1); unsigned int elems = LLVMGetVectorSize (t); mul2 = broadcast_element (ctx, scalar_from_vector (ctx, mul2), elems); } LLVMValueRef args [] = { mul1, mul2 }; if (high) for (int i = 0; i < 2; ++i) args [i] = extract_high_elements (ctx, args [i]); LLVMValueRef result = call_overloaded_intrins (ctx, INTRINS_AARCH64_ADV_SIMD_SQDMULL, ovr_tag, args, ""); LLVMValueRef args2 [] = { lhs, result }; if (iid != 0) result = call_overloaded_intrins (ctx, iid, ovr_tag, args2, ""); values [ins->dreg] = result; break; } case OP_ARM64_SQDMULL_SCALAR: case OP_ARM64_SQDMLAL_SCALAR: case OP_ARM64_SQDMLSL_SCALAR: { /* * define dso_local i32 @__vqdmlslh_lane_s16(i32, i16, <4 x i16>, i32) local_unnamed_addr #0 { * %5 = insertelement <4 x i16> undef, i16 %1, i64 0 * %6 = shufflevector <4 x i16> %2, <4 x i16> undef, <4 x i32> <i32 3, i32 undef, i32 undef, i32 undef> * %7 = tail call <4 x i32> @llvm.aarch64.neon.sqdmull.v4i32(<4 x i16> %5, <4 x i16> %6) * %8 = extractelement <4 x i32> %7, i64 0 * %9 = tail call i32 @llvm.aarch64.neon.sqsub.i32(i32 %0, i32 %8) * ret i32 %9 * } * * define dso_local i64 @__vqdmlals_s32(i64, i32, i32) local_unnamed_addr #0 { * %4 = tail call i64 @llvm.aarch64.neon.sqdmulls.scalar(i32 %1, i32 %2) #2 * %5 = tail call i64 @llvm.aarch64.neon.sqadd.i64(i64 %0, i64 %4) #2 * ret i64 %5 * } */ int mulid = INTRINS_AARCH64_ADV_SIMD_SQDMULL; int iid = 0; gboolean scalar_mul_result = FALSE; gboolean scalar_acc_result = FALSE; switch (ins->opcode) { case OP_ARM64_SQDMLAL_SCALAR: iid = INTRINS_AARCH64_ADV_SIMD_SQADD; break; case OP_ARM64_SQDMLSL_SCALAR: iid = INTRINS_AARCH64_ADV_SIMD_SQSUB; break; } LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass); LLVMValueRef mularg = lhs; LLVMValueRef selected_scalar = rhs; if (iid != 0) { mularg = rhs; selected_scalar = arg3; } llvm_ovr_tag_t multag = ovr_tag_smaller_elements (ovr_tag_from_llvm_type (ret_t)); llvm_ovr_tag_t iidtag = ovr_tag_force_scalar (ovr_tag_from_llvm_type (ret_t)); LLVMTypeRef mularg_t = ovr_tag_to_llvm_type (multag); if (multag & INTRIN_int32) { /* The (i32, i32) -> i64 variant of aarch64_neon_sqdmull has * a unique, non-overloaded name. */ mulid = INTRINS_AARCH64_ADV_SIMD_SQDMULL_SCALAR; multag = 0; iidtag = INTRIN_int64 | INTRIN_scalar; scalar_mul_result = TRUE; scalar_acc_result = TRUE; } else if (multag & INTRIN_int16) { /* We were passed a (<4 x i16>, <4 x i16>) but the * widening multiplication intrinsic will yield a <4 x i32>. 
*/ multag = INTRIN_int32 | INTRIN_vector128; } else g_assert_not_reached (); if (scalar_mul_result) { mularg = scalar_from_vector (ctx, mularg); selected_scalar = scalar_from_vector (ctx, selected_scalar); } else { mularg = keep_lowest_element (ctx, mularg_t, mularg); selected_scalar = keep_lowest_element (ctx, mularg_t, selected_scalar); } LLVMValueRef mulargs [] = { mularg, selected_scalar }; LLVMValueRef result = call_overloaded_intrins (ctx, mulid, multag, mulargs, "arm64_sqdmull_scalar"); if (iid != 0) { LLVMValueRef acc = scalar_from_vector (ctx, lhs); if (!scalar_mul_result) result = scalar_from_vector (ctx, result); LLVMValueRef subargs [] = { acc, result }; result = call_overloaded_intrins (ctx, iid, iidtag, subargs, "arm64_sqdmlxl_scalar"); scalar_acc_result = TRUE; } if (scalar_acc_result) result = vector_from_scalar (ctx, ret_t, result); else result = keep_lowest_element (ctx, ret_t, result); values [ins->dreg] = result; break; } case OP_ARM64_FMUL_SEL: { LLVMValueRef mul2 = LLVMBuildExtractElement (builder, rhs, arg3, ""); LLVMValueRef mul1 = scalar_from_vector (ctx, lhs); LLVMValueRef result = LLVMBuildFMul (builder, mul1, mul2, "arm64_fmul_sel"); result = vector_from_scalar (ctx, LLVMTypeOf (lhs), result); values [ins->dreg] = result; break; } case OP_ARM64_MLA: case OP_ARM64_MLA_SCALAR: case OP_ARM64_MLS: case OP_ARM64_MLS_SCALAR: { gboolean scalar = FALSE; gboolean add = FALSE; switch (ins->opcode) { case OP_ARM64_MLA_SCALAR: scalar = TRUE; case OP_ARM64_MLA: add = TRUE; break; case OP_ARM64_MLS_SCALAR: scalar = TRUE; case OP_ARM64_MLS: break; } LLVMTypeRef mul_t = LLVMTypeOf (rhs); unsigned int elems = LLVMGetVectorSize (mul_t); LLVMValueRef mul2 = arg3; if (scalar) mul2 = broadcast_element (ctx, scalar_from_vector (ctx, mul2), elems); LLVMValueRef result = LLVMBuildMul (builder, rhs, mul2, ""); if (add) result = LLVMBuildAdd (builder, lhs, result, ""); else result = LLVMBuildSub (builder, lhs, result, ""); values [ins->dreg] = result; break; } case OP_ARM64_SMULL: case OP_ARM64_SMULL_SCALAR: case OP_ARM64_SMULL2: case OP_ARM64_SMULL2_SCALAR: case OP_ARM64_UMULL: case OP_ARM64_UMULL_SCALAR: case OP_ARM64_UMULL2: case OP_ARM64_UMULL2_SCALAR: case OP_ARM64_SMLAL: case OP_ARM64_SMLAL_SCALAR: case OP_ARM64_SMLAL2: case OP_ARM64_SMLAL2_SCALAR: case OP_ARM64_UMLAL: case OP_ARM64_UMLAL_SCALAR: case OP_ARM64_UMLAL2: case OP_ARM64_UMLAL2_SCALAR: case OP_ARM64_SMLSL: case OP_ARM64_SMLSL_SCALAR: case OP_ARM64_SMLSL2: case OP_ARM64_SMLSL2_SCALAR: case OP_ARM64_UMLSL: case OP_ARM64_UMLSL_SCALAR: case OP_ARM64_UMLSL2: case OP_ARM64_UMLSL2_SCALAR: { llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass); gboolean is_unsigned = FALSE; gboolean high = FALSE; gboolean add = FALSE; gboolean subtract = FALSE; gboolean scalar = FALSE; int opcode = ins->opcode; switch (opcode) { case OP_ARM64_SMULL_SCALAR: scalar = TRUE; opcode = OP_ARM64_SMULL; break; case OP_ARM64_UMULL_SCALAR: scalar = TRUE; opcode = OP_ARM64_UMULL; break; case OP_ARM64_SMLAL_SCALAR: scalar = TRUE; opcode = OP_ARM64_SMLAL; break; case OP_ARM64_UMLAL_SCALAR: scalar = TRUE; opcode = OP_ARM64_UMLAL; break; case OP_ARM64_SMLSL_SCALAR: scalar = TRUE; opcode = OP_ARM64_SMLSL; break; case OP_ARM64_UMLSL_SCALAR: scalar = TRUE; opcode = OP_ARM64_UMLSL; break; case OP_ARM64_SMULL2_SCALAR: scalar = TRUE; opcode = OP_ARM64_SMULL2; break; case OP_ARM64_UMULL2_SCALAR: scalar = TRUE; opcode = OP_ARM64_UMULL2; break; case OP_ARM64_SMLAL2_SCALAR: scalar = TRUE; opcode = OP_ARM64_SMLAL2; break; case OP_ARM64_UMLAL2_SCALAR: 
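/* The _SCALAR variants are canonicalized onto their vector opcodes here; the scalar
 * multiplier is splatted across the lanes further down before the widening multiply
 * intrinsic is called. */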
scalar = TRUE; opcode = OP_ARM64_UMLAL2; break; case OP_ARM64_SMLSL2_SCALAR: scalar = TRUE; opcode = OP_ARM64_SMLSL2; break; case OP_ARM64_UMLSL2_SCALAR: scalar = TRUE; opcode = OP_ARM64_UMLSL2; break; } switch (opcode) { case OP_ARM64_SMULL2: high = TRUE; case OP_ARM64_SMULL: break; case OP_ARM64_UMULL2: high = TRUE; case OP_ARM64_UMULL: is_unsigned = TRUE; break; case OP_ARM64_SMLAL2: high = TRUE; case OP_ARM64_SMLAL: add = TRUE; break; case OP_ARM64_UMLAL2: high = TRUE; case OP_ARM64_UMLAL: add = TRUE; is_unsigned = TRUE; break; case OP_ARM64_SMLSL2: high = TRUE; case OP_ARM64_SMLSL: subtract = TRUE; break; case OP_ARM64_UMLSL2: high = TRUE; case OP_ARM64_UMLSL: subtract = TRUE; is_unsigned = TRUE; break; } int iid = is_unsigned ? INTRINS_AARCH64_ADV_SIMD_UMULL : INTRINS_AARCH64_ADV_SIMD_SMULL; LLVMValueRef intrin_args [] = { lhs, rhs }; if (add || subtract) { intrin_args [0] = rhs; intrin_args [1] = arg3; } if (scalar) { LLVMValueRef sarg = intrin_args [1]; LLVMTypeRef t = LLVMTypeOf (intrin_args [0]); unsigned int elems = LLVMGetVectorSize (t); sarg = broadcast_element (ctx, scalar_from_vector (ctx, sarg), elems); intrin_args [1] = sarg; } if (high) for (int i = 0; i < 2; ++i) intrin_args [i] = extract_high_elements (ctx, intrin_args [i]); LLVMValueRef result = call_overloaded_intrins (ctx, iid, ovr_tag, intrin_args, ""); if (add) result = LLVMBuildAdd (builder, lhs, result, ""); if (subtract) result = LLVMBuildSub (builder, lhs, result, ""); values [ins->dreg] = result; break; } case OP_ARM64_XNEG: case OP_ARM64_XNEG_SCALAR: { gboolean scalar = ins->opcode == OP_ARM64_XNEG_SCALAR; gboolean is_float = FALSE; switch (inst_c1_type (ins)) { case MONO_TYPE_R4: case MONO_TYPE_R8: is_float = TRUE; } LLVMValueRef result = lhs; if (scalar) result = scalar_from_vector (ctx, result); if (is_float) result = LLVMBuildFNeg (builder, result, "arm64_xneg"); else result = LLVMBuildNeg (builder, result, "arm64_xneg"); if (scalar) result = vector_from_scalar (ctx, LLVMTypeOf (lhs), result); values [ins->dreg] = result; break; } case OP_ARM64_PMULL: case OP_ARM64_PMULL2: { gboolean high = ins->opcode == OP_ARM64_PMULL2; LLVMValueRef args [] = { lhs, rhs }; if (high) for (int i = 0; i < 2; ++i) args [i] = extract_high_elements (ctx, args [i]); LLVMValueRef result = call_intrins (ctx, INTRINS_AARCH64_ADV_SIMD_PMULL, args, "arm64_pmull"); values [ins->dreg] = result; break; } case OP_ARM64_REVN: { LLVMTypeRef t = LLVMTypeOf (lhs); LLVMTypeRef elem_t = LLVMGetElementType (t); unsigned int group_bits = mono_llvm_get_prim_size_bits (elem_t); unsigned int vec_bits = mono_llvm_get_prim_size_bits (t); unsigned int tmp_bits = ins->inst_c0; unsigned int tmp_elements = vec_bits / tmp_bits; const int cycle8 [] = { 7, 6, 5, 4, 3, 2, 1, 0, 15, 14, 13, 12, 11, 10, 9, 8 }; const int cycle4 [] = { 3, 2, 1, 0, 7, 6, 5, 4, 11, 10, 9, 8, 15, 14, 13, 12 }; const int cycle2 [] = { 1, 0, 3, 2, 5, 4, 7, 6, 9, 8, 11, 10, 13, 12, 15, 14 }; const int *cycle = NULL; switch (group_bits / tmp_bits) { case 2: cycle = cycle2; break; case 4: cycle = cycle4; break; case 8: cycle = cycle8; break; default: g_assert_not_reached (); } g_assert (tmp_elements <= ARM64_MAX_VECTOR_ELEMS); LLVMTypeRef tmp_t = LLVMVectorType (LLVMIntType (tmp_bits), tmp_elements); LLVMValueRef tmp = LLVMBuildBitCast (builder, lhs, tmp_t, "arm64_revn"); LLVMValueRef result = LLVMBuildShuffleVector (builder, tmp, LLVMGetUndef (tmp_t), create_const_vector_i32 (cycle, tmp_elements), ""); result = LLVMBuildBitCast (builder, result, t, ""); values [ins->dreg] = result; 
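/* The cycle2/cycle4/cycle8 masks above reverse element order within each 2-, 4- or
 * 8-element group with a single shufflevector on the bitcast vector, which is how the
 * REV16/REV32/REV64-style reversal is expressed without a target intrinsic. */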
case OP_ARM64_SHL:
case OP_ARM64_SSHR:
case OP_ARM64_SSRA:
case OP_ARM64_USHR:
case OP_ARM64_USRA: {
	gboolean right = FALSE;
	gboolean add = FALSE;
	gboolean arith = FALSE;
	switch (ins->opcode) {
	case OP_ARM64_USHR: right = TRUE; break;
	case OP_ARM64_USRA: right = TRUE; add = TRUE; break;
	case OP_ARM64_SSHR: arith = TRUE; break;
	case OP_ARM64_SSRA: arith = TRUE; add = TRUE; break;
	}
	LLVMValueRef shiftarg = lhs;
	LLVMValueRef shift = rhs;
	if (add) {
		shiftarg = rhs;
		shift = arg3;
	}
	shift = create_shift_vector (ctx, shiftarg, shift);
	LLVMValueRef result = NULL;
	if (right)
		result = LLVMBuildLShr (builder, shiftarg, shift, "");
	else if (arith)
		result = LLVMBuildAShr (builder, shiftarg, shift, "");
	else
		result = LLVMBuildShl (builder, shiftarg, shift, "");
	if (add)
		result = LLVMBuildAdd (builder, lhs, result, "arm64_usra");
	values [ins->dreg] = result;
	break;
}
case OP_ARM64_SHRN:
case OP_ARM64_SHRN2: {
	LLVMValueRef shiftarg = lhs;
	LLVMValueRef shift = rhs;
	gboolean high = ins->opcode == OP_ARM64_SHRN2;
	if (high) {
		shiftarg = rhs;
		shift = arg3;
	}
	LLVMTypeRef arg_t = LLVMTypeOf (shiftarg);
	LLVMTypeRef elem_t = LLVMGetElementType (arg_t);
	unsigned int elems = LLVMGetVectorSize (arg_t);
	unsigned int bits = mono_llvm_get_prim_size_bits (elem_t);
	LLVMTypeRef trunc_t = LLVMVectorType (LLVMIntType (bits / 2), elems);
	shift = create_shift_vector (ctx, shiftarg, shift);
	LLVMValueRef result = LLVMBuildLShr (builder, shiftarg, shift, "shrn");
	result = LLVMBuildTrunc (builder, result, trunc_t, "");
	if (high) {
		result = concatenate_vectors (ctx, lhs, result);
	}
	values [ins->dreg] = result;
	break;
}
case OP_ARM64_SRSHR:
case OP_ARM64_SRSRA:
case OP_ARM64_URSHR:
case OP_ARM64_URSRA: {
	llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass);
	LLVMValueRef shiftarg = lhs;
	LLVMValueRef shift = rhs;
	gboolean right = FALSE;
	gboolean add = FALSE;
	switch (ins->opcode) {
	case OP_ARM64_URSRA: add = TRUE; case OP_ARM64_URSHR: right = TRUE; break;
	case OP_ARM64_SRSRA: add = TRUE; case OP_ARM64_SRSHR: right = TRUE; break;
	}
	int iid = 0;
	switch (ins->opcode) {
	case OP_ARM64_URSRA: case OP_ARM64_URSHR: iid = INTRINS_AARCH64_ADV_SIMD_URSHL; break;
	case OP_ARM64_SRSRA: case OP_ARM64_SRSHR: iid = INTRINS_AARCH64_ADV_SIMD_SRSHL; break;
	}
	if (add) {
		shiftarg = rhs;
		shift = arg3;
	}
	if (right)
		shift = LLVMBuildNeg (builder, shift, "");
	shift = create_shift_vector (ctx, shiftarg, shift);
	LLVMValueRef args [] = { shiftarg, shift };
	LLVMValueRef result = call_overloaded_intrins (ctx, iid, ovr_tag, args, "");
	if (add)
		result = LLVMBuildAdd (builder, result, lhs, "");
	values [ins->dreg] = result;
	break;
}
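/* The narrowing/saturating shift opcodes below take an immediate shift amount; immediate_unroll_* expands a non-constant amount into a switch over every legal immediate value. */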
case OP_ARM64_XNSHIFT_SCALAR:
case OP_ARM64_XNSHIFT:
case OP_ARM64_XNSHIFT2: {
	LLVMTypeRef intrin_result_t = simd_class_to_llvm_type (ctx, ins->klass);
	llvm_ovr_tag_t ovr_tag = ovr_tag_from_llvm_type (intrin_result_t);
	LLVMValueRef shift_arg = lhs;
	LLVMValueRef shift_amount = rhs;
	gboolean high = FALSE;
	gboolean scalar = FALSE;
	int iid = ins->inst_c0;
	switch (ins->opcode) {
	case OP_ARM64_XNSHIFT_SCALAR: scalar = TRUE; break;
	case OP_ARM64_XNSHIFT2: high = TRUE; break;
	}
	if (high) {
		shift_arg = rhs;
		shift_amount = arg3;
		ovr_tag = ovr_tag_smaller_vector (ovr_tag);
		intrin_result_t = ovr_tag_to_llvm_type (ovr_tag);
	}
	LLVMTypeRef shift_arg_t = LLVMTypeOf (shift_arg);
	LLVMTypeRef shift_arg_elem_t = LLVMGetElementType (shift_arg_t);
	unsigned int element_bits = mono_llvm_get_prim_size_bits (shift_arg_elem_t);
	int range_min = 1;
	int range_max = element_bits / 2;
	if (scalar) {
		unsigned int elems = LLVMGetVectorSize (shift_arg_t);
		LLVMValueRef lo = scalar_from_vector (ctx, shift_arg);
		shift_arg = vector_from_scalar (ctx, LLVMVectorType (shift_arg_elem_t, elems * 2), lo);
	}
	int max_index = range_max - range_min + 1;
	ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, max_index, shift_amount, intrin_result_t, "arm64_xnshift");
	int i = 0;
	while (immediate_unroll_next (&ictx, &i)) {
		int shift_const = i + range_min;
		LLVMValueRef intrin_args [] = { shift_arg, const_int32 (shift_const) };
		LLVMValueRef result = call_overloaded_intrins (ctx, iid, ovr_tag, intrin_args, "");
		immediate_unroll_commit (&ictx, shift_const, result);
	}
	{
		immediate_unroll_default (&ictx);
		LLVMValueRef intrin_args [] = { shift_arg, const_int32 (range_max) };
		LLVMValueRef result = call_overloaded_intrins (ctx, iid, ovr_tag, intrin_args, "");
		immediate_unroll_commit_default (&ictx, result);
	}
	LLVMValueRef result = immediate_unroll_end (&ictx, &cbb);
	if (high)
		result = concatenate_vectors (ctx, lhs, result);
	if (scalar)
		result = keep_lowest_element (ctx, LLVMTypeOf (result), result);
	values [ins->dreg] = result;
	break;
}
case OP_ARM64_SQSHLU:
case OP_ARM64_SQSHLU_SCALAR: {
	gboolean scalar = ins->opcode == OP_ARM64_SQSHLU_SCALAR;
	LLVMTypeRef intrin_result_t = simd_class_to_llvm_type (ctx, ins->klass);
	LLVMTypeRef elem_t = LLVMGetElementType (intrin_result_t);
	unsigned int element_bits = mono_llvm_get_prim_size_bits (elem_t);
	llvm_ovr_tag_t ovr_tag = ovr_tag_from_llvm_type (intrin_result_t);
	int max_index = element_bits;
	ScalarOpFromVectorOpCtx sctx = scalar_op_from_vector_op (ctx, intrin_result_t, ins);
	intrin_result_t = scalar ? sctx.intermediate_type : intrin_result_t;
	ovr_tag = scalar ? sctx.ovr_tag : ovr_tag;
	ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, max_index, rhs, intrin_result_t, "arm64_sqshlu");
	int i = 0;
	while (immediate_unroll_next (&ictx, &i)) {
		int shift_const = i;
		LLVMValueRef args [2] = { lhs, create_shift_vector (ctx, lhs, const_int32 (shift_const)) };
		if (scalar)
			scalar_op_from_vector_op_process_args (&sctx, args, 2);
		LLVMValueRef result = call_overloaded_intrins (ctx, INTRINS_AARCH64_ADV_SIMD_SQSHLU, ovr_tag, args, "");
		immediate_unroll_commit (&ictx, shift_const, result);
	}
	{
		immediate_unroll_default (&ictx);
		LLVMValueRef srcarg = lhs;
		if (scalar)
			scalar_op_from_vector_op_process_args (&sctx, &srcarg, 1);
		immediate_unroll_commit_default (&ictx, srcarg);
	}
	LLVMValueRef result = immediate_unroll_end (&ictx, &cbb);
	if (scalar)
		result = scalar_op_from_vector_op_process_result (&sctx, result);
	values [ins->dreg] = result;
	break;
}
case OP_ARM64_SSHLL:
case OP_ARM64_SSHLL2:
case OP_ARM64_USHLL:
case OP_ARM64_USHLL2: {
	LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
	gboolean high = FALSE;
	gboolean is_unsigned = FALSE;
	switch (ins->opcode) {
	case OP_ARM64_SSHLL2: high = TRUE; break;
	case OP_ARM64_USHLL2: high = TRUE; case OP_ARM64_USHLL: is_unsigned = TRUE; break;
	}
	LLVMValueRef result = lhs;
	if (high)
		result = extract_high_elements (ctx, result);
	if (is_unsigned)
		result = LLVMBuildZExt (builder, result, ret_t, "arm64_ushll");
	else
		result = LLVMBuildSExt (builder, result, ret_t, "arm64_ushll");
	result = LLVMBuildShl (builder, result, create_shift_vector (ctx, result, rhs), "");
	values [ins->dreg] = result;
	break;
}
case OP_ARM64_SLI:
case OP_ARM64_SRI: {
	LLVMTypeRef intrin_result_t = simd_class_to_llvm_type (ctx, ins->klass);
	llvm_ovr_tag_t ovr_tag = ovr_tag_from_llvm_type (intrin_result_t);
	unsigned int element_bits = mono_llvm_get_prim_size_bits (LLVMGetElementType (intrin_result_t));
	int range_min = 0;
	int range_max = element_bits - 1;
	if (ins->opcode == OP_ARM64_SRI) {
		++range_min;
		++range_max;
	}
	int iid = ins->opcode == OP_ARM64_SRI ? INTRINS_AARCH64_ADV_SIMD_SRI : INTRINS_AARCH64_ADV_SIMD_SLI;
	int max_index = range_max - range_min + 1;
	ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, max_index, arg3, intrin_result_t, "arm64_ext");
	LLVMValueRef intrin_args [3] = { lhs, rhs, arg3 };
	int i = 0;
	while (immediate_unroll_next (&ictx, &i)) {
		int shift_const = i + range_min;
		intrin_args [2] = const_int32 (shift_const);
		LLVMValueRef result = call_overloaded_intrins (ctx, iid, ovr_tag, intrin_args, "");
		immediate_unroll_commit (&ictx, shift_const, result);
	}
	immediate_unroll_default (&ictx);
	immediate_unroll_commit_default (&ictx, lhs);
	LLVMValueRef result = immediate_unroll_end (&ictx, &cbb);
	values [ins->dreg] = result;
	break;
}
case OP_ARM64_SQRT_SCALAR: {
	int iid = ins->inst_c0 == MONO_TYPE_R8 ? INTRINS_SQRT : INTRINS_SQRTF;
	LLVMTypeRef t = LLVMTypeOf (lhs);
	LLVMValueRef scalar = LLVMBuildExtractElement (builder, lhs, const_int32 (0), "");
	LLVMValueRef result = call_intrins (ctx, iid, &scalar, "arm64_sqrt_scalar");
	values [ins->dreg] = LLVMBuildInsertElement (builder, LLVMGetUndef (t), result, const_int32 (0), "");
	break;
}
case OP_ARM64_STP:
case OP_ARM64_STP_SCALAR:
case OP_ARM64_STNP:
case OP_ARM64_STNP_SCALAR: {
	gboolean nontemporal = FALSE;
	gboolean scalar = FALSE;
	switch (ins->opcode) {
	case OP_ARM64_STNP: nontemporal = TRUE; break;
	case OP_ARM64_STNP_SCALAR: nontemporal = TRUE; scalar = TRUE; break;
	case OP_ARM64_STP_SCALAR: scalar = TRUE; break;
	}
	LLVMTypeRef rhs_t = LLVMTypeOf (rhs);
	LLVMValueRef val = NULL;
	LLVMTypeRef dst_t = LLVMPointerType (rhs_t, 0);
	if (scalar)
		val = LLVMBuildShuffleVector (builder, rhs, arg3, create_const_vector_2_i32 (0, 2), "");
	else {
		unsigned int rhs_elems = LLVMGetVectorSize (rhs_t);
		LLVMTypeRef rhs_elt_t = LLVMGetElementType (rhs_t);
		dst_t = LLVMPointerType (LLVMVectorType (rhs_elt_t, rhs_elems * 2), 0);
		val = concatenate_vectors (ctx, rhs, arg3);
	}
	LLVMValueRef address = convert (ctx, lhs, dst_t);
	LLVMValueRef store = mono_llvm_build_store (builder, val, address, FALSE, LLVM_BARRIER_NONE);
	if (nontemporal)
		set_nontemporal_flag (store);
	break;
}
case OP_ARM64_LD1_INSERT: {
	LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
	LLVMTypeRef elem_t = LLVMGetElementType (ret_t);
	LLVMValueRef address = convert (ctx, arg3, LLVMPointerType (elem_t, 0));
	unsigned int alignment = mono_llvm_get_prim_size_bits (ret_t) / 8;
	LLVMValueRef result = mono_llvm_build_aligned_load (builder, address, "arm64_ld1_insert", FALSE, alignment);
	result = LLVMBuildInsertElement (builder, lhs, result, rhs, "arm64_ld1_insert");
	values [ins->dreg] = result;
	break;
}
case OP_ARM64_LD1R:
case OP_ARM64_LD1: {
	gboolean replicate = ins->opcode == OP_ARM64_LD1R;
	LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
	unsigned int alignment = mono_llvm_get_prim_size_bits (ret_t) / 8;
	LLVMValueRef address = lhs;
	LLVMTypeRef address_t = LLVMPointerType (ret_t, 0);
	if (replicate) {
		LLVMTypeRef elem_t = LLVMGetElementType (ret_t);
		address_t = LLVMPointerType (elem_t, 0);
	}
	address = convert (ctx, address, address_t);
	LLVMValueRef result = mono_llvm_build_aligned_load (builder, address, "arm64_ld1", FALSE, alignment);
	if (replicate) {
		unsigned int elems = LLVMGetVectorSize (ret_t);
		result = broadcast_element (ctx, result, elems);
	}
	values [ins->dreg] = result;
	break;
}
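/* The load-pair opcodes below produce a value tuple: both vectors are loaded, packed into the tuple, and spilled to the destination address. */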
case OP_ARM64_LDNP:
case OP_ARM64_LDNP_SCALAR:
case OP_ARM64_LDP:
case OP_ARM64_LDP_SCALAR: {
	const char *oname = NULL;
	gboolean nontemporal = FALSE;
	gboolean scalar = FALSE;
	switch (ins->opcode) {
	case OP_ARM64_LDNP: oname = "arm64_ldnp"; nontemporal = TRUE; break;
	case OP_ARM64_LDNP_SCALAR: oname = "arm64_ldnp_scalar"; nontemporal = TRUE; scalar = TRUE; break;
	case OP_ARM64_LDP: oname = "arm64_ldp"; break;
	case OP_ARM64_LDP_SCALAR: oname = "arm64_ldp_scalar"; scalar = TRUE; break;
	}
	if (!addresses [ins->dreg])
		addresses [ins->dreg] = build_named_alloca (ctx, m_class_get_byval_arg (ins->klass), oname);
	LLVMTypeRef ret_t = simd_valuetuple_to_llvm_type (ctx, ins->klass);
	LLVMTypeRef vec_t = LLVMGetElementType (ret_t);
	LLVMValueRef ix = const_int32 (1);
	LLVMTypeRef src_t = LLVMPointerType (scalar ? LLVMGetElementType (vec_t) : vec_t, 0);
	LLVMValueRef src0 = convert (ctx, lhs, src_t);
	LLVMValueRef src1 = LLVMBuildGEP (builder, src0, &ix, 1, oname);
	LLVMValueRef vals [] = { src0, src1 };
	for (int i = 0; i < 2; ++i) {
		vals [i] = LLVMBuildLoad (builder, vals [i], oname);
		if (nontemporal)
			set_nontemporal_flag (vals [i]);
	}
	unsigned int vec_sz = mono_llvm_get_prim_size_bits (vec_t);
	if (scalar) {
		g_assert (vec_sz == 64);
		LLVMValueRef undef = LLVMGetUndef (vec_t);
		for (int i = 0; i < 2; ++i)
			vals [i] = LLVMBuildInsertElement (builder, undef, vals [i], const_int32 (0), oname);
	}
	LLVMValueRef val = LLVMGetUndef (ret_t);
	for (int i = 0; i < 2; ++i)
		val = LLVMBuildInsertValue (builder, val, vals [i], i, oname);
	LLVMTypeRef retptr_t = LLVMPointerType (ret_t, 0);
	LLVMValueRef dst = convert (ctx, addresses [ins->dreg], retptr_t);
	LLVMBuildStore (builder, val, dst);
	values [ins->dreg] = vec_sz == 64 ? val : NULL;
	break;
}
case OP_ARM64_ST1: {
	LLVMTypeRef t = LLVMTypeOf (rhs);
	LLVMValueRef address = convert (ctx, lhs, LLVMPointerType (t, 0));
	unsigned int alignment = mono_llvm_get_prim_size_bits (t) / 8;
	mono_llvm_build_aligned_store (builder, rhs, address, FALSE, alignment);
	break;
}
case OP_ARM64_ST1_SCALAR: {
	LLVMTypeRef t = LLVMGetElementType (LLVMTypeOf (rhs));
	LLVMValueRef val = LLVMBuildExtractElement (builder, rhs, arg3, "arm64_st1_scalar");
	LLVMValueRef address = convert (ctx, lhs, LLVMPointerType (t, 0));
	unsigned int alignment = mono_llvm_get_prim_size_bits (t) / 8;
	mono_llvm_build_aligned_store (builder, val, address, FALSE, alignment);
	break;
}
case OP_ARM64_ADDHN:
case OP_ARM64_ADDHN2:
case OP_ARM64_SUBHN:
case OP_ARM64_SUBHN2:
case OP_ARM64_RADDHN:
case OP_ARM64_RADDHN2:
case OP_ARM64_RSUBHN:
case OP_ARM64_RSUBHN2: {
	LLVMValueRef args [2] = { lhs, rhs };
	gboolean high = FALSE;
	gboolean subtract = FALSE;
	int iid = 0;
	switch (ins->opcode) {
	case OP_ARM64_ADDHN2: high = TRUE; case OP_ARM64_ADDHN: break;
	case OP_ARM64_SUBHN2: high = TRUE; case OP_ARM64_SUBHN: subtract = TRUE; break;
	case OP_ARM64_RSUBHN2: high = TRUE; case OP_ARM64_RSUBHN: iid = INTRINS_AARCH64_ADV_SIMD_RSUBHN; break;
	case OP_ARM64_RADDHN2: high = TRUE; case OP_ARM64_RADDHN: iid = INTRINS_AARCH64_ADV_SIMD_RADDHN; break;
	}
	llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass);
	if (high) {
		args [0] = rhs;
		args [1] = arg3;
		ovr_tag = ovr_tag_smaller_vector (ovr_tag);
	}
	LLVMValueRef result = NULL;
	if (iid != 0)
		result = call_overloaded_intrins (ctx, iid, ovr_tag, args, "");
	else {
		LLVMTypeRef t = LLVMTypeOf (args [0]);
		LLVMTypeRef elt_t = LLVMGetElementType (t);
		unsigned int elems = LLVMGetVectorSize (t);
		unsigned int elem_bits = mono_llvm_get_prim_size_bits (elt_t);
		if (subtract)
			result = LLVMBuildSub (builder, args [0], args [1], "");
		else
			result = LLVMBuildAdd (builder, args [0], args [1], "");
		result = LLVMBuildLShr (builder, result, broadcast_constant (elem_bits / 2, elt_t, elems), "");
		result = LLVMBuildTrunc (builder, result, LLVMVectorType (LLVMIntType (elem_bits / 2), elems), "");
	}
	if (high)
		result = concatenate_vectors (ctx, lhs, result);
	values [ins->dreg] = result;
	break;
}
[1], ""); result = LLVMBuildLShr (builder, result, broadcast_constant (elem_bits / 2, elt_t, elems), ""); result = LLVMBuildTrunc (builder, result, LLVMVectorType (LLVMIntType (elem_bits / 2), elems), ""); } if (high) result = concatenate_vectors (ctx, lhs, result); values [ins->dreg] = result; break; } case OP_ARM64_SADD: case OP_ARM64_UADD: case OP_ARM64_SADD2: case OP_ARM64_UADD2: case OP_ARM64_SSUB: case OP_ARM64_USUB: case OP_ARM64_SSUB2: case OP_ARM64_USUB2: { LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass); gboolean is_unsigned = FALSE; gboolean high = FALSE; gboolean subtract = FALSE; switch (ins->opcode) { case OP_ARM64_SADD2: high = TRUE; case OP_ARM64_SADD: break; case OP_ARM64_UADD2: high = TRUE; case OP_ARM64_UADD: is_unsigned = TRUE; break; case OP_ARM64_SSUB2: high = TRUE; case OP_ARM64_SSUB: subtract = TRUE; break; case OP_ARM64_USUB2: high = TRUE; case OP_ARM64_USUB: subtract = TRUE; is_unsigned = TRUE; break; } LLVMValueRef args [] = { lhs, rhs }; for (int i = 0; i < 2; ++i) { LLVMValueRef arg = args [i]; LLVMTypeRef arg_t = LLVMTypeOf (arg); if (high && arg_t != ret_t) arg = extract_high_elements (ctx, arg); if (is_unsigned) arg = LLVMBuildZExt (builder, arg, ret_t, ""); else arg = LLVMBuildSExt (builder, arg, ret_t, ""); args [i] = arg; } LLVMValueRef result = NULL; if (subtract) result = LLVMBuildSub (builder, args [0], args [1], "arm64_sub"); else result = LLVMBuildAdd (builder, args [0], args [1], "arm64_add"); values [ins->dreg] = result; break; } case OP_ARM64_SABAL: case OP_ARM64_SABAL2: case OP_ARM64_UABAL: case OP_ARM64_UABAL2: case OP_ARM64_SABDL: case OP_ARM64_SABDL2: case OP_ARM64_UABDL: case OP_ARM64_UABDL2: case OP_ARM64_SABA: case OP_ARM64_UABA: case OP_ARM64_SABD: case OP_ARM64_UABD: { LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass); gboolean is_unsigned = FALSE; gboolean high = FALSE; gboolean add = FALSE; gboolean widen = FALSE; switch (ins->opcode) { case OP_ARM64_SABAL2: high = TRUE; case OP_ARM64_SABAL: widen = TRUE; add = TRUE; break; case OP_ARM64_UABAL2: high = TRUE; case OP_ARM64_UABAL: widen = TRUE; add = TRUE; is_unsigned = TRUE; break; case OP_ARM64_SABDL2: high = TRUE; case OP_ARM64_SABDL: widen = TRUE; break; case OP_ARM64_UABDL2: high = TRUE; case OP_ARM64_UABDL: widen = TRUE; is_unsigned = TRUE; break; case OP_ARM64_SABA: add = TRUE; break; case OP_ARM64_UABA: add = TRUE; is_unsigned = TRUE; break; case OP_ARM64_UABD: is_unsigned = TRUE; break; } LLVMValueRef args [] = { lhs, rhs }; if (add) { args [0] = rhs; args [1] = arg3; } if (high) for (int i = 0; i < 2; ++i) args [i] = extract_high_elements (ctx, args [i]); int iid = is_unsigned ? INTRINS_AARCH64_ADV_SIMD_UABD : INTRINS_AARCH64_ADV_SIMD_SABD; llvm_ovr_tag_t ovr_tag = ovr_tag_from_llvm_type (LLVMTypeOf (args [0])); LLVMValueRef result = call_overloaded_intrins (ctx, iid, ovr_tag, args, ""); if (widen) result = LLVMBuildZExt (builder, result, ret_t, ""); if (add) result = LLVMBuildAdd (builder, result, lhs, ""); values [ins->dreg] = result; break; } case OP_ARM64_XHORIZ: { gboolean truncate = FALSE; LLVMTypeRef arg_t = LLVMTypeOf (lhs); LLVMTypeRef elem_t = LLVMGetElementType (arg_t); LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass); llvm_ovr_tag_t ovr_tag = ovr_tag_from_llvm_type (arg_t); if (elem_t == i1_t || elem_t == i2_t) truncate = TRUE; LLVMValueRef result = call_overloaded_intrins (ctx, ins->inst_c0, ovr_tag, &lhs, ""); if (truncate) { // @llvm.aarch64.neon.saddv.i32.v8i16 ought to return an i16, but doesn't in LLVM 9. 
case OP_ARM64_XHORIZ: {
	gboolean truncate = FALSE;
	LLVMTypeRef arg_t = LLVMTypeOf (lhs);
	LLVMTypeRef elem_t = LLVMGetElementType (arg_t);
	LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
	llvm_ovr_tag_t ovr_tag = ovr_tag_from_llvm_type (arg_t);
	if (elem_t == i1_t || elem_t == i2_t)
		truncate = TRUE;
	LLVMValueRef result = call_overloaded_intrins (ctx, ins->inst_c0, ovr_tag, &lhs, "");
	if (truncate) {
		// @llvm.aarch64.neon.saddv.i32.v8i16 ought to return an i16, but doesn't in LLVM 9.
		result = LLVMBuildTrunc (builder, result, elem_t, "");
	}
	result = vector_from_scalar (ctx, ret_t, result);
	values [ins->dreg] = result;
	break;
}
case OP_ARM64_SADDLV:
case OP_ARM64_UADDLV: {
	LLVMTypeRef arg_t = LLVMTypeOf (lhs);
	LLVMTypeRef elem_t = LLVMGetElementType (arg_t);
	LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
	llvm_ovr_tag_t ovr_tag = ovr_tag_from_llvm_type (arg_t);
	gboolean truncate = elem_t == i1_t;
	int iid = ins->opcode == OP_ARM64_UADDLV ? INTRINS_AARCH64_ADV_SIMD_UADDLV : INTRINS_AARCH64_ADV_SIMD_SADDLV;
	LLVMValueRef result = call_overloaded_intrins (ctx, iid, ovr_tag, &lhs, "");
	if (truncate) {
		// @llvm.aarch64.neon.saddlv.i32.v16i8 ought to return an i16, but doesn't in LLVM 9.
		result = LLVMBuildTrunc (builder, result, i2_t, "");
	}
	result = vector_from_scalar (ctx, ret_t, result);
	values [ins->dreg] = result;
	break;
}
case OP_ARM64_UADALP:
case OP_ARM64_SADALP: {
	llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass);
	int iid = ins->opcode == OP_ARM64_UADALP ? INTRINS_AARCH64_ADV_SIMD_UADDLP : INTRINS_AARCH64_ADV_SIMD_SADDLP;
	LLVMValueRef result = call_overloaded_intrins (ctx, iid, ovr_tag, &rhs, "");
	result = LLVMBuildAdd (builder, result, lhs, "");
	values [ins->dreg] = result;
	break;
}
case OP_ARM64_ADDP_SCALAR: {
	llvm_ovr_tag_t ovr_tag = INTRIN_vector128 | INTRIN_int64;
	LLVMValueRef result = call_overloaded_intrins (ctx, INTRINS_AARCH64_ADV_SIMD_UADDV, ovr_tag, &lhs, "arm64_addp_scalar");
	result = LLVMBuildInsertElement (builder, LLVMConstNull (v64_i8_t), result, const_int32 (0), "");
	values [ins->dreg] = result;
	break;
}
case OP_ARM64_FADDP_SCALAR: {
	LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
	LLVMValueRef hi = LLVMBuildExtractElement (builder, lhs, const_int32 (0), "");
	LLVMValueRef lo = LLVMBuildExtractElement (builder, lhs, const_int32 (1), "");
	LLVMValueRef result = LLVMBuildFAdd (builder, hi, lo, "arm64_faddp_scalar");
	result = LLVMBuildInsertElement (builder, LLVMConstNull (ret_t), result, const_int32 (0), "");
	values [ins->dreg] = result;
	break;
}
case OP_ARM64_SXTL:
case OP_ARM64_SXTL2:
case OP_ARM64_UXTL:
case OP_ARM64_UXTL2: {
	gboolean high = FALSE;
	gboolean is_unsigned = FALSE;
	switch (ins->opcode) {
	case OP_ARM64_SXTL2: high = TRUE; break;
	case OP_ARM64_UXTL2: high = TRUE; case OP_ARM64_UXTL: is_unsigned = TRUE; break;
	}
	LLVMTypeRef t = LLVMTypeOf (lhs);
	unsigned int elem_bits = LLVMGetIntTypeWidth (LLVMGetElementType (t));
	unsigned int src_elems = LLVMGetVectorSize (t);
	unsigned int dst_elems = src_elems;
	LLVMValueRef arg = lhs;
	if (high) {
		arg = extract_high_elements (ctx, lhs);
		dst_elems = LLVMGetVectorSize (LLVMTypeOf (arg));
	}
	LLVMTypeRef result_t = LLVMVectorType (LLVMIntType (elem_bits * 2), dst_elems);
	LLVMValueRef result = NULL;
	if (is_unsigned)
		result = LLVMBuildZExt (builder, arg, result_t, "arm64_uxtl");
	else
		result = LLVMBuildSExt (builder, arg, result_t, "arm64_sxtl");
	values [ins->dreg] = result;
	break;
}
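/* The permute opcodes below (TRN/UZP/ZIP) are lowered to a single shufflevector with a precomputed lane mask. */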
case OP_ARM64_TRN1:
case OP_ARM64_TRN2: {
	gboolean high = ins->opcode == OP_ARM64_TRN2;
	LLVMTypeRef t = LLVMTypeOf (lhs);
	unsigned int src_elems = LLVMGetVectorSize (t);
	int mask [MAX_VECTOR_ELEMS] = { 0 };
	int laneix = high ? 1 : 0;
	for (unsigned int i = 0; i < src_elems; i += 2) {
		mask [i] = laneix;
		mask [i + 1] = laneix + src_elems;
		laneix += 2;
	}
	values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs, rhs, create_const_vector_i32 (mask, src_elems), "arm64_uzp");
	break;
}
case OP_ARM64_UZP1:
case OP_ARM64_UZP2: {
	gboolean high = ins->opcode == OP_ARM64_UZP2;
	LLVMTypeRef t = LLVMTypeOf (lhs);
	unsigned int src_elems = LLVMGetVectorSize (t);
	int mask [MAX_VECTOR_ELEMS] = { 0 };
	int laneix = high ? 1 : 0;
	for (unsigned int i = 0; i < src_elems; ++i) {
		mask [i] = laneix;
		laneix += 2;
	}
	values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs, rhs, create_const_vector_i32 (mask, src_elems), "arm64_uzp");
	break;
}
case OP_ARM64_ZIP1:
case OP_ARM64_ZIP2: {
	gboolean high = ins->opcode == OP_ARM64_ZIP2;
	LLVMTypeRef t = LLVMTypeOf (lhs);
	unsigned int src_elems = LLVMGetVectorSize (t);
	int mask [MAX_VECTOR_ELEMS] = { 0 };
	int laneix = high ? src_elems / 2 : 0;
	for (unsigned int i = 0; i < src_elems; i += 2) {
		mask [i] = laneix;
		mask [i + 1] = laneix + src_elems;
		++laneix;
	}
	values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs, rhs, create_const_vector_i32 (mask, src_elems), "arm64_zip");
	break;
}
case OP_ARM64_ABSCOMPARE: {
	IntrinsicId iid = (IntrinsicId) ins->inst_c0;
	gboolean scalar = ins->inst_c1;
	LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
	LLVMTypeRef elem_t = LLVMGetElementType (ret_t);
	llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass);
	ovr_tag = ovr_tag_corresponding_integer (ovr_tag);
	LLVMValueRef args [] = { lhs, rhs };
	LLVMTypeRef result_t = ret_t;
	if (scalar) {
		ovr_tag = ovr_tag_force_scalar (ovr_tag);
		result_t = elem_t;
		for (int i = 0; i < 2; ++i)
			args [i] = scalar_from_vector (ctx, args [i]);
	}
	LLVMValueRef result = call_overloaded_intrins (ctx, iid, ovr_tag, args, "");
	result = LLVMBuildBitCast (builder, result, result_t, "");
	if (scalar)
		result = vector_from_scalar (ctx, ret_t, result);
	values [ins->dreg] = result;
	break;
}
case OP_XOP_OVR_X_X: {
	IntrinsicId iid = (IntrinsicId) ins->inst_c0;
	llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass);
	values [ins->dreg] = call_overloaded_intrins (ctx, iid, ovr_tag, &lhs, "");
	break;
}
case OP_XOP_OVR_X_X_X: {
	IntrinsicId iid = (IntrinsicId) ins->inst_c0;
	llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass);
	LLVMValueRef args [] = { lhs, rhs };
	values [ins->dreg] = call_overloaded_intrins (ctx, iid, ovr_tag, args, "");
	break;
}
case OP_XOP_OVR_X_X_X_X: {
	IntrinsicId iid = (IntrinsicId) ins->inst_c0;
	llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass);
	LLVMValueRef args [] = { lhs, rhs, arg3 };
	values [ins->dreg] = call_overloaded_intrins (ctx, iid, ovr_tag, args, "");
	break;
}
case OP_XOP_OVR_BYSCALAR_X_X_X: {
	IntrinsicId iid = (IntrinsicId) ins->inst_c0;
	llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass);
	LLVMTypeRef t = LLVMTypeOf (lhs);
	unsigned int elems = LLVMGetVectorSize (t);
	LLVMValueRef arg2 = broadcast_element (ctx, scalar_from_vector (ctx, rhs), elems);
	LLVMValueRef args [] = { lhs, arg2 };
	values [ins->dreg] = call_overloaded_intrins (ctx, iid, ovr_tag, args, "");
	break;
}
case OP_XOP_OVR_SCALAR_X_X:
case OP_XOP_OVR_SCALAR_X_X_X:
case OP_XOP_OVR_SCALAR_X_X_X_X: {
	int num_args = 0;
	IntrinsicId iid = (IntrinsicId) ins->inst_c0;
	LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass);
	switch (ins->opcode) {
	case OP_XOP_OVR_SCALAR_X_X: num_args = 1; break;
	case OP_XOP_OVR_SCALAR_X_X_X: num_args = 2; break;
	case OP_XOP_OVR_SCALAR_X_X_X_X: num_args = 3; break;
	}
	/* LLVM 9 NEON intrinsic functions have scalar overloads. Unfortunately
	 * only overloads for 32 and 64-bit integers and floating point types are
	 * supported. 8 and 16-bit integers are unsupported, and will fail during
	 * instruction selection. This is worked around by using a vector
	 * operation and then explicitly clearing the upper bits of the register.
	 */
	ScalarOpFromVectorOpCtx sctx = scalar_op_from_vector_op (ctx, ret_t, ins);
	LLVMValueRef args [3] = { lhs, rhs, arg3 };
	scalar_op_from_vector_op_process_args (&sctx, args, num_args);
	LLVMValueRef result = call_overloaded_intrins (ctx, iid, sctx.ovr_tag, args, "");
	result = scalar_op_from_vector_op_process_result (&sctx, result);
	values [ins->dreg] = result;
	break;
}
#endif
case OP_DUMMY_USE:
	break;

/*
 * EXCEPTION HANDLING
 */
case OP_IMPLICIT_EXCEPTION:
	/* This marks a place where an implicit exception can happen */
	if (bb->region != -1)
		set_failure (ctx, "implicit-exception");
	break;
case OP_THROW:
case OP_RETHROW: {
	gboolean rethrow = (ins->opcode == OP_RETHROW);
	if (ctx->llvm_only) {
		emit_llvmonly_throw (ctx, bb, rethrow, lhs);
		has_terminator = TRUE;
		ctx->unreachable [bb->block_num] = TRUE;
	} else {
		emit_throw (ctx, bb, rethrow, lhs);
		builder = ctx->builder;
	}
	break;
}
case OP_CALL_HANDLER: {
	/*
	 * We don't 'call' handlers, but instead simply branch to them.
	 * The code generated by ENDFINALLY will branch back to us.
	 */
	LLVMBasicBlockRef noex_bb;
	GSList *bb_list;
	BBInfo *info = &bblocks [ins->inst_target_bb->block_num];

	bb_list = info->call_handler_return_bbs;

	/*
	 * Set the indicator variable for the finally clause.
	 */
	lhs = info->finally_ind;
	g_assert (lhs);
	LLVMBuildStore (builder, LLVMConstInt (LLVMInt32Type (), g_slist_length (bb_list) + 1, FALSE), lhs);

	/* Branch to the finally clause */
	LLVMBuildBr (builder, info->call_handler_target_bb);

	noex_bb = gen_bb (ctx, "CALL_HANDLER_CONT_BB");
	info->call_handler_return_bbs = g_slist_append_mempool (cfg->mempool, info->call_handler_return_bbs, noex_bb);

	builder = ctx->builder = create_builder (ctx);
	LLVMPositionBuilderAtEnd (ctx->builder, noex_bb);

	bblocks [bb->block_num].end_bblock = noex_bb;
	break;
}
case OP_START_HANDLER: {
	break;
}
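/* ENDFINALLY branches back to the CALL_HANDLER continuation blocks via a switch on the indicator variable that CALL_HANDLER stored above. */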
case OP_ENDFINALLY: {
	LLVMBasicBlockRef resume_bb;
	MonoBasicBlock *handler_bb;
	LLVMValueRef val, switch_ins, callee;
	GSList *bb_list;
	BBInfo *info;
	gboolean is_fault = MONO_REGION_FLAGS (bb->region) == MONO_EXCEPTION_CLAUSE_FAULT;

	/*
	 * Fault clauses are like finally clauses, but they are only called if an exception is thrown.
	 */
	if (!is_fault) {
		handler_bb = (MonoBasicBlock*)g_hash_table_lookup (ctx->region_to_handler, GUINT_TO_POINTER (mono_get_block_region_notry (cfg, bb->region)));
		g_assert (handler_bb);
		info = &bblocks [handler_bb->block_num];
		lhs = info->finally_ind;
		g_assert (lhs);

		bb_list = info->call_handler_return_bbs;

		resume_bb = gen_bb (ctx, "ENDFINALLY_RESUME_BB");

		/* Load the finally variable */
		val = LLVMBuildLoad (builder, lhs, "");

		/* Reset the variable */
		LLVMBuildStore (builder, LLVMConstInt (LLVMInt32Type (), 0, FALSE), lhs);

		/* Branch to either resume_bb, or to the bblocks in bb_list */
		switch_ins = LLVMBuildSwitch (builder, val, resume_bb, g_slist_length (bb_list));
		/*
		 * The other targets are added at the end to handle OP_CALL_HANDLER
		 * opcodes processed later.
		 */
		info->endfinally_switch_ins_list = g_slist_append_mempool (cfg->mempool, info->endfinally_switch_ins_list, switch_ins);

		builder = ctx->builder = create_builder (ctx);
		LLVMPositionBuilderAtEnd (ctx->builder, resume_bb);
	}

	if (ctx->llvm_only) {
		if (!cfg->deopt) {
			emit_resume_eh (ctx, bb);
		} else {
			/* Not needed */
			LLVMBuildUnreachable (builder);
		}
	} else {
		LLVMTypeRef icall_sig = LLVMFunctionType (LLVMVoidType (), NULL, 0, FALSE);
		if (ctx->cfg->compile_aot) {
			callee = get_callee (ctx, icall_sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (MONO_JIT_ICALL_mono_llvm_resume_unwind_trampoline));
		} else {
			callee = get_jit_callee (ctx, "llvm_resume_unwind_trampoline", icall_sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (MONO_JIT_ICALL_mono_llvm_resume_unwind_trampoline));
		}
		LLVMBuildCall (builder, callee, NULL, 0, "");
		LLVMBuildUnreachable (builder);
	}

	has_terminator = TRUE;
	break;
}
case OP_ENDFILTER: {
	g_assert (cfg->llvm_only && cfg->deopt);
	LLVMBuildUnreachable (builder);
	has_terminator = TRUE;
	break;
}
case OP_IL_SEQ_POINT:
	break;
default: {
	char reason [128];

	sprintf (reason, "opcode %s", mono_inst_name (ins->opcode));
	set_failure (ctx, reason);
	break;
}
}

if (!ctx_ok (ctx))
	break;

/* Convert the value to the type required by phi nodes */
if (spec [MONO_INST_DEST] != ' ' && !MONO_IS_STORE_MEMBASE (ins) && ctx->vreg_types [ins->dreg]) {
	if (ctx->is_vphi [ins->dreg])
		/* vtypes */
		values [ins->dreg] = addresses [ins->dreg];
	else
		values [ins->dreg] = convert (ctx, values [ins->dreg], ctx->vreg_types [ins->dreg]);
}

/* Add stores for volatile/ref variables */
if (spec [MONO_INST_DEST] != ' ' && spec [MONO_INST_DEST] != 'v' && !MONO_IS_STORE_MEMBASE (ins)) {
	if (!skip_volatile_store)
		emit_volatile_store (ctx, ins->dreg);
#ifdef TARGET_WASM
	if (vreg_is_ref (cfg, ins->dreg) && ctx->values [ins->dreg])
		emit_gc_pin (ctx, builder, ins->dreg);
#endif
}
}

if (!ctx_ok (ctx))
	return;

if (!has_terminator && bb->next_bb && (bb == cfg->bb_entry || bb->in_count > 0)) {
	LLVMBuildBr (builder, get_bb (ctx, bb->next_bb));
}

if (bb == cfg->bb_exit && sig->ret->type == MONO_TYPE_VOID) {
	emit_dbg_loc (ctx, builder, cfg->header->code + cfg->header->code_size - 1);
	LLVMBuildRetVoid (builder);
}

if (bb == cfg->bb_entry)
	ctx->last_alloca = LLVMGetLastInstruction (get_bb (ctx, cfg->bb_entry));
}
/*
 * mono_llvm_check_method_supported:
 *
 *   Do some quick checks to decide whether cfg->method can be compiled by LLVM, to avoid
 * compiling a method twice.
 */
void
mono_llvm_check_method_supported (MonoCompile *cfg)
{
	int i, j;

#ifdef TARGET_WASM
	if (mono_method_signature_internal (cfg->method)->call_convention == MONO_CALL_VARARG) {
		cfg->exception_message = g_strdup ("vararg callconv");
		cfg->disable_llvm = TRUE;
		return;
	}
#endif

	if (cfg->llvm_only)
		return;

	if (cfg->method->save_lmf) {
		cfg->exception_message = g_strdup ("lmf");
		cfg->disable_llvm = TRUE;
	}
	if (cfg->disable_llvm)
		return;

	/*
	 * Nested clauses where one of the clauses is a finally clause is
	 * not supported, because LLVM can't figure out the control flow,
	 * probably because we resume exception handling by calling our
	 * own function instead of using the 'resume' llvm instruction.
	 */
	for (i = 0; i < cfg->header->num_clauses; ++i) {
		for (j = 0; j < cfg->header->num_clauses; ++j) {
			MonoExceptionClause *clause1 = &cfg->header->clauses [i];
			MonoExceptionClause *clause2 = &cfg->header->clauses [j];

			// FIXME: Nested try clauses fail in some cases too, i.e. #37273
			if (i != j && clause1->try_offset >= clause2->try_offset && clause1->handler_offset <= clause2->handler_offset) {
				//(clause1->flags == MONO_EXCEPTION_CLAUSE_FINALLY || clause2->flags == MONO_EXCEPTION_CLAUSE_FINALLY)) {
				cfg->exception_message = g_strdup ("nested clauses");
				cfg->disable_llvm = TRUE;
				break;
			}
		}
	}
	if (cfg->disable_llvm)
		return;

	/* FIXME: */
	if (cfg->method->dynamic) {
		cfg->exception_message = g_strdup ("dynamic.");
		cfg->disable_llvm = TRUE;
	}
	if (cfg->disable_llvm)
		return;
}

static LLVMCallInfo*
get_llvm_call_info (MonoCompile *cfg, MonoMethodSignature *sig)
{
	LLVMCallInfo *linfo;
	int i;

	if (cfg->gsharedvt && cfg->llvm_only && mini_is_gsharedvt_variable_signature (sig)) {
		int i, n, pindex;

		/*
		 * Gsharedvt methods have the following calling convention:
		 * - all arguments are passed by ref, even non generic ones
		 * - the return value is returned by ref too, using a vret
		 *   argument passed after 'this'.
		 */
		n = sig->param_count + sig->hasthis;
		linfo = (LLVMCallInfo*)mono_mempool_alloc0 (cfg->mempool, sizeof (LLVMCallInfo) + (sizeof (LLVMArgInfo) * n));

		pindex = 0;
		if (sig->hasthis)
			linfo->args [pindex ++].storage = LLVMArgNormal;
		if (sig->ret->type != MONO_TYPE_VOID) {
			if (mini_is_gsharedvt_variable_type (sig->ret))
				linfo->ret.storage = LLVMArgGsharedvtVariable;
			else if (mini_type_is_vtype (sig->ret))
				linfo->ret.storage = LLVMArgGsharedvtFixedVtype;
			else
				linfo->ret.storage = LLVMArgGsharedvtFixed;
			linfo->vret_arg_index = pindex;
		} else {
			linfo->ret.storage = LLVMArgNone;
		}

		for (i = 0; i < sig->param_count; ++i) {
			if (m_type_is_byref (sig->params [i]))
				linfo->args [pindex].storage = LLVMArgNormal;
			else if (mini_is_gsharedvt_variable_type (sig->params [i]))
				linfo->args [pindex].storage = LLVMArgGsharedvtVariable;
			else if (mini_type_is_vtype (sig->params [i]))
				linfo->args [pindex].storage = LLVMArgGsharedvtFixedVtype;
			else
				linfo->args [pindex].storage = LLVMArgGsharedvtFixed;
			linfo->args [pindex].type = sig->params [i];
			pindex ++;
		}
		return linfo;
	}

	linfo = mono_arch_get_llvm_call_info (cfg, sig);
	linfo->dummy_arg_pindex = -1;
	for (i = 0; i < sig->param_count; ++i)
		linfo->args [i + sig->hasthis].type = sig->params [i];

	return linfo;
}

static void emit_method_inner (EmitContext *ctx);

static void
free_ctx (EmitContext *ctx)
{
	GSList *l;

	g_free (ctx->values);
	g_free (ctx->addresses);
	g_free (ctx->vreg_types);
	g_free (ctx->is_vphi);
	g_free (ctx->vreg_cli_types);
	g_free (ctx->is_dead);
	g_free (ctx->unreachable);
	g_free (ctx->gc_var_indexes);
	g_ptr_array_free (ctx->phi_values, TRUE);
	g_free (ctx->bblocks);
	g_hash_table_destroy (ctx->region_to_handler);
	g_hash_table_destroy (ctx->clause_to_handler);
	g_hash_table_destroy (ctx->jit_callees);
	g_ptr_array_free (ctx->callsite_list, TRUE);
	g_free (ctx->method_name);
	g_ptr_array_free (ctx->bblock_list, TRUE);

	for (l = ctx->builders; l; l = l->next) {
		LLVMBuilderRef builder = (LLVMBuilderRef)l->data;
		LLVMDisposeBuilder (builder);
	}

	g_free (ctx);
}
static gboolean
is_linkonce_method (MonoMethod *method)
{
#ifdef TARGET_WASM
	/*
	 * Under wasm, linkonce works, so use it instead of the dedup pass for wrappers at least.
	 * FIXME: Use for everything, i.e. can_dedup ().
	 * FIXME: Fails System.Core tests
	 * -> amodule->sorted_methods contains duplicates, screwing up jit tables.
	 */
	// FIXME: This works, but the aot data for the methods is still kept, so size still increases
#if 0
	if (method->wrapper_type == MONO_WRAPPER_OTHER) {
		WrapperInfo *info = mono_marshal_get_wrapper_info (method);
		if (info->subtype == WRAPPER_SUBTYPE_GSHAREDVT_IN_SIG || info->subtype == WRAPPER_SUBTYPE_GSHAREDVT_OUT_SIG)
			return TRUE;
	}
#endif
#endif
	return FALSE;
}

/*
 * mono_llvm_emit_method:
 *
 *   Emit LLVM IL from the mono IL, and compile it to native code using LLVM.
 */
void
mono_llvm_emit_method (MonoCompile *cfg)
{
	EmitContext *ctx;
	char *method_name;
	gboolean is_linkonce = FALSE;
	int i;

	if (cfg->skip)
		return;

	/* The code below might acquire the loader lock, so use it for global locking */
	mono_loader_lock ();

	ctx = g_new0 (EmitContext, 1);
	ctx->cfg = cfg;
	ctx->mempool = cfg->mempool;

	/*
	 * This maps vregs to the LLVM instruction defining them
	 */
	ctx->values = g_new0 (LLVMValueRef, cfg->next_vreg);
	/*
	 * This maps vregs for volatile variables to the LLVM instruction defining their
	 * address.
	 */
	ctx->addresses = g_new0 (LLVMValueRef, cfg->next_vreg);
	ctx->vreg_types = g_new0 (LLVMTypeRef, cfg->next_vreg);
	ctx->is_vphi = g_new0 (gboolean, cfg->next_vreg);
	ctx->vreg_cli_types = g_new0 (MonoType*, cfg->next_vreg);
	ctx->phi_values = g_ptr_array_sized_new (256);
	/*
	 * This signals whether the vreg was defined by a phi node with no input vars
	 * (i.e. all its input bblocks end with NOT_REACHABLE).
	 */
	ctx->is_dead = g_new0 (gboolean, cfg->next_vreg);
	/* Whether the bblock is unreachable */
	ctx->unreachable = g_new0 (gboolean, cfg->max_block_num);
	ctx->bblock_list = g_ptr_array_sized_new (256);

	ctx->region_to_handler = g_hash_table_new (NULL, NULL);
	ctx->clause_to_handler = g_hash_table_new (NULL, NULL);
	ctx->callsite_list = g_ptr_array_new ();
	ctx->jit_callees = g_hash_table_new (NULL, NULL);
	if (cfg->compile_aot) {
		ctx->module = &aot_module;

		/*
		 * Allow the linker to discard duplicate copies of wrappers, generic instances etc. by using the 'linkonce'
		 * linkage for them. This requires the following:
		 * - the method needs to have a unique mangled name
		 * - llvmonly mode, since the code in aot-runtime.c would initialize got slots in the wrong aot image etc.
		 */
		if (ctx->module->llvm_only && ctx->module->static_link && is_linkonce_method (cfg->method))
			is_linkonce = TRUE;
		if (is_linkonce || mono_aot_is_externally_callable (cfg->method))
			method_name = mono_aot_get_mangled_method_name (cfg->method);
		else
			method_name = mono_aot_get_method_name (cfg);
		cfg->llvm_method_name = g_strdup (method_name);
	} else {
		ctx->module = init_jit_module ();
		method_name = mono_method_full_name (cfg->method, TRUE);
	}
	ctx->method_name = method_name;
	ctx->is_linkonce = is_linkonce;

	if (cfg->compile_aot) {
		ctx->lmodule = ctx->module->lmodule;
	} else {
		ctx->lmodule = LLVMModuleCreateWithName (g_strdup_printf ("jit-module-%s", cfg->method->name));
	}
	ctx->llvm_only = ctx->module->llvm_only;
#ifdef TARGET_WASM
	ctx->emit_dummy_arg = TRUE;
#endif

	emit_method_inner (ctx);

	if (!ctx_ok (ctx)) {
		if (ctx->lmethod) {
			/* Need to add unused phi nodes as they can be referenced by other values */
			LLVMBasicBlockRef phi_bb = LLVMAppendBasicBlock (ctx->lmethod, "PHI_BB");
			LLVMBuilderRef builder;

			builder = create_builder (ctx);
			LLVMPositionBuilderAtEnd (builder, phi_bb);

			for (i = 0; i < ctx->phi_values->len; ++i) {
				LLVMValueRef v = (LLVMValueRef)g_ptr_array_index (ctx->phi_values, i);
				if (LLVMGetInstructionParent (v) == NULL)
					LLVMInsertIntoBuilder (builder, v);
			}

			if (ctx->module->llvm_only && ctx->module->static_link && cfg->interp) {
				/* The caller will retry compilation */
				LLVMDeleteFunction (ctx->lmethod);
			} else if (ctx->module->llvm_only && ctx->module->static_link) {
				// Keep a stub for the function since it might be called directly
				int nbbs = LLVMCountBasicBlocks (ctx->lmethod);
				LLVMBasicBlockRef *bblocks = g_new0 (LLVMBasicBlockRef, nbbs);
				LLVMGetBasicBlocks (ctx->lmethod, bblocks);
				for (int i = 0; i < nbbs; ++i)
					LLVMRemoveBasicBlockFromParent (bblocks [i]);

				LLVMBasicBlockRef entry_bb = LLVMAppendBasicBlock (ctx->lmethod, "ENTRY");
				builder = create_builder (ctx);
				LLVMPositionBuilderAtEnd (builder, entry_bb);
				ctx->builder = builder;

				LLVMTypeRef sig = LLVMFunctionType0 (LLVMVoidType (), FALSE);
				LLVMValueRef callee = get_callee (ctx, sig, MONO_PATCH_INFO_JIT_ICALL_ADDR, GUINT_TO_POINTER (MONO_JIT_ICALL_mini_llvmonly_throw_nullref_exception));
				LLVMBuildCall (builder, callee, NULL, 0, "");
				LLVMBuildUnreachable (builder);

				/* Clean references to instructions inside the method */
				for (int i = 0; i < ctx->callsite_list->len; ++i) {
					CallSite *callsite = (CallSite*)g_ptr_array_index (ctx->callsite_list, i);
					if (callsite->lmethod == ctx->lmethod)
						callsite->load = NULL;
				}
			} else {
				LLVMDeleteFunction (ctx->lmethod);
			}
		}
	}

	free_ctx (ctx);

	mono_loader_unlock ();
}

static void
emit_method_inner (EmitContext *ctx)
{
	MonoCompile *cfg = ctx->cfg;
	MonoMethodSignature *sig;
	MonoBasicBlock *bb;
	LLVMTypeRef method_type;
	LLVMValueRef method = NULL;
	LLVMValueRef *values = ctx->values;
	int i, max_block_num, bb_index;
	gboolean llvmonly_fail = FALSE;
	LLVMCallInfo *linfo;
	LLVMModuleRef lmodule = ctx->lmodule;
	BBInfo *bblocks;
	GPtrArray *bblock_list = ctx->bblock_list;
	MonoMethodHeader *header;
	MonoExceptionClause *clause;
	char **names;
	LLVMBuilderRef entry_builder = NULL;
	LLVMBasicBlockRef entry_bb = NULL;

	if (cfg->gsharedvt && !cfg->llvm_only) {
		set_failure (ctx, "gsharedvt");
		return;
	}
return; } } } #endif // If we come upon one of the init_method wrappers, we need to find // the method that we have already emitted and tell LLVM that this // managed method info for the wrapper is associated with this method // we constructed ourselves from LLVM IR. // // This is necessary to unwind through the init_method, in the case that // it has to run a static cctor that throws an exception if (cfg->method->wrapper_type == MONO_WRAPPER_OTHER) { WrapperInfo *info = mono_marshal_get_wrapper_info (cfg->method); if (info->subtype == WRAPPER_SUBTYPE_AOT_INIT) { method = get_init_func (ctx->module, info->d.aot_init.subtype); ctx->lmethod = method; ctx->module->max_method_idx = MAX (ctx->module->max_method_idx, cfg->method_index); const char *init_name = mono_marshal_get_aot_init_wrapper_name (info->d.aot_init.subtype); ctx->method_name = g_strdup_printf ("%s_%s", ctx->module->global_prefix, init_name); ctx->cfg->asm_symbol = g_strdup (ctx->method_name); if (!cfg->llvm_only && ctx->module->external_symbols) { LLVMSetLinkage (method, LLVMExternalLinkage); LLVMSetVisibility (method, LLVMHiddenVisibility); } /* Not looked up at runtime */ g_hash_table_insert (ctx->module->no_method_table_lmethods, method, method); goto after_codegen; } else if (info->subtype == WRAPPER_SUBTYPE_LLVM_FUNC) { g_assert (info->d.llvm_func.subtype == LLVM_FUNC_WRAPPER_GC_POLL); if (cfg->compile_aot) { method = ctx->module->gc_poll_cold_wrapper; g_assert (method); } else { method = emit_icall_cold_wrapper (ctx->module, lmodule, MONO_JIT_ICALL_mono_threads_state_poll, FALSE); } ctx->lmethod = method; ctx->module->max_method_idx = MAX (ctx->module->max_method_idx, cfg->method_index); ctx->method_name = g_strdup (LLVMGetValueName (method)); //g_strdup_printf ("%s_%s", ctx->module->global_prefix, LLVMGetValueName (method)); ctx->cfg->asm_symbol = g_strdup (ctx->method_name); if (!cfg->llvm_only && ctx->module->external_symbols) { LLVMSetLinkage (method, LLVMExternalLinkage); LLVMSetVisibility (method, LLVMHiddenVisibility); } goto after_codegen; } } sig = mono_method_signature_internal (cfg->method); ctx->sig = sig; linfo = get_llvm_call_info (cfg, sig); ctx->linfo = linfo; if (!ctx_ok (ctx)) return; if (cfg->rgctx_var) linfo->rgctx_arg = TRUE; else if (needs_extra_arg (ctx, cfg->method)) linfo->dummy_arg = TRUE; ctx->method_type = method_type = sig_to_llvm_sig_full (ctx, sig, linfo); if (!ctx_ok (ctx)) return; method = LLVMAddFunction (lmodule, ctx->method_name, method_type); ctx->lmethod = method; if (!cfg->llvm_only) LLVMSetFunctionCallConv (method, LLVMMono1CallConv); /* if the method doesn't contain * (1) a call (so it's a leaf method) * (2) and no loops * we can skip the GC safepoint on method entry. 
	/*
	 * If the method doesn't contain
	 * (1) a call (so it's a leaf method), and
	 * (2) any loops,
	 * we can skip the GC safepoint on method entry.
	 */
	gboolean requires_safepoint;
	requires_safepoint = cfg->has_calls;
	if (!requires_safepoint) {
		for (bb = cfg->bb_entry->next_bb; bb; bb = bb->next_bb) {
			if (bb->loop_body_start || (bb->flags & BB_EXCEPTION_HANDLER)) {
				requires_safepoint = TRUE;
			}
		}
	}
	if (cfg->method->wrapper_type) {
		if (cfg->method->wrapper_type == MONO_WRAPPER_ALLOC || cfg->method->wrapper_type == MONO_WRAPPER_WRITE_BARRIER) {
			requires_safepoint = FALSE;
		} else {
			WrapperInfo *info = mono_marshal_get_wrapper_info (cfg->method);
			switch (info->subtype) {
			case WRAPPER_SUBTYPE_GSHAREDVT_IN:
			case WRAPPER_SUBTYPE_GSHAREDVT_OUT:
			case WRAPPER_SUBTYPE_GSHAREDVT_IN_SIG:
			case WRAPPER_SUBTYPE_GSHAREDVT_OUT_SIG:
				/* Arguments are not used after the call */
				requires_safepoint = FALSE;
				break;
			}
		}
	}
	ctx->has_safepoints = requires_safepoint;

	if (!cfg->llvm_only && mono_threads_are_safepoints_enabled () && requires_safepoint) {
		if (!cfg->compile_aot) {
			LLVMSetGC (method, "coreclr");
			emit_gc_safepoint_poll (ctx->module, ctx->lmodule, cfg);
		} else {
			LLVMSetGC (method, "coreclr");
		}
	}
	LLVMSetLinkage (method, LLVMPrivateLinkage);

	mono_llvm_add_func_attr (method, LLVM_ATTR_UW_TABLE);

	if (cfg->disable_omit_fp)
		mono_llvm_add_func_attr_nv (method, "frame-pointer", "all");

	if (cfg->compile_aot) {
		if (mono_aot_is_externally_callable (cfg->method)) {
			LLVMSetLinkage (method, LLVMExternalLinkage);
		} else {
			LLVMSetLinkage (method, LLVMInternalLinkage);
			//all methods have internal visibility when doing llvm_only
			if (!cfg->llvm_only && ctx->module->external_symbols) {
				LLVMSetLinkage (method, LLVMExternalLinkage);
				LLVMSetVisibility (method, LLVMHiddenVisibility);
			}
		}
		if (ctx->is_linkonce) {
			LLVMSetLinkage (method, LLVMLinkOnceAnyLinkage);
			LLVMSetVisibility (method, LLVMDefaultVisibility);
		}
	} else {
		LLVMSetLinkage (method, LLVMExternalLinkage);
	}

	if (cfg->method->save_lmf && !cfg->llvm_only) {
		set_failure (ctx, "lmf");
		return;
	}

	if (sig->pinvoke && cfg->method->wrapper_type != MONO_WRAPPER_RUNTIME_INVOKE && !cfg->llvm_only) {
		set_failure (ctx, "pinvoke signature");
		return;
	}

#ifdef TARGET_WASM
	if (ctx->module->interp && cfg->header->code_size > 100000 && !cfg->interp_entry_only) {
		/* Large methods slow down llvm too much */
		set_failure (ctx, "il code too large.");
		return;
	}
#endif

	header = cfg->header;
	for (i = 0; i < header->num_clauses; ++i) {
		clause = &header->clauses [i];
		if (clause->flags != MONO_EXCEPTION_CLAUSE_FINALLY && clause->flags != MONO_EXCEPTION_CLAUSE_FAULT && clause->flags != MONO_EXCEPTION_CLAUSE_NONE) {
			if (cfg->llvm_only) {
				if (!cfg->deopt && !cfg->interp_entry_only)
					llvmonly_fail = TRUE;
			} else {
				set_failure (ctx, "non-finally/catch/fault clause.");
				return;
			}
		}
	}
	if (header->num_clauses || (cfg->method->iflags & METHOD_IMPL_ATTRIBUTE_NOINLINING) || cfg->no_inline)
		/* We can't handle inlined methods with clauses */
		mono_llvm_add_func_attr (method, LLVM_ATTR_NO_INLINE);

	for (int i = 0; i < cfg->header->num_clauses; i++) {
		MonoExceptionClause *clause = &cfg->header->clauses [i];
		if (clause->flags == MONO_EXCEPTION_CLAUSE_NONE || clause->flags == MONO_EXCEPTION_CLAUSE_FILTER)
			ctx->has_catch = TRUE;
	}
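	/* Bind the LLVM parameters to vregs: the rgctx argument, the vret address, 'this', and then the managed arguments. */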
	if (linfo->rgctx_arg) {
		ctx->rgctx_arg = LLVMGetParam (method, linfo->rgctx_arg_pindex);
		ctx->rgctx_arg_pindex = linfo->rgctx_arg_pindex;
		/*
		 * We mark the rgctx parameter with the inreg attribute, which is mapped to
		 * MONO_ARCH_RGCTX_REG in the Mono calling convention in llvm, i.e.
		 * CC_X86_64_Mono in X86CallingConv.td.
		 */
		if (!ctx->llvm_only)
			mono_llvm_add_param_attr (ctx->rgctx_arg, LLVM_ATTR_IN_REG);
		LLVMSetValueName (ctx->rgctx_arg, "rgctx");
	} else {
		ctx->rgctx_arg_pindex = -1;
	}
	if (cfg->vret_addr) {
		values [cfg->vret_addr->dreg] = LLVMGetParam (method, linfo->vret_arg_pindex);
		LLVMSetValueName (values [cfg->vret_addr->dreg], "vret");
		if (linfo->ret.storage == LLVMArgVtypeByRef) {
			mono_llvm_add_param_attr (LLVMGetParam (method, linfo->vret_arg_pindex), LLVM_ATTR_STRUCT_RET);
			mono_llvm_add_param_attr (LLVMGetParam (method, linfo->vret_arg_pindex), LLVM_ATTR_NO_ALIAS);
		}
	}

	if (sig->hasthis) {
		ctx->this_arg_pindex = linfo->this_arg_pindex;
		ctx->this_arg = LLVMGetParam (method, linfo->this_arg_pindex);
		values [cfg->args [0]->dreg] = ctx->this_arg;
		LLVMSetValueName (values [cfg->args [0]->dreg], "this");
	}

	if (linfo->dummy_arg)
		LLVMSetValueName (LLVMGetParam (method, linfo->dummy_arg_pindex), "dummy_arg");

	names = g_new (char *, sig->param_count);
	mono_method_get_param_names (cfg->method, (const char **) names);

	/* Set parameter names/attributes */
	for (i = 0; i < sig->param_count; ++i) {
		LLVMArgInfo *ainfo = &linfo->args [i + sig->hasthis];
		char *name;
		int pindex = ainfo->pindex + ainfo->ndummy_fpargs;
		int j;

		for (j = 0; j < ainfo->ndummy_fpargs; ++j) {
			name = g_strdup_printf ("dummy_%d_%d", i, j);
			LLVMSetValueName (LLVMGetParam (method, ainfo->pindex + j), name);
			g_free (name);
		}

		if (ainfo->storage == LLVMArgVtypeInReg && ainfo->pair_storage [0] == LLVMArgNone && ainfo->pair_storage [1] == LLVMArgNone)
			continue;

		values [cfg->args [i + sig->hasthis]->dreg] = LLVMGetParam (method, pindex);
		if (ainfo->storage == LLVMArgGsharedvtFixed || ainfo->storage == LLVMArgGsharedvtFixedVtype) {
			if (names [i] && names [i][0] != '\0')
				name = g_strdup_printf ("p_arg_%s", names [i]);
			else
				name = g_strdup_printf ("p_arg_%d", i);
		} else {
			if (names [i] && names [i][0] != '\0')
				name = g_strdup_printf ("arg_%s", names [i]);
			else
				name = g_strdup_printf ("arg_%d", i);
		}
		LLVMSetValueName (LLVMGetParam (method, pindex), name);
		g_free (name);

		if (ainfo->storage == LLVMArgVtypeByVal)
			mono_llvm_add_param_attr (LLVMGetParam (method, pindex), LLVM_ATTR_BY_VAL);

		if (ainfo->storage == LLVMArgVtypeByRef || ainfo->storage == LLVMArgVtypeAddr) {
			/* For OP_LDADDR */
			cfg->args [i + sig->hasthis]->opcode = OP_VTARG_ADDR;
		}
#ifdef TARGET_WASM
		if (ainfo->storage == LLVMArgVtypeByRef) {
			/* This causes llvm to make a copy of the value which is what we need */
			mono_llvm_add_param_byval_attr (LLVMGetParam (method, pindex), LLVMGetElementType (LLVMTypeOf (LLVMGetParam (method, pindex))));
		}
#endif
	}
	g_free (names);

	if (ctx->module->emit_dwarf && cfg->compile_aot && mono_debug_enabled ()) {
		ctx->minfo = mono_debug_lookup_method (cfg->method);
		ctx->dbg_md = emit_dbg_subprogram (ctx, cfg, method, ctx->method_name);
	}

	max_block_num = 0;
	for (bb = cfg->bb_entry; bb; bb = bb->next_bb)
		max_block_num = MAX (max_block_num, bb->block_num);
	ctx->bblocks = bblocks = g_new0 (BBInfo, max_block_num + 1);

	/* Add branches between non-consecutive bblocks */
	for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
		if (bb->last_ins && MONO_IS_COND_BRANCH_OP (bb->last_ins) &&
			bb->next_bb != bb->last_ins->inst_false_bb) {
			MonoInst *inst = (MonoInst*)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoInst));
			inst->opcode = OP_BR;
			inst->inst_target_bb = bb->last_ins->inst_false_bb;
			mono_bblock_add_inst (bb, inst);
		}
	}

	/*
	 * Make a first pass over the code to precreate PHI nodes/set INDIRECT flags.
	 */
	for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
		MonoInst *ins;
		LLVMBuilderRef builder;
		char *dname;
		char dname_buf[128];

		builder = create_builder (ctx);

		for (ins = bb->code; ins; ins = ins->next) {
			switch (ins->opcode) {
			case OP_PHI:
			case OP_FPHI:
			case OP_VPHI:
			case OP_XPHI: {
				LLVMTypeRef phi_type = llvm_type_to_stack_type (cfg, type_to_llvm_type (ctx, m_class_get_byval_arg (ins->klass)));

				if (!ctx_ok (ctx))
					return;

				if (cfg->interp_entry_only)
					break;

				if (ins->opcode == OP_VPHI) {
					/* Treat valuetype PHI nodes as operating on the address itself */
					g_assert (ins->klass);
					phi_type = LLVMPointerType (type_to_llvm_type (ctx, m_class_get_byval_arg (ins->klass)), 0);
				}

				/*
				 * Have to precreate these, as they can be referenced by
				 * earlier instructions.
				 */
				sprintf (dname_buf, "t%d", ins->dreg);
				dname = dname_buf;
				values [ins->dreg] = LLVMBuildPhi (builder, phi_type, dname);

				if (ins->opcode == OP_VPHI)
					ctx->addresses [ins->dreg] = values [ins->dreg];

				g_ptr_array_add (ctx->phi_values, values [ins->dreg]);

				/*
				 * Set the expected type of the incoming arguments since these have
				 * to have the same type.
				 */
				for (i = 0; i < ins->inst_phi_args [0]; i++) {
					int sreg1 = ins->inst_phi_args [i + 1];
					if (sreg1 != -1) {
						if (ins->opcode == OP_VPHI)
							ctx->is_vphi [sreg1] = TRUE;
						ctx->vreg_types [sreg1] = phi_type;
					}
				}
				break;
			}
			case OP_LDADDR:
				((MonoInst*)ins->inst_p0)->flags |= MONO_INST_INDIRECT;
				break;
			default:
				break;
			}
		}
	}

	/*
	 * Create an ordering for bblocks, use the depth first order first, then
	 * put the exception handling bblocks last.
	 */
	for (bb_index = 0; bb_index < cfg->num_bblocks; ++bb_index) {
		bb = cfg->bblocks [bb_index];
		if (!(bb->region != -1 && !MONO_BBLOCK_IS_IN_REGION (bb, MONO_REGION_TRY))) {
			g_ptr_array_add (bblock_list, bb);
			bblocks [bb->block_num].added = TRUE;
		}
	}
	for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
		if (!bblocks [bb->block_num].added)
			g_ptr_array_add (bblock_list, bb);
	}

	/*
	 * Second pass: generate code.
	 */
	// Emit entry point
	entry_builder = create_builder (ctx);
	entry_bb = get_bb (ctx, cfg->bb_entry);
	LLVMPositionBuilderAtEnd (entry_builder, entry_bb);
	emit_entry_bb (ctx, entry_builder);

	if (llvmonly_fail)
		/*
		 * In llvmonly mode, we want to emit an llvm method for every method even if it fails to compile,
		 * so direct calls can be made from outside the assembly.
		 */
		goto after_codegen_1;

	for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
		int clause_index;
		char name [128];

		if (ctx->cfg->interp_entry_only || !(bb->region != -1 && (bb->flags & BB_EXCEPTION_HANDLER)))
			continue;
		if (ctx->cfg->deopt && MONO_REGION_FLAGS (bb->region) == MONO_EXCEPTION_CLAUSE_FILTER)
			continue;

		clause_index = MONO_REGION_CLAUSE_INDEX (bb->region);
		g_hash_table_insert (ctx->region_to_handler, GUINT_TO_POINTER (mono_get_block_region_notry (cfg, bb->region)), bb);
		g_hash_table_insert (ctx->clause_to_handler, GINT_TO_POINTER (clause_index), bb);

		/*
		 * Create a new bblock which CALL_HANDLER/landing pads can branch to, because branching to the
		 * LLVM bblock containing a landing pad causes problems for the
		 * LLVM optimizer passes.
		 */
		sprintf (name, "BB%d_CALL_HANDLER_TARGET", bb->block_num);
		ctx->bblocks [bb->block_num].call_handler_target_bb = LLVMAppendBasicBlock (ctx->lmethod, name);
	}

	// Make landing pads first
	ctx->exc_meta = g_hash_table_new_full (NULL, NULL, NULL, NULL);

	if (ctx->llvm_only && !ctx->cfg->interp_entry_only) {
		size_t group_index = 0;
		while (group_index < cfg->header->num_clauses) {
			if (cfg->clause_is_dead [group_index]) {
				group_index ++;
				continue;
			}

			int count = 0;
			size_t cursor = group_index;
			while (cursor < cfg->header->num_clauses &&
				CLAUSE_START (&cfg->header->clauses [cursor]) == CLAUSE_START (&cfg->header->clauses [group_index]) &&
				CLAUSE_END (&cfg->header->clauses [cursor]) == CLAUSE_END (&cfg->header->clauses [group_index])) {
				count++;
				cursor++;
			}

			LLVMBasicBlockRef lpad_bb = emit_landing_pad (ctx, group_index, count);
			intptr_t key = CLAUSE_END (&cfg->header->clauses [group_index]);
			g_hash_table_insert (ctx->exc_meta, (gpointer)key, lpad_bb);

			group_index = cursor;
		}
	}

	for (bb_index = 0; bb_index < bblock_list->len; ++bb_index) {
		bb = (MonoBasicBlock*)g_ptr_array_index (bblock_list, bb_index);

		// Prune unreachable mono BBs.
		if (!(bb == cfg->bb_entry || bb->in_count > 0))
			continue;

		process_bb (ctx, bb);
		if (!ctx_ok (ctx))
			return;
	}
	g_hash_table_destroy (ctx->exc_meta);

	mono_memory_barrier ();

	/* Add incoming phi values */
	for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
		GSList *l, *ins_list;

		ins_list = bblocks [bb->block_num].phi_nodes;

		for (l = ins_list; l; l = l->next) {
			PhiNode *node = (PhiNode*)l->data;
			MonoInst *phi = node->phi;
			int sreg1 = node->sreg;
			LLVMBasicBlockRef in_bb;

			if (sreg1 == -1)
				continue;

			in_bb = get_end_bb (ctx, node->in_bb);

			if (ctx->unreachable [node->in_bb->block_num])
				continue;

			if (phi->opcode == OP_VPHI) {
				g_assert (LLVMTypeOf (ctx->addresses [sreg1]) == LLVMTypeOf (values [phi->dreg]));
				LLVMAddIncoming (values [phi->dreg], &ctx->addresses [sreg1], &in_bb, 1);
			} else {
				if (!values [sreg1]) {
					/* Can happen with values in EH clauses */
					set_failure (ctx, "incoming phi sreg1");
					return;
				}
				if (LLVMTypeOf (values [sreg1]) != LLVMTypeOf (values [phi->dreg])) {
					set_failure (ctx, "incoming phi arg type mismatch");
					return;
				}
				g_assert (LLVMTypeOf (values [sreg1]) == LLVMTypeOf (values [phi->dreg]));
				LLVMAddIncoming (values [phi->dreg], &values [sreg1], &in_bb, 1);
			}
		}
	}

	/* Nullify empty phi instructions */
	for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
		GSList *l, *ins_list;

		ins_list = bblocks [bb->block_num].phi_nodes;

		for (l = ins_list; l; l = l->next) {
			PhiNode *node = (PhiNode*)l->data;
			MonoInst *phi = node->phi;
			LLVMValueRef phi_ins = values [phi->dreg];

			if (!phi_ins)
				/* Already removed */
				continue;

			if (LLVMCountIncoming (phi_ins) == 0) {
				mono_llvm_replace_uses_of (phi_ins, LLVMConstNull (LLVMTypeOf (phi_ins)));
				LLVMInstructionEraseFromParent (phi_ins);
				values [phi->dreg] = NULL;
			}
		}
	}

	/* Create the SWITCH statements for ENDFINALLY instructions */
	for (bb = cfg->bb_entry; bb; bb = bb->next_bb) {
		BBInfo *info = &bblocks [bb->block_num];
		GSList *l;
		for (l = info->endfinally_switch_ins_list; l; l = l->next) {
			LLVMValueRef switch_ins = (LLVMValueRef)l->data;
			GSList *bb_list = info->call_handler_return_bbs;
			GSList *bb_list_iter;

			i = 0;
			for (bb_list_iter = bb_list; bb_list_iter; bb_list_iter = g_slist_next (bb_list_iter)) {
				LLVMAddCase (switch_ins, LLVMConstInt (LLVMInt32Type (), i + 1, FALSE), (LLVMBasicBlockRef)bb_list_iter->data);
				i ++;
			}
		}
	}

	ctx->module->max_method_idx = MAX (ctx->module->max_method_idx, cfg->method_index);
 after_codegen_1:

	if (llvmonly_fail) {
		/*
		 * FIXME: Maybe fallback to interpreter
		 */
		static LLVMTypeRef sig;

		ctx->builder = create_builder (ctx);
		LLVMPositionBuilderAtEnd (ctx->builder, ctx->inited_bb);

		char *name = mono_method_get_full_name (cfg->method);
		int len = strlen (name);

		LLVMTypeRef type = LLVMArrayType (LLVMInt8Type (), len + 1);
		LLVMValueRef name_var = LLVMAddGlobal (ctx->lmodule, type, "missing_method_name");
		LLVMSetVisibility (name_var, LLVMHiddenVisibility);
		LLVMSetLinkage (name_var, LLVMInternalLinkage);
		LLVMSetInitializer (name_var, mono_llvm_create_constant_data_array ((guint8*)name, len + 1));
		mono_llvm_set_is_constant (name_var);
		g_free (name);

		if (!sig)
			sig = LLVMFunctionType1 (LLVMVoidType (), ctx->module->ptr_type, FALSE);
		LLVMValueRef callee = get_callee (ctx, sig, MONO_PATCH_INFO_JIT_ICALL_ADDR, GUINT_TO_POINTER (MONO_JIT_ICALL_mini_llvmonly_throw_aot_failed_exception));
		LLVMValueRef args [] = { convert (ctx, name_var, ctx->module->ptr_type) };
		LLVMBuildCall (ctx->builder, callee, args, 1, "");
		LLVMBuildUnreachable (ctx->builder);
	}

	/* Initialize the method if needed */
	if (cfg->compile_aot) {
		// FIXME: Add more shared got entries
		ctx->builder = create_builder (ctx);
		LLVMPositionBuilderAtEnd (ctx->builder, ctx->init_bb);

		// FIXME: beforefieldinit
		/*
		 * NATIVE_TO_MANAGED methods might be called on a thread not attached to the runtime, so they are initialized when loaded
		 * in load_method ().
		 */
		gboolean needs_init = ctx->cfg->got_access_count > 0;
		MonoMethod *cctor = NULL;
		if (!needs_init && (cctor = mono_class_get_cctor (cfg->method->klass))) {
			/* Needs init to run the cctor */
			if (cfg->method->flags & METHOD_ATTRIBUTE_STATIC)
				needs_init = TRUE;
			if (cctor == cfg->method)
				needs_init = FALSE;

			// If we are a constructor, we need to init so the static
			// constructor gets called.
			if (!strcmp (cfg->method->name, ".ctor"))
				needs_init = TRUE;
		}
		if (cfg->method->wrapper_type == MONO_WRAPPER_NATIVE_TO_MANAGED)
			needs_init = FALSE;
		if (needs_init)
			emit_method_init (ctx);
		else
			LLVMBuildBr (ctx->builder, ctx->inited_bb);

		// Was observing LLVM moving field accesses into the caller's method
		// body before the init call (the inlined one), leading to NULL derefs
		// after the init_method returns (GOT is filled out though)
		if (needs_init)
			mono_llvm_add_func_attr (method, LLVM_ATTR_NO_INLINE);
	}

	if (mini_get_debug_options ()->llvm_disable_inlining)
		mono_llvm_add_func_attr (method, LLVM_ATTR_NO_INLINE);

 after_codegen:
	if (cfg->compile_aot)
		g_ptr_array_add (ctx->module->cfgs, cfg);

	if (cfg->llvm_only) {
		/*
		 * Add the contents of ctx->callsite_list to module->callsite_list.
		 * We can't do this earlier, as it contains llvm instructions which can be
		 * freed if compilation fails.
		 * FIXME: Get rid of this when all methods can be llvm compiled.
		 */
*/ for (int i = 0; i < ctx->callsite_list->len; ++i) g_ptr_array_add (ctx->module->callsite_list, g_ptr_array_index (ctx->callsite_list, i)); } if (cfg->verbose_level > 1) { g_print ("\n*** Unoptimized LLVM IR for %s ***\n", mono_method_full_name (cfg->method, TRUE)); if (cfg->compile_aot) { mono_llvm_dump_value (method); } else { mono_llvm_dump_module (ctx->lmodule); } g_print ("***\n\n"); } if (cfg->compile_aot && !cfg->llvm_only) mark_as_used (ctx->module, method); if (!cfg->llvm_only) { LLVMValueRef md_args [16]; LLVMValueRef md_node; int method_index; if (cfg->compile_aot) method_index = mono_aot_get_method_index (cfg->orig_method); else method_index = 1; md_args [0] = LLVMMDString (ctx->method_name, strlen (ctx->method_name)); md_args [1] = LLVMConstInt (LLVMInt32Type (), method_index, FALSE); md_node = LLVMMDNode (md_args, 2); LLVMAddNamedMetadataOperand (lmodule, "mono.function_indexes", md_node); //LLVMSetMetadata (method, md_kind, LLVMMDNode (&md_arg, 1)); } if (cfg->compile_aot) { /* Don't generate native code, keep the LLVM IR */ if (cfg->verbose_level) { char *name = mono_method_get_full_name (cfg->method); printf ("%s emitted as %s\n", name, ctx->method_name); g_free (name); } #if 0 int err = LLVMVerifyFunction (ctx->lmethod, LLVMPrintMessageAction); if (err != 0) LLVMDumpValue (ctx->lmethod); g_assert (err == 0); #endif } else { //LLVMVerifyFunction (method, 0); llvm_jit_finalize_method (ctx); } if (ctx->module->method_to_lmethod) g_hash_table_insert (ctx->module->method_to_lmethod, cfg->method, ctx->lmethod); if (ctx->module->idx_to_lmethod) g_hash_table_insert (ctx->module->idx_to_lmethod, GINT_TO_POINTER (cfg->method_index), ctx->lmethod); if (ctx->llvm_only && m_class_is_valuetype (cfg->orig_method->klass) && !(cfg->orig_method->flags & METHOD_ATTRIBUTE_STATIC)) emit_unbox_tramp (ctx, ctx->method_name, ctx->method_type, ctx->lmethod, cfg->method_index); } /* * mono_llvm_create_vars: * * Same as mono_arch_create_vars () for LLVM. */ void mono_llvm_create_vars (MonoCompile *cfg) { MonoMethodSignature *sig; sig = mono_method_signature_internal (cfg->method); if (cfg->gsharedvt && cfg->llvm_only) { gboolean vretaddr = FALSE; if (mini_is_gsharedvt_variable_signature (sig) && sig->ret->type != MONO_TYPE_VOID) { vretaddr = TRUE; } else { MonoMethodSignature *sig = mono_method_signature_internal (cfg->method); LLVMCallInfo *linfo; linfo = get_llvm_call_info (cfg, sig); vretaddr = (linfo->ret.storage == LLVMArgVtypeRetAddr || linfo->ret.storage == LLVMArgVtypeByRef || linfo->ret.storage == LLVMArgGsharedvtFixed || linfo->ret.storage == LLVMArgGsharedvtVariable || linfo->ret.storage == LLVMArgGsharedvtFixedVtype); } if (vretaddr) { /* * Creating vret_addr forces CEE_SETRET to store the result into it, * so we don't have to generate any code in our OP_SETRET case. */ cfg->vret_addr = mono_compile_create_var (cfg, m_class_get_byval_arg (mono_get_intptr_class ()), OP_ARG); if (G_UNLIKELY (cfg->verbose_level > 1)) { printf ("vret_addr = "); mono_print_ins (cfg->vret_addr); } } } else { mono_arch_create_vars (cfg); } cfg->lmf_ir = TRUE; } /* * mono_llvm_emit_call: * * Same as mono_arch_emit_call () for LLVM. 
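* Computes call->cinfo and emits the move / OP_LLVM_OUTARG_VT instructions which set up the call arguments.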
*/ void mono_llvm_emit_call (MonoCompile *cfg, MonoCallInst *call) { MonoInst *in; MonoMethodSignature *sig; int i, n; LLVMArgInfo *ainfo; sig = call->signature; n = sig->param_count + sig->hasthis; if (sig->call_convention == MONO_CALL_VARARG) { cfg->exception_message = g_strdup ("varargs"); cfg->disable_llvm = TRUE; return; } call->cinfo = get_llvm_call_info (cfg, sig); if (cfg->disable_llvm) return; for (i = 0; i < n; ++i) { MonoInst *ins; ainfo = call->cinfo->args + i; in = call->args [i]; /* Simply remember the arguments */ switch (ainfo->storage) { case LLVMArgNormal: { MonoType *t = (sig->hasthis && i == 0) ? m_class_get_byval_arg (mono_get_intptr_class ()) : ainfo->type; int opcode; opcode = mono_type_to_regmove (cfg, t); if (opcode == OP_FMOVE) { MONO_INST_NEW (cfg, ins, OP_FMOVE); ins->dreg = mono_alloc_freg (cfg); } else if (opcode == OP_LMOVE) { MONO_INST_NEW (cfg, ins, OP_LMOVE); ins->dreg = mono_alloc_lreg (cfg); } else if (opcode == OP_RMOVE) { MONO_INST_NEW (cfg, ins, OP_RMOVE); ins->dreg = mono_alloc_freg (cfg); } else { MONO_INST_NEW (cfg, ins, OP_MOVE); ins->dreg = mono_alloc_ireg (cfg); } ins->sreg1 = in->dreg; break; } case LLVMArgVtypeByVal: case LLVMArgVtypeByRef: case LLVMArgVtypeInReg: case LLVMArgVtypeAddr: case LLVMArgVtypeAsScalar: case LLVMArgAsIArgs: case LLVMArgAsFpArgs: case LLVMArgGsharedvtVariable: case LLVMArgGsharedvtFixed: case LLVMArgGsharedvtFixedVtype: case LLVMArgWasmVtypeAsScalar: MONO_INST_NEW (cfg, ins, OP_LLVM_OUTARG_VT); ins->dreg = mono_alloc_ireg (cfg); ins->sreg1 = in->dreg; ins->inst_p0 = mono_mempool_alloc0 (cfg->mempool, sizeof (LLVMArgInfo)); memcpy (ins->inst_p0, ainfo, sizeof (LLVMArgInfo)); ins->inst_vtype = ainfo->type; ins->klass = mono_class_from_mono_type_internal (ainfo->type); break; default: cfg->exception_message = g_strdup ("ainfo->storage"); cfg->disable_llvm = TRUE; return; } if (!cfg->disable_llvm) { MONO_ADD_INS (cfg->cbb, ins); mono_call_inst_add_outarg_reg (cfg, call, ins->dreg, 0, FALSE); } } } static inline void add_func (LLVMModuleRef module, const char *name, LLVMTypeRef ret_type, LLVMTypeRef *param_types, int nparams) { LLVMAddFunction (module, name, LLVMFunctionType (ret_type, param_types, nparams, FALSE)); } static LLVMValueRef add_intrins (LLVMModuleRef module, IntrinsicId id, LLVMTypeRef *params, int nparams) { return mono_llvm_register_overloaded_intrinsic (module, id, params, nparams); } static LLVMValueRef add_intrins1 (LLVMModuleRef module, IntrinsicId id, LLVMTypeRef param1) { return mono_llvm_register_overloaded_intrinsic (module, id, &param1, 1); } static LLVMValueRef add_intrins2 (LLVMModuleRef module, IntrinsicId id, LLVMTypeRef param1, LLVMTypeRef param2) { LLVMTypeRef params [] = { param1, param2 }; return mono_llvm_register_overloaded_intrinsic (module, id, params, 2); } static LLVMValueRef add_intrins3 (LLVMModuleRef module, IntrinsicId id, LLVMTypeRef param1, LLVMTypeRef param2, LLVMTypeRef param3) { LLVMTypeRef params [] = { param1, param2, param3 }; return mono_llvm_register_overloaded_intrinsic (module, id, params, 3); } static void add_intrinsic (LLVMModuleRef module, int id) { /* Register simple intrinsics */ LLVMValueRef intrins = mono_llvm_register_intrinsic (module, (IntrinsicId)id); if (intrins) { g_hash_table_insert (intrins_id_to_intrins, GINT_TO_POINTER (id), intrins); return; } if (intrin_arm64_ovr [id] != 0) { llvm_ovr_tag_t spec = intrin_arm64_ovr [id]; for (int vw = 0; vw < INTRIN_vectorwidths; ++vw) { for (int ew = 0; ew < INTRIN_elementwidths; ++ew) { llvm_ovr_tag_t vec_bit = 
INTRIN_vector128 >> ((INTRIN_vectorwidths - 1) - vw); llvm_ovr_tag_t elem_bit = INTRIN_int8 << ew; llvm_ovr_tag_t test = vec_bit | elem_bit; if ((spec & test) == test) { uint8_t kind = intrin_kind [id]; LLVMTypeRef distinguishing_type = intrin_types [vw][ew]; if (kind == INTRIN_kind_ftoi && (elem_bit & (INTRIN_int32 | INTRIN_int64))) { /* * @llvm.aarch64.neon.fcvtas.v4i32.v4f32 * @llvm.aarch64.neon.fcvtas.v2i64.v2f64 */ intrins = add_intrins2 (module, id, distinguishing_type, intrin_types [vw][ew + 2]); } else if (kind == INTRIN_kind_widen) { /* * @llvm.aarch64.neon.saddlp.v2i64.v4i32 * @llvm.aarch64.neon.saddlp.v4i16.v8i8 */ intrins = add_intrins2 (module, id, distinguishing_type, intrin_types [vw][ew - 1]); } else if (kind == INTRIN_kind_widen_across) { /* * @llvm.aarch64.neon.saddlv.i64.v4i32 * @llvm.aarch64.neon.saddlv.i32.v8i16 * @llvm.aarch64.neon.saddlv.i32.v16i8 * i8/i16 return types for NEON intrinsics will make isel fail as of LLVM 9. */ int associated_prim = MAX(ew + 1, 2); LLVMTypeRef associated_scalar_type = intrin_types [0][associated_prim]; intrins = add_intrins2 (module, id, associated_scalar_type, distinguishing_type); } else if (kind == INTRIN_kind_across) { /* * @llvm.aarch64.neon.uaddv.i64.v4i64 * @llvm.aarch64.neon.uaddv.i32.v4i32 * @llvm.aarch64.neon.uaddv.i32.v8i16 * @llvm.aarch64.neon.uaddv.i32.v16i8 * i8/i16 return types for NEON intrinsics will make isel fail as of LLVM 9. */ int associated_prim = MAX(ew, 2); LLVMTypeRef associated_scalar_type = intrin_types [0][associated_prim]; intrins = add_intrins2 (module, id, associated_scalar_type, distinguishing_type); } else if (kind == INTRIN_kind_arm64_dot_prod) { /* * @llvm.aarch64.neon.sdot.v2i32.v8i8 * @llvm.aarch64.neon.sdot.v4i32.v16i8 */ LLVMTypeRef associated_type = intrin_types [vw][0]; intrins = add_intrins2 (module, id, distinguishing_type, associated_type); } else intrins = add_intrins1 (module, id, distinguishing_type); int key = key_from_id_and_tag (id, test); g_hash_table_insert (intrins_id_to_intrins, GINT_TO_POINTER (key), intrins); } } } return; } /* Register overloaded intrinsics */ switch (id) { #define INTRINS(intrin_name, llvm_id, arch) #define INTRINS_OVR(intrin_name, llvm_id, arch, llvm_type) case INTRINS_ ## intrin_name: intrins = add_intrins1(module, id, llvm_type); break; #define INTRINS_OVR_2_ARG(intrin_name, llvm_id, arch, llvm_type1, llvm_type2) case INTRINS_ ## intrin_name: intrins = add_intrins2(module, id, llvm_type1, llvm_type2); break; #define INTRINS_OVR_3_ARG(intrin_name, llvm_id, arch, llvm_type1, llvm_type2, llvm_type3) case INTRINS_ ## intrin_name: intrins = add_intrins3(module, id, llvm_type1, llvm_type2, llvm_type3); break; #define INTRINS_OVR_TAG(...) #define INTRINS_OVR_TAG_KIND(...) #include "llvm-intrinsics.h" default: g_assert_not_reached (); break; } g_assert (intrins); g_hash_table_insert (intrins_id_to_intrins, GINT_TO_POINTER (id), intrins); } static LLVMValueRef get_intrins_from_module (LLVMModuleRef lmodule, int id) { LLVMValueRef res; res = (LLVMValueRef)g_hash_table_lookup (intrins_id_to_intrins, GINT_TO_POINTER (id)); g_assert (res); return res; } static LLVMValueRef get_intrins (EmitContext *ctx, int id) { return get_intrins_from_module (ctx->lmodule, id); } static void add_intrinsics (LLVMModuleRef module) { int i; /* Emit declarations of intrinsics */ /* * It would be nicer to emit only the intrinsics actually used, but LLVM's Module * type doesn't seem to do any locking. 
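* Registering everything up front avoids mutating the module from multiple threads later.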
*/ for (i = 0; i < INTRINS_NUM; ++i) add_intrinsic (module, i); /* EH intrinsics */ add_func (module, "mono_personality", LLVMVoidType (), NULL, 0); add_func (module, "llvm_resume_unwind_trampoline", LLVMVoidType (), NULL, 0); } static void add_types (MonoLLVMModule *module) { module->ptr_type = LLVMPointerType (TARGET_SIZEOF_VOID_P == 8 ? LLVMInt64Type () : LLVMInt32Type (), 0); } void mono_llvm_init (gboolean enable_jit) { intrin_types [0][0] = i1_t = LLVMInt8Type (); intrin_types [0][1] = i2_t = LLVMInt16Type (); intrin_types [0][2] = i4_t = LLVMInt32Type (); intrin_types [0][3] = i8_t = LLVMInt64Type (); intrin_types [0][4] = r4_t = LLVMFloatType (); intrin_types [0][5] = r8_t = LLVMDoubleType (); intrin_types [1][0] = v64_i1_t = LLVMVectorType (LLVMInt8Type (), 8); intrin_types [1][1] = v64_i2_t = LLVMVectorType (LLVMInt16Type (), 4); intrin_types [1][2] = v64_i4_t = LLVMVectorType (LLVMInt32Type (), 2); intrin_types [1][3] = v64_i8_t = LLVMVectorType (LLVMInt64Type (), 1); intrin_types [1][4] = v64_r4_t = LLVMVectorType (LLVMFloatType (), 2); intrin_types [1][5] = v64_r8_t = LLVMVectorType (LLVMDoubleType (), 1); intrin_types [2][0] = v128_i1_t = sse_i1_t = type_to_sse_type (MONO_TYPE_I1); intrin_types [2][1] = v128_i2_t = sse_i2_t = type_to_sse_type (MONO_TYPE_I2); intrin_types [2][2] = v128_i4_t = sse_i4_t = type_to_sse_type (MONO_TYPE_I4); intrin_types [2][3] = v128_i8_t = sse_i8_t = type_to_sse_type (MONO_TYPE_I8); intrin_types [2][4] = v128_r4_t = sse_r4_t = type_to_sse_type (MONO_TYPE_R4); intrin_types [2][5] = v128_r8_t = sse_r8_t = type_to_sse_type (MONO_TYPE_R8); intrins_id_to_intrins = g_hash_table_new (NULL, NULL); void_func_t = LLVMFunctionType0 (LLVMVoidType (), FALSE); if (enable_jit) mono_llvm_jit_init (); } void mono_llvm_free_mem_manager (MonoJitMemoryManager *mem_manager) { MonoLLVMModule *module = (MonoLLVMModule*)mem_manager->llvm_module; int i; if (!module) return; g_hash_table_destroy (module->llvm_types); mono_llvm_dispose_ee (module->mono_ee); if (module->bb_names) { for (i = 0; i < module->bb_names_len; ++i) g_free (module->bb_names [i]); g_free (module->bb_names); } //LLVMDisposeModule (module->module); g_free (module); mem_manager->llvm_module = NULL; } void mono_llvm_create_aot_module (MonoAssembly *assembly, const char *global_prefix, int initial_got_size, LLVMModuleFlags flags) { MonoLLVMModule *module = &aot_module; gboolean emit_dwarf = (flags & LLVM_MODULE_FLAG_DWARF) ? 1 : 0; #ifdef TARGET_WIN32_MSVC gboolean emit_codeview = (flags & LLVM_MODULE_FLAG_CODEVIEW) ? 1 : 0; #endif gboolean static_link = (flags & LLVM_MODULE_FLAG_STATIC) ? 1 : 0; gboolean llvm_only = (flags & LLVM_MODULE_FLAG_LLVM_ONLY) ? 1 : 0; gboolean interp = (flags & LLVM_MODULE_FLAG_INTERP) ? 
1 : 0; /* Delete previous module */ g_hash_table_destroy (module->plt_entries); if (module->lmodule) LLVMDisposeModule (module->lmodule); memset (module, 0, sizeof (aot_module)); module->lmodule = LLVMModuleCreateWithName ("aot"); module->assembly = assembly; module->global_prefix = g_strdup (global_prefix); module->eh_frame_symbol = g_strdup_printf ("%s_eh_frame", global_prefix); module->get_method_symbol = g_strdup_printf ("%s_get_method", global_prefix); module->get_unbox_tramp_symbol = g_strdup_printf ("%s_get_unbox_tramp", global_prefix); module->init_aotconst_symbol = g_strdup_printf ("%s_init_aotconst", global_prefix); module->external_symbols = TRUE; module->emit_dwarf = emit_dwarf; module->static_link = static_link; module->llvm_only = llvm_only; module->interp = interp; /* The first few entries are reserved */ module->max_got_offset = initial_got_size; module->context = LLVMGetGlobalContext (); module->cfgs = g_ptr_array_new (); module->aotconst_vars = g_hash_table_new (NULL, NULL); module->llvm_types = g_hash_table_new (NULL, NULL); module->plt_entries = g_hash_table_new (g_str_hash, g_str_equal); module->plt_entries_ji = g_hash_table_new (NULL, NULL); module->direct_callables = g_hash_table_new (g_str_hash, g_str_equal); module->idx_to_lmethod = g_hash_table_new (NULL, NULL); module->method_to_lmethod = g_hash_table_new (NULL, NULL); module->method_to_call_info = g_hash_table_new (NULL, NULL); module->idx_to_unbox_tramp = g_hash_table_new (NULL, NULL); module->no_method_table_lmethods = g_hash_table_new (NULL, NULL); module->callsite_list = g_ptr_array_new (); if (llvm_only) /* clang ignores our debug info because it has an invalid version */ module->emit_dwarf = FALSE; add_intrinsics (module->lmodule); add_types (module); #ifdef MONO_ARCH_LLVM_TARGET_LAYOUT LLVMSetDataLayout (module->lmodule, MONO_ARCH_LLVM_TARGET_LAYOUT); #else g_assert_not_reached (); #endif #ifdef MONO_ARCH_LLVM_TARGET_TRIPLE LLVMSetTarget (module->lmodule, MONO_ARCH_LLVM_TARGET_TRIPLE); #endif if (module->emit_dwarf) { char *dir, *build_info, *s, *cu_name; module->di_builder = mono_llvm_create_di_builder (module->lmodule); // FIXME: dir = g_strdup ("."); build_info = mono_get_runtime_build_info (); s = g_strdup_printf ("Mono AOT Compiler %s (LLVM)", build_info); cu_name = g_path_get_basename (assembly->image->name); module->cu = mono_llvm_di_create_compile_unit (module->di_builder, cu_name, dir, s); g_free (dir); g_free (build_info); g_free (s); } #ifdef TARGET_WIN32_MSVC if (emit_codeview) { LLVMValueRef codeview_option_args[3]; codeview_option_args[0] = LLVMConstInt (LLVMInt32Type (), 2, FALSE); codeview_option_args[1] = LLVMMDString ("CodeView", 8); codeview_option_args[2] = LLVMConstInt (LLVMInt32Type (), 1, FALSE); LLVMAddNamedMetadataOperand (module->lmodule, "llvm.module.flags", LLVMMDNode (codeview_option_args, G_N_ELEMENTS (codeview_option_args))); } if (!static_link) { const char *default_dynamic_lib_names[] = { "/DEFAULTLIB:msvcrt", "/DEFAULTLIB:ucrt.lib", "/DEFAULTLIB:vcruntime.lib" }; LLVMValueRef default_lib_args[G_N_ELEMENTS (default_dynamic_lib_names)]; for (int i = 0; i < G_N_ELEMENTS (default_dynamic_lib_names); ++i) { const char *default_lib_name = default_dynamic_lib_names[i]; default_lib_args[i] = LLVMMDString (default_lib_name, strlen (default_lib_name)); } 
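/* Record the default CRT import libraries as "llvm.linker.options" metadata, so the MSVC linker pulls them in without an explicit link line. */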
LLVMAddNamedMetadataOperand (module->lmodule, "llvm.linker.options", LLVMMDNode (default_lib_args, G_N_ELEMENTS (default_lib_args))); } #endif { LLVMTypeRef got_type = LLVMArrayType (module->ptr_type, 16); module->dummy_got_var = LLVMAddGlobal (module->lmodule, got_type, "dummy_got"); module->got_idx_to_type = g_hash_table_new (NULL, NULL); LLVMSetInitializer (module->dummy_got_var, LLVMConstNull (got_type)); LLVMSetVisibility (module->dummy_got_var, LLVMHiddenVisibility); LLVMSetLinkage (module->dummy_got_var, LLVMInternalLinkage); } /* Add initialization array */ LLVMTypeRef inited_type = LLVMArrayType (LLVMInt8Type (), 0); module->inited_var = LLVMAddGlobal (aot_module.lmodule, inited_type, "mono_inited_tmp"); LLVMSetInitializer (module->inited_var, LLVMConstNull (inited_type)); create_aot_info_var (module); emit_gc_safepoint_poll (module, module->lmodule, NULL); emit_llvm_code_start (module); // Needs idx_to_lmethod emit_init_funcs (module); /* Add a dummy personality function */ if (!use_mono_personality_debug) { LLVMValueRef personality = LLVMAddFunction (module->lmodule, default_personality_name, LLVMFunctionType (LLVMInt32Type (), NULL, 0, TRUE)); LLVMSetLinkage (personality, LLVMExternalLinkage); //EMCC chokes if the personality function is referenced in the 'used' array #ifndef TARGET_WASM mark_as_used (module, personality); #endif } /* Add a reference to the c++ exception we throw/catch */ { LLVMTypeRef exc = LLVMPointerType (LLVMInt8Type (), 0); module->sentinel_exception = LLVMAddGlobal (module->lmodule, exc, "_ZTIPi"); LLVMSetLinkage (module->sentinel_exception, LLVMExternalLinkage); mono_llvm_set_is_constant (module->sentinel_exception); } } void mono_llvm_fixup_aot_module (void) { MonoLLVMModule *module = &aot_module; MonoMethod *method; /* * Replace GOT entries for directly callable methods with the methods themselves. * It would be easier to implement this by predefining all methods before compiling * their bodies, but that couldn't handle the case when a method fails to compile * with llvm. 
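* Call sites which can be resolved to a direct call have their placeholder load replaced with the callee itself; the remaining ones are turned back into GOT loads.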
*/ GHashTable *specializable = g_hash_table_new (NULL, NULL); GHashTable *patches_to_null = g_hash_table_new (mono_patch_info_hash, mono_patch_info_equal); for (int sindex = 0; sindex < module->callsite_list->len; ++sindex) { CallSite *site = (CallSite*)g_ptr_array_index (module->callsite_list, sindex); method = site->method; LLVMValueRef lmethod = (LLVMValueRef)g_hash_table_lookup (module->method_to_lmethod, method); LLVMValueRef placeholder = (LLVMValueRef)site->load; LLVMValueRef load; if (placeholder == NULL) /* Method failed LLVM compilation */ continue; gboolean can_direct_call = FALSE; /* Replace sharable instances with their shared version */ if (!lmethod && method->is_inflated) { if (mono_method_is_generic_sharable_full (method, FALSE, TRUE, FALSE)) { ERROR_DECL (error); MonoMethod *shared = mini_get_shared_method_full (method, SHARE_MODE_NONE, error); if (is_ok (error)) { lmethod = (LLVMValueRef)g_hash_table_lookup (module->method_to_lmethod, shared); if (lmethod) method = shared; } } } if (lmethod && !m_method_is_synchronized (method)) { can_direct_call = TRUE; } else if (m_method_is_wrapper (method) && !method->is_inflated) { WrapperInfo *info = mono_marshal_get_wrapper_info (method); /* This is a call from the synchronized wrapper to the real method */ if (info->subtype == WRAPPER_SUBTYPE_SYNCHRONIZED_INNER) { method = info->d.synchronized.method; lmethod = (LLVMValueRef)g_hash_table_lookup (module->method_to_lmethod, method); if (lmethod) can_direct_call = TRUE; } } if (can_direct_call) { mono_llvm_replace_uses_of (placeholder, lmethod); if (mono_aot_can_specialize (method)) g_hash_table_insert (specializable, lmethod, method); g_hash_table_insert (patches_to_null, site->ji, site->ji); } else { // FIXME: LLVMBuilderRef builder = LLVMCreateBuilder (); LLVMPositionBuilderBefore (builder, placeholder); load = get_aotconst_module (module, builder, site->ji->type, site->ji->data.target, site->type, NULL, NULL); LLVMReplaceAllUsesWith (placeholder, load); } g_free (site); } mono_llvm_propagate_nonnull_final (specializable, module); g_hash_table_destroy (specializable); for (int i = 0; i < module->cfgs->len; ++i) { /* * Nullify the patches pointing to direct calls. This is needed to * avoid allocating extra got slots, which is a perf problem and it * makes module->max_got_offset invalid. * It would be better to just store the patch_info in CallSite, but * cfg->patch_info is copied in aot-compiler.c. 
*/ MonoCompile *cfg = (MonoCompile *)g_ptr_array_index (module->cfgs, i); for (MonoJumpInfo *patch_info = cfg->patch_info; patch_info; patch_info = patch_info->next) { if (patch_info->type == MONO_PATCH_INFO_METHOD) { if (g_hash_table_lookup (patches_to_null, patch_info)) { patch_info->type = MONO_PATCH_INFO_NONE; /* Nullify the call to init_method () if possible */ g_assert (cfg->got_access_count); cfg->got_access_count --; if (cfg->got_access_count == 0) { LLVMValueRef br = (LLVMValueRef)cfg->llvmonly_init_cond; if (br) LLVMSetSuccessor (br, 0, LLVMGetSuccessor (br, 1)); } } } } } g_hash_table_destroy (patches_to_null); } static LLVMValueRef llvm_array_from_uints (LLVMTypeRef el_type, guint32 *values, int nvalues) { int i; LLVMValueRef res, *vals; vals = g_new0 (LLVMValueRef, nvalues); for (i = 0; i < nvalues; ++i) vals [i] = LLVMConstInt (LLVMInt32Type (), values [i], FALSE); res = LLVMConstArray (LLVMInt32Type (), vals, nvalues); g_free (vals); return res; } static LLVMValueRef llvm_array_from_bytes (guint8 *values, int nvalues) { int i; LLVMValueRef res, *vals; vals = g_new0 (LLVMValueRef, nvalues); for (i = 0; i < nvalues; ++i) vals [i] = LLVMConstInt (LLVMInt8Type (), values [i], FALSE); res = LLVMConstArray (LLVMInt8Type (), vals, nvalues); g_free (vals); return res; } /* * mono_llvm_emit_aot_file_info: * * Emit the MonoAotFileInfo structure. * Same as emit_aot_file_info () in aot-compiler.c. */ void mono_llvm_emit_aot_file_info (MonoAotFileInfo *info, gboolean has_jitted_code) { MonoLLVMModule *module = &aot_module; /* Save these for later */ memcpy (&module->aot_info, info, sizeof (MonoAotFileInfo)); module->has_jitted_code = has_jitted_code; } /* * mono_llvm_emit_aot_data: * * Emit the binary data DATA pointed to by symbol SYMBOL. * Return the LLVM variable for the data. 
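* The data is emitted as a constant byte array global with hidden visibility and internal linkage.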
*/ gpointer mono_llvm_emit_aot_data_aligned (const char *symbol, guint8 *data, int data_len, int align) { MonoLLVMModule *module = &aot_module; LLVMTypeRef type; LLVMValueRef d; type = LLVMArrayType (LLVMInt8Type (), data_len); d = LLVMAddGlobal (module->lmodule, type, symbol); LLVMSetVisibility (d, LLVMHiddenVisibility); LLVMSetLinkage (d, LLVMInternalLinkage); LLVMSetInitializer (d, mono_llvm_create_constant_data_array (data, data_len)); if (align != 1) LLVMSetAlignment (d, align); mono_llvm_set_is_constant (d); return d; } gpointer mono_llvm_emit_aot_data (const char *symbol, guint8 *data, int data_len) { return mono_llvm_emit_aot_data_aligned (symbol, data, data_len, 8); } /* Add a reference to a global defined in JITted code */ static LLVMValueRef AddJitGlobal (MonoLLVMModule *module, LLVMTypeRef type, const char *name) { char *s; LLVMValueRef v; s = g_strdup_printf ("%s%s", module->global_prefix, name); v = LLVMAddGlobal (module->lmodule, LLVMInt8Type (), s); LLVMSetVisibility (v, LLVMHiddenVisibility); g_free (s); return v; } #define FILE_INFO_NUM_HEADER_FIELDS 2 #define FILE_INFO_NUM_SCALAR_FIELDS 23 #define FILE_INFO_NUM_ARRAY_FIELDS 5 #define FILE_INFO_NUM_AOTID_FIELDS 1 #define FILE_INFO_NFIELDS (FILE_INFO_NUM_HEADER_FIELDS + MONO_AOT_FILE_INFO_NUM_SYMBOLS + FILE_INFO_NUM_SCALAR_FIELDS + FILE_INFO_NUM_ARRAY_FIELDS + FILE_INFO_NUM_AOTID_FIELDS) static void create_aot_info_var (MonoLLVMModule *module) { LLVMTypeRef file_info_type; LLVMTypeRef *eltypes; LLVMValueRef info_var; int i, nfields, tindex; LLVMModuleRef lmodule = module->lmodule; /* Create an LLVM type to represent MonoAotFileInfo */ nfields = FILE_INFO_NFIELDS; eltypes = g_new (LLVMTypeRef, nfields); tindex = 0; eltypes [tindex ++] = LLVMInt32Type (); eltypes [tindex ++] = LLVMInt32Type (); /* Symbols */ for (i = 0; i < MONO_AOT_FILE_INFO_NUM_SYMBOLS; ++i) eltypes [tindex ++] = LLVMPointerType (LLVMInt8Type (), 0); /* Scalars */ for (i = 0; i < FILE_INFO_NUM_SCALAR_FIELDS; ++i) eltypes [tindex ++] = LLVMInt32Type (); /* Arrays */ eltypes [tindex ++] = LLVMArrayType (LLVMInt32Type (), MONO_AOT_TABLE_NUM); for (i = 0; i < FILE_INFO_NUM_ARRAY_FIELDS - 1; ++i) eltypes [tindex ++] = LLVMArrayType (LLVMInt32Type (), MONO_AOT_TRAMP_NUM); eltypes [tindex ++] = LLVMArrayType (LLVMInt8Type (), 16); g_assert (tindex == nfields); file_info_type = LLVMStructCreateNamed (module->context, "MonoAotFileInfo"); LLVMStructSetBody (file_info_type, eltypes, nfields, FALSE); info_var = LLVMAddGlobal (lmodule, file_info_type, "mono_aot_file_info"); module->info_var = info_var; module->info_var_eltypes = eltypes; } static void emit_aot_file_info (MonoLLVMModule *module) { LLVMTypeRef *eltypes, eltype; LLVMValueRef info_var; LLVMValueRef *fields; int i, nfields, tindex; MonoAotFileInfo *info; LLVMModuleRef lmodule = module->lmodule; info = &module->aot_info; info_var = module->info_var; eltypes = module->info_var_eltypes; nfields = FILE_INFO_NFIELDS; if (module->static_link) { LLVMSetVisibility (info_var, LLVMHiddenVisibility); LLVMSetLinkage (info_var, LLVMInternalLinkage); } #ifdef TARGET_WIN32 if (!module->static_link) { LLVMSetDLLStorageClass (info_var, LLVMDLLExportStorageClass); } #endif fields = g_new (LLVMValueRef, nfields); tindex = 0; fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->version, FALSE); fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->dummy, FALSE); /* Symbols */ /* * We use LLVMGetNamedGlobal () for symbol which are defined in LLVM code, and LLVMAddGlobal () * for symbols defined in the .s file emitted 
by the aot compiler. */ eltype = eltypes [tindex]; if (module->llvm_only) fields [tindex ++] = LLVMConstNull (eltype); else fields [tindex ++] = AddJitGlobal (module, eltype, "jit_got"); /* llc defines this directly */ if (!module->llvm_only) { fields [tindex ++] = LLVMAddGlobal (lmodule, eltype, module->eh_frame_symbol); fields [tindex ++] = LLVMConstNull (eltype); fields [tindex ++] = LLVMConstNull (eltype); } else { fields [tindex ++] = LLVMConstNull (eltype); fields [tindex ++] = module->get_method; fields [tindex ++] = module->get_unbox_tramp ? module->get_unbox_tramp : LLVMConstNull (eltype); } fields [tindex ++] = module->init_aotconst_func; if (module->has_jitted_code) { fields [tindex ++] = AddJitGlobal (module, eltype, "jit_code_start"); fields [tindex ++] = AddJitGlobal (module, eltype, "jit_code_end"); } else { fields [tindex ++] = LLVMConstNull (eltype); fields [tindex ++] = LLVMConstNull (eltype); } if (!module->llvm_only) fields [tindex ++] = AddJitGlobal (module, eltype, "method_addresses"); else fields [tindex ++] = LLVMConstNull (eltype); if (module->llvm_only && module->unbox_tramp_indexes) { fields [tindex ++] = module->unbox_tramp_indexes; fields [tindex ++] = module->unbox_trampolines; } else { fields [tindex ++] = LLVMConstNull (eltype); fields [tindex ++] = LLVMConstNull (eltype); } if (info->flags & MONO_AOT_FILE_FLAG_SEPARATE_DATA) { for (i = 0; i < MONO_AOT_TABLE_NUM; ++i) fields [tindex ++] = LLVMConstNull (eltype); } else { fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "blob"); fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "class_name_table"); fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "class_info_offsets"); fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "method_info_offsets"); fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "ex_info_offsets"); fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "extra_method_info_offsets"); fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "extra_method_table"); fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "got_info_offsets"); fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "llvm_got_info_offsets"); fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "image_table"); fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "weak_field_indexes"); fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "method_flags_table"); } /* Not needed (mem_end) */ fields [tindex ++] = LLVMConstNull (eltype); fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "assembly_guid"); fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "runtime_version"); if (info->trampoline_size [0]) { fields [tindex ++] = AddJitGlobal (module, eltype, "specific_trampolines"); fields [tindex ++] = AddJitGlobal (module, eltype, "static_rgctx_trampolines"); fields [tindex ++] = AddJitGlobal (module, eltype, "imt_trampolines"); fields [tindex ++] = AddJitGlobal (module, eltype, "gsharedvt_arg_trampolines"); fields [tindex ++] = AddJitGlobal (module, eltype, "ftnptr_arg_trampolines"); fields [tindex ++] = AddJitGlobal (module, eltype, "unbox_arbitrary_trampolines"); } else { fields [tindex ++] = LLVMConstNull (eltype); fields [tindex ++] = LLVMConstNull (eltype); fields [tindex ++] = LLVMConstNull (eltype); fields [tindex ++] = LLVMConstNull (eltype); fields [tindex ++] = LLVMConstNull (eltype); fields [tindex ++] = LLVMConstNull (eltype); } if (module->static_link && !module->llvm_only) fields [tindex ++] = AddJitGlobal (module, eltype, "globals"); else fields [tindex ++] = LLVMConstNull (eltype); fields [tindex ++] = LLVMGetNamedGlobal (lmodule, 
"assembly_name"); if (!module->llvm_only) { fields [tindex ++] = AddJitGlobal (module, eltype, "plt"); fields [tindex ++] = AddJitGlobal (module, eltype, "plt_end"); fields [tindex ++] = AddJitGlobal (module, eltype, "unwind_info"); fields [tindex ++] = AddJitGlobal (module, eltype, "unbox_trampolines"); fields [tindex ++] = AddJitGlobal (module, eltype, "unbox_trampolines_end"); fields [tindex ++] = AddJitGlobal (module, eltype, "unbox_trampoline_addresses"); } else { fields [tindex ++] = LLVMConstNull (eltype); fields [tindex ++] = LLVMConstNull (eltype); fields [tindex ++] = LLVMConstNull (eltype); fields [tindex ++] = LLVMConstNull (eltype); fields [tindex ++] = LLVMConstNull (eltype); fields [tindex ++] = LLVMConstNull (eltype); } for (i = 0; i < MONO_AOT_FILE_INFO_NUM_SYMBOLS; ++i) { g_assert (fields [FILE_INFO_NUM_HEADER_FIELDS + i]); fields [FILE_INFO_NUM_HEADER_FIELDS + i] = LLVMConstBitCast (fields [FILE_INFO_NUM_HEADER_FIELDS + i], eltype); } /* Scalars */ fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->plt_got_offset_base, FALSE); fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->plt_got_info_offset_base, FALSE); fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->got_size, FALSE); fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->llvm_got_size, FALSE); fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->plt_size, FALSE); fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->nmethods, FALSE); fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->nextra_methods, FALSE); fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->flags, FALSE); fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->opts, FALSE); fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->simd_opts, FALSE); fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->gc_name_index, FALSE); fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->num_rgctx_fetch_trampolines, FALSE); fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->double_align, FALSE); fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->long_align, FALSE); fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->generic_tramp_num, FALSE); fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->card_table_shift_bits, FALSE); fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->card_table_mask, FALSE); fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->tramp_page_size, FALSE); fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->call_table_entry_size, FALSE); fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->nshared_got_entries, FALSE); fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->datafile_size, FALSE); fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), module->unbox_tramp_num, FALSE); fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), module->unbox_tramp_elemsize, FALSE); /* Arrays */ fields [tindex ++] = llvm_array_from_uints (LLVMInt32Type (), info->table_offsets, MONO_AOT_TABLE_NUM); fields [tindex ++] = llvm_array_from_uints (LLVMInt32Type (), info->num_trampolines, MONO_AOT_TRAMP_NUM); fields [tindex ++] = llvm_array_from_uints (LLVMInt32Type (), info->trampoline_got_offset_base, MONO_AOT_TRAMP_NUM); fields [tindex ++] = llvm_array_from_uints (LLVMInt32Type (), info->trampoline_size, MONO_AOT_TRAMP_NUM); fields [tindex ++] = llvm_array_from_uints (LLVMInt32Type (), info->tramp_page_code_offsets, MONO_AOT_TRAMP_NUM); fields [tindex ++] = llvm_array_from_bytes (info->aotid, 16); g_assert (tindex == nfields); 
LLVMSetInitializer (info_var, LLVMConstNamedStruct (LLVMGetElementType (LLVMTypeOf (info_var)), fields, nfields)); if (module->static_link) { char *s, *p; LLVMValueRef var; s = g_strdup_printf ("mono_aot_module_%s_info", module->assembly->aname.name); /* Get rid of characters which cannot occur in symbols */ for (p = s; *p; ++p) { if (!(isalnum (*p) || *p == '_')) *p = '_'; } var = LLVMAddGlobal (module->lmodule, LLVMPointerType (LLVMInt8Type (), 0), s); g_free (s); LLVMSetInitializer (var, LLVMConstBitCast (LLVMGetNamedGlobal (module->lmodule, "mono_aot_file_info"), LLVMPointerType (LLVMInt8Type (), 0))); LLVMSetLinkage (var, LLVMExternalLinkage); } } typedef struct { LLVMValueRef lmethod; int argument; } NonnullPropWorkItem; static void mono_llvm_nonnull_state_update (EmitContext *ctx, LLVMValueRef lcall, MonoMethod *call_method, LLVMValueRef *args, int num_params) { if (mono_aot_can_specialize (call_method)) { int num_passed = LLVMGetNumArgOperands (lcall); g_assert (num_params <= num_passed); g_assert (ctx->module->method_to_call_info); GArray *call_site_union = (GArray *) g_hash_table_lookup (ctx->module->method_to_call_info, call_method); if (!call_site_union) { call_site_union = g_array_sized_new (FALSE, TRUE, sizeof (gint32), num_params); int zero = 0; for (int i = 0; i < num_params; i++) g_array_insert_val (call_site_union, i, zero); } for (int i = 0; i < num_params; i++) { if (mono_llvm_is_nonnull (args [i])) { g_assert (i < LLVMGetNumArgOperands (lcall)); mono_llvm_set_call_nonnull_arg (lcall, i); } else { gint32 *nullable_count = &g_array_index (call_site_union, gint32, i); *nullable_count = *nullable_count + 1; } } g_hash_table_insert (ctx->module->method_to_call_info, call_method, call_site_union); } } static void mono_llvm_propagate_nonnull_final (GHashTable *all_specializable, MonoLLVMModule *module) { // When we first traverse the mini IL, we mark the things that are // nonnull (the roots). Then, for all of the methods that can be specialized, we // see if their call sites have nonnull attributes. // If so, we mark the function's param. This param has uses to propagate // the attribute to. This propagation can trigger a need to mark more attributes // non-null, and so on and so forth. GSList *queue = NULL; GHashTableIter iter; LLVMValueRef lmethod; MonoMethod *method; g_hash_table_iter_init (&iter, all_specializable); while (g_hash_table_iter_next (&iter, (void**)&lmethod, (void**)&method)) { GArray *call_site_union = (GArray *) g_hash_table_lookup (module->method_to_call_info, method); // Basic sanity checking if (call_site_union) g_assert (call_site_union->len == LLVMCountParams (lmethod)); // Add root to work queue for (int i = 0; call_site_union && i < call_site_union->len; i++) { if (g_array_index (call_site_union, gint32, i) == 0) { NonnullPropWorkItem *item = g_malloc (sizeof (NonnullPropWorkItem)); item->lmethod = lmethod; item->argument = i; queue = g_slist_prepend (queue, item); } } } // This is essentially reference counting, and we are propagating // the refcount decrement here. We have less work to do than we may otherwise // because we are only working with a set of subgraphs of specializable functions. // // We rely on being able to see all of the references in the graph. // This is ensured by the function mono_aot_can_specialize. Everything in // all_specializable is a function that can be specialized, and is the resulting // node in the graph after all of the substitutions are done. 
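// Example (illustrative, not from the source): if a specializable method foo has two module-local call sites passing a possibly-null first argument, that parameter's nullable refcount starts at 2; each site later proven to pass a nonnull value decrements it, and when the count reaches 0 the parameter is marked nonnull and the change is propagated to foo's own callees via the work queue below.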
// // Anything disrupting the direct calls made with self-init will break this optimization. while (queue) { // Update the queue state. // Our only other per-iteration responsibility is now to free current NonnullPropWorkItem *current = (NonnullPropWorkItem *) queue->data; queue = queue->next; g_assert (current->argument < LLVMCountParams (current->lmethod)); // Does the actual leaf-node work here // Mark the function argument as nonnull for LLVM mono_llvm_set_func_nonnull_arg (current->lmethod, current->argument); // The rest of this is for propagating forward nullability changes // to calls that use the argument that is now nullable. // Get the actual LLVM value of the argument, so we can see which call instructions // used that argument LLVMValueRef caller_argument = LLVMGetParam (current->lmethod, current->argument); // Iterate over the calls using the newly-non-nullable argument GSList *calls = mono_llvm_calls_using (caller_argument); for (GSList *cursor = calls; cursor != NULL; cursor = cursor->next) { LLVMValueRef lcall = (LLVMValueRef) cursor->data; LLVMValueRef callee_lmethod = LLVMGetCalledValue (lcall); // If this wasn't a direct call for which mono_aot_can_specialize is true, // this lookup won't find a MonoMethod. MonoMethod *callee_method = (MonoMethod *) g_hash_table_lookup (all_specializable, callee_lmethod); if (!callee_method) continue; // Decrement number of nullable refs at that func's arg offset GArray *call_site_union = (GArray *) g_hash_table_lookup (module->method_to_call_info, callee_method); // It has module-local callers and is specializable, should have seen this call site // and inited this g_assert (call_site_union); // The function *definition* parameter arity should always be consistent int max_params = LLVMCountParams (callee_lmethod); if (call_site_union->len != max_params) { mono_llvm_dump_value (callee_lmethod); g_assert_not_reached (); } // Get the values that correspond to the parameters passed to the call // that used our argument LLVMValueRef *operands = mono_llvm_call_args (lcall); for (int call_argument = 0; call_argument < max_params; call_argument++) { // Every time we used the newly-non-nullable argument, decrement the nullable // refcount for that function. if (caller_argument == operands [call_argument]) { gint32 *nullable_count = &g_array_index (call_site_union, gint32, call_argument); g_assert (*nullable_count > 0); *nullable_count = *nullable_count - 1; // If we caused that callee's parameter to become newly nullable, add to work queue if (*nullable_count == 0) { NonnullPropWorkItem *item = g_malloc (sizeof (NonnullPropWorkItem)); item->lmethod = callee_lmethod; item->argument = call_argument; queue = g_slist_prepend (queue, item); } } } g_free (operands); // Update nullability refcount information for the callee now g_hash_table_insert (module->method_to_call_info, callee_method, call_site_union); } g_slist_free (calls); g_free (current); } } /* * Emit the aot module into the LLVM bitcode file FILENAME. */ void mono_llvm_emit_aot_module (const char *filename, const char *cu_name) { LLVMTypeRef inited_type; LLVMValueRef real_inited; MonoLLVMModule *module = &aot_module; emit_llvm_code_end (module); /* * Create the real init_var and replace all uses of the dummy variable with * the real one. 
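* (the dummy was declared with length 0; the real array holds max_inited_idx + 1 entries).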
*/ inited_type = LLVMArrayType (LLVMInt8Type (), module->max_inited_idx + 1); real_inited = LLVMAddGlobal (module->lmodule, inited_type, "mono_inited"); LLVMSetInitializer (real_inited, LLVMConstNull (inited_type)); LLVMSetLinkage (real_inited, LLVMInternalLinkage); mono_llvm_replace_uses_of (module->inited_var, real_inited); LLVMDeleteGlobal (module->inited_var); /* Replace the dummy info_ variables with the real ones */ for (int i = 0; i < module->cfgs->len; ++i) { MonoCompile *cfg = (MonoCompile *)g_ptr_array_index (module->cfgs, i); // FIXME: Eliminate unused vars // FIXME: Speed this up if (cfg->llvm_dummy_info_var) { if (cfg->llvm_info_var) { mono_llvm_replace_uses_of (cfg->llvm_dummy_info_var, cfg->llvm_info_var); LLVMDeleteGlobal (cfg->llvm_dummy_info_var); } else { // FIXME: How can this happen ? LLVMSetInitializer (cfg->llvm_dummy_info_var, mono_llvm_create_constant_data_array (NULL, 0)); } } } if (module->llvm_only) { emit_get_method (&aot_module); emit_get_unbox_tramp (&aot_module); } emit_init_aotconst (module); emit_llvm_used (&aot_module); emit_dbg_info (&aot_module, filename, cu_name); emit_aot_file_info (&aot_module); /* Replace PLT entries for directly callable methods with the methods themselves */ { GHashTableIter iter; MonoJumpInfo *ji; LLVMValueRef callee; GHashTable *specializable = g_hash_table_new (NULL, NULL); g_hash_table_iter_init (&iter, module->plt_entries_ji); while (g_hash_table_iter_next (&iter, (void**)&ji, (void**)&callee)) { if (mono_aot_is_direct_callable (ji)) { LLVMValueRef lmethod; lmethod = (LLVMValueRef)g_hash_table_lookup (module->method_to_lmethod, ji->data.method); /* The types might not match because the caller might pass an rgctx */ if (lmethod && LLVMTypeOf (callee) == LLVMTypeOf (lmethod)) { mono_llvm_replace_uses_of (callee, lmethod); if (mono_aot_can_specialize (ji->data.method)) g_hash_table_insert (specializable, lmethod, ji->data.method); mono_aot_mark_unused_llvm_plt_entry (ji); } } } mono_llvm_propagate_nonnull_final (specializable, module); g_hash_table_destroy (specializable); } #if 0 { char *verifier_err; if (LLVMVerifyModule (module->lmodule, LLVMReturnStatusAction, &verifier_err)) { printf ("%s\n", verifier_err); g_assert_not_reached (); } } #endif /* Note: you can still get an invalid bitcode file dumped and inspect it with `llvm-dis`: in a debugger, set a breakpoint on `LLVMVerifyModule` and fake its result to 0 (indicating success). */ LLVMWriteBitcodeToFile (module->lmodule, filename); } static LLVMValueRef md_string (const char *s) { return LLVMMDString (s, strlen (s)); } /* Debugging support */ static void emit_dbg_info (MonoLLVMModule *module, const char *filename, const char *cu_name) { LLVMModuleRef lmodule = module->lmodule; LLVMValueRef args [16], ver; /* * This can only be enabled when LLVM code is emitted into a separate object * file, since the AOT compiler also emits dwarf info, * and the abbrev indexes will not be correct since llvm has added its own * abbrevs. 
*/ if (!module->emit_dwarf) return; mono_llvm_di_builder_finalize (module->di_builder); args [0] = LLVMConstInt (LLVMInt32Type (), 2, FALSE); args [1] = LLVMMDString ("Dwarf Version", strlen ("Dwarf Version")); args [2] = LLVMConstInt (LLVMInt32Type (), 2, FALSE); ver = LLVMMDNode (args, 3); LLVMAddNamedMetadataOperand (lmodule, "llvm.module.flags", ver); args [0] = LLVMConstInt (LLVMInt32Type (), 2, FALSE); args [1] = LLVMMDString ("Debug Info Version", strlen ("Debug Info Version")); args [2] = LLVMConstInt (LLVMInt64Type (), 3, FALSE); ver = LLVMMDNode (args, 3); LLVMAddNamedMetadataOperand (lmodule, "llvm.module.flags", ver); } static LLVMValueRef emit_dbg_subprogram (EmitContext *ctx, MonoCompile *cfg, LLVMValueRef method, const char *name) { MonoLLVMModule *module = ctx->module; MonoDebugMethodInfo *minfo = ctx->minfo; char *source_file, *dir, *filename; MonoSymSeqPoint *sym_seq_points; int n_seq_points; if (!minfo) return NULL; mono_debug_get_seq_points (minfo, &source_file, NULL, NULL, &sym_seq_points, &n_seq_points); if (!source_file) source_file = g_strdup ("<unknown>"); dir = g_path_get_dirname (source_file); filename = g_path_get_basename (source_file); g_free (source_file); return (LLVMValueRef)mono_llvm_di_create_function (module->di_builder, module->cu, method, cfg->method->name, name, dir, filename, n_seq_points ? sym_seq_points [0].line : 1); } static void emit_dbg_loc (EmitContext *ctx, LLVMBuilderRef builder, const unsigned char *cil_code) { MonoCompile *cfg = ctx->cfg; if (ctx->minfo && cil_code && cil_code >= cfg->header->code && cil_code < cfg->header->code + cfg->header->code_size) { MonoDebugSourceLocation *loc; LLVMValueRef loc_md; loc = mono_debug_method_lookup_location (ctx->minfo, cil_code - cfg->header->code); if (loc) { loc_md = (LLVMValueRef)mono_llvm_di_create_location (ctx->module->di_builder, ctx->dbg_md, loc->row, loc->column); mono_llvm_di_set_location (builder, loc_md); mono_debug_free_source_location (loc); } } } static void emit_default_dbg_loc (EmitContext *ctx, LLVMBuilderRef builder) { if (ctx->minfo) { LLVMValueRef loc_md; loc_md = (LLVMValueRef)mono_llvm_di_create_location (ctx->module->di_builder, ctx->dbg_md, 0, 0); mono_llvm_di_set_location (builder, loc_md); } } /* DESIGN: - Emit LLVM IR from the mono IR using the LLVM C API. - The original arch specific code remains, so we can fall back to it if we run into something we can't handle. */ /* A partial list of issues: - Handling of opcodes which can throw exceptions. In the mono JIT, these are implemented using code like this: method: <compare> throw_pos: b<cond> ex_label <rest of code> ex_label: push throw_pos - method call <exception trampoline> The problematic part is push throw_pos - method, which cannot be represented in the LLVM IR, since it does not support label values. -> this can be implemented in AOT mode using inline asm + labels, but cannot be implemented in JIT mode ? -> a possible but slower implementation would use the normal exception throwing code but it would need to control the placement of the throw code (it needs to be exactly after the compare+branch). -> perhaps add a PC offset intrinsic ? - efficient implementation of .ovf opcodes. These are currently implemented as: <ins which sets the condition codes> b<cond> ex_label Some overflow opcodes are now supported by LLVM SVN. - exception handling, unwinding. - SSA is disabled for methods with exception handlers - How to obtain unwind info for LLVM compiled methods ? 
-> this is now solved by converting the unwind info generated by LLVM into our format. - LLVM uses the c++ exception handling framework, while we use our home grown code, and couldn't use the c++ one: - it's not supported under VC++ and other exotic platforms. - it might be impossible to support filter clauses with it. - trampolines. The trampolines need a predictable call sequence, since they need to disasm the calling code to obtain register numbers / offsets. LLVM currently generates this code in non-JIT mode: mov -0x98(%rax),%eax callq *%rax Here, the vtable pointer is lost. -> solution: use one vtable trampoline per class. - passing/receiving the IMT pointer/RGCTX. -> solution: pass them as normal arguments ? - argument passing. LLVM does not allow the specification of argument registers etc. This means that all calls are made according to the platform ABI. - passing/receiving vtypes. Vtypes passed/received in registers are handled by the front end by using a signature with scalar arguments, and loading the parts of the vtype into those arguments. Vtypes passed on the stack are handled using the 'byval' attribute. - ldaddr. Supported through alloca, we need to emit the load/store code. - types. The mono JIT uses pointer sized iregs/double fregs, while LLVM uses precisely typed registers, so we have to keep track of the precise LLVM type of each vreg. This is made easier because the IR is already in SSA form. An additional problem is that our IR is not consistent with types, i.e. i32/i64 types are frequently used incorrectly. */ /* AOT SUPPORT: Emit LLVM bytecode into a .bc file, compile it using llc into a .s file, then link it with the file containing the methods emitted by the JIT and the AOT data structures. */ /* FIXME: Normalize some aspects of the mono IR to allow easier translation, like: * - each bblock should end with a branch * - setting the return value, making cfg->ret non-volatile * - avoid some transformations in the JIT which make it harder for us to generate * code. * - use pointer types to help optimizations. */ #else /* DISABLE_JIT */ void mono_llvm_cleanup (void) { } void mono_llvm_free_mem_manager (MonoJitMemoryManager *mem_manager) { } void mono_llvm_init (gboolean enable_jit) { } #endif /* DISABLE_JIT */ #if !defined(DISABLE_JIT) && !defined(MONO_CROSS_COMPILE) /* LLVM JIT support */ /* * decode_llvm_eh_info: * * Decode the EH table emitted by llvm in jit mode, and store * the result into cfg. */ static void decode_llvm_eh_info (EmitContext *ctx, gpointer eh_frame) { MonoCompile *cfg = ctx->cfg; guint8 *cie, *fde; int fde_len; MonoLLVMFDEInfo info; MonoJitExceptionInfo *ei; guint8 *p = (guint8*)eh_frame; int version, fde_count, fde_offset; guint32 ei_len, i, nested_len; gpointer *type_info; gint32 *table; guint8 *unw_info; /* * Decode the one element EH table emitted by the MonoException class * in llvm. 
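* Layout, as decoded below: a version byte (3), one padding byte, padding to 4 byte alignment, a 32 bit fde_count, fde_count pairs of 32 bit (method index, fde offset) entries, one extra (code length, fde end offset) pair, then the CIE data, with the FDE itself located at the recorded offset.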
*/ /* Similar to decode_llvm_mono_eh_frame () in aot-runtime.c */ version = *p; g_assert (version == 3); p ++; p ++; p = (guint8 *)ALIGN_PTR_TO (p, 4); fde_count = *(guint32*)p; p += 4; table = (gint32*)p; g_assert (fde_count <= 2); /* The first entry is the real method */ g_assert (table [0] == 1); fde_offset = table [1]; table += fde_count * 2; /* Extra entry */ cfg->code_len = table [0]; fde_len = table [1] - fde_offset; table += 2; fde = (guint8*)eh_frame + fde_offset; cie = (guint8*)table; /* Compute lengths */ mono_unwind_decode_llvm_mono_fde (fde, fde_len, cie, cfg->native_code, &info, NULL, NULL, NULL); ei = (MonoJitExceptionInfo *)g_malloc0 (info.ex_info_len * sizeof (MonoJitExceptionInfo)); type_info = (gpointer *)g_malloc0 (info.ex_info_len * sizeof (gpointer)); unw_info = (guint8*)g_malloc0 (info.unw_info_len); mono_unwind_decode_llvm_mono_fde (fde, fde_len, cie, cfg->native_code, &info, ei, type_info, unw_info); cfg->encoded_unwind_ops = unw_info; cfg->encoded_unwind_ops_len = info.unw_info_len; if (cfg->verbose_level > 1) mono_print_unwind_info (cfg->encoded_unwind_ops, cfg->encoded_unwind_ops_len); if (info.this_reg != -1) { cfg->llvm_this_reg = info.this_reg; cfg->llvm_this_offset = info.this_offset; } ei_len = info.ex_info_len; // Nested clauses are currently disabled nested_len = 0; cfg->llvm_ex_info = (MonoJitExceptionInfo*)mono_mempool_alloc0 (cfg->mempool, (ei_len + nested_len) * sizeof (MonoJitExceptionInfo)); cfg->llvm_ex_info_len = ei_len + nested_len; memcpy (cfg->llvm_ex_info, ei, ei_len * sizeof (MonoJitExceptionInfo)); /* Fill the rest of the information from the type info */ for (i = 0; i < ei_len; ++i) { gint32 clause_index = *(gint32*)type_info [i]; MonoExceptionClause *clause = &cfg->header->clauses [clause_index]; cfg->llvm_ex_info [i].flags = clause->flags; cfg->llvm_ex_info [i].data.catch_class = clause->data.catch_class; cfg->llvm_ex_info [i].clause_index = clause_index; } } static MonoLLVMModule* init_jit_module (void) { MonoJitMemoryManager *jit_mm; MonoLLVMModule *module; // FIXME: jit_mm = get_default_jit_mm (); if (jit_mm->llvm_module) return (MonoLLVMModule*)jit_mm->llvm_module; mono_loader_lock (); if (jit_mm->llvm_module) { mono_loader_unlock (); return (MonoLLVMModule*)jit_mm->llvm_module; } module = g_new0 (MonoLLVMModule, 1); module->context = LLVMGetGlobalContext (); module->mono_ee = (MonoEERef*)mono_llvm_create_ee (&module->ee); // This contains just the intrinsics module->lmodule = LLVMModuleCreateWithName ("jit-global-module"); add_intrinsics (module->lmodule); add_types (module); module->llvm_types = g_hash_table_new (NULL, NULL); mono_memory_barrier (); jit_mm->llvm_module = module; mono_loader_unlock (); return (MonoLLVMModule*)jit_mm->llvm_module; } static void llvm_jit_finalize_method (EmitContext *ctx) { MonoCompile *cfg = ctx->cfg; int nvars = g_hash_table_size (ctx->jit_callees); LLVMValueRef *callee_vars = g_new0 (LLVMValueRef, nvars); gpointer *callee_addrs = g_new0 (gpointer, nvars); GHashTableIter iter; LLVMValueRef var; MonoMethod *callee; gpointer eh_frame; int i; /* * Compute the addresses of the LLVM globals pointing to the * methods called by the current method. Pass it to the trampoline * code so it can update them after their corresponding method was * compiled. 
*/ g_hash_table_iter_init (&iter, ctx->jit_callees); i = 0; while (g_hash_table_iter_next (&iter, NULL, (void**)&var)) callee_vars [i ++] = var; mono_llvm_optimize_method (ctx->lmethod); if (cfg->verbose_level > 1) { g_print ("\n*** Optimized LLVM IR for %s ***\n", mono_method_full_name (cfg->method, TRUE)); if (cfg->compile_aot) { mono_llvm_dump_value (ctx->lmethod); } else { mono_llvm_dump_module (ctx->lmodule); } g_print ("***\n\n"); } mono_codeman_enable_write (); cfg->native_code = (guint8*)mono_llvm_compile_method (ctx->module->mono_ee, cfg, ctx->lmethod, nvars, callee_vars, callee_addrs, &eh_frame); mono_llvm_remove_gc_safepoint_poll (ctx->lmodule); mono_codeman_disable_write (); decode_llvm_eh_info (ctx, eh_frame); // FIXME: MonoJitMemoryManager *jit_mm = get_default_jit_mm (); jit_mm_lock (jit_mm); if (!jit_mm->llvm_jit_callees) jit_mm->llvm_jit_callees = g_hash_table_new (NULL, NULL); g_hash_table_iter_init (&iter, ctx->jit_callees); i = 0; while (g_hash_table_iter_next (&iter, (void**)&callee, (void**)&var)) { GSList *addrs = (GSList*)g_hash_table_lookup (jit_mm->llvm_jit_callees, callee); addrs = g_slist_prepend (addrs, callee_addrs [i]); g_hash_table_insert (jit_mm->llvm_jit_callees, callee, addrs); i ++; } jit_mm_unlock (jit_mm); } #else static MonoLLVMModule* init_jit_module (void) { g_assert_not_reached (); } static void llvm_jit_finalize_method (EmitContext *ctx) { g_assert_not_reached (); } #endif static MonoCPUFeatures cpu_features; MonoCPUFeatures mono_llvm_get_cpu_features (void) { static const CpuFeatureAliasFlag flags_map [] = { #if defined(TARGET_X86) || defined(TARGET_AMD64) { "sse", MONO_CPU_X86_SSE }, { "sse2", MONO_CPU_X86_SSE2 }, { "pclmul", MONO_CPU_X86_PCLMUL }, { "aes", MONO_CPU_X86_AES }, { "sse3", MONO_CPU_X86_SSE3 }, { "ssse3", MONO_CPU_X86_SSSE3 }, { "sse4.1", MONO_CPU_X86_SSE41 }, { "sse4.2", MONO_CPU_X86_SSE42 }, { "popcnt", MONO_CPU_X86_POPCNT }, { "avx", MONO_CPU_X86_AVX }, { "avx2", MONO_CPU_X86_AVX2 }, { "fma", MONO_CPU_X86_FMA }, { "lzcnt", MONO_CPU_X86_LZCNT }, { "bmi", MONO_CPU_X86_BMI1 }, { "bmi2", MONO_CPU_X86_BMI2 }, #endif #if defined(TARGET_ARM64) { "crc", MONO_CPU_ARM64_CRC }, { "crypto", MONO_CPU_ARM64_CRYPTO }, { "neon", MONO_CPU_ARM64_NEON }, { "rdm", MONO_CPU_ARM64_RDM }, { "dotprod", MONO_CPU_ARM64_DP }, #endif #if defined(TARGET_WASM) { "simd", MONO_CPU_WASM_SIMD }, #endif // flags_map cannot be zero length in MSVC, so add useless dummy entry for arm32 #if defined(TARGET_ARM) && defined(HOST_WIN32) { "inited", MONO_CPU_INITED}, #endif }; if (!cpu_features) cpu_features = MONO_CPU_INITED | (MonoCPUFeatures)mono_llvm_check_cpu_features (flags_map, G_N_ELEMENTS (flags_map)); return cpu_features; }
/** * \file * llvm "Backend" for the mono JIT * * Copyright 2009-2011 Novell Inc (http://www.novell.com) * Copyright 2011 Xamarin Inc (http://www.xamarin.com) * Licensed under the MIT license. See LICENSE file in the project root for full license information. */ #include "config.h" #include <mono/metadata/debug-helpers.h> #include <mono/metadata/debug-internals.h> #include <mono/metadata/mempool-internals.h> #include <mono/metadata/environment.h> #include <mono/metadata/object-internals.h> #include <mono/metadata/abi-details.h> #include <mono/metadata/tokentype.h> #include <mono/utils/mono-tls.h> #include <mono/utils/mono-dl.h> #include <mono/utils/mono-time.h> #include <mono/utils/freebsd-dwarf.h> #ifndef __STDC_LIMIT_MACROS #define __STDC_LIMIT_MACROS #endif #ifndef __STDC_CONSTANT_MACROS #define __STDC_CONSTANT_MACROS #endif #include "llvm-c/BitWriter.h" #include "llvm-c/Analysis.h" #include "mini-llvm-cpp.h" #include "llvm-jit.h" #include "aot-compiler.h" #include "mini-llvm.h" #include "mini-runtime.h" #include <mono/utils/mono-math.h> #ifndef DISABLE_JIT #if defined(TARGET_AMD64) && defined(TARGET_WIN32) && defined(HOST_WIN32) && defined(_MSC_VER) #define TARGET_X86_64_WIN32_MSVC #endif #if defined(TARGET_X86_64_WIN32_MSVC) #define TARGET_WIN32_MSVC #endif #if LLVM_API_VERSION < 900 #error "The version of the mono llvm repository is too old." #endif /* * Information associated by mono with LLVM modules. */ typedef struct { LLVMModuleRef lmodule; LLVMValueRef throw_icall, rethrow, throw_corlib_exception; GHashTable *llvm_types; LLVMValueRef dummy_got_var; const char *get_method_symbol; const char *get_unbox_tramp_symbol; const char *init_aotconst_symbol; GHashTable *plt_entries; GHashTable *plt_entries_ji; GHashTable *method_to_lmethod; GHashTable *method_to_call_info; GHashTable *lvalue_to_lcalls; GHashTable *direct_callables; /* Maps got slot index -> LLVMValueRef */ GHashTable *aotconst_vars; char **bb_names; int bb_names_len; GPtrArray *used; LLVMTypeRef ptr_type; GPtrArray *subprogram_mds; MonoEERef *mono_ee; LLVMExecutionEngineRef ee; gboolean external_symbols; gboolean emit_dwarf; int max_got_offset; LLVMValueRef personality; gpointer gc_poll_cold_wrapper_compiled; /* For AOT */ MonoAssembly *assembly; char *global_prefix; MonoAotFileInfo aot_info; const char *eh_frame_symbol; LLVMValueRef get_method, get_unbox_tramp, init_aotconst_func; LLVMValueRef init_methods [AOT_INIT_METHOD_NUM]; LLVMValueRef code_start, code_end; LLVMValueRef inited_var; LLVMValueRef unbox_tramp_indexes; LLVMValueRef unbox_trampolines; LLVMValueRef gc_poll_cold_wrapper; LLVMValueRef info_var; LLVMTypeRef *info_var_eltypes; int max_inited_idx, max_method_idx; gboolean has_jitted_code; gboolean static_link; gboolean llvm_only; gboolean interp; GHashTable *idx_to_lmethod; GHashTable *idx_to_unbox_tramp; GPtrArray *callsite_list; LLVMContextRef context; LLVMValueRef sentinel_exception; LLVMValueRef gc_safe_point_flag_var; LLVMValueRef interrupt_flag_var; void *di_builder, *cu; GHashTable *objc_selector_to_var; GPtrArray *cfgs; int unbox_tramp_num, unbox_tramp_elemsize; GHashTable *got_idx_to_type; GHashTable *no_method_table_lmethods; } MonoLLVMModule; /* * Information associated by the backend with mono basic blocks. */ typedef struct { LLVMBasicBlockRef bblock, end_bblock; LLVMValueRef finally_ind; gboolean added, invoke_target; /* * If this bblock is the start of a finally clause, this is a list of bblocks it * needs to branch to in ENDFINALLY. 
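* (consumed when the ENDFINALLY switch statements are created).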
*/ GSList *call_handler_return_bbs; /* * If this bblock is the start of a finally clause, this is the bblock that * CALL_HANDLER needs to branch to. */ LLVMBasicBlockRef call_handler_target_bb; /* The list of switch statements generated by ENDFINALLY instructions */ GSList *endfinally_switch_ins_list; GSList *phi_nodes; } BBInfo; /* * Structure containing emit state */ typedef struct { MonoMemPool *mempool; /* Maps method names to the corresponding LLVMValueRef */ GHashTable *emitted_method_decls; MonoCompile *cfg; LLVMValueRef lmethod; MonoLLVMModule *module; LLVMModuleRef lmodule; BBInfo *bblocks; int sindex, default_index, ex_index; LLVMBuilderRef builder; LLVMValueRef *values, *addresses; MonoType **vreg_cli_types; LLVMCallInfo *linfo; MonoMethodSignature *sig; GSList *builders; GHashTable *region_to_handler; GHashTable *clause_to_handler; LLVMBuilderRef alloca_builder; LLVMValueRef last_alloca; LLVMValueRef rgctx_arg; LLVMValueRef this_arg; LLVMTypeRef *vreg_types; gboolean *is_vphi; LLVMTypeRef method_type; LLVMBasicBlockRef init_bb, inited_bb; gboolean *is_dead; gboolean *unreachable; gboolean llvm_only; gboolean has_got_access; gboolean is_linkonce; gboolean emit_dummy_arg; gboolean has_safepoints; gboolean has_catch; int this_arg_pindex, rgctx_arg_pindex; LLVMValueRef imt_rgctx_loc; GHashTable *llvm_types; LLVMValueRef dbg_md; MonoDebugMethodInfo *minfo; /* For every clause, the clauses it is nested in */ GSList **nested_in; LLVMValueRef ex_var; GHashTable *exc_meta; GPtrArray *callsite_list; GPtrArray *phi_values; GPtrArray *bblock_list; char *method_name; GHashTable *jit_callees; LLVMValueRef long_bb_break_var; int *gc_var_indexes; LLVMValueRef gc_pin_area; LLVMValueRef il_state; LLVMValueRef il_state_ret; } EmitContext; typedef struct { MonoBasicBlock *bb; MonoInst *phi; MonoBasicBlock *in_bb; int sreg; } PhiNode; /* * Instruction metadata * This is the same as ins_info, but LREG != IREG. 
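* Each opcode contributes four characters (dest, src1, src2, src3), which is why LLVM_INS_INFO below indexes by (opcode - OP_START - 1) * 4.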
*/ #ifdef MINI_OP #undef MINI_OP #endif #ifdef MINI_OP3 #undef MINI_OP3 #endif #define MINI_OP(a,b,dest,src1,src2) dest, src1, src2, ' ', #define MINI_OP3(a,b,dest,src1,src2,src3) dest, src1, src2, src3, #define NONE ' ' #define IREG 'i' #define FREG 'f' #define VREG 'v' #define XREG 'x' #define LREG 'l' /* keep in sync with the enum in mini.h */ const char mini_llvm_ins_info[] = { #include "mini-ops.h" }; #undef MINI_OP #undef MINI_OP3 #if TARGET_SIZEOF_VOID_P == 4 #define GET_LONG_IMM(ins) ((ins)->inst_l) #else #define GET_LONG_IMM(ins) ((ins)->inst_imm) #endif #define LLVM_INS_INFO(opcode) (&mini_llvm_ins_info [((opcode) - OP_START - 1) * 4]) #if 0 #define TRACE_FAILURE(msg) do { printf ("%s\n", msg); } while (0) #else #define TRACE_FAILURE(msg) #endif #ifdef TARGET_X86 #define IS_TARGET_X86 1 #else #define IS_TARGET_X86 0 #endif #ifdef TARGET_AMD64 #define IS_TARGET_AMD64 1 #else #define IS_TARGET_AMD64 0 #endif #define ctx_ok(ctx) (!(ctx)->cfg->disable_llvm) enum { MAX_VECTOR_ELEMS = 32, // 2 vectors * 128 bits per vector / 8 bits per element ARM64_MAX_VECTOR_ELEMS = 16, }; const int mask_0_incr_1 [] = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, }; static LLVMIntPredicate cond_to_llvm_cond [] = { LLVMIntEQ, LLVMIntNE, LLVMIntSLE, LLVMIntSGE, LLVMIntSLT, LLVMIntSGT, LLVMIntULE, LLVMIntUGE, LLVMIntULT, LLVMIntUGT, }; static LLVMRealPredicate fpcond_to_llvm_cond [] = { LLVMRealOEQ, LLVMRealUNE, LLVMRealOLE, LLVMRealOGE, LLVMRealOLT, LLVMRealOGT, LLVMRealULE, LLVMRealUGE, LLVMRealULT, LLVMRealUGT, LLVMRealORD, LLVMRealUNO }; /* See Table 3-1 ("Comparison Predicate for CMPPD and CMPPS Instructions") in * Vol. 2A of the Intel SDM. */ enum { SSE_eq_ord_nosignal = 0, SSE_lt_ord_signal = 1, SSE_le_ord_signal = 2, SSE_unord_nosignal = 3, SSE_neq_unord_nosignal = 4, SSE_nlt_unord_signal = 5, SSE_nle_unord_signal = 6, SSE_ord_nosignal = 7, }; static MonoLLVMModule aot_module; static GHashTable *intrins_id_to_intrins; static LLVMTypeRef i1_t, i2_t, i4_t, i8_t, r4_t, r8_t; static LLVMTypeRef sse_i1_t, sse_i2_t, sse_i4_t, sse_i8_t, sse_r4_t, sse_r8_t; static LLVMTypeRef v64_i1_t, v64_i2_t, v64_i4_t, v64_i8_t, v64_r4_t, v64_r8_t; static LLVMTypeRef v128_i1_t, v128_i2_t, v128_i4_t, v128_i8_t, v128_r4_t, v128_r8_t; static LLVMTypeRef void_func_t; static MonoLLVMModule *init_jit_module (void); static void emit_dbg_loc (EmitContext *ctx, LLVMBuilderRef builder, const unsigned char *cil_code); static void emit_default_dbg_loc (EmitContext *ctx, LLVMBuilderRef builder); static LLVMValueRef emit_dbg_subprogram (EmitContext *ctx, MonoCompile *cfg, LLVMValueRef method, const char *name); static void emit_dbg_info (MonoLLVMModule *module, const char *filename, const char *cu_name); static void emit_cond_system_exception (EmitContext *ctx, MonoBasicBlock *bb, const char *exc_type, LLVMValueRef cmp, gboolean force_explicit); static LLVMValueRef get_intrins (EmitContext *ctx, int id); static LLVMValueRef get_intrins_from_module (LLVMModuleRef lmodule, int id); static void llvm_jit_finalize_method (EmitContext *ctx); static void mono_llvm_nonnull_state_update (EmitContext *ctx, LLVMValueRef lcall, MonoMethod *call_method, LLVMValueRef *args, int num_params); static void mono_llvm_propagate_nonnull_final (GHashTable *all_specializable, MonoLLVMModule *module); static void create_aot_info_var (MonoLLVMModule *module); static void set_invariant_load_flag (LLVMValueRef v); static void set_nonnull_load_flag (LLVMValueRef v); enum { 
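/* Overload tag bits: the low three bits (INTRIN_vectormask) select scalar/64-bit/128-bit width, the higher bits select the element type. */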
INTRIN_scalar = 1 << 0, INTRIN_vector64 = 1 << 1, INTRIN_vector128 = 1 << 2, INTRIN_vectorwidths = 3, INTRIN_vectormask = 0x7, INTRIN_int8 = 1 << 3, INTRIN_int16 = 1 << 4, INTRIN_int32 = 1 << 5, INTRIN_int64 = 1 << 6, INTRIN_float32 = 1 << 7, INTRIN_float64 = 1 << 8, INTRIN_elementwidths = 6, }; typedef uint16_t llvm_ovr_tag_t; static LLVMTypeRef intrin_types [INTRIN_vectorwidths][INTRIN_elementwidths]; static const llvm_ovr_tag_t intrin_arm64_ovr [] = { #define INTRINS(sym, ...) 0, #define INTRINS_OVR(sym, ...) 0, #define INTRINS_OVR_2_ARG(sym, ...) 0, #define INTRINS_OVR_3_ARG(sym, ...) 0, #define INTRINS_OVR_TAG(sym, _, arch, spec) spec, #define INTRINS_OVR_TAG_KIND(sym, _, kind, arch, spec) spec, #include "llvm-intrinsics.h" }; enum { INTRIN_kind_ftoi = 1, INTRIN_kind_widen, INTRIN_kind_widen_across, INTRIN_kind_across, INTRIN_kind_arm64_dot_prod, }; static const uint8_t intrin_kind [] = { #define INTRINS(sym, ...) 0, #define INTRINS_OVR(sym, ...) 0, #define INTRINS_OVR_2_ARG(sym, ...) 0, #define INTRINS_OVR_3_ARG(sym, ...) 0, #define INTRINS_OVR_TAG(sym, _, arch, spec) 0, #define INTRINS_OVR_TAG_KIND(sym, _, arch, kind, spec) kind, #include "llvm-intrinsics.h" }; static inline llvm_ovr_tag_t ovr_tag_force_scalar (llvm_ovr_tag_t tag) { return (tag & ~INTRIN_vectormask) | INTRIN_scalar; } static inline llvm_ovr_tag_t ovr_tag_smaller_vector (llvm_ovr_tag_t tag) { return (tag & ~INTRIN_vectormask) | ((tag & INTRIN_vectormask) >> 1); } static inline llvm_ovr_tag_t ovr_tag_smaller_elements (llvm_ovr_tag_t tag) { return ((tag & ~INTRIN_vectormask) >> 1) | (tag & INTRIN_vectormask); } static inline llvm_ovr_tag_t ovr_tag_corresponding_integer (llvm_ovr_tag_t tag) { return ((tag & ~INTRIN_vectormask) >> 2) | (tag & INTRIN_vectormask); } static LLVMTypeRef ovr_tag_to_llvm_type (llvm_ovr_tag_t tag) { int vw = 0; int ew = 0; if (tag & INTRIN_vector64) vw = 1; else if (tag & INTRIN_vector128) vw = 2; if (tag & INTRIN_int16) ew = 1; else if (tag & INTRIN_int32) ew = 2; else if (tag & INTRIN_int64) ew = 3; else if (tag & INTRIN_float32) ew = 4; else if (tag & INTRIN_float64) ew = 5; return intrin_types [vw][ew]; } static int key_from_id_and_tag (int id, llvm_ovr_tag_t ovr_tag) { return (((int) ovr_tag) << 23) | id; } static llvm_ovr_tag_t ovr_tag_from_mono_vector_class (MonoClass *klass) { int size = mono_class_value_size (klass, NULL); llvm_ovr_tag_t ret = 0; switch (size) { case 8: ret |= INTRIN_vector64; break; case 16: ret |= INTRIN_vector128; break; } MonoType *etype = mono_class_get_context (klass)->class_inst->type_argv [0]; switch (etype->type) { case MONO_TYPE_I1: case MONO_TYPE_U1: ret |= INTRIN_int8; break; case MONO_TYPE_I2: case MONO_TYPE_U2: ret |= INTRIN_int16; break; case MONO_TYPE_I4: case MONO_TYPE_U4: ret |= INTRIN_int32; break; case MONO_TYPE_I8: case MONO_TYPE_U8: ret |= INTRIN_int64; break; case MONO_TYPE_R4: ret |= INTRIN_float32; break; case MONO_TYPE_R8: ret |= INTRIN_float64; break; } return ret; } static llvm_ovr_tag_t ovr_tag_from_llvm_type (LLVMTypeRef type) { llvm_ovr_tag_t ret = 0; LLVMTypeKind kind = LLVMGetTypeKind (type); LLVMTypeRef elem_t = NULL; switch (kind) { case LLVMVectorTypeKind: { elem_t = LLVMGetElementType (type); unsigned int bits = mono_llvm_get_prim_size_bits (type); switch (bits) { case 64: ret |= INTRIN_vector64; break; case 128: ret |= INTRIN_vector128; break; default: g_assert_not_reached (); } break; } default: g_assert_not_reached (); } if (elem_t == i1_t) ret |= INTRIN_int8; if (elem_t == i2_t) ret |= INTRIN_int16; if (elem_t == i4_t) ret |= 
INTRIN_int32; if (elem_t == i8_t) ret |= INTRIN_int64; if (elem_t == r4_t) ret |= INTRIN_float32; if (elem_t == r8_t) ret |= INTRIN_float64; return ret; } static inline void set_failure (EmitContext *ctx, const char *message) { TRACE_FAILURE (message); ctx->cfg->exception_message = g_strdup (message); ctx->cfg->disable_llvm = TRUE; } static LLVMValueRef const_int1 (int v) { return LLVMConstInt (LLVMInt1Type (), v ? 1 : 0, FALSE); } static LLVMValueRef const_int8 (int v) { return LLVMConstInt (LLVMInt8Type (), v, FALSE); } static LLVMValueRef const_int32 (int v) { return LLVMConstInt (LLVMInt32Type (), v, FALSE); } static LLVMValueRef const_int64 (int64_t v) { return LLVMConstInt (LLVMInt64Type (), v, FALSE); } /* * IntPtrType: * * The LLVM type with width == TARGET_SIZEOF_VOID_P */ static LLVMTypeRef IntPtrType (void) { return TARGET_SIZEOF_VOID_P == 8 ? LLVMInt64Type () : LLVMInt32Type (); } static LLVMTypeRef ObjRefType (void) { return TARGET_SIZEOF_VOID_P == 8 ? LLVMPointerType (LLVMInt64Type (), 0) : LLVMPointerType (LLVMInt32Type (), 0); } static LLVMTypeRef ThisType (void) { return TARGET_SIZEOF_VOID_P == 8 ? LLVMPointerType (LLVMInt64Type (), 0) : LLVMPointerType (LLVMInt32Type (), 0); } typedef struct { int32_t size; uint32_t align; } MonoSizeAlign; /* * get_vtype_size_align: * * Return the size and alignment of the LLVM representation of the vtype T. */ static MonoSizeAlign get_vtype_size_align (MonoType *t) { uint32_t align = 0; int32_t size = mono_class_value_size (mono_class_from_mono_type_internal (t), &align); /* LLVMArgAsIArgs depends on this since it stores whole words */ while (size < 2 * TARGET_SIZEOF_VOID_P && mono_is_power_of_two (size) == -1) size ++; MonoSizeAlign ret = { size, align }; return ret; } /* * simd_class_to_llvm_type: * * Return the LLVM type corresponding to the Mono.SIMD class KLASS */ static LLVMTypeRef simd_class_to_llvm_type (EmitContext *ctx, MonoClass *klass) { const char *klass_name = m_class_get_name (klass); if (!strcmp (klass_name, "Vector2d")) { return LLVMVectorType (LLVMDoubleType (), 2); } else if (!strcmp (klass_name, "Vector2l")) { return LLVMVectorType (LLVMInt64Type (), 2); } else if (!strcmp (klass_name, "Vector2ul")) { return LLVMVectorType (LLVMInt64Type (), 2); } else if (!strcmp (klass_name, "Vector4i")) { return LLVMVectorType (LLVMInt32Type (), 4); } else if (!strcmp (klass_name, "Vector4ui")) { return LLVMVectorType (LLVMInt32Type (), 4); } else if (!strcmp (klass_name, "Vector4f")) { return LLVMVectorType (LLVMFloatType (), 4); } else if (!strcmp (klass_name, "Vector8s")) { return LLVMVectorType (LLVMInt16Type (), 8); } else if (!strcmp (klass_name, "Vector8us")) { return LLVMVectorType (LLVMInt16Type (), 8); } else if (!strcmp (klass_name, "Vector16sb")) { return LLVMVectorType (LLVMInt8Type (), 16); } else if (!strcmp (klass_name, "Vector16b")) { return LLVMVectorType (LLVMInt8Type (), 16); } else if (!strcmp (klass_name, "Vector2")) { /* System.Numerics */ return LLVMVectorType (LLVMFloatType (), 4); } else if (!strcmp (klass_name, "Vector3")) { return LLVMVectorType (LLVMFloatType (), 4); } else if (!strcmp (klass_name, "Vector4")) { return LLVMVectorType (LLVMFloatType (), 4); } else if (!strcmp (klass_name, "Vector`1") || !strcmp (klass_name, "Vector64`1") || !strcmp (klass_name, "Vector128`1") || !strcmp (klass_name, "Vector256`1")) { MonoType *etype = mono_class_get_generic_class (klass)->context.class_inst->type_argv [0]; int size = mono_class_value_size (klass, NULL); switch (etype->type) { case MONO_TYPE_I1: case MONO_TYPE_U1: return 
LLVMVectorType (LLVMInt8Type (), size); case MONO_TYPE_I2: case MONO_TYPE_U2: return LLVMVectorType (LLVMInt16Type (), size / 2); case MONO_TYPE_I4: case MONO_TYPE_U4: return LLVMVectorType (LLVMInt32Type (), size / 4); case MONO_TYPE_I8: case MONO_TYPE_U8: return LLVMVectorType (LLVMInt64Type (), size / 8); case MONO_TYPE_I: case MONO_TYPE_U: #if TARGET_SIZEOF_VOID_P == 8 return LLVMVectorType (LLVMInt64Type (), size / 8); #else return LLVMVectorType (LLVMInt32Type (), size / 4); #endif case MONO_TYPE_R4: return LLVMVectorType (LLVMFloatType (), size / 4); case MONO_TYPE_R8: return LLVMVectorType (LLVMDoubleType (), size / 8); default: g_assert_not_reached (); return NULL; } } else { printf ("%s\n", klass_name); NOT_IMPLEMENTED; return NULL; } } static LLVMTypeRef simd_valuetuple_to_llvm_type (EmitContext *ctx, MonoClass *klass) { const char *klass_name = m_class_get_name (klass); if (!strcmp (klass_name, "ValueTuple`2")) { MonoType *etype = mono_class_get_generic_class (klass)->context.class_inst->type_argv [0]; if (etype->type != MONO_TYPE_GENERICINST) g_assert_not_reached (); MonoClass *eklass = etype->data.generic_class->cached_class; LLVMTypeRef ltype = simd_class_to_llvm_type (ctx, eklass); return LLVMArrayType (ltype, 2); } g_assert_not_reached (); } /* Return the 128 bit SIMD type corresponding to the mono type TYPE */ static inline G_GNUC_UNUSED LLVMTypeRef type_to_sse_type (int type) { switch (type) { case MONO_TYPE_I1: case MONO_TYPE_U1: return LLVMVectorType (LLVMInt8Type (), 16); case MONO_TYPE_U2: case MONO_TYPE_I2: return LLVMVectorType (LLVMInt16Type (), 8); case MONO_TYPE_U4: case MONO_TYPE_I4: return LLVMVectorType (LLVMInt32Type (), 4); case MONO_TYPE_U8: case MONO_TYPE_I8: return LLVMVectorType (LLVMInt64Type (), 2); case MONO_TYPE_I: case MONO_TYPE_U: #if TARGET_SIZEOF_VOID_P == 8 return LLVMVectorType (LLVMInt64Type (), 2); #else return LLVMVectorType (LLVMInt32Type (), 4); #endif case MONO_TYPE_R8: return LLVMVectorType (LLVMDoubleType (), 2); case MONO_TYPE_R4: return LLVMVectorType (LLVMFloatType (), 4); default: g_assert_not_reached (); return NULL; } } static LLVMTypeRef create_llvm_type_for_type (MonoLLVMModule *module, MonoClass *klass) { int i, size, nfields, esize; LLVMTypeRef *eltypes; char *name; MonoType *t; LLVMTypeRef ltype; t = m_class_get_byval_arg (klass); if (mini_type_is_hfa (t, &nfields, &esize)) { /* * This is needed on arm64 where HFAs are returned in * registers. */ /* SIMD types have size 16 in mono_class_value_size () */ if (m_class_is_simd_type (klass)) nfields = 16/ esize; size = nfields; eltypes = g_new (LLVMTypeRef, size); for (i = 0; i < size; ++i) eltypes [i] = esize == 4 ? LLVMFloatType () : LLVMDoubleType (); } else { MonoSizeAlign size_align = get_vtype_size_align (t); eltypes = g_new (LLVMTypeRef, size_align.size); size = 0; uint32_t bytes = 0; uint32_t chunk = size_align.align < TARGET_SIZEOF_VOID_P ? 
size_align.align : TARGET_SIZEOF_VOID_P; for (; chunk > 0; chunk = chunk >> 1) { for (; (bytes + chunk) <= size_align.size; bytes += chunk) { eltypes [size] = LLVMIntType (chunk * 8); ++size; } } } name = mono_type_full_name (m_class_get_byval_arg (klass)); ltype = LLVMStructCreateNamed (module->context, name); LLVMStructSetBody (ltype, eltypes, size, FALSE); g_free (eltypes); g_free (name); return ltype; } static LLVMTypeRef primitive_type_to_llvm_type (MonoTypeEnum type) { switch (type) { case MONO_TYPE_I1: case MONO_TYPE_U1: return LLVMInt8Type (); case MONO_TYPE_I2: case MONO_TYPE_U2: return LLVMInt16Type (); case MONO_TYPE_I4: case MONO_TYPE_U4: return LLVMInt32Type (); case MONO_TYPE_I8: case MONO_TYPE_U8: return LLVMInt64Type (); case MONO_TYPE_R4: return LLVMFloatType (); case MONO_TYPE_R8: return LLVMDoubleType (); case MONO_TYPE_I: case MONO_TYPE_U: return IntPtrType (); default: return NULL; } } static MonoTypeEnum inst_c1_type (const MonoInst *ins) { return (MonoTypeEnum)ins->inst_c1; } /* * type_to_llvm_type: * * Return the LLVM type corresponding to T. */ static LLVMTypeRef type_to_llvm_type (EmitContext *ctx, MonoType *t) { if (m_type_is_byref (t)) return ThisType (); t = mini_get_underlying_type (t); LLVMTypeRef prim_llvm_type = primitive_type_to_llvm_type (t->type); if (prim_llvm_type != NULL) return prim_llvm_type; switch (t->type) { case MONO_TYPE_VOID: return LLVMVoidType (); case MONO_TYPE_OBJECT: return ObjRefType (); case MONO_TYPE_PTR: case MONO_TYPE_FNPTR: { MonoClass *klass = mono_class_from_mono_type_internal (t); MonoClass *ptr_klass = m_class_get_element_class (klass); MonoType *ptr_type = m_class_get_byval_arg (ptr_klass); /* Handle primitive pointers */ switch (ptr_type->type) { case MONO_TYPE_I1: case MONO_TYPE_I2: case MONO_TYPE_I4: case MONO_TYPE_U1: case MONO_TYPE_U2: case MONO_TYPE_U4: return LLVMPointerType (type_to_llvm_type (ctx, ptr_type), 0); } return ObjRefType (); } case MONO_TYPE_VAR: case MONO_TYPE_MVAR: /* Because of generic sharing */ return ObjRefType (); case MONO_TYPE_GENERICINST: if (!mono_type_generic_inst_is_valuetype (t)) return ObjRefType (); /* Fall through */ case MONO_TYPE_VALUETYPE: case MONO_TYPE_TYPEDBYREF: { MonoClass *klass; LLVMTypeRef ltype; klass = mono_class_from_mono_type_internal (t); if (MONO_CLASS_IS_SIMD (ctx->cfg, klass)) return simd_class_to_llvm_type (ctx, klass); if (m_class_is_enumtype (klass)) return type_to_llvm_type (ctx, mono_class_enum_basetype_internal (klass)); ltype = (LLVMTypeRef)g_hash_table_lookup (ctx->module->llvm_types, klass); if (!ltype) { ltype = create_llvm_type_for_type (ctx->module, klass); g_hash_table_insert (ctx->module->llvm_types, klass, ltype); } return ltype; } default: printf ("X: %d\n", t->type); ctx->cfg->exception_message = g_strdup_printf ("type %s", mono_type_full_name (t)); ctx->cfg->disable_llvm = TRUE; return NULL; } } static gboolean primitive_type_is_unsigned (MonoTypeEnum t) { switch (t) { case MONO_TYPE_U1: case MONO_TYPE_U2: case MONO_TYPE_CHAR: case MONO_TYPE_U4: case MONO_TYPE_U8: case MONO_TYPE_U: return TRUE; default: return FALSE; } } /* * type_is_unsigned: * * Return whether T is an unsigned int type. */ static gboolean type_is_unsigned (EmitContext *ctx, MonoType *t) { t = mini_get_underlying_type (t); if (m_type_is_byref (t)) return FALSE; return primitive_type_is_unsigned (t->type); } /* * type_to_llvm_arg_type: * * Same as type_to_llvm_type, but treat i8/i16 as i32. 
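* Sub-int arguments are widened to i32 so interop with JITted code (which expects all register bits set) works; arm64 is excluded below because it packs multiple arguments into one stack slot.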
*/ static LLVMTypeRef type_to_llvm_arg_type (EmitContext *ctx, MonoType *t) { LLVMTypeRef ptype = type_to_llvm_type (ctx, t); if (ctx->cfg->llvm_only) return ptype; /* * This works on all abis except arm64/ios which passes multiple * arguments in one stack slot. */ #ifndef TARGET_ARM64 if (ptype == LLVMInt8Type () || ptype == LLVMInt16Type ()) { /* * LLVM generates code which only sets the lower bits, while JITted * code expects all the bits to be set. */ ptype = LLVMInt32Type (); } #endif return ptype; } /* * llvm_type_to_stack_type: * * Return the LLVM type which needs to be used when a value of type TYPE is pushed * on the IL stack. */ static G_GNUC_UNUSED LLVMTypeRef llvm_type_to_stack_type (MonoCompile *cfg, LLVMTypeRef type) { if (type == NULL) return NULL; if (type == LLVMInt8Type ()) return LLVMInt32Type (); else if (type == LLVMInt16Type ()) return LLVMInt32Type (); else if (!cfg->r4fp && type == LLVMFloatType ()) return LLVMDoubleType (); else return type; } /* * regtype_to_llvm_type: * * Return the LLVM type corresponding to the regtype C used in instruction * descriptions. */ static LLVMTypeRef regtype_to_llvm_type (char c) { switch (c) { case 'i': return LLVMInt32Type (); case 'l': return LLVMInt64Type (); case 'f': return LLVMDoubleType (); default: return NULL; } } /* * op_to_llvm_type: * * Return the LLVM type corresponding to the unary/binary opcode OPCODE. */ static LLVMTypeRef op_to_llvm_type (int opcode) { switch (opcode) { case OP_ICONV_TO_I1: case OP_LCONV_TO_I1: return LLVMInt8Type (); case OP_ICONV_TO_U1: case OP_LCONV_TO_U1: return LLVMInt8Type (); case OP_ICONV_TO_I2: case OP_LCONV_TO_I2: return LLVMInt16Type (); case OP_ICONV_TO_U2: case OP_LCONV_TO_U2: return LLVMInt16Type (); case OP_ICONV_TO_I4: case OP_LCONV_TO_I4: return LLVMInt32Type (); case OP_ICONV_TO_U4: case OP_LCONV_TO_U4: return LLVMInt32Type (); case OP_ICONV_TO_I8: return LLVMInt64Type (); case OP_ICONV_TO_R4: return LLVMFloatType (); case OP_ICONV_TO_R8: return LLVMDoubleType (); case OP_ICONV_TO_U8: return LLVMInt64Type (); case OP_FCONV_TO_I4: return LLVMInt32Type (); case OP_FCONV_TO_I8: return LLVMInt64Type (); case OP_FCONV_TO_I1: case OP_FCONV_TO_U1: case OP_RCONV_TO_I1: case OP_RCONV_TO_U1: return LLVMInt8Type (); case OP_FCONV_TO_I2: case OP_FCONV_TO_U2: case OP_RCONV_TO_I2: case OP_RCONV_TO_U2: return LLVMInt16Type (); case OP_FCONV_TO_U4: case OP_RCONV_TO_U4: return LLVMInt32Type (); case OP_FCONV_TO_U8: case OP_RCONV_TO_U8: return LLVMInt64Type (); case OP_IADD_OVF: case OP_IADD_OVF_UN: case OP_ISUB_OVF: case OP_ISUB_OVF_UN: case OP_IMUL_OVF: case OP_IMUL_OVF_UN: return LLVMInt32Type (); case OP_LADD_OVF: case OP_LADD_OVF_UN: case OP_LSUB_OVF: case OP_LSUB_OVF_UN: case OP_LMUL_OVF: case OP_LMUL_OVF_UN: return LLVMInt64Type (); default: printf ("%s\n", mono_inst_name (opcode)); g_assert_not_reached (); return NULL; } } #define CLAUSE_START(clause) ((clause)->try_offset) #define CLAUSE_END(clause) (((clause))->try_offset + ((clause))->try_len) /* * load_store_to_llvm_type: * * Return the size/sign/zero extension corresponding to the load/store opcode * OPCODE. 
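* For example, OP_LOADI1_MEMBASE sets *size = 1 and *sext = TRUE and returns an i8 type.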
*/ static LLVMTypeRef load_store_to_llvm_type (int opcode, int *size, gboolean *sext, gboolean *zext) { *sext = FALSE; *zext = FALSE; switch (opcode) { case OP_LOADI1_MEMBASE: case OP_STOREI1_MEMBASE_REG: case OP_STOREI1_MEMBASE_IMM: case OP_ATOMIC_LOAD_I1: case OP_ATOMIC_STORE_I1: *size = 1; *sext = TRUE; return LLVMInt8Type (); case OP_LOADU1_MEMBASE: case OP_LOADU1_MEM: case OP_ATOMIC_LOAD_U1: case OP_ATOMIC_STORE_U1: *size = 1; *zext = TRUE; return LLVMInt8Type (); case OP_LOADI2_MEMBASE: case OP_STOREI2_MEMBASE_REG: case OP_STOREI2_MEMBASE_IMM: case OP_ATOMIC_LOAD_I2: case OP_ATOMIC_STORE_I2: *size = 2; *sext = TRUE; return LLVMInt16Type (); case OP_LOADU2_MEMBASE: case OP_LOADU2_MEM: case OP_ATOMIC_LOAD_U2: case OP_ATOMIC_STORE_U2: *size = 2; *zext = TRUE; return LLVMInt16Type (); case OP_LOADI4_MEMBASE: case OP_LOADU4_MEMBASE: case OP_LOADI4_MEM: case OP_LOADU4_MEM: case OP_STOREI4_MEMBASE_REG: case OP_STOREI4_MEMBASE_IMM: case OP_ATOMIC_LOAD_I4: case OP_ATOMIC_STORE_I4: case OP_ATOMIC_LOAD_U4: case OP_ATOMIC_STORE_U4: *size = 4; return LLVMInt32Type (); case OP_LOADI8_MEMBASE: case OP_LOADI8_MEM: case OP_STOREI8_MEMBASE_REG: case OP_STOREI8_MEMBASE_IMM: case OP_ATOMIC_LOAD_I8: case OP_ATOMIC_STORE_I8: case OP_ATOMIC_LOAD_U8: case OP_ATOMIC_STORE_U8: *size = 8; return LLVMInt64Type (); case OP_LOADR4_MEMBASE: case OP_STORER4_MEMBASE_REG: case OP_ATOMIC_LOAD_R4: case OP_ATOMIC_STORE_R4: *size = 4; return LLVMFloatType (); case OP_LOADR8_MEMBASE: case OP_STORER8_MEMBASE_REG: case OP_ATOMIC_LOAD_R8: case OP_ATOMIC_STORE_R8: *size = 8; return LLVMDoubleType (); case OP_LOAD_MEMBASE: case OP_LOAD_MEM: case OP_STORE_MEMBASE_REG: case OP_STORE_MEMBASE_IMM: *size = TARGET_SIZEOF_VOID_P; return IntPtrType (); default: g_assert_not_reached (); return NULL; } } /* * ovf_op_to_intrins: * * Return the LLVM intrinsics corresponding to the overflow opcode OPCODE. 
*/ static IntrinsicId ovf_op_to_intrins (int opcode) { switch (opcode) { case OP_IADD_OVF: return INTRINS_SADD_OVF_I32; case OP_IADD_OVF_UN: return INTRINS_UADD_OVF_I32; case OP_ISUB_OVF: return INTRINS_SSUB_OVF_I32; case OP_ISUB_OVF_UN: return INTRINS_USUB_OVF_I32; case OP_IMUL_OVF: return INTRINS_SMUL_OVF_I32; case OP_IMUL_OVF_UN: return INTRINS_UMUL_OVF_I32; case OP_LADD_OVF: return INTRINS_SADD_OVF_I64; case OP_LADD_OVF_UN: return INTRINS_UADD_OVF_I64; case OP_LSUB_OVF: return INTRINS_SSUB_OVF_I64; case OP_LSUB_OVF_UN: return INTRINS_USUB_OVF_I64; case OP_LMUL_OVF: return INTRINS_SMUL_OVF_I64; case OP_LMUL_OVF_UN: return INTRINS_UMUL_OVF_I64; default: g_assert_not_reached (); return (IntrinsicId)0; } } static IntrinsicId simd_ins_to_intrins (int opcode) { switch (opcode) { #if defined(TARGET_X86) || defined(TARGET_AMD64) case OP_CVTPD2DQ: return INTRINS_SSE_CVTPD2DQ; case OP_CVTPS2DQ: return INTRINS_SSE_CVTPS2DQ; case OP_CVTPD2PS: return INTRINS_SSE_CVTPD2PS; case OP_CVTTPD2DQ: return INTRINS_SSE_CVTTPD2DQ; case OP_CVTTPS2DQ: return INTRINS_SSE_CVTTPS2DQ; case OP_SSE_SQRTSS: return INTRINS_SSE_SQRT_SS; case OP_SSE2_SQRTSD: return INTRINS_SSE_SQRT_SD; #endif default: g_assert_not_reached (); return (IntrinsicId)0; } } static LLVMTypeRef simd_op_to_llvm_type (int opcode) { #if defined(TARGET_X86) || defined(TARGET_AMD64) switch (opcode) { case OP_EXTRACT_R8: case OP_EXPAND_R8: return sse_r8_t; case OP_EXTRACT_I8: case OP_EXPAND_I8: return sse_i8_t; case OP_EXTRACT_I4: case OP_EXPAND_I4: return sse_i4_t; case OP_EXTRACT_I2: case OP_EXTRACTX_U2: case OP_EXPAND_I2: return sse_i2_t; case OP_EXTRACT_I1: case OP_EXPAND_I1: return sse_i1_t; case OP_EXTRACT_R4: case OP_EXPAND_R4: return sse_r4_t; case OP_CVTPD2DQ: case OP_CVTPD2PS: case OP_CVTTPD2DQ: return sse_r8_t; case OP_CVTPS2DQ: case OP_CVTTPS2DQ: return sse_r4_t; case OP_SQRTPS: case OP_RSQRTPS: case OP_DUPPS_LOW: case OP_DUPPS_HIGH: return sse_r4_t; case OP_SQRTPD: case OP_DUPPD: return sse_r8_t; default: g_assert_not_reached (); return NULL; } #else return NULL; #endif } static void set_cold_cconv (LLVMValueRef func) { /* * xcode10 (watchOS) and ARM/ARM64 doesn't seem to support preserveall, it fails with: * fatal error: error in backend: Unsupported calling convention */ #if !defined(TARGET_WATCHOS) && !defined(TARGET_ARM) && !defined(TARGET_ARM64) LLVMSetFunctionCallConv (func, LLVMColdCallConv); #endif } static void set_call_cold_cconv (LLVMValueRef func) { #if !defined(TARGET_WATCHOS) && !defined(TARGET_ARM) && !defined(TARGET_ARM64) LLVMSetInstructionCallConv (func, LLVMColdCallConv); #endif } /* * get_bb: * * Return the LLVM basic block corresponding to BB. 
*/ static LLVMBasicBlockRef get_bb (EmitContext *ctx, MonoBasicBlock *bb) { char bb_name_buf [128]; char *bb_name; if (ctx->bblocks [bb->block_num].bblock == NULL) { if (bb->flags & BB_EXCEPTION_HANDLER) { int clause_index = (mono_get_block_region_notry (ctx->cfg, bb->region) >> 8) - 1; sprintf (bb_name_buf, "EH_CLAUSE%d_BB%d", clause_index, bb->block_num); bb_name = bb_name_buf; } else if (bb->block_num < 256) { if (!ctx->module->bb_names) { ctx->module->bb_names_len = 256; ctx->module->bb_names = g_new0 (char*, ctx->module->bb_names_len); } if (!ctx->module->bb_names [bb->block_num]) { char *n; n = g_strdup_printf ("BB%d", bb->block_num); mono_memory_barrier (); ctx->module->bb_names [bb->block_num] = n; } bb_name = ctx->module->bb_names [bb->block_num]; } else { sprintf (bb_name_buf, "BB%d", bb->block_num); bb_name = bb_name_buf; } ctx->bblocks [bb->block_num].bblock = LLVMAppendBasicBlock (ctx->lmethod, bb_name); ctx->bblocks [bb->block_num].end_bblock = ctx->bblocks [bb->block_num].bblock; } return ctx->bblocks [bb->block_num].bblock; } /* * get_end_bb: * * Return the last LLVM bblock corresponding to BB. * This might not be equal to the bb returned by get_bb () since we need to generate * multiple LLVM bblocks for a mono bblock to handle throwing exceptions. */ static LLVMBasicBlockRef get_end_bb (EmitContext *ctx, MonoBasicBlock *bb) { get_bb (ctx, bb); return ctx->bblocks [bb->block_num].end_bblock; } static LLVMBasicBlockRef gen_bb (EmitContext *ctx, const char *prefix) { char bb_name [128]; sprintf (bb_name, "%s%d", prefix, ++ ctx->ex_index); return LLVMAppendBasicBlock (ctx->lmethod, bb_name); } /* * resolve_patch: * * Return the target of the patch identified by TYPE and TARGET. */ static gpointer resolve_patch (MonoCompile *cfg, MonoJumpInfoType type, gconstpointer target) { MonoJumpInfo ji; ERROR_DECL (error); gpointer res; memset (&ji, 0, sizeof (ji)); ji.type = type; ji.data.target = target; res = mono_resolve_patch_target (cfg->method, NULL, &ji, FALSE, error); mono_error_assert_ok (error); return res; } /* * convert_full: * * Emit code to convert the LLVM value V to DTYPE. */ static LLVMValueRef convert_full (EmitContext *ctx, LLVMValueRef v, LLVMTypeRef dtype, gboolean is_unsigned) { LLVMTypeRef stype = LLVMTypeOf (v); if (stype != dtype) { gboolean ext = FALSE; /* Extend */ if (dtype == LLVMInt64Type () && (stype == LLVMInt32Type () || stype == LLVMInt16Type () || stype == LLVMInt8Type ())) ext = TRUE; else if (dtype == LLVMInt32Type () && (stype == LLVMInt16Type () || stype == LLVMInt8Type ())) ext = TRUE; else if (dtype == LLVMInt16Type () && (stype == LLVMInt8Type ())) ext = TRUE; if (ext) return is_unsigned ? 
LLVMBuildZExt (ctx->builder, v, dtype, "") : LLVMBuildSExt (ctx->builder, v, dtype, ""); if (dtype == LLVMDoubleType () && stype == LLVMFloatType ()) return LLVMBuildFPExt (ctx->builder, v, dtype, ""); /* Trunc */ if (stype == LLVMInt64Type () && (dtype == LLVMInt32Type () || dtype == LLVMInt16Type () || dtype == LLVMInt8Type ())) return LLVMBuildTrunc (ctx->builder, v, dtype, ""); if (stype == LLVMInt32Type () && (dtype == LLVMInt16Type () || dtype == LLVMInt8Type ())) return LLVMBuildTrunc (ctx->builder, v, dtype, ""); if (stype == LLVMInt16Type () && dtype == LLVMInt8Type ()) return LLVMBuildTrunc (ctx->builder, v, dtype, ""); if (stype == LLVMDoubleType () && dtype == LLVMFloatType ()) return LLVMBuildFPTrunc (ctx->builder, v, dtype, ""); if (LLVMGetTypeKind (stype) == LLVMPointerTypeKind && LLVMGetTypeKind (dtype) == LLVMPointerTypeKind) return LLVMBuildBitCast (ctx->builder, v, dtype, ""); if (LLVMGetTypeKind (dtype) == LLVMPointerTypeKind) return LLVMBuildIntToPtr (ctx->builder, v, dtype, ""); if (LLVMGetTypeKind (stype) == LLVMPointerTypeKind) return LLVMBuildPtrToInt (ctx->builder, v, dtype, ""); if (mono_arch_is_soft_float ()) { if (stype == LLVMInt32Type () && dtype == LLVMFloatType ()) return LLVMBuildBitCast (ctx->builder, v, dtype, ""); if (stype == LLVMInt32Type () && dtype == LLVMDoubleType ()) return LLVMBuildBitCast (ctx->builder, LLVMBuildZExt (ctx->builder, v, LLVMInt64Type (), ""), dtype, ""); } if (LLVMGetTypeKind (stype) == LLVMVectorTypeKind && LLVMGetTypeKind (dtype) == LLVMVectorTypeKind) { if (mono_llvm_get_prim_size_bits (stype) == mono_llvm_get_prim_size_bits (dtype)) return LLVMBuildBitCast (ctx->builder, v, dtype, ""); } mono_llvm_dump_value (v); mono_llvm_dump_type (dtype); printf ("\n"); g_assert_not_reached (); return NULL; } else { return v; } } static LLVMValueRef convert (EmitContext *ctx, LLVMValueRef v, LLVMTypeRef dtype) { return convert_full (ctx, v, dtype, FALSE); } static void emit_memset (EmitContext *ctx, LLVMBuilderRef builder, LLVMValueRef v, LLVMValueRef size, int alignment) { LLVMValueRef args [5]; int aindex = 0; args [aindex ++] = v; args [aindex ++] = LLVMConstInt (LLVMInt8Type (), 0, FALSE); args [aindex ++] = size; args [aindex ++] = LLVMConstInt (LLVMInt1Type (), 0, FALSE); LLVMBuildCall (builder, get_intrins (ctx, INTRINS_MEMSET), args, aindex, ""); } /* * emit_volatile_load: * * If vreg is volatile, emit a load from its address. */ static LLVMValueRef emit_volatile_load (EmitContext *ctx, int vreg) { MonoType *t; LLVMValueRef v; // On arm64, we pass the rgctx in a callee saved // register (x15), and llvm might keep the value in that register // even though the register is marked as 'reserved' inside llvm. v = mono_llvm_build_load (ctx->builder, ctx->addresses [vreg], "", TRUE); t = ctx->vreg_cli_types [vreg]; if (t && !m_type_is_byref (t)) { /* * Might have to zero extend since llvm doesn't have * unsigned types. */ if (t->type == MONO_TYPE_U1 || t->type == MONO_TYPE_U2 || t->type == MONO_TYPE_CHAR || t->type == MONO_TYPE_BOOLEAN) v = LLVMBuildZExt (ctx->builder, v, LLVMInt32Type (), ""); else if (t->type == MONO_TYPE_I1 || t->type == MONO_TYPE_I2) v = LLVMBuildSExt (ctx->builder, v, LLVMInt32Type (), ""); else if (t->type == MONO_TYPE_U8) v = LLVMBuildZExt (ctx->builder, v, LLVMInt64Type (), ""); } return v; } /* * emit_volatile_store: * * If VREG is volatile, emit a store from its value to its address. 
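* On wasm the store itself is emitted volatile so the compiler cannot move it (see the TARGET_WASM branch below).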
*/ static void emit_volatile_store (EmitContext *ctx, int vreg) { MonoInst *var = get_vreg_to_inst (ctx->cfg, vreg); if (var && var->flags & (MONO_INST_VOLATILE|MONO_INST_INDIRECT)) { g_assert (ctx->addresses [vreg]); #ifdef TARGET_WASM /* Need volatile stores otherwise the compiler might move them */ mono_llvm_build_store (ctx->builder, convert (ctx, ctx->values [vreg], type_to_llvm_type (ctx, var->inst_vtype)), ctx->addresses [vreg], TRUE, LLVM_BARRIER_NONE); #else LLVMBuildStore (ctx->builder, convert (ctx, ctx->values [vreg], type_to_llvm_type (ctx, var->inst_vtype)), ctx->addresses [vreg]); #endif } } static LLVMTypeRef sig_to_llvm_sig_no_cinfo (EmitContext *ctx, MonoMethodSignature *sig) { LLVMTypeRef ret_type; LLVMTypeRef *param_types = NULL; LLVMTypeRef res; int i, pindex; ret_type = type_to_llvm_type (ctx, sig->ret); if (!ctx_ok (ctx)) return NULL; param_types = g_new0 (LLVMTypeRef, (sig->param_count * 8) + 3); pindex = 0; if (sig->hasthis) param_types [pindex ++] = ThisType (); for (i = 0; i < sig->param_count; ++i) param_types [pindex ++] = type_to_llvm_arg_type (ctx, sig->params [i]); if (!ctx_ok (ctx)) { g_free (param_types); return NULL; } res = LLVMFunctionType (ret_type, param_types, pindex, FALSE); g_free (param_types); return res; } /* * sig_to_llvm_sig_full: * * Return the LLVM signature corresponding to the mono signature SIG using the * calling convention information in CINFO. Fill out the parameter mapping information in CINFO. */ static LLVMTypeRef sig_to_llvm_sig_full (EmitContext *ctx, MonoMethodSignature *sig, LLVMCallInfo *cinfo) { LLVMTypeRef ret_type; LLVMTypeRef *param_types = NULL; LLVMTypeRef res; int i, j, pindex, vret_arg_pindex = 0; gboolean vretaddr = FALSE; MonoType *rtype; if (!cinfo) return sig_to_llvm_sig_no_cinfo (ctx, sig); ret_type = type_to_llvm_type (ctx, sig->ret); if (!ctx_ok (ctx)) return NULL; rtype = mini_get_underlying_type (sig->ret); switch (cinfo->ret.storage) { case LLVMArgVtypeInReg: /* LLVM models this by returning an aggregate value */ if (cinfo->ret.pair_storage [0] == LLVMArgInIReg && cinfo->ret.pair_storage [1] == LLVMArgNone) { LLVMTypeRef members [2]; members [0] = IntPtrType (); ret_type = LLVMStructType (members, 1, FALSE); } else if (cinfo->ret.pair_storage [0] == LLVMArgNone && cinfo->ret.pair_storage [1] == LLVMArgNone) { /* Empty struct */ ret_type = LLVMVoidType (); } else if (cinfo->ret.pair_storage [0] == LLVMArgInIReg && cinfo->ret.pair_storage [1] == LLVMArgInIReg) { LLVMTypeRef members [2]; members [0] = IntPtrType (); members [1] = IntPtrType (); ret_type = LLVMStructType (members, 2, FALSE); } else { g_assert_not_reached (); } break; case LLVMArgVtypeByVal: /* Vtype returned normally by val */ break; case LLVMArgVtypeAsScalar: { int size = mono_class_value_size (mono_class_from_mono_type_internal (rtype), NULL); /* LLVM models this by returning an int */ if (size < TARGET_SIZEOF_VOID_P) { g_assert (cinfo->ret.nslots == 1); ret_type = LLVMIntType (size * 8); } else { g_assert (cinfo->ret.nslots == 1 || cinfo->ret.nslots == 2); ret_type = LLVMIntType (cinfo->ret.nslots * sizeof (target_mgreg_t) * 8); } break; } case LLVMArgAsIArgs: ret_type = LLVMArrayType (IntPtrType (), cinfo->ret.nslots); break; case LLVMArgFpStruct: { /* Vtype returned as a fp struct */ LLVMTypeRef members [16]; /* Have to create our own structure since we don't map fp structures to LLVM fp structures yet */ for (i = 0; i < cinfo->ret.nslots; ++i) members [i] = cinfo->ret.esize == 8 ? 
LLVMDoubleType () : LLVMFloatType (); ret_type = LLVMStructType (members, cinfo->ret.nslots, FALSE); break; } case LLVMArgVtypeByRef: /* Vtype returned using a hidden argument */ ret_type = LLVMVoidType (); break; case LLVMArgVtypeRetAddr: case LLVMArgGsharedvtFixed: case LLVMArgGsharedvtFixedVtype: case LLVMArgGsharedvtVariable: vretaddr = TRUE; ret_type = LLVMVoidType (); break; case LLVMArgWasmVtypeAsScalar: g_assert (cinfo->ret.esize); ret_type = LLVMIntType (cinfo->ret.esize * 8); break; default: break; } param_types = g_new0 (LLVMTypeRef, (sig->param_count * 8) + 3); pindex = 0; if (cinfo->ret.storage == LLVMArgVtypeByRef) { /* * Has to be the first argument because of the sret argument attribute * FIXME: This might conflict with passing 'this' as the first argument, but * this is only used on arm64 which has a dedicated struct return register. */ cinfo->vret_arg_pindex = pindex; param_types [pindex] = type_to_llvm_arg_type (ctx, sig->ret); if (!ctx_ok (ctx)) { g_free (param_types); return NULL; } param_types [pindex] = LLVMPointerType (param_types [pindex], 0); pindex ++; } if (!ctx->llvm_only && cinfo->rgctx_arg) { cinfo->rgctx_arg_pindex = pindex; param_types [pindex] = ctx->module->ptr_type; pindex ++; } if (cinfo->imt_arg) { cinfo->imt_arg_pindex = pindex; param_types [pindex] = ctx->module->ptr_type; pindex ++; } if (vretaddr) { /* Compute the index in the LLVM signature where the vret arg needs to be passed */ vret_arg_pindex = pindex; if (cinfo->vret_arg_index == 1) { /* Add the slots consumed by the first argument */ LLVMArgInfo *ainfo = &cinfo->args [0]; switch (ainfo->storage) { case LLVMArgVtypeInReg: for (j = 0; j < 2; ++j) { if (ainfo->pair_storage [j] == LLVMArgInIReg) vret_arg_pindex ++; } break; default: vret_arg_pindex ++; } } cinfo->vret_arg_pindex = vret_arg_pindex; } if (vretaddr && vret_arg_pindex == pindex) param_types [pindex ++] = IntPtrType (); if (sig->hasthis) { cinfo->this_arg_pindex = pindex; param_types [pindex ++] = ThisType (); cinfo->args [0].pindex = cinfo->this_arg_pindex; } if (vretaddr && vret_arg_pindex == pindex) param_types [pindex ++] = IntPtrType (); for (i = 0; i < sig->param_count; ++i) { LLVMArgInfo *ainfo = &cinfo->args [i + sig->hasthis]; if (vretaddr && vret_arg_pindex == pindex) param_types [pindex ++] = IntPtrType (); ainfo->pindex = pindex; switch (ainfo->storage) { case LLVMArgVtypeInReg: for (j = 0; j < 2; ++j) { switch (ainfo->pair_storage [j]) { case LLVMArgInIReg: param_types [pindex ++] = LLVMIntType (TARGET_SIZEOF_VOID_P * 8); break; case LLVMArgNone: break; default: g_assert_not_reached (); } } break; case LLVMArgVtypeByVal: param_types [pindex] = type_to_llvm_arg_type (ctx, ainfo->type); if (!ctx_ok (ctx)) break; param_types [pindex] = LLVMPointerType (param_types [pindex], 0); pindex ++; break; case LLVMArgAsIArgs: if (ainfo->esize == 8) param_types [pindex] = LLVMArrayType (LLVMInt64Type (), ainfo->nslots); else param_types [pindex] = LLVMArrayType (IntPtrType (), ainfo->nslots); pindex ++; break; case LLVMArgVtypeAddr: case LLVMArgVtypeByRef: param_types [pindex] = type_to_llvm_arg_type (ctx, ainfo->type); if (!ctx_ok (ctx)) break; param_types [pindex] = LLVMPointerType (param_types [pindex], 0); pindex ++; break; case LLVMArgAsFpArgs: { int j; /* Emit dummy fp arguments if needed so the rest is passed on the stack */ for (j = 0; j < ainfo->ndummy_fpargs; ++j) param_types [pindex ++] = LLVMDoubleType (); for (j = 0; j < ainfo->nslots; ++j) param_types [pindex ++] = ainfo->esize == 8 ? 
LLVMDoubleType () : LLVMFloatType (); break; } case LLVMArgVtypeAsScalar: g_assert_not_reached (); break; case LLVMArgWasmVtypeAsScalar: g_assert (ainfo->esize); param_types [pindex ++] = LLVMIntType (ainfo->esize * 8); break; case LLVMArgGsharedvtFixed: case LLVMArgGsharedvtFixedVtype: param_types [pindex ++] = LLVMPointerType (type_to_llvm_arg_type (ctx, ainfo->type), 0); break; case LLVMArgGsharedvtVariable: param_types [pindex ++] = LLVMPointerType (IntPtrType (), 0); break; default: param_types [pindex ++] = type_to_llvm_arg_type (ctx, ainfo->type); break; } } if (!ctx_ok (ctx)) { g_free (param_types); return NULL; } if (vretaddr && vret_arg_pindex == pindex) param_types [pindex ++] = IntPtrType (); if (ctx->llvm_only && cinfo->rgctx_arg) { /* Pass the rgctx as the last argument */ cinfo->rgctx_arg_pindex = pindex; param_types [pindex] = ctx->module->ptr_type; pindex ++; } else if (ctx->llvm_only && cinfo->dummy_arg) { /* Pass a dummy arg last */ cinfo->dummy_arg_pindex = pindex; param_types [pindex] = ctx->module->ptr_type; pindex ++; } res = LLVMFunctionType (ret_type, param_types, pindex, FALSE); g_free (param_types); return res; } static LLVMTypeRef sig_to_llvm_sig (EmitContext *ctx, MonoMethodSignature *sig) { return sig_to_llvm_sig_full (ctx, sig, NULL); } /* * LLVMFunctionType0: * * Create an LLVM function type from the arguments. */ static G_GNUC_UNUSED LLVMTypeRef LLVMFunctionType0 (LLVMTypeRef ReturnType, int IsVarArg) { return LLVMFunctionType (ReturnType, NULL, 0, IsVarArg); } /* * LLVMFunctionType1: * * Create an LLVM function type from the arguments. */ static G_GNUC_UNUSED LLVMTypeRef LLVMFunctionType1 (LLVMTypeRef ReturnType, LLVMTypeRef ParamType1, int IsVarArg) { LLVMTypeRef param_types [1]; param_types [0] = ParamType1; return LLVMFunctionType (ReturnType, param_types, 1, IsVarArg); } /* * LLVMFunctionType2: * * Create an LLVM function type from the arguments. */ static G_GNUC_UNUSED LLVMTypeRef LLVMFunctionType2 (LLVMTypeRef ReturnType, LLVMTypeRef ParamType1, LLVMTypeRef ParamType2, int IsVarArg) { LLVMTypeRef param_types [2]; param_types [0] = ParamType1; param_types [1] = ParamType2; return LLVMFunctionType (ReturnType, param_types, 2, IsVarArg); } /* * LLVMFunctionType3: * * Create an LLVM function type from the arguments. 
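* For example, LLVMFunctionType3 (LLVMVoidType (), IntPtrType (), IntPtrType (), LLVMInt32Type (), FALSE) builds a non-vararg "void (iN, iN, i32)" signature.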
*/ static G_GNUC_UNUSED LLVMTypeRef LLVMFunctionType3 (LLVMTypeRef ReturnType, LLVMTypeRef ParamType1, LLVMTypeRef ParamType2, LLVMTypeRef ParamType3, int IsVarArg) { LLVMTypeRef param_types [3]; param_types [0] = ParamType1; param_types [1] = ParamType2; param_types [2] = ParamType3; return LLVMFunctionType (ReturnType, param_types, 3, IsVarArg); } static G_GNUC_UNUSED LLVMTypeRef LLVMFunctionType4 (LLVMTypeRef ReturnType, LLVMTypeRef ParamType1, LLVMTypeRef ParamType2, LLVMTypeRef ParamType3, LLVMTypeRef ParamType4, int IsVarArg) { LLVMTypeRef param_types [4]; param_types [0] = ParamType1; param_types [1] = ParamType2; param_types [2] = ParamType3; param_types [3] = ParamType4; return LLVMFunctionType (ReturnType, param_types, 4, IsVarArg); } static G_GNUC_UNUSED LLVMTypeRef LLVMFunctionType5 (LLVMTypeRef ReturnType, LLVMTypeRef ParamType1, LLVMTypeRef ParamType2, LLVMTypeRef ParamType3, LLVMTypeRef ParamType4, LLVMTypeRef ParamType5, int IsVarArg) { LLVMTypeRef param_types [5]; param_types [0] = ParamType1; param_types [1] = ParamType2; param_types [2] = ParamType3; param_types [3] = ParamType4; param_types [4] = ParamType5; return LLVMFunctionType (ReturnType, param_types, 5, IsVarArg); } /* * create_builder: * * Create an LLVM builder and remember it so it can be freed later. */ static LLVMBuilderRef create_builder (EmitContext *ctx) { LLVMBuilderRef builder = LLVMCreateBuilder (); if (mono_use_fast_math) mono_llvm_set_fast_math (builder); ctx->builders = g_slist_prepend_mempool (ctx->cfg->mempool, ctx->builders, builder); emit_default_dbg_loc (ctx, builder); return builder; } static char* get_aotconst_name (MonoJumpInfoType type, gconstpointer data, int got_offset) { char *name; int len; switch (type) { case MONO_PATCH_INFO_JIT_ICALL_ID: name = g_strdup_printf ("jit_icall_%s", mono_find_jit_icall_info ((MonoJitICallId)(gsize)data)->name); break; case MONO_PATCH_INFO_JIT_ICALL_ADDR_NOCALL: name = g_strdup_printf ("jit_icall_addr_nocall_%s", mono_find_jit_icall_info ((MonoJitICallId)(gsize)data)->name); break; case MONO_PATCH_INFO_RGCTX_SLOT_INDEX: { MonoJumpInfoRgctxEntry *entry = (MonoJumpInfoRgctxEntry*)data; name = g_strdup_printf ("rgctx_slot_index_%s", mono_rgctx_info_type_to_str (entry->info_type)); break; } case MONO_PATCH_INFO_AOT_MODULE: case MONO_PATCH_INFO_GC_SAFE_POINT_FLAG: case MONO_PATCH_INFO_GC_CARD_TABLE_ADDR: case MONO_PATCH_INFO_GC_NURSERY_START: case MONO_PATCH_INFO_GC_NURSERY_BITS: case MONO_PATCH_INFO_INTERRUPTION_REQUEST_FLAG: name = g_strdup_printf ("%s", mono_ji_type_to_string (type)); len = strlen (name); for (int i = 0; i < len; ++i) name [i] = tolower (name [i]); break; default: name = g_strdup_printf ("%s_%d", mono_ji_type_to_string (type), got_offset); len = strlen (name); for (int i = 0; i < len; ++i) name [i] = tolower (name [i]); break; } return name; } static int compute_aot_got_offset (MonoLLVMModule *module, MonoJumpInfo *ji, LLVMTypeRef llvm_type) { guint32 got_offset = mono_aot_get_got_offset (ji); LLVMTypeRef lookup_type = (LLVMTypeRef) g_hash_table_lookup (module->got_idx_to_type, GINT_TO_POINTER (got_offset)); if (!lookup_type) { lookup_type = llvm_type; } else if (llvm_type != lookup_type) { lookup_type = module->ptr_type; } else { return got_offset; } g_hash_table_insert (module->got_idx_to_type, GINT_TO_POINTER (got_offset), lookup_type); return got_offset; } /* Allocate a GOT slot for TYPE/DATA, and emit IR to load it */ static LLVMValueRef get_aotconst_module (MonoLLVMModule *module, LLVMBuilderRef builder, MonoJumpInfoType type, 
gconstpointer data, LLVMTypeRef llvm_type, guint32 *out_got_offset, MonoJumpInfo **out_ji) { guint32 got_offset; LLVMValueRef load; MonoJumpInfo tmp_ji; tmp_ji.type = type; tmp_ji.data.target = data; MonoJumpInfo *ji = mono_aot_patch_info_dup (&tmp_ji); if (out_ji) *out_ji = ji; got_offset = compute_aot_got_offset (module, ji, llvm_type); module->max_got_offset = MAX (module->max_got_offset, got_offset); if (out_got_offset) *out_got_offset = got_offset; if (module->static_link && type == MONO_PATCH_INFO_GC_SAFE_POINT_FLAG) { if (!module->gc_safe_point_flag_var) { const char *symbol = "mono_polling_required"; module->gc_safe_point_flag_var = LLVMAddGlobal (module->lmodule, llvm_type, symbol); LLVMSetLinkage (module->gc_safe_point_flag_var, LLVMExternalLinkage); } return module->gc_safe_point_flag_var; } if (module->static_link && type == MONO_PATCH_INFO_INTERRUPTION_REQUEST_FLAG) { if (!module->interrupt_flag_var) { const char *symbol = "mono_thread_interruption_request_flag"; module->interrupt_flag_var = LLVMAddGlobal (module->lmodule, llvm_type, symbol); LLVMSetLinkage (module->interrupt_flag_var, LLVMExternalLinkage); } return module->interrupt_flag_var; } LLVMValueRef const_var = (LLVMValueRef)g_hash_table_lookup (module->aotconst_vars, GINT_TO_POINTER (got_offset)); if (!const_var) { LLVMTypeRef type = llvm_type; // FIXME: char *name = get_aotconst_name (ji->type, ji->data.target, got_offset); char *symbol = g_strdup_printf ("aotconst_%s", name); g_free (name); LLVMValueRef v = LLVMAddGlobal (module->lmodule, type, symbol); LLVMSetVisibility (v, LLVMHiddenVisibility); LLVMSetLinkage (v, LLVMInternalLinkage); LLVMSetInitializer (v, LLVMConstNull (type)); // FIXME: LLVMSetAlignment (v, 8); g_hash_table_insert (module->aotconst_vars, GINT_TO_POINTER (got_offset), v); const_var = v; } load = LLVMBuildLoad (builder, const_var, ""); if (mono_aot_is_shared_got_offset (got_offset)) set_invariant_load_flag (load); if (type == MONO_PATCH_INFO_LDSTR) set_nonnull_load_flag (load); load = LLVMBuildBitCast (builder, load, llvm_type, ""); return load; } static LLVMValueRef get_aotconst (EmitContext *ctx, MonoJumpInfoType type, gconstpointer data, LLVMTypeRef llvm_type) { MonoCompile *cfg; guint32 got_offset; MonoJumpInfo *ji; LLVMValueRef load; cfg = ctx->cfg; load = get_aotconst_module (ctx->module, ctx->builder, type, data, llvm_type, &got_offset, &ji); ji->next = cfg->patch_info; cfg->patch_info = ji; /* * If the got slot is shared, it means it's initialized when the aot image is loaded, so we don't need to * explicitly initialize it. 
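* (Non-shared slots bump cfg->got_access_count below.)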
*/ if (!mono_aot_is_shared_got_offset (got_offset)) { //mono_print_ji (ji); //printf ("\n"); ctx->cfg->got_access_count ++; } return load; } static LLVMValueRef get_dummy_aotconst (EmitContext *ctx, LLVMTypeRef llvm_type) { LLVMValueRef indexes [2]; LLVMValueRef got_entry_addr, load; LLVMBuilderRef builder = ctx->builder; indexes [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE); indexes [1] = LLVMConstInt (LLVMInt32Type (), 0, FALSE); got_entry_addr = LLVMBuildGEP (builder, ctx->module->dummy_got_var, indexes, 2, ""); load = LLVMBuildLoad (builder, got_entry_addr, ""); load = convert (ctx, load, llvm_type); return load; } typedef struct { MonoJumpInfo *ji; MonoMethod *method; LLVMValueRef load; LLVMTypeRef type; LLVMValueRef lmethod; } CallSite; static LLVMValueRef get_callee_llvmonly (EmitContext *ctx, LLVMTypeRef llvm_sig, MonoJumpInfoType type, gconstpointer data) { LLVMValueRef callee; char *callee_name = NULL; if (ctx->module->static_link && ctx->module->assembly->image != mono_get_corlib ()) { if (type == MONO_PATCH_INFO_JIT_ICALL_ID) { MonoJitICallInfo * const info = mono_find_jit_icall_info ((MonoJitICallId)(gsize)data); g_assert (info); if (info->func != info->wrapper) { type = MONO_PATCH_INFO_METHOD; data = mono_icall_get_wrapper_method (info); callee_name = mono_aot_get_mangled_method_name ((MonoMethod*)data); } } else if (type == MONO_PATCH_INFO_METHOD) { MonoMethod *method = (MonoMethod*)data; if (m_class_get_image (method->klass) != ctx->module->assembly->image && mono_aot_is_externally_callable (method)) callee_name = mono_aot_get_mangled_method_name (method); } } if (!callee_name) callee_name = mono_aot_get_direct_call_symbol (type, data); if (callee_name) { /* Directly callable */ // FIXME: Locking callee = (LLVMValueRef)g_hash_table_lookup (ctx->module->direct_callables, callee_name); if (!callee) { callee = LLVMAddFunction (ctx->lmodule, callee_name, llvm_sig); LLVMSetVisibility (callee, LLVMHiddenVisibility); g_hash_table_insert (ctx->module->direct_callables, (char*)callee_name, callee); } else { /* LLVMTypeRef's are uniqued */ if (LLVMGetElementType (LLVMTypeOf (callee)) != llvm_sig) return LLVMConstBitCast (callee, LLVMPointerType (llvm_sig, 0)); g_free (callee_name); } return callee; } /* * Change references to icalls/pinvokes/jit icalls to their wrappers when in corlib, so * they can be called directly. */ if (ctx->module->assembly->image == mono_get_corlib () && type == MONO_PATCH_INFO_JIT_ICALL_ID) { MonoJitICallInfo * const info = mono_find_jit_icall_info ((MonoJitICallId)(gsize)data); if (info->func != info->wrapper) { type = MONO_PATCH_INFO_METHOD; data = mono_icall_get_wrapper_method (info); } } if (ctx->module->assembly->image == mono_get_corlib () && type == MONO_PATCH_INFO_METHOD) { MonoMethod *method = (MonoMethod*)data; if (m_method_is_icall (method) || m_method_is_pinvoke (method)) data = mono_marshal_get_native_wrapper (method, TRUE, TRUE); } /* * Instead of emitting an indirect call through a got slot, emit a placeholder, and * replace it with a direct call or an indirect call in mono_llvm_fixup_aot_module () * after all methods have been emitted. 
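* Each placeholder is recorded as a CallSite (method, ji, load, lmethod) on ctx->callsite_list.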
*/ if (type == MONO_PATCH_INFO_METHOD) { MonoMethod *method = (MonoMethod*)data; if (m_class_get_image (method->klass)->assembly == ctx->module->assembly) { MonoJumpInfo tmp_ji; tmp_ji.type = type; tmp_ji.data.target = method; MonoJumpInfo *ji = mono_aot_patch_info_dup (&tmp_ji); ji->next = ctx->cfg->patch_info; ctx->cfg->patch_info = ji; LLVMTypeRef llvm_type = LLVMPointerType (llvm_sig, 0); ctx->cfg->got_access_count ++; CallSite *info = g_new0 (CallSite, 1); info->method = method; info->ji = ji; info->type = llvm_type; /* * Emit a dummy load to represent the callee, and either replace it with * a reference to the llvm method for the callee, or from a load from the * GOT. */ LLVMValueRef load = get_dummy_aotconst (ctx, llvm_type); info->load = load; info->lmethod = ctx->lmethod; g_ptr_array_add (ctx->callsite_list, info); return load; } } /* * All other calls are made through the GOT. */ callee = get_aotconst (ctx, type, data, LLVMPointerType (llvm_sig, 0)); return callee; } /* * get_callee: * * Return an llvm value representing the callee given by the arguments. */ static LLVMValueRef get_callee (EmitContext *ctx, LLVMTypeRef llvm_sig, MonoJumpInfoType type, gconstpointer data) { LLVMValueRef callee; char *callee_name; MonoJumpInfo *ji = NULL; if (ctx->llvm_only) return get_callee_llvmonly (ctx, llvm_sig, type, data); callee_name = NULL; /* Cross-assembly direct calls */ if (type == MONO_PATCH_INFO_METHOD) { MonoMethod *cmethod = (MonoMethod*)data; if (m_class_get_image (cmethod->klass) != ctx->module->assembly->image) { MonoJumpInfo tmp_ji; memset (&tmp_ji, 0, sizeof (MonoJumpInfo)); tmp_ji.type = type; tmp_ji.data.target = data; if (mono_aot_is_direct_callable (&tmp_ji)) { /* * This will add a reference to cmethod's image so it will * be loaded when the current AOT image is loaded, so * the GOT slots used by the init method code are initialized. 
*/ tmp_ji.type = MONO_PATCH_INFO_IMAGE; tmp_ji.data.image = m_class_get_image (cmethod->klass); ji = mono_aot_patch_info_dup (&tmp_ji); mono_aot_get_got_offset (ji); callee_name = mono_aot_get_mangled_method_name (cmethod); callee = (LLVMValueRef)g_hash_table_lookup (ctx->module->direct_callables, callee_name); if (!callee) { callee = LLVMAddFunction (ctx->lmodule, callee_name, llvm_sig); LLVMSetLinkage (callee, LLVMExternalLinkage); g_hash_table_insert (ctx->module->direct_callables, callee_name, callee); } else { /* LLVMTypeRef's are uniqued */ if (LLVMGetElementType (LLVMTypeOf (callee)) != llvm_sig) callee = LLVMConstBitCast (callee, LLVMPointerType (llvm_sig, 0)); g_free (callee_name); } return callee; } } } callee_name = mono_aot_get_plt_symbol (type, data); if (!callee_name) return NULL; if (ctx->cfg->compile_aot) /* Add a patch so referenced wrappers can be compiled in full aot mode */ mono_add_patch_info (ctx->cfg, 0, type, data); // FIXME: Locking callee = (LLVMValueRef)g_hash_table_lookup (ctx->module->plt_entries, callee_name); if (!callee) { callee = LLVMAddFunction (ctx->lmodule, callee_name, llvm_sig); LLVMSetVisibility (callee, LLVMHiddenVisibility); g_hash_table_insert (ctx->module->plt_entries, (char*)callee_name, callee); } if (ctx->cfg->compile_aot) { ji = g_new0 (MonoJumpInfo, 1); ji->type = type; ji->data.target = data; g_hash_table_insert (ctx->module->plt_entries_ji, ji, callee); } return callee; } static LLVMValueRef get_jit_callee (EmitContext *ctx, const char *name, LLVMTypeRef llvm_sig, MonoJumpInfoType type, gconstpointer data) { gpointer target; // This won't be patched so compile the wrapper immediately if (type == MONO_PATCH_INFO_JIT_ICALL_ID) { MonoJitICallInfo * const info = mono_find_jit_icall_info ((MonoJitICallId)(gsize)data); target = (gpointer)mono_icall_get_wrapper_full (info, TRUE); } else { target = resolve_patch (ctx->cfg, type, data); } LLVMValueRef tramp_var = LLVMAddGlobal (ctx->lmodule, LLVMPointerType (llvm_sig, 0), name); LLVMSetInitializer (tramp_var, LLVMConstIntToPtr (LLVMConstInt (LLVMInt64Type (), (guint64)(size_t)target, FALSE), LLVMPointerType (llvm_sig, 0))); LLVMSetLinkage (tramp_var, LLVMExternalLinkage); LLVMValueRef callee = LLVMBuildLoad (ctx->builder, tramp_var, ""); return callee; } static int get_handler_clause (MonoCompile *cfg, MonoBasicBlock *bb) { MonoMethodHeader *header = cfg->header; MonoExceptionClause *clause; int i; /* Directly */ if (bb->region != -1 && MONO_BBLOCK_IS_IN_REGION (bb, MONO_REGION_TRY)) return (bb->region >> 8) - 1; /* Indirectly */ for (i = 0; i < header->num_clauses; ++i) { clause = &header->clauses [i]; if (MONO_OFFSET_IN_CLAUSE (clause, bb->real_offset) && clause->flags == MONO_EXCEPTION_CLAUSE_NONE) return i; } return -1; } static MonoExceptionClause * get_most_deep_clause (MonoCompile *cfg, EmitContext *ctx, MonoBasicBlock *bb) { if (bb == cfg->bb_init) return NULL; // Since they're sorted by nesting we just need // the first one that the bb is a member of for (int i = 0; i < cfg->header->num_clauses; i++) { MonoExceptionClause *curr = &cfg->header->clauses [i]; if (MONO_OFFSET_IN_CLAUSE (curr, bb->real_offset)) return curr; } return NULL; } static void set_metadata_flag (LLVMValueRef v, const char *flag_name) { LLVMValueRef md_arg; int md_kind; md_kind = LLVMGetMDKindID (flag_name, strlen (flag_name)); md_arg = LLVMMDString ("mono", 4); LLVMSetMetadata (v, md_kind, LLVMMDNode (&md_arg, 1)); } static void set_nonnull_load_flag (LLVMValueRef v) { LLVMValueRef md_arg; int md_kind; const char 
*flag_name; flag_name = "nonnull"; md_kind = LLVMGetMDKindID (flag_name, strlen (flag_name)); md_arg = LLVMMDString ("<index>", strlen ("<index>")); LLVMSetMetadata (v, md_kind, LLVMMDNode (&md_arg, 1)); } static void set_nontemporal_flag (LLVMValueRef v) { LLVMValueRef md_arg; int md_kind; const char *flag_name; // FIXME: Cache this flag_name = "nontemporal"; md_kind = LLVMGetMDKindID (flag_name, strlen (flag_name)); md_arg = const_int32 (1); LLVMSetMetadata (v, md_kind, LLVMMDNode (&md_arg, 1)); } static void set_invariant_load_flag (LLVMValueRef v) { LLVMValueRef md_arg; int md_kind; const char *flag_name; // FIXME: Cache this flag_name = "invariant.load"; md_kind = LLVMGetMDKindID (flag_name, strlen (flag_name)); md_arg = LLVMMDString ("<index>", strlen ("<index>")); LLVMSetMetadata (v, md_kind, LLVMMDNode (&md_arg, 1)); } /* * emit_call: * * Emit an LLVM call or invoke instruction depending on whether the call is inside * a try region. */ static LLVMValueRef emit_call (EmitContext *ctx, MonoBasicBlock *bb, LLVMBuilderRef *builder_ref, LLVMValueRef callee, LLVMValueRef *args, int pindex) { MonoCompile *cfg = ctx->cfg; LLVMValueRef lcall = NULL; LLVMBuilderRef builder = *builder_ref; MonoExceptionClause *clause; if (ctx->llvm_only) { clause = bb ? get_most_deep_clause (cfg, ctx, bb) : NULL; // FIXME: Use an invoke only for calls inside try-catch blocks if (clause && (!cfg->deopt || ctx->has_catch)) { /* * Have to use an invoke instead of a call, branching to the * handler bblock of the clause containing this bblock. */ intptr_t key = CLAUSE_END (clause); LLVMBasicBlockRef lpad_bb = (LLVMBasicBlockRef)g_hash_table_lookup (ctx->exc_meta, (gconstpointer)key); // FIXME: Find the one that has the lowest end bound for the right start address // FIXME: Finally + nesting if (lpad_bb) { LLVMBasicBlockRef noex_bb = gen_bb (ctx, "CALL_NOEX_BB"); /* Use an invoke */ lcall = LLVMBuildInvoke (builder, callee, args, pindex, noex_bb, lpad_bb, ""); builder = ctx->builder = create_builder (ctx); LLVMPositionBuilderAtEnd (ctx->builder, noex_bb); ctx->bblocks [bb->block_num].end_bblock = noex_bb; } } } else { int clause_index = get_handler_clause (cfg, bb); if (clause_index != -1) { MonoMethodHeader *header = cfg->header; MonoExceptionClause *ec = &header->clauses [clause_index]; MonoBasicBlock *tblock; LLVMBasicBlockRef ex_bb, noex_bb; /* * Have to use an invoke instead of a call, branching to the * handler bblock of the clause containing this bblock. 
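* The invoke's exceptional edge targets the handler's LLVM bblock (ex_bb, via get_bb below); normal control flow resumes in noex_bb.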
*/ g_assert (ec->flags == MONO_EXCEPTION_CLAUSE_NONE || ec->flags == MONO_EXCEPTION_CLAUSE_FINALLY || ec->flags == MONO_EXCEPTION_CLAUSE_FAULT); tblock = cfg->cil_offset_to_bb [ec->handler_offset]; g_assert (tblock); ctx->bblocks [tblock->block_num].invoke_target = TRUE; ex_bb = get_bb (ctx, tblock); noex_bb = gen_bb (ctx, "NOEX_BB"); /* Use an invoke */ lcall = LLVMBuildInvoke (builder, callee, args, pindex, noex_bb, ex_bb, ""); builder = ctx->builder = create_builder (ctx); LLVMPositionBuilderAtEnd (ctx->builder, noex_bb); ctx->bblocks [bb->block_num].end_bblock = noex_bb; } } if (!lcall) { lcall = LLVMBuildCall (builder, callee, args, pindex, ""); ctx->builder = builder; } if (builder_ref) *builder_ref = ctx->builder; return lcall; } static LLVMValueRef emit_load (EmitContext *ctx, MonoBasicBlock *bb, LLVMBuilderRef *builder_ref, int size, LLVMValueRef addr, LLVMValueRef base, const char *name, gboolean is_faulting, gboolean is_volatile, BarrierKind barrier) { LLVMValueRef res; /* * We emit volatile loads for loads which can fault, because otherwise * LLVM will generate invalid code when encountering a load from a * NULL address. */ if (barrier != LLVM_BARRIER_NONE) res = mono_llvm_build_atomic_load (*builder_ref, addr, name, is_volatile, size, barrier); else res = mono_llvm_build_load (*builder_ref, addr, name, is_volatile); return res; } static void emit_store_general (EmitContext *ctx, MonoBasicBlock *bb, LLVMBuilderRef *builder_ref, int size, LLVMValueRef value, LLVMValueRef addr, LLVMValueRef base, gboolean is_faulting, gboolean is_volatile, BarrierKind barrier) { if (barrier != LLVM_BARRIER_NONE) mono_llvm_build_aligned_store (*builder_ref, value, addr, barrier, size); else mono_llvm_build_store (*builder_ref, value, addr, is_volatile, barrier); } static void emit_store (EmitContext *ctx, MonoBasicBlock *bb, LLVMBuilderRef *builder_ref, int size, LLVMValueRef value, LLVMValueRef addr, LLVMValueRef base, gboolean is_faulting, gboolean is_volatile) { emit_store_general (ctx, bb, builder_ref, size, value, addr, base, is_faulting, is_volatile, LLVM_BARRIER_NONE); } /* * emit_cond_system_exception: * * Emit code to throw the exception EXC_TYPE if the condition CMP is true. * Might set the ctx exception. 
*/ static void emit_cond_system_exception (EmitContext *ctx, MonoBasicBlock *bb, const char *exc_type, LLVMValueRef cmp, gboolean force_explicit) { LLVMBasicBlockRef ex_bb, ex2_bb = NULL, noex_bb; LLVMBuilderRef builder; MonoClass *exc_class; LLVMValueRef args [2]; LLVMValueRef callee; gboolean no_pc = FALSE; static MonoClass *exc_classes [MONO_EXC_INTRINS_NUM]; if (IS_TARGET_AMD64) /* Some platforms don't require the pc argument */ no_pc = TRUE; int exc_id = mini_exception_id_by_name (exc_type); if (!exc_classes [exc_id]) exc_classes [exc_id] = mono_class_load_from_name (mono_get_corlib (), "System", exc_type); exc_class = exc_classes [exc_id]; ex_bb = gen_bb (ctx, "EX_BB"); if (ctx->llvm_only) ex2_bb = gen_bb (ctx, "EX2_BB"); noex_bb = gen_bb (ctx, "NOEX_BB"); LLVMValueRef branch = LLVMBuildCondBr (ctx->builder, cmp, ex_bb, noex_bb); if (exc_id == MONO_EXC_NULL_REF && !ctx->cfg->disable_llvm_implicit_null_checks && !force_explicit) { mono_llvm_set_implicit_branch (ctx->builder, branch); } /* Emit exception throwing code */ ctx->builder = builder = create_builder (ctx); LLVMPositionBuilderAtEnd (builder, ex_bb); if (ctx->cfg->llvm_only) { LLVMBuildBr (builder, ex2_bb); ctx->builder = builder = create_builder (ctx); LLVMPositionBuilderAtEnd (ctx->builder, ex2_bb); if (exc_id == MONO_EXC_NULL_REF) { static LLVMTypeRef sig; if (!sig) sig = LLVMFunctionType0 (LLVMVoidType (), FALSE); /* Can't cache this */ callee = get_callee (ctx, sig, MONO_PATCH_INFO_JIT_ICALL_ADDR, GUINT_TO_POINTER (MONO_JIT_ICALL_mini_llvmonly_throw_nullref_exception)); emit_call (ctx, bb, &builder, callee, NULL, 0); } else { static LLVMTypeRef sig; if (!sig) sig = LLVMFunctionType1 (LLVMVoidType (), LLVMInt32Type (), FALSE); callee = get_callee (ctx, sig, MONO_PATCH_INFO_JIT_ICALL_ADDR, GUINT_TO_POINTER (MONO_JIT_ICALL_mini_llvmonly_throw_corlib_exception)); args [0] = LLVMConstInt (LLVMInt32Type (), m_class_get_type_token (exc_class) - MONO_TOKEN_TYPE_DEF, FALSE); emit_call (ctx, bb, &builder, callee, args, 1); } LLVMBuildUnreachable (builder); ctx->builder = builder = create_builder (ctx); LLVMPositionBuilderAtEnd (ctx->builder, noex_bb); ctx->bblocks [bb->block_num].end_bblock = noex_bb; ctx->ex_index ++; return; } callee = ctx->module->throw_corlib_exception; if (!callee) { LLVMTypeRef sig; if (no_pc) sig = LLVMFunctionType1 (LLVMVoidType (), LLVMInt32Type (), FALSE); else sig = LLVMFunctionType2 (LLVMVoidType (), LLVMInt32Type (), LLVMPointerType (LLVMInt8Type (), 0), FALSE); const MonoJitICallId icall_id = MONO_JIT_ICALL_mono_llvm_throw_corlib_exception_abs_trampoline; if (ctx->cfg->compile_aot) { callee = get_callee (ctx, sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (icall_id)); } else { /* * Differences between the LLVM/non-LLVM throw corlib exception trampoline: * - On x86, LLVM generated code doesn't push the arguments * - The trampoline takes the throw address as an argument, not a pc offset. */ callee = get_jit_callee (ctx, "llvm_throw_corlib_exception_trampoline", sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (icall_id)); /* * Make sure that ex_bb starts with the invoke, so the block address points to it, and not to the load * added by get_jit_callee (). 
*/ ex2_bb = gen_bb (ctx, "EX2_BB"); LLVMBuildBr (builder, ex2_bb); ex_bb = ex2_bb; ctx->builder = builder = create_builder (ctx); LLVMPositionBuilderAtEnd (ctx->builder, ex2_bb); } } args [0] = LLVMConstInt (LLVMInt32Type (), m_class_get_type_token (exc_class) - MONO_TOKEN_TYPE_DEF, FALSE); /* * The LLVM mono branch contains changes so a block address can be passed as an * argument to a call. */ if (no_pc) { emit_call (ctx, bb, &builder, callee, args, 1); } else { args [1] = LLVMBlockAddress (ctx->lmethod, ex_bb); emit_call (ctx, bb, &builder, callee, args, 2); } LLVMBuildUnreachable (builder); ctx->builder = builder = create_builder (ctx); LLVMPositionBuilderAtEnd (ctx->builder, noex_bb); ctx->bblocks [bb->block_num].end_bblock = noex_bb; ctx->ex_index ++; return; } /* * emit_args_to_vtype: * * Emit code to store the vtype in the arguments args to the address ADDRESS. */ static void emit_args_to_vtype (EmitContext *ctx, LLVMBuilderRef builder, MonoType *t, LLVMValueRef address, LLVMArgInfo *ainfo, LLVMValueRef *args) { int j, size, nslots; MonoClass *klass; t = mini_get_underlying_type (t); klass = mono_class_from_mono_type_internal (t); size = mono_class_value_size (klass, NULL); if (MONO_CLASS_IS_SIMD (ctx->cfg, klass)) address = LLVMBuildBitCast (ctx->builder, address, LLVMPointerType (LLVMInt8Type (), 0), ""); if (ainfo->storage == LLVMArgAsFpArgs) nslots = ainfo->nslots; else nslots = 2; for (j = 0; j < nslots; ++j) { LLVMValueRef index [2], addr, daddr; int part_size = size > TARGET_SIZEOF_VOID_P ? TARGET_SIZEOF_VOID_P : size; LLVMTypeRef part_type; while (part_size != 1 && part_size != 2 && part_size != 4 && part_size < 8) part_size ++; if (ainfo->pair_storage [j] == LLVMArgNone) continue; switch (ainfo->pair_storage [j]) { case LLVMArgInIReg: { part_type = LLVMIntType (part_size * 8); if (MONO_CLASS_IS_SIMD (ctx->cfg, klass)) { index [0] = LLVMConstInt (LLVMInt32Type (), j * TARGET_SIZEOF_VOID_P, FALSE); addr = LLVMBuildGEP (builder, address, index, 1, ""); } else { daddr = LLVMBuildBitCast (ctx->builder, address, LLVMPointerType (IntPtrType (), 0), ""); index [0] = LLVMConstInt (LLVMInt32Type (), j, FALSE); addr = LLVMBuildGEP (builder, daddr, index, 1, ""); } LLVMBuildStore (builder, convert (ctx, args [j], part_type), LLVMBuildBitCast (ctx->builder, addr, LLVMPointerType (part_type, 0), "")); break; } case LLVMArgInFPReg: { LLVMTypeRef arg_type; if (ainfo->esize == 8) arg_type = LLVMDoubleType (); else arg_type = LLVMFloatType (); index [0] = LLVMConstInt (LLVMInt32Type (), j, FALSE); daddr = LLVMBuildBitCast (ctx->builder, address, LLVMPointerType (arg_type, 0), ""); addr = LLVMBuildGEP (builder, daddr, index, 1, ""); LLVMBuildStore (builder, args [j], addr); break; } case LLVMArgNone: break; default: g_assert_not_reached (); } size -= TARGET_SIZEOF_VOID_P; } } /* * emit_vtype_to_args: * * Emit code to load a vtype at address ADDRESS into scalar arguments. Store the arguments * into ARGS, and the number of arguments into NARGS. 
*/ static void emit_vtype_to_args (EmitContext *ctx, LLVMBuilderRef builder, MonoType *t, LLVMValueRef address, LLVMArgInfo *ainfo, LLVMValueRef *args, guint32 *nargs) { int pindex = 0; int j, nslots; LLVMTypeRef arg_type; t = mini_get_underlying_type (t); int32_t size = get_vtype_size_align (t).size; if (MONO_CLASS_IS_SIMD (ctx->cfg, mono_class_from_mono_type_internal (t))) address = LLVMBuildBitCast (ctx->builder, address, LLVMPointerType (LLVMInt8Type (), 0), ""); if (ainfo->storage == LLVMArgAsFpArgs) nslots = ainfo->nslots; else nslots = 2; for (j = 0; j < nslots; ++j) { LLVMValueRef index [2], addr, daddr; int partsize = size > TARGET_SIZEOF_VOID_P ? TARGET_SIZEOF_VOID_P : size; if (ainfo->pair_storage [j] == LLVMArgNone) continue; switch (ainfo->pair_storage [j]) { case LLVMArgInIReg: if (MONO_CLASS_IS_SIMD (ctx->cfg, mono_class_from_mono_type_internal (t))) { index [0] = LLVMConstInt (LLVMInt32Type (), j * TARGET_SIZEOF_VOID_P, FALSE); addr = LLVMBuildGEP (builder, address, index, 1, ""); } else { daddr = LLVMBuildBitCast (ctx->builder, address, LLVMPointerType (IntPtrType (), 0), ""); index [0] = LLVMConstInt (LLVMInt32Type (), j, FALSE); addr = LLVMBuildGEP (builder, daddr, index, 1, ""); } args [pindex ++] = convert (ctx, LLVMBuildLoad (builder, LLVMBuildBitCast (ctx->builder, addr, LLVMPointerType (LLVMIntType (partsize * 8), 0), ""), ""), IntPtrType ()); break; case LLVMArgInFPReg: if (ainfo->esize == 8) arg_type = LLVMDoubleType (); else arg_type = LLVMFloatType (); daddr = LLVMBuildBitCast (ctx->builder, address, LLVMPointerType (arg_type, 0), ""); index [0] = LLVMConstInt (LLVMInt32Type (), j, FALSE); addr = LLVMBuildGEP (builder, daddr, index, 1, ""); args [pindex ++] = LLVMBuildLoad (builder, addr, ""); break; case LLVMArgNone: break; default: g_assert_not_reached (); } size -= TARGET_SIZEOF_VOID_P; } *nargs = pindex; } static LLVMValueRef build_alloca_llvm_type_name (EmitContext *ctx, LLVMTypeRef t, int align, const char *name) { /* * Have to place all alloca's at the end of the entry bb, since otherwise they would * get executed every time control reaches them. */ LLVMPositionBuilder (ctx->alloca_builder, get_bb (ctx, ctx->cfg->bb_entry), ctx->last_alloca); ctx->last_alloca = mono_llvm_build_alloca (ctx->alloca_builder, t, NULL, align, name); return ctx->last_alloca; } static LLVMValueRef build_alloca_llvm_type (EmitContext *ctx, LLVMTypeRef t, int align) { return build_alloca_llvm_type_name (ctx, t, align, ""); } static LLVMValueRef build_named_alloca (EmitContext *ctx, MonoType *t, char const *name) { MonoClass *k = mono_class_from_mono_type_internal (t); int align; g_assert (!mini_is_gsharedvt_variable_type (t)); if (MONO_CLASS_IS_SIMD (ctx->cfg, k)) align = mono_class_value_size (k, NULL); else align = mono_class_min_align (k); /* Sometimes align is not a power of 2 */ while (mono_is_power_of_two (align) == -1) align ++; return build_alloca_llvm_type_name (ctx, type_to_llvm_type (ctx, t), align, name); } static LLVMValueRef build_alloca (EmitContext *ctx, MonoType *t) { return build_named_alloca (ctx, t, ""); } static LLVMValueRef emit_gsharedvt_ldaddr (EmitContext *ctx, int vreg) { /* * gsharedvt local. * Compute the address of the local as gsharedvt_locals_var + gsharedvt_info_var->locals_offsets [idx]. 
*/ MonoCompile *cfg = ctx->cfg; LLVMBuilderRef builder = ctx->builder; LLVMValueRef offset, offset_var; LLVMValueRef info_var = ctx->values [cfg->gsharedvt_info_var->dreg]; LLVMValueRef locals_var = ctx->values [cfg->gsharedvt_locals_var->dreg]; LLVMValueRef ptr; char *name; g_assert (info_var); g_assert (locals_var); int idx = cfg->gsharedvt_vreg_to_idx [vreg] - 1; offset = LLVMConstInt (LLVMInt32Type (), MONO_STRUCT_OFFSET (MonoGSharedVtMethodRuntimeInfo, entries) + (idx * TARGET_SIZEOF_VOID_P), FALSE); ptr = LLVMBuildAdd (builder, convert (ctx, info_var, IntPtrType ()), convert (ctx, offset, IntPtrType ()), ""); name = g_strdup_printf ("gsharedvt_local_%d_offset", vreg); offset_var = LLVMBuildLoad (builder, convert (ctx, ptr, LLVMPointerType (LLVMInt32Type (), 0)), name); return LLVMBuildAdd (builder, convert (ctx, locals_var, IntPtrType ()), convert (ctx, offset_var, IntPtrType ()), ""); } /* * Put the global into the 'llvm.used' array to prevent it from being optimized away. */ static void mark_as_used (MonoLLVMModule *module, LLVMValueRef global) { if (!module->used) module->used = g_ptr_array_sized_new (16); g_ptr_array_add (module->used, global); } static void emit_llvm_used (MonoLLVMModule *module) { LLVMModuleRef lmodule = module->lmodule; LLVMTypeRef used_type; LLVMValueRef used, *used_elem; int i; if (!module->used) return; used_type = LLVMArrayType (LLVMPointerType (LLVMInt8Type (), 0), module->used->len); used = LLVMAddGlobal (lmodule, used_type, "llvm.used"); used_elem = g_new0 (LLVMValueRef, module->used->len); for (i = 0; i < module->used->len; ++i) used_elem [i] = LLVMConstBitCast ((LLVMValueRef)g_ptr_array_index (module->used, i), LLVMPointerType (LLVMInt8Type (), 0)); LLVMSetInitializer (used, LLVMConstArray (LLVMPointerType (LLVMInt8Type (), 0), used_elem, module->used->len)); LLVMSetLinkage (used, LLVMAppendingLinkage); LLVMSetSection (used, "llvm.metadata"); } /* * emit_get_method: * * Emit a function mapping method indexes to their code */ static void emit_get_method (MonoLLVMModule *module) { LLVMModuleRef lmodule = module->lmodule; LLVMValueRef func, switch_ins, m; LLVMBasicBlockRef entry_bb, fail_bb, bb, code_start_bb, code_end_bb, main_bb; LLVMBasicBlockRef *bbs = NULL; LLVMTypeRef rtype; LLVMBuilderRef builder = LLVMCreateBuilder (); LLVMValueRef table = NULL; char *name; int i; gboolean emit_table = FALSE; #ifdef TARGET_WASM /* * Emit a table of functions instead of a switch statement, * it's very efficient on wasm. This might be usable on * other platforms too. */ emit_table = TRUE; #endif rtype = LLVMPointerType (LLVMInt8Type (), 0); int table_len = module->max_method_idx + 1; if (emit_table) { LLVMTypeRef table_type; LLVMValueRef *table_elems; char *table_name; table_type = LLVMArrayType (rtype, table_len); table_name = g_strdup_printf ("%s_method_table", module->global_prefix); table = LLVMAddGlobal (lmodule, table_type, table_name); table_elems = g_new0 (LLVMValueRef, table_len); for (i = 0; i < table_len; ++i) { m = (LLVMValueRef)g_hash_table_lookup (module->idx_to_lmethod, GINT_TO_POINTER (i)); if (m && !g_hash_table_lookup (module->no_method_table_lmethods, m)) table_elems [i] = LLVMBuildBitCast (builder, m, rtype, ""); else table_elems [i] = LLVMConstNull (rtype); } LLVMSetInitializer (table, LLVMConstArray (LLVMPointerType (LLVMInt8Type (), 0), table_elems, table_len)); } /* * Emit a switch statement. Emitting a table of function addresses is smaller/faster, * but generating code seems safer. 
*/ func = LLVMAddFunction (lmodule, module->get_method_symbol, LLVMFunctionType1 (rtype, LLVMInt32Type (), FALSE)); LLVMSetLinkage (func, LLVMExternalLinkage); LLVMSetVisibility (func, LLVMHiddenVisibility); mono_llvm_add_func_attr (func, LLVM_ATTR_NO_UNWIND); module->get_method = func; entry_bb = LLVMAppendBasicBlock (func, "ENTRY"); /* * Return llvm_code_start/llvm_code_end when called with -1/-2. * Hopefully, the toolchain doesn't reorder these functions. If it does, * then we will have to find another solution. */ name = g_strdup_printf ("BB_CODE_START"); code_start_bb = LLVMAppendBasicBlock (func, name); g_free (name); LLVMPositionBuilderAtEnd (builder, code_start_bb); LLVMBuildRet (builder, LLVMBuildBitCast (builder, module->code_start, rtype, "")); name = g_strdup_printf ("BB_CODE_END"); code_end_bb = LLVMAppendBasicBlock (func, name); g_free (name); LLVMPositionBuilderAtEnd (builder, code_end_bb); LLVMBuildRet (builder, LLVMBuildBitCast (builder, module->code_end, rtype, "")); if (emit_table) { /* * Because table_len is computed using the method indexes available for us, it * might not include methods which are not compiled because of AOT profiles. * So table_len can be smaller than info->nmethods. Add a bounds check because * of that. * switch (index) { * case -1: return code_start; * case -2: return code_end; * default: return index < table_len ? method_table [index] : 0; * } */ fail_bb = LLVMAppendBasicBlock (func, "FAIL"); LLVMPositionBuilderAtEnd (builder, fail_bb); LLVMBuildRet (builder, LLVMBuildIntToPtr (builder, LLVMConstInt (LLVMInt32Type (), 0, FALSE), rtype, "")); main_bb = LLVMAppendBasicBlock (func, "MAIN"); LLVMPositionBuilderAtEnd (builder, main_bb); LLVMValueRef base = table; LLVMValueRef indexes [2]; indexes [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE); indexes [1] = LLVMGetParam (func, 0); LLVMValueRef addr = LLVMBuildGEP (builder, base, indexes, 2, ""); LLVMValueRef res = mono_llvm_build_load (builder, addr, "", FALSE); LLVMBuildRet (builder, res); LLVMBasicBlockRef default_bb = LLVMAppendBasicBlock (func, "DEFAULT"); LLVMPositionBuilderAtEnd (builder, default_bb); LLVMValueRef cmp = LLVMBuildICmp (builder, LLVMIntSGE, LLVMGetParam (func, 0), LLVMConstInt (LLVMInt32Type (), table_len, FALSE), ""); LLVMBuildCondBr (builder, cmp, fail_bb, main_bb); LLVMPositionBuilderAtEnd (builder, entry_bb); switch_ins = LLVMBuildSwitch (builder, LLVMGetParam (func, 0), default_bb, 0); LLVMAddCase (switch_ins, LLVMConstInt (LLVMInt32Type (), -1, FALSE), code_start_bb); LLVMAddCase (switch_ins, LLVMConstInt (LLVMInt32Type (), -2, FALSE), code_end_bb); } else { bbs = g_new0 (LLVMBasicBlockRef, module->max_method_idx + 1); for (i = 0; i < module->max_method_idx + 1; ++i) { name = g_strdup_printf ("BB_%d", i); bb = LLVMAppendBasicBlock (func, name); g_free (name); bbs [i] = bb; LLVMPositionBuilderAtEnd (builder, bb); m = (LLVMValueRef)g_hash_table_lookup (module->idx_to_lmethod, GINT_TO_POINTER (i)); if (m && !g_hash_table_lookup (module->no_method_table_lmethods, m)) LLVMBuildRet (builder, LLVMBuildBitCast (builder, m, rtype, "")); else LLVMBuildRet (builder, LLVMConstNull (rtype)); } fail_bb = LLVMAppendBasicBlock (func, "FAIL"); LLVMPositionBuilderAtEnd (builder, fail_bb); LLVMBuildRet (builder, LLVMConstNull (rtype)); LLVMPositionBuilderAtEnd (builder, entry_bb); switch_ins = LLVMBuildSwitch (builder, LLVMGetParam (func, 0), fail_bb, 0); LLVMAddCase (switch_ins, LLVMConstInt (LLVMInt32Type (), -1, FALSE), code_start_bb); LLVMAddCase (switch_ins, LLVMConstInt (LLVMInt32Type (), 
-2, FALSE), code_end_bb); for (i = 0; i < module->max_method_idx + 1; ++i) { LLVMAddCase (switch_ins, LLVMConstInt (LLVMInt32Type (), i, FALSE), bbs [i]); } } mark_as_used (module, func); LLVMDisposeBuilder (builder); } /* * emit_get_unbox_tramp: * * Emit a function mapping method indexes to their unbox trampoline */ static void emit_get_unbox_tramp (MonoLLVMModule *module) { LLVMModuleRef lmodule = module->lmodule; LLVMValueRef func, switch_ins, m; LLVMBasicBlockRef entry_bb, fail_bb, bb; LLVMBasicBlockRef *bbs; LLVMTypeRef rtype; LLVMBuilderRef builder = LLVMCreateBuilder (); char *name; int i; gboolean emit_table = FALSE; /* Similar to emit_get_method () */ #ifndef TARGET_WATCHOS emit_table = TRUE; #endif rtype = LLVMPointerType (LLVMInt8Type (), 0); if (emit_table) { // About 10% of methods have an unbox tramp, so emit a table of indexes for them // that the runtime can search using a binary search int len = 0; for (i = 0; i < module->max_method_idx + 1; ++i) { m = (LLVMValueRef)g_hash_table_lookup (module->idx_to_unbox_tramp, GINT_TO_POINTER (i)); if (m) len ++; } LLVMTypeRef table_type, elemtype; LLVMValueRef *table_elems; LLVMValueRef table; char *table_name; int table_len; int elemsize; table_len = len; elemsize = module->max_method_idx < 65000 ? 2 : 4; // The index table elemtype = elemsize == 2 ? LLVMInt16Type () : LLVMInt32Type (); table_type = LLVMArrayType (elemtype, table_len); table_name = g_strdup_printf ("%s_unbox_tramp_indexes", module->global_prefix); table = LLVMAddGlobal (lmodule, table_type, table_name); table_elems = g_new0 (LLVMValueRef, table_len); int idx = 0; for (i = 0; i < module->max_method_idx + 1; ++i) { m = (LLVMValueRef)g_hash_table_lookup (module->idx_to_unbox_tramp, GINT_TO_POINTER (i)); if (m) table_elems [idx ++] = LLVMConstInt (elemtype, i, FALSE); } LLVMSetInitializer (table, LLVMConstArray (elemtype, table_elems, table_len)); module->unbox_tramp_indexes = table; // The trampoline table elemtype = rtype; table_type = LLVMArrayType (elemtype, table_len); table_name = g_strdup_printf ("%s_unbox_trampolines", module->global_prefix); table = LLVMAddGlobal (lmodule, table_type, table_name); table_elems = g_new0 (LLVMValueRef, table_len); idx = 0; for (i = 0; i < module->max_method_idx + 1; ++i) { m = (LLVMValueRef)g_hash_table_lookup (module->idx_to_unbox_tramp, GINT_TO_POINTER (i)); if (m) table_elems [idx ++] = LLVMBuildBitCast (builder, m, rtype, ""); } LLVMSetInitializer (table, LLVMConstArray (elemtype, table_elems, table_len)); module->unbox_trampolines = table; module->unbox_tramp_num = table_len; module->unbox_tramp_elemsize = elemsize; return; } func = LLVMAddFunction (lmodule, module->get_unbox_tramp_symbol, LLVMFunctionType1 (rtype, LLVMInt32Type (), FALSE)); LLVMSetLinkage (func, LLVMExternalLinkage); LLVMSetVisibility (func, LLVMHiddenVisibility); mono_llvm_add_func_attr (func, LLVM_ATTR_NO_UNWIND); module->get_unbox_tramp = func; entry_bb = LLVMAppendBasicBlock (func, "ENTRY"); bbs = g_new0 (LLVMBasicBlockRef, module->max_method_idx + 1); for (i = 0; i < module->max_method_idx + 1; ++i) { m = (LLVMValueRef)g_hash_table_lookup (module->idx_to_unbox_tramp, GINT_TO_POINTER (i)); if (!m) continue; name = g_strdup_printf ("BB_%d", i); bb = LLVMAppendBasicBlock (func, name); g_free (name); bbs [i] = bb; LLVMPositionBuilderAtEnd (builder, bb); LLVMBuildRet (builder, LLVMBuildBitCast (builder, m, rtype, "")); } fail_bb = LLVMAppendBasicBlock (func, "FAIL"); LLVMPositionBuilderAtEnd (builder, fail_bb); LLVMBuildRet (builder, LLVMConstNull (rtype)); 
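/*
 * A sketch for illustration only (the names are hypothetical, the real
 * lookup is the LLVM IR generated below): the switch-based variant built
 * next behaves roughly like
 *
 *   gpointer
 *   get_unbox_tramp (int method_idx)
 *   {
 *           switch (method_idx) {
 *           case 0: return ut_method_0;
 *           case 1: return ut_method_1;
 *           ...
 *           default: return NULL;
 *           }
 *   }
 */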
LLVMPositionBuilderAtEnd (builder, entry_bb); switch_ins = LLVMBuildSwitch (builder, LLVMGetParam (func, 0), fail_bb, 0); for (i = 0; i < module->max_method_idx + 1; ++i) { m = (LLVMValueRef)g_hash_table_lookup (module->idx_to_unbox_tramp, GINT_TO_POINTER (i)); if (!m) continue; LLVMAddCase (switch_ins, LLVMConstInt (LLVMInt32Type (), i, FALSE), bbs [i]); } mark_as_used (module, func); LLVMDisposeBuilder (builder); } /* * emit_init_aotconst: * * Emit a function to initialize the aotconst_ variables. Called by the runtime. */ static void emit_init_aotconst (MonoLLVMModule *module) { LLVMModuleRef lmodule = module->lmodule; LLVMValueRef func; LLVMBasicBlockRef entry_bb; LLVMBuilderRef builder = LLVMCreateBuilder (); func = LLVMAddFunction (lmodule, module->init_aotconst_symbol, LLVMFunctionType2 (LLVMVoidType (), LLVMInt32Type (), IntPtrType (), FALSE)); LLVMSetLinkage (func, LLVMExternalLinkage); LLVMSetVisibility (func, LLVMHiddenVisibility); mono_llvm_add_func_attr (func, LLVM_ATTR_NO_UNWIND); module->init_aotconst_func = func; entry_bb = LLVMAppendBasicBlock (func, "ENTRY"); LLVMPositionBuilderAtEnd (builder, entry_bb); #ifdef TARGET_WASM /* Emit a table of aotconst addresses instead of a switch statement to save space */ LLVMValueRef aotconsts; LLVMTypeRef aotconst_addr_type = LLVMPointerType (module->ptr_type, 0); int table_size = module->max_got_offset + 1; LLVMTypeRef aotconst_arr_type = LLVMArrayType (aotconst_addr_type, table_size); LLVMValueRef aotconst_dummy = LLVMAddGlobal (module->lmodule, module->ptr_type, "aotconst_dummy"); LLVMSetInitializer (aotconst_dummy, LLVMConstNull (module->ptr_type)); LLVMSetVisibility (aotconst_dummy, LLVMHiddenVisibility); LLVMSetLinkage (aotconst_dummy, LLVMInternalLinkage); aotconsts = LLVMAddGlobal (module->lmodule, aotconst_arr_type, "aotconsts"); LLVMValueRef *aotconst_init = g_new0 (LLVMValueRef, table_size); for (int i = 0; i < table_size; ++i) { LLVMValueRef aotconst = (LLVMValueRef)g_hash_table_lookup (module->aotconst_vars, GINT_TO_POINTER (i)); if (aotconst) aotconst_init [i] = LLVMConstBitCast (aotconst, aotconst_addr_type); else aotconst_init [i] = LLVMConstBitCast (aotconst_dummy, aotconst_addr_type); } LLVMSetInitializer (aotconsts, LLVMConstArray (aotconst_addr_type, aotconst_init, table_size)); LLVMSetVisibility (aotconsts, LLVMHiddenVisibility); LLVMSetLinkage (aotconsts, LLVMInternalLinkage); LLVMBasicBlockRef exit_bb = LLVMAppendBasicBlock (func, "EXIT_BB"); LLVMBasicBlockRef main_bb = LLVMAppendBasicBlock (func, "BB"); LLVMValueRef cmp = LLVMBuildICmp (builder, LLVMIntSGE, LLVMGetParam (func, 0), LLVMConstInt (LLVMInt32Type (), table_size, FALSE), ""); LLVMBuildCondBr (builder, cmp, exit_bb, main_bb); LLVMPositionBuilderAtEnd (builder, main_bb); LLVMValueRef indexes [2]; indexes [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE); indexes [1] = LLVMGetParam (func, 0); LLVMValueRef aotconst_addr = LLVMBuildLoad (builder, LLVMBuildGEP (builder, aotconsts, indexes, 2, ""), ""); LLVMBuildStore (builder, LLVMBuildIntToPtr (builder, LLVMGetParam (func, 1), module->ptr_type, ""), aotconst_addr); LLVMBuildBr (builder, exit_bb); LLVMPositionBuilderAtEnd (builder, exit_bb); LLVMBuildRetVoid (builder); #else LLVMValueRef switch_ins; LLVMBasicBlockRef fail_bb, bb; LLVMBasicBlockRef *bbs = NULL; char *name; bbs = g_new0 (LLVMBasicBlockRef, module->max_got_offset + 1); for (int i = 0; i < module->max_got_offset + 1; ++i) { name = g_strdup_printf ("BB_%d", i); bb = LLVMAppendBasicBlock (func, name); g_free (name); bbs [i] = bb; 
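/*
 * Fill in BB_i for GOT slot i: if the slot has an aotconst variable, store
 * the address passed in as the second argument into it; either way the
 * block just returns.
 */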
LLVMPositionBuilderAtEnd (builder, bb); LLVMValueRef var = g_hash_table_lookup (module->aotconst_vars, GINT_TO_POINTER (i)); if (var) { LLVMValueRef addr = LLVMBuildBitCast (builder, var, LLVMPointerType (IntPtrType (), 0), ""); LLVMBuildStore (builder, LLVMGetParam (func, 1), addr); } LLVMBuildRetVoid (builder); } fail_bb = LLVMAppendBasicBlock (func, "FAIL"); LLVMPositionBuilderAtEnd (builder, fail_bb); LLVMBuildRetVoid (builder); LLVMPositionBuilderAtEnd (builder, entry_bb); switch_ins = LLVMBuildSwitch (builder, LLVMGetParam (func, 0), fail_bb, 0); for (int i = 0; i < module->max_got_offset + 1; ++i) LLVMAddCase (switch_ins, LLVMConstInt (LLVMInt32Type (), i, FALSE), bbs [i]); #endif LLVMDisposeBuilder (builder); } /* Add a function to mark the beginning of LLVM code */ static void emit_llvm_code_start (MonoLLVMModule *module) { LLVMModuleRef lmodule = module->lmodule; LLVMValueRef func; LLVMBasicBlockRef entry_bb; LLVMBuilderRef builder; func = LLVMAddFunction (lmodule, "llvm_code_start", LLVMFunctionType (LLVMVoidType (), NULL, 0, FALSE)); LLVMSetLinkage (func, LLVMInternalLinkage); mono_llvm_add_func_attr (func, LLVM_ATTR_NO_UNWIND); module->code_start = func; entry_bb = LLVMAppendBasicBlock (func, "ENTRY"); builder = LLVMCreateBuilder (); LLVMPositionBuilderAtEnd (builder, entry_bb); LLVMBuildRetVoid (builder); LLVMDisposeBuilder (builder); } /* * emit_init_func: * * Emit functions to initialize LLVM methods. * These are wrappers around the mini_llvm_init_method () JIT icall. * The wrappers handle adding the 'amodule' argument, loading the vtable from different locations, and they have * a cold calling convention. */ static LLVMValueRef emit_init_func (MonoLLVMModule *module, MonoAotInitSubtype subtype) { LLVMModuleRef lmodule = module->lmodule; LLVMValueRef func, indexes [2], args [16], callee, info_var, index_var, inited_var, cmp; LLVMBasicBlockRef entry_bb, inited_bb, notinited_bb; LLVMBuilderRef builder; LLVMTypeRef icall_sig; const char *wrapper_name = mono_marshal_get_aot_init_wrapper_name (subtype); LLVMTypeRef func_type = NULL; LLVMTypeRef arg_type = module->ptr_type; char *name = g_strdup_printf ("%s_%s", module->global_prefix, wrapper_name); switch (subtype) { case AOT_INIT_METHOD: func_type = LLVMFunctionType1 (LLVMVoidType (), arg_type, FALSE); break; case AOT_INIT_METHOD_GSHARED_MRGCTX: case AOT_INIT_METHOD_GSHARED_VTABLE: func_type = LLVMFunctionType2 (LLVMVoidType (), arg_type, IntPtrType (), FALSE); break; case AOT_INIT_METHOD_GSHARED_THIS: func_type = LLVMFunctionType2 (LLVMVoidType (), arg_type, ObjRefType (), FALSE); break; default: g_assert_not_reached (); } func = LLVMAddFunction (lmodule, name, func_type); info_var = LLVMGetParam (func, 0); LLVMSetLinkage (func, LLVMInternalLinkage); mono_llvm_add_func_attr (func, LLVM_ATTR_NO_INLINE); set_cold_cconv (func); entry_bb = LLVMAppendBasicBlock (func, "ENTRY"); builder = LLVMCreateBuilder (); LLVMPositionBuilderAtEnd (builder, entry_bb); /* Load method_index which is emitted at the start of the method info */ indexes [0] = const_int32 (0); indexes [1] = const_int32 (0); // FIXME: Make sure it's aligned index_var = LLVMBuildLoad (builder, LLVMBuildGEP (builder, LLVMBuildBitCast (builder, info_var, LLVMPointerType (LLVMInt32Type (), 0), ""), indexes, 1, ""), "method_index"); /* Check for is_inited here as well, since this can be called from JITted code which might not check it */ indexes [0] = const_int32 (0); indexes [1] = index_var; inited_var = LLVMBuildLoad (builder, LLVMBuildGEP (builder, module->inited_var, 
indexes, 2, ""), "is_inited"); cmp = LLVMBuildICmp (builder, LLVMIntEQ, inited_var, LLVMConstInt (LLVMTypeOf (inited_var), 0, FALSE), ""); inited_bb = LLVMAppendBasicBlock (func, "INITED"); notinited_bb = LLVMAppendBasicBlock (func, "NOT_INITED"); LLVMBuildCondBr (builder, cmp, notinited_bb, inited_bb); LLVMPositionBuilderAtEnd (builder, notinited_bb); LLVMValueRef amodule_var = get_aotconst_module (module, builder, MONO_PATCH_INFO_AOT_MODULE, NULL, LLVMPointerType (IntPtrType (), 0), NULL, NULL); args [0] = LLVMBuildPtrToInt (builder, module->info_var, IntPtrType (), ""); args [1] = LLVMBuildPtrToInt (builder, amodule_var, IntPtrType (), ""); args [2] = info_var; switch (subtype) { case AOT_INIT_METHOD: args [3] = LLVMConstNull (IntPtrType ()); break; case AOT_INIT_METHOD_GSHARED_VTABLE: args [3] = LLVMGetParam (func, 1); break; case AOT_INIT_METHOD_GSHARED_THIS: /* Load this->vtable */ args [3] = LLVMBuildBitCast (builder, LLVMGetParam (func, 1), LLVMPointerType (IntPtrType (), 0), ""); indexes [0] = const_int32 (MONO_STRUCT_OFFSET (MonoObject, vtable) / SIZEOF_VOID_P); args [3] = LLVMBuildLoad (builder, LLVMBuildGEP (builder, args [3], indexes, 1, ""), "vtable"); break; case AOT_INIT_METHOD_GSHARED_MRGCTX: /* Load mrgctx->vtable */ args [3] = LLVMBuildIntToPtr (builder, LLVMGetParam (func, 1), LLVMPointerType (IntPtrType (), 0), ""); indexes [0] = const_int32 (MONO_STRUCT_OFFSET (MonoMethodRuntimeGenericContext, class_vtable) / SIZEOF_VOID_P); args [3] = LLVMBuildLoad (builder, LLVMBuildGEP (builder, args [3], indexes, 1, ""), "vtable"); break; default: g_assert_not_reached (); break; } /* Call the mini_llvm_init_method JIT icall */ icall_sig = LLVMFunctionType4 (LLVMVoidType (), IntPtrType (), IntPtrType (), arg_type, IntPtrType (), FALSE); callee = get_aotconst_module (module, builder, MONO_PATCH_INFO_JIT_ICALL_ID, GINT_TO_POINTER (MONO_JIT_ICALL_mini_llvm_init_method), LLVMPointerType (icall_sig, 0), NULL, NULL); LLVMBuildCall (builder, callee, args, LLVMCountParamTypes (icall_sig), ""); /* * Set the inited flag * This is already done by the LLVM methods themselves, but its needed by JITted methods. 
*/ indexes [0] = const_int32 (0); indexes [1] = index_var; LLVMBuildStore (builder, LLVMConstInt (LLVMInt8Type (), 1, FALSE), LLVMBuildGEP (builder, module->inited_var, indexes, 2, "")); LLVMBuildBr (builder, inited_bb); LLVMPositionBuilderAtEnd (builder, inited_bb); LLVMBuildRetVoid (builder); LLVMVerifyFunction (func, LLVMAbortProcessAction); LLVMDisposeBuilder (builder); g_free (name); return func; } /* Emit a wrapper around the parameterless JIT icall ICALL_ID with a cold calling convention */ static LLVMValueRef emit_icall_cold_wrapper (MonoLLVMModule *module, LLVMModuleRef lmodule, MonoJitICallId icall_id, gboolean aot) { LLVMValueRef func, callee; LLVMBasicBlockRef entry_bb; LLVMBuilderRef builder; LLVMTypeRef sig; char *name; name = g_strdup_printf ("%s_icall_cold_wrapper_%d", module->global_prefix, icall_id); func = LLVMAddFunction (lmodule, name, LLVMFunctionType (LLVMVoidType (), NULL, 0, FALSE)); sig = LLVMFunctionType (LLVMVoidType (), NULL, 0, FALSE); LLVMSetLinkage (func, LLVMInternalLinkage); mono_llvm_add_func_attr (func, LLVM_ATTR_NO_INLINE); set_cold_cconv (func); entry_bb = LLVMAppendBasicBlock (func, "ENTRY"); builder = LLVMCreateBuilder (); LLVMPositionBuilderAtEnd (builder, entry_bb); if (aot) { callee = get_aotconst_module (module, builder, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (icall_id), LLVMPointerType (sig, 0), NULL, NULL); } else { MonoJitICallInfo * const info = mono_find_jit_icall_info (icall_id); gpointer target = (gpointer)mono_icall_get_wrapper_full (info, TRUE); LLVMValueRef tramp_var = LLVMAddGlobal (lmodule, LLVMPointerType (sig, 0), name); LLVMSetInitializer (tramp_var, LLVMConstIntToPtr (LLVMConstInt (LLVMInt64Type (), (guint64)(size_t)target, FALSE), LLVMPointerType (sig, 0))); LLVMSetLinkage (tramp_var, LLVMExternalLinkage); callee = LLVMBuildLoad (builder, tramp_var, ""); } LLVMBuildCall (builder, callee, NULL, 0, ""); LLVMBuildRetVoid (builder); LLVMVerifyFunction(func, LLVMAbortProcessAction); LLVMDisposeBuilder (builder); return func; } /* * Emit wrappers around the C icalls used to initialize llvm methods, to * make the calling code smaller and to enable usage of the llvm * cold calling convention. */ static void emit_init_funcs (MonoLLVMModule *module) { for (int i = 0; i < AOT_INIT_METHOD_NUM; ++i) module->init_methods [i] = emit_init_func (module, i); } static LLVMValueRef get_init_func (MonoLLVMModule *module, MonoAotInitSubtype subtype) { return module->init_methods [subtype]; } static void emit_gc_safepoint_poll (MonoLLVMModule *module, LLVMModuleRef lmodule, MonoCompile *cfg) { gboolean is_aot = cfg == NULL || cfg->compile_aot; LLVMValueRef func = mono_llvm_get_or_insert_gc_safepoint_poll (lmodule); mono_llvm_add_func_attr (func, LLVM_ATTR_NO_UNWIND); if (is_aot) { #if TARGET_WIN32 if (module->static_link) { LLVMSetLinkage (func, LLVMInternalLinkage); /* Prevent it from being optimized away, leading to asserts inside 'opt' */ mark_as_used (module, func); } else { LLVMSetLinkage (func, LLVMWeakODRLinkage); } #else LLVMSetLinkage (func, LLVMWeakODRLinkage); #endif } else { mono_llvm_add_func_attr (func, LLVM_ATTR_OPTIMIZE_NONE); // no need to waste time here, the function is already optimized and will be inlined. 
mono_llvm_add_func_attr (func, LLVM_ATTR_NO_INLINE); // optnone attribute requires noinline (but it will be inlined anyway) if (!module->gc_poll_cold_wrapper_compiled) { ERROR_DECL (error); /* Compiling a method here is a bit ugly, but it works */ MonoMethod *wrapper = mono_marshal_get_llvm_func_wrapper (LLVM_FUNC_WRAPPER_GC_POLL); module->gc_poll_cold_wrapper_compiled = mono_jit_compile_method (wrapper, error); mono_error_assert_ok (error); } } LLVMBasicBlockRef entry_bb = LLVMAppendBasicBlock (func, "gc.safepoint_poll.entry"); LLVMBasicBlockRef poll_bb = LLVMAppendBasicBlock (func, "gc.safepoint_poll.poll"); LLVMBasicBlockRef exit_bb = LLVMAppendBasicBlock (func, "gc.safepoint_poll.exit"); LLVMTypeRef ptr_type = LLVMPointerType (IntPtrType (), 0); LLVMBuilderRef builder = LLVMCreateBuilder (); /* entry: */ LLVMPositionBuilderAtEnd (builder, entry_bb); LLVMValueRef poll_val_ptr; if (is_aot) { poll_val_ptr = get_aotconst_module (module, builder, MONO_PATCH_INFO_GC_SAFE_POINT_FLAG, NULL, ptr_type, NULL, NULL); } else { LLVMValueRef poll_val_int = LLVMConstInt (IntPtrType (), (guint64) &mono_polling_required, FALSE); poll_val_ptr = LLVMBuildIntToPtr (builder, poll_val_int, ptr_type, ""); } LLVMValueRef poll_val_ptr_load = LLVMBuildLoad (builder, poll_val_ptr, ""); // probably needs to be volatile LLVMValueRef poll_val = LLVMBuildPtrToInt (builder, poll_val_ptr_load, IntPtrType (), ""); LLVMValueRef poll_val_zero = LLVMConstNull (LLVMTypeOf (poll_val)); LLVMValueRef cmp = LLVMBuildICmp (builder, LLVMIntEQ, poll_val, poll_val_zero, ""); mono_llvm_build_weighted_branch (builder, cmp, exit_bb, poll_bb, 1000 /* weight for exit_bb */, 1 /* weight for poll_bb */); /* poll: */ LLVMPositionBuilderAtEnd (builder, poll_bb); LLVMValueRef call; if (is_aot) { LLVMValueRef icall_wrapper = emit_icall_cold_wrapper (module, lmodule, MONO_JIT_ICALL_mono_threads_state_poll, TRUE); module->gc_poll_cold_wrapper = icall_wrapper; call = LLVMBuildCall (builder, icall_wrapper, NULL, 0, ""); } else { // in JIT mode we have to emit @gc.safepoint_poll function for each method (module) // this function calls gc_poll_cold_wrapper_compiled via a global variable. // @gc.safepoint_poll will be inlined and can be deleted after -place-safepoints pass. 
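/*
 * Rough sketch of what the finished gc.safepoint_poll amounts to once
 * inlined (illustration only; the flag load, branch and cold call are
 * emitted as IR below):
 *
 *   if (mono_polling_required)
 *           mono_threads_state_poll ();
 */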
LLVMTypeRef poll_sig = LLVMFunctionType0 (LLVMVoidType (), FALSE); LLVMTypeRef poll_sig_ptr = LLVMPointerType (poll_sig, 0); gpointer target = resolve_patch (cfg, MONO_PATCH_INFO_ABS, module->gc_poll_cold_wrapper_compiled); LLVMValueRef tramp_var = LLVMAddGlobal (lmodule, poll_sig_ptr, "mono_threads_state_poll"); LLVMValueRef target_val = LLVMConstInt (LLVMInt64Type (), (guint64) target, FALSE); LLVMSetInitializer (tramp_var, LLVMConstIntToPtr (target_val, poll_sig_ptr)); LLVMSetLinkage (tramp_var, LLVMExternalLinkage); LLVMValueRef callee = LLVMBuildLoad (builder, tramp_var, ""); call = LLVMBuildCall (builder, callee, NULL, 0, ""); } set_call_cold_cconv (call); LLVMBuildBr (builder, exit_bb); /* exit: */ LLVMPositionBuilderAtEnd (builder, exit_bb); LLVMBuildRetVoid (builder); LLVMDisposeBuilder (builder); } static void emit_llvm_code_end (MonoLLVMModule *module) { LLVMModuleRef lmodule = module->lmodule; LLVMValueRef func; LLVMBasicBlockRef entry_bb; LLVMBuilderRef builder; func = LLVMAddFunction (lmodule, "llvm_code_end", LLVMFunctionType (LLVMVoidType (), NULL, 0, FALSE)); LLVMSetLinkage (func, LLVMInternalLinkage); mono_llvm_add_func_attr (func, LLVM_ATTR_NO_UNWIND); module->code_end = func; entry_bb = LLVMAppendBasicBlock (func, "ENTRY"); builder = LLVMCreateBuilder (); LLVMPositionBuilderAtEnd (builder, entry_bb); LLVMBuildRetVoid (builder); LLVMDisposeBuilder (builder); } static void emit_div_check (EmitContext *ctx, LLVMBuilderRef builder, MonoBasicBlock *bb, MonoInst *ins, LLVMValueRef lhs, LLVMValueRef rhs) { gboolean need_div_check = ctx->cfg->backend->need_div_check; if (bb->region) /* LLVM doesn't know that these can throw an exception since they are not called through an intrinsic */ need_div_check = TRUE; if (!need_div_check) return; switch (ins->opcode) { case OP_IDIV: case OP_LDIV: case OP_IREM: case OP_LREM: case OP_IDIV_UN: case OP_LDIV_UN: case OP_IREM_UN: case OP_LREM_UN: case OP_IDIV_IMM: case OP_LDIV_IMM: case OP_IREM_IMM: case OP_LREM_IMM: case OP_IDIV_UN_IMM: case OP_LDIV_UN_IMM: case OP_IREM_UN_IMM: case OP_LREM_UN_IMM: { LLVMValueRef cmp; gboolean is_signed = (ins->opcode == OP_IDIV || ins->opcode == OP_LDIV || ins->opcode == OP_IREM || ins->opcode == OP_LREM || ins->opcode == OP_IDIV_IMM || ins->opcode == OP_LDIV_IMM || ins->opcode == OP_IREM_IMM || ins->opcode == OP_LREM_IMM); cmp = LLVMBuildICmp (builder, LLVMIntEQ, rhs, LLVMConstInt (LLVMTypeOf (rhs), 0, FALSE), ""); emit_cond_system_exception (ctx, bb, "DivideByZeroException", cmp, FALSE); if (!ctx_ok (ctx)) break; builder = ctx->builder; /* b == -1 && a == 0x80000000 */ if (is_signed) { LLVMValueRef c = (LLVMTypeOf (lhs) == LLVMInt32Type ()) ? LLVMConstInt (LLVMTypeOf (lhs), 0x80000000, FALSE) : LLVMConstInt (LLVMTypeOf (lhs), 0x8000000000000000LL, FALSE); LLVMValueRef cond1 = LLVMBuildICmp (builder, LLVMIntEQ, rhs, LLVMConstInt (LLVMTypeOf (rhs), -1, FALSE), ""); LLVMValueRef cond2 = LLVMBuildICmp (builder, LLVMIntEQ, lhs, c, ""); cmp = LLVMBuildICmp (builder, LLVMIntEQ, LLVMBuildAnd (builder, cond1, cond2, ""), LLVMConstInt (LLVMInt1Type (), 1, FALSE), ""); emit_cond_system_exception (ctx, bb, "OverflowException", cmp, FALSE); if (!ctx_ok (ctx)) break; builder = ctx->builder; } break; } default: break; } } /* * emit_method_init: * * Emit code to initialize the GOT slots used by the method. 
*/ static void emit_method_init (EmitContext *ctx) { LLVMValueRef indexes [16], args [16]; LLVMValueRef inited_var, cmp, call; LLVMBasicBlockRef inited_bb, notinited_bb; LLVMBuilderRef builder = ctx->builder; MonoCompile *cfg = ctx->cfg; MonoAotInitSubtype subtype; ctx->module->max_inited_idx = MAX (ctx->module->max_inited_idx, cfg->method_index); indexes [0] = const_int32 (0); indexes [1] = const_int32 (cfg->method_index); inited_var = LLVMBuildLoad (builder, LLVMBuildGEP (builder, ctx->module->inited_var, indexes, 2, ""), "is_inited"); args [0] = inited_var; args [1] = LLVMConstInt (LLVMInt8Type (), 1, FALSE); inited_var = LLVMBuildCall (ctx->builder, get_intrins (ctx, INTRINS_EXPECT_I8), args, 2, ""); cmp = LLVMBuildICmp (builder, LLVMIntEQ, inited_var, LLVMConstInt (LLVMTypeOf (inited_var), 0, FALSE), ""); inited_bb = ctx->inited_bb; notinited_bb = gen_bb (ctx, "NOTINITED_BB"); ctx->cfg->llvmonly_init_cond = LLVMBuildCondBr (ctx->builder, cmp, notinited_bb, inited_bb); builder = ctx->builder = create_builder (ctx); LLVMPositionBuilderAtEnd (ctx->builder, notinited_bb); LLVMTypeRef type = LLVMArrayType (LLVMInt8Type (), 0); char *symbol = g_strdup_printf ("info_dummy_%s", cfg->llvm_method_name); LLVMValueRef info_var = LLVMAddGlobal (ctx->lmodule, type, symbol); g_free (symbol); cfg->llvm_dummy_info_var = info_var; int nargs = 0; args [nargs ++] = convert (ctx, info_var, ctx->module->ptr_type); switch (cfg->rgctx_access) { case MONO_RGCTX_ACCESS_MRGCTX: if (ctx->rgctx_arg) { args [nargs ++] = convert (ctx, ctx->rgctx_arg, IntPtrType ()); subtype = AOT_INIT_METHOD_GSHARED_MRGCTX; } else { g_assert (ctx->this_arg); args [nargs ++] = convert (ctx, ctx->this_arg, ObjRefType ()); subtype = AOT_INIT_METHOD_GSHARED_THIS; } break; case MONO_RGCTX_ACCESS_VTABLE: args [nargs ++] = convert (ctx, ctx->rgctx_arg, IntPtrType ()); subtype = AOT_INIT_METHOD_GSHARED_VTABLE; break; case MONO_RGCTX_ACCESS_THIS: args [nargs ++] = convert (ctx, ctx->this_arg, ObjRefType ()); subtype = AOT_INIT_METHOD_GSHARED_THIS; break; case MONO_RGCTX_ACCESS_NONE: subtype = AOT_INIT_METHOD; break; default: g_assert_not_reached (); } call = LLVMBuildCall (builder, ctx->module->init_methods [subtype], args, nargs, ""); /* * This enables llvm to keep arguments in their original registers/ * scratch registers, since the call will not clobber them. */ set_call_cold_cconv (call); // Set the inited flag indexes [0] = const_int32 (0); indexes [1] = const_int32 (cfg->method_index); LLVMBuildStore (builder, LLVMConstInt (LLVMInt8Type (), 1, FALSE), LLVMBuildGEP (builder, ctx->module->inited_var, indexes, 2, "")); LLVMBuildBr (builder, inited_bb); ctx->bblocks [cfg->bb_entry->block_num].end_bblock = inited_bb; builder = ctx->builder = create_builder (ctx); LLVMPositionBuilderAtEnd (ctx->builder, inited_bb); } static void emit_unbox_tramp (EmitContext *ctx, const char *method_name, LLVMTypeRef method_type, LLVMValueRef method, int method_index) { /* * Emit unbox trampoline using a tailcall */ LLVMValueRef tramp, call, *args; LLVMBuilderRef builder; LLVMBasicBlockRef lbb; LLVMCallInfo *linfo; char *tramp_name; int i, nargs; tramp_name = g_strdup_printf ("ut_%s", method_name); tramp = LLVMAddFunction (ctx->module->lmodule, tramp_name, method_type); LLVMSetLinkage (tramp, LLVMInternalLinkage); mono_llvm_add_func_attr (tramp, LLVM_ATTR_OPTIMIZE_FOR_SIZE); //mono_llvm_add_func_attr (tramp, LLVM_ATTR_NO_UNWIND); linfo = ctx->linfo; // FIXME: Reduce code duplication with mono_llvm_compile_method () etc. 
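/*
 * Conceptually (a sketch, not the generated IR), the trampoline built
 * below does:
 *
 *   return method ((char *)this_obj + MONO_ABI_SIZEOF (MonoObject), other_args...);
 *
 * i.e. it skips the MonoObject header so 'this' points at the unboxed
 * value, then (tail) calls the real method.
 */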
if (!ctx->llvm_only && ctx->rgctx_arg_pindex != -1) mono_llvm_add_param_attr (LLVMGetParam (tramp, ctx->rgctx_arg_pindex), LLVM_ATTR_IN_REG); if (ctx->cfg->vret_addr) { LLVMSetValueName (LLVMGetParam (tramp, linfo->vret_arg_pindex), "vret"); if (linfo->ret.storage == LLVMArgVtypeByRef) { mono_llvm_add_param_attr (LLVMGetParam (tramp, linfo->vret_arg_pindex), LLVM_ATTR_STRUCT_RET); mono_llvm_add_param_attr (LLVMGetParam (tramp, linfo->vret_arg_pindex), LLVM_ATTR_NO_ALIAS); } } lbb = LLVMAppendBasicBlock (tramp, ""); builder = LLVMCreateBuilder (); LLVMPositionBuilderAtEnd (builder, lbb); nargs = LLVMCountParamTypes (method_type); args = g_new0 (LLVMValueRef, nargs); for (i = 0; i < nargs; ++i) { args [i] = LLVMGetParam (tramp, i); if (i == ctx->this_arg_pindex) { LLVMTypeRef arg_type = LLVMTypeOf (args [i]); args [i] = LLVMBuildPtrToInt (builder, args [i], IntPtrType (), ""); args [i] = LLVMBuildAdd (builder, args [i], LLVMConstInt (IntPtrType (), MONO_ABI_SIZEOF (MonoObject), FALSE), ""); args [i] = LLVMBuildIntToPtr (builder, args [i], arg_type, ""); } } call = LLVMBuildCall (builder, method, args, nargs, ""); if (!ctx->llvm_only && ctx->rgctx_arg_pindex != -1) mono_llvm_add_instr_attr (call, 1 + ctx->rgctx_arg_pindex, LLVM_ATTR_IN_REG); if (linfo->ret.storage == LLVMArgVtypeByRef) mono_llvm_add_instr_attr (call, 1 + linfo->vret_arg_pindex, LLVM_ATTR_STRUCT_RET); // FIXME: This causes assertions in clang //mono_llvm_set_must_tailcall (call); if (LLVMGetReturnType (method_type) == LLVMVoidType ()) LLVMBuildRetVoid (builder); else LLVMBuildRet (builder, call); g_hash_table_insert (ctx->module->idx_to_unbox_tramp, GINT_TO_POINTER (method_index), tramp); LLVMDisposeBuilder (builder); } #ifdef TARGET_WASM static void emit_gc_pin (EmitContext *ctx, LLVMBuilderRef builder, int vreg) { LLVMValueRef index0 = LLVMConstInt (LLVMInt32Type (), 0, FALSE); LLVMValueRef index1 = LLVMConstInt (LLVMInt32Type (), ctx->gc_var_indexes [vreg] - 1, FALSE); LLVMValueRef indexes [] = { index0, index1 }; LLVMValueRef addr = LLVMBuildGEP (builder, ctx->gc_pin_area, indexes, 2, ""); mono_llvm_build_store (builder, convert (ctx, ctx->values [vreg], IntPtrType ()), addr, TRUE, LLVM_BARRIER_NONE); } #endif /* * emit_entry_bb: * * Emit code to load/convert arguments. */ static void emit_entry_bb (EmitContext *ctx, LLVMBuilderRef builder) { int i, j, pindex; MonoCompile *cfg = ctx->cfg; MonoMethodSignature *sig = ctx->sig; LLVMCallInfo *linfo = ctx->linfo; MonoBasicBlock *bb; char **names; LLVMBuilderRef old_builder = ctx->builder; ctx->builder = builder; ctx->alloca_builder = create_builder (ctx); #ifdef TARGET_WASM /* * For GC stack scanning to work, allocate an area on the stack and store * every ref vreg into it after it's written. Because the stack is scanned * conservatively, the objects will be pinned, so the vregs can directly * reference the objects; there is no need to load them from the stack * on every access. */ ctx->gc_var_indexes = g_new0 (int, cfg->next_vreg); int ngc_vars = 0; for (i = 0; i < cfg->next_vreg; ++i) { if (vreg_is_ref (cfg, i)) { ctx->gc_var_indexes [i] = ngc_vars + 1; ngc_vars ++; } } // FIXME: Count only live vregs ctx->gc_pin_area = build_alloca_llvm_type_name (ctx, LLVMArrayType (IntPtrType (), ngc_vars), 0, "gc_pin"); #endif /* * Handle indirect/volatile variables by allocating memory for them * using 'alloca', and storing their address in a temporary. */
*/ for (i = 0; i < cfg->num_varinfo; ++i) { MonoInst *var = cfg->varinfo [i]; if ((var->opcode == OP_GSHAREDVT_LOCAL || var->opcode == OP_GSHAREDVT_ARG_REGOFFSET)) continue; if (var->flags & (MONO_INST_VOLATILE|MONO_INST_INDIRECT) || (mini_type_is_vtype (var->inst_vtype) && !MONO_CLASS_IS_SIMD (ctx->cfg, var->klass))) { if (!ctx_ok (ctx)) return; /* Could be already created by an OP_VPHI */ if (!ctx->addresses [var->dreg]) { if (var->flags & MONO_INST_LMF) { // FIXME: Allocate a smaller struct in the deopt case int size = cfg->deopt ? MONO_ABI_SIZEOF (MonoLMFExt) : MONO_ABI_SIZEOF (MonoLMF); ctx->addresses [var->dreg] = build_alloca_llvm_type_name (ctx, LLVMArrayType (LLVMInt8Type (), size), sizeof (target_mgreg_t), "lmf"); } else { char *name = g_strdup_printf ("vreg_loc_%d", var->dreg); ctx->addresses [var->dreg] = build_named_alloca (ctx, var->inst_vtype, name); g_free (name); } } ctx->vreg_cli_types [var->dreg] = var->inst_vtype; } } names = g_new (char *, sig->param_count); mono_method_get_param_names (cfg->method, (const char **) names); for (i = 0; i < sig->param_count; ++i) { LLVMArgInfo *ainfo = &linfo->args [i + sig->hasthis]; int reg = cfg->args [i + sig->hasthis]->dreg; char *name; pindex = ainfo->pindex; LLVMValueRef arg = LLVMGetParam (ctx->lmethod, pindex); switch (ainfo->storage) { case LLVMArgVtypeInReg: case LLVMArgAsFpArgs: { LLVMValueRef args [8]; int j; pindex += ainfo->ndummy_fpargs; /* The argument is received as a set of int/fp arguments, store them into the real argument */ memset (args, 0, sizeof (args)); if (ainfo->storage == LLVMArgVtypeInReg) { args [0] = LLVMGetParam (ctx->lmethod, pindex); if (ainfo->pair_storage [1] != LLVMArgNone) args [1] = LLVMGetParam (ctx->lmethod, pindex + 1); } else { g_assert (ainfo->nslots <= 8); for (j = 0; j < ainfo->nslots; ++j) args [j] = LLVMGetParam (ctx->lmethod, pindex + j); } ctx->addresses [reg] = build_alloca (ctx, ainfo->type); emit_args_to_vtype (ctx, builder, ainfo->type, ctx->addresses [reg], ainfo, args); break; } case LLVMArgVtypeByVal: { ctx->addresses [reg] = LLVMGetParam (ctx->lmethod, pindex); break; } case LLVMArgVtypeAddr: case LLVMArgVtypeByRef: { /* The argument is passed by ref */ ctx->addresses [reg] = LLVMGetParam (ctx->lmethod, pindex); break; } case LLVMArgAsIArgs: { LLVMValueRef arg = LLVMGetParam (ctx->lmethod, pindex); int size; MonoType *t = mini_get_underlying_type (ainfo->type); /* The argument is received as an array of ints, store it into the real argument */ ctx->addresses [reg] = build_alloca (ctx, t); size = mono_class_value_size (mono_class_from_mono_type_internal (t), NULL); if (size == 0) { } else if (size < TARGET_SIZEOF_VOID_P) { /* The upper bits of the registers might not be valid */ LLVMValueRef val = LLVMBuildExtractValue (builder, arg, 0, ""); LLVMValueRef dest = convert (ctx, ctx->addresses [reg], LLVMPointerType (LLVMIntType (size * 8), 0)); LLVMBuildStore (ctx->builder, LLVMBuildTrunc (builder, val, LLVMIntType (size * 8), ""), dest); } else { LLVMBuildStore (ctx->builder, arg, convert (ctx, ctx->addresses [reg], LLVMPointerType (LLVMTypeOf (arg), 0))); } break; } case LLVMArgVtypeAsScalar: g_assert_not_reached (); break; case LLVMArgWasmVtypeAsScalar: { MonoType *t = mini_get_underlying_type (ainfo->type); /* The argument is received as a scalar */ ctx->addresses [reg] = build_alloca (ctx, t); LLVMValueRef dest = convert (ctx, ctx->addresses [reg], LLVMPointerType (LLVMIntType (ainfo->esize * 8), 0)); LLVMBuildStore (ctx->builder, arg, dest); break; } case LLVMArgGsharedvtFixed: 
{ /* These are non-gsharedvt arguments passed by ref, the rest of the IR treats them as scalars */ LLVMValueRef arg = LLVMGetParam (ctx->lmethod, pindex); if (names [i]) name = g_strdup_printf ("arg_%s", names [i]); else name = g_strdup_printf ("arg_%d", i); ctx->values [reg] = LLVMBuildLoad (builder, convert (ctx, arg, LLVMPointerType (type_to_llvm_type (ctx, ainfo->type), 0)), name); break; } case LLVMArgGsharedvtFixedVtype: { LLVMValueRef arg = LLVMGetParam (ctx->lmethod, pindex); if (names [i]) name = g_strdup_printf ("vtype_arg_%s", names [i]); else name = g_strdup_printf ("vtype_arg_%d", i); /* Non-gsharedvt vtype argument passed by ref, the rest of the IR treats it as a vtype */ g_assert (ctx->addresses [reg]); LLVMSetValueName (ctx->addresses [reg], name); LLVMBuildStore (builder, LLVMBuildLoad (builder, convert (ctx, arg, LLVMPointerType (type_to_llvm_type (ctx, ainfo->type), 0)), ""), ctx->addresses [reg]); break; } case LLVMArgGsharedvtVariable: /* The IR treats these as variables with addresses */ if (!ctx->addresses [reg]) ctx->addresses [reg] = LLVMGetParam (ctx->lmethod, pindex); break; default: { LLVMTypeRef t; /* Needed to avoid phi argument mismatch errors since operations on pointers produce i32/i64 */ if (m_type_is_byref (ainfo->type)) t = IntPtrType (); else t = type_to_llvm_type (ctx, ainfo->type); ctx->values [reg] = convert_full (ctx, ctx->values [reg], llvm_type_to_stack_type (cfg, t), type_is_unsigned (ctx, ainfo->type)); break; } } switch (ainfo->storage) { case LLVMArgVtypeInReg: case LLVMArgVtypeByVal: case LLVMArgAsIArgs: // FIXME: Enabling this fails on windows case LLVMArgVtypeAddr: case LLVMArgVtypeByRef: { if (MONO_CLASS_IS_SIMD (ctx->cfg, mono_class_from_mono_type_internal (ainfo->type))) /* Treat these as normal values */ ctx->values [reg] = LLVMBuildLoad (builder, ctx->addresses [reg], "simd_vtype"); break; } default: break; } } g_free (names); if (sig->hasthis) { /* Handle the 'this' argument as an input to phi nodes */ int reg = cfg->args [0]->dreg; if (ctx->vreg_types [reg]) ctx->values [reg] = convert (ctx, ctx->values [reg], ctx->vreg_types [reg]); } if (cfg->vret_addr) emit_volatile_store (ctx, cfg->vret_addr->dreg); if (sig->hasthis) emit_volatile_store (ctx, cfg->args [0]->dreg); for (i = 0; i < sig->param_count; ++i) if (!mini_type_is_vtype (sig->params [i])) emit_volatile_store (ctx, cfg->args [i + sig->hasthis]->dreg); if (sig->hasthis && !cfg->rgctx_var && cfg->gshared && !cfg->llvm_only) { LLVMValueRef this_alloc; /* * The exception handling code needs the location where the this argument was * stored for gshared methods. We create a separate alloca to hold it, and mark it * with the "mono.this" custom metadata to tell llvm that it needs to save its * location into the LSDA. */ this_alloc = mono_llvm_build_alloca (builder, ThisType (), LLVMConstInt (LLVMInt32Type (), 1, FALSE), 0, ""); /* This volatile store will keep the alloca alive */ mono_llvm_build_store (builder, ctx->values [cfg->args [0]->dreg], this_alloc, TRUE, LLVM_BARRIER_NONE); set_metadata_flag (this_alloc, "mono.this"); } if (cfg->rgctx_var) { if (!(cfg->rgctx_var->flags & MONO_INST_VOLATILE)) { /* FIXME: This could be volatile even in llvmonly mode if used inside a clause etc. */ g_assert (!ctx->addresses [cfg->rgctx_var->dreg]); ctx->values [cfg->rgctx_var->dreg] = ctx->rgctx_arg; } else { LLVMValueRef rgctx_alloc, store; /* * We handle the rgctx arg similarly to the this pointer. 
*/ g_assert (ctx->addresses [cfg->rgctx_var->dreg]); rgctx_alloc = ctx->addresses [cfg->rgctx_var->dreg]; /* This volatile store will keep the alloca alive */ store = mono_llvm_build_store (builder, convert (ctx, ctx->rgctx_arg, IntPtrType ()), rgctx_alloc, TRUE, LLVM_BARRIER_NONE); (void)store; /* unused */ set_metadata_flag (rgctx_alloc, "mono.this"); } } #ifdef TARGET_WASM /* * Store ref arguments to the pin area. * FIXME: This might not be needed, since the caller already does it ? */ for (i = 0; i < cfg->num_varinfo; ++i) { MonoInst *var = cfg->varinfo [i]; if (var->opcode == OP_ARG && vreg_is_ref (cfg, var->dreg) && ctx->values [var->dreg]) emit_gc_pin (ctx, builder, var->dreg); } #endif if (cfg->deopt) { LLVMValueRef addr, index [2]; MonoMethodHeader *header = cfg->header; int nfields = (sig->ret->type != MONO_TYPE_VOID ? 1 : 0) + sig->hasthis + sig->param_count + header->num_locals + 2; LLVMTypeRef *types = g_alloca (nfields * sizeof (LLVMTypeRef)); int findex = 0; /* method */ types [findex ++] = IntPtrType (); /* il_offset */ types [findex ++] = LLVMInt32Type (); int data_start = findex; /* data */ if (sig->ret->type != MONO_TYPE_VOID) types [findex ++] = IntPtrType (); if (sig->hasthis) types [findex ++] = IntPtrType (); for (int i = 0; i < sig->param_count; ++i) types [findex ++] = LLVMPointerType (type_to_llvm_type (ctx, sig->params [i]), 0); for (int i = 0; i < header->num_locals; ++i) types [findex ++] = LLVMPointerType (type_to_llvm_type (ctx, header->locals [i]), 0); g_assert (findex == nfields); char *name = g_strdup_printf ("%s_il_state", ctx->method_name); LLVMTypeRef il_state_type = LLVMStructCreateNamed (ctx->module->context, name); LLVMStructSetBody (il_state_type, types, nfields, FALSE); g_free (name); ctx->il_state = build_alloca_llvm_type_name (ctx, il_state_type, 0, "il_state"); g_assert (cfg->il_state_var); ctx->addresses [cfg->il_state_var->dreg] = ctx->il_state; /* Set il_state->il_offset = -1 */ index [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE); index [1] = LLVMConstInt (LLVMInt32Type (), 1, FALSE); addr = LLVMBuildGEP (builder, ctx->il_state, index, 2, ""); LLVMBuildStore (ctx->builder, LLVMConstInt (types [1], -1, FALSE), addr); /* * Set il_state->data [i] to either the address of the arg/local, or NULL. * Because of mono_liveness_handle_exception_clauses (), all locals used/reachable from * clauses are supposed to be volatile, so they have an address. 
*/ findex = data_start; if (sig->ret->type != MONO_TYPE_VOID) { LLVMTypeRef ret_type = type_to_llvm_type (ctx, sig->ret); ctx->il_state_ret = build_alloca_llvm_type_name (ctx, ret_type, 0, "il_state_ret"); index [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE); index [1] = LLVMConstInt (LLVMInt32Type (), findex, FALSE); addr = LLVMBuildGEP (builder, ctx->il_state, index, 2, ""); LLVMBuildStore (ctx->builder, ctx->il_state_ret, convert (ctx, addr, LLVMPointerType (LLVMTypeOf (ctx->il_state_ret), 0))); findex ++; } for (int i = 0; i < sig->hasthis + sig->param_count; ++i) { LLVMValueRef var_addr = ctx->addresses [cfg->args [i]->dreg]; index [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE); index [1] = LLVMConstInt (LLVMInt32Type (), findex, FALSE); addr = LLVMBuildGEP (builder, ctx->il_state, index, 2, ""); if (var_addr) LLVMBuildStore (ctx->builder, var_addr, convert (ctx, addr, LLVMPointerType (LLVMTypeOf (var_addr), 0))); else LLVMBuildStore (ctx->builder, LLVMConstNull (types [findex]), addr); findex ++; } for (int i = 0; i < header->num_locals; ++i) { LLVMValueRef var_addr = ctx->addresses [cfg->locals [i]->dreg]; index [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE); index [1] = LLVMConstInt (LLVMInt32Type (), findex, FALSE); addr = LLVMBuildGEP (builder, ctx->il_state, index, 2, ""); if (var_addr) LLVMBuildStore (ctx->builder, LLVMBuildBitCast (builder, var_addr, types [findex], ""), addr); else LLVMBuildStore (ctx->builder, LLVMConstNull (types [findex]), addr); findex ++; } } /* Initialize the method if needed */ if (cfg->compile_aot) { /* Emit a location for the initialization code */ ctx->init_bb = gen_bb (ctx, "INIT_BB"); ctx->inited_bb = gen_bb (ctx, "INITED_BB"); LLVMBuildBr (ctx->builder, ctx->init_bb); builder = ctx->builder = create_builder (ctx); LLVMPositionBuilderAtEnd (ctx->builder, ctx->inited_bb); ctx->bblocks [cfg->bb_entry->block_num].end_bblock = ctx->inited_bb; } /* Compute nesting between clauses */ ctx->nested_in = (GSList**)mono_mempool_alloc0 (cfg->mempool, sizeof (GSList*) * cfg->header->num_clauses); for (i = 0; i < cfg->header->num_clauses; ++i) { for (j = 0; j < cfg->header->num_clauses; ++j) { MonoExceptionClause *clause1 = &cfg->header->clauses [i]; MonoExceptionClause *clause2 = &cfg->header->clauses [j]; if (i != j && clause1->try_offset >= clause2->try_offset && clause1->handler_offset <= clause2->handler_offset) ctx->nested_in [i] = g_slist_prepend_mempool (cfg->mempool, ctx->nested_in [i], GINT_TO_POINTER (j)); } } /* * For finally clauses, create an indicator variable telling OP_ENDFINALLY whether * it needs to continue normally, or return back to the exception handling system. */ for (bb = cfg->bb_entry; bb; bb = bb->next_bb) { char name [128]; if (!(bb->region != -1 && (bb->flags & BB_EXCEPTION_HANDLER))) continue; if (bb->in_scount == 0) { LLVMValueRef val; sprintf (name, "finally_ind_bb%d", bb->block_num); val = LLVMBuildAlloca (builder, LLVMInt32Type (), name); LLVMBuildStore (builder, LLVMConstInt (LLVMInt32Type (), 0, FALSE), val); ctx->bblocks [bb->block_num].finally_ind = val; } else { /* Create a variable to hold the exception var */ if (!ctx->ex_var) ctx->ex_var = LLVMBuildAlloca (builder, ObjRefType (), "exvar"); } } ctx->builder = old_builder; } static gboolean needs_extra_arg (EmitContext *ctx, MonoMethod *method) { WrapperInfo *info = NULL; /* * When targeting wasm, the caller and callee signatures have to match exactly. 
This means * that every method which can be called indirectly needs an extra arg since the caller * will call it through an ftnptr and will pass an extra arg. */ if (!ctx->cfg->llvm_only || !ctx->emit_dummy_arg) return FALSE; if (method->wrapper_type) info = mono_marshal_get_wrapper_info (method); switch (method->wrapper_type) { case MONO_WRAPPER_OTHER: if (info->subtype == WRAPPER_SUBTYPE_GSHAREDVT_IN_SIG || info->subtype == WRAPPER_SUBTYPE_GSHAREDVT_OUT_SIG) /* Already have an explicit extra arg */ return FALSE; break; case MONO_WRAPPER_MANAGED_TO_NATIVE: if (strstr (method->name, "icall_wrapper")) /* These are JIT icall wrappers which are only called from JITted code directly */ return FALSE; /* Normal icalls can be virtual methods which need an extra arg */ break; case MONO_WRAPPER_RUNTIME_INVOKE: case MONO_WRAPPER_ALLOC: case MONO_WRAPPER_CASTCLASS: case MONO_WRAPPER_WRITE_BARRIER: case MONO_WRAPPER_NATIVE_TO_MANAGED: return FALSE; case MONO_WRAPPER_STELEMREF: if (info->subtype != WRAPPER_SUBTYPE_VIRTUAL_STELEMREF) return FALSE; break; case MONO_WRAPPER_MANAGED_TO_MANAGED: if (info->subtype == WRAPPER_SUBTYPE_STRING_CTOR) return FALSE; break; default: break; } if (method->string_ctor) return FALSE; /* These are called from gsharedvt code with an indirect call which doesn't pass an extra arg */ if (method->klass == mono_get_string_class () && (strstr (method->name, "memcpy") || strstr (method->name, "bzero"))) return FALSE; return TRUE; } static inline gboolean is_supported_callconv (EmitContext *ctx, MonoCallInst *call) { #if defined(TARGET_WIN32) && defined(TARGET_AMD64) gboolean result = (call->signature->call_convention == MONO_CALL_DEFAULT) || (call->signature->call_convention == MONO_CALL_C) || (call->signature->call_convention == MONO_CALL_STDCALL); #else gboolean result = (call->signature->call_convention == MONO_CALL_DEFAULT) || ((call->signature->call_convention == MONO_CALL_C) && ctx->llvm_only); #endif return result; } static void process_call (EmitContext *ctx, MonoBasicBlock *bb, LLVMBuilderRef *builder_ref, MonoInst *ins) { MonoCompile *cfg = ctx->cfg; LLVMValueRef *values = ctx->values; LLVMValueRef *addresses = ctx->addresses; MonoCallInst *call = (MonoCallInst*)ins; MonoMethodSignature *sig = call->signature; LLVMValueRef callee = NULL, lcall; LLVMValueRef *args; LLVMCallInfo *cinfo; GSList *l; int i, len, nargs; gboolean vretaddr; LLVMTypeRef llvm_sig; gpointer target; gboolean is_virtual, calli; LLVMBuilderRef builder = *builder_ref; /* If both imt and rgctx arg are required, only pass the imt arg, the rgctx trampoline will pass the rgctx */ if (call->imt_arg_reg) call->rgctx_arg_reg = 0; if (!is_supported_callconv (ctx, call)) { set_failure (ctx, "non-default callconv"); return; } cinfo = call->cinfo; g_assert (cinfo); if (call->rgctx_arg_reg) cinfo->rgctx_arg = TRUE; if (call->imt_arg_reg) cinfo->imt_arg = TRUE; if (!call->rgctx_arg_reg && call->method && needs_extra_arg (ctx, call->method)) cinfo->dummy_arg = TRUE; vretaddr = (cinfo->ret.storage == LLVMArgVtypeRetAddr || cinfo->ret.storage == LLVMArgVtypeByRef || cinfo->ret.storage == LLVMArgGsharedvtFixed || cinfo->ret.storage == LLVMArgGsharedvtVariable || cinfo->ret.storage == LLVMArgGsharedvtFixedVtype); llvm_sig = sig_to_llvm_sig_full (ctx, sig, cinfo); if (!ctx_ok (ctx)) return; int const opcode = ins->opcode; is_virtual = opcode == OP_VOIDCALL_MEMBASE || opcode == OP_CALL_MEMBASE || opcode == OP_VCALL_MEMBASE || opcode == OP_LCALL_MEMBASE || opcode == OP_FCALL_MEMBASE || opcode == OP_RCALL_MEMBASE || opcode 
== OP_TAILCALL_MEMBASE; calli = !call->fptr_is_patch && (opcode == OP_VOIDCALL_REG || opcode == OP_CALL_REG || opcode == OP_VCALL_REG || opcode == OP_LCALL_REG || opcode == OP_FCALL_REG || opcode == OP_RCALL_REG || opcode == OP_TAILCALL_REG); /* FIXME: Avoid creating duplicate methods */ if (ins->flags & MONO_INST_HAS_METHOD) { if (is_virtual) { callee = NULL; } else { if (cfg->compile_aot) { callee = get_callee (ctx, llvm_sig, MONO_PATCH_INFO_METHOD, call->method); if (!callee) { set_failure (ctx, "can't encode patch"); return; } } else if (cfg->method == call->method) { callee = ctx->lmethod; } else { ERROR_DECL (error); static int tramp_index; char *name; name = g_strdup_printf ("[tramp_%d] %s", tramp_index, mono_method_full_name (call->method, TRUE)); tramp_index ++; /* * Use our trampoline infrastructure for lazy compilation instead of llvm's. * Make all calls through a global. The address of the global will be saved in * MonoJitDomainInfo.llvm_jit_callees and updated when the method it refers to is * compiled. */ LLVMValueRef tramp_var = (LLVMValueRef)g_hash_table_lookup (ctx->jit_callees, call->method); if (!tramp_var) { target = mono_create_jit_trampoline (call->method, error); if (!is_ok (error)) { set_failure (ctx, mono_error_get_message (error)); mono_error_cleanup (error); return; } tramp_var = LLVMAddGlobal (ctx->lmodule, LLVMPointerType (llvm_sig, 0), name); LLVMSetInitializer (tramp_var, LLVMConstIntToPtr (LLVMConstInt (LLVMInt64Type (), (guint64)(size_t)target, FALSE), LLVMPointerType (llvm_sig, 0))); LLVMSetLinkage (tramp_var, LLVMExternalLinkage); g_hash_table_insert (ctx->jit_callees, call->method, tramp_var); } callee = LLVMBuildLoad (builder, tramp_var, ""); } } if (!cfg->llvm_only && call->method && strstr (m_class_get_name (call->method->klass), "AsyncVoidMethodBuilder")) { /* LLVM miscompiles async methods */ set_failure (ctx, "#13734"); return; } } else if (calli) { } else { const MonoJitICallId jit_icall_id = call->jit_icall_id; if (jit_icall_id) { if (cfg->compile_aot) { callee = get_callee (ctx, llvm_sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (jit_icall_id)); if (!callee) { set_failure (ctx, "can't encode patch"); return; } } else { callee = get_jit_callee (ctx, "", llvm_sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (jit_icall_id)); } } else { if (cfg->compile_aot) { callee = NULL; if (cfg->abs_patches) { MonoJumpInfo *abs_ji = (MonoJumpInfo*)g_hash_table_lookup (cfg->abs_patches, call->fptr); if (abs_ji) { callee = get_callee (ctx, llvm_sig, abs_ji->type, abs_ji->data.target); if (!callee) { set_failure (ctx, "can't encode patch"); return; } } } if (!callee) { set_failure (ctx, "aot"); return; } } else { if (cfg->abs_patches) { MonoJumpInfo *abs_ji = (MonoJumpInfo*)g_hash_table_lookup (cfg->abs_patches, call->fptr); if (abs_ji) { ERROR_DECL (error); target = mono_resolve_patch_target (cfg->method, NULL, abs_ji, FALSE, error); mono_error_assert_ok (error); callee = get_jit_callee (ctx, "", llvm_sig, abs_ji->type, abs_ji->data.target); } else { g_assert_not_reached (); } } else { g_assert_not_reached (); } } } } if (is_virtual) { int size = TARGET_SIZEOF_VOID_P; LLVMValueRef index; g_assert (ins->inst_offset % size == 0); index = LLVMConstInt (LLVMInt32Type (), ins->inst_offset / size, FALSE); callee = convert (ctx, LLVMBuildLoad (builder, LLVMBuildGEP (builder, convert (ctx, values [ins->inst_basereg], LLVMPointerType (LLVMPointerType (IntPtrType (), 0), 0)), &index, 1, ""), ""), LLVMPointerType (llvm_sig, 0)); } else if (calli) { callee = convert 
(ctx, values [ins->sreg1], LLVMPointerType (llvm_sig, 0)); } else { if (ins->flags & MONO_INST_HAS_METHOD) { } } /* * Collect and convert arguments */ nargs = (sig->param_count * 16) + sig->hasthis + vretaddr + call->rgctx_reg + call->imt_arg_reg + call->cinfo->dummy_arg + 1; len = sizeof (LLVMValueRef) * nargs; args = g_newa (LLVMValueRef, nargs); memset (args, 0, len); l = call->out_ireg_args; if (call->rgctx_arg_reg) { g_assert (values [call->rgctx_arg_reg]); g_assert (cinfo->rgctx_arg_pindex < nargs); /* * On ARM, the imt/rgctx argument is passed in a caller save register, but some of our trampolines etc. clobber it, leading to * problems if LLVM moves the arg assignment earlier. To work around this, save the argument into a stack slot and load * it using a volatile load. */ #ifdef TARGET_ARM if (!ctx->imt_rgctx_loc) ctx->imt_rgctx_loc = build_alloca_llvm_type (ctx, ctx->module->ptr_type, TARGET_SIZEOF_VOID_P); LLVMBuildStore (builder, convert (ctx, ctx->values [call->rgctx_arg_reg], ctx->module->ptr_type), ctx->imt_rgctx_loc); args [cinfo->rgctx_arg_pindex] = mono_llvm_build_load (builder, ctx->imt_rgctx_loc, "", TRUE); #else args [cinfo->rgctx_arg_pindex] = convert (ctx, values [call->rgctx_arg_reg], ctx->module->ptr_type); #endif } if (call->imt_arg_reg) { g_assert (!ctx->llvm_only); g_assert (values [call->imt_arg_reg]); g_assert (cinfo->imt_arg_pindex < nargs); #ifdef TARGET_ARM if (!ctx->imt_rgctx_loc) ctx->imt_rgctx_loc = build_alloca_llvm_type (ctx, ctx->module->ptr_type, TARGET_SIZEOF_VOID_P); LLVMBuildStore (builder, convert (ctx, ctx->values [call->imt_arg_reg], ctx->module->ptr_type), ctx->imt_rgctx_loc); args [cinfo->imt_arg_pindex] = mono_llvm_build_load (builder, ctx->imt_rgctx_loc, "", TRUE); #else args [cinfo->imt_arg_pindex] = convert (ctx, values [call->imt_arg_reg], ctx->module->ptr_type); #endif } switch (cinfo->ret.storage) { case LLVMArgGsharedvtVariable: { MonoInst *var = get_vreg_to_inst (cfg, call->inst.dreg); if (var && var->opcode == OP_GSHAREDVT_LOCAL) { args [cinfo->vret_arg_pindex] = convert (ctx, emit_gsharedvt_ldaddr (ctx, var->dreg), IntPtrType ()); } else { g_assert (addresses [call->inst.dreg]); args [cinfo->vret_arg_pindex] = convert (ctx, addresses [call->inst.dreg], IntPtrType ()); } break; } default: if (vretaddr) { if (!addresses [call->inst.dreg]) addresses [call->inst.dreg] = build_alloca (ctx, sig->ret); g_assert (cinfo->vret_arg_pindex < nargs); if (cinfo->ret.storage == LLVMArgVtypeByRef) args [cinfo->vret_arg_pindex] = addresses [call->inst.dreg]; else args [cinfo->vret_arg_pindex] = LLVMBuildPtrToInt (builder, addresses [call->inst.dreg], IntPtrType (), ""); } break; } /* * Sometimes the same method is called with two different signatures (i.e. with and without 'this'), so * use the real callee for argument type conversion. 
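* (LLVMGetParamTypes below queries the callee's declared parameter types; the 'this' argument is converted to that declared type rather than to the type derived from the managed signature.)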
*/ LLVMTypeRef callee_type = LLVMGetElementType (LLVMTypeOf (callee)); LLVMTypeRef *param_types = (LLVMTypeRef*)g_alloca (sizeof (LLVMTypeRef) * LLVMCountParamTypes (callee_type)); LLVMGetParamTypes (callee_type, param_types); for (i = 0; i < sig->param_count + sig->hasthis; ++i) { guint32 regpair; int reg, pindex; LLVMArgInfo *ainfo = &call->cinfo->args [i]; pindex = ainfo->pindex; regpair = (guint32)(gssize)(l->data); reg = regpair & 0xffffff; args [pindex] = values [reg]; switch (ainfo->storage) { case LLVMArgVtypeInReg: case LLVMArgAsFpArgs: { guint32 nargs; int j; for (j = 0; j < ainfo->ndummy_fpargs; ++j) args [pindex + j] = LLVMConstNull (LLVMDoubleType ()); pindex += ainfo->ndummy_fpargs; g_assert (addresses [reg]); emit_vtype_to_args (ctx, builder, ainfo->type, addresses [reg], ainfo, args + pindex, &nargs); pindex += nargs; // FIXME: alignment // FIXME: Get rid of the VMOVE break; } case LLVMArgVtypeByVal: g_assert (addresses [reg]); args [pindex] = addresses [reg]; break; case LLVMArgVtypeAddr : case LLVMArgVtypeByRef: { g_assert (addresses [reg]); args [pindex] = convert (ctx, addresses [reg], LLVMPointerType (type_to_llvm_arg_type (ctx, ainfo->type), 0)); break; } case LLVMArgAsIArgs: g_assert (addresses [reg]); if (ainfo->esize == 8) args [pindex] = LLVMBuildLoad (ctx->builder, convert (ctx, addresses [reg], LLVMPointerType (LLVMArrayType (LLVMInt64Type (), ainfo->nslots), 0)), ""); else args [pindex] = LLVMBuildLoad (ctx->builder, convert (ctx, addresses [reg], LLVMPointerType (LLVMArrayType (IntPtrType (), ainfo->nslots), 0)), ""); break; case LLVMArgVtypeAsScalar: g_assert_not_reached (); break; case LLVMArgWasmVtypeAsScalar: g_assert (addresses [reg]); args [pindex] = LLVMBuildLoad (ctx->builder, convert (ctx, addresses [reg], LLVMPointerType (LLVMIntType (ainfo->esize * 8), 0)), ""); break; case LLVMArgGsharedvtFixed: case LLVMArgGsharedvtFixedVtype: g_assert (addresses [reg]); args [pindex] = convert (ctx, addresses [reg], LLVMPointerType (type_to_llvm_arg_type (ctx, ainfo->type), 0)); break; case LLVMArgGsharedvtVariable: g_assert (addresses [reg]); args [pindex] = convert (ctx, addresses [reg], LLVMPointerType (IntPtrType (), 0)); break; default: g_assert (args [pindex]); if (i == 0 && sig->hasthis) args [pindex] = convert (ctx, args [pindex], param_types [pindex]); else args [pindex] = convert (ctx, args [pindex], type_to_llvm_arg_type (ctx, ainfo->type)); break; } g_assert (pindex <= nargs); l = l->next; } if (call->cinfo->dummy_arg) { g_assert (call->cinfo->dummy_arg_pindex < nargs); args [call->cinfo->dummy_arg_pindex] = LLVMConstNull (ctx->module->ptr_type); } // FIXME: Align call sites /* * Emit the call */ lcall = emit_call (ctx, bb, &builder, callee, args, LLVMCountParamTypes (llvm_sig)); mono_llvm_nonnull_state_update (ctx, lcall, call->method, args, LLVMCountParamTypes (llvm_sig)); // If we just allocated an object, it's not null. 
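// (MONO_WRAPPER_ALLOC wraps the GC allocation helpers, which either return a valid object or throw, so asserting a nonnull return value here should be safe.)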
if (call->method && call->method->wrapper_type == MONO_WRAPPER_ALLOC) { mono_llvm_set_call_nonnull_ret (lcall); } if (ins->opcode != OP_TAILCALL && ins->opcode != OP_TAILCALL_MEMBASE && LLVMGetInstructionOpcode (lcall) == LLVMCall) mono_llvm_set_call_notailcall (lcall); // Add original method name we are currently emitting as a custom string metadata (the only way to leave comments in LLVM IR) if (mono_debug_enabled () && call && call->method) mono_llvm_add_string_metadata (lcall, "managed_name", mono_method_full_name (call->method, TRUE)); // As per the LLVM docs, a function has a noalias return value if and only if // it is an allocation function. This is an allocation function. if (call->method && call->method->wrapper_type == MONO_WRAPPER_ALLOC) { mono_llvm_set_call_noalias_ret (lcall); // All objects are expected to be 8-byte aligned (SGEN_ALLOC_ALIGN) mono_llvm_set_alignment_ret (lcall, 8); } /* * Modify cconv and parameter attributes to pass rgctx/imt correctly. */ #if defined(MONO_ARCH_IMT_REG) && defined(MONO_ARCH_RGCTX_REG) g_assert (MONO_ARCH_IMT_REG == MONO_ARCH_RGCTX_REG); #endif /* The two can't be used together, so use only one LLVM calling conv to pass them */ g_assert (!(call->rgctx_arg_reg && call->imt_arg_reg)); if (!sig->pinvoke && !cfg->llvm_only) LLVMSetInstructionCallConv (lcall, LLVMMono1CallConv); if (cinfo->ret.storage == LLVMArgVtypeByRef) mono_llvm_add_instr_attr (lcall, 1 + cinfo->vret_arg_pindex, LLVM_ATTR_STRUCT_RET); if (!ctx->llvm_only && call->rgctx_arg_reg) mono_llvm_add_instr_attr (lcall, 1 + cinfo->rgctx_arg_pindex, LLVM_ATTR_IN_REG); if (call->imt_arg_reg) mono_llvm_add_instr_attr (lcall, 1 + cinfo->imt_arg_pindex, LLVM_ATTR_IN_REG); /* Add byval attributes if needed */ for (i = 0; i < sig->param_count; ++i) { LLVMArgInfo *ainfo = &call->cinfo->args [i + sig->hasthis]; if (ainfo && ainfo->storage == LLVMArgVtypeByVal) mono_llvm_add_instr_attr (lcall, 1 + ainfo->pindex, LLVM_ATTR_BY_VAL); #ifdef TARGET_WASM if (ainfo && ainfo->storage == LLVMArgVtypeByRef) /* This causes llvm to make a copy of the value which is what we need */ mono_llvm_add_instr_byval_attr (lcall, 1 + ainfo->pindex, LLVMGetElementType (param_types [ainfo->pindex])); #endif } gboolean is_simd = MONO_CLASS_IS_SIMD (ctx->cfg, mono_class_from_mono_type_internal (sig->ret)); gboolean should_promote_to_value = FALSE; const char *load_name = NULL; /* * Convert the result. Non-SIMD value types are manipulated via an * indirection. SIMD value types are represented directly as LLVM vector * values, and must have a corresponding LLVM value definition in * `values`. */ switch (cinfo->ret.storage) { case LLVMArgAsIArgs: case LLVMArgFpStruct: if (!addresses [call->inst.dreg]) addresses [call->inst.dreg] = build_alloca (ctx, sig->ret); LLVMBuildStore (builder, lcall, convert_full (ctx, addresses [call->inst.dreg], LLVMPointerType (LLVMTypeOf (lcall), 0), FALSE)); break; case LLVMArgVtypeByVal: /* * Only used by amd64 and x86. Only ever used when passing * arguments; never used for return values. 
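* Hitting this storage kind while converting a call result is therefore a bug, hence the assert below.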
*/ g_assert_not_reached (); break; case LLVMArgVtypeInReg: { if (LLVMTypeOf (lcall) == LLVMVoidType ()) /* Empty struct */ break; if (!addresses [ins->dreg]) addresses [ins->dreg] = build_alloca (ctx, sig->ret); LLVMValueRef regs [2] = { 0 }; regs [0] = LLVMBuildExtractValue (builder, lcall, 0, ""); if (cinfo->ret.pair_storage [1] != LLVMArgNone) regs [1] = LLVMBuildExtractValue (builder, lcall, 1, ""); emit_args_to_vtype (ctx, builder, sig->ret, addresses [ins->dreg], &cinfo->ret, regs); load_name = "process_call_vtype_in_reg"; should_promote_to_value = is_simd; break; } case LLVMArgVtypeAsScalar: if (!addresses [call->inst.dreg]) addresses [call->inst.dreg] = build_alloca (ctx, sig->ret); LLVMBuildStore (builder, lcall, convert_full (ctx, addresses [call->inst.dreg], LLVMPointerType (LLVMTypeOf (lcall), 0), FALSE)); load_name = "process_call_vtype_as_scalar"; should_promote_to_value = is_simd; break; case LLVMArgVtypeRetAddr: case LLVMArgVtypeByRef: load_name = "process_call_vtype_ret_addr"; should_promote_to_value = is_simd; break; case LLVMArgGsharedvtVariable: break; case LLVMArgGsharedvtFixed: case LLVMArgGsharedvtFixedVtype: values [ins->dreg] = LLVMBuildLoad (builder, convert_full (ctx, addresses [call->inst.dreg], LLVMPointerType (type_to_llvm_type (ctx, sig->ret), 0), FALSE), ""); break; case LLVMArgWasmVtypeAsScalar: if (!addresses [call->inst.dreg]) addresses [call->inst.dreg] = build_alloca (ctx, sig->ret); LLVMBuildStore (builder, lcall, convert_full (ctx, addresses [call->inst.dreg], LLVMPointerType (LLVMTypeOf (lcall), 0), FALSE)); break; default: if (sig->ret->type != MONO_TYPE_VOID) /* If the method returns an unsigned value, need to zext it */ values [ins->dreg] = convert_full (ctx, lcall, llvm_type_to_stack_type (cfg, type_to_llvm_type (ctx, sig->ret)), type_is_unsigned (ctx, sig->ret)); break; } if (should_promote_to_value) { g_assert (addresses [call->inst.dreg]); LLVMTypeRef addr_type = LLVMPointerType (type_to_llvm_type (ctx, sig->ret), 0); LLVMValueRef addr = convert_full (ctx, addresses [call->inst.dreg], addr_type, FALSE); values [ins->dreg] = LLVMBuildLoad (builder, addr, load_name); } *builder_ref = ctx->builder; } static void emit_llvmonly_throw (EmitContext *ctx, MonoBasicBlock *bb, gboolean rethrow, LLVMValueRef exc) { MonoJitICallId icall_id = rethrow ? MONO_JIT_ICALL_mini_llvmonly_rethrow_exception : MONO_JIT_ICALL_mini_llvmonly_throw_exception; LLVMValueRef callee = rethrow ? ctx->module->rethrow : ctx->module->throw_icall; LLVMTypeRef exc_type = type_to_llvm_type (ctx, m_class_get_byval_arg (mono_get_exception_class ())); if (!callee) { LLVMTypeRef fun_sig = LLVMFunctionType1 (LLVMVoidType (), exc_type, FALSE); g_assert (ctx->cfg->compile_aot); callee = get_callee (ctx, fun_sig, MONO_PATCH_INFO_JIT_ICALL_ADDR, GUINT_TO_POINTER (icall_id)); } LLVMValueRef args [2]; args [0] = convert (ctx, exc, exc_type); emit_call (ctx, bb, &ctx->builder, callee, args, 1); LLVMBuildUnreachable (ctx->builder); ctx->builder = create_builder (ctx); } static void emit_throw (EmitContext *ctx, MonoBasicBlock *bb, gboolean rethrow, LLVMValueRef exc) { MonoMethodSignature *throw_sig; LLVMValueRef * const pcallee = rethrow ? &ctx->module->rethrow : &ctx->module->throw_icall; LLVMValueRef callee = *pcallee; char const * const icall_name = rethrow ? "mono_arch_rethrow_exception" : "mono_arch_throw_exception"; #ifndef TARGET_X86 const #endif MonoJitICallId icall_id = rethrow ? 
MONO_JIT_ICALL_mono_arch_rethrow_exception : MONO_JIT_ICALL_mono_arch_throw_exception; if (!callee) { throw_sig = mono_metadata_signature_alloc (mono_get_corlib (), 1); throw_sig->ret = m_class_get_byval_arg (mono_get_void_class ()); throw_sig->params [0] = m_class_get_byval_arg (mono_get_object_class ()); if (ctx->cfg->compile_aot) { callee = get_callee (ctx, sig_to_llvm_sig (ctx, throw_sig), MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (icall_id)); } else { #ifdef TARGET_X86 /* * LLVM doesn't push the exception argument, so we need a different * trampoline. */ icall_id = rethrow ? MONO_JIT_ICALL_mono_llvm_rethrow_exception_trampoline : MONO_JIT_ICALL_mono_llvm_throw_exception_trampoline; #endif callee = get_jit_callee (ctx, icall_name, sig_to_llvm_sig (ctx, throw_sig), MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (icall_id)); } mono_memory_barrier (); } LLVMValueRef arg; arg = convert (ctx, exc, type_to_llvm_type (ctx, m_class_get_byval_arg (mono_get_object_class ()))); emit_call (ctx, bb, &ctx->builder, callee, &arg, 1); } static void emit_resume_eh (EmitContext *ctx, MonoBasicBlock *bb) { const MonoJitICallId icall_id = MONO_JIT_ICALL_mini_llvmonly_resume_exception; LLVMValueRef callee; LLVMTypeRef fun_sig = LLVMFunctionType0 (LLVMVoidType (), FALSE); g_assert (ctx->cfg->compile_aot); callee = get_callee (ctx, fun_sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (icall_id)); emit_call (ctx, bb, &ctx->builder, callee, NULL, 0); LLVMBuildUnreachable (ctx->builder); ctx->builder = create_builder (ctx); } static LLVMValueRef mono_llvm_emit_clear_exception_call (EmitContext *ctx, LLVMBuilderRef builder) { const MonoJitICallId icall_id = MONO_JIT_ICALL_mini_llvmonly_clear_exception; LLVMTypeRef call_sig = LLVMFunctionType (LLVMVoidType (), NULL, 0, FALSE); LLVMValueRef callee = NULL; if (!callee) { callee = get_callee (ctx, call_sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (icall_id)); } g_assert (builder && callee); return LLVMBuildCall (builder, callee, NULL, 0, ""); } static LLVMValueRef mono_llvm_emit_load_exception_call (EmitContext *ctx, LLVMBuilderRef builder) { const MonoJitICallId icall_id = MONO_JIT_ICALL_mini_llvmonly_load_exception; LLVMTypeRef call_sig = LLVMFunctionType (ObjRefType (), NULL, 0, FALSE); LLVMValueRef callee = NULL; g_assert (ctx->cfg->compile_aot); if (!callee) { callee = get_callee (ctx, call_sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (icall_id)); } g_assert (builder && callee); return LLVMBuildCall (builder, callee, NULL, 0, "load_exception"); } static LLVMValueRef mono_llvm_emit_match_exception_call (EmitContext *ctx, LLVMBuilderRef builder, gint32 region_start, gint32 region_end) { const char *icall_name = "mini_llvmonly_match_exception"; const MonoJitICallId icall_id = MONO_JIT_ICALL_mini_llvmonly_match_exception; ctx->builder = builder; LLVMValueRef args[5]; const int num_args = G_N_ELEMENTS (args); args [0] = convert (ctx, get_aotconst (ctx, MONO_PATCH_INFO_AOT_JIT_INFO, GINT_TO_POINTER (ctx->cfg->method_index), LLVMPointerType (IntPtrType (), 0)), IntPtrType ()); args [1] = LLVMConstInt (LLVMInt32Type (), region_start, 0); args [2] = LLVMConstInt (LLVMInt32Type (), region_end, 0); if (ctx->cfg->rgctx_var) { if (ctx->cfg->llvm_only) { args [3] = convert (ctx, ctx->rgctx_arg, IntPtrType ()); } else { LLVMValueRef rgctx_alloc = ctx->addresses [ctx->cfg->rgctx_var->dreg]; g_assert (rgctx_alloc); args [3] = LLVMBuildLoad (builder, convert (ctx, rgctx_alloc, LLVMPointerType (IntPtrType (), 0)), ""); } } else { args [3] = LLVMConstInt 
(IntPtrType (), 0, 0); } if (ctx->this_arg) args [4] = convert (ctx, ctx->this_arg, IntPtrType ()); else args [4] = LLVMConstInt (IntPtrType (), 0, 0); LLVMTypeRef match_sig = LLVMFunctionType5 (LLVMInt32Type (), IntPtrType (), LLVMInt32Type (), LLVMInt32Type (), IntPtrType (), IntPtrType (), FALSE); LLVMValueRef callee; g_assert (ctx->cfg->compile_aot); ctx->builder = builder; // get_callee expects ctx->builder to be the emitting builder callee = get_callee (ctx, match_sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (icall_id)); g_assert (builder && callee); g_assert (ctx->ex_var); return LLVMBuildCall (builder, callee, args, num_args, icall_name); } // FIXME: This won't work because the code-finding makes this // not a constant. /*#define MONO_PERSONALITY_DEBUG*/ #ifdef MONO_PERSONALITY_DEBUG static const gboolean use_mono_personality_debug = TRUE; static const char *default_personality_name = "mono_debug_personality"; #else static const gboolean use_mono_personality_debug = FALSE; static const char *default_personality_name = "__gxx_personality_v0"; #endif static LLVMTypeRef default_cpp_lpad_exc_signature (void) { static LLVMTypeRef sig; if (!sig) { LLVMTypeRef signature [2]; signature [0] = LLVMPointerType (LLVMInt8Type (), 0); signature [1] = LLVMInt32Type (); sig = LLVMStructType (signature, 2, FALSE); } return sig; } static LLVMValueRef get_mono_personality (EmitContext *ctx) { LLVMValueRef personality = NULL; LLVMTypeRef personality_type = LLVMFunctionType (LLVMInt32Type (), NULL, 0, TRUE); g_assert (ctx->cfg->compile_aot); if (!use_mono_personality_debug) { personality = LLVMGetNamedFunction (ctx->lmodule, default_personality_name); } else { personality = get_callee (ctx, personality_type, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (MONO_JIT_ICALL_mono_debug_personality)); } g_assert (personality); return personality; } static LLVMBasicBlockRef emit_landing_pad (EmitContext *ctx, int group_index, int group_size) { MonoCompile *cfg = ctx->cfg; LLVMBuilderRef old_builder = ctx->builder; MonoExceptionClause *group_start = cfg->header->clauses + group_index; LLVMBuilderRef lpadBuilder = create_builder (ctx); ctx->builder = lpadBuilder; MonoBasicBlock *handler_bb = cfg->cil_offset_to_bb [CLAUSE_START (group_start)]; g_assert (handler_bb); // <resultval> = landingpad <somety> personality <type> <pers_fn> <clause>+ LLVMValueRef personality = get_mono_personality (ctx); g_assert (personality); char *bb_name = g_strdup_printf ("LPAD%d_BB", group_index); LLVMBasicBlockRef lpad_bb = gen_bb (ctx, bb_name); g_free (bb_name); LLVMPositionBuilderAtEnd (lpadBuilder, lpad_bb); LLVMValueRef landing_pad = LLVMBuildLandingPad (lpadBuilder, default_cpp_lpad_exc_signature (), personality, 0, ""); g_assert (landing_pad); LLVMValueRef cast = LLVMBuildBitCast (lpadBuilder, ctx->module->sentinel_exception, LLVMPointerType (LLVMInt8Type (), 0), "int8TypeInfo"); LLVMAddClause (landing_pad, cast); if (ctx->cfg->deopt) { /* * Call mini_llvmonly_resume_exception_il_state (lmf, il_state) * * The call will execute the catch clause and the rest of the method and store the return * value into ctx->il_state_ret. 
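* Once it returns, the code below reloads ctx->il_state_ret and materializes the return value for each supported storage kind.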
*/ if (!ctx->has_catch) { /* Unused */ LLVMBuildUnreachable (lpadBuilder); return lpad_bb; } const MonoJitICallId icall_id = MONO_JIT_ICALL_mini_llvmonly_resume_exception_il_state; LLVMValueRef callee; LLVMValueRef args [2]; LLVMTypeRef fun_sig = LLVMFunctionType2 (LLVMVoidType (), IntPtrType (), IntPtrType (), FALSE); callee = get_callee (ctx, fun_sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (icall_id)); g_assert (ctx->cfg->lmf_var); g_assert (ctx->addresses [ctx->cfg->lmf_var->dreg]); args [0] = LLVMBuildPtrToInt (ctx->builder, ctx->addresses [ctx->cfg->lmf_var->dreg], IntPtrType (), ""); args [1] = LLVMBuildPtrToInt (ctx->builder, ctx->il_state, IntPtrType (), ""); emit_call (ctx, NULL, &ctx->builder, callee, args, 2); /* Return the value set in ctx->il_state_ret */ LLVMTypeRef ret_type = LLVMGetReturnType (LLVMGetElementType (LLVMTypeOf (ctx->lmethod))); LLVMBuilderRef builder = ctx->builder; LLVMValueRef addr, retval, gep, indexes [2]; switch (ctx->linfo->ret.storage) { case LLVMArgNone: LLVMBuildRetVoid (builder); break; case LLVMArgNormal: case LLVMArgWasmVtypeAsScalar: case LLVMArgVtypeInReg: { if (ctx->sig->ret->type == MONO_TYPE_VOID) { LLVMBuildRetVoid (builder); break; } addr = ctx->il_state_ret; g_assert (addr); addr = convert (ctx, ctx->il_state_ret, LLVMPointerType (ret_type, 0)); indexes [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE); indexes [1] = LLVMConstInt (LLVMInt32Type (), 0, FALSE); gep = LLVMBuildGEP (builder, addr, indexes, 1, ""); LLVMBuildRet (builder, LLVMBuildLoad (builder, gep, "")); break; } case LLVMArgVtypeRetAddr: { LLVMValueRef ret_addr; g_assert (cfg->vret_addr); ret_addr = ctx->values [cfg->vret_addr->dreg]; addr = ctx->il_state_ret; g_assert (addr); /* The ret value is in il_state_ret, copy it to the memory pointed to by the vret arg */ ret_type = type_to_llvm_type (ctx, ctx->sig->ret); indexes [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE); indexes [1] = LLVMConstInt (LLVMInt32Type (), 0, FALSE); gep = LLVMBuildGEP (builder, addr, indexes, 1, ""); retval = convert (ctx, LLVMBuildLoad (builder, gep, ""), ret_type); LLVMBuildStore (builder, retval, convert (ctx, ret_addr, LLVMPointerType (ret_type, 0))); LLVMBuildRetVoid (builder); break; } default: g_assert_not_reached (); break; } return lpad_bb; } LLVMBasicBlockRef resume_bb = gen_bb (ctx, "RESUME_BB"); LLVMBuilderRef resume_builder = create_builder (ctx); ctx->builder = resume_builder; LLVMPositionBuilderAtEnd (resume_builder, resume_bb); emit_resume_eh (ctx, handler_bb); // Build match ctx->builder = lpadBuilder; LLVMPositionBuilderAtEnd (lpadBuilder, lpad_bb); gboolean finally_only = TRUE; MonoExceptionClause *group_cursor = group_start; for (int i = 0; i < group_size; i ++) { if (!(group_cursor->flags & MONO_EXCEPTION_CLAUSE_FINALLY || group_cursor->flags & MONO_EXCEPTION_CLAUSE_FAULT)) finally_only = FALSE; group_cursor++; } // FIXME: // Handle landing pad inlining if (!finally_only) { // So at each level of the exception stack we will match the exception again. // During that match, we need to compare against the handler types for the current // protected region. We send the try start and end so that we can only check against // handlers for this lexical protected region. 
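// A rough sketch of the dispatch emitted below (bblock names are illustrative):
//
//   switch (mini_llvmonly_match_exception (ji, try_start, try_end, rgctx, this)) {
//   case <clause_index>: goto CALL_HANDLER_TARGET_BB_<clause_index>; /* one case per clause in the group */
//   default: goto RESUME_BB; /* no matching handler at this level, keep unwinding */
//   }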
LLVMValueRef match = mono_llvm_emit_match_exception_call (ctx, lpadBuilder, group_start->try_offset, group_start->try_offset + group_start->try_len); // if returns -1, resume LLVMValueRef switch_ins = LLVMBuildSwitch (lpadBuilder, match, resume_bb, group_size); // else move to that target bb for (int i = 0; i < group_size; i++) { MonoExceptionClause *clause = group_start + i; int clause_index = clause - cfg->header->clauses; MonoBasicBlock *handler_bb = (MonoBasicBlock*)g_hash_table_lookup (ctx->clause_to_handler, GINT_TO_POINTER (clause_index)); g_assert (handler_bb); g_assert (ctx->bblocks [handler_bb->block_num].call_handler_target_bb); LLVMAddCase (switch_ins, LLVMConstInt (LLVMInt32Type (), clause_index, FALSE), ctx->bblocks [handler_bb->block_num].call_handler_target_bb); } } else { int clause_index = group_start - cfg->header->clauses; MonoBasicBlock *finally_bb = (MonoBasicBlock*)g_hash_table_lookup (ctx->clause_to_handler, GINT_TO_POINTER (clause_index)); g_assert (finally_bb); LLVMBuildBr (ctx->builder, ctx->bblocks [finally_bb->block_num].call_handler_target_bb); } ctx->builder = old_builder; return lpad_bb; } static LLVMValueRef create_const_vector (LLVMTypeRef t, const int *vals, int count) { g_assert (count <= MAX_VECTOR_ELEMS); LLVMValueRef llvm_vals [MAX_VECTOR_ELEMS]; for (int i = 0; i < count; i++) llvm_vals [i] = LLVMConstInt (t, vals [i], FALSE); return LLVMConstVector (llvm_vals, count); } static LLVMValueRef create_const_vector_i32 (const int *mask, int count) { return create_const_vector (LLVMInt32Type (), mask, count); } static LLVMValueRef create_const_vector_4_i32 (int v0, int v1, int v2, int v3) { LLVMValueRef mask [4]; mask [0] = LLVMConstInt (LLVMInt32Type (), v0, FALSE); mask [1] = LLVMConstInt (LLVMInt32Type (), v1, FALSE); mask [2] = LLVMConstInt (LLVMInt32Type (), v2, FALSE); mask [3] = LLVMConstInt (LLVMInt32Type (), v3, FALSE); return LLVMConstVector (mask, 4); } static LLVMValueRef create_const_vector_2_i32 (int v0, int v1) { LLVMValueRef mask [2]; mask [0] = LLVMConstInt (LLVMInt32Type (), v0, FALSE); mask [1] = LLVMConstInt (LLVMInt32Type (), v1, FALSE); return LLVMConstVector (mask, 2); } static LLVMValueRef broadcast_element (EmitContext *ctx, LLVMValueRef elem, int count) { LLVMTypeRef t = LLVMTypeOf (elem); LLVMTypeRef init_vec_t = LLVMVectorType (t, 1); LLVMValueRef undef = LLVMGetUndef (init_vec_t); LLVMValueRef vec = LLVMBuildInsertElement (ctx->builder, undef, elem, const_int32 (0), ""); LLVMValueRef select_zero = LLVMConstNull (LLVMVectorType (LLVMInt32Type (), count)); return LLVMBuildShuffleVector (ctx->builder, vec, undef, select_zero, "broadcast"); } static LLVMValueRef broadcast_constant (int const_val, LLVMTypeRef elem_t, int count) { int vals [MAX_VECTOR_ELEMS]; for (int i = 0; i < count; ++i) vals [i] = const_val; return create_const_vector (elem_t, vals, count); } static LLVMValueRef create_shift_vector (EmitContext *ctx, LLVMValueRef type_donor, LLVMValueRef shiftamt) { LLVMTypeRef t = LLVMTypeOf (type_donor); unsigned int elems = LLVMGetVectorSize (t); LLVMTypeRef elem_t = LLVMGetElementType (t); shiftamt = convert_full (ctx, shiftamt, elem_t, TRUE); shiftamt = broadcast_element (ctx, shiftamt, elems); return shiftamt; } static LLVMTypeRef to_integral_vector_type (LLVMTypeRef t) { unsigned int elems = LLVMGetVectorSize (t); LLVMTypeRef elem_t = LLVMGetElementType (t); unsigned int bits = mono_llvm_get_prim_size_bits (elem_t); return LLVMVectorType (LLVMIntType (bits), elems); } static LLVMValueRef bitcast_to_integral (EmitContext 
*ctx, LLVMValueRef vec) { LLVMTypeRef src_t = LLVMTypeOf (vec); LLVMTypeRef dst_t = to_integral_vector_type (src_t); if (dst_t != src_t) return LLVMBuildBitCast (ctx->builder, vec, dst_t, "bc2i"); return vec; } static LLVMValueRef extract_high_elements (EmitContext *ctx, LLVMValueRef src_vec) { LLVMTypeRef src_t = LLVMTypeOf (src_vec); unsigned int src_elems = LLVMGetVectorSize (src_t); unsigned int dst_elems = src_elems / 2; int mask [MAX_VECTOR_ELEMS] = { 0 }; for (int i = 0; i < dst_elems; ++i) mask [i] = dst_elems + i; return LLVMBuildShuffleVector (ctx->builder, src_vec, LLVMGetUndef (src_t), create_const_vector_i32 (mask, dst_elems), "extract_high"); } static LLVMValueRef keep_lowest_element (EmitContext *ctx, LLVMTypeRef dst_t, LLVMValueRef vec) { LLVMTypeRef t = LLVMTypeOf (vec); g_assert (LLVMGetElementType (dst_t) == LLVMGetElementType (t)); unsigned int elems = LLVMGetVectorSize (dst_t); unsigned int src_elems = LLVMGetVectorSize (t); int mask [MAX_VECTOR_ELEMS] = { 0 }; mask [0] = 0; for (unsigned int i = 1; i < elems; ++i) mask [i] = src_elems; return LLVMBuildShuffleVector (ctx->builder, vec, LLVMConstNull (t), create_const_vector_i32 (mask, elems), "keep_lowest"); } static LLVMValueRef concatenate_vectors (EmitContext *ctx, LLVMValueRef xs, LLVMValueRef ys) { LLVMTypeRef t = LLVMTypeOf (xs); unsigned int elems = LLVMGetVectorSize (t) * 2; int mask [MAX_VECTOR_ELEMS] = { 0 }; for (int i = 0; i < elems; ++i) mask [i] = i; return LLVMBuildShuffleVector (ctx->builder, xs, ys, create_const_vector_i32 (mask, elems), "concat_vecs"); } static LLVMValueRef scalar_from_vector (EmitContext *ctx, LLVMValueRef xs) { return LLVMBuildExtractElement (ctx->builder, xs, const_int32 (0), "v2s"); } static LLVMValueRef vector_from_scalar (EmitContext *ctx, LLVMTypeRef type, LLVMValueRef x) { return LLVMBuildInsertElement (ctx->builder, LLVMConstNull (type), x, const_int32 (0), "s2v"); } typedef struct { EmitContext *ctx; MonoBasicBlock *bb; LLVMBasicBlockRef continuation; LLVMValueRef phi; LLVMValueRef switch_ins; LLVMBasicBlockRef tmp_block; LLVMBasicBlockRef default_case; LLVMTypeRef switch_index_type; const char *name; int max_cases; int i; } ImmediateUnrollCtx; static ImmediateUnrollCtx immediate_unroll_begin ( EmitContext *ctx, MonoBasicBlock *bb, int max_cases, LLVMValueRef switch_index, LLVMTypeRef return_type, const char *name) { LLVMBasicBlockRef default_case = gen_bb (ctx, name); LLVMBasicBlockRef continuation = gen_bb (ctx, name); LLVMValueRef switch_ins = LLVMBuildSwitch (ctx->builder, switch_index, default_case, max_cases); LLVMPositionBuilderAtEnd (ctx->builder, continuation); LLVMValueRef phi = LLVMBuildPhi (ctx->builder, return_type, name); ImmediateUnrollCtx ictx = { 0 }; ictx.ctx = ctx; ictx.bb = bb; ictx.continuation = continuation; ictx.phi = phi; ictx.switch_ins = switch_ins; ictx.default_case = default_case; ictx.switch_index_type = LLVMTypeOf (switch_index); ictx.name = name; ictx.max_cases = max_cases; return ictx; } static gboolean immediate_unroll_next (ImmediateUnrollCtx *ictx, int *i) { if (ictx->i >= ictx->max_cases) return FALSE; ictx->tmp_block = gen_bb (ictx->ctx, ictx->name); LLVMPositionBuilderAtEnd (ictx->ctx->builder, ictx->tmp_block); *i = ictx->i; ++ictx->i; return TRUE; } static void immediate_unroll_commit (ImmediateUnrollCtx *ictx, int switch_const, LLVMValueRef value) { LLVMBuildBr (ictx->ctx->builder, ictx->continuation); LLVMAddCase (ictx->switch_ins, LLVMConstInt (ictx->switch_index_type, switch_const, FALSE), ictx->tmp_block); LLVMAddIncoming 
(ictx->phi, &value, &ictx->tmp_block, 1); } static void immediate_unroll_default (ImmediateUnrollCtx *ictx) { LLVMPositionBuilderAtEnd (ictx->ctx->builder, ictx->default_case); } static void immediate_unroll_commit_default (ImmediateUnrollCtx *ictx, LLVMValueRef value) { LLVMBuildBr (ictx->ctx->builder, ictx->continuation); LLVMAddIncoming (ictx->phi, &value, &ictx->default_case, 1); } static void immediate_unroll_unreachable_default (ImmediateUnrollCtx *ictx) { immediate_unroll_default (ictx); LLVMBuildUnreachable (ictx->ctx->builder); } static LLVMValueRef immediate_unroll_end (ImmediateUnrollCtx *ictx, LLVMBasicBlockRef *continuation) { EmitContext *ctx = ictx->ctx; LLVMBuilderRef builder = ctx->builder; LLVMPositionBuilderAtEnd (builder, ictx->continuation); *continuation = ictx->continuation; ctx->bblocks [ictx->bb->block_num].end_bblock = ictx->continuation; return ictx->phi; } typedef struct { EmitContext *ctx; LLVMTypeRef intermediate_type; LLVMTypeRef return_type; gboolean needs_fake_scalar_op; llvm_ovr_tag_t ovr_tag; } ScalarOpFromVectorOpCtx; static inline gboolean check_needs_fake_scalar_op (MonoTypeEnum type) { #if defined(TARGET_ARM64) switch (type) { case MONO_TYPE_U1: case MONO_TYPE_I1: case MONO_TYPE_U2: case MONO_TYPE_I2: return TRUE; } #endif return FALSE; } static ScalarOpFromVectorOpCtx scalar_op_from_vector_op (EmitContext *ctx, LLVMTypeRef return_type, MonoInst *ins) { ScalarOpFromVectorOpCtx ret = { 0 }; ret.ctx = ctx; ret.intermediate_type = return_type; ret.return_type = return_type; ret.needs_fake_scalar_op = check_needs_fake_scalar_op (inst_c1_type (ins)); ret.ovr_tag = ovr_tag_from_llvm_type (return_type); if (!ret.needs_fake_scalar_op) { ret.ovr_tag = ovr_tag_force_scalar (ret.ovr_tag); ret.intermediate_type = ovr_tag_to_llvm_type (ret.ovr_tag); } return ret; } static void scalar_op_from_vector_op_process_args (ScalarOpFromVectorOpCtx *sctx, LLVMValueRef *args, int num_args) { if (!sctx->needs_fake_scalar_op) for (int i = 0; i < num_args; ++i) args [i] = scalar_from_vector (sctx->ctx, args [i]); } static LLVMValueRef scalar_op_from_vector_op_process_result (ScalarOpFromVectorOpCtx *sctx, LLVMValueRef result) { if (sctx->needs_fake_scalar_op) return keep_lowest_element (sctx->ctx, LLVMTypeOf (result), result); return vector_from_scalar (sctx->ctx, sctx->return_type, result); } static void emit_llvmonly_handler_start (EmitContext *ctx, MonoBasicBlock *bb, LLVMBasicBlockRef cbb) { int clause_index = MONO_REGION_CLAUSE_INDEX (bb->region); MonoExceptionClause *clause = &ctx->cfg->header->clauses [clause_index]; // Make exception available to catch blocks if (!(clause->flags & MONO_EXCEPTION_CLAUSE_FINALLY || clause->flags & MONO_EXCEPTION_CLAUSE_FAULT)) { LLVMValueRef mono_exc = mono_llvm_emit_load_exception_call (ctx, ctx->builder); g_assert (ctx->ex_var); LLVMBuildStore (ctx->builder, LLVMBuildBitCast (ctx->builder, mono_exc, ObjRefType (), ""), ctx->ex_var); if (bb->in_scount == 1) { MonoInst *exvar = bb->in_stack [0]; g_assert (!ctx->values [exvar->dreg]); g_assert (ctx->ex_var); ctx->values [exvar->dreg] = LLVMBuildLoad (ctx->builder, ctx->ex_var, "save_exception"); emit_volatile_store (ctx, exvar->dreg); } mono_llvm_emit_clear_exception_call (ctx, ctx->builder); } #ifdef TARGET_WASM if (ctx->cfg->lmf_var && !ctx->cfg->deopt) { LLVMValueRef callee; LLVMValueRef args [1]; LLVMTypeRef sig = LLVMFunctionType1 (LLVMVoidType (), ctx->module->ptr_type, FALSE); /* * There might be an LMF on the stack inserted to enable stack walking, see * method_needs_stack_walk (). 
If an exception is thrown, the LMF popping code * is not executed, so do it here. */ g_assert (ctx->addresses [ctx->cfg->lmf_var->dreg]); callee = get_callee (ctx, sig, MONO_PATCH_INFO_JIT_ICALL_ADDR, GUINT_TO_POINTER (MONO_JIT_ICALL_mini_llvmonly_pop_lmf)); args [0] = convert (ctx, ctx->addresses [ctx->cfg->lmf_var->dreg], ctx->module->ptr_type); emit_call (ctx, bb, &ctx->builder, callee, args, 1); } #endif LLVMBuilderRef handler_builder = create_builder (ctx); LLVMBasicBlockRef target_bb = ctx->bblocks [bb->block_num].call_handler_target_bb; LLVMPositionBuilderAtEnd (handler_builder, target_bb); // Make the handler code end with a jump to cbb LLVMBuildBr (handler_builder, cbb); } static void emit_handler_start (EmitContext *ctx, MonoBasicBlock *bb, LLVMBuilderRef builder) { MonoCompile *cfg = ctx->cfg; LLVMValueRef *values = ctx->values; LLVMModuleRef lmodule = ctx->lmodule; BBInfo *bblocks = ctx->bblocks; LLVMTypeRef i8ptr; LLVMValueRef personality; LLVMValueRef landing_pad; LLVMBasicBlockRef target_bb; MonoInst *exvar; static int ti_generator; char ti_name [128]; LLVMValueRef type_info; int clause_index; GSList *l; // <resultval> = landingpad <somety> personality <type> <pers_fn> <clause>+ if (cfg->compile_aot) { /* Use a dummy personality function */ personality = LLVMGetNamedFunction (lmodule, "mono_personality"); g_assert (personality); } else { /* Can't cache this as each method is in its own llvm module */ LLVMTypeRef personality_type = LLVMFunctionType (LLVMInt32Type (), NULL, 0, TRUE); personality = LLVMAddFunction (ctx->lmodule, "mono_personality", personality_type); mono_llvm_add_func_attr (personality, LLVM_ATTR_NO_UNWIND); LLVMBasicBlockRef entry_bb = LLVMAppendBasicBlock (personality, "ENTRY"); LLVMBuilderRef builder2 = LLVMCreateBuilder (); LLVMPositionBuilderAtEnd (builder2, entry_bb); LLVMBuildRet (builder2, LLVMConstInt (LLVMInt32Type (), 0, FALSE)); LLVMDisposeBuilder (builder2); } i8ptr = LLVMPointerType (LLVMInt8Type (), 0); clause_index = (mono_get_block_region_notry (cfg, bb->region) >> 8) - 1; /* * Create the type info */ sprintf (ti_name, "type_info_%d", ti_generator); ti_generator ++; if (cfg->compile_aot) { /* decode_eh_frame () in aot-runtime.c will decode this */ type_info = LLVMAddGlobal (lmodule, LLVMInt32Type (), ti_name); LLVMSetInitializer (type_info, LLVMConstInt (LLVMInt32Type (), clause_index, FALSE)); /* * These symbols are not really used; the clause_index is embedded into the EH tables generated by DwarfMonoException in LLVM. */ LLVMSetLinkage (type_info, LLVMInternalLinkage); } else { type_info = LLVMAddGlobal (lmodule, LLVMInt32Type (), ti_name); LLVMSetInitializer (type_info, LLVMConstInt (LLVMInt32Type (), clause_index, FALSE)); } { LLVMTypeRef members [2], ret_type; members [0] = i8ptr; members [1] = LLVMInt32Type (); ret_type = LLVMStructType (members, 2, FALSE); landing_pad = LLVMBuildLandingPad (builder, ret_type, personality, 1, ""); LLVMAddClause (landing_pad, type_info); /* Store the exception into the exvar */ if (ctx->ex_var) LLVMBuildStore (builder, convert (ctx, LLVMBuildExtractValue (builder, landing_pad, 0, "ex_obj"), ObjRefType ()), ctx->ex_var); } /* * LLVM throw sites are associated with one landing pad, and LLVM generated * code expects control to be transferred to this landing pad even in the * presence of nested clauses. 
The landing pad needs to branch to the landing * pads belonging to nested clauses based on the selector value returned by * the landing pad instruction, which is passed to the landing pad in a * register by the EH code. */ target_bb = bblocks [bb->block_num].call_handler_target_bb; g_assert (target_bb); /* * Branch to the correct landing pad */ LLVMValueRef ex_selector = LLVMBuildExtractValue (builder, landing_pad, 1, "ex_selector"); LLVMValueRef switch_ins = LLVMBuildSwitch (builder, ex_selector, target_bb, 0); for (l = ctx->nested_in [clause_index]; l; l = l->next) { int nesting_clause_index = GPOINTER_TO_INT (l->data); MonoBasicBlock *handler_bb; handler_bb = (MonoBasicBlock*)g_hash_table_lookup (ctx->clause_to_handler, GINT_TO_POINTER (nesting_clause_index)); g_assert (handler_bb); g_assert (ctx->bblocks [handler_bb->block_num].call_handler_target_bb); LLVMAddCase (switch_ins, LLVMConstInt (LLVMInt32Type (), nesting_clause_index, FALSE), ctx->bblocks [handler_bb->block_num].call_handler_target_bb); } /* Start a new bblock which CALL_HANDLER can branch to */ ctx->builder = builder = create_builder (ctx); LLVMPositionBuilderAtEnd (ctx->builder, target_bb); ctx->bblocks [bb->block_num].end_bblock = target_bb; /* Store the exception into the IL level exvar */ if (bb->in_scount == 1) { g_assert (bb->in_scount == 1); exvar = bb->in_stack [0]; // FIXME: This is shared with filter clauses ? g_assert (!values [exvar->dreg]); g_assert (ctx->ex_var); values [exvar->dreg] = LLVMBuildLoad (builder, ctx->ex_var, ""); emit_volatile_store (ctx, exvar->dreg); } /* Make normal branches to the start of the clause branch to the new bblock */ bblocks [bb->block_num].bblock = target_bb; } static LLVMValueRef get_double_const (MonoCompile *cfg, double val) { //#ifdef TARGET_WASM #if 0 //Wasm requires us to canonicalize NaNs. 
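// (0x7FF8000000000000 below is the canonical quiet-NaN bit pattern for an IEEE 754 double; get_float_const uses the single-precision analogue 0x7FC00000.)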
if (mono_isnan (val)) *(gint64 *)&val = 0x7FF8000000000000ll; #endif return LLVMConstReal (LLVMDoubleType (), val); } static LLVMValueRef get_float_const (MonoCompile *cfg, float val) { //#ifdef TARGET_WASM #if 0 if (mono_isnan (val)) *(int *)&val = 0x7FC00000; #endif if (cfg->r4fp) return LLVMConstReal (LLVMFloatType (), val); else return LLVMConstFPExt (LLVMConstReal (LLVMFloatType (), val), LLVMDoubleType ()); } static LLVMValueRef call_overloaded_intrins (EmitContext *ctx, int id, llvm_ovr_tag_t ovr_tag, LLVMValueRef *args, const char *name) { int key = key_from_id_and_tag (id, ovr_tag); LLVMValueRef intrins = get_intrins (ctx, key); int nargs = LLVMCountParamTypes (LLVMGetElementType (LLVMTypeOf (intrins))); for (int i = 0; i < nargs; ++i) { LLVMTypeRef t1 = LLVMTypeOf (args [i]); LLVMTypeRef t2 = LLVMTypeOf (LLVMGetParam (intrins, i)); if (t1 != t2) args [i] = convert (ctx, args [i], t2); } return LLVMBuildCall (ctx->builder, intrins, args, nargs, name); } static LLVMValueRef call_intrins (EmitContext *ctx, int id, LLVMValueRef *args, const char *name) { return call_overloaded_intrins (ctx, id, 0, args, name); } static void process_bb (EmitContext *ctx, MonoBasicBlock *bb) { MonoCompile *cfg = ctx->cfg; MonoMethodSignature *sig = ctx->sig; LLVMValueRef method = ctx->lmethod; LLVMValueRef *values = ctx->values; LLVMValueRef *addresses = ctx->addresses; LLVMCallInfo *linfo = ctx->linfo; BBInfo *bblocks = ctx->bblocks; MonoInst *ins; LLVMBasicBlockRef cbb; LLVMBuilderRef builder; gboolean has_terminator; LLVMValueRef v; LLVMValueRef lhs, rhs, arg3; int nins = 0; cbb = get_end_bb (ctx, bb); builder = create_builder (ctx); ctx->builder = builder; LLVMPositionBuilderAtEnd (builder, cbb); if (!ctx_ok (ctx)) return; if (cfg->interp_entry_only && bb != cfg->bb_init && bb != cfg->bb_entry && bb != cfg->bb_exit) { /* The interp entry code is in bb_entry, skip the rest as we might not be able to compile it */ LLVMBuildUnreachable (builder); return; } if (bb->flags & BB_EXCEPTION_HANDLER) { if (!ctx->llvm_only && !bblocks [bb->block_num].invoke_target) { set_failure (ctx, "handler without invokes"); return; } if (ctx->llvm_only) emit_llvmonly_handler_start (ctx, bb, cbb); else emit_handler_start (ctx, bb, builder); if (!ctx_ok (ctx)) return; builder = ctx->builder; } /* Handle PHI nodes first */ /* They should be grouped at the start of the bb */ for (ins = bb->code; ins; ins = ins->next) { emit_dbg_loc (ctx, builder, ins->cil_code); if (ins->opcode == OP_NOP) continue; if (!MONO_IS_PHI (ins)) break; if (cfg->interp_entry_only) break; int i; gboolean empty = TRUE; /* Check that all input bblocks really branch to us */ for (i = 0; i < bb->in_count; ++i) { if (bb->in_bb [i]->last_ins && bb->in_bb [i]->last_ins->opcode == OP_NOT_REACHED) ins->inst_phi_args [i + 1] = -1; else empty = FALSE; } if (empty) { /* LLVM doesn't like phi instructions with zero operands */ ctx->is_dead [ins->dreg] = TRUE; continue; } /* Created earlier, insert it now */ LLVMInsertIntoBuilder (builder, values [ins->dreg]); for (i = 0; i < ins->inst_phi_args [0]; i++) { int sreg1 = ins->inst_phi_args [i + 1]; int count, j; /* * Count the number of times the incoming bblock branches to us, * since llvm requires a separate entry for each. 
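* (An OP_SWITCH terminator can list the same successor bblock several times in inst_many_bb, so the loop below counts occurrences instead of assuming a single edge.)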
*/ if (bb->in_bb [i]->last_ins && bb->in_bb [i]->last_ins->opcode == OP_SWITCH) { MonoInst *switch_ins = bb->in_bb [i]->last_ins; count = 0; for (j = 0; j < GPOINTER_TO_UINT (switch_ins->klass); ++j) { if (switch_ins->inst_many_bb [j] == bb) count ++; } } else { count = 1; } /* Remember for later */ for (j = 0; j < count; ++j) { PhiNode *node = (PhiNode*)mono_mempool_alloc0 (ctx->mempool, sizeof (PhiNode)); node->bb = bb; node->phi = ins; node->in_bb = bb->in_bb [i]; node->sreg = sreg1; bblocks [bb->in_bb [i]->block_num].phi_nodes = g_slist_prepend_mempool (ctx->mempool, bblocks [bb->in_bb [i]->block_num].phi_nodes, node); } } } // Add volatile stores for PHI nodes // These need to be emitted after the PHI nodes for (ins = bb->code; ins; ins = ins->next) { const char *spec = LLVM_INS_INFO (ins->opcode); if (ins->opcode == OP_NOP) continue; if (!MONO_IS_PHI (ins)) break; if (spec [MONO_INST_DEST] != 'v') emit_volatile_store (ctx, ins->dreg); } has_terminator = FALSE; for (ins = bb->code; ins; ins = ins->next) { const char *spec = LLVM_INS_INFO (ins->opcode); char *dname = NULL; char dname_buf [128]; emit_dbg_loc (ctx, builder, ins->cil_code); nins ++; if (nins > 1000) { /* * Some steps in llc are non-linear in the size of basic blocks, see #5714. * Start a new bblock. * Prevent the bblocks from being merged by doing a volatile load + cond branch * from localloc-ed memory. */ if (!cfg->llvm_only) ;//set_failure (ctx, "basic block too long"); if (!ctx->long_bb_break_var) { ctx->long_bb_break_var = build_alloca_llvm_type_name (ctx, LLVMInt32Type (), 0, "long_bb_break"); mono_llvm_build_store (ctx->alloca_builder, LLVMConstInt (LLVMInt32Type (), 0, FALSE), ctx->long_bb_break_var, TRUE, LLVM_BARRIER_NONE); } cbb = gen_bb (ctx, "CONT_LONG_BB"); LLVMBasicBlockRef dummy_bb = gen_bb (ctx, "CONT_LONG_BB_DUMMY"); LLVMValueRef load = mono_llvm_build_load (builder, ctx->long_bb_break_var, "", TRUE); /* * The long_bb_break_var is initialized to 0 in the prolog, so this branch will always go to 'cbb' * but llvm doesn't know that, so the branch is not going to be eliminated. 
*/ LLVMValueRef cmp = LLVMBuildICmp (builder, LLVMIntEQ, load, LLVMConstInt (LLVMInt32Type (), 0, FALSE), ""); LLVMBuildCondBr (builder, cmp, cbb, dummy_bb); /* Emit a dummy false bblock which does nothing but contains a volatile store so it cannot be eliminated */ ctx->builder = builder = create_builder (ctx); LLVMPositionBuilderAtEnd (builder, dummy_bb); mono_llvm_build_store (builder, LLVMConstInt (LLVMInt32Type (), 1, FALSE), ctx->long_bb_break_var, TRUE, LLVM_BARRIER_NONE); LLVMBuildBr (builder, cbb); ctx->builder = builder = create_builder (ctx); LLVMPositionBuilderAtEnd (builder, cbb); ctx->bblocks [bb->block_num].end_bblock = cbb; nins = 0; emit_dbg_loc (ctx, builder, ins->cil_code); } if (has_terminator) /* There could be instructions after a terminator, skip them */ break; if (spec [MONO_INST_DEST] != ' ' && !MONO_IS_STORE_MEMBASE (ins)) { sprintf (dname_buf, "t%d", ins->dreg); dname = dname_buf; } if (spec [MONO_INST_SRC1] != ' ' && spec [MONO_INST_SRC1] != 'v') { MonoInst *var = get_vreg_to_inst (cfg, ins->sreg1); if (var && var->flags & (MONO_INST_VOLATILE|MONO_INST_INDIRECT) && var->opcode != OP_GSHAREDVT_ARG_REGOFFSET) { lhs = emit_volatile_load (ctx, ins->sreg1); } else { /* It is ok for SETRET to have an uninitialized argument */ if (!values [ins->sreg1] && ins->opcode != OP_SETRET) { set_failure (ctx, "sreg1"); return; } lhs = values [ins->sreg1]; } } else { lhs = NULL; } if (spec [MONO_INST_SRC2] != ' ' && spec [MONO_INST_SRC2] != 'v') { MonoInst *var = get_vreg_to_inst (cfg, ins->sreg2); if (var && var->flags & (MONO_INST_VOLATILE|MONO_INST_INDIRECT)) { rhs = emit_volatile_load (ctx, ins->sreg2); } else { if (!values [ins->sreg2]) { set_failure (ctx, "sreg2"); return; } rhs = values [ins->sreg2]; } } else { rhs = NULL; } if (spec [MONO_INST_SRC3] != ' ' && spec [MONO_INST_SRC3] != 'v') { MonoInst *var = get_vreg_to_inst (cfg, ins->sreg3); if (var && var->flags & (MONO_INST_VOLATILE|MONO_INST_INDIRECT)) { arg3 = emit_volatile_load (ctx, ins->sreg3); } else { if (!values [ins->sreg3]) { set_failure (ctx, "sreg3"); return; } arg3 = values [ins->sreg3]; } } else { arg3 = NULL; } //mono_print_ins (ins); gboolean skip_volatile_store = FALSE; switch (ins->opcode) { case OP_NOP: case OP_NOT_NULL: case OP_LIVERANGE_START: case OP_LIVERANGE_END: break; case OP_ICONST: values [ins->dreg] = LLVMConstInt (LLVMInt32Type (), ins->inst_c0, FALSE); break; case OP_I8CONST: #if TARGET_SIZEOF_VOID_P == 4 values [ins->dreg] = LLVMConstInt (LLVMInt64Type (), GET_LONG_IMM (ins), FALSE); #else values [ins->dreg] = LLVMConstInt (LLVMInt64Type (), (gint64)ins->inst_c0, FALSE); #endif break; case OP_R8CONST: values [ins->dreg] = get_double_const (cfg, *(double*)ins->inst_p0); break; case OP_R4CONST: values [ins->dreg] = get_float_const (cfg, *(float*)ins->inst_p0); break; case OP_DUMMY_ICONST: values [ins->dreg] = LLVMConstInt (LLVMInt32Type (), 0, FALSE); break; case OP_DUMMY_I8CONST: values [ins->dreg] = LLVMConstInt (LLVMInt64Type (), 0, FALSE); break; case OP_DUMMY_R8CONST: values [ins->dreg] = LLVMConstReal (LLVMDoubleType (), 0.0f); break; case OP_BR: { LLVMBasicBlockRef target_bb = get_bb (ctx, ins->inst_target_bb); LLVMBuildBr (builder, target_bb); has_terminator = TRUE; break; } case OP_SWITCH: { int i; LLVMValueRef v; char bb_name [128]; LLVMBasicBlockRef new_bb; LLVMBuilderRef new_builder; // The default branch is already handled // FIXME: Handle it here /* Start new bblock */ sprintf (bb_name, "SWITCH_DEFAULT_BB%d", ctx->default_index ++); new_bb = LLVMAppendBasicBlock (ctx->lmethod, 
bb_name); lhs = convert (ctx, lhs, LLVMInt32Type ()); v = LLVMBuildSwitch (builder, lhs, new_bb, GPOINTER_TO_UINT (ins->klass)); for (i = 0; i < GPOINTER_TO_UINT (ins->klass); ++i) { MonoBasicBlock *target_bb = ins->inst_many_bb [i]; LLVMAddCase (v, LLVMConstInt (LLVMInt32Type (), i, FALSE), get_bb (ctx, target_bb)); } new_builder = create_builder (ctx); LLVMPositionBuilderAtEnd (new_builder, new_bb); LLVMBuildUnreachable (new_builder); has_terminator = TRUE; g_assert (!ins->next); break; } case OP_SETRET: switch (linfo->ret.storage) { case LLVMArgNormal: case LLVMArgVtypeInReg: case LLVMArgVtypeAsScalar: case LLVMArgWasmVtypeAsScalar: { LLVMTypeRef ret_type = LLVMGetReturnType (LLVMGetElementType (LLVMTypeOf (method))); LLVMValueRef retval = LLVMGetUndef (ret_type); gboolean src_in_reg = FALSE; gboolean is_simd = MONO_CLASS_IS_SIMD (ctx->cfg, mono_class_from_mono_type_internal (sig->ret)); switch (linfo->ret.storage) { case LLVMArgNormal: src_in_reg = TRUE; break; case LLVMArgVtypeInReg: case LLVMArgVtypeAsScalar: src_in_reg = is_simd; break; } if (src_in_reg && (!lhs || ctx->is_dead [ins->sreg1])) { /* * The method did not set its return value, probably because it * ends with a throw. */ LLVMBuildRet (builder, retval); break; } switch (linfo->ret.storage) { case LLVMArgNormal: retval = convert (ctx, lhs, type_to_llvm_type (ctx, sig->ret)); break; case LLVMArgVtypeInReg: if (is_simd) { /* The return type is an LLVM aggregate type, so a bare bitcast cannot be used to do this conversion. */ int width = mono_type_size (sig->ret, NULL); int elems = width / TARGET_SIZEOF_VOID_P; /* The return value might not be set if there is a throw */ LLVMValueRef val = LLVMBuildBitCast (builder, lhs, LLVMVectorType (IntPtrType (), elems), ""); for (int i = 0; i < elems; ++i) { LLVMValueRef element = LLVMBuildExtractElement (builder, val, const_int32 (i), ""); retval = LLVMBuildInsertValue (builder, retval, element, i, "setret_simd_vtype_in_reg"); } } else { LLVMValueRef addr = LLVMBuildBitCast (builder, addresses [ins->sreg1], LLVMPointerType (ret_type, 0), ""); for (int i = 0; i < 2; ++i) { if (linfo->ret.pair_storage [i] == LLVMArgInIReg) { LLVMValueRef indexes [2], part_addr; indexes [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE); indexes [1] = LLVMConstInt (LLVMInt32Type (), i, FALSE); part_addr = LLVMBuildGEP (builder, addr, indexes, 2, ""); retval = LLVMBuildInsertValue (builder, retval, LLVMBuildLoad (builder, part_addr, ""), i, ""); } else { g_assert (linfo->ret.pair_storage [i] == LLVMArgNone); } } } break; case LLVMArgVtypeAsScalar: if (is_simd) { retval = LLVMBuildBitCast (builder, values [ins->sreg1], ret_type, "setret_simd_vtype_as_scalar"); } else { g_assert (addresses [ins->sreg1]); retval = LLVMBuildLoad (builder, LLVMBuildBitCast (builder, addresses [ins->sreg1], LLVMPointerType (ret_type, 0), ""), ""); } break; case LLVMArgWasmVtypeAsScalar: g_assert (addresses [ins->sreg1]); retval = LLVMBuildLoad (builder, LLVMBuildBitCast (builder, addresses [ins->sreg1], LLVMPointerType (ret_type, 0), ""), ""); break; } LLVMBuildRet (builder, retval); break; } case LLVMArgVtypeByRef: { LLVMBuildRetVoid (builder); break; } case LLVMArgGsharedvtFixed: { LLVMTypeRef ret_type = type_to_llvm_type (ctx, sig->ret); /* The return value is in lhs, need to store to the vret argument */ /* sreg1 might not be set */ if (lhs) { g_assert (cfg->vret_addr); g_assert (values [cfg->vret_addr->dreg]); LLVMBuildStore (builder, convert (ctx, lhs, ret_type), convert (ctx, values [cfg->vret_addr->dreg], LLVMPointerType 
(ret_type, 0))); } LLVMBuildRetVoid (builder); break; } case LLVMArgGsharedvtFixedVtype: { /* Already set */ LLVMBuildRetVoid (builder); break; } case LLVMArgGsharedvtVariable: { /* Already set */ LLVMBuildRetVoid (builder); break; } case LLVMArgVtypeRetAddr: { LLVMBuildRetVoid (builder); break; } case LLVMArgAsIArgs: case LLVMArgFpStruct: { LLVMTypeRef ret_type = LLVMGetReturnType (LLVMGetElementType (LLVMTypeOf (method))); LLVMValueRef retval; g_assert (addresses [ins->sreg1]); retval = LLVMBuildLoad (builder, convert (ctx, addresses [ins->sreg1], LLVMPointerType (ret_type, 0)), ""); LLVMBuildRet (builder, retval); break; } case LLVMArgNone: LLVMBuildRetVoid (builder); break; default: g_assert_not_reached (); break; } has_terminator = TRUE; break; case OP_ICOMPARE: case OP_FCOMPARE: case OP_RCOMPARE: case OP_LCOMPARE: case OP_COMPARE: case OP_ICOMPARE_IMM: case OP_LCOMPARE_IMM: case OP_COMPARE_IMM: { CompRelation rel; LLVMValueRef cmp, args [16]; gboolean likely = (ins->flags & MONO_INST_LIKELY) != 0; gboolean unlikely = FALSE; if (MONO_IS_COND_BRANCH_OP (ins->next)) { if (ins->next->inst_false_bb->out_of_line) likely = TRUE; else if (ins->next->inst_true_bb->out_of_line) unlikely = TRUE; } if (ins->next->opcode == OP_NOP) break; if (ins->next->opcode == OP_BR) /* The comparison result is not needed */ continue; rel = mono_opcode_to_cond (ins->next->opcode); if (ins->opcode == OP_ICOMPARE_IMM) { lhs = convert (ctx, lhs, LLVMInt32Type ()); rhs = LLVMConstInt (LLVMInt32Type (), ins->inst_imm, FALSE); } if (ins->opcode == OP_LCOMPARE_IMM) { lhs = convert (ctx, lhs, LLVMInt64Type ()); rhs = LLVMConstInt (LLVMInt64Type (), GET_LONG_IMM (ins), FALSE); } if (ins->opcode == OP_LCOMPARE) { lhs = convert (ctx, lhs, LLVMInt64Type ()); rhs = convert (ctx, rhs, LLVMInt64Type ()); } if (ins->opcode == OP_ICOMPARE) { lhs = convert (ctx, lhs, LLVMInt32Type ()); rhs = convert (ctx, rhs, LLVMInt32Type ()); } if (lhs && rhs) { if (LLVMGetTypeKind (LLVMTypeOf (lhs)) == LLVMPointerTypeKind) rhs = convert (ctx, rhs, LLVMTypeOf (lhs)); else if (LLVMGetTypeKind (LLVMTypeOf (rhs)) == LLVMPointerTypeKind) lhs = convert (ctx, lhs, LLVMTypeOf (rhs)); } /* We use COMPARE+SETcc/Bcc, llvm uses SETcc+br cond */ if (ins->opcode == OP_FCOMPARE) { cmp = LLVMBuildFCmp (builder, fpcond_to_llvm_cond [rel], convert (ctx, lhs, LLVMDoubleType ()), convert (ctx, rhs, LLVMDoubleType ()), ""); } else if (ins->opcode == OP_RCOMPARE) { cmp = LLVMBuildFCmp (builder, fpcond_to_llvm_cond [rel], convert (ctx, lhs, LLVMFloatType ()), convert (ctx, rhs, LLVMFloatType ()), ""); } else if (ins->opcode == OP_COMPARE_IMM) { LLVMIntPredicate llvm_pred = cond_to_llvm_cond [rel]; if (LLVMGetTypeKind (LLVMTypeOf (lhs)) == LLVMPointerTypeKind && ins->inst_imm == 0) { // We are emitting a NULL check for a pointer gboolean nonnull = mono_llvm_is_nonnull (lhs); if (nonnull && llvm_pred == LLVMIntEQ) cmp = LLVMConstInt (LLVMInt1Type (), FALSE, FALSE); else if (nonnull && llvm_pred == LLVMIntNE) cmp = LLVMConstInt (LLVMInt1Type (), TRUE, FALSE); else cmp = LLVMBuildICmp (builder, llvm_pred, lhs, LLVMConstNull (LLVMTypeOf (lhs)), ""); } else { cmp = LLVMBuildICmp (builder, llvm_pred, convert (ctx, lhs, IntPtrType ()), LLVMConstInt (IntPtrType (), ins->inst_imm, FALSE), ""); } } else if (ins->opcode == OP_LCOMPARE_IMM) { cmp = LLVMBuildICmp (builder, cond_to_llvm_cond [rel], lhs, rhs, ""); } else if (ins->opcode == OP_COMPARE) { if (LLVMGetTypeKind (LLVMTypeOf (lhs)) == LLVMPointerTypeKind && LLVMTypeOf (lhs) == LLVMTypeOf (rhs)) cmp = LLVMBuildICmp 
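/*
 * Mono IR uses a two-instruction COMPARE + Bcc/SETcc idiom, while LLVM
 * fuses this into a single icmp feeding a br/zext. Illustrative sketch for
 * a COMPARE followed by a signed less-than branch:
 *   %c = icmp slt i32 %lhs, %rhs
 *   br i1 %c, label %true_bb, label %false_bb
 */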
(builder, cond_to_llvm_cond [rel], lhs, rhs, ""); else cmp = LLVMBuildICmp (builder, cond_to_llvm_cond [rel], convert (ctx, lhs, IntPtrType ()), convert (ctx, rhs, IntPtrType ()), ""); } else cmp = LLVMBuildICmp (builder, cond_to_llvm_cond [rel], lhs, rhs, ""); if (likely || unlikely) { args [0] = cmp; args [1] = LLVMConstInt (LLVMInt1Type (), likely ? 1 : 0, FALSE); cmp = call_intrins (ctx, INTRINS_EXPECT_I1, args, ""); } if (MONO_IS_COND_BRANCH_OP (ins->next)) { if (ins->next->inst_true_bb == ins->next->inst_false_bb) { /* * If the target bb contains PHI instructions, LLVM requires * two PHI entries for this bblock, while we only generate one. * So convert this to an unconditional bblock. (bxc #171). */ LLVMBuildBr (builder, get_bb (ctx, ins->next->inst_true_bb)); } else { LLVMBuildCondBr (builder, cmp, get_bb (ctx, ins->next->inst_true_bb), get_bb (ctx, ins->next->inst_false_bb)); } has_terminator = TRUE; } else if (MONO_IS_SETCC (ins->next)) { sprintf (dname_buf, "t%d", ins->next->dreg); dname = dname_buf; values [ins->next->dreg] = LLVMBuildZExt (builder, cmp, LLVMInt32Type (), dname); /* Add stores for volatile variables */ emit_volatile_store (ctx, ins->next->dreg); } else if (MONO_IS_COND_EXC (ins->next)) { gboolean force_explicit_branch = FALSE; if (bb->region != -1) { /* Don't tag null check branches in exception-handling * regions with `make.implicit`. */ force_explicit_branch = TRUE; } emit_cond_system_exception (ctx, bb, (const char*)ins->next->inst_p1, cmp, force_explicit_branch); if (!ctx_ok (ctx)) break; builder = ctx->builder; } else { set_failure (ctx, "next"); break; } ins = ins->next; break; } case OP_FCEQ: case OP_FCNEQ: case OP_FCLT: case OP_FCLT_UN: case OP_FCGT: case OP_FCGT_UN: case OP_FCGE: case OP_FCLE: { CompRelation rel; LLVMValueRef cmp; rel = mono_opcode_to_cond (ins->opcode); cmp = LLVMBuildFCmp (builder, fpcond_to_llvm_cond [rel], convert (ctx, lhs, LLVMDoubleType ()), convert (ctx, rhs, LLVMDoubleType ()), ""); values [ins->dreg] = LLVMBuildZExt (builder, cmp, LLVMInt32Type (), dname); break; } case OP_RCEQ: case OP_RCNEQ: case OP_RCLT: case OP_RCLT_UN: case OP_RCGT: case OP_RCGT_UN: { CompRelation rel; LLVMValueRef cmp; rel = mono_opcode_to_cond (ins->opcode); cmp = LLVMBuildFCmp (builder, fpcond_to_llvm_cond [rel], convert (ctx, lhs, LLVMFloatType ()), convert (ctx, rhs, LLVMFloatType ()), ""); values [ins->dreg] = LLVMBuildZExt (builder, cmp, LLVMInt32Type (), dname); break; } case OP_PHI: case OP_FPHI: case OP_VPHI: case OP_XPHI: { // Handled above skip_volatile_store = TRUE; break; } case OP_MOVE: case OP_LMOVE: case OP_XMOVE: case OP_SETFRET: g_assert (lhs); values [ins->dreg] = lhs; break; case OP_FMOVE: case OP_RMOVE: { MonoInst *var = get_vreg_to_inst (cfg, ins->dreg); g_assert (lhs); values [ins->dreg] = lhs; if (var && m_class_get_byval_arg (var->klass)->type == MONO_TYPE_R4) { /* * This is added by the spilling pass in case of the JIT, * but we have to do it ourselves. 
*/ values [ins->dreg] = convert (ctx, values [ins->dreg], LLVMFloatType ()); } break; } case OP_MOVE_F_TO_I4: { values [ins->dreg] = LLVMBuildBitCast (builder, LLVMBuildFPTrunc (builder, lhs, LLVMFloatType (), ""), LLVMInt32Type (), ""); break; } case OP_MOVE_I4_TO_F: { values [ins->dreg] = LLVMBuildFPExt (builder, LLVMBuildBitCast (builder, lhs, LLVMFloatType (), ""), LLVMDoubleType (), ""); break; } case OP_MOVE_F_TO_I8: { values [ins->dreg] = LLVMBuildBitCast (builder, lhs, LLVMInt64Type (), ""); break; } case OP_MOVE_I8_TO_F: { values [ins->dreg] = LLVMBuildBitCast (builder, lhs, LLVMDoubleType (), ""); break; } case OP_IADD: case OP_ISUB: case OP_IAND: case OP_IMUL: case OP_IDIV: case OP_IDIV_UN: case OP_IREM: case OP_IREM_UN: case OP_IOR: case OP_IXOR: case OP_ISHL: case OP_ISHR: case OP_ISHR_UN: case OP_FADD: case OP_FSUB: case OP_FMUL: case OP_FDIV: case OP_LADD: case OP_LSUB: case OP_LMUL: case OP_LDIV: case OP_LDIV_UN: case OP_LREM: case OP_LREM_UN: case OP_LAND: case OP_LOR: case OP_LXOR: case OP_LSHL: case OP_LSHR: case OP_LSHR_UN: lhs = convert (ctx, lhs, regtype_to_llvm_type (spec [MONO_INST_DEST])); rhs = convert (ctx, rhs, regtype_to_llvm_type (spec [MONO_INST_DEST])); emit_div_check (ctx, builder, bb, ins, lhs, rhs); if (!ctx_ok (ctx)) break; builder = ctx->builder; switch (ins->opcode) { case OP_IADD: case OP_LADD: values [ins->dreg] = LLVMBuildAdd (builder, lhs, rhs, dname); break; case OP_ISUB: case OP_LSUB: values [ins->dreg] = LLVMBuildSub (builder, lhs, rhs, dname); break; case OP_IMUL: case OP_LMUL: values [ins->dreg] = LLVMBuildMul (builder, lhs, rhs, dname); break; case OP_IREM: case OP_LREM: values [ins->dreg] = LLVMBuildSRem (builder, lhs, rhs, dname); break; case OP_IREM_UN: case OP_LREM_UN: values [ins->dreg] = LLVMBuildURem (builder, lhs, rhs, dname); break; case OP_IDIV: case OP_LDIV: values [ins->dreg] = LLVMBuildSDiv (builder, lhs, rhs, dname); break; case OP_IDIV_UN: case OP_LDIV_UN: values [ins->dreg] = LLVMBuildUDiv (builder, lhs, rhs, dname); break; case OP_FDIV: case OP_RDIV: values [ins->dreg] = LLVMBuildFDiv (builder, lhs, rhs, dname); break; case OP_IAND: case OP_LAND: values [ins->dreg] = LLVMBuildAnd (builder, lhs, rhs, dname); break; case OP_IOR: case OP_LOR: values [ins->dreg] = LLVMBuildOr (builder, lhs, rhs, dname); break; case OP_IXOR: case OP_LXOR: values [ins->dreg] = LLVMBuildXor (builder, lhs, rhs, dname); break; case OP_ISHL: case OP_LSHL: values [ins->dreg] = LLVMBuildShl (builder, lhs, rhs, dname); break; case OP_ISHR: case OP_LSHR: values [ins->dreg] = LLVMBuildAShr (builder, lhs, rhs, dname); break; case OP_ISHR_UN: case OP_LSHR_UN: values [ins->dreg] = LLVMBuildLShr (builder, lhs, rhs, dname); break; case OP_FADD: values [ins->dreg] = LLVMBuildFAdd (builder, lhs, rhs, dname); break; case OP_FSUB: values [ins->dreg] = LLVMBuildFSub (builder, lhs, rhs, dname); break; case OP_FMUL: values [ins->dreg] = LLVMBuildFMul (builder, lhs, rhs, dname); break; default: g_assert_not_reached (); } break; case OP_RADD: case OP_RSUB: case OP_RMUL: case OP_RDIV: { lhs = convert (ctx, lhs, LLVMFloatType ()); rhs = convert (ctx, rhs, LLVMFloatType ()); switch (ins->opcode) { case OP_RADD: values [ins->dreg] = LLVMBuildFAdd (builder, lhs, rhs, dname); break; case OP_RSUB: values [ins->dreg] = LLVMBuildFSub (builder, lhs, rhs, dname); break; case OP_RMUL: values [ins->dreg] = LLVMBuildFMul (builder, lhs, rhs, dname); break; case OP_RDIV: values [ins->dreg] = LLVMBuildFDiv (builder, lhs, rhs, dname); break; default: g_assert_not_reached (); break; } 
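/*
 * Note on the integer div/rem cases above: LLVM's sdiv/srem are undefined
 * for x/0 and INT_MIN/-1, so emit_div_check () inserts explicit guards
 * first. Sketch of the guarded shape (assuming the usual managed
 * semantics; the helper raises the corresponding runtime exceptions):
 *   if (divisor == 0) throw DivideByZeroException;
 *   if (dividend == INT_MIN && divisor == -1) throw OverflowException;
 */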
break; } case OP_IADD_IMM: case OP_ISUB_IMM: case OP_IMUL_IMM: case OP_IREM_IMM: case OP_IREM_UN_IMM: case OP_IDIV_IMM: case OP_IDIV_UN_IMM: case OP_IAND_IMM: case OP_IOR_IMM: case OP_IXOR_IMM: case OP_ISHL_IMM: case OP_ISHR_IMM: case OP_ISHR_UN_IMM: case OP_LADD_IMM: case OP_LSUB_IMM: case OP_LMUL_IMM: case OP_LREM_IMM: case OP_LAND_IMM: case OP_LOR_IMM: case OP_LXOR_IMM: case OP_LSHL_IMM: case OP_LSHR_IMM: case OP_LSHR_UN_IMM: case OP_ADD_IMM: case OP_AND_IMM: case OP_MUL_IMM: case OP_SHL_IMM: case OP_SHR_IMM: case OP_SHR_UN_IMM: { LLVMValueRef imm; if (spec [MONO_INST_SRC1] == 'l') { imm = LLVMConstInt (LLVMInt64Type (), GET_LONG_IMM (ins), FALSE); } else { imm = LLVMConstInt (LLVMInt32Type (), ins->inst_imm, FALSE); } emit_div_check (ctx, builder, bb, ins, lhs, imm); if (!ctx_ok (ctx)) break; builder = ctx->builder; #if TARGET_SIZEOF_VOID_P == 4 if (ins->opcode == OP_LSHL_IMM || ins->opcode == OP_LSHR_IMM || ins->opcode == OP_LSHR_UN_IMM) imm = LLVMConstInt (LLVMInt32Type (), ins->inst_imm, FALSE); #endif if (LLVMGetTypeKind (LLVMTypeOf (lhs)) == LLVMPointerTypeKind) lhs = convert (ctx, lhs, IntPtrType ()); imm = convert (ctx, imm, LLVMTypeOf (lhs)); switch (ins->opcode) { case OP_IADD_IMM: case OP_LADD_IMM: case OP_ADD_IMM: values [ins->dreg] = LLVMBuildAdd (builder, lhs, imm, dname); break; case OP_ISUB_IMM: case OP_LSUB_IMM: values [ins->dreg] = LLVMBuildSub (builder, lhs, imm, dname); break; case OP_IMUL_IMM: case OP_MUL_IMM: case OP_LMUL_IMM: values [ins->dreg] = LLVMBuildMul (builder, lhs, imm, dname); break; case OP_IDIV_IMM: case OP_LDIV_IMM: values [ins->dreg] = LLVMBuildSDiv (builder, lhs, imm, dname); break; case OP_IDIV_UN_IMM: case OP_LDIV_UN_IMM: values [ins->dreg] = LLVMBuildUDiv (builder, lhs, imm, dname); break; case OP_IREM_IMM: case OP_LREM_IMM: values [ins->dreg] = LLVMBuildSRem (builder, lhs, imm, dname); break; case OP_IREM_UN_IMM: values [ins->dreg] = LLVMBuildURem (builder, lhs, imm, dname); break; case OP_IAND_IMM: case OP_LAND_IMM: case OP_AND_IMM: values [ins->dreg] = LLVMBuildAnd (builder, lhs, imm, dname); break; case OP_IOR_IMM: case OP_LOR_IMM: values [ins->dreg] = LLVMBuildOr (builder, lhs, imm, dname); break; case OP_IXOR_IMM: case OP_LXOR_IMM: values [ins->dreg] = LLVMBuildXor (builder, lhs, imm, dname); break; case OP_ISHL_IMM: case OP_LSHL_IMM: values [ins->dreg] = LLVMBuildShl (builder, lhs, imm, dname); break; case OP_SHL_IMM: if (TARGET_SIZEOF_VOID_P == 8) { /* The IL is not regular */ lhs = convert (ctx, lhs, LLVMInt64Type ()); imm = convert (ctx, imm, LLVMInt64Type ()); } values [ins->dreg] = LLVMBuildShl (builder, lhs, imm, dname); break; case OP_ISHR_IMM: case OP_LSHR_IMM: case OP_SHR_IMM: values [ins->dreg] = LLVMBuildAShr (builder, lhs, imm, dname); break; case OP_ISHR_UN_IMM: /* This is used to implement conv.u4, so the lhs could be an i8 */ lhs = convert (ctx, lhs, LLVMInt32Type ()); imm = convert (ctx, imm, LLVMInt32Type ()); values [ins->dreg] = LLVMBuildLShr (builder, lhs, imm, dname); break; case OP_LSHR_UN_IMM: case OP_SHR_UN_IMM: values [ins->dreg] = LLVMBuildLShr (builder, lhs, imm, dname); break; default: g_assert_not_reached (); } break; } case OP_INEG: values [ins->dreg] = LLVMBuildSub (builder, LLVMConstInt (LLVMInt32Type (), 0, FALSE), convert (ctx, lhs, LLVMInt32Type ()), dname); break; case OP_LNEG: if (LLVMTypeOf (lhs) != LLVMInt64Type ()) lhs = convert (ctx, lhs, LLVMInt64Type ()); values [ins->dreg] = LLVMBuildSub (builder, LLVMConstInt (LLVMInt64Type (), 0, FALSE), lhs, dname); break; case OP_FNEG: lhs = convert (ctx, 
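/*
 * OP_INEG/OP_LNEG above are lowered as a subtraction from zero, while the
 * float negations here can use LLVM's dedicated fneg. Sketch:
 *   %n = sub i32 0, %x      ; integer negation
 *   %n = fneg double %x     ; floating point negation
 */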
lhs, LLVMDoubleType ()); values [ins->dreg] = LLVMBuildFNeg (builder, lhs, dname); break; case OP_RNEG: lhs = convert (ctx, lhs, LLVMFloatType ()); values [ins->dreg] = LLVMBuildFNeg (builder, lhs, dname); break; case OP_INOT: { guint32 v = 0xffffffff; values [ins->dreg] = LLVMBuildXor (builder, LLVMConstInt (LLVMInt32Type (), v, FALSE), convert (ctx, lhs, LLVMInt32Type ()), dname); break; } case OP_LNOT: { if (LLVMTypeOf (lhs) != LLVMInt64Type ()) lhs = convert (ctx, lhs, LLVMInt64Type ()); guint64 v = 0xffffffffffffffffLL; values [ins->dreg] = LLVMBuildXor (builder, LLVMConstInt (LLVMInt64Type (), v, FALSE), lhs, dname); break; } #if defined(TARGET_X86) || defined(TARGET_AMD64) case OP_X86_LEA: { LLVMValueRef v1, v2; rhs = LLVMBuildSExt (builder, convert (ctx, rhs, LLVMInt32Type ()), LLVMInt64Type (), ""); v1 = LLVMBuildMul (builder, convert (ctx, rhs, IntPtrType ()), LLVMConstInt (IntPtrType (), ((unsigned long long)1 << ins->backend.shift_amount), FALSE), ""); v2 = LLVMBuildAdd (builder, convert (ctx, lhs, IntPtrType ()), v1, ""); values [ins->dreg] = LLVMBuildAdd (builder, v2, LLVMConstInt (IntPtrType (), ins->inst_imm, FALSE), dname); break; } case OP_X86_BSF32: case OP_X86_BSF64: { LLVMValueRef args [] = { lhs, LLVMConstInt (LLVMInt1Type (), 1, TRUE), }; int op = ins->opcode == OP_X86_BSF32 ? INTRINS_CTTZ_I32 : INTRINS_CTTZ_I64; values [ins->dreg] = call_intrins (ctx, op, args, dname); break; } case OP_X86_BSR32: case OP_X86_BSR64: { LLVMValueRef args [] = { lhs, LLVMConstInt (LLVMInt1Type (), 1, TRUE), }; int op = ins->opcode == OP_X86_BSR32 ? INTRINS_CTLZ_I32 : INTRINS_CTLZ_I64; LLVMValueRef width = ins->opcode == OP_X86_BSR32 ? const_int32 (31) : const_int64 (63); LLVMValueRef tz = call_intrins (ctx, op, args, ""); values [ins->dreg] = LLVMBuildXor (builder, tz, width, dname); break; } #endif case OP_ICONV_TO_I1: case OP_ICONV_TO_I2: case OP_ICONV_TO_I4: case OP_ICONV_TO_U1: case OP_ICONV_TO_U2: case OP_ICONV_TO_U4: case OP_LCONV_TO_I1: case OP_LCONV_TO_I2: case OP_LCONV_TO_U1: case OP_LCONV_TO_U2: case OP_LCONV_TO_U4: { gboolean sign; sign = (ins->opcode == OP_ICONV_TO_I1) || (ins->opcode == OP_ICONV_TO_I2) || (ins->opcode == OP_ICONV_TO_I4) || (ins->opcode == OP_LCONV_TO_I1) || (ins->opcode == OP_LCONV_TO_I2); /* Have to do two casts since our vregs have type int */ v = LLVMBuildTrunc (builder, lhs, op_to_llvm_type (ins->opcode), ""); if (sign) values [ins->dreg] = LLVMBuildSExt (builder, v, LLVMInt32Type (), dname); else values [ins->dreg] = LLVMBuildZExt (builder, v, LLVMInt32Type (), dname); break; } case OP_ICONV_TO_I8: values [ins->dreg] = LLVMBuildSExt (builder, lhs, LLVMInt64Type (), dname); break; case OP_ICONV_TO_U8: values [ins->dreg] = LLVMBuildZExt (builder, lhs, LLVMInt64Type (), dname); break; case OP_FCONV_TO_I4: case OP_RCONV_TO_I4: values [ins->dreg] = LLVMBuildFPToSI (builder, lhs, LLVMInt32Type (), dname); break; case OP_FCONV_TO_I1: case OP_RCONV_TO_I1: values [ins->dreg] = LLVMBuildSExt (builder, LLVMBuildFPToSI (builder, lhs, LLVMInt8Type (), dname), LLVMInt32Type (), ""); break; case OP_FCONV_TO_U1: case OP_RCONV_TO_U1: values [ins->dreg] = LLVMBuildZExt (builder, LLVMBuildTrunc (builder, LLVMBuildFPToUI (builder, lhs, IntPtrType (), dname), LLVMInt8Type (), ""), LLVMInt32Type (), ""); break; case OP_FCONV_TO_I2: case OP_RCONV_TO_I2: values [ins->dreg] = LLVMBuildSExt (builder, LLVMBuildFPToSI (builder, lhs, LLVMInt16Type (), dname), LLVMInt32Type (), ""); break; case OP_FCONV_TO_U2: case OP_RCONV_TO_U2: values [ins->dreg] = LLVMBuildZExt (builder, 
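/*
 * As with the integer conversions above, Mono vregs are i32-typed, so a
 * float -> u2 conversion takes two steps (illustrative IR):
 *   %t = fptoui double %x to i16
 *   %r = zext i16 %t to i32
 */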
LLVMBuildFPToUI (builder, lhs, LLVMInt16Type (), dname), LLVMInt32Type (), ""); break; case OP_FCONV_TO_U4: case OP_RCONV_TO_U4: values [ins->dreg] = LLVMBuildFPToUI (builder, lhs, LLVMInt32Type (), dname); break; case OP_FCONV_TO_U8: case OP_RCONV_TO_U8: values [ins->dreg] = LLVMBuildFPToUI (builder, lhs, LLVMInt64Type (), dname); break; case OP_FCONV_TO_I8: case OP_RCONV_TO_I8: values [ins->dreg] = LLVMBuildFPToSI (builder, lhs, LLVMInt64Type (), dname); break; case OP_ICONV_TO_R8: case OP_LCONV_TO_R8: values [ins->dreg] = LLVMBuildSIToFP (builder, lhs, LLVMDoubleType (), dname); break; case OP_ICONV_TO_R_UN: case OP_LCONV_TO_R_UN: values [ins->dreg] = LLVMBuildUIToFP (builder, lhs, LLVMDoubleType (), dname); break; #if TARGET_SIZEOF_VOID_P == 4 case OP_LCONV_TO_U: #endif case OP_LCONV_TO_I4: values [ins->dreg] = LLVMBuildTrunc (builder, lhs, LLVMInt32Type (), dname); break; case OP_ICONV_TO_R4: case OP_LCONV_TO_R4: v = LLVMBuildSIToFP (builder, lhs, LLVMFloatType (), ""); if (cfg->r4fp) values [ins->dreg] = v; else values [ins->dreg] = LLVMBuildFPExt (builder, v, LLVMDoubleType (), dname); break; case OP_FCONV_TO_R4: v = LLVMBuildFPTrunc (builder, lhs, LLVMFloatType (), ""); if (cfg->r4fp) values [ins->dreg] = v; else values [ins->dreg] = LLVMBuildFPExt (builder, v, LLVMDoubleType (), dname); break; case OP_RCONV_TO_R8: values [ins->dreg] = LLVMBuildFPExt (builder, lhs, LLVMDoubleType (), dname); break; case OP_RCONV_TO_R4: values [ins->dreg] = lhs; break; case OP_SEXT_I4: values [ins->dreg] = LLVMBuildSExt (builder, convert (ctx, lhs, LLVMInt32Type ()), LLVMInt64Type (), dname); break; case OP_ZEXT_I4: values [ins->dreg] = LLVMBuildZExt (builder, convert (ctx, lhs, LLVMInt32Type ()), LLVMInt64Type (), dname); break; case OP_TRUNC_I4: values [ins->dreg] = LLVMBuildTrunc (builder, lhs, LLVMInt32Type (), dname); break; case OP_LOCALLOC_IMM: { LLVMValueRef v; guint32 size = ins->inst_imm; size = (size + (MONO_ARCH_FRAME_ALIGNMENT - 1)) & ~ (MONO_ARCH_FRAME_ALIGNMENT - 1); v = mono_llvm_build_alloca (builder, LLVMInt8Type (), LLVMConstInt (LLVMInt32Type (), size, FALSE), MONO_ARCH_FRAME_ALIGNMENT, ""); if (ins->flags & MONO_INST_INIT) emit_memset (ctx, builder, v, const_int32 (size), MONO_ARCH_FRAME_ALIGNMENT); values [ins->dreg] = v; break; } case OP_LOCALLOC: { LLVMValueRef v, size; size = LLVMBuildAnd (builder, LLVMBuildAdd (builder, convert (ctx, lhs, LLVMInt32Type ()), LLVMConstInt (LLVMInt32Type (), MONO_ARCH_FRAME_ALIGNMENT - 1, FALSE), ""), LLVMConstInt (LLVMInt32Type (), ~ (MONO_ARCH_FRAME_ALIGNMENT - 1), FALSE), ""); v = mono_llvm_build_alloca (builder, LLVMInt8Type (), size, MONO_ARCH_FRAME_ALIGNMENT, ""); if (ins->flags & MONO_INST_INIT) emit_memset (ctx, builder, v, size, MONO_ARCH_FRAME_ALIGNMENT); values [ins->dreg] = v; break; } case OP_LOADI1_MEMBASE: case OP_LOADU1_MEMBASE: case OP_LOADI2_MEMBASE: case OP_LOADU2_MEMBASE: case OP_LOADI4_MEMBASE: case OP_LOADU4_MEMBASE: case OP_LOADI8_MEMBASE: case OP_LOADR4_MEMBASE: case OP_LOADR8_MEMBASE: case OP_LOAD_MEMBASE: case OP_LOADI8_MEM: case OP_LOADU1_MEM: case OP_LOADU2_MEM: case OP_LOADI4_MEM: case OP_LOADU4_MEM: case OP_LOAD_MEM: { int size = 8; LLVMValueRef base, index, addr; LLVMTypeRef t; gboolean sext = FALSE, zext = FALSE; gboolean is_faulting = (ins->flags & MONO_INST_FAULT) != 0; gboolean is_volatile = (ins->flags & MONO_INST_VOLATILE) != 0; gboolean is_unaligned = (ins->flags & MONO_INST_UNALIGNED) != 0; t = load_store_to_llvm_type (ins->opcode, &size, &sext, &zext); if (sext || zext) dname = (char*)""; if 
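/*
 * Address computation for the loads handled below: the *_MEM forms load
 * from an absolute address in inst_imm, while the *_MEMBASE forms compute
 * base + inst_offset with a GEP, falling back to a byte-wise i8* GEP when
 * the offset is not a multiple of the access size. Sketch for a 4 byte load:
 *   %addr = getelementptr i32, i32* %base, i32 (offset / 4)  ; aligned
 *   %addr = getelementptr i8, i8* %base, i32 offset          ; unaligned
 */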
((ins->opcode == OP_LOADI8_MEM) || (ins->opcode == OP_LOAD_MEM) || (ins->opcode == OP_LOADI4_MEM) || (ins->opcode == OP_LOADU4_MEM) || (ins->opcode == OP_LOADU1_MEM) || (ins->opcode == OP_LOADU2_MEM)) { addr = LLVMConstInt (IntPtrType (), ins->inst_imm, FALSE); base = addr; } else { /* _MEMBASE */ base = lhs; if (ins->inst_offset == 0) { LLVMValueRef gep_base, gep_offset; if (mono_llvm_can_be_gep (base, &gep_base, &gep_offset)) { addr = LLVMBuildGEP (builder, convert (ctx, gep_base, LLVMPointerType (LLVMInt8Type (), 0)), &gep_offset, 1, ""); } else { addr = base; } } else if (ins->inst_offset % size != 0) { /* Unaligned load */ index = LLVMConstInt (LLVMInt32Type (), ins->inst_offset, FALSE); addr = LLVMBuildGEP (builder, convert (ctx, base, LLVMPointerType (LLVMInt8Type (), 0)), &index, 1, ""); } else { index = LLVMConstInt (LLVMInt32Type (), ins->inst_offset / size, FALSE); addr = LLVMBuildGEP (builder, convert (ctx, base, LLVMPointerType (t, 0)), &index, 1, ""); } } addr = convert (ctx, addr, LLVMPointerType (t, 0)); if (is_unaligned) values [ins->dreg] = mono_llvm_build_aligned_load (builder, addr, dname, is_volatile, 1); else values [ins->dreg] = emit_load (ctx, bb, &builder, size, addr, base, dname, is_faulting, is_volatile, LLVM_BARRIER_NONE); if (!(is_faulting || is_volatile) && (ins->flags & MONO_INST_INVARIANT_LOAD)) { /* * These will signal LLVM that these loads do not alias any stores, and * they can't fail, allowing them to be hoisted out of loops. */ set_invariant_load_flag (values [ins->dreg]); } if (sext) values [ins->dreg] = LLVMBuildSExt (builder, values [ins->dreg], LLVMInt32Type (), dname); else if (zext) values [ins->dreg] = LLVMBuildZExt (builder, values [ins->dreg], LLVMInt32Type (), dname); else if (!cfg->r4fp && ins->opcode == OP_LOADR4_MEMBASE) values [ins->dreg] = LLVMBuildFPExt (builder, values [ins->dreg], LLVMDoubleType (), dname); break; } case OP_STOREI1_MEMBASE_REG: case OP_STOREI2_MEMBASE_REG: case OP_STOREI4_MEMBASE_REG: case OP_STOREI8_MEMBASE_REG: case OP_STORER4_MEMBASE_REG: case OP_STORER8_MEMBASE_REG: case OP_STORE_MEMBASE_REG: { int size = 8; LLVMValueRef index, addr, base; LLVMTypeRef t; gboolean sext = FALSE, zext = FALSE; gboolean is_faulting = (ins->flags & MONO_INST_FAULT) != 0; gboolean is_volatile = (ins->flags & MONO_INST_VOLATILE) != 0; gboolean is_unaligned = (ins->flags & MONO_INST_UNALIGNED) != 0; if (!values [ins->inst_destbasereg]) { set_failure (ctx, "inst_destbasereg"); break; } t = load_store_to_llvm_type (ins->opcode, &size, &sext, &zext); base = values [ins->inst_destbasereg]; LLVMValueRef gep_base, gep_offset; if (ins->inst_offset == 0 && mono_llvm_can_be_gep (base, &gep_base, &gep_offset)) { addr = LLVMBuildGEP (builder, convert (ctx, gep_base, LLVMPointerType (LLVMInt8Type (), 0)), &gep_offset, 1, ""); } else if (ins->inst_offset % size != 0) { /* Unaligned store */ index = LLVMConstInt (LLVMInt32Type (), ins->inst_offset, FALSE); addr = LLVMBuildGEP (builder, convert (ctx, base, LLVMPointerType (LLVMInt8Type (), 0)), &index, 1, ""); } else { index = LLVMConstInt (LLVMInt32Type (), ins->inst_offset / size, FALSE); addr = LLVMBuildGEP (builder, convert (ctx, base, LLVMPointerType (t, 0)), &index, 1, ""); } if (is_faulting && LLVMGetInstructionOpcode (base) == LLVMAlloca && !(ins->flags & MONO_INST_VOLATILE)) /* Storing to an alloca cannot fail */ is_faulting = FALSE; LLVMValueRef srcval = convert (ctx, values [ins->sreg1], t); LLVMValueRef ptrdst = convert (ctx, addr, LLVMPointerType (t, 0)); if (is_unaligned)
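/*
 * Unaligned accesses are emitted with an explicit alignment of 1 so that
 * LLVM does not assume the natural alignment of the element type:
 */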
mono_llvm_build_aligned_store (builder, srcval, ptrdst, is_volatile, 1); else emit_store (ctx, bb, &builder, size, srcval, ptrdst, base, is_faulting, is_volatile); break; } case OP_STOREI1_MEMBASE_IMM: case OP_STOREI2_MEMBASE_IMM: case OP_STOREI4_MEMBASE_IMM: case OP_STOREI8_MEMBASE_IMM: case OP_STORE_MEMBASE_IMM: { int size = 8; LLVMValueRef index, addr, base; LLVMTypeRef t; gboolean sext = FALSE, zext = FALSE; gboolean is_faulting = (ins->flags & MONO_INST_FAULT) != 0; gboolean is_volatile = (ins->flags & MONO_INST_VOLATILE) != 0; gboolean is_unaligned = (ins->flags & MONO_INST_UNALIGNED) != 0; t = load_store_to_llvm_type (ins->opcode, &size, &sext, &zext); base = values [ins->inst_destbasereg]; LLVMValueRef gep_base, gep_offset; if (ins->inst_offset == 0 && mono_llvm_can_be_gep (base, &gep_base, &gep_offset)) { addr = LLVMBuildGEP (builder, convert (ctx, gep_base, LLVMPointerType (LLVMInt8Type (), 0)), &gep_offset, 1, ""); } else if (ins->inst_offset % size != 0) { /* Unaligned store */ index = LLVMConstInt (LLVMInt32Type (), ins->inst_offset, FALSE); addr = LLVMBuildGEP (builder, convert (ctx, base, LLVMPointerType (LLVMInt8Type (), 0)), &index, 1, ""); } else { index = LLVMConstInt (LLVMInt32Type (), ins->inst_offset / size, FALSE); addr = LLVMBuildGEP (builder, convert (ctx, base, LLVMPointerType (t, 0)), &index, 1, ""); } LLVMValueRef srcval = convert (ctx, LLVMConstInt (IntPtrType (), ins->inst_imm, FALSE), t); LLVMValueRef ptrdst = convert (ctx, addr, LLVMPointerType (t, 0)); if (is_unaligned) mono_llvm_build_aligned_store (builder, srcval, ptrdst, is_volatile, 1); else emit_store (ctx, bb, &builder, size, srcval, ptrdst, base, is_faulting, is_volatile); break; } case OP_CHECK_THIS: emit_load (ctx, bb, &builder, TARGET_SIZEOF_VOID_P, convert (ctx, lhs, LLVMPointerType (IntPtrType (), 0)), lhs, "", TRUE, FALSE, LLVM_BARRIER_NONE); break; case OP_OUTARG_VTRETADDR: break; case OP_VOIDCALL: case OP_CALL: case OP_LCALL: case OP_FCALL: case OP_RCALL: case OP_VCALL: case OP_VOIDCALL_MEMBASE: case OP_CALL_MEMBASE: case OP_LCALL_MEMBASE: case OP_FCALL_MEMBASE: case OP_RCALL_MEMBASE: case OP_VCALL_MEMBASE: case OP_VOIDCALL_REG: case OP_CALL_REG: case OP_LCALL_REG: case OP_FCALL_REG: case OP_RCALL_REG: case OP_VCALL_REG: { process_call (ctx, bb, &builder, ins); break; } case OP_AOTCONST: { MonoJumpInfoType ji_type = ins->inst_c1; gpointer ji_data = ins->inst_p0; if (ji_type == MONO_PATCH_INFO_ICALL_ADDR) { char *symbol = mono_aot_get_direct_call_symbol (MONO_PATCH_INFO_ICALL_ADDR_CALL, ji_data); if (symbol) { /* * Avoid emitting a got entry for these since the method is directly called, and it might not be * resolvable at runtime using dlsym (). 
*/ g_free (symbol); values [ins->dreg] = LLVMConstInt (IntPtrType (), 0, FALSE); break; } } values [ins->dreg] = get_aotconst (ctx, ji_type, ji_data, LLVMPointerType (IntPtrType (), 0)); break; } case OP_MEMMOVE: { int argn = 0; LLVMValueRef args [5]; args [argn++] = convert (ctx, values [ins->sreg1], LLVMPointerType (LLVMInt8Type (), 0)); args [argn++] = convert (ctx, values [ins->sreg2], LLVMPointerType (LLVMInt8Type (), 0)); args [argn++] = convert (ctx, values [ins->sreg3], LLVMInt64Type ()); args [argn++] = LLVMConstInt (LLVMInt1Type (), 0, FALSE); // is_volatile call_intrins (ctx, INTRINS_MEMMOVE, args, ""); break; } case OP_NOT_REACHED: LLVMBuildUnreachable (builder); has_terminator = TRUE; g_assert (bb->block_num < cfg->max_block_num); ctx->unreachable [bb->block_num] = TRUE; /* Might have instructions after this */ while (ins->next) { MonoInst *next = ins->next; /* * FIXME: If later code uses the regs defined by these instructions, * compilation will fail. */ const char *spec = INS_INFO (next->opcode); if (spec [MONO_INST_DEST] == 'i' && !MONO_IS_STORE_MEMBASE (next)) ctx->values [next->dreg] = LLVMConstNull (LLVMInt32Type ()); MONO_DELETE_INS (bb, next); } break; case OP_LDADDR: { MonoInst *var = ins->inst_i0; MonoClass *klass = var->klass; if (var->opcode == OP_VTARG_ADDR && !MONO_CLASS_IS_SIMD(cfg, klass)) { /* The variable contains the vtype address */ values [ins->dreg] = values [var->dreg]; } else if (var->opcode == OP_GSHAREDVT_LOCAL) { values [ins->dreg] = emit_gsharedvt_ldaddr (ctx, var->dreg); } else { values [ins->dreg] = addresses [var->dreg]; } break; } case OP_SIN: { LLVMValueRef args [1]; args [0] = convert (ctx, lhs, LLVMDoubleType ()); values [ins->dreg] = call_intrins (ctx, INTRINS_SIN, args, dname); break; } case OP_SINF: { LLVMValueRef args [1]; args [0] = convert (ctx, lhs, LLVMFloatType ()); values [ins->dreg] = call_intrins (ctx, INTRINS_SINF, args, dname); break; } case OP_EXP: { LLVMValueRef args [1]; args [0] = convert (ctx, lhs, LLVMDoubleType ()); values [ins->dreg] = call_intrins (ctx, INTRINS_EXP, args, dname); break; } case OP_EXPF: { LLVMValueRef args [1]; args [0] = convert (ctx, lhs, LLVMFloatType ()); values [ins->dreg] = call_intrins (ctx, INTRINS_EXPF, args, dname); break; } case OP_LOG2: { LLVMValueRef args [1]; args [0] = convert (ctx, lhs, LLVMDoubleType ()); values [ins->dreg] = call_intrins (ctx, INTRINS_LOG2, args, dname); break; } case OP_LOG2F: { LLVMValueRef args [1]; args [0] = convert (ctx, lhs, LLVMFloatType ()); values [ins->dreg] = call_intrins (ctx, INTRINS_LOG2F, args, dname); break; } case OP_LOG10: { LLVMValueRef args [1]; args [0] = convert (ctx, lhs, LLVMDoubleType ()); values [ins->dreg] = call_intrins (ctx, INTRINS_LOG10, args, dname); break; } case OP_LOG10F: { LLVMValueRef args [1]; args [0] = convert (ctx, lhs, LLVMFloatType ()); values [ins->dreg] = call_intrins (ctx, INTRINS_LOG10F, args, dname); break; } case OP_LOG: { LLVMValueRef args [1]; args [0] = convert (ctx, lhs, LLVMDoubleType ()); values [ins->dreg] = call_intrins (ctx, INTRINS_LOG, args, dname); break; } case OP_TRUNC: { LLVMValueRef args [1]; args [0] = convert (ctx, lhs, LLVMDoubleType ()); values [ins->dreg] = call_intrins (ctx, INTRINS_TRUNC, args, dname); break; } case OP_TRUNCF: { LLVMValueRef args [1]; args [0] = convert (ctx, lhs, LLVMFloatType ()); values [ins->dreg] = call_intrins (ctx, INTRINS_TRUNCF, args, dname); break; } case OP_COS: { LLVMValueRef args [1]; args [0] = convert (ctx, lhs, LLVMDoubleType ()); values [ins->dreg] = call_intrins 
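/*
 * The math opcodes in this run map one-to-one onto LLVM math intrinsics
 * (llvm.sin.f64, llvm.cos.f64, llvm.sqrt.f64, ..., plus the .f32 variants
 * for the *F forms), e.g. OP_COS becomes roughly:
 *   %r = call double @llvm.cos.f64(double %x)
 */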
(ctx, INTRINS_COS, args, dname); break; } case OP_COSF: { LLVMValueRef args [1]; args [0] = convert (ctx, lhs, LLVMFloatType ()); values [ins->dreg] = call_intrins (ctx, INTRINS_COSF, args, dname); break; } case OP_SQRT: { LLVMValueRef args [1]; args [0] = convert (ctx, lhs, LLVMDoubleType ()); values [ins->dreg] = call_intrins (ctx, INTRINS_SQRT, args, dname); break; } case OP_SQRTF: { LLVMValueRef args [1]; args [0] = convert (ctx, lhs, LLVMFloatType ()); values [ins->dreg] = call_intrins (ctx, INTRINS_SQRTF, args, dname); break; } case OP_FLOOR: { LLVMValueRef args [1]; args [0] = convert (ctx, lhs, LLVMDoubleType ()); values [ins->dreg] = call_intrins (ctx, INTRINS_FLOOR, args, dname); break; } case OP_FLOORF: { LLVMValueRef args [1]; args [0] = convert (ctx, lhs, LLVMFloatType ()); values [ins->dreg] = call_intrins (ctx, INTRINS_FLOORF, args, dname); break; } case OP_CEIL: { LLVMValueRef args [1]; args [0] = convert (ctx, lhs, LLVMDoubleType ()); values [ins->dreg] = call_intrins (ctx, INTRINS_CEIL, args, dname); break; } case OP_CEILF: { LLVMValueRef args [1]; args [0] = convert (ctx, lhs, LLVMFloatType ()); values [ins->dreg] = call_intrins (ctx, INTRINS_CEILF, args, dname); break; } case OP_FMA: { LLVMValueRef args [3]; args [0] = convert (ctx, values [ins->sreg1], LLVMDoubleType ()); args [1] = convert (ctx, values [ins->sreg2], LLVMDoubleType ()); args [2] = convert (ctx, values [ins->sreg3], LLVMDoubleType ()); values [ins->dreg] = call_intrins (ctx, INTRINS_FMA, args, dname); break; } case OP_FMAF: { LLVMValueRef args [3]; args [0] = convert (ctx, values [ins->sreg1], LLVMFloatType ()); args [1] = convert (ctx, values [ins->sreg2], LLVMFloatType ()); args [2] = convert (ctx, values [ins->sreg3], LLVMFloatType ()); values [ins->dreg] = call_intrins (ctx, INTRINS_FMAF, args, dname); break; } case OP_ABS: { LLVMValueRef args [1]; args [0] = convert (ctx, lhs, LLVMDoubleType ()); values [ins->dreg] = call_intrins (ctx, INTRINS_FABS, args, dname); break; } case OP_ABSF: { LLVMValueRef args [1]; #ifdef TARGET_AMD64 args [0] = convert (ctx, lhs, LLVMFloatType ()); values [ins->dreg] = call_intrins (ctx, INTRINS_ABSF, args, dname); #else /* llvm.fabs not supported on all platforms */ args [0] = convert (ctx, lhs, LLVMDoubleType ()); values [ins->dreg] = call_intrins (ctx, INTRINS_FABS, args, dname); values [ins->dreg] = convert (ctx, values [ins->dreg], LLVMFloatType ()); #endif break; } case OP_RPOW: { LLVMValueRef args [2]; args [0] = convert (ctx, lhs, LLVMFloatType ()); args [1] = convert (ctx, rhs, LLVMFloatType ()); values [ins->dreg] = call_intrins (ctx, INTRINS_POWF, args, dname); break; } case OP_FPOW: { LLVMValueRef args [2]; args [0] = convert (ctx, lhs, LLVMDoubleType ()); args [1] = convert (ctx, rhs, LLVMDoubleType ()); values [ins->dreg] = call_intrins (ctx, INTRINS_POW, args, dname); break; } case OP_FCOPYSIGN: { LLVMValueRef args [2]; args [0] = convert (ctx, lhs, LLVMDoubleType ()); args [1] = convert (ctx, rhs, LLVMDoubleType ()); values [ins->dreg] = call_intrins (ctx, INTRINS_COPYSIGN, args, dname); break; } case OP_RCOPYSIGN: { LLVMValueRef args [2]; args [0] = convert (ctx, lhs, LLVMFloatType ()); args [1] = convert (ctx, rhs, LLVMFloatType ()); values [ins->dreg] = call_intrins (ctx, INTRINS_COPYSIGNF, args, dname); break; } case OP_IMIN: case OP_LMIN: case OP_IMAX: case OP_LMAX: case OP_IMIN_UN: case OP_LMIN_UN: case OP_IMAX_UN: case OP_LMAX_UN: case OP_FMIN: case OP_FMAX: case OP_RMIN: case OP_RMAX: { LLVMValueRef v; lhs = convert (ctx, lhs, 
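/*
 * Scalar min/max has no single LLVM instruction here, so it is lowered as
 * compare + select. Sketch for signed IMIN:
 *   %c = icmp sle i32 %a, %b
 *   %r = select i1 %c, i32 %a, i32 %b
 */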
regtype_to_llvm_type (spec [MONO_INST_DEST])); rhs = convert (ctx, rhs, regtype_to_llvm_type (spec [MONO_INST_DEST])); switch (ins->opcode) { case OP_IMIN: case OP_LMIN: v = LLVMBuildICmp (builder, LLVMIntSLE, lhs, rhs, ""); break; case OP_IMAX: case OP_LMAX: v = LLVMBuildICmp (builder, LLVMIntSGE, lhs, rhs, ""); break; case OP_IMIN_UN: case OP_LMIN_UN: v = LLVMBuildICmp (builder, LLVMIntULE, lhs, rhs, ""); break; case OP_IMAX_UN: case OP_LMAX_UN: v = LLVMBuildICmp (builder, LLVMIntUGE, lhs, rhs, ""); break; case OP_FMAX: case OP_RMAX: v = LLVMBuildFCmp (builder, LLVMRealUGE, lhs, rhs, ""); break; case OP_FMIN: case OP_RMIN: v = LLVMBuildFCmp (builder, LLVMRealULE, lhs, rhs, ""); break; default: g_assert_not_reached (); break; } values [ins->dreg] = LLVMBuildSelect (builder, v, lhs, rhs, dname); break; } /* * See the ARM64 comment in mono/utils/atomic.h for an explanation of why this * hack is necessary (for now). */ #ifdef TARGET_ARM64 #define ARM64_ATOMIC_FENCE_FIX mono_llvm_build_fence (builder, LLVM_BARRIER_SEQ) #else #define ARM64_ATOMIC_FENCE_FIX #endif case OP_ATOMIC_EXCHANGE_I4: case OP_ATOMIC_EXCHANGE_I8: { LLVMValueRef args [2]; LLVMTypeRef t; if (ins->opcode == OP_ATOMIC_EXCHANGE_I4) t = LLVMInt32Type (); else t = LLVMInt64Type (); g_assert (ins->inst_offset == 0); args [0] = convert (ctx, lhs, LLVMPointerType (t, 0)); args [1] = convert (ctx, rhs, t); ARM64_ATOMIC_FENCE_FIX; values [ins->dreg] = mono_llvm_build_atomic_rmw (builder, LLVM_ATOMICRMW_OP_XCHG, args [0], args [1]); ARM64_ATOMIC_FENCE_FIX; break; } case OP_ATOMIC_ADD_I4: case OP_ATOMIC_ADD_I8: case OP_ATOMIC_AND_I4: case OP_ATOMIC_AND_I8: case OP_ATOMIC_OR_I4: case OP_ATOMIC_OR_I8: { LLVMValueRef args [2]; LLVMTypeRef t; if (ins->type == STACK_I4) t = LLVMInt32Type (); else t = LLVMInt64Type (); g_assert (ins->inst_offset == 0); args [0] = convert (ctx, lhs, LLVMPointerType (t, 0)); args [1] = convert (ctx, rhs, t); ARM64_ATOMIC_FENCE_FIX; if (ins->opcode == OP_ATOMIC_ADD_I4 || ins->opcode == OP_ATOMIC_ADD_I8) // Interlocked.Add returns new value (that's why we emit additional Add here) // see https://github.com/dotnet/runtime/pull/33102 values [ins->dreg] = LLVMBuildAdd (builder, mono_llvm_build_atomic_rmw (builder, LLVM_ATOMICRMW_OP_ADD, args [0], args [1]), args [1], dname); else if (ins->opcode == OP_ATOMIC_AND_I4 || ins->opcode == OP_ATOMIC_AND_I8) values [ins->dreg] = mono_llvm_build_atomic_rmw (builder, LLVM_ATOMICRMW_OP_AND, args [0], args [1]); else if (ins->opcode == OP_ATOMIC_OR_I4 || ins->opcode == OP_ATOMIC_OR_I8) values [ins->dreg] = mono_llvm_build_atomic_rmw (builder, LLVM_ATOMICRMW_OP_OR, args [0], args [1]); else g_assert_not_reached (); ARM64_ATOMIC_FENCE_FIX; break; } case OP_ATOMIC_CAS_I4: case OP_ATOMIC_CAS_I8: { LLVMValueRef args [3], val; LLVMTypeRef t; if (ins->opcode == OP_ATOMIC_CAS_I4) t = LLVMInt32Type (); else t = LLVMInt64Type (); args [0] = convert (ctx, lhs, LLVMPointerType (t, 0)); /* comparand */ args [1] = convert (ctx, values [ins->sreg3], t); /* new value */ args [2] = convert (ctx, values [ins->sreg2], t); ARM64_ATOMIC_FENCE_FIX; val = mono_llvm_build_cmpxchg (builder, args [0], args [1], args [2]); ARM64_ATOMIC_FENCE_FIX; /* cmpxchg returns a pair */ values [ins->dreg] = LLVMBuildExtractValue (builder, val, 0, ""); break; } case OP_MEMORY_BARRIER: { mono_llvm_build_fence (builder, (BarrierKind) ins->backend.memory_barrier_kind); break; } case OP_ATOMIC_LOAD_I1: case OP_ATOMIC_LOAD_I2: case OP_ATOMIC_LOAD_I4: case OP_ATOMIC_LOAD_I8: case OP_ATOMIC_LOAD_U1: case OP_ATOMIC_LOAD_U2: 
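/*
 * Reminder for the atomic opcodes: 'atomicrmw add' returns the *old*
 * value, which is why OP_ATOMIC_ADD_I4/I8 above re-adds the operand to
 * produce the new value that Interlocked.Add () must return, i.e. roughly
 * new = atomicrmw_add (ptr, v) + v.
 */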
case OP_ATOMIC_LOAD_U4: case OP_ATOMIC_LOAD_U8: case OP_ATOMIC_LOAD_R4: case OP_ATOMIC_LOAD_R8: { int size; gboolean sext, zext; LLVMTypeRef t; gboolean is_faulting = (ins->flags & MONO_INST_FAULT) != 0; gboolean is_volatile = (ins->flags & MONO_INST_VOLATILE) != 0; BarrierKind barrier = (BarrierKind) ins->backend.memory_barrier_kind; LLVMValueRef index, addr; t = load_store_to_llvm_type (ins->opcode, &size, &sext, &zext); if (sext || zext) dname = (char *)""; if (ins->inst_offset != 0) { index = LLVMConstInt (LLVMInt32Type (), ins->inst_offset / size, FALSE); addr = LLVMBuildGEP (builder, convert (ctx, lhs, LLVMPointerType (t, 0)), &index, 1, ""); } else { addr = lhs; } addr = convert (ctx, addr, LLVMPointerType (t, 0)); ARM64_ATOMIC_FENCE_FIX; values [ins->dreg] = emit_load (ctx, bb, &builder, size, addr, lhs, dname, is_faulting, is_volatile, barrier); ARM64_ATOMIC_FENCE_FIX; if (sext) values [ins->dreg] = LLVMBuildSExt (builder, values [ins->dreg], LLVMInt32Type (), dname); else if (zext) values [ins->dreg] = LLVMBuildZExt (builder, values [ins->dreg], LLVMInt32Type (), dname); break; } case OP_ATOMIC_STORE_I1: case OP_ATOMIC_STORE_I2: case OP_ATOMIC_STORE_I4: case OP_ATOMIC_STORE_I8: case OP_ATOMIC_STORE_U1: case OP_ATOMIC_STORE_U2: case OP_ATOMIC_STORE_U4: case OP_ATOMIC_STORE_U8: case OP_ATOMIC_STORE_R4: case OP_ATOMIC_STORE_R8: { int size; gboolean sext, zext; LLVMTypeRef t; gboolean is_faulting = (ins->flags & MONO_INST_FAULT) != 0; gboolean is_volatile = (ins->flags & MONO_INST_VOLATILE) != 0; BarrierKind barrier = (BarrierKind) ins->backend.memory_barrier_kind; LLVMValueRef index, addr, value, base; if (!values [ins->inst_destbasereg]) { set_failure (ctx, "inst_destbasereg"); break; } t = load_store_to_llvm_type (ins->opcode, &size, &sext, &zext); base = values [ins->inst_destbasereg]; index = LLVMConstInt (LLVMInt32Type (), ins->inst_offset / size, FALSE); addr = LLVMBuildGEP (builder, convert (ctx, base, LLVMPointerType (t, 0)), &index, 1, ""); value = convert (ctx, values [ins->sreg1], t); ARM64_ATOMIC_FENCE_FIX; emit_store_general (ctx, bb, &builder, size, value, addr, base, is_faulting, is_volatile, barrier); ARM64_ATOMIC_FENCE_FIX; break; } case OP_RELAXED_NOP: { #if defined(TARGET_AMD64) || defined(TARGET_X86) call_intrins (ctx, INTRINS_SSE_PAUSE, NULL, ""); break; #else break; #endif } case OP_TLS_GET: { #if (defined(TARGET_AMD64) || defined(TARGET_X86)) && defined(__linux__) #ifdef TARGET_AMD64 // 257 == FS segment register LLVMTypeRef ptrtype = LLVMPointerType (IntPtrType (), 257); #else // 256 == GS segment register LLVMTypeRef ptrtype = LLVMPointerType (IntPtrType (), 256); #endif // FIXME: XEN values [ins->dreg] = LLVMBuildLoad (builder, LLVMBuildIntToPtr (builder, LLVMConstInt (IntPtrType (), ins->inst_offset, TRUE), ptrtype, ""), ""); #elif defined(TARGET_AMD64) && defined(TARGET_OSX) /* See mono_amd64_emit_tls_get () */ int offset = mono_amd64_get_tls_gs_offset () + (ins->inst_offset * 8); // 256 == GS segment register LLVMTypeRef ptrtype = LLVMPointerType (IntPtrType (), 256); values [ins->dreg] = LLVMBuildLoad (builder, LLVMBuildIntToPtr (builder, LLVMConstInt (IntPtrType (), offset, TRUE), ptrtype, ""), ""); #else set_failure (ctx, "opcode tls-get"); break; #endif break; } case OP_GC_SAFE_POINT: { LLVMValueRef val, cmp, callee, call; LLVMBasicBlockRef poll_bb, cont_bb; LLVMValueRef args [2]; static LLVMTypeRef sig; const char *icall_name = "mono_threads_state_poll"; /* * Create the cold wrapper around the icall, along with a managed method for it so * unwinding 
works. */ if (!cfg->compile_aot && !ctx->module->gc_poll_cold_wrapper_compiled) { ERROR_DECL (error); /* Compiling a method here is a bit ugly, but it works */ MonoMethod *wrapper = mono_marshal_get_llvm_func_wrapper (LLVM_FUNC_WRAPPER_GC_POLL); ctx->module->gc_poll_cold_wrapper_compiled = mono_jit_compile_method (wrapper, error); mono_error_assert_ok (error); } if (!sig) sig = LLVMFunctionType0 (LLVMVoidType (), FALSE); /* * if (!*sreg1) * mono_threads_state_poll (); */ val = mono_llvm_build_load (builder, convert (ctx, lhs, LLVMPointerType (IntPtrType (), 0)), "", TRUE); cmp = LLVMBuildICmp (builder, LLVMIntEQ, val, LLVMConstNull (LLVMTypeOf (val)), ""); poll_bb = gen_bb (ctx, "POLL_BB"); cont_bb = gen_bb (ctx, "CONT_BB"); args [0] = cmp; args [1] = LLVMConstInt (LLVMInt1Type (), 1, FALSE); cmp = call_intrins (ctx, INTRINS_EXPECT_I1, args, ""); mono_llvm_build_weighted_branch (builder, cmp, cont_bb, poll_bb, 1000, 1); ctx->builder = builder = create_builder (ctx); LLVMPositionBuilderAtEnd (builder, poll_bb); if (ctx->cfg->compile_aot) { callee = get_callee (ctx, sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (MONO_JIT_ICALL_mono_threads_state_poll)); call = LLVMBuildCall (builder, callee, NULL, 0, ""); } else { callee = get_jit_callee (ctx, icall_name, sig, MONO_PATCH_INFO_ABS, ctx->module->gc_poll_cold_wrapper_compiled); call = LLVMBuildCall (builder, callee, NULL, 0, ""); set_call_cold_cconv (call); } LLVMBuildBr (builder, cont_bb); ctx->builder = builder = create_builder (ctx); LLVMPositionBuilderAtEnd (builder, cont_bb); ctx->bblocks [bb->block_num].end_bblock = cont_bb; break; } /* * Overflow opcodes. */ case OP_IADD_OVF: case OP_IADD_OVF_UN: case OP_ISUB_OVF: case OP_ISUB_OVF_UN: case OP_IMUL_OVF: case OP_IMUL_OVF_UN: case OP_LADD_OVF: case OP_LADD_OVF_UN: case OP_LSUB_OVF: case OP_LSUB_OVF_UN: case OP_LMUL_OVF: case OP_LMUL_OVF_UN: { LLVMValueRef args [2], val, ovf; IntrinsicId intrins; args [0] = convert (ctx, lhs, op_to_llvm_type (ins->opcode)); args [1] = convert (ctx, rhs, op_to_llvm_type (ins->opcode)); intrins = ovf_op_to_intrins (ins->opcode); val = call_intrins (ctx, intrins, args, ""); values [ins->dreg] = LLVMBuildExtractValue (builder, val, 0, dname); ovf = LLVMBuildExtractValue (builder, val, 1, ""); emit_cond_system_exception (ctx, bb, ins->inst_exc_name, ovf, FALSE); if (!ctx_ok (ctx)) break; builder = ctx->builder; break; } /* * Valuetypes. * We currently model them using arrays. Promotion to local vregs is * disabled for them in mono_handle_global_vregs () in the LLVM case, * so we always have an entry in cfg->varinfo for them. * FIXME: Is this needed ? 
*/ case OP_VZERO: { MonoClass *klass = ins->klass; if (!klass) { // FIXME: set_failure (ctx, "!klass"); break; } if (!addresses [ins->dreg]) addresses [ins->dreg] = build_named_alloca (ctx, m_class_get_byval_arg (klass), "vzero"); LLVMValueRef ptr = LLVMBuildBitCast (builder, addresses [ins->dreg], LLVMPointerType (LLVMInt8Type (), 0), ""); emit_memset (ctx, builder, ptr, const_int32 (mono_class_value_size (klass, NULL)), 0); break; } case OP_DUMMY_VZERO: break; case OP_STOREV_MEMBASE: case OP_LOADV_MEMBASE: case OP_VMOVE: { MonoClass *klass = ins->klass; LLVMValueRef src = NULL, dst, args [5]; gboolean done = FALSE; gboolean is_volatile = FALSE; if (!klass) { // FIXME: set_failure (ctx, "!klass"); break; } if (mini_is_gsharedvt_klass (klass)) { // FIXME: set_failure (ctx, "gsharedvt"); break; } switch (ins->opcode) { case OP_STOREV_MEMBASE: if (cfg->gen_write_barriers && m_class_has_references (klass) && ins->inst_destbasereg != cfg->frame_reg && LLVMGetInstructionOpcode (values [ins->inst_destbasereg]) != LLVMAlloca) { /* Decomposed earlier */ g_assert_not_reached (); break; } if (!addresses [ins->sreg1]) { /* SIMD */ g_assert (values [ins->sreg1]); dst = convert (ctx, LLVMBuildAdd (builder, convert (ctx, values [ins->inst_destbasereg], IntPtrType ()), LLVMConstInt (IntPtrType (), ins->inst_offset, FALSE), ""), LLVMPointerType (type_to_llvm_type (ctx, m_class_get_byval_arg (klass)), 0)); LLVMBuildStore (builder, values [ins->sreg1], dst); done = TRUE; } else { src = LLVMBuildBitCast (builder, addresses [ins->sreg1], LLVMPointerType (LLVMInt8Type (), 0), ""); dst = convert (ctx, LLVMBuildAdd (builder, convert (ctx, values [ins->inst_destbasereg], IntPtrType ()), LLVMConstInt (IntPtrType (), ins->inst_offset, FALSE), ""), LLVMPointerType (LLVMInt8Type (), 0)); } break; case OP_LOADV_MEMBASE: if (!addresses [ins->dreg]) addresses [ins->dreg] = build_alloca (ctx, m_class_get_byval_arg (klass)); src = convert (ctx, LLVMBuildAdd (builder, convert (ctx, values [ins->inst_basereg], IntPtrType ()), LLVMConstInt (IntPtrType (), ins->inst_offset, FALSE), ""), LLVMPointerType (LLVMInt8Type (), 0)); dst = LLVMBuildBitCast (builder, addresses [ins->dreg], LLVMPointerType (LLVMInt8Type (), 0), ""); break; case OP_VMOVE: if (!addresses [ins->sreg1]) addresses [ins->sreg1] = build_alloca (ctx, m_class_get_byval_arg (klass)); if (!addresses [ins->dreg]) addresses [ins->dreg] = build_alloca (ctx, m_class_get_byval_arg (klass)); src = LLVMBuildBitCast (builder, addresses [ins->sreg1], LLVMPointerType (LLVMInt8Type (), 0), ""); dst = LLVMBuildBitCast (builder, addresses [ins->dreg], LLVMPointerType (LLVMInt8Type (), 0), ""); break; default: g_assert_not_reached (); } if (!ctx_ok (ctx)) break; if (done) break; #ifdef TARGET_WASM is_volatile = m_class_has_references (klass); #endif int aindex = 0; args [aindex ++] = dst; args [aindex ++] = src; args [aindex ++] = LLVMConstInt (LLVMInt32Type (), mono_class_value_size (klass, NULL), FALSE); args [aindex ++] = LLVMConstInt (LLVMInt1Type (), is_volatile ? 
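/*
 * Valuetype copies are lowered to llvm.memcpy with the class' value size;
 * an illustrative call shape (the exact overload depends on the LLVM
 * version in use):
 *   call void @llvm.memcpy.p0i8.p0i8.i32(i8* %dst, i8* %src, i32 %size, i1 %isvolatile)
 */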
1 : 0, FALSE); call_intrins (ctx, INTRINS_MEMCPY, args, ""); break; } case OP_LLVM_OUTARG_VT: { LLVMArgInfo *ainfo = (LLVMArgInfo*)ins->inst_p0; MonoType *t = mini_get_underlying_type (ins->inst_vtype); if (ainfo->storage == LLVMArgGsharedvtVariable) { MonoInst *var = get_vreg_to_inst (cfg, ins->sreg1); if (var && var->opcode == OP_GSHAREDVT_LOCAL) { addresses [ins->dreg] = convert (ctx, emit_gsharedvt_ldaddr (ctx, var->dreg), LLVMPointerType (IntPtrType (), 0)); } else { g_assert (addresses [ins->sreg1]); addresses [ins->dreg] = addresses [ins->sreg1]; } } else if (ainfo->storage == LLVMArgGsharedvtFixed) { if (!addresses [ins->sreg1]) { addresses [ins->sreg1] = build_alloca (ctx, t); g_assert (values [ins->sreg1]); } LLVMBuildStore (builder, convert (ctx, values [ins->sreg1], LLVMGetElementType (LLVMTypeOf (addresses [ins->sreg1]))), addresses [ins->sreg1]); addresses [ins->dreg] = addresses [ins->sreg1]; } else { if (!addresses [ins->sreg1]) { addresses [ins->sreg1] = build_named_alloca (ctx, t, "llvm_outarg_vt"); g_assert (values [ins->sreg1]); LLVMBuildStore (builder, convert (ctx, values [ins->sreg1], type_to_llvm_type (ctx, t)), addresses [ins->sreg1]); addresses [ins->dreg] = addresses [ins->sreg1]; } else if (ainfo->storage == LLVMArgVtypeAddr || values [ins->sreg1] == addresses [ins->sreg1]) { /* LLVMArgVtypeByRef/LLVMArgVtypeAddr, have to make a copy */ addresses [ins->dreg] = build_alloca (ctx, t); LLVMValueRef v = LLVMBuildLoad (builder, addresses [ins->sreg1], "llvm_outarg_vt_copy"); LLVMBuildStore (builder, convert (ctx, v, type_to_llvm_type (ctx, t)), addresses [ins->dreg]); } else { if (values [ins->sreg1]) { LLVMTypeRef src_t = LLVMTypeOf (values [ins->sreg1]); LLVMValueRef dst = convert (ctx, addresses [ins->sreg1], LLVMPointerType (src_t, 0)); LLVMBuildStore (builder, values [ins->sreg1], dst); } addresses [ins->dreg] = addresses [ins->sreg1]; } } break; } case OP_OBJC_GET_SELECTOR: { const char *name = (const char*)ins->inst_p0; LLVMValueRef var; if (!ctx->module->objc_selector_to_var) { ctx->module->objc_selector_to_var = g_hash_table_new_full (g_str_hash, g_str_equal, g_free, NULL); LLVMValueRef info_var = LLVMAddGlobal (ctx->lmodule, LLVMArrayType (LLVMInt8Type (), 8), "@OBJC_IMAGE_INFO"); int32_t objc_imageinfo [] = { 0, 16 }; LLVMSetInitializer (info_var, mono_llvm_create_constant_data_array ((uint8_t *) &objc_imageinfo, 8)); LLVMSetLinkage (info_var, LLVMPrivateLinkage); LLVMSetExternallyInitialized (info_var, TRUE); LLVMSetSection (info_var, "__DATA, __objc_imageinfo,regular,no_dead_strip"); LLVMSetAlignment (info_var, sizeof (target_mgreg_t)); mark_as_used (ctx->module, info_var); } var = (LLVMValueRef)g_hash_table_lookup (ctx->module->objc_selector_to_var, name); if (!var) { LLVMValueRef indexes [16]; LLVMValueRef name_var = LLVMAddGlobal (ctx->lmodule, LLVMArrayType (LLVMInt8Type (), strlen (name) + 1), "@OBJC_METH_VAR_NAME_"); LLVMSetInitializer (name_var, mono_llvm_create_constant_data_array ((const uint8_t*)name, strlen (name) + 1)); LLVMSetLinkage (name_var, LLVMPrivateLinkage); LLVMSetSection (name_var, "__TEXT,__objc_methname,cstring_literals"); mark_as_used (ctx->module, name_var); LLVMValueRef ref_var = LLVMAddGlobal (ctx->lmodule, LLVMPointerType (LLVMInt8Type (), 0), "@OBJC_SELECTOR_REFERENCES_"); indexes [0] = LLVMConstInt (LLVMInt32Type (), 0, 0); indexes [1] = LLVMConstInt (LLVMInt32Type (), 0, 0); LLVMSetInitializer (ref_var, LLVMConstGEP (name_var, indexes, 2)); LLVMSetLinkage (ref_var, LLVMPrivateLinkage); LLVMSetExternallyInitialized 
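/*
 * The two globals built here follow the usual Objective-C ABI layout: the
 * selector name is a C string in __objc_methname and the pointer to it
 * lives in __objc_selrefs, which the dynamic loader is expected to rewrite
 * to the canonical uniqued SEL at image load time.
 */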
(ref_var, TRUE); LLVMSetSection (ref_var, "__DATA, __objc_selrefs, literal_pointers, no_dead_strip"); LLVMSetAlignment (ref_var, sizeof (target_mgreg_t)); mark_as_used (ctx->module, ref_var); g_hash_table_insert (ctx->module->objc_selector_to_var, g_strdup (name), ref_var); var = ref_var; } values [ins->dreg] = LLVMBuildLoad (builder, var, ""); break; } #if defined(TARGET_X86) || defined(TARGET_AMD64) || defined(TARGET_ARM64) || defined(TARGET_WASM) case OP_EXTRACTX_U2: case OP_XEXTRACT_I1: case OP_XEXTRACT_I2: case OP_XEXTRACT_I4: case OP_XEXTRACT_I8: case OP_XEXTRACT_R4: case OP_XEXTRACT_R8: case OP_EXTRACT_I1: case OP_EXTRACT_I2: case OP_EXTRACT_I4: case OP_EXTRACT_I8: case OP_EXTRACT_R4: case OP_EXTRACT_R8: { MonoTypeEnum mono_elt_t = inst_c1_type (ins); LLVMTypeRef elt_t = primitive_type_to_llvm_type (mono_elt_t); gboolean sext = FALSE; gboolean zext = FALSE; switch (mono_elt_t) { case MONO_TYPE_I1: case MONO_TYPE_I2: sext = TRUE; break; case MONO_TYPE_U1: case MONO_TYPE_U2: zext = TRUE; break; } LLVMValueRef element_ix = NULL; switch (ins->opcode) { case OP_XEXTRACT_I1: case OP_XEXTRACT_I2: case OP_XEXTRACT_I4: case OP_XEXTRACT_R4: case OP_XEXTRACT_R8: case OP_XEXTRACT_I8: element_ix = rhs; break; default: element_ix = const_int32 (ins->inst_c0); } LLVMTypeRef lhs_t = LLVMTypeOf (lhs); int vec_width = mono_llvm_get_prim_size_bits (lhs_t); int elem_width = mono_llvm_get_prim_size_bits (elt_t); int elements = vec_width / elem_width; element_ix = LLVMBuildAnd (builder, element_ix, const_int32 (elements - 1), "extract"); LLVMTypeRef ret_t = LLVMVectorType (elt_t, elements); LLVMValueRef src = LLVMBuildBitCast (builder, lhs, ret_t, "extract"); LLVMValueRef result = LLVMBuildExtractElement (builder, src, element_ix, "extract"); if (zext) result = LLVMBuildZExt (builder, result, i4_t, "extract_zext"); else if (sext) result = LLVMBuildSExt (builder, result, i4_t, "extract_sext"); values [ins->dreg] = result; break; } case OP_XINSERT_I1: case OP_XINSERT_I2: case OP_XINSERT_I4: case OP_XINSERT_I8: case OP_XINSERT_R4: case OP_XINSERT_R8: { MonoTypeEnum primty = inst_c1_type (ins); LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass); LLVMTypeRef elem_t = LLVMGetElementType (ret_t); int elements = LLVMGetVectorSize (ret_t); LLVMValueRef element_ix = LLVMBuildAnd (builder, arg3, const_int32 (elements - 1), "xinsert"); LLVMValueRef vec = convert (ctx, lhs, ret_t); LLVMValueRef val = convert_full (ctx, rhs, elem_t, primitive_type_is_unsigned (primty)); LLVMValueRef result = LLVMBuildInsertElement (builder, vec, val, element_ix, "xinsert"); values [ins->dreg] = result; break; } case OP_EXPAND_I1: case OP_EXPAND_I2: case OP_EXPAND_I4: case OP_EXPAND_I8: case OP_EXPAND_R4: case OP_EXPAND_R8: { LLVMTypeRef t; LLVMValueRef mask [MAX_VECTOR_ELEMS], v; int i; t = simd_class_to_llvm_type (ctx, ins->klass); for (i = 0; i < MAX_VECTOR_ELEMS; ++i) mask [i] = LLVMConstInt (LLVMInt32Type (), 0, FALSE); v = convert (ctx, values [ins->sreg1], LLVMGetElementType (t)); values [ins->dreg] = LLVMBuildInsertElement (builder, LLVMConstNull (t), v, LLVMConstInt (LLVMInt32Type (), 0, FALSE), ""); values [ins->dreg] = LLVMBuildShuffleVector (builder, values [ins->dreg], LLVMGetUndef (t), LLVMConstVector (mask, LLVMGetVectorSize (t)), ""); break; } case OP_XZERO: { values [ins->dreg] = LLVMConstNull (type_to_llvm_type (ctx, m_class_get_byval_arg (ins->klass))); break; } case OP_XONES: { values [ins->dreg] = LLVMConstAllOnes (type_to_llvm_type (ctx, m_class_get_byval_arg (ins->klass))); break; } case 
OP_LOADX_MEMBASE: { LLVMTypeRef t = type_to_llvm_type (ctx, m_class_get_byval_arg (ins->klass)); LLVMValueRef src; src = convert (ctx, LLVMBuildAdd (builder, convert (ctx, values [ins->inst_basereg], IntPtrType ()), LLVMConstInt (IntPtrType (), ins->inst_offset, FALSE), ""), LLVMPointerType (t, 0)); values [ins->dreg] = mono_llvm_build_aligned_load (builder, src, "", FALSE, 1); break; } case OP_STOREX_MEMBASE: { LLVMTypeRef t = LLVMTypeOf (values [ins->sreg1]); LLVMValueRef dest; dest = convert (ctx, LLVMBuildAdd (builder, convert (ctx, values [ins->inst_destbasereg], IntPtrType ()), LLVMConstInt (IntPtrType (), ins->inst_offset, FALSE), ""), LLVMPointerType (t, 0)); mono_llvm_build_aligned_store (builder, values [ins->sreg1], dest, FALSE, 1); break; } case OP_XBINOP: case OP_XBINOP_SCALAR: case OP_XBINOP_BYSCALAR: { gboolean scalar = ins->opcode == OP_XBINOP_SCALAR; gboolean byscalar = ins->opcode == OP_XBINOP_BYSCALAR; LLVMValueRef result = NULL; LLVMValueRef args [] = { lhs, rhs }; if (scalar) for (int i = 0; i < 2; ++i) args [i] = scalar_from_vector (ctx, args [i]); if (byscalar) { LLVMTypeRef t = LLVMTypeOf (args [0]); unsigned int elems = LLVMGetVectorSize (t); args [1] = broadcast_element (ctx, scalar_from_vector (ctx, args [1]), elems); } LLVMValueRef l = args [0]; LLVMValueRef r = args [1]; switch (ins->inst_c0) { case OP_IADD: result = LLVMBuildAdd (builder, l, r, ""); break; case OP_ISUB: result = LLVMBuildSub (builder, l, r, ""); break; case OP_IMUL: result = LLVMBuildMul (builder, l, r, ""); break; case OP_IAND: result = LLVMBuildAnd (builder, l, r, ""); break; case OP_IOR: result = LLVMBuildOr (builder, l, r, ""); break; case OP_IXOR: result = LLVMBuildXor (builder, l, r, ""); break; case OP_FADD: result = LLVMBuildFAdd (builder, l, r, ""); break; case OP_FSUB: result = LLVMBuildFSub (builder, l, r, ""); break; case OP_FMUL: result = LLVMBuildFMul (builder, l, r, ""); break; case OP_FDIV: result = LLVMBuildFDiv (builder, l, r, ""); break; case OP_FMAX: case OP_FMIN: { #if defined(TARGET_X86) || defined(TARGET_AMD64) LLVMValueRef args [] = { l, r }; LLVMTypeRef t = LLVMTypeOf (l); LLVMTypeRef elem_t = LLVMGetElementType (t); unsigned int elems = LLVMGetVectorSize (t); unsigned int elem_bits = mono_llvm_get_prim_size_bits (elem_t); unsigned int v_size = elems * elem_bits; if (v_size == 128) { gboolean is_r4 = ins->inst_c1 == MONO_TYPE_R4; int iid = -1; if (ins->inst_c0 == OP_FMAX) { if (elems == 1) iid = is_r4 ? INTRINS_SSE_MAXSS : INTRINS_SSE_MAXSD; else iid = is_r4 ? INTRINS_SSE_MAXPS : INTRINS_SSE_MAXPD; } else { if (elems == 1) iid = is_r4 ? INTRINS_SSE_MINSS : INTRINS_SSE_MINSD; else iid = is_r4 ? INTRINS_SSE_MINPS : INTRINS_SSE_MINPD; } result = call_intrins (ctx, iid, args, dname); } else { LLVMRealPredicate op = ins->inst_c0 == OP_FMAX ? LLVMRealUGE : LLVMRealULE; LLVMValueRef cmp = LLVMBuildFCmp (builder, op, l, r, ""); result = LLVMBuildSelect (builder, cmp, l, r, ""); } #elif defined(TARGET_ARM64) LLVMValueRef args [] = { l, r }; IntrinsicId iid = ins->inst_c0 == OP_FMAX ? INTRINS_AARCH64_ADV_SIMD_FMAX : INTRINS_AARCH64_ADV_SIMD_FMIN; llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass); result = call_overloaded_intrins (ctx, iid, ovr_tag, args, ""); #else NOT_IMPLEMENTED; #endif break; } case OP_IMAX: case OP_IMIN: { gboolean is_unsigned = ins->inst_c1 == MONO_TYPE_U1 || ins->inst_c1 == MONO_TYPE_U2 || ins->inst_c1 == MONO_TYPE_U4 || ins->inst_c1 == MONO_TYPE_U8; LLVMIntPredicate op; switch (ins->inst_c0) { case OP_IMAX: op = is_unsigned ? 
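/*
 * Vector integer min/max: on arm64 the native smax/umax/smin/umin SIMD
 * intrinsics are used where available, but 64-bit lanes have no such
 * instruction, so they (and non-arm64 targets) fall back to the generic
 * icmp + select lowering sketched for the scalar case.
 */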
LLVMIntUGT : LLVMIntSGT; break; case OP_IMIN: op = is_unsigned ? LLVMIntULT : LLVMIntSLT; break; default: g_assert_not_reached (); }
#if defined(TARGET_ARM64)
if ((ins->inst_c1 == MONO_TYPE_U8) || (ins->inst_c1 == MONO_TYPE_I8)) { LLVMValueRef cmp = LLVMBuildICmp (builder, op, l, r, ""); result = LLVMBuildSelect (builder, cmp, l, r, ""); } else { IntrinsicId iid; switch (ins->inst_c0) { case OP_IMAX: iid = is_unsigned ? INTRINS_AARCH64_ADV_SIMD_UMAX : INTRINS_AARCH64_ADV_SIMD_SMAX; break; case OP_IMIN: iid = is_unsigned ? INTRINS_AARCH64_ADV_SIMD_UMIN : INTRINS_AARCH64_ADV_SIMD_SMIN; break; default: g_assert_not_reached (); } LLVMValueRef args [] = { l, r }; llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass); result = call_overloaded_intrins (ctx, iid, ovr_tag, args, ""); }
#else
LLVMValueRef cmp = LLVMBuildICmp (builder, op, l, r, ""); result = LLVMBuildSelect (builder, cmp, l, r, "");
#endif
break; } default: g_assert_not_reached (); } if (scalar) result = vector_from_scalar (ctx, LLVMTypeOf (lhs), result); values [ins->dreg] = result; break; } case OP_XBINOP_FORCEINT: { LLVMTypeRef t = LLVMTypeOf (lhs); LLVMTypeRef elem_t = LLVMGetElementType (t); unsigned int elems = LLVMGetVectorSize (t); unsigned int elem_bits = mono_llvm_get_prim_size_bits (elem_t); LLVMTypeRef intermediate_elem_t = LLVMIntType (elem_bits); LLVMTypeRef intermediate_t = LLVMVectorType (intermediate_elem_t, elems); LLVMValueRef lhs_int = convert (ctx, lhs, intermediate_t); LLVMValueRef rhs_int = convert (ctx, rhs, intermediate_t); LLVMValueRef result = NULL; switch (ins->inst_c0) { case XBINOP_FORCEINT_AND: result = LLVMBuildAnd (builder, lhs_int, rhs_int, ""); break; case XBINOP_FORCEINT_OR: result = LLVMBuildOr (builder, lhs_int, rhs_int, ""); break; case XBINOP_FORCEINT_ORNOT: result = LLVMBuildNot (builder, rhs_int, ""); result = LLVMBuildOr (builder, result, lhs_int, ""); break; case XBINOP_FORCEINT_XOR: result = LLVMBuildXor (builder, lhs_int, rhs_int, ""); break; } values [ins->dreg] = LLVMBuildBitCast (builder, result, t, ""); break; } case OP_CREATE_SCALAR: case OP_CREATE_SCALAR_UNSAFE: { MonoTypeEnum primty = inst_c1_type (ins); LLVMTypeRef type = simd_class_to_llvm_type (ctx, ins->klass); // use undef vector (most likely empty but may contain garbage values) for OP_CREATE_SCALAR_UNSAFE
// and zero one for OP_CREATE_SCALAR
LLVMValueRef vector = (ins->opcode == OP_CREATE_SCALAR) ? 
LLVMConstNull (type) : LLVMGetUndef (type); LLVMValueRef val = convert_full (ctx, lhs, primitive_type_to_llvm_type (primty), primitive_type_is_unsigned (primty)); values [ins->dreg] = LLVMBuildInsertElement (builder, vector, val, const_int32 (0), ""); break; } case OP_INSERT_I1: values [ins->dreg] = LLVMBuildInsertElement (builder, values [ins->sreg1], convert (ctx, values [ins->sreg2], LLVMInt8Type ()), LLVMConstInt (LLVMInt32Type (), ins->inst_c0, FALSE), dname); break; case OP_INSERT_I2: values [ins->dreg] = LLVMBuildInsertElement (builder, values [ins->sreg1], convert (ctx, values [ins->sreg2], LLVMInt16Type ()), LLVMConstInt (LLVMInt32Type (), ins->inst_c0, FALSE), dname); break; case OP_INSERT_I4: values [ins->dreg] = LLVMBuildInsertElement (builder, values [ins->sreg1], convert (ctx, values [ins->sreg2], LLVMInt32Type ()), LLVMConstInt (LLVMInt32Type (), ins->inst_c0, FALSE), dname); break; case OP_INSERT_I8: values [ins->dreg] = LLVMBuildInsertElement (builder, values [ins->sreg1], convert (ctx, values [ins->sreg2], LLVMInt64Type ()), LLVMConstInt (LLVMInt32Type (), ins->inst_c0, FALSE), dname); break; case OP_INSERT_R4: values [ins->dreg] = LLVMBuildInsertElement (builder, values [ins->sreg1], convert (ctx, values [ins->sreg2], LLVMFloatType ()), LLVMConstInt (LLVMInt32Type (), ins->inst_c0, FALSE), dname); break; case OP_INSERT_R8: values [ins->dreg] = LLVMBuildInsertElement (builder, values [ins->sreg1], convert (ctx, values [ins->sreg2], LLVMDoubleType ()), LLVMConstInt (LLVMInt32Type (), ins->inst_c0, FALSE), dname); break; case OP_XCAST: { LLVMTypeRef t = simd_class_to_llvm_type (ctx, ins->klass); values [ins->dreg] = LLVMBuildBitCast (builder, lhs, t, ""); break; } case OP_XCONCAT: { values [ins->dreg] = concatenate_vectors (ctx, lhs, rhs); break; } case OP_XINSERT_LOWER: case OP_XINSERT_UPPER: { const char *oname = ins->opcode == OP_XINSERT_LOWER ? "xinsert_lower" : "xinsert_upper"; int ix = ins->opcode == OP_XINSERT_LOWER ? 0 : 1; LLVMTypeRef src_t = LLVMTypeOf (lhs); unsigned int width = mono_llvm_get_prim_size_bits (src_t); LLVMTypeRef int_t = LLVMIntType (width / 2); LLVMTypeRef intvec_t = LLVMVectorType (int_t, 2); LLVMValueRef insval = LLVMBuildBitCast (builder, rhs, int_t, oname); LLVMValueRef val = LLVMBuildBitCast (builder, lhs, intvec_t, oname); val = LLVMBuildInsertElement (builder, val, insval, const_int32 (ix), oname); val = LLVMBuildBitCast (builder, val, src_t, oname); values [ins->dreg] = val; break; } case OP_XLOWER: case OP_XUPPER: { const char *oname = ins->opcode == OP_XLOWER ? "xlower" : "xupper"; LLVMTypeRef src_t = LLVMTypeOf (lhs); unsigned int elems = LLVMGetVectorSize (src_t); g_assert (elems >= 2 && elems <= MAX_VECTOR_ELEMS); unsigned int ret_elems = elems / 2; int startix = ins->opcode == OP_XLOWER ? 0 : ret_elems; LLVMValueRef val = LLVMBuildShuffleVector (builder, lhs, LLVMGetUndef (src_t), create_const_vector_i32 (&mask_0_incr_1 [startix], ret_elems), oname); values [ins->dreg] = val; break; } case OP_XWIDEN: case OP_XWIDEN_UNSAFE: { const char *oname = ins->opcode == OP_XWIDEN ? "xwiden" : "xwiden_unsafe"; LLVMTypeRef src_t = LLVMTypeOf (lhs); unsigned int elems = LLVMGetVectorSize (src_t); g_assert (elems <= MAX_VECTOR_ELEMS / 2); unsigned int ret_elems = elems * 2; LLVMValueRef upper = ins->opcode == OP_XWIDEN ? 
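/* Note (added annotation): OP_XWIDEN zeroes the new upper lanes (ConstNull), while OP_XWIDEN_UNSAFE leaves them undefined. */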
LLVMConstNull (src_t) : LLVMGetUndef (src_t); LLVMValueRef val = LLVMBuildShuffleVector (builder, lhs, upper, create_const_vector_i32 (mask_0_incr_1, ret_elems), oname); values [ins->dreg] = val; break; }
#endif // defined(TARGET_X86) || defined(TARGET_AMD64) || defined(TARGET_ARM64) || defined(TARGET_WASM)
#if defined(TARGET_X86) || defined(TARGET_AMD64) || defined(TARGET_WASM)
case OP_PADDB: case OP_PADDW: case OP_PADDD: case OP_PADDQ: values [ins->dreg] = LLVMBuildAdd (builder, lhs, rhs, ""); break; case OP_ADDPD: case OP_ADDPS: values [ins->dreg] = LLVMBuildFAdd (builder, lhs, rhs, ""); break; case OP_PSUBB: case OP_PSUBW: case OP_PSUBD: case OP_PSUBQ: values [ins->dreg] = LLVMBuildSub (builder, lhs, rhs, ""); break; case OP_SUBPD: case OP_SUBPS: values [ins->dreg] = LLVMBuildFSub (builder, lhs, rhs, ""); break; case OP_MULPD: case OP_MULPS: values [ins->dreg] = LLVMBuildFMul (builder, lhs, rhs, ""); break; case OP_DIVPD: case OP_DIVPS: values [ins->dreg] = LLVMBuildFDiv (builder, lhs, rhs, ""); break; case OP_PAND: values [ins->dreg] = LLVMBuildAnd (builder, lhs, rhs, ""); break; case OP_POR: values [ins->dreg] = LLVMBuildOr (builder, lhs, rhs, ""); break; case OP_PXOR: values [ins->dreg] = LLVMBuildXor (builder, lhs, rhs, ""); break; case OP_PMULW: case OP_PMULD: values [ins->dreg] = LLVMBuildMul (builder, lhs, rhs, ""); break; case OP_ANDPS: case OP_ANDNPS: case OP_ORPS: case OP_XORPS: case OP_ANDPD: case OP_ANDNPD: case OP_ORPD: case OP_XORPD: { LLVMTypeRef t, rt; LLVMValueRef v = NULL; switch (ins->opcode) { case OP_ANDPS: case OP_ANDNPS: case OP_ORPS: case OP_XORPS: t = LLVMVectorType (LLVMInt32Type (), 4); rt = LLVMVectorType (LLVMFloatType (), 4); break; case OP_ANDPD: case OP_ANDNPD: case OP_ORPD: case OP_XORPD: t = LLVMVectorType (LLVMInt64Type (), 2); rt = LLVMVectorType (LLVMDoubleType (), 2); break; default: t = LLVMInt32Type (); rt = LLVMInt32Type (); g_assert_not_reached (); } lhs = LLVMBuildBitCast (builder, lhs, t, ""); rhs = LLVMBuildBitCast (builder, rhs, t, ""); switch (ins->opcode) { case OP_ANDPS: case OP_ANDPD: v = LLVMBuildAnd (builder, lhs, rhs, ""); break; case OP_ORPS: case OP_ORPD: v = LLVMBuildOr (builder, lhs, rhs, ""); break; case OP_XORPS: case OP_XORPD: v = LLVMBuildXor (builder, lhs, rhs, ""); break; case OP_ANDNPS: case OP_ANDNPD: v = LLVMBuildAnd (builder, rhs, LLVMBuildNot (builder, lhs, ""), ""); break; } values [ins->dreg] = LLVMBuildBitCast (builder, v, rt, ""); break; } case OP_PMIND_UN: case OP_PMINW_UN: case OP_PMINB_UN: { LLVMValueRef cmp = LLVMBuildICmp (builder, LLVMIntULT, lhs, rhs, ""); values [ins->dreg] = LLVMBuildSelect (builder, cmp, lhs, rhs, ""); break; } case OP_PMAXD_UN: case OP_PMAXW_UN: case OP_PMAXB_UN: { LLVMValueRef cmp = LLVMBuildICmp (builder, LLVMIntUGT, lhs, rhs, ""); values [ins->dreg] = LLVMBuildSelect (builder, cmp, lhs, rhs, ""); break; } case OP_PMINW: { LLVMValueRef cmp = LLVMBuildICmp (builder, LLVMIntSLT, lhs, rhs, ""); values [ins->dreg] = LLVMBuildSelect (builder, cmp, lhs, rhs, ""); break; } case OP_PMAXW: { LLVMValueRef cmp = LLVMBuildICmp (builder, LLVMIntSGT, lhs, rhs, ""); values [ins->dreg] = LLVMBuildSelect (builder, cmp, lhs, rhs, ""); break; } case OP_PAVGB_UN: case OP_PAVGW_UN: { LLVMValueRef ones_vec; LLVMValueRef ones [MAX_VECTOR_ELEMS]; int vector_size = LLVMGetVectorSize (LLVMTypeOf (lhs)); LLVMTypeRef ext_elem_type = vector_size == 16 ? 
LLVMInt16Type () : LLVMInt32Type (); for (int i = 0; i < MAX_VECTOR_ELEMS; ++i) ones [i] = LLVMConstInt (ext_elem_type, 1, FALSE); ones_vec = LLVMConstVector (ones, vector_size); LLVMValueRef val; LLVMTypeRef ext_type = LLVMVectorType (ext_elem_type, vector_size); /* Have to increase the vector element size to prevent overflows */ /* res = trunc ((zext (lhs) + zext (rhs) + 1) >> 1) */ val = LLVMBuildAdd (builder, LLVMBuildZExt (builder, lhs, ext_type, ""), LLVMBuildZExt (builder, rhs, ext_type, ""), ""); val = LLVMBuildAdd (builder, val, ones_vec, ""); val = LLVMBuildLShr (builder, val, ones_vec, ""); values [ins->dreg] = LLVMBuildTrunc (builder, val, LLVMTypeOf (lhs), ""); break; } case OP_PCMPEQB: case OP_PCMPEQW: case OP_PCMPEQD: case OP_PCMPEQQ: case OP_PCMPGTB: { LLVMValueRef pcmp; LLVMTypeRef retType; LLVMIntPredicate cmpOp; if (ins->opcode == OP_PCMPGTB) cmpOp = LLVMIntSGT; else cmpOp = LLVMIntEQ; if (LLVMTypeOf (lhs) == LLVMTypeOf (rhs)) { pcmp = LLVMBuildICmp (builder, cmpOp, lhs, rhs, ""); retType = LLVMTypeOf (lhs); } else { LLVMTypeRef flatType = LLVMVectorType (LLVMInt8Type (), 16); LLVMValueRef flatRHS = convert (ctx, rhs, flatType); LLVMValueRef flatLHS = convert (ctx, lhs, flatType); pcmp = LLVMBuildICmp (builder, cmpOp, flatLHS, flatRHS, ""); retType = flatType; } values [ins->dreg] = LLVMBuildSExt (builder, pcmp, retType, ""); break; } case OP_CVTDQ2PS: { LLVMValueRef i4 = LLVMBuildBitCast (builder, lhs, sse_i4_t, ""); values [ins->dreg] = LLVMBuildSIToFP (builder, i4, sse_r4_t, dname); break; } case OP_CVTDQ2PD: { LLVMValueRef indexes [16]; indexes [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE); indexes [1] = LLVMConstInt (LLVMInt32Type (), 1, FALSE); LLVMValueRef mask = LLVMConstVector (indexes, 2); LLVMValueRef shuffle = LLVMBuildShuffleVector (builder, lhs, LLVMConstNull (LLVMTypeOf (lhs)), mask, ""); values [ins->dreg] = LLVMBuildSIToFP (builder, shuffle, LLVMVectorType (LLVMDoubleType (), 2), dname); break; } case OP_SSE2_CVTSS2SD: { LLVMValueRef rhs_elem = LLVMBuildExtractElement (builder, rhs, const_int32 (0), ""); LLVMValueRef fpext = LLVMBuildFPExt (builder, rhs_elem, LLVMDoubleType (), dname); values [ins->dreg] = LLVMBuildInsertElement (builder, lhs, fpext, const_int32 (0), ""); break; } case OP_CVTPS2PD: { LLVMValueRef indexes [16]; indexes [0] = LLVMConstInt (LLVMInt32Type (), 0, FALSE); indexes [1] = LLVMConstInt (LLVMInt32Type (), 1, FALSE); LLVMValueRef mask = LLVMConstVector (indexes, 2); LLVMValueRef shuffle = LLVMBuildShuffleVector (builder, lhs, LLVMConstNull (LLVMTypeOf (lhs)), mask, ""); values [ins->dreg] = LLVMBuildFPExt (builder, shuffle, LLVMVectorType (LLVMDoubleType (), 2), dname); break; } case OP_CVTTPS2DQ: values [ins->dreg] = LLVMBuildFPToSI (builder, lhs, LLVMVectorType (LLVMInt32Type (), 4), dname); break; case OP_CVTPD2DQ: case OP_CVTPS2DQ: case OP_CVTPD2PS: case OP_CVTTPD2DQ: { LLVMValueRef v; v = convert (ctx, values [ins->sreg1], simd_op_to_llvm_type (ins->opcode)); values [ins->dreg] = call_intrins (ctx, simd_ins_to_intrins (ins->opcode), &v, dname); break; } case OP_COMPPS: case OP_COMPPD: { LLVMRealPredicate op; switch (ins->inst_c0) { case SIMD_COMP_EQ: op = LLVMRealOEQ; break; case SIMD_COMP_LT: op = LLVMRealOLT; break; case SIMD_COMP_LE: op = LLVMRealOLE; break; case SIMD_COMP_UNORD: op = LLVMRealUNO; break; case SIMD_COMP_NEQ: op = LLVMRealUNE; break; case SIMD_COMP_NLT: op = LLVMRealUGE; break; case SIMD_COMP_NLE: op = LLVMRealUGT; break; case SIMD_COMP_ORD: op = LLVMRealORD; break; default: g_assert_not_reached (); } 
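/* Note (added annotation): the packed comparison below yields an <N x i1>; it is sign-extended to the full element width so each lane becomes all-ones or all-zeroes, matching the CMPPS/CMPPD result convention. */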
LLVMValueRef cmp = LLVMBuildFCmp (builder, op, lhs, rhs, ""); if (ins->opcode == OP_COMPPD) values [ins->dreg] = LLVMBuildBitCast (builder, LLVMBuildSExt (builder, cmp, LLVMVectorType (LLVMInt64Type (), 2), ""), LLVMTypeOf (lhs), ""); else values [ins->dreg] = LLVMBuildBitCast (builder, LLVMBuildSExt (builder, cmp, LLVMVectorType (LLVMInt32Type (), 4), ""), LLVMTypeOf (lhs), ""); break; } case OP_ICONV_TO_X: /* This is only used for implementing shifts by non-immediate */ values [ins->dreg] = lhs; break; case OP_SHUFPS: case OP_SHUFPD: case OP_PSHUFLED: case OP_PSHUFLEW_LOW: case OP_PSHUFLEW_HIGH: { int mask [16]; LLVMValueRef v1 = NULL, v2 = NULL, mask_values [16]; int i, mask_size = 0; int imask = ins->inst_c0; /* Convert the x86 shuffle mask to LLVM's */ switch (ins->opcode) { case OP_SHUFPS: mask_size = 4; mask [0] = ((imask >> 0) & 3); mask [1] = ((imask >> 2) & 3); mask [2] = ((imask >> 4) & 3) + 4; mask [3] = ((imask >> 6) & 3) + 4; v1 = values [ins->sreg1]; v2 = values [ins->sreg2]; break; case OP_SHUFPD: mask_size = 2; mask [0] = ((imask >> 0) & 1); mask [1] = ((imask >> 1) & 1) + 2; v1 = values [ins->sreg1]; v2 = values [ins->sreg2]; break; case OP_PSHUFLEW_LOW: mask_size = 8; mask [0] = ((imask >> 0) & 3); mask [1] = ((imask >> 2) & 3); mask [2] = ((imask >> 4) & 3); mask [3] = ((imask >> 6) & 3); mask [4] = 4 + 0; mask [5] = 4 + 1; mask [6] = 4 + 2; mask [7] = 4 + 3; v1 = values [ins->sreg1]; v2 = LLVMGetUndef (LLVMTypeOf (v1)); break; case OP_PSHUFLEW_HIGH: mask_size = 8; mask [0] = 0; mask [1] = 1; mask [2] = 2; mask [3] = 3; mask [4] = 4 + ((imask >> 0) & 3); mask [5] = 4 + ((imask >> 2) & 3); mask [6] = 4 + ((imask >> 4) & 3); mask [7] = 4 + ((imask >> 6) & 3); v1 = values [ins->sreg1]; v2 = LLVMGetUndef (LLVMTypeOf (v1)); break; case OP_PSHUFLED: mask_size = 4; mask [0] = ((imask >> 0) & 3); mask [1] = ((imask >> 2) & 3); mask [2] = ((imask >> 4) & 3); mask [3] = ((imask >> 6) & 3); v1 = values [ins->sreg1]; v2 = LLVMGetUndef (LLVMTypeOf (v1)); break; default: g_assert_not_reached (); } for (i = 0; i < mask_size; ++i) mask_values [i] = LLVMConstInt (LLVMInt32Type (), mask [i], FALSE); values [ins->dreg] = LLVMBuildShuffleVector (builder, v1, v2, LLVMConstVector (mask_values, mask_size), dname); break; } case OP_UNPACK_LOWB: case OP_UNPACK_LOWW: case OP_UNPACK_LOWD: case OP_UNPACK_LOWQ: case OP_UNPACK_LOWPS: case OP_UNPACK_LOWPD: case OP_UNPACK_HIGHB: case OP_UNPACK_HIGHW: case OP_UNPACK_HIGHD: case OP_UNPACK_HIGHQ: case OP_UNPACK_HIGHPS: case OP_UNPACK_HIGHPD: { int mask [16]; LLVMValueRef mask_values [16]; int i, mask_size = 0; gboolean low = FALSE; switch (ins->opcode) { case OP_UNPACK_LOWB: mask_size = 16; low = TRUE; break; case OP_UNPACK_LOWW: mask_size = 8; low = TRUE; break; case OP_UNPACK_LOWD: case OP_UNPACK_LOWPS: mask_size = 4; low = TRUE; break; case OP_UNPACK_LOWQ: case OP_UNPACK_LOWPD: mask_size = 2; low = TRUE; break; case OP_UNPACK_HIGHB: mask_size = 16; break; case OP_UNPACK_HIGHW: mask_size = 8; break; case OP_UNPACK_HIGHD: case OP_UNPACK_HIGHPS: mask_size = 4; break; case OP_UNPACK_HIGHQ: case OP_UNPACK_HIGHPD: mask_size = 2; break; default: g_assert_not_reached (); } if (low) { for (i = 0; i < (mask_size / 2); ++i) { mask [(i * 2)] = i; mask [(i * 2) + 1] = mask_size + i; } } else { for (i = 0; i < (mask_size / 2); ++i) { mask [(i * 2)] = (mask_size / 2) + i; mask [(i * 2) + 1] = mask_size + (mask_size / 2) + i; } } for (i = 0; i < mask_size; ++i) mask_values [i] = LLVMConstInt (LLVMInt32Type (), mask [i], FALSE); values [ins->dreg] = 
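/* Worked example (added annotation): OP_UNPACK_LOWB ends up with the mask { 0, 16, 1, 17, ..., 7, 23 }, i.e. the low 8 bytes of sreg1 interleaved with the low 8 bytes of sreg2, which is exactly the PUNPCKLBW behaviour. */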
LLVMBuildShuffleVector (builder, values [ins->sreg1], values [ins->sreg2], LLVMConstVector (mask_values, mask_size), dname); break; } case OP_DUPPD: { LLVMTypeRef t = simd_op_to_llvm_type (ins->opcode); LLVMValueRef v, val; v = LLVMBuildExtractElement (builder, lhs, LLVMConstInt (LLVMInt32Type (), 0, FALSE), ""); val = LLVMConstNull (t); val = LLVMBuildInsertElement (builder, val, v, LLVMConstInt (LLVMInt32Type (), 0, FALSE), ""); val = LLVMBuildInsertElement (builder, val, v, LLVMConstInt (LLVMInt32Type (), 1, FALSE), dname); values [ins->dreg] = val; break; } case OP_DUPPS_LOW: case OP_DUPPS_HIGH: { LLVMTypeRef t = simd_op_to_llvm_type (ins->opcode); LLVMValueRef v1, v2, val; if (ins->opcode == OP_DUPPS_LOW) { v1 = LLVMBuildExtractElement (builder, lhs, LLVMConstInt (LLVMInt32Type (), 0, FALSE), ""); v2 = LLVMBuildExtractElement (builder, lhs, LLVMConstInt (LLVMInt32Type (), 2, FALSE), ""); } else { v1 = LLVMBuildExtractElement (builder, lhs, LLVMConstInt (LLVMInt32Type (), 1, FALSE), ""); v2 = LLVMBuildExtractElement (builder, lhs, LLVMConstInt (LLVMInt32Type (), 3, FALSE), ""); } val = LLVMConstNull (t); val = LLVMBuildInsertElement (builder, val, v1, LLVMConstInt (LLVMInt32Type (), 0, FALSE), ""); val = LLVMBuildInsertElement (builder, val, v1, LLVMConstInt (LLVMInt32Type (), 1, FALSE), ""); val = LLVMBuildInsertElement (builder, val, v2, LLVMConstInt (LLVMInt32Type (), 2, FALSE), ""); val = LLVMBuildInsertElement (builder, val, v2, LLVMConstInt (LLVMInt32Type (), 3, FALSE), ""); values [ins->dreg] = val; break; } case OP_FCONV_TO_R8_X: { values [ins->dreg] = LLVMBuildInsertElement (builder, LLVMConstNull (sse_r8_t), lhs, LLVMConstInt (LLVMInt32Type (), 0, FALSE), ""); break; } case OP_FCONV_TO_R4_X: { values [ins->dreg] = LLVMBuildInsertElement (builder, LLVMConstNull (sse_r4_t), lhs, LLVMConstInt (LLVMInt32Type (), 0, FALSE), ""); break; }
#if defined(TARGET_X86) || defined(TARGET_AMD64)
case OP_SSE_MOVMSK: { LLVMValueRef args [1]; if (ins->inst_c1 == MONO_TYPE_R4) { args [0] = lhs; values [ins->dreg] = call_intrins (ctx, INTRINS_SSE_MOVMSK_PS, args, dname); } else if (ins->inst_c1 == MONO_TYPE_R8) { args [0] = lhs; values [ins->dreg] = call_intrins (ctx, INTRINS_SSE_MOVMSK_PD, args, dname); } else { args [0] = convert (ctx, lhs, sse_i1_t); values [ins->dreg] = call_intrins (ctx, INTRINS_SSE_PMOVMSKB, args, dname); } break; } case OP_SSE_MOVS: case OP_SSE_MOVS2: { if (ins->inst_c1 == MONO_TYPE_R4) values [ins->dreg] = LLVMBuildShuffleVector (builder, rhs, lhs, create_const_vector_4_i32 (0, 5, 6, 7), ""); else if (ins->inst_c1 == MONO_TYPE_R8) values [ins->dreg] = LLVMBuildShuffleVector (builder, rhs, lhs, create_const_vector_2_i32 (0, 3), ""); else if (ins->inst_c1 == MONO_TYPE_I8 || ins->inst_c1 == MONO_TYPE_U8) values [ins->dreg] = LLVMBuildInsertElement (builder, lhs, LLVMConstInt (LLVMInt64Type (), 0, FALSE), LLVMConstInt (LLVMInt32Type (), 1, FALSE), ""); else g_assert_not_reached (); // will be needed for other types later
break; } case OP_SSE_MOVEHL: { if (ins->inst_c1 == MONO_TYPE_R4) values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs, rhs, create_const_vector_4_i32 (6, 7, 2, 3), ""); else g_assert_not_reached (); break; } case OP_SSE_MOVELH: { if (ins->inst_c1 == MONO_TYPE_R4) values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs, rhs, create_const_vector_4_i32 (0, 1, 4, 5), ""); else g_assert_not_reached (); break; } case OP_SSE_UNPACKLO: { if (ins->inst_c1 == MONO_TYPE_R8 || ins->inst_c1 == MONO_TYPE_I8 || ins->inst_c1 == MONO_TYPE_U8) { values [ins->dreg] = 
LLVMBuildShuffleVector (builder, lhs, rhs, create_const_vector_2_i32 (0, 2), ""); } else if (ins->inst_c1 == MONO_TYPE_R4 || ins->inst_c1 == MONO_TYPE_I4 || ins->inst_c1 == MONO_TYPE_U4) { values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs, rhs, create_const_vector_4_i32 (0, 4, 1, 5), ""); } else if (ins->inst_c1 == MONO_TYPE_I2 || ins->inst_c1 == MONO_TYPE_U2) { const int mask_values [] = { 0, 8, 1, 9, 2, 10, 3, 11 }; LLVMValueRef shuffled = LLVMBuildShuffleVector (builder, convert (ctx, lhs, sse_i2_t), convert (ctx, rhs, sse_i2_t), create_const_vector_i32 (mask_values, 8), ""); values [ins->dreg] = convert (ctx, shuffled, type_to_sse_type (ins->inst_c1)); } else if (ins->inst_c1 == MONO_TYPE_I1 || ins->inst_c1 == MONO_TYPE_U1) { const int mask_values [] = { 0, 16, 1, 17, 2, 18, 3, 19, 4, 20, 5, 21, 6, 22, 7, 23 }; LLVMValueRef shuffled = LLVMBuildShuffleVector (builder, convert (ctx, lhs, sse_i1_t), convert (ctx, rhs, sse_i1_t), create_const_vector_i32 (mask_values, 16), ""); values [ins->dreg] = convert (ctx, shuffled, type_to_sse_type (ins->inst_c1)); } else { g_assert_not_reached (); } break; } case OP_SSE_UNPACKHI: { if (ins->inst_c1 == MONO_TYPE_R8 || ins->inst_c1 == MONO_TYPE_I8 || ins->inst_c1 == MONO_TYPE_U8) { values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs, rhs, create_const_vector_2_i32 (1, 3), ""); } else if (ins->inst_c1 == MONO_TYPE_R4 || ins->inst_c1 == MONO_TYPE_I4 || ins->inst_c1 == MONO_TYPE_U4) { values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs, rhs, create_const_vector_4_i32 (2, 6, 3, 7), ""); } else if (ins->inst_c1 == MONO_TYPE_I2 || ins->inst_c1 == MONO_TYPE_U2) { const int mask_values [] = { 4, 12, 5, 13, 6, 14, 7, 15 }; LLVMValueRef shuffled = LLVMBuildShuffleVector (builder, convert (ctx, lhs, sse_i2_t), convert (ctx, rhs, sse_i2_t), create_const_vector_i32 (mask_values, 8), ""); values [ins->dreg] = convert (ctx, shuffled, type_to_sse_type (ins->inst_c1)); } else if (ins->inst_c1 == MONO_TYPE_I1 || ins->inst_c1 == MONO_TYPE_U1) { const int mask_values [] = { 8, 24, 9, 25, 10, 26, 11, 27, 12, 28, 13, 29, 14, 30, 15, 31 }; LLVMValueRef shuffled = LLVMBuildShuffleVector (builder, convert (ctx, lhs, sse_i1_t), convert (ctx, rhs, sse_i1_t), create_const_vector_i32 (mask_values, 16), ""); values [ins->dreg] = convert (ctx, shuffled, type_to_sse_type (ins->inst_c1)); } else { g_assert_not_reached (); } break; } case OP_SSE_LOADU: { LLVMValueRef dst_ptr = convert (ctx, lhs, LLVMPointerType (primitive_type_to_llvm_type (inst_c1_type (ins)), 0)); LLVMValueRef dst_vec = LLVMBuildBitCast (builder, dst_ptr, LLVMPointerType (type_to_sse_type (ins->inst_c1), 0), ""); values [ins->dreg] = mono_llvm_build_aligned_load (builder, dst_vec, "", FALSE, ins->inst_c0); // inst_c0 is alignment
break; } case OP_SSE_MOVSS: { LLVMValueRef addr = convert (ctx, lhs, LLVMPointerType (LLVMFloatType (), 0)); LLVMValueRef val = mono_llvm_build_load (builder, addr, "", FALSE); values [ins->dreg] = LLVMBuildInsertElement (builder, LLVMConstNull (type_to_sse_type (ins->inst_c1)), val, LLVMConstInt (LLVMInt32Type (), 0, FALSE), ""); break; } case OP_SSE_MOVSS_STORE: { LLVMValueRef addr = convert (ctx, lhs, LLVMPointerType (LLVMFloatType (), 0)); LLVMValueRef val = LLVMBuildExtractElement (builder, rhs, LLVMConstInt (LLVMInt32Type (), 0, FALSE), ""); mono_llvm_build_store (builder, val, addr, FALSE, LLVM_BARRIER_NONE); break; } case OP_SSE2_MOVD: case OP_SSE2_MOVQ: case OP_SSE2_MOVUPD: { LLVMTypeRef rty = NULL; switch (ins->opcode) { case OP_SSE2_MOVD: rty = sse_i4_t; 
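/* Note (added annotation): OP_SSE2_MOVD/MOVQ/MOVUPD load a single element through an unaligned pointer and insert it into lane 0 of a zero vector, so the upper lanes are well-defined zeroes. */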
break; case OP_SSE2_MOVQ: rty = sse_i8_t; break; case OP_SSE2_MOVUPD: rty = sse_r8_t; break; } LLVMTypeRef srcty = LLVMGetElementType (rty); LLVMValueRef zero = LLVMConstNull (rty); LLVMValueRef addr = convert (ctx, lhs, LLVMPointerType (srcty, 0)); LLVMValueRef val = mono_llvm_build_aligned_load (builder, addr, "", FALSE, 1); values [ins->dreg] = LLVMBuildInsertElement (builder, zero, val, const_int32 (0), dname); break; } case OP_SSE_MOVLPS_LOAD: case OP_SSE_MOVHPS_LOAD: { LLVMTypeRef t = LLVMFloatType (); int size = 4; gboolean high = ins->opcode == OP_SSE_MOVHPS_LOAD; /* Load two floats from rhs and store them in the low/high part of lhs */ LLVMValueRef addr = rhs; LLVMValueRef addr1 = convert (ctx, addr, LLVMPointerType (t, 0)); LLVMValueRef addr2 = convert (ctx, LLVMBuildAdd (builder, convert (ctx, addr, IntPtrType ()), convert (ctx, LLVMConstInt (LLVMInt32Type (), size, FALSE), IntPtrType ()), ""), LLVMPointerType (t, 0)); LLVMValueRef val1 = mono_llvm_build_load (builder, addr1, "", FALSE); LLVMValueRef val2 = mono_llvm_build_load (builder, addr2, "", FALSE); int index1, index2; index1 = high ? 2 : 0; index2 = high ? 3 : 1; values [ins->dreg] = LLVMBuildInsertElement (builder, LLVMBuildInsertElement (builder, lhs, val1, LLVMConstInt (LLVMInt32Type (), index1, FALSE), ""), val2, LLVMConstInt (LLVMInt32Type (), index2, FALSE), ""); break; } case OP_SSE2_MOVLPD_LOAD: case OP_SSE2_MOVHPD_LOAD: { LLVMTypeRef t = LLVMDoubleType (); LLVMValueRef addr = convert (ctx, rhs, LLVMPointerType (t, 0)); LLVMValueRef val = mono_llvm_build_load (builder, addr, "", FALSE); int index = ins->opcode == OP_SSE2_MOVHPD_LOAD ? 1 : 0; values [ins->dreg] = LLVMBuildInsertElement (builder, lhs, val, const_int32 (index), ""); break; } case OP_SSE_MOVLPS_STORE: case OP_SSE_MOVHPS_STORE: { /* Store two floats from the low/high part of rhs into lhs */ LLVMValueRef addr = lhs; LLVMValueRef addr1 = convert (ctx, addr, LLVMPointerType (LLVMFloatType (), 0)); LLVMValueRef addr2 = convert (ctx, LLVMBuildAdd (builder, convert (ctx, addr, IntPtrType ()), convert (ctx, LLVMConstInt (LLVMInt32Type (), 4, FALSE), IntPtrType ()), ""), LLVMPointerType (LLVMFloatType (), 0)); int index1 = ins->opcode == OP_SSE_MOVLPS_STORE ? 0 : 2; int index2 = ins->opcode == OP_SSE_MOVLPS_STORE ? 1 : 3; LLVMValueRef val1 = LLVMBuildExtractElement (builder, rhs, LLVMConstInt (LLVMInt32Type (), index1, FALSE), ""); LLVMValueRef val2 = LLVMBuildExtractElement (builder, rhs, LLVMConstInt (LLVMInt32Type (), index2, FALSE), ""); mono_llvm_build_store (builder, val1, addr1, FALSE, LLVM_BARRIER_NONE); mono_llvm_build_store (builder, val2, addr2, FALSE, LLVM_BARRIER_NONE); break; } case OP_SSE2_MOVLPD_STORE: case OP_SSE2_MOVHPD_STORE: { LLVMTypeRef t = LLVMDoubleType (); LLVMValueRef addr = convert (ctx, lhs, LLVMPointerType (t, 0)); int index = ins->opcode == OP_SSE2_MOVHPD_STORE ? 
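/* Note (added annotation): index selects which double of rhs is stored: 1 for MOVHPD (high lane), 0 for MOVLPD (low lane). */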
1 : 0; LLVMValueRef val = LLVMBuildExtractElement (builder, rhs, const_int32 (index), ""); mono_llvm_build_store (builder, val, addr, FALSE, LLVM_BARRIER_NONE); break; } case OP_SSE_STORE: { LLVMValueRef dst_vec = convert (ctx, lhs, LLVMPointerType (LLVMTypeOf (rhs), 0)); mono_llvm_build_aligned_store (builder, rhs, dst_vec, FALSE, ins->inst_c0); break; } case OP_SSE_STORES: { LLVMValueRef first_elem = LLVMBuildExtractElement (builder, rhs, LLVMConstInt (LLVMInt32Type (), 0, FALSE), ""); LLVMValueRef dst = convert (ctx, lhs, LLVMPointerType (LLVMTypeOf (first_elem), 0)); mono_llvm_build_aligned_store (builder, first_elem, dst, FALSE, 1); break; } case OP_SSE_MOVNTPS: { LLVMValueRef addr = convert (ctx, lhs, LLVMPointerType (LLVMTypeOf (rhs), 0)); LLVMValueRef store = mono_llvm_build_aligned_store (builder, rhs, addr, FALSE, ins->inst_c0); set_nontemporal_flag (store); break; } case OP_SSE_PREFETCHT0: { LLVMValueRef addr = convert (ctx, lhs, LLVMPointerType (LLVMInt8Type (), 0)); LLVMValueRef args [] = { addr, const_int32 (0), const_int32 (3), const_int32 (1) }; call_intrins (ctx, INTRINS_PREFETCH, args, ""); break; } case OP_SSE_PREFETCHT1: { LLVMValueRef addr = convert (ctx, lhs, LLVMPointerType (LLVMInt8Type (), 0)); LLVMValueRef args [] = { addr, const_int32 (0), const_int32 (2), const_int32 (1) }; call_intrins (ctx, INTRINS_PREFETCH, args, ""); break; } case OP_SSE_PREFETCHT2: { LLVMValueRef addr = convert (ctx, lhs, LLVMPointerType (LLVMInt8Type (), 0)); LLVMValueRef args [] = { addr, const_int32 (0), const_int32 (1), const_int32 (1) }; call_intrins (ctx, INTRINS_PREFETCH, args, ""); break; } case OP_SSE_PREFETCHNTA: { LLVMValueRef addr = convert (ctx, lhs, LLVMPointerType (LLVMInt8Type (), 0)); LLVMValueRef args [] = { addr, const_int32 (0), const_int32 (0), const_int32 (1) }; call_intrins (ctx, INTRINS_PREFETCH, args, ""); break; } case OP_SSE_OR: { LLVMValueRef vec_lhs_i64 = convert (ctx, lhs, sse_i8_t); LLVMValueRef vec_rhs_i64 = convert (ctx, rhs, sse_i8_t); LLVMValueRef vec_and = LLVMBuildOr (builder, vec_lhs_i64, vec_rhs_i64, ""); values [ins->dreg] = LLVMBuildBitCast (builder, vec_and, type_to_sse_type (ins->inst_c1), ""); break; } case OP_SSE_XOR: { LLVMValueRef vec_lhs_i64 = convert (ctx, lhs, sse_i8_t); LLVMValueRef vec_rhs_i64 = convert (ctx, rhs, sse_i8_t); LLVMValueRef vec_and = LLVMBuildXor (builder, vec_lhs_i64, vec_rhs_i64, ""); values [ins->dreg] = LLVMBuildBitCast (builder, vec_and, type_to_sse_type (ins->inst_c1), ""); break; } case OP_SSE_AND: { LLVMValueRef vec_lhs_i64 = convert (ctx, lhs, sse_i8_t); LLVMValueRef vec_rhs_i64 = convert (ctx, rhs, sse_i8_t); LLVMValueRef vec_and = LLVMBuildAnd (builder, vec_lhs_i64, vec_rhs_i64, ""); values [ins->dreg] = LLVMBuildBitCast (builder, vec_and, type_to_sse_type (ins->inst_c1), ""); break; } case OP_SSE_ANDN: { LLVMValueRef minus_one [2]; minus_one [0] = LLVMConstInt (LLVMInt64Type (), -1, FALSE); minus_one [1] = LLVMConstInt (LLVMInt64Type (), -1, FALSE); LLVMValueRef vec_lhs_i64 = convert (ctx, lhs, sse_i8_t); LLVMValueRef vec_xor = LLVMBuildXor (builder, vec_lhs_i64, LLVMConstVector (minus_one, 2), ""); LLVMValueRef vec_rhs_i64 = convert (ctx, rhs, sse_i8_t); LLVMValueRef vec_and = LLVMBuildAnd (builder, vec_rhs_i64, vec_xor, ""); values [ins->dreg] = LLVMBuildBitCast (builder, vec_and, type_to_sse_type (ins->inst_c1), ""); break; } case OP_SSE_ADDSS: case OP_SSE_SUBSS: case OP_SSE_DIVSS: case OP_SSE_MULSS: case OP_SSE2_ADDSD: case OP_SSE2_SUBSD: case OP_SSE2_DIVSD: case OP_SSE2_MULSD: { LLVMValueRef v1 = 
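/* Note (added annotation): the scalar SSE arithmetic ops (ADDSS/SUBSS/DIVSS/MULSS and the SD forms) extract lane 0 of both operands, apply the scalar FP operation, and reinsert the result into lane 0 of lhs; the upper lanes of lhs are left untouched, matching the hardware semantics. */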
LLVMBuildExtractElement (builder, lhs, LLVMConstInt (LLVMInt32Type (), 0, FALSE), ""); LLVMValueRef v2 = LLVMBuildExtractElement (builder, rhs, LLVMConstInt (LLVMInt32Type (), 0, FALSE), ""); LLVMValueRef v = NULL; switch (ins->opcode) { case OP_SSE_ADDSS: case OP_SSE2_ADDSD: v = LLVMBuildFAdd (builder, v1, v2, ""); break; case OP_SSE_SUBSS: case OP_SSE2_SUBSD: v = LLVMBuildFSub (builder, v1, v2, ""); break; case OP_SSE_DIVSS: case OP_SSE2_DIVSD: v = LLVMBuildFDiv (builder, v1, v2, ""); break; case OP_SSE_MULSS: case OP_SSE2_MULSD: v = LLVMBuildFMul (builder, v1, v2, ""); break; default: g_assert_not_reached (); } values [ins->dreg] = LLVMBuildInsertElement (builder, lhs, v, LLVMConstInt (LLVMInt32Type (), 0, FALSE), ""); break; } case OP_SSE_CMPSS: case OP_SSE2_CMPSD: { int imm = -1; gboolean swap = FALSE; switch (ins->inst_c0) { case CMP_EQ: imm = SSE_eq_ord_nosignal; break; case CMP_GT: imm = SSE_lt_ord_signal; swap = TRUE; break; case CMP_GE: imm = SSE_le_ord_signal; swap = TRUE; break; case CMP_LT: imm = SSE_lt_ord_signal; break; case CMP_LE: imm = SSE_le_ord_signal; break; case CMP_GT_UN: imm = SSE_nle_unord_signal; break; case CMP_GE_UN: imm = SSE_nlt_unord_signal; break; case CMP_LT_UN: imm = SSE_nle_unord_signal; swap = TRUE; break; case CMP_LE_UN: imm = SSE_nlt_unord_signal; swap = TRUE; break; case CMP_NE: imm = SSE_neq_unord_nosignal; break; case CMP_ORD: imm = SSE_ord_nosignal; break; case CMP_UNORD: imm = SSE_unord_nosignal; break; default: g_assert_not_reached (); break; } LLVMValueRef cmp = LLVMConstInt (LLVMInt8Type (), imm, FALSE); LLVMValueRef args [] = { lhs, rhs, cmp }; if (swap) { args [0] = rhs; args [1] = lhs; } IntrinsicId id = (IntrinsicId) 0; switch (ins->opcode) { case OP_SSE_CMPSS: id = INTRINS_SSE_CMPSS; break; case OP_SSE2_CMPSD: id = INTRINS_SSE_CMPSD; break; default: g_assert_not_reached (); break; } int elements = LLVMGetVectorSize (LLVMTypeOf (lhs)); int mask_values [MAX_VECTOR_ELEMS] = { 0 }; for (int i = 1; i < elements; ++i) { mask_values [i] = elements + i; } LLVMValueRef result = call_intrins (ctx, id, args, ""); result = LLVMBuildShuffleVector (builder, result, lhs, create_const_vector_i32 (mask_values, elements), ""); values [ins->dreg] = result; break; } case OP_SSE_COMISS: { LLVMValueRef args [] = { lhs, rhs }; IntrinsicId id = (IntrinsicId)0; switch (ins->inst_c0) { case CMP_EQ: id = INTRINS_SSE_COMIEQ_SS; break; case CMP_GT: id = INTRINS_SSE_COMIGT_SS; break; case CMP_GE: id = INTRINS_SSE_COMIGE_SS; break; case CMP_LT: id = INTRINS_SSE_COMILT_SS; break; case CMP_LE: id = INTRINS_SSE_COMILE_SS; break; case CMP_NE: id = INTRINS_SSE_COMINEQ_SS; break; default: g_assert_not_reached (); break; } values [ins->dreg] = call_intrins (ctx, id, args, ""); break; } case OP_SSE_UCOMISS: { LLVMValueRef args [] = { lhs, rhs }; IntrinsicId id = (IntrinsicId)0; switch (ins->inst_c0) { case CMP_EQ: id = INTRINS_SSE_UCOMIEQ_SS; break; case CMP_GT: id = INTRINS_SSE_UCOMIGT_SS; break; case CMP_GE: id = INTRINS_SSE_UCOMIGE_SS; break; case CMP_LT: id = INTRINS_SSE_UCOMILT_SS; break; case CMP_LE: id = INTRINS_SSE_UCOMILE_SS; break; case CMP_NE: id = INTRINS_SSE_UCOMINEQ_SS; break; default: g_assert_not_reached (); break; } values [ins->dreg] = call_intrins (ctx, id, args, ""); break; } case OP_SSE2_COMISD: { LLVMValueRef args [] = { lhs, rhs }; IntrinsicId id = (IntrinsicId)0; switch (ins->inst_c0) { case CMP_EQ: id = INTRINS_SSE_COMIEQ_SD; break; case CMP_GT: id = INTRINS_SSE_COMIGT_SD; break; case CMP_GE: id = INTRINS_SSE_COMIGE_SD; break; case CMP_LT: id = 
INTRINS_SSE_COMILT_SD; break; case CMP_LE: id = INTRINS_SSE_COMILE_SD; break; case CMP_NE: id = INTRINS_SSE_COMINEQ_SD; break; default: g_assert_not_reached (); break; } values [ins->dreg] = call_intrins (ctx, id, args, ""); break; } case OP_SSE2_UCOMISD: { LLVMValueRef args [] = { lhs, rhs }; IntrinsicId id = (IntrinsicId)0; switch (ins->inst_c0) { case CMP_EQ: id = INTRINS_SSE_UCOMIEQ_SD; break; case CMP_GT: id = INTRINS_SSE_UCOMIGT_SD; break; case CMP_GE: id = INTRINS_SSE_UCOMIGE_SD; break; case CMP_LT: id = INTRINS_SSE_UCOMILT_SD; break; case CMP_LE: id = INTRINS_SSE_UCOMILE_SD; break; case CMP_NE: id = INTRINS_SSE_UCOMINEQ_SD; break; default: g_assert_not_reached (); break; } values [ins->dreg] = call_intrins (ctx, id, args, ""); break; } case OP_SSE_CVTSI2SS: case OP_SSE_CVTSI2SS64: case OP_SSE2_CVTSI2SD: case OP_SSE2_CVTSI2SD64: { LLVMTypeRef ty = LLVMFloatType (); switch (ins->opcode) { case OP_SSE2_CVTSI2SD: case OP_SSE2_CVTSI2SD64: ty = LLVMDoubleType (); break; } LLVMValueRef fp = LLVMBuildSIToFP (builder, rhs, ty, ""); values [ins->dreg] = LLVMBuildInsertElement (builder, lhs, fp, const_int32 (0), dname); break; } case OP_SSE2_PMULUDQ: { LLVMValueRef i32_max = LLVMConstInt (LLVMInt64Type (), UINT32_MAX, FALSE); LLVMValueRef maskvals [] = { i32_max, i32_max }; LLVMValueRef mask = LLVMConstVector (maskvals, 2); LLVMValueRef l = LLVMBuildAnd (builder, convert (ctx, lhs, sse_i8_t), mask, ""); LLVMValueRef r = LLVMBuildAnd (builder, convert (ctx, rhs, sse_i8_t), mask, ""); values [ins->dreg] = LLVMBuildNUWMul (builder, l, r, dname); break; } case OP_SSE_SQRTSS: case OP_SSE2_SQRTSD: { LLVMValueRef upper = values [ins->sreg1]; LLVMValueRef lower = values [ins->sreg2]; LLVMValueRef scalar = LLVMBuildExtractElement (builder, lower, const_int32 (0), ""); LLVMValueRef result = call_intrins (ctx, simd_ins_to_intrins (ins->opcode), &scalar, dname); values [ins->dreg] = LLVMBuildInsertElement (builder, upper, result, const_int32 (0), ""); break; } case OP_SSE_RCPSS: case OP_SSE_RSQRTSS: { IntrinsicId id = (IntrinsicId)0; switch (ins->opcode) { case OP_SSE_RCPSS: id = INTRINS_SSE_RCP_SS; break; case OP_SSE_RSQRTSS: id = INTRINS_SSE_RSQRT_SS; break; default: g_assert_not_reached (); break; }; LLVMValueRef result = call_intrins (ctx, id, &rhs, dname); const int mask[] = { 0, 5, 6, 7 }; LLVMValueRef shufmask = create_const_vector_i32 (mask, 4); values [ins->dreg] = LLVMBuildShuffleVector (builder, result, lhs, shufmask, ""); break; } case OP_XOP: { IntrinsicId id = (IntrinsicId)ins->inst_c0; call_intrins (ctx, id, NULL, ""); break; } case OP_XOP_X_I: case OP_XOP_X_X: case OP_XOP_I4_X: case OP_XOP_I8_X: case OP_XOP_X_X_X: case OP_XOP_X_X_I4: case OP_XOP_X_X_I8: { IntrinsicId id = (IntrinsicId)ins->inst_c0; LLVMValueRef args [] = { lhs, rhs }; values [ins->dreg] = call_intrins (ctx, id, args, ""); break; } case OP_XOP_I4_X_X: { gboolean to_i8_t = FALSE; gboolean ret_bool = FALSE; IntrinsicId id = (IntrinsicId)ins->inst_c0; switch (ins->inst_c0) { case INTRINS_SSE_TESTC: to_i8_t = TRUE; ret_bool = TRUE; break; case INTRINS_SSE_TESTZ: to_i8_t = TRUE; ret_bool = TRUE; break; case INTRINS_SSE_TESTNZ: to_i8_t = TRUE; ret_bool = TRUE; break; default: g_assert_not_reached (); break; } LLVMValueRef args [] = { lhs, rhs }; if (to_i8_t) { args [0] = convert (ctx, args [0], sse_i8_t); args [1] = convert (ctx, args [1], sse_i8_t); } LLVMValueRef call = call_intrins (ctx, id, args, ""); if (ret_bool) { // if return type is bool (it's still i32) we need to normalize it to 1/0
LLVMValueRef cmp_zero = 
LLVMBuildICmp (builder, LLVMIntNE, call, LLVMConstInt (LLVMInt32Type (), 0, FALSE), ""); values [ins->dreg] = LLVMBuildZExt (builder, cmp_zero, LLVMInt8Type (), ""); } else { values [ins->dreg] = call; } break; } case OP_SSE2_MASKMOVDQU: { LLVMTypeRef i8ptr = LLVMPointerType (LLVMInt8Type (), 0); LLVMValueRef dstaddr = convert (ctx, values [ins->sreg3], i8ptr); LLVMValueRef src = convert (ctx, lhs, sse_i1_t); LLVMValueRef mask = convert (ctx, rhs, sse_i1_t); LLVMValueRef args[] = { src, mask, dstaddr }; call_intrins (ctx, INTRINS_SSE_MASKMOVDQU, args, ""); break; } case OP_PADDB_SAT: case OP_PADDW_SAT: case OP_PSUBB_SAT: case OP_PSUBW_SAT: case OP_PADDB_SAT_UN: case OP_PADDW_SAT_UN: case OP_PSUBB_SAT_UN: case OP_PSUBW_SAT_UN: case OP_SSE2_ADDS: case OP_SSE2_SUBS: { IntrinsicId id = (IntrinsicId)0; int type = 0; gboolean is_add = TRUE; switch (ins->opcode) { case OP_PADDB_SAT: type = MONO_TYPE_I1; break; case OP_PADDW_SAT: type = MONO_TYPE_I2; break; case OP_PSUBB_SAT: type = MONO_TYPE_I1; is_add = FALSE; break; case OP_PSUBW_SAT: type = MONO_TYPE_I2; is_add = FALSE; break; case OP_PADDB_SAT_UN: type = MONO_TYPE_U1; break; case OP_PADDW_SAT_UN: type = MONO_TYPE_U2; break; case OP_PSUBB_SAT_UN: type = MONO_TYPE_U1; is_add = FALSE; break; case OP_PSUBW_SAT_UN: type = MONO_TYPE_U2; is_add = FALSE; break; case OP_SSE2_ADDS: type = ins->inst_c1; break; case OP_SSE2_SUBS: type = ins->inst_c1; is_add = FALSE; break; default: g_assert_not_reached (); } if (is_add) { switch (type) { case MONO_TYPE_I1: id = INTRINS_SSE_SADD_SATI8; break; case MONO_TYPE_U1: id = INTRINS_SSE_UADD_SATI8; break; case MONO_TYPE_I2: id = INTRINS_SSE_SADD_SATI16; break; case MONO_TYPE_U2: id = INTRINS_SSE_UADD_SATI16; break; default: g_assert_not_reached (); break; } } else { switch (type) { case MONO_TYPE_I1: id = INTRINS_SSE_SSUB_SATI8; break; case MONO_TYPE_U1: id = INTRINS_SSE_USUB_SATI8; break; case MONO_TYPE_I2: id = INTRINS_SSE_SSUB_SATI16; break; case MONO_TYPE_U2: id = INTRINS_SSE_USUB_SATI16; break; default: g_assert_not_reached (); break; } } LLVMTypeRef vecty = type_to_sse_type (type); LLVMValueRef args [] = { convert (ctx, lhs, vecty), convert (ctx, rhs, vecty) }; LLVMValueRef result = call_intrins (ctx, id, args, dname); values [ins->dreg] = convert (ctx, result, vecty); break; } case OP_SSE2_PACKUS: { LLVMValueRef args [2]; args [0] = convert (ctx, lhs, sse_i2_t); args [1] = convert (ctx, rhs, sse_i2_t); values [ins->dreg] = convert (ctx, call_intrins (ctx, INTRINS_SSE_PACKUSWB, args, dname), type_to_sse_type (ins->inst_c1)); break; } case OP_SSE2_SRLI: { LLVMValueRef args [] = { lhs, rhs }; values [ins->dreg] = convert (ctx, call_intrins (ctx, INTRINS_SSE_PSRLI_W, args, dname), type_to_sse_type (ins->inst_c1)); break; } case OP_SSE2_PSLLDQ: case OP_SSE2_PSRLDQ: { LLVMBasicBlockRef bbs [16 + 1]; LLVMValueRef switch_ins; LLVMValueRef value = lhs; LLVMValueRef index = rhs; LLVMValueRef phi_values [16 + 1]; LLVMTypeRef t = sse_i1_t; int nelems = 16; int i; gboolean shift_right = (ins->opcode == OP_SSE2_PSRLDQ); value = convert (ctx, value, t); // No corresponding LLVM intrinsics
// FIXME: Optimize const count
for (i = 0; i < nelems; ++i) bbs [i] = gen_bb (ctx, "PSLLDQ_CASE_BB"); bbs [nelems] = gen_bb (ctx, "PSLLDQ_DEF_BB"); cbb = gen_bb (ctx, "PSLLDQ_COND_BB"); switch_ins = LLVMBuildSwitch (builder, index, bbs [nelems], 0); for (i = 0; i < nelems; ++i) { LLVMAddCase (switch_ins, LLVMConstInt (LLVMInt32Type (), i, FALSE), bbs [i]); LLVMPositionBuilderAtEnd (builder, bbs [i]); int mask_values [16]; // Implement shift using a shuffle
if (shift_right) { for (int j = 0; j < nelems - i; ++j) mask_values [j] = i + j; for (int j = nelems - i; j < nelems; ++j) mask_values [j] = nelems; } else { for (int j = 0; j < i; ++j) mask_values [j] = nelems; for (int j = 0; j < nelems - i; ++j) mask_values [j + i] = j; } phi_values [i] = LLVMBuildShuffleVector (builder, value, LLVMGetUndef (t), create_const_vector_i32 (mask_values, nelems), ""); LLVMBuildBr (builder, cbb); } /* Default case */ LLVMPositionBuilderAtEnd (builder, bbs [nelems]); phi_values [nelems] = LLVMConstNull (t); LLVMBuildBr (builder, cbb); LLVMPositionBuilderAtEnd (builder, cbb); values [ins->dreg] = LLVMBuildPhi (builder, LLVMTypeOf (phi_values [0]), ""); LLVMAddIncoming (values [ins->dreg], phi_values, bbs, nelems + 1); values [ins->dreg] = convert (ctx, values [ins->dreg], type_to_sse_type (ins->inst_c1)); ctx->bblocks [bb->block_num].end_bblock = cbb; break; } case OP_SSE2_PSRAW_IMM: case OP_SSE2_PSRAD_IMM: case OP_SSE2_PSRLW_IMM: case OP_SSE2_PSRLD_IMM: case OP_SSE2_PSRLQ_IMM: { LLVMValueRef value = lhs; LLVMValueRef index = rhs; IntrinsicId id; // FIXME: Optimize const index case
/* Use the non-immediate version */ switch (ins->opcode) { case OP_SSE2_PSRAW_IMM: id = INTRINS_SSE_PSRA_W; break; case OP_SSE2_PSRAD_IMM: id = INTRINS_SSE_PSRA_D; break; case OP_SSE2_PSRLW_IMM: id = INTRINS_SSE_PSRL_W; break; case OP_SSE2_PSRLD_IMM: id = INTRINS_SSE_PSRL_D; break; case OP_SSE2_PSRLQ_IMM: id = INTRINS_SSE_PSRL_Q; break; default: g_assert_not_reached (); break; } LLVMTypeRef t = LLVMTypeOf (value); LLVMValueRef index_vect = LLVMBuildInsertElement (builder, LLVMConstNull (t), convert (ctx, index, LLVMGetElementType (t)), const_int32 (0), ""); LLVMValueRef args [] = { value, index_vect }; values [ins->dreg] = call_intrins (ctx, id, args, ""); break; } case OP_SSE_SHUFPS: case OP_SSE2_SHUFPD: case OP_SSE2_PSHUFD: case OP_SSE2_PSHUFHW: case OP_SSE2_PSHUFLW: { LLVMTypeRef ret_t = LLVMTypeOf (lhs); LLVMValueRef l = lhs; LLVMValueRef r = rhs; LLVMValueRef ctl = arg3; const char *oname = ""; int ncases = 0; switch (ins->opcode) { case OP_SSE_SHUFPS: ncases = 256; break; case OP_SSE2_SHUFPD: ncases = 4; break; case OP_SSE2_PSHUFD: case OP_SSE2_PSHUFHW: case OP_SSE2_PSHUFLW: ncases = 256; r = lhs; ctl = rhs; break; } switch (ins->opcode) { case OP_SSE_SHUFPS: oname = "sse_shufps"; break; case OP_SSE2_SHUFPD: oname = "sse2_shufpd"; break; case OP_SSE2_PSHUFD: oname = "sse2_pshufd"; break; case OP_SSE2_PSHUFHW: oname = "sse2_pshufhw"; break; case OP_SSE2_PSHUFLW: oname = "sse2_pshuflw"; break; } ctl = LLVMBuildAnd (builder, ctl, const_int32 (ncases - 1), ""); ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, ncases, ctl, ret_t, oname); int mask_values [8]; int mask_len = 0; int i = 0; while (immediate_unroll_next (&ictx, &i)) { switch (ins->opcode) { case OP_SSE_SHUFPS: mask_len = 4; mask_values [0] = ((i >> 0) & 0x3) + 0; // take two elements from lhs
mask_values [1] = ((i >> 2) & 0x3) + 0; mask_values [2] = ((i >> 4) & 0x3) + 4; // and two from rhs
mask_values [3] = ((i >> 6) & 0x3) + 4; break; case OP_SSE2_SHUFPD: mask_len = 2; mask_values [0] = ((i >> 0) & 0x1) + 0; mask_values [1] = ((i >> 1) & 0x1) + 2; break; case OP_SSE2_PSHUFD: /* * Each 2 bits in mask selects 1 dword from the source and copies it to the * destination. 
*/ mask_len = 4; for (int j = 0; j < 4; ++j) { int windex = (i >> (j * 2)) & 0x3; mask_values [j] = windex; } break; case OP_SSE2_PSHUFHW: /* * Each 2 bits in mask selects 1 word from the high quadword of the source and copies it to the * high quadword of the destination. */ mask_len = 8; /* The low quadword stays the same */ for (int j = 0; j < 4; ++j) mask_values [j] = j; for (int j = 0; j < 4; ++j) { int windex = (i >> (j * 2)) & 0x3; mask_values [j + 4] = 4 + windex; } break; case OP_SSE2_PSHUFLW: mask_len = 8; /* The high quadword stays the same */ for (int j = 0; j < 4; ++j) mask_values [j + 4] = j + 4; for (int j = 0; j < 4; ++j) { int windex = (i >> (j * 2)) & 0x3; mask_values [j] = windex; } break; } LLVMValueRef mask = create_const_vector_i32 (mask_values, mask_len); LLVMValueRef result = LLVMBuildShuffleVector (builder, l, r, mask, oname); immediate_unroll_commit (&ictx, i, result); } immediate_unroll_default (&ictx); immediate_unroll_commit_default (&ictx, LLVMGetUndef (ret_t)); values [ins->dreg] = immediate_unroll_end (&ictx, &cbb); break; } case OP_SSE3_MOVDDUP: { int mask [] = { 0, 0 }; values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs, LLVMGetUndef (LLVMTypeOf (lhs)), create_const_vector_i32 (mask, 2), ""); break; } case OP_SSE3_MOVDDUP_MEM: { LLVMValueRef undef = LLVMGetUndef (v128_r8_t); LLVMValueRef addr = convert (ctx, lhs, LLVMPointerType (r8_t, 0)); LLVMValueRef elem = mono_llvm_build_aligned_load (builder, addr, "sse3_movddup_mem", FALSE, 1); LLVMValueRef val = LLVMBuildInsertElement (builder, undef, elem, const_int32 (0), "sse3_movddup_mem"); values [ins->dreg] = LLVMBuildShuffleVector (builder, val, undef, LLVMConstNull (LLVMVectorType (i4_t, 2)), "sse3_movddup_mem"); break; } case OP_SSE3_MOVSHDUP: { int mask [] = { 1, 1, 3, 3 }; values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs, LLVMConstNull (LLVMTypeOf (lhs)), create_const_vector_i32 (mask, 4), ""); break; } case OP_SSE3_MOVSLDUP: { int mask [] = { 0, 0, 2, 2 }; values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs, LLVMConstNull (LLVMTypeOf (lhs)), create_const_vector_i32 (mask, 4), ""); break; } case OP_SSSE3_SHUFFLE: { LLVMValueRef args [] = { lhs, rhs }; values [ins->dreg] = call_intrins (ctx, INTRINS_SSE_PSHUFB, args, dname); break; } case OP_SSSE3_ABS: { // %sub = sub <16 x i8> zeroinitializer, %arg
// %cmp = icmp sgt <16 x i8> %arg, zeroinitializer
// %abs = select <16 x i1> %cmp, <16 x i8> %arg, <16 x i8> %sub
LLVMTypeRef typ = type_to_sse_type (ins->inst_c1); LLVMValueRef sub = LLVMBuildSub(builder, LLVMConstNull(typ), lhs, ""); LLVMValueRef cmp = LLVMBuildICmp(builder, LLVMIntSGT, lhs, LLVMConstNull(typ), ""); LLVMValueRef abs = LLVMBuildSelect (builder, cmp, lhs, sub, ""); values [ins->dreg] = convert (ctx, abs, typ); break; } case OP_SSSE3_ALIGNR: { LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass); LLVMValueRef zero = LLVMConstNull (v128_i1_t); LLVMValueRef hivec = convert (ctx, lhs, v128_i1_t); LLVMValueRef lovec = convert (ctx, rhs, v128_i1_t); LLVMValueRef rshift_amount = convert (ctx, arg3, i1_t); ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, 32, rshift_amount, v128_i1_t, "ssse3_alignr"); LLVMValueRef mask_values [16]; // 128-bit vector, 8-bit elements, 16 total elements
int i = 0; while (immediate_unroll_next (&ictx, &i)) { LLVMValueRef hi = NULL; LLVMValueRef lo = NULL; if (i <= 16) { for (int j = 0; j < 16; j++) mask_values [j] = const_int32 (i + j); lo = lovec; hi = hivec; } else { for (int j = 0; j < 16; j++) mask_values [j] = const_int32 (i + 
j - 16); lo = hivec; hi = zero; } LLVMValueRef shuffled = LLVMBuildShuffleVector (builder, lo, hi, LLVMConstVector (mask_values, 16), "ssse3_alignr"); immediate_unroll_commit (&ictx, i, shuffled); } immediate_unroll_default (&ictx); immediate_unroll_commit_default (&ictx, zero); LLVMValueRef result = immediate_unroll_end (&ictx, &cbb); values [ins->dreg] = convert (ctx, result, ret_t); break; } case OP_SSE41_ROUNDP: { LLVMValueRef args [] = { lhs, LLVMConstInt (LLVMInt32Type (), ins->inst_c0, FALSE) }; values [ins->dreg] = call_intrins (ctx, ins->inst_c1 == MONO_TYPE_R4 ? INTRINS_SSE_ROUNDPS : INTRINS_SSE_ROUNDPD, args, dname); break; } case OP_SSE41_ROUNDS: { LLVMValueRef args [3]; args [0] = lhs; args [1] = rhs; args [2] = LLVMConstInt (LLVMInt32Type (), ins->inst_c0, FALSE); values [ins->dreg] = call_intrins (ctx, ins->inst_c1 == MONO_TYPE_R4 ? INTRINS_SSE_ROUNDSS : INTRINS_SSE_ROUNDSD, args, dname); break; } case OP_SSE41_DPPS: case OP_SSE41_DPPD: { /* Bits 0, 1, 4, 5 are meaningful for the control mask * in dppd; all bits are meaningful for dpps. */ LLVMTypeRef ret_t = NULL; LLVMValueRef mask = NULL; int mask_bits = 0; int high_shift = 0; int low_mask = 0; IntrinsicId iid = (IntrinsicId) 0; const char *oname = ""; switch (ins->opcode) { case OP_SSE41_DPPS: ret_t = v128_r4_t; mask = const_int8 (0xff); // 0b11111111
mask_bits = 8; high_shift = 4; low_mask = 0xf; iid = INTRINS_SSE_DPPS; oname = "sse41_dpps"; break; case OP_SSE41_DPPD: ret_t = v128_r8_t; mask = const_int8 (0x33); // 0b00110011
mask_bits = 4; high_shift = 2; low_mask = 0x3; iid = INTRINS_SSE_DPPD; oname = "sse41_dppd"; break; } LLVMValueRef args [] = { lhs, rhs, NULL }; LLVMValueRef index = LLVMBuildAnd (builder, convert (ctx, arg3, i1_t), mask, oname); ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, 1 << mask_bits, index, ret_t, oname); int i = 0; while (immediate_unroll_next (&ictx, &i)) { int imm = ((i >> high_shift) << 4) | (i & low_mask); args [2] = const_int8 (imm); LLVMValueRef result = call_intrins (ctx, iid, args, dname); immediate_unroll_commit (&ictx, imm, result); } immediate_unroll_default (&ictx); immediate_unroll_commit_default (&ictx, LLVMGetUndef (ret_t)); values [ins->dreg] = immediate_unroll_end (&ictx, &cbb); break; } case OP_SSE41_MPSADBW: { LLVMValueRef args [] = { convert (ctx, lhs, sse_i1_t), convert (ctx, rhs, sse_i1_t), NULL, }; LLVMValueRef ctl = convert (ctx, arg3, i1_t); // Only 3 bits (bits 0-2) are used by mpsadbw and llvm.x86.sse41.mpsadbw
int used_bits = 0x7; ctl = LLVMBuildAnd (builder, ctl, const_int8 (used_bits), "sse41_mpsadbw"); ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, used_bits + 1, ctl, v128_i2_t, "sse41_mpsadbw"); int i = 0; while (immediate_unroll_next (&ictx, &i)) { args [2] = const_int8 (i); LLVMValueRef result = call_intrins (ctx, INTRINS_SSE_MPSADBW, args, "sse41_mpsadbw"); immediate_unroll_commit (&ictx, i, result); } immediate_unroll_unreachable_default (&ictx); values [ins->dreg] = immediate_unroll_end (&ictx, &cbb); break; } case OP_SSE41_INSERTPS: { LLVMValueRef ctl = convert (ctx, arg3, i1_t); LLVMValueRef args [] = { lhs, rhs, NULL }; ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, 256, ctl, v128_r4_t, "sse41_insertps"); int i = 0; while (immediate_unroll_next (&ictx, &i)) { args [2] = const_int8 (i); LLVMValueRef result = call_intrins (ctx, INTRINS_SSE_INSERTPS, args, dname); immediate_unroll_commit (&ictx, i, result); } immediate_unroll_unreachable_default (&ictx); values [ins->dreg] = immediate_unroll_end (&ictx, &cbb); break; } 
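/* Note (added annotation): the immediate_unroll_* helpers used in these cases deal with instructions whose "immediate" operand is only known at run time. An illustrative sketch of the generated control flow, not the helpers' exact code:
 *
 *   switch (ctl) {
 *   case 0: res = <instruction specialized with immediate 0>; break;
 *   case 1: res = <instruction specialized with immediate 1>; break;
 *   ...   (one case per possible immediate)
 *   }
 *
 * with the per-case results merged back together via a phi in the emitted IR. */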
case OP_SSE41_BLEND: { LLVMTypeRef ret_t = LLVMTypeOf (lhs); int nelem = LLVMGetVectorSize (ret_t); g_assert (nelem >= 2 && nelem <= 8); // I2, U2, R4, R8
int unique_ctl_patterns = 1 << nelem; int ctlmask = unique_ctl_patterns - 1; LLVMValueRef ctl = convert (ctx, arg3, i1_t); ctl = LLVMBuildAnd (builder, ctl, const_int8 (ctlmask), "sse41_blend"); ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, unique_ctl_patterns, ctl, ret_t, "sse41_blend"); int i = 0; int mask_values [MAX_VECTOR_ELEMS] = { 0 }; while (immediate_unroll_next (&ictx, &i)) { for (int lane = 0; lane < nelem; ++lane) { // n-bit in inst_c0 (control byte) is set to 1
gboolean bit_set = (i & (1 << lane)) >> lane; mask_values [lane] = lane + (bit_set ? nelem : 0); } LLVMValueRef mask = create_const_vector_i32 (mask_values, nelem); LLVMValueRef result = LLVMBuildShuffleVector (builder, lhs, rhs, mask, "sse41_blend"); immediate_unroll_commit (&ictx, i, result); } immediate_unroll_default (&ictx); immediate_unroll_commit_default (&ictx, LLVMGetUndef (ret_t)); values [ins->dreg] = immediate_unroll_end (&ictx, &cbb); break; } case OP_SSE41_BLENDV: { LLVMValueRef args [] = { lhs, rhs, values [ins->sreg3] }; if (ins->inst_c1 == MONO_TYPE_R4) { values [ins->dreg] = call_intrins (ctx, INTRINS_SSE_BLENDVPS, args, dname); } else if (ins->inst_c1 == MONO_TYPE_R8) { values [ins->dreg] = call_intrins (ctx, INTRINS_SSE_BLENDVPD, args, dname); } else { // for other non-fp type just convert to <16 x i8> and pass to @llvm.x86.sse41.pblendvb
args [0] = LLVMBuildBitCast (ctx->builder, args [0], sse_i1_t, ""); args [1] = LLVMBuildBitCast (ctx->builder, args [1], sse_i1_t, ""); args [2] = LLVMBuildBitCast (ctx->builder, args [2], sse_i1_t, ""); values [ins->dreg] = call_intrins (ctx, INTRINS_SSE_PBLENDVB, args, dname); } break; } case OP_SSE_CVTII: { gboolean is_signed = (ins->inst_c1 == MONO_TYPE_I1) || (ins->inst_c1 == MONO_TYPE_I2) || (ins->inst_c1 == MONO_TYPE_I4); LLVMTypeRef vec_type; if ((ins->inst_c1 == MONO_TYPE_I1) || (ins->inst_c1 == MONO_TYPE_U1)) vec_type = sse_i1_t; else if ((ins->inst_c1 == MONO_TYPE_I2) || (ins->inst_c1 == MONO_TYPE_U2)) vec_type = sse_i2_t; else vec_type = sse_i4_t; LLVMValueRef value; if (LLVMGetTypeKind (LLVMTypeOf (lhs)) != LLVMVectorTypeKind) { LLVMValueRef bitcasted = LLVMBuildBitCast (ctx->builder, lhs, LLVMPointerType (vec_type, 0), ""); value = mono_llvm_build_aligned_load (builder, bitcasted, "", FALSE, 1); } else { value = LLVMBuildBitCast (ctx->builder, lhs, vec_type, ""); } LLVMValueRef mask_vec; LLVMTypeRef dst_type; if (ins->inst_c0 == MONO_TYPE_I2) { mask_vec = create_const_vector_i32 (mask_0_incr_1, 8); dst_type = sse_i2_t; } else if (ins->inst_c0 == MONO_TYPE_I4) { mask_vec = create_const_vector_i32 (mask_0_incr_1, 4); dst_type = sse_i4_t; } else { g_assert (ins->inst_c0 == MONO_TYPE_I8); mask_vec = create_const_vector_i32 (mask_0_incr_1, 2); dst_type = sse_i8_t; } LLVMValueRef shuffled = LLVMBuildShuffleVector (builder, value, LLVMGetUndef (vec_type), mask_vec, ""); if (is_signed) values [ins->dreg] = LLVMBuildSExt (ctx->builder, shuffled, dst_type, ""); else values [ins->dreg] = LLVMBuildZExt (ctx->builder, shuffled, dst_type, ""); break; } case OP_SSE41_LOADANT: { LLVMValueRef dst_ptr = convert (ctx, lhs, LLVMPointerType (primitive_type_to_llvm_type (inst_c1_type (ins)), 0)); LLVMValueRef dst_vec = LLVMBuildBitCast (builder, dst_ptr, LLVMPointerType (type_to_sse_type (ins->inst_c1), 0), ""); LLVMValueRef load = mono_llvm_build_aligned_load (builder, dst_vec, "", FALSE, 16); 
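/* Note (added annotation): set_nontemporal_flag below presumably attaches LLVM's nontemporal metadata to the load, the streaming hint that MOVNTDQA-style accesses expect. */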
set_nontemporal_flag (load); values [ins->dreg] = load; break; } case OP_SSE41_MUL: { const int shift_vals [] = { 32, 32 }; const LLVMValueRef args [] = { convert (ctx, lhs, sse_i8_t), convert (ctx, rhs, sse_i8_t), }; LLVMValueRef mul_args [2] = { 0 }; LLVMValueRef shift_vec = create_const_vector (LLVMInt64Type (), shift_vals, 2); for (int i = 0; i < 2; ++i) { LLVMValueRef padded = LLVMBuildShl (builder, args [i], shift_vec, ""); mul_args[i] = mono_llvm_build_exact_ashr (builder, padded, shift_vec); } values [ins->dreg] = LLVMBuildNSWMul (builder, mul_args [0], mul_args [1], dname); break; } case OP_SSE41_MULLO: { values [ins->dreg] = LLVMBuildMul (ctx->builder, lhs, rhs, ""); break; } case OP_SSE42_CRC32: case OP_SSE42_CRC64: { LLVMValueRef args [2]; args [0] = lhs; args [1] = convert (ctx, rhs, primitive_type_to_llvm_type (ins->inst_c0)); IntrinsicId id; switch (ins->inst_c0) { case MONO_TYPE_U1: id = INTRINS_SSE_CRC32_32_8; break; case MONO_TYPE_U2: id = INTRINS_SSE_CRC32_32_16; break; case MONO_TYPE_U4: id = INTRINS_SSE_CRC32_32_32; break; case MONO_TYPE_U8: id = INTRINS_SSE_CRC32_64_64; break; default: g_assert_not_reached (); break; } values [ins->dreg] = call_intrins (ctx, id, args, ""); break; } case OP_PCLMULQDQ: { LLVMValueRef args [] = { lhs, rhs, NULL }; LLVMValueRef ctl = convert (ctx, arg3, i1_t); // Only bits 0 and 4 of the immediate operand are used by PCLMULQDQ.
ctl = LLVMBuildAnd (builder, ctl, const_int8 (0x11), "pclmulqdq"); ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, 1 << 2, ctl, v128_i8_t, "pclmulqdq"); int i = 0; while (immediate_unroll_next (&ictx, &i)) { int imm = ((i & 0x2) << 3) | (i & 0x1); args [2] = const_int8 (imm); LLVMValueRef result = call_intrins (ctx, INTRINS_PCLMULQDQ, args, "pclmulqdq"); immediate_unroll_commit (&ictx, imm, result); } immediate_unroll_unreachable_default (&ictx); values [ins->dreg] = immediate_unroll_end (&ictx, &cbb); break; } case OP_AES_KEYGENASSIST: { LLVMValueRef roundconstant = convert (ctx, rhs, i1_t); LLVMValueRef args [] = { convert (ctx, lhs, v128_i8_t), NULL }; ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, 256, roundconstant, v128_i8_t, "aes_keygenassist"); int i = 0; while (immediate_unroll_next (&ictx, &i)) { args [1] = const_int8 (i); LLVMValueRef result = call_intrins (ctx, INTRINS_AESNI_AESKEYGENASSIST, args, "aes_keygenassist"); immediate_unroll_commit (&ictx, i, result); } immediate_unroll_unreachable_default (&ictx); LLVMValueRef result = immediate_unroll_end (&ictx, &cbb); values [ins->dreg] = convert (ctx, result, v128_i1_t); break; }
#endif
case OP_XCOMPARE_FP: { LLVMRealPredicate pred = fpcond_to_llvm_cond [ins->inst_c0]; LLVMValueRef cmp = LLVMBuildFCmp (builder, pred, lhs, rhs, ""); int nelems = LLVMGetVectorSize (LLVMTypeOf (cmp)); g_assert (LLVMTypeOf (lhs) == LLVMTypeOf (rhs)); if (ins->inst_c1 == MONO_TYPE_R8) values [ins->dreg] = LLVMBuildBitCast (builder, LLVMBuildSExt (builder, cmp, LLVMVectorType (LLVMInt64Type (), nelems), ""), LLVMTypeOf (lhs), ""); else values [ins->dreg] = LLVMBuildBitCast (builder, LLVMBuildSExt (builder, cmp, LLVMVectorType (LLVMInt32Type (), nelems), ""), LLVMTypeOf (lhs), ""); break; } case OP_XCOMPARE: { LLVMIntPredicate pred = cond_to_llvm_cond [ins->inst_c0]; LLVMValueRef cmp = LLVMBuildICmp (builder, pred, lhs, rhs, ""); g_assert (LLVMTypeOf (lhs) == LLVMTypeOf (rhs)); values [ins->dreg] = LLVMBuildSExt (builder, cmp, LLVMTypeOf (lhs), ""); break; } case OP_POPCNT32: values [ins->dreg] = call_intrins (ctx, INTRINS_CTPOP_I32, &lhs, ""); break; case 
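/* Note (added annotation): the scalar bit ops here map straight onto LLVM intrinsics: POPCNT to llvm.ctpop, CTTZ to llvm.cttz; the i1 'false' argument passed to llvm.cttz declares that a zero input has a defined result. */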
OP_POPCNT64: values [ins->dreg] = call_intrins (ctx, INTRINS_CTPOP_I64, &lhs, ""); break; case OP_CTTZ32: case OP_CTTZ64: { LLVMValueRef args [2]; args [0] = lhs; args [1] = LLVMConstInt (LLVMInt1Type (), 0, FALSE); values [ins->dreg] = call_intrins (ctx, ins->opcode == OP_CTTZ32 ? INTRINS_CTTZ_I32 : INTRINS_CTTZ_I64, args, ""); break; } case OP_BMI1_BEXTR32: case OP_BMI1_BEXTR64: { LLVMValueRef args [2]; args [0] = lhs; args [1] = convert (ctx, rhs, ins->opcode == OP_BMI1_BEXTR32 ? i4_t : i8_t); // cast ushort to u32/u64 values [ins->dreg] = call_intrins (ctx, ins->opcode == OP_BMI1_BEXTR32 ? INTRINS_BEXTR_I32 : INTRINS_BEXTR_I64, args, ""); break; } case OP_BZHI32: case OP_BZHI64: { LLVMValueRef args [2]; args [0] = lhs; args [1] = rhs; values [ins->dreg] = call_intrins (ctx, ins->opcode == OP_BZHI32 ? INTRINS_BZHI_I32 : INTRINS_BZHI_I64, args, ""); break; } case OP_MULX_H32: case OP_MULX_H64: case OP_MULX_HL32: case OP_MULX_HL64: { gboolean is_64 = ins->opcode == OP_MULX_H64 || ins->opcode == OP_MULX_HL64; gboolean only_high = ins->opcode == OP_MULX_H32 || ins->opcode == OP_MULX_H64; LLVMValueRef lx = LLVMBuildZExt (ctx->builder, lhs, LLVMInt128Type (), ""); LLVMValueRef rx = LLVMBuildZExt (ctx->builder, rhs, LLVMInt128Type (), ""); LLVMValueRef mulx = LLVMBuildMul (ctx->builder, lx, rx, ""); if (!only_high) { LLVMValueRef addr = convert (ctx, arg3, LLVMPointerType (is_64 ? i8_t : i4_t, 0)); LLVMValueRef lowx = LLVMBuildTrunc (ctx->builder, mulx, is_64 ? LLVMInt64Type () : LLVMInt32Type (), ""); LLVMBuildStore (ctx->builder, lowx, addr); } LLVMValueRef shift = LLVMConstInt (LLVMInt128Type (), is_64 ? 64 : 32, FALSE); LLVMValueRef highx = LLVMBuildLShr (ctx->builder, mulx, shift, ""); values [ins->dreg] = LLVMBuildTrunc (ctx->builder, highx, is_64 ? LLVMInt64Type () : LLVMInt32Type (), ""); break; } case OP_PEXT32: case OP_PEXT64: { LLVMValueRef args [2]; args [0] = lhs; args [1] = rhs; values [ins->dreg] = call_intrins (ctx, ins->opcode == OP_PEXT32 ? INTRINS_PEXT_I32 : INTRINS_PEXT_I64, args, ""); break; } case OP_PDEP32: case OP_PDEP64: { LLVMValueRef args [2]; args [0] = lhs; args [1] = rhs; values [ins->dreg] = call_intrins (ctx, ins->opcode == OP_PDEP32 ? INTRINS_PDEP_I32 : INTRINS_PDEP_I64, args, ""); break; } #endif /* defined(TARGET_X86) || defined(TARGET_AMD64) */ // Shared between ARM64 and X86 #if defined(TARGET_ARM64) || defined(TARGET_X86) || defined(TARGET_AMD64) case OP_LZCNT32: case OP_LZCNT64: { IntrinsicId iid = ins->opcode == OP_LZCNT32 ? INTRINS_CTLZ_I32 : INTRINS_CTLZ_I64; LLVMValueRef args [] = { lhs, const_int1 (FALSE) }; values [ins->dreg] = call_intrins (ctx, iid, args, ""); break; } #endif #if defined(TARGET_ARM64) || defined(TARGET_X86) || defined(TARGET_AMD64) || defined(TARGET_WASM) case OP_XEQUAL: { LLVMTypeRef t; LLVMValueRef cmp, mask [MAX_VECTOR_ELEMS], shuffle; int nelems; #if defined(TARGET_WASM) /* The wasm code generator doesn't understand the shuffle/and code sequence below */ LLVMValueRef val; if (LLVMIsNull (lhs) || LLVMIsNull (rhs)) { val = LLVMIsNull (lhs) ? 
rhs : lhs;
			nelems = LLVMGetVectorSize (LLVMTypeOf (lhs));
			IntrinsicId intrins = (IntrinsicId)0;
			switch (nelems) {
			case 16:
				intrins = INTRINS_WASM_ANYTRUE_V16;
				break;
			case 8:
				intrins = INTRINS_WASM_ANYTRUE_V8;
				break;
			case 4:
				intrins = INTRINS_WASM_ANYTRUE_V4;
				break;
			case 2:
				intrins = INTRINS_WASM_ANYTRUE_V2;
				break;
			default:
				g_assert_not_reached ();
			}
			/* res = !wasm.anytrue (val) */
			values [ins->dreg] = call_intrins (ctx, intrins, &val, "");
			values [ins->dreg] = LLVMBuildZExt (builder, LLVMBuildICmp (builder, LLVMIntEQ, values [ins->dreg], LLVMConstInt (LLVMInt32Type (), 0, FALSE), ""), LLVMInt32Type (), dname);
			break;
		}
#endif
		LLVMTypeRef srcelemt = LLVMGetElementType (LLVMTypeOf (lhs));
		// %c = icmp eq <16 x i8> %a0, %a1 (fcmp oeq for floating point element types)
		if (srcelemt == LLVMDoubleType () || srcelemt == LLVMFloatType ())
			cmp = LLVMBuildFCmp (builder, LLVMRealOEQ, lhs, rhs, "");
		else
			cmp = LLVMBuildICmp (builder, LLVMIntEQ, lhs, rhs, "");
		nelems = LLVMGetVectorSize (LLVMTypeOf (cmp));
		LLVMTypeRef elemt;
		if (srcelemt == LLVMDoubleType ())
			elemt = LLVMInt64Type ();
		else if (srcelemt == LLVMFloatType ())
			elemt = LLVMInt32Type ();
		else
			elemt = srcelemt;
		t = LLVMVectorType (elemt, nelems);
		cmp = LLVMBuildSExt (builder, cmp, t, "");
		// cmp is a <nelems x elemt> vector, each element is either 0xff... or 0
		int half = nelems / 2;
		while (half >= 1) {
			// AND the top and bottom halves into the bottom half
			for (int i = 0; i < half; ++i)
				mask [i] = LLVMConstInt (LLVMInt32Type (), half + i, FALSE);
			for (int i = half; i < nelems; ++i)
				mask [i] = LLVMConstInt (LLVMInt32Type (), 0, FALSE);
			shuffle = LLVMBuildShuffleVector (builder, cmp, LLVMGetUndef (t), LLVMConstVector (mask, LLVMGetVectorSize (t)), "");
			cmp = LLVMBuildAnd (builder, cmp, shuffle, "");
			half = half / 2;
		}
		// Extract [0]
		LLVMValueRef first_elem = LLVMBuildExtractElement (builder, cmp, LLVMConstInt (LLVMInt32Type (), 0, FALSE), "");
		// convert to 0/1
		LLVMValueRef cmp_zero = LLVMBuildICmp (builder, LLVMIntNE, first_elem, LLVMConstInt (elemt, 0, FALSE), "");
		values [ins->dreg] = LLVMBuildZExt (builder, cmp_zero, LLVMInt8Type (), "");
		break;
	}
#endif
#if defined(TARGET_ARM64)
	case OP_XOP_I4_I4:
	case OP_XOP_I8_I8: {
		IntrinsicId id = (IntrinsicId)ins->inst_c0;
		values [ins->dreg] = call_intrins (ctx, id, &lhs, "");
		break;
	}
	case OP_XOP_X_X_X:
	case OP_XOP_I4_I4_I4:
	case OP_XOP_I4_I4_I8: {
		IntrinsicId id = (IntrinsicId)ins->inst_c0;
		gboolean zext_last = FALSE, bitcast_result = FALSE, getElement = FALSE;
		int element_idx = -1;
		switch (id) {
		case INTRINS_AARCH64_PMULL64:
			getElement = TRUE;
			bitcast_result = TRUE;
			element_idx = ins->inst_c1;
			break;
		case INTRINS_AARCH64_CRC32B:
		case INTRINS_AARCH64_CRC32H:
		case INTRINS_AARCH64_CRC32W:
		case INTRINS_AARCH64_CRC32CB:
		case INTRINS_AARCH64_CRC32CH:
		case INTRINS_AARCH64_CRC32CW:
			zext_last = TRUE;
			break;
		default:
			break;
		}
		LLVMValueRef arg1 = rhs;
		if (zext_last)
			arg1 = LLVMBuildZExt (ctx->builder, arg1, LLVMInt32Type (), "");
		LLVMValueRef args [] = { lhs, arg1 };
		if (getElement) {
			args [0] = LLVMBuildExtractElement (ctx->builder, args [0], const_int32 (element_idx), "");
			args [1] = LLVMBuildExtractElement (ctx->builder, args [1], const_int32 (element_idx), "");
		}
		values [ins->dreg] = call_intrins (ctx, id, args, "");
		if (bitcast_result)
			values [ins->dreg] = convert (ctx, values [ins->dreg], LLVMVectorType (LLVMInt64Type (), 2));
		break;
	}
	case OP_XOP_X_X_X_X: {
		IntrinsicId id = (IntrinsicId)ins->inst_c0;
		gboolean getLowerElement = FALSE;
		int arg_idx = -1;
		switch (id) {
		case INTRINS_AARCH64_SHA1C:
		case INTRINS_AARCH64_SHA1M:
		case INTRINS_AARCH64_SHA1P:
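			/* Note: the AArch64 SHA1 hash-update intrinsics (sha1c/sha1m/sha1p) take the
			 * rotated hash element e as a plain i32 scalar (argument 1), hence the lane 0
			 * extract below. */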
getLowerElement = TRUE; arg_idx = 1; break; default: break; } LLVMValueRef args [] = { lhs, rhs, arg3 }; if (getLowerElement) args [arg_idx] = LLVMBuildExtractElement (ctx->builder, args [arg_idx], const_int32 (0), ""); values [ins->dreg] = call_intrins (ctx, id, args, ""); break; } case OP_XOP_X_X: { IntrinsicId id = (IntrinsicId)ins->inst_c0; LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass); gboolean getLowerElement = FALSE; switch (id) { case INTRINS_AARCH64_SHA1H: getLowerElement = TRUE; break; default: break; } LLVMValueRef arg0 = lhs; if (getLowerElement) arg0 = LLVMBuildExtractElement (ctx->builder, arg0, const_int32 (0), ""); LLVMValueRef result = call_intrins (ctx, id, &arg0, ""); if (getLowerElement) result = vector_from_scalar (ctx, ret_t, result); values [ins->dreg] = result; break; } case OP_XCOMPARE_FP_SCALAR: case OP_XCOMPARE_FP: { g_assert (LLVMTypeOf (lhs) == LLVMTypeOf (rhs)); gboolean scalar = ins->opcode == OP_XCOMPARE_FP_SCALAR; LLVMRealPredicate pred = fpcond_to_llvm_cond [ins->inst_c0]; LLVMTypeRef ret_t = LLVMTypeOf (lhs); LLVMTypeRef reti_t = to_integral_vector_type (ret_t); LLVMValueRef args [] = { lhs, rhs }; if (scalar) for (int i = 0; i < 2; ++i) args [i] = scalar_from_vector (ctx, args [i]); LLVMValueRef result = LLVMBuildFCmp (builder, pred, args [0], args [1], "xcompare_fp"); if (scalar) result = vector_from_scalar (ctx, LLVMVectorType (LLVMIntType (1), LLVMGetVectorSize (reti_t)), result); result = LLVMBuildSExt (builder, result, reti_t, ""); result = LLVMBuildBitCast (builder, result, ret_t, ""); values [ins->dreg] = result; break; } case OP_XCOMPARE_SCALAR: case OP_XCOMPARE: { g_assert (LLVMTypeOf (lhs) == LLVMTypeOf (rhs)); gboolean scalar = ins->opcode == OP_XCOMPARE_SCALAR; LLVMIntPredicate pred = cond_to_llvm_cond [ins->inst_c0]; LLVMTypeRef ret_t = LLVMTypeOf (lhs); LLVMValueRef args [] = { lhs, rhs }; if (scalar) for (int i = 0; i < 2; ++i) args [i] = scalar_from_vector (ctx, args [i]); LLVMValueRef result = LLVMBuildICmp (builder, pred, args [0], args [1], "xcompare"); if (scalar) result = vector_from_scalar (ctx, LLVMVectorType (LLVMIntType (1), LLVMGetVectorSize (ret_t)), result); values [ins->dreg] = LLVMBuildSExt (builder, result, ret_t, ""); break; } case OP_ARM64_EXT: { LLVMTypeRef ret_t = LLVMTypeOf (lhs); unsigned int elems = LLVMGetVectorSize (ret_t); g_assert (elems <= ARM64_MAX_VECTOR_ELEMS); LLVMValueRef index = arg3; LLVMValueRef default_value = lhs; ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, elems, index, ret_t, "arm64_ext"); int i = 0; while (immediate_unroll_next (&ictx, &i)) { LLVMValueRef mask = create_const_vector_i32 (&mask_0_incr_1 [i], elems); LLVMValueRef result = LLVMBuildShuffleVector (builder, lhs, rhs, mask, "arm64_ext"); immediate_unroll_commit (&ictx, i, result); } immediate_unroll_default (&ictx); immediate_unroll_commit_default (&ictx, default_value); values [ins->dreg] = immediate_unroll_end (&ictx, &cbb); break; } case OP_ARM64_MVN: { LLVMTypeRef ret_t = LLVMTypeOf (lhs); LLVMValueRef result = bitcast_to_integral (ctx, lhs); result = LLVMBuildNot (builder, result, "arm64_mvn"); result = convert (ctx, result, ret_t); values [ins->dreg] = result; break; } case OP_ARM64_BIC: { LLVMTypeRef ret_t = LLVMTypeOf (lhs); LLVMValueRef result = bitcast_to_integral (ctx, lhs); LLVMValueRef mask = bitcast_to_integral (ctx, rhs); mask = LLVMBuildNot (builder, mask, ""); result = LLVMBuildAnd (builder, mask, result, "arm64_bic"); result = convert (ctx, result, ret_t); values [ins->dreg] = result; break; 
} case OP_ARM64_BSL: { LLVMTypeRef ret_t = LLVMTypeOf (rhs); LLVMValueRef select = bitcast_to_integral (ctx, lhs); LLVMValueRef left = bitcast_to_integral (ctx, rhs); LLVMValueRef right = bitcast_to_integral (ctx, arg3); LLVMValueRef result1 = LLVMBuildAnd (builder, select, left, "arm64_bsl"); LLVMValueRef result2 = LLVMBuildAnd (builder, LLVMBuildNot (builder, select, ""), right, ""); LLVMValueRef result = LLVMBuildOr (builder, result1, result2, ""); result = convert (ctx, result, ret_t); values [ins->dreg] = result; break; } case OP_ARM64_CMTST: { LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass); LLVMValueRef l = bitcast_to_integral (ctx, lhs); LLVMValueRef r = bitcast_to_integral (ctx, rhs); LLVMValueRef result = LLVMBuildAnd (builder, l, r, "arm64_cmtst"); LLVMTypeRef t = LLVMTypeOf (l); result = LLVMBuildICmp (builder, LLVMIntNE, result, LLVMConstNull (t), ""); result = LLVMBuildSExt (builder, result, t, ""); result = convert (ctx, result, ret_t); values [ins->dreg] = result; break; } case OP_ARM64_FCVTL: case OP_ARM64_FCVTL2: { LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass); gboolean high = ins->opcode == OP_ARM64_FCVTL2; LLVMValueRef result = lhs; if (high) result = extract_high_elements (ctx, result); result = LLVMBuildFPExt (builder, result, ret_t, "arm64_fcvtl"); values [ins->dreg] = result; break; } case OP_ARM64_FCVTXN: case OP_ARM64_FCVTXN2: case OP_ARM64_FCVTN: case OP_ARM64_FCVTN2: { gboolean high = FALSE; int iid = 0; switch (ins->opcode) { case OP_ARM64_FCVTXN2: high = TRUE; case OP_ARM64_FCVTXN: iid = INTRINS_AARCH64_ADV_SIMD_FCVTXN; break; case OP_ARM64_FCVTN2: high = TRUE; break; } LLVMValueRef result = lhs; if (high) result = rhs; if (iid) result = call_intrins (ctx, iid, &result, ""); else result = LLVMBuildFPTrunc (builder, result, v64_r4_t, ""); if (high) result = concatenate_vectors (ctx, lhs, result); values [ins->dreg] = result; break; } case OP_ARM64_UCVTF: case OP_ARM64_SCVTF: case OP_ARM64_UCVTF_SCALAR: case OP_ARM64_SCVTF_SCALAR: { LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass); gboolean scalar = FALSE; gboolean is_unsigned = FALSE; switch (ins->opcode) { case OP_ARM64_UCVTF_SCALAR: scalar = TRUE; case OP_ARM64_UCVTF: is_unsigned = TRUE; break; case OP_ARM64_SCVTF_SCALAR: scalar = TRUE; break; } LLVMValueRef result = lhs; LLVMTypeRef cvt_t = ret_t; if (scalar) { result = scalar_from_vector (ctx, result); cvt_t = LLVMGetElementType (ret_t); } if (is_unsigned) result = LLVMBuildUIToFP (builder, result, cvt_t, "arm64_ucvtf"); else result = LLVMBuildSIToFP (builder, result, cvt_t, "arm64_scvtf"); if (scalar) result = vector_from_scalar (ctx, ret_t, result); values [ins->dreg] = result; break; } case OP_ARM64_FCVTZS: case OP_ARM64_FCVTZS_SCALAR: case OP_ARM64_FCVTZU: case OP_ARM64_FCVTZU_SCALAR: { LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass); gboolean scalar = FALSE; gboolean is_unsigned = FALSE; switch (ins->opcode) { case OP_ARM64_FCVTZU_SCALAR: scalar = TRUE; case OP_ARM64_FCVTZU: is_unsigned = TRUE; break; case OP_ARM64_FCVTZS_SCALAR: scalar = TRUE; break; } LLVMValueRef result = lhs; LLVMTypeRef cvt_t = ret_t; if (scalar) { result = scalar_from_vector (ctx, result); cvt_t = LLVMGetElementType (ret_t); } if (is_unsigned) result = LLVMBuildFPToUI (builder, result, cvt_t, "arm64_fcvtzu"); else result = LLVMBuildFPToSI (builder, result, cvt_t, "arm64_fcvtzs"); if (scalar) result = vector_from_scalar (ctx, ret_t, result); values [ins->dreg] = result; break; } case OP_ARM64_SELECT_SCALAR: { LLVMValueRef 
result = LLVMBuildExtractElement (builder, lhs, rhs, ""); LLVMTypeRef elem_t = LLVMTypeOf (result); unsigned int elem_bits = mono_llvm_get_prim_size_bits (elem_t); LLVMTypeRef t = LLVMVectorType (elem_t, 64 / elem_bits); result = vector_from_scalar (ctx, t, result); values [ins->dreg] = result; break; } case OP_ARM64_SELECT_QUAD: { LLVMTypeRef src_type = simd_class_to_llvm_type (ctx, ins->data.op [1].klass); LLVMTypeRef ret_type = simd_class_to_llvm_type (ctx, ins->klass); unsigned int src_type_bits = mono_llvm_get_prim_size_bits (src_type); unsigned int ret_type_bits = mono_llvm_get_prim_size_bits (ret_type); unsigned int src_intermediate_elems = src_type_bits / 32; unsigned int ret_intermediate_elems = ret_type_bits / 32; LLVMTypeRef intermediate_type = LLVMVectorType (i4_t, src_intermediate_elems); LLVMValueRef result = LLVMBuildBitCast (builder, lhs, intermediate_type, "arm64_select_quad"); result = LLVMBuildExtractElement (builder, result, rhs, "arm64_select_quad"); result = broadcast_element (ctx, result, ret_intermediate_elems); result = LLVMBuildBitCast (builder, result, ret_type, "arm64_select_quad"); values [ins->dreg] = result; break; } case OP_LSCNT32: case OP_LSCNT64: { // %shr = ashr i32 %x, 31 // %xor = xor i32 %shr, %x // %mul = shl i32 %xor, 1 // %add = or i32 %mul, 1 // %0 = tail call i32 @llvm.ctlz.i32(i32 %add, i1 false) LLVMValueRef shr = LLVMBuildAShr (builder, lhs, ins->opcode == OP_LSCNT32 ? LLVMConstInt (LLVMInt32Type (), 31, FALSE) : LLVMConstInt (LLVMInt64Type (), 63, FALSE), ""); LLVMValueRef one = ins->opcode == OP_LSCNT32 ? LLVMConstInt (LLVMInt32Type (), 1, FALSE) : LLVMConstInt (LLVMInt64Type (), 1, FALSE); LLVMValueRef xor = LLVMBuildXor (builder, shr, lhs, ""); LLVMValueRef mul = LLVMBuildShl (builder, xor, one, ""); LLVMValueRef add = LLVMBuildOr (builder, mul, one, ""); LLVMValueRef args [2]; args [0] = add; args [1] = LLVMConstInt (LLVMInt1Type (), 0, FALSE); values [ins->dreg] = LLVMBuildCall (builder, get_intrins (ctx, ins->opcode == OP_LSCNT32 ? INTRINS_CTLZ_I32 : INTRINS_CTLZ_I64), args, 2, ""); break; } case OP_ARM64_SQRDMLAH: case OP_ARM64_SQRDMLAH_BYSCALAR: case OP_ARM64_SQRDMLAH_SCALAR: case OP_ARM64_SQRDMLSH: case OP_ARM64_SQRDMLSH_BYSCALAR: case OP_ARM64_SQRDMLSH_SCALAR: { gboolean byscalar = FALSE; gboolean scalar = FALSE; gboolean subtract = FALSE; switch (ins->opcode) { case OP_ARM64_SQRDMLAH_BYSCALAR: byscalar = TRUE; break; case OP_ARM64_SQRDMLAH_SCALAR: scalar = TRUE; break; case OP_ARM64_SQRDMLSH: subtract = TRUE; break; case OP_ARM64_SQRDMLSH_BYSCALAR: subtract = TRUE; byscalar = TRUE; break; case OP_ARM64_SQRDMLSH_SCALAR: subtract = TRUE; scalar = TRUE; break; } int acc_iid = subtract ? 
INTRINS_AARCH64_ADV_SIMD_SQSUB : INTRINS_AARCH64_ADV_SIMD_SQADD; LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass); llvm_ovr_tag_t ovr_tag = ovr_tag_from_llvm_type (ret_t); ScalarOpFromVectorOpCtx sctx = scalar_op_from_vector_op (ctx, ret_t, ins); LLVMValueRef args [] = { lhs, rhs, arg3 }; if (byscalar) { unsigned int elems = LLVMGetVectorSize (ret_t); args [2] = broadcast_element (ctx, scalar_from_vector (ctx, args [2]), elems); } if (scalar) { ovr_tag = sctx.ovr_tag; scalar_op_from_vector_op_process_args (&sctx, args, 3); } LLVMValueRef result = call_overloaded_intrins (ctx, INTRINS_AARCH64_ADV_SIMD_SQRDMULH, ovr_tag, &args [1], "arm64_sqrdmlxh"); args [1] = result; result = call_overloaded_intrins (ctx, acc_iid, ovr_tag, &args [0], "arm64_sqrdmlxh"); if (scalar) result = scalar_op_from_vector_op_process_result (&sctx, result); values [ins->dreg] = result; break; } case OP_ARM64_SMULH: case OP_ARM64_UMULH: { LLVMValueRef op1, op2; if (ins->opcode == OP_ARM64_SMULH) { op1 = LLVMBuildSExt (builder, lhs, LLVMInt128Type (), ""); op2 = LLVMBuildSExt (builder, rhs, LLVMInt128Type (), ""); } else { op1 = LLVMBuildZExt (builder, lhs, LLVMInt128Type (), ""); op2 = LLVMBuildZExt (builder, rhs, LLVMInt128Type (), ""); } LLVMValueRef mul = LLVMBuildMul (builder, op1, op2, ""); LLVMValueRef hi64 = LLVMBuildLShr (builder, mul, LLVMConstInt (LLVMInt128Type (), 64, FALSE), ""); values [ins->dreg] = LLVMBuildTrunc (builder, hi64, LLVMInt64Type (), ""); break; } case OP_ARM64_XNARROW_SCALAR: { // Unfortunately, @llvm.aarch64.neon.scalar.sqxtun isn't available for i8 or i16. LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass); llvm_ovr_tag_t ovr_tag = ovr_tag_from_llvm_type (ret_t); LLVMTypeRef elem_t = LLVMGetElementType (ret_t); LLVMValueRef result = NULL; int iid = ins->inst_c0; int scalar_iid = 0; switch (iid) { case INTRINS_AARCH64_ADV_SIMD_SQXTUN: scalar_iid = INTRINS_AARCH64_ADV_SIMD_SCALAR_SQXTUN; break; case INTRINS_AARCH64_ADV_SIMD_SQXTN: scalar_iid = INTRINS_AARCH64_ADV_SIMD_SCALAR_SQXTN; break; case INTRINS_AARCH64_ADV_SIMD_UQXTN: scalar_iid = INTRINS_AARCH64_ADV_SIMD_SCALAR_UQXTN; break; default: g_assert_not_reached (); } if (elem_t == i4_t) { LLVMValueRef arg = scalar_from_vector (ctx, lhs); result = call_intrins (ctx, scalar_iid, &arg, "arm64_xnarrow_scalar"); result = vector_from_scalar (ctx, ret_t, result); } else { LLVMTypeRef arg_t = LLVMTypeOf (lhs); LLVMTypeRef argelem_t = LLVMGetElementType (arg_t); unsigned int argelems = LLVMGetVectorSize (arg_t); LLVMValueRef arg = keep_lowest_element (ctx, LLVMVectorType (argelem_t, argelems * 2), lhs); result = call_overloaded_intrins (ctx, iid, ovr_tag, &arg, "arm64_xnarrow_scalar"); result = keep_lowest_element (ctx, LLVMTypeOf (result), result); } values [ins->dreg] = result; break; } case OP_ARM64_SQXTUN2: case OP_ARM64_UQXTN2: case OP_ARM64_SQXTN2: case OP_ARM64_XTN: case OP_ARM64_XTN2: { llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass); gboolean high = FALSE; int iid = 0; switch (ins->opcode) { case OP_ARM64_SQXTUN2: high = TRUE; iid = INTRINS_AARCH64_ADV_SIMD_SQXTUN; break; case OP_ARM64_UQXTN2: high = TRUE; iid = INTRINS_AARCH64_ADV_SIMD_UQXTN; break; case OP_ARM64_SQXTN2: high = TRUE; iid = INTRINS_AARCH64_ADV_SIMD_SQXTN; break; case OP_ARM64_XTN2: high = TRUE; break; } LLVMValueRef result = lhs; if (high) { result = rhs; ovr_tag = ovr_tag_smaller_vector (ovr_tag); } LLVMTypeRef t = LLVMTypeOf (result); LLVMTypeRef elem_t = LLVMGetElementType (t); unsigned int elems = LLVMGetVectorSize (t); 
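/* All of these are narrowing ops: the destination element type is half the width of the source element type (e.g. <8 x i16> -> <8 x i8>); for the "2" (high) variants the narrowed result is concatenated onto lhs below. */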
unsigned int elem_bits = mono_llvm_get_prim_size_bits (elem_t); LLVMTypeRef result_t = LLVMVectorType (LLVMIntType (elem_bits / 2), elems); if (iid != 0) result = call_overloaded_intrins (ctx, iid, ovr_tag, &result, ""); else result = LLVMBuildTrunc (builder, result, result_t, "arm64_xtn"); if (high) result = concatenate_vectors (ctx, lhs, result); values [ins->dreg] = result; break; } case OP_ARM64_CLZ: { llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass); LLVMValueRef args [] = { lhs, const_int1 (0) }; LLVMValueRef result = call_overloaded_intrins (ctx, INTRINS_AARCH64_ADV_SIMD_CLZ, ovr_tag, args, ""); values [ins->dreg] = result; break; } case OP_ARM64_FMSUB: case OP_ARM64_FMSUB_BYSCALAR: case OP_ARM64_FMSUB_SCALAR: case OP_ARM64_FNMSUB_SCALAR: case OP_ARM64_FMADD: case OP_ARM64_FMADD_BYSCALAR: case OP_ARM64_FMADD_SCALAR: case OP_ARM64_FNMADD_SCALAR: { llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass); gboolean scalar = FALSE; gboolean negate = FALSE; gboolean subtract = FALSE; gboolean byscalar = FALSE; switch (ins->opcode) { case OP_ARM64_FMSUB: subtract = TRUE; break; case OP_ARM64_FMSUB_BYSCALAR: subtract = TRUE; byscalar = TRUE; break; case OP_ARM64_FMSUB_SCALAR: subtract = TRUE; scalar = TRUE; break; case OP_ARM64_FNMSUB_SCALAR: subtract = TRUE; scalar = TRUE; negate = TRUE; break; case OP_ARM64_FMADD: break; case OP_ARM64_FMADD_BYSCALAR: byscalar = TRUE; break; case OP_ARM64_FMADD_SCALAR: scalar = TRUE; break; case OP_ARM64_FNMADD_SCALAR: scalar = TRUE; negate = TRUE; break; } // llvm.fma argument order: mulop1, mulop2, addend LLVMValueRef args [] = { rhs, arg3, lhs }; if (byscalar) { unsigned int elems = LLVMGetVectorSize (LLVMTypeOf (args [0])); args [1] = broadcast_element (ctx, scalar_from_vector (ctx, args [1]), elems); } if (scalar) { ovr_tag = ovr_tag_force_scalar (ovr_tag); for (int i = 0; i < 3; ++i) args [i] = scalar_from_vector (ctx, args [i]); } if (subtract) args [0] = LLVMBuildFNeg (builder, args [0], "arm64_fma_sub"); if (negate) { args [0] = LLVMBuildFNeg (builder, args [0], "arm64_fma_negate"); args [2] = LLVMBuildFNeg (builder, args [2], "arm64_fma_negate"); } LLVMValueRef result = call_overloaded_intrins (ctx, INTRINS_AARCH64_ADV_SIMD_FMA, ovr_tag, args, "arm64_fma"); if (scalar) result = vector_from_scalar (ctx, LLVMTypeOf (lhs), result); values [ins->dreg] = result; break; } case OP_ARM64_SQDMULL: case OP_ARM64_SQDMULL_BYSCALAR: case OP_ARM64_SQDMULL2: case OP_ARM64_SQDMULL2_BYSCALAR: case OP_ARM64_SQDMLAL: case OP_ARM64_SQDMLAL_BYSCALAR: case OP_ARM64_SQDMLAL2: case OP_ARM64_SQDMLAL2_BYSCALAR: case OP_ARM64_SQDMLSL: case OP_ARM64_SQDMLSL_BYSCALAR: case OP_ARM64_SQDMLSL2: case OP_ARM64_SQDMLSL2_BYSCALAR: { llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass); gboolean scalar = FALSE; gboolean add = FALSE; gboolean subtract = FALSE; gboolean high = FALSE; switch (ins->opcode) { case OP_ARM64_SQDMULL_BYSCALAR: scalar = TRUE; case OP_ARM64_SQDMULL: break; case OP_ARM64_SQDMULL2_BYSCALAR: scalar = TRUE; case OP_ARM64_SQDMULL2: high = TRUE; break; case OP_ARM64_SQDMLAL_BYSCALAR: scalar = TRUE; case OP_ARM64_SQDMLAL: add = TRUE; break; case OP_ARM64_SQDMLAL2_BYSCALAR: scalar = TRUE; case OP_ARM64_SQDMLAL2: high = TRUE; add = TRUE; break; case OP_ARM64_SQDMLSL_BYSCALAR: scalar = TRUE; case OP_ARM64_SQDMLSL: subtract = TRUE; break; case OP_ARM64_SQDMLSL2_BYSCALAR: scalar = TRUE; case OP_ARM64_SQDMLSL2: high = TRUE; subtract = TRUE; break; } int iid = 0; if (add) iid = INTRINS_AARCH64_ADV_SIMD_SQADD; else if 
(subtract) iid = INTRINS_AARCH64_ADV_SIMD_SQSUB; LLVMValueRef mul1 = lhs; LLVMValueRef mul2 = rhs; if (iid != 0) { mul1 = rhs; mul2 = arg3; } if (scalar) { LLVMTypeRef t = LLVMTypeOf (mul1); unsigned int elems = LLVMGetVectorSize (t); mul2 = broadcast_element (ctx, scalar_from_vector (ctx, mul2), elems); } LLVMValueRef args [] = { mul1, mul2 }; if (high) for (int i = 0; i < 2; ++i) args [i] = extract_high_elements (ctx, args [i]); LLVMValueRef result = call_overloaded_intrins (ctx, INTRINS_AARCH64_ADV_SIMD_SQDMULL, ovr_tag, args, ""); LLVMValueRef args2 [] = { lhs, result }; if (iid != 0) result = call_overloaded_intrins (ctx, iid, ovr_tag, args2, ""); values [ins->dreg] = result; break; } case OP_ARM64_SQDMULL_SCALAR: case OP_ARM64_SQDMLAL_SCALAR: case OP_ARM64_SQDMLSL_SCALAR: { /* * define dso_local i32 @__vqdmlslh_lane_s16(i32, i16, <4 x i16>, i32) local_unnamed_addr #0 { * %5 = insertelement <4 x i16> undef, i16 %1, i64 0 * %6 = shufflevector <4 x i16> %2, <4 x i16> undef, <4 x i32> <i32 3, i32 undef, i32 undef, i32 undef> * %7 = tail call <4 x i32> @llvm.aarch64.neon.sqdmull.v4i32(<4 x i16> %5, <4 x i16> %6) * %8 = extractelement <4 x i32> %7, i64 0 * %9 = tail call i32 @llvm.aarch64.neon.sqsub.i32(i32 %0, i32 %8) * ret i32 %9 * } * * define dso_local i64 @__vqdmlals_s32(i64, i32, i32) local_unnamed_addr #0 { * %4 = tail call i64 @llvm.aarch64.neon.sqdmulls.scalar(i32 %1, i32 %2) #2 * %5 = tail call i64 @llvm.aarch64.neon.sqadd.i64(i64 %0, i64 %4) #2 * ret i64 %5 * } */ int mulid = INTRINS_AARCH64_ADV_SIMD_SQDMULL; int iid = 0; gboolean scalar_mul_result = FALSE; gboolean scalar_acc_result = FALSE; switch (ins->opcode) { case OP_ARM64_SQDMLAL_SCALAR: iid = INTRINS_AARCH64_ADV_SIMD_SQADD; break; case OP_ARM64_SQDMLSL_SCALAR: iid = INTRINS_AARCH64_ADV_SIMD_SQSUB; break; } LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass); LLVMValueRef mularg = lhs; LLVMValueRef selected_scalar = rhs; if (iid != 0) { mularg = rhs; selected_scalar = arg3; } llvm_ovr_tag_t multag = ovr_tag_smaller_elements (ovr_tag_from_llvm_type (ret_t)); llvm_ovr_tag_t iidtag = ovr_tag_force_scalar (ovr_tag_from_llvm_type (ret_t)); LLVMTypeRef mularg_t = ovr_tag_to_llvm_type (multag); if (multag & INTRIN_int32) { /* The (i32, i32) -> i64 variant of aarch64_neon_sqdmull has * a unique, non-overloaded name. */ mulid = INTRINS_AARCH64_ADV_SIMD_SQDMULL_SCALAR; multag = 0; iidtag = INTRIN_int64 | INTRIN_scalar; scalar_mul_result = TRUE; scalar_acc_result = TRUE; } else if (multag & INTRIN_int16) { /* We were passed a (<4 x i16>, <4 x i16>) but the * widening multiplication intrinsic will yield a <4 x i32>. 
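	 * Only the lowest lane of that result is meaningful; the extra lanes are
	 * discarded below (via keep_lowest_element / scalar_from_vector).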
*/ multag = INTRIN_int32 | INTRIN_vector128; } else g_assert_not_reached (); if (scalar_mul_result) { mularg = scalar_from_vector (ctx, mularg); selected_scalar = scalar_from_vector (ctx, selected_scalar); } else { mularg = keep_lowest_element (ctx, mularg_t, mularg); selected_scalar = keep_lowest_element (ctx, mularg_t, selected_scalar); } LLVMValueRef mulargs [] = { mularg, selected_scalar }; LLVMValueRef result = call_overloaded_intrins (ctx, mulid, multag, mulargs, "arm64_sqdmull_scalar"); if (iid != 0) { LLVMValueRef acc = scalar_from_vector (ctx, lhs); if (!scalar_mul_result) result = scalar_from_vector (ctx, result); LLVMValueRef subargs [] = { acc, result }; result = call_overloaded_intrins (ctx, iid, iidtag, subargs, "arm64_sqdmlxl_scalar"); scalar_acc_result = TRUE; } if (scalar_acc_result) result = vector_from_scalar (ctx, ret_t, result); else result = keep_lowest_element (ctx, ret_t, result); values [ins->dreg] = result; break; } case OP_ARM64_FMUL_SEL: { LLVMValueRef mul2 = LLVMBuildExtractElement (builder, rhs, arg3, ""); LLVMValueRef mul1 = scalar_from_vector (ctx, lhs); LLVMValueRef result = LLVMBuildFMul (builder, mul1, mul2, "arm64_fmul_sel"); result = vector_from_scalar (ctx, LLVMTypeOf (lhs), result); values [ins->dreg] = result; break; } case OP_ARM64_MLA: case OP_ARM64_MLA_SCALAR: case OP_ARM64_MLS: case OP_ARM64_MLS_SCALAR: { gboolean scalar = FALSE; gboolean add = FALSE; switch (ins->opcode) { case OP_ARM64_MLA_SCALAR: scalar = TRUE; case OP_ARM64_MLA: add = TRUE; break; case OP_ARM64_MLS_SCALAR: scalar = TRUE; case OP_ARM64_MLS: break; } LLVMTypeRef mul_t = LLVMTypeOf (rhs); unsigned int elems = LLVMGetVectorSize (mul_t); LLVMValueRef mul2 = arg3; if (scalar) mul2 = broadcast_element (ctx, scalar_from_vector (ctx, mul2), elems); LLVMValueRef result = LLVMBuildMul (builder, rhs, mul2, ""); if (add) result = LLVMBuildAdd (builder, lhs, result, ""); else result = LLVMBuildSub (builder, lhs, result, ""); values [ins->dreg] = result; break; } case OP_ARM64_SMULL: case OP_ARM64_SMULL_SCALAR: case OP_ARM64_SMULL2: case OP_ARM64_SMULL2_SCALAR: case OP_ARM64_UMULL: case OP_ARM64_UMULL_SCALAR: case OP_ARM64_UMULL2: case OP_ARM64_UMULL2_SCALAR: case OP_ARM64_SMLAL: case OP_ARM64_SMLAL_SCALAR: case OP_ARM64_SMLAL2: case OP_ARM64_SMLAL2_SCALAR: case OP_ARM64_UMLAL: case OP_ARM64_UMLAL_SCALAR: case OP_ARM64_UMLAL2: case OP_ARM64_UMLAL2_SCALAR: case OP_ARM64_SMLSL: case OP_ARM64_SMLSL_SCALAR: case OP_ARM64_SMLSL2: case OP_ARM64_SMLSL2_SCALAR: case OP_ARM64_UMLSL: case OP_ARM64_UMLSL_SCALAR: case OP_ARM64_UMLSL2: case OP_ARM64_UMLSL2_SCALAR: { llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass); gboolean is_unsigned = FALSE; gboolean high = FALSE; gboolean add = FALSE; gboolean subtract = FALSE; gboolean scalar = FALSE; int opcode = ins->opcode; switch (opcode) { case OP_ARM64_SMULL_SCALAR: scalar = TRUE; opcode = OP_ARM64_SMULL; break; case OP_ARM64_UMULL_SCALAR: scalar = TRUE; opcode = OP_ARM64_UMULL; break; case OP_ARM64_SMLAL_SCALAR: scalar = TRUE; opcode = OP_ARM64_SMLAL; break; case OP_ARM64_UMLAL_SCALAR: scalar = TRUE; opcode = OP_ARM64_UMLAL; break; case OP_ARM64_SMLSL_SCALAR: scalar = TRUE; opcode = OP_ARM64_SMLSL; break; case OP_ARM64_UMLSL_SCALAR: scalar = TRUE; opcode = OP_ARM64_UMLSL; break; case OP_ARM64_SMULL2_SCALAR: scalar = TRUE; opcode = OP_ARM64_SMULL2; break; case OP_ARM64_UMULL2_SCALAR: scalar = TRUE; opcode = OP_ARM64_UMULL2; break; case OP_ARM64_SMLAL2_SCALAR: scalar = TRUE; opcode = OP_ARM64_SMLAL2; break; case OP_ARM64_UMLAL2_SCALAR: 
scalar = TRUE; opcode = OP_ARM64_UMLAL2; break; case OP_ARM64_SMLSL2_SCALAR: scalar = TRUE; opcode = OP_ARM64_SMLSL2; break; case OP_ARM64_UMLSL2_SCALAR: scalar = TRUE; opcode = OP_ARM64_UMLSL2; break; } switch (opcode) { case OP_ARM64_SMULL2: high = TRUE; case OP_ARM64_SMULL: break; case OP_ARM64_UMULL2: high = TRUE; case OP_ARM64_UMULL: is_unsigned = TRUE; break; case OP_ARM64_SMLAL2: high = TRUE; case OP_ARM64_SMLAL: add = TRUE; break; case OP_ARM64_UMLAL2: high = TRUE; case OP_ARM64_UMLAL: add = TRUE; is_unsigned = TRUE; break; case OP_ARM64_SMLSL2: high = TRUE; case OP_ARM64_SMLSL: subtract = TRUE; break; case OP_ARM64_UMLSL2: high = TRUE; case OP_ARM64_UMLSL: subtract = TRUE; is_unsigned = TRUE; break; } int iid = is_unsigned ? INTRINS_AARCH64_ADV_SIMD_UMULL : INTRINS_AARCH64_ADV_SIMD_SMULL; LLVMValueRef intrin_args [] = { lhs, rhs }; if (add || subtract) { intrin_args [0] = rhs; intrin_args [1] = arg3; } if (scalar) { LLVMValueRef sarg = intrin_args [1]; LLVMTypeRef t = LLVMTypeOf (intrin_args [0]); unsigned int elems = LLVMGetVectorSize (t); sarg = broadcast_element (ctx, scalar_from_vector (ctx, sarg), elems); intrin_args [1] = sarg; } if (high) for (int i = 0; i < 2; ++i) intrin_args [i] = extract_high_elements (ctx, intrin_args [i]); LLVMValueRef result = call_overloaded_intrins (ctx, iid, ovr_tag, intrin_args, ""); if (add) result = LLVMBuildAdd (builder, lhs, result, ""); if (subtract) result = LLVMBuildSub (builder, lhs, result, ""); values [ins->dreg] = result; break; } case OP_ARM64_XNEG: case OP_ARM64_XNEG_SCALAR: { gboolean scalar = ins->opcode == OP_ARM64_XNEG_SCALAR; gboolean is_float = FALSE; switch (inst_c1_type (ins)) { case MONO_TYPE_R4: case MONO_TYPE_R8: is_float = TRUE; } LLVMValueRef result = lhs; if (scalar) result = scalar_from_vector (ctx, result); if (is_float) result = LLVMBuildFNeg (builder, result, "arm64_xneg"); else result = LLVMBuildNeg (builder, result, "arm64_xneg"); if (scalar) result = vector_from_scalar (ctx, LLVMTypeOf (lhs), result); values [ins->dreg] = result; break; } case OP_ARM64_PMULL: case OP_ARM64_PMULL2: { gboolean high = ins->opcode == OP_ARM64_PMULL2; LLVMValueRef args [] = { lhs, rhs }; if (high) for (int i = 0; i < 2; ++i) args [i] = extract_high_elements (ctx, args [i]); LLVMValueRef result = call_intrins (ctx, INTRINS_AARCH64_ADV_SIMD_PMULL, args, "arm64_pmull"); values [ins->dreg] = result; break; } case OP_ARM64_REVN: { LLVMTypeRef t = LLVMTypeOf (lhs); LLVMTypeRef elem_t = LLVMGetElementType (t); unsigned int group_bits = mono_llvm_get_prim_size_bits (elem_t); unsigned int vec_bits = mono_llvm_get_prim_size_bits (t); unsigned int tmp_bits = ins->inst_c0; unsigned int tmp_elements = vec_bits / tmp_bits; const int cycle8 [] = { 7, 6, 5, 4, 3, 2, 1, 0, 15, 14, 13, 12, 11, 10, 9, 8 }; const int cycle4 [] = { 3, 2, 1, 0, 7, 6, 5, 4, 11, 10, 9, 8, 15, 14, 13, 12 }; const int cycle2 [] = { 1, 0, 3, 2, 5, 4, 7, 6, 9, 8, 11, 10, 13, 12, 15, 14 }; const int *cycle = NULL; switch (group_bits / tmp_bits) { case 2: cycle = cycle2; break; case 4: cycle = cycle4; break; case 8: cycle = cycle8; break; default: g_assert_not_reached (); } g_assert (tmp_elements <= ARM64_MAX_VECTOR_ELEMS); LLVMTypeRef tmp_t = LLVMVectorType (LLVMIntType (tmp_bits), tmp_elements); LLVMValueRef tmp = LLVMBuildBitCast (builder, lhs, tmp_t, "arm64_revn"); LLVMValueRef result = LLVMBuildShuffleVector (builder, tmp, LLVMGetUndef (tmp_t), create_const_vector_i32 (cycle, tmp_elements), ""); result = LLVMBuildBitCast (builder, result, t, ""); values [ins->dreg] = result; 
break; } case OP_ARM64_SHL: case OP_ARM64_SSHR: case OP_ARM64_SSRA: case OP_ARM64_USHR: case OP_ARM64_USRA: { gboolean right = FALSE; gboolean add = FALSE; gboolean arith = FALSE; switch (ins->opcode) { case OP_ARM64_USHR: right = TRUE; break; case OP_ARM64_USRA: right = TRUE; add = TRUE; break; case OP_ARM64_SSHR: arith = TRUE; break; case OP_ARM64_SSRA: arith = TRUE; add = TRUE; break; } LLVMValueRef shiftarg = lhs; LLVMValueRef shift = rhs; if (add) { shiftarg = rhs; shift = arg3; } shift = create_shift_vector (ctx, shiftarg, shift); LLVMValueRef result = NULL; if (right) result = LLVMBuildLShr (builder, shiftarg, shift, ""); else if (arith) result = LLVMBuildAShr (builder, shiftarg, shift, ""); else result = LLVMBuildShl (builder, shiftarg, shift, ""); if (add) result = LLVMBuildAdd (builder, lhs, result, "arm64_usra"); values [ins->dreg] = result; break; } case OP_ARM64_SHRN: case OP_ARM64_SHRN2: { LLVMValueRef shiftarg = lhs; LLVMValueRef shift = rhs; gboolean high = ins->opcode == OP_ARM64_SHRN2; if (high) { shiftarg = rhs; shift = arg3; } LLVMTypeRef arg_t = LLVMTypeOf (shiftarg); LLVMTypeRef elem_t = LLVMGetElementType (arg_t); unsigned int elems = LLVMGetVectorSize (arg_t); unsigned int bits = mono_llvm_get_prim_size_bits (elem_t); LLVMTypeRef trunc_t = LLVMVectorType (LLVMIntType (bits / 2), elems); shift = create_shift_vector (ctx, shiftarg, shift); LLVMValueRef result = LLVMBuildLShr (builder, shiftarg, shift, "shrn"); result = LLVMBuildTrunc (builder, result, trunc_t, ""); if (high) { result = concatenate_vectors (ctx, lhs, result); } values [ins->dreg] = result; break; } case OP_ARM64_SRSHR: case OP_ARM64_SRSRA: case OP_ARM64_URSHR: case OP_ARM64_URSRA: { llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass); LLVMValueRef shiftarg = lhs; LLVMValueRef shift = rhs; gboolean right = FALSE; gboolean add = FALSE; switch (ins->opcode) { case OP_ARM64_URSRA: add = TRUE; case OP_ARM64_URSHR: right = TRUE; break; case OP_ARM64_SRSRA: add = TRUE; case OP_ARM64_SRSHR: right = TRUE; break; } int iid = 0; switch (ins->opcode) { case OP_ARM64_URSRA: case OP_ARM64_URSHR: iid = INTRINS_AARCH64_ADV_SIMD_URSHL; break; case OP_ARM64_SRSRA: case OP_ARM64_SRSHR: iid = INTRINS_AARCH64_ADV_SIMD_SRSHL; break; } if (add) { shiftarg = rhs; shift = arg3; } if (right) shift = LLVMBuildNeg (builder, shift, ""); shift = create_shift_vector (ctx, shiftarg, shift); LLVMValueRef args [] = { shiftarg, shift }; LLVMValueRef result = call_overloaded_intrins (ctx, iid, ovr_tag, args, ""); if (add) result = LLVMBuildAdd (builder, result, lhs, ""); values [ins->dreg] = result; break; } case OP_ARM64_XNSHIFT_SCALAR: case OP_ARM64_XNSHIFT: case OP_ARM64_XNSHIFT2: { LLVMTypeRef intrin_result_t = simd_class_to_llvm_type (ctx, ins->klass); llvm_ovr_tag_t ovr_tag = ovr_tag_from_llvm_type (intrin_result_t); LLVMValueRef shift_arg = lhs; LLVMValueRef shift_amount = rhs; gboolean high = FALSE; gboolean scalar = FALSE; int iid = ins->inst_c0; switch (ins->opcode) { case OP_ARM64_XNSHIFT_SCALAR: scalar = TRUE; break; case OP_ARM64_XNSHIFT2: high = TRUE; break; } if (high) { shift_arg = rhs; shift_amount = arg3; ovr_tag = ovr_tag_smaller_vector (ovr_tag); intrin_result_t = ovr_tag_to_llvm_type (ovr_tag); } LLVMTypeRef shift_arg_t = LLVMTypeOf (shift_arg); LLVMTypeRef shift_arg_elem_t = LLVMGetElementType (shift_arg_t); unsigned int element_bits = mono_llvm_get_prim_size_bits (shift_arg_elem_t); int range_min = 1; int range_max = element_bits / 2; if (scalar) { unsigned int elems = LLVMGetVectorSize 
(shift_arg_t); LLVMValueRef lo = scalar_from_vector (ctx, shift_arg); shift_arg = vector_from_scalar (ctx, LLVMVectorType (shift_arg_elem_t, elems * 2), lo); } int max_index = range_max - range_min + 1; ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, max_index, shift_amount, intrin_result_t, "arm64_xnshift"); int i = 0; while (immediate_unroll_next (&ictx, &i)) { int shift_const = i + range_min; LLVMValueRef intrin_args [] = { shift_arg, const_int32 (shift_const) }; LLVMValueRef result = call_overloaded_intrins (ctx, iid, ovr_tag, intrin_args, ""); immediate_unroll_commit (&ictx, shift_const, result); } { immediate_unroll_default (&ictx); LLVMValueRef intrin_args [] = { shift_arg, const_int32 (range_max) }; LLVMValueRef result = call_overloaded_intrins (ctx, iid, ovr_tag, intrin_args, ""); immediate_unroll_commit_default (&ictx, result); } LLVMValueRef result = immediate_unroll_end (&ictx, &cbb); if (high) result = concatenate_vectors (ctx, lhs, result); if (scalar) result = keep_lowest_element (ctx, LLVMTypeOf (result), result); values [ins->dreg] = result; break; } case OP_ARM64_SQSHLU: case OP_ARM64_SQSHLU_SCALAR: { gboolean scalar = ins->opcode == OP_ARM64_SQSHLU_SCALAR; LLVMTypeRef intrin_result_t = simd_class_to_llvm_type (ctx, ins->klass); LLVMTypeRef elem_t = LLVMGetElementType (intrin_result_t); unsigned int element_bits = mono_llvm_get_prim_size_bits (elem_t); llvm_ovr_tag_t ovr_tag = ovr_tag_from_llvm_type (intrin_result_t); int max_index = element_bits; ScalarOpFromVectorOpCtx sctx = scalar_op_from_vector_op (ctx, intrin_result_t, ins); intrin_result_t = scalar ? sctx.intermediate_type : intrin_result_t; ovr_tag = scalar ? sctx.ovr_tag : ovr_tag; ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, max_index, rhs, intrin_result_t, "arm64_sqshlu"); int i = 0; while (immediate_unroll_next (&ictx, &i)) { int shift_const = i; LLVMValueRef args [2] = { lhs, create_shift_vector (ctx, lhs, const_int32 (shift_const)) }; if (scalar) scalar_op_from_vector_op_process_args (&sctx, args, 2); LLVMValueRef result = call_overloaded_intrins (ctx, INTRINS_AARCH64_ADV_SIMD_SQSHLU, ovr_tag, args, ""); immediate_unroll_commit (&ictx, shift_const, result); } { immediate_unroll_default (&ictx); LLVMValueRef srcarg = lhs; if (scalar) scalar_op_from_vector_op_process_args (&sctx, &srcarg, 1); immediate_unroll_commit_default (&ictx, srcarg); } LLVMValueRef result = immediate_unroll_end (&ictx, &cbb); if (scalar) result = scalar_op_from_vector_op_process_result (&sctx, result); values [ins->dreg] = result; break; } case OP_ARM64_SSHLL: case OP_ARM64_SSHLL2: case OP_ARM64_USHLL: case OP_ARM64_USHLL2: { LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass); gboolean high = FALSE; gboolean is_unsigned = FALSE; switch (ins->opcode) { case OP_ARM64_SSHLL2: high = TRUE; break; case OP_ARM64_USHLL2: high = TRUE; case OP_ARM64_USHLL: is_unsigned = TRUE; break; } LLVMValueRef result = lhs; if (high) result = extract_high_elements (ctx, result); if (is_unsigned) result = LLVMBuildZExt (builder, result, ret_t, "arm64_ushll"); else result = LLVMBuildSExt (builder, result, ret_t, "arm64_ushll"); result = LLVMBuildShl (builder, result, create_shift_vector (ctx, result, rhs), ""); values [ins->dreg] = result; break; } case OP_ARM64_SLI: case OP_ARM64_SRI: { LLVMTypeRef intrin_result_t = simd_class_to_llvm_type (ctx, ins->klass); llvm_ovr_tag_t ovr_tag = ovr_tag_from_llvm_type (intrin_result_t); unsigned int element_bits = mono_llvm_get_prim_size_bits (LLVMGetElementType (intrin_result_t)); int 
range_min = 0; int range_max = element_bits - 1; if (ins->opcode == OP_ARM64_SRI) { ++range_min; ++range_max; } int iid = ins->opcode == OP_ARM64_SRI ? INTRINS_AARCH64_ADV_SIMD_SRI : INTRINS_AARCH64_ADV_SIMD_SLI; int max_index = range_max - range_min + 1; ImmediateUnrollCtx ictx = immediate_unroll_begin (ctx, bb, max_index, arg3, intrin_result_t, "arm64_ext"); LLVMValueRef intrin_args [3] = { lhs, rhs, arg3 }; int i = 0; while (immediate_unroll_next (&ictx, &i)) { int shift_const = i + range_min; intrin_args [2] = const_int32 (shift_const); LLVMValueRef result = call_overloaded_intrins (ctx, iid, ovr_tag, intrin_args, ""); immediate_unroll_commit (&ictx, shift_const, result); } immediate_unroll_default (&ictx); immediate_unroll_commit_default (&ictx, lhs); LLVMValueRef result = immediate_unroll_end (&ictx, &cbb); values [ins->dreg] = result; break; } case OP_ARM64_SQRT_SCALAR: { int iid = ins->inst_c0 == MONO_TYPE_R8 ? INTRINS_SQRT : INTRINS_SQRTF; LLVMTypeRef t = LLVMTypeOf (lhs); LLVMValueRef scalar = LLVMBuildExtractElement (builder, lhs, const_int32 (0), ""); LLVMValueRef result = call_intrins (ctx, iid, &scalar, "arm64_sqrt_scalar"); values [ins->dreg] = LLVMBuildInsertElement (builder, LLVMGetUndef (t), result, const_int32 (0), ""); break; } case OP_ARM64_STP: case OP_ARM64_STP_SCALAR: case OP_ARM64_STNP: case OP_ARM64_STNP_SCALAR: { gboolean nontemporal = FALSE; gboolean scalar = FALSE; switch (ins->opcode) { case OP_ARM64_STNP: nontemporal = TRUE; break; case OP_ARM64_STNP_SCALAR: nontemporal = TRUE; scalar = TRUE; break; case OP_ARM64_STP_SCALAR: scalar = TRUE; break; } LLVMTypeRef rhs_t = LLVMTypeOf (rhs); LLVMValueRef val = NULL; LLVMTypeRef dst_t = LLVMPointerType (rhs_t, 0); if (scalar) val = LLVMBuildShuffleVector (builder, rhs, arg3, create_const_vector_2_i32 (0, 2), ""); else { unsigned int rhs_elems = LLVMGetVectorSize (rhs_t); LLVMTypeRef rhs_elt_t = LLVMGetElementType (rhs_t); dst_t = LLVMPointerType (LLVMVectorType (rhs_elt_t, rhs_elems * 2), 0); val = concatenate_vectors (ctx, rhs, arg3); } LLVMValueRef address = convert (ctx, lhs, dst_t); LLVMValueRef store = mono_llvm_build_store (builder, val, address, FALSE, LLVM_BARRIER_NONE); if (nontemporal) set_nontemporal_flag (store); break; } case OP_ARM64_LD1_INSERT: { LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass); LLVMTypeRef elem_t = LLVMGetElementType (ret_t); LLVMValueRef address = convert (ctx, arg3, LLVMPointerType (elem_t, 0)); unsigned int alignment = mono_llvm_get_prim_size_bits (ret_t) / 8; LLVMValueRef result = mono_llvm_build_aligned_load (builder, address, "arm64_ld1_insert", FALSE, alignment); result = LLVMBuildInsertElement (builder, lhs, result, rhs, "arm64_ld1_insert"); values [ins->dreg] = result; break; } case OP_ARM64_LD1R: case OP_ARM64_LD1: { gboolean replicate = ins->opcode == OP_ARM64_LD1R; LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass); unsigned int alignment = mono_llvm_get_prim_size_bits (ret_t) / 8; LLVMValueRef address = lhs; LLVMTypeRef address_t = LLVMPointerType (ret_t, 0); if (replicate) { LLVMTypeRef elem_t = LLVMGetElementType (ret_t); address_t = LLVMPointerType (elem_t, 0); } address = convert (ctx, address, address_t); LLVMValueRef result = mono_llvm_build_aligned_load (builder, address, "arm64_ld1", FALSE, alignment); if (replicate) { unsigned int elems = LLVMGetVectorSize (ret_t); result = broadcast_element (ctx, result, elems); } values [ins->dreg] = result; break; } case OP_ARM64_LDNP: case OP_ARM64_LDNP_SCALAR: case OP_ARM64_LDP: case 
OP_ARM64_LDP_SCALAR: { const char *oname = NULL; gboolean nontemporal = FALSE; gboolean scalar = FALSE; switch (ins->opcode) { case OP_ARM64_LDNP: oname = "arm64_ldnp"; nontemporal = TRUE; break; case OP_ARM64_LDNP_SCALAR: oname = "arm64_ldnp_scalar"; nontemporal = TRUE; scalar = TRUE; break; case OP_ARM64_LDP: oname = "arm64_ldp"; break; case OP_ARM64_LDP_SCALAR: oname = "arm64_ldp_scalar"; scalar = TRUE; break; } if (!addresses [ins->dreg]) addresses [ins->dreg] = build_named_alloca (ctx, m_class_get_byval_arg (ins->klass), oname); LLVMTypeRef ret_t = simd_valuetuple_to_llvm_type (ctx, ins->klass); LLVMTypeRef vec_t = LLVMGetElementType (ret_t); LLVMValueRef ix = const_int32 (1); LLVMTypeRef src_t = LLVMPointerType (scalar ? LLVMGetElementType (vec_t) : vec_t, 0); LLVMValueRef src0 = convert (ctx, lhs, src_t); LLVMValueRef src1 = LLVMBuildGEP (builder, src0, &ix, 1, oname); LLVMValueRef vals [] = { src0, src1 }; for (int i = 0; i < 2; ++i) { vals [i] = LLVMBuildLoad (builder, vals [i], oname); if (nontemporal) set_nontemporal_flag (vals [i]); } unsigned int vec_sz = mono_llvm_get_prim_size_bits (vec_t); if (scalar) { g_assert (vec_sz == 64); LLVMValueRef undef = LLVMGetUndef (vec_t); for (int i = 0; i < 2; ++i) vals [i] = LLVMBuildInsertElement (builder, undef, vals [i], const_int32 (0), oname); } LLVMValueRef val = LLVMGetUndef (ret_t); for (int i = 0; i < 2; ++i) val = LLVMBuildInsertValue (builder, val, vals [i], i, oname); LLVMTypeRef retptr_t = LLVMPointerType (ret_t, 0); LLVMValueRef dst = convert (ctx, addresses [ins->dreg], retptr_t); LLVMBuildStore (builder, val, dst); values [ins->dreg] = vec_sz == 64 ? val : NULL; break; } case OP_ARM64_ST1: { LLVMTypeRef t = LLVMTypeOf (rhs); LLVMValueRef address = convert (ctx, lhs, LLVMPointerType (t, 0)); unsigned int alignment = mono_llvm_get_prim_size_bits (t) / 8; mono_llvm_build_aligned_store (builder, rhs, address, FALSE, alignment); break; } case OP_ARM64_ST1_SCALAR: { LLVMTypeRef t = LLVMGetElementType (LLVMTypeOf (rhs)); LLVMValueRef val = LLVMBuildExtractElement (builder, rhs, arg3, "arm64_st1_scalar"); LLVMValueRef address = convert (ctx, lhs, LLVMPointerType (t, 0)); unsigned int alignment = mono_llvm_get_prim_size_bits (t) / 8; mono_llvm_build_aligned_store (builder, val, address, FALSE, alignment); break; } case OP_ARM64_ADDHN: case OP_ARM64_ADDHN2: case OP_ARM64_SUBHN: case OP_ARM64_SUBHN2: case OP_ARM64_RADDHN: case OP_ARM64_RADDHN2: case OP_ARM64_RSUBHN: case OP_ARM64_RSUBHN2: { LLVMValueRef args [2] = { lhs, rhs }; gboolean high = FALSE; gboolean subtract = FALSE; int iid = 0; switch (ins->opcode) { case OP_ARM64_ADDHN2: high = TRUE; case OP_ARM64_ADDHN: break; case OP_ARM64_SUBHN2: high = TRUE; case OP_ARM64_SUBHN: subtract = TRUE; break; case OP_ARM64_RSUBHN2: high = TRUE; case OP_ARM64_RSUBHN: iid = INTRINS_AARCH64_ADV_SIMD_RSUBHN; break; case OP_ARM64_RADDHN2: high = TRUE; case OP_ARM64_RADDHN: iid = INTRINS_AARCH64_ADV_SIMD_RADDHN; break; } llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass); if (high) { args [0] = rhs; args [1] = arg3; ovr_tag = ovr_tag_smaller_vector (ovr_tag); } LLVMValueRef result = NULL; if (iid != 0) result = call_overloaded_intrins (ctx, iid, ovr_tag, args, ""); else { LLVMTypeRef t = LLVMTypeOf (args [0]); LLVMTypeRef elt_t = LLVMGetElementType (t); unsigned int elems = LLVMGetVectorSize (t); unsigned int elem_bits = mono_llvm_get_prim_size_bits (elt_t); if (subtract) result = LLVMBuildSub (builder, args [0], args [1], ""); else result = LLVMBuildAdd (builder, args [0], args 
[1], ""); result = LLVMBuildLShr (builder, result, broadcast_constant (elem_bits / 2, elt_t, elems), ""); result = LLVMBuildTrunc (builder, result, LLVMVectorType (LLVMIntType (elem_bits / 2), elems), ""); } if (high) result = concatenate_vectors (ctx, lhs, result); values [ins->dreg] = result; break; } case OP_ARM64_SADD: case OP_ARM64_UADD: case OP_ARM64_SADD2: case OP_ARM64_UADD2: case OP_ARM64_SSUB: case OP_ARM64_USUB: case OP_ARM64_SSUB2: case OP_ARM64_USUB2: { LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass); gboolean is_unsigned = FALSE; gboolean high = FALSE; gboolean subtract = FALSE; switch (ins->opcode) { case OP_ARM64_SADD2: high = TRUE; case OP_ARM64_SADD: break; case OP_ARM64_UADD2: high = TRUE; case OP_ARM64_UADD: is_unsigned = TRUE; break; case OP_ARM64_SSUB2: high = TRUE; case OP_ARM64_SSUB: subtract = TRUE; break; case OP_ARM64_USUB2: high = TRUE; case OP_ARM64_USUB: subtract = TRUE; is_unsigned = TRUE; break; } LLVMValueRef args [] = { lhs, rhs }; for (int i = 0; i < 2; ++i) { LLVMValueRef arg = args [i]; LLVMTypeRef arg_t = LLVMTypeOf (arg); if (high && arg_t != ret_t) arg = extract_high_elements (ctx, arg); if (is_unsigned) arg = LLVMBuildZExt (builder, arg, ret_t, ""); else arg = LLVMBuildSExt (builder, arg, ret_t, ""); args [i] = arg; } LLVMValueRef result = NULL; if (subtract) result = LLVMBuildSub (builder, args [0], args [1], "arm64_sub"); else result = LLVMBuildAdd (builder, args [0], args [1], "arm64_add"); values [ins->dreg] = result; break; } case OP_ARM64_SABAL: case OP_ARM64_SABAL2: case OP_ARM64_UABAL: case OP_ARM64_UABAL2: case OP_ARM64_SABDL: case OP_ARM64_SABDL2: case OP_ARM64_UABDL: case OP_ARM64_UABDL2: case OP_ARM64_SABA: case OP_ARM64_UABA: case OP_ARM64_SABD: case OP_ARM64_UABD: { LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass); gboolean is_unsigned = FALSE; gboolean high = FALSE; gboolean add = FALSE; gboolean widen = FALSE; switch (ins->opcode) { case OP_ARM64_SABAL2: high = TRUE; case OP_ARM64_SABAL: widen = TRUE; add = TRUE; break; case OP_ARM64_UABAL2: high = TRUE; case OP_ARM64_UABAL: widen = TRUE; add = TRUE; is_unsigned = TRUE; break; case OP_ARM64_SABDL2: high = TRUE; case OP_ARM64_SABDL: widen = TRUE; break; case OP_ARM64_UABDL2: high = TRUE; case OP_ARM64_UABDL: widen = TRUE; is_unsigned = TRUE; break; case OP_ARM64_SABA: add = TRUE; break; case OP_ARM64_UABA: add = TRUE; is_unsigned = TRUE; break; case OP_ARM64_UABD: is_unsigned = TRUE; break; } LLVMValueRef args [] = { lhs, rhs }; if (add) { args [0] = rhs; args [1] = arg3; } if (high) for (int i = 0; i < 2; ++i) args [i] = extract_high_elements (ctx, args [i]); int iid = is_unsigned ? INTRINS_AARCH64_ADV_SIMD_UABD : INTRINS_AARCH64_ADV_SIMD_SABD; llvm_ovr_tag_t ovr_tag = ovr_tag_from_llvm_type (LLVMTypeOf (args [0])); LLVMValueRef result = call_overloaded_intrins (ctx, iid, ovr_tag, args, ""); if (widen) result = LLVMBuildZExt (builder, result, ret_t, ""); if (add) result = LLVMBuildAdd (builder, result, lhs, ""); values [ins->dreg] = result; break; } case OP_ARM64_XHORIZ: { gboolean truncate = FALSE; LLVMTypeRef arg_t = LLVMTypeOf (lhs); LLVMTypeRef elem_t = LLVMGetElementType (arg_t); LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass); llvm_ovr_tag_t ovr_tag = ovr_tag_from_llvm_type (arg_t); if (elem_t == i1_t || elem_t == i2_t) truncate = TRUE; LLVMValueRef result = call_overloaded_intrins (ctx, ins->inst_c0, ovr_tag, &lhs, ""); if (truncate) { // @llvm.aarch64.neon.saddv.i32.v8i16 ought to return an i16, but doesn't in LLVM 9. 
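// Truncate the widened result back to the element type before it is packed into the result vector.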
result = LLVMBuildTrunc (builder, result, elem_t, ""); } result = vector_from_scalar (ctx, ret_t, result); values [ins->dreg] = result; break; } case OP_ARM64_SADDLV: case OP_ARM64_UADDLV: { LLVMTypeRef arg_t = LLVMTypeOf (lhs); LLVMTypeRef elem_t = LLVMGetElementType (arg_t); LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass); llvm_ovr_tag_t ovr_tag = ovr_tag_from_llvm_type (arg_t); gboolean truncate = elem_t == i1_t; int iid = ins->opcode == OP_ARM64_UADDLV ? INTRINS_AARCH64_ADV_SIMD_UADDLV : INTRINS_AARCH64_ADV_SIMD_SADDLV; LLVMValueRef result = call_overloaded_intrins (ctx, iid, ovr_tag, &lhs, ""); if (truncate) { // @llvm.aarch64.neon.saddlv.i32.v16i8 ought to return an i16, but doesn't in LLVM 9. result = LLVMBuildTrunc (builder, result, i2_t, ""); } result = vector_from_scalar (ctx, ret_t, result); values [ins->dreg] = result; break; } case OP_ARM64_UADALP: case OP_ARM64_SADALP: { llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass); int iid = ins->opcode == OP_ARM64_UADALP ? INTRINS_AARCH64_ADV_SIMD_UADDLP : INTRINS_AARCH64_ADV_SIMD_SADDLP; LLVMValueRef result = call_overloaded_intrins (ctx, iid, ovr_tag, &rhs, ""); result = LLVMBuildAdd (builder, result, lhs, ""); values [ins->dreg] = result; break; } case OP_ARM64_ADDP_SCALAR: { llvm_ovr_tag_t ovr_tag = INTRIN_vector128 | INTRIN_int64; LLVMValueRef result = call_overloaded_intrins (ctx, INTRINS_AARCH64_ADV_SIMD_UADDV, ovr_tag, &lhs, "arm64_addp_scalar"); result = LLVMBuildInsertElement (builder, LLVMConstNull (v64_i8_t), result, const_int32 (0), ""); values [ins->dreg] = result; break; } case OP_ARM64_FADDP_SCALAR: { LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass); LLVMValueRef hi = LLVMBuildExtractElement (builder, lhs, const_int32 (0), ""); LLVMValueRef lo = LLVMBuildExtractElement (builder, lhs, const_int32 (1), ""); LLVMValueRef result = LLVMBuildFAdd (builder, hi, lo, "arm64_faddp_scalar"); result = LLVMBuildInsertElement (builder, LLVMConstNull (ret_t), result, const_int32 (0), ""); values [ins->dreg] = result; break; } case OP_ARM64_SXTL: case OP_ARM64_SXTL2: case OP_ARM64_UXTL: case OP_ARM64_UXTL2: { gboolean high = FALSE; gboolean is_unsigned = FALSE; switch (ins->opcode) { case OP_ARM64_SXTL2: high = TRUE; break; case OP_ARM64_UXTL2: high = TRUE; case OP_ARM64_UXTL: is_unsigned = TRUE; break; } LLVMTypeRef t = LLVMTypeOf (lhs); unsigned int elem_bits = LLVMGetIntTypeWidth (LLVMGetElementType (t)); unsigned int src_elems = LLVMGetVectorSize (t); unsigned int dst_elems = src_elems; LLVMValueRef arg = lhs; if (high) { arg = extract_high_elements (ctx, lhs); dst_elems = LLVMGetVectorSize (LLVMTypeOf (arg)); } LLVMTypeRef result_t = LLVMVectorType (LLVMIntType (elem_bits * 2), dst_elems); LLVMValueRef result = NULL; if (is_unsigned) result = LLVMBuildZExt (builder, arg, result_t, "arm64_uxtl"); else result = LLVMBuildSExt (builder, arg, result_t, "arm64_sxtl"); values [ins->dreg] = result; break; } case OP_ARM64_TRN1: case OP_ARM64_TRN2: { gboolean high = ins->opcode == OP_ARM64_TRN2; LLVMTypeRef t = LLVMTypeOf (lhs); unsigned int src_elems = LLVMGetVectorSize (t); int mask [MAX_VECTOR_ELEMS] = { 0 }; int laneix = high ? 
1 : 0; for (unsigned int i = 0; i < src_elems; i += 2) { mask [i] = laneix; mask [i + 1] = laneix + src_elems; laneix += 2; } values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs, rhs, create_const_vector_i32 (mask, src_elems), "arm64_uzp"); break; } case OP_ARM64_UZP1: case OP_ARM64_UZP2: { gboolean high = ins->opcode == OP_ARM64_UZP2; LLVMTypeRef t = LLVMTypeOf (lhs); unsigned int src_elems = LLVMGetVectorSize (t); int mask [MAX_VECTOR_ELEMS] = { 0 }; int laneix = high ? 1 : 0; for (unsigned int i = 0; i < src_elems; ++i) { mask [i] = laneix; laneix += 2; } values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs, rhs, create_const_vector_i32 (mask, src_elems), "arm64_uzp"); break; } case OP_ARM64_ZIP1: case OP_ARM64_ZIP2: { gboolean high = ins->opcode == OP_ARM64_ZIP2; LLVMTypeRef t = LLVMTypeOf (lhs); unsigned int src_elems = LLVMGetVectorSize (t); int mask [MAX_VECTOR_ELEMS] = { 0 }; int laneix = high ? src_elems / 2 : 0; for (unsigned int i = 0; i < src_elems; i += 2) { mask [i] = laneix; mask [i + 1] = laneix + src_elems; ++laneix; } values [ins->dreg] = LLVMBuildShuffleVector (builder, lhs, rhs, create_const_vector_i32 (mask, src_elems), "arm64_zip"); break; } case OP_ARM64_ABSCOMPARE: { IntrinsicId iid = (IntrinsicId) ins->inst_c0; gboolean scalar = ins->inst_c1; LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass); LLVMTypeRef elem_t = LLVMGetElementType (ret_t); llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass); ovr_tag = ovr_tag_corresponding_integer (ovr_tag); LLVMValueRef args [] = { lhs, rhs }; LLVMTypeRef result_t = ret_t; if (scalar) { ovr_tag = ovr_tag_force_scalar (ovr_tag); result_t = elem_t; for (int i = 0; i < 2; ++i) args [i] = scalar_from_vector (ctx, args [i]); } LLVMValueRef result = call_overloaded_intrins (ctx, iid, ovr_tag, args, ""); result = LLVMBuildBitCast (builder, result, result_t, ""); if (scalar) result = vector_from_scalar (ctx, ret_t, result); values [ins->dreg] = result; break; } case OP_XOP_OVR_X_X: { IntrinsicId iid = (IntrinsicId) ins->inst_c0; llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass); values [ins->dreg] = call_overloaded_intrins (ctx, iid, ovr_tag, &lhs, ""); break; } case OP_XOP_OVR_X_X_X: { IntrinsicId iid = (IntrinsicId) ins->inst_c0; llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass); LLVMValueRef args [] = { lhs, rhs }; values [ins->dreg] = call_overloaded_intrins (ctx, iid, ovr_tag, args, ""); break; } case OP_XOP_OVR_X_X_X_X: { IntrinsicId iid = (IntrinsicId) ins->inst_c0; llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass); LLVMValueRef args [] = { lhs, rhs, arg3 }; values [ins->dreg] = call_overloaded_intrins (ctx, iid, ovr_tag, args, ""); break; } case OP_XOP_OVR_BYSCALAR_X_X_X: { IntrinsicId iid = (IntrinsicId) ins->inst_c0; llvm_ovr_tag_t ovr_tag = ovr_tag_from_mono_vector_class (ins->klass); LLVMTypeRef t = LLVMTypeOf (lhs); unsigned int elems = LLVMGetVectorSize (t); LLVMValueRef arg2 = broadcast_element (ctx, scalar_from_vector (ctx, rhs), elems); LLVMValueRef args [] = { lhs, arg2 }; values [ins->dreg] = call_overloaded_intrins (ctx, iid, ovr_tag, args, ""); break; } case OP_XOP_OVR_SCALAR_X_X: case OP_XOP_OVR_SCALAR_X_X_X: case OP_XOP_OVR_SCALAR_X_X_X_X: { int num_args = 0; IntrinsicId iid = (IntrinsicId) ins->inst_c0; LLVMTypeRef ret_t = simd_class_to_llvm_type (ctx, ins->klass); switch (ins->opcode) { case OP_XOP_OVR_SCALAR_X_X: num_args = 1; break; case OP_XOP_OVR_SCALAR_X_X_X: num_args = 2; break; case OP_XOP_OVR_SCALAR_X_X_X_X: 
num_args = 3; break; } /* LLVM 9 NEON intrinsic functions have scalar overloads. Unfortunately * only overloads for 32 and 64-bit integers and floating point types are * supported. 8 and 16-bit integers are unsupported, and will fail during * instruction selection. This is worked around by using a vector * operation and then explicitly clearing the upper bits of the register. */ ScalarOpFromVectorOpCtx sctx = scalar_op_from_vector_op (ctx, ret_t, ins); LLVMValueRef args [3] = { lhs, rhs, arg3 }; scalar_op_from_vector_op_process_args (&sctx, args, num_args); LLVMValueRef result = call_overloaded_intrins (ctx, iid, sctx.ovr_tag, args, ""); result = scalar_op_from_vector_op_process_result (&sctx, result); values [ins->dreg] = result; break; } #endif case OP_DUMMY_USE: break; /* * EXCEPTION HANDLING */ case OP_IMPLICIT_EXCEPTION: /* This marks a place where an implicit exception can happen */ if (bb->region != -1) set_failure (ctx, "implicit-exception"); break; case OP_THROW: case OP_RETHROW: { gboolean rethrow = (ins->opcode == OP_RETHROW); if (ctx->llvm_only) { emit_llvmonly_throw (ctx, bb, rethrow, lhs); has_terminator = TRUE; ctx->unreachable [bb->block_num] = TRUE; } else { emit_throw (ctx, bb, rethrow, lhs); builder = ctx->builder; } break; } case OP_CALL_HANDLER: { /* * We don't 'call' handlers, but instead simply branch to them. * The code generated by ENDFINALLY will branch back to us. */ LLVMBasicBlockRef noex_bb; GSList *bb_list; BBInfo *info = &bblocks [ins->inst_target_bb->block_num]; bb_list = info->call_handler_return_bbs; /* * Set the indicator variable for the finally clause. */ lhs = info->finally_ind; g_assert (lhs); LLVMBuildStore (builder, LLVMConstInt (LLVMInt32Type (), g_slist_length (bb_list) + 1, FALSE), lhs); /* Branch to the finally clause */ LLVMBuildBr (builder, info->call_handler_target_bb); noex_bb = gen_bb (ctx, "CALL_HANDLER_CONT_BB"); info->call_handler_return_bbs = g_slist_append_mempool (cfg->mempool, info->call_handler_return_bbs, noex_bb); builder = ctx->builder = create_builder (ctx); LLVMPositionBuilderAtEnd (ctx->builder, noex_bb); bblocks [bb->block_num].end_bblock = noex_bb; break; } case OP_START_HANDLER: { break; } case OP_ENDFINALLY: { LLVMBasicBlockRef resume_bb; MonoBasicBlock *handler_bb; LLVMValueRef val, switch_ins, callee; GSList *bb_list; BBInfo *info; gboolean is_fault = MONO_REGION_FLAGS (bb->region) == MONO_EXCEPTION_CLAUSE_FAULT; /* * Fault clauses are like finally clauses, but they are only called if an exception is thrown. */ if (!is_fault) { handler_bb = (MonoBasicBlock*)g_hash_table_lookup (ctx->region_to_handler, GUINT_TO_POINTER (mono_get_block_region_notry (cfg, bb->region))); g_assert (handler_bb); info = &bblocks [handler_bb->block_num]; lhs = info->finally_ind; g_assert (lhs); bb_list = info->call_handler_return_bbs; resume_bb = gen_bb (ctx, "ENDFINALLY_RESUME_BB"); /* Load the finally variable */ val = LLVMBuildLoad (builder, lhs, ""); /* Reset the variable */ LLVMBuildStore (builder, LLVMConstInt (LLVMInt32Type (), 0, FALSE), lhs); /* Branch to either resume_bb, or to the bblocks in bb_list */ switch_ins = LLVMBuildSwitch (builder, val, resume_bb, g_slist_length (bb_list)); /* * The other targets are added at the end to handle OP_CALL_HANDLER * opcodes processed later. 
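* As a rough sketch (block names and constants illustrative, not the exact IR we build), each OP_CALL_HANDLER site k above does: * store i32 <k+1>, i32* %finally_ind * br label %<call_handler_target_bb> * and the switch built here dispatches back on the stored value: * switch i32 %val, label %ENDFINALLY_RESUME_BB [ i32 1, label %CONT1 i32 2, label %CONT2 ... ]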
*/ info->endfinally_switch_ins_list = g_slist_append_mempool (cfg->mempool, info->endfinally_switch_ins_list, switch_ins); builder = ctx->builder = create_builder (ctx); LLVMPositionBuilderAtEnd (ctx->builder, resume_bb); } if (ctx->llvm_only) { if (!cfg->deopt) { emit_resume_eh (ctx, bb); } else { /* Not needed */ LLVMBuildUnreachable (builder); } } else { LLVMTypeRef icall_sig = LLVMFunctionType (LLVMVoidType (), NULL, 0, FALSE); if (ctx->cfg->compile_aot) { callee = get_callee (ctx, icall_sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (MONO_JIT_ICALL_mono_llvm_resume_unwind_trampoline)); } else { callee = get_jit_callee (ctx, "llvm_resume_unwind_trampoline", icall_sig, MONO_PATCH_INFO_JIT_ICALL_ID, GUINT_TO_POINTER (MONO_JIT_ICALL_mono_llvm_resume_unwind_trampoline)); } LLVMBuildCall (builder, callee, NULL, 0, ""); LLVMBuildUnreachable (builder); } has_terminator = TRUE; break; } case OP_ENDFILTER: { g_assert (cfg->llvm_only && cfg->deopt); LLVMBuildUnreachable (builder); has_terminator = TRUE; break; } case OP_IL_SEQ_POINT: break; default: { char reason [128]; sprintf (reason, "opcode %s", mono_inst_name (ins->opcode)); set_failure (ctx, reason); break; } } if (!ctx_ok (ctx)) break; /* Convert the value to the type required by phi nodes */ if (spec [MONO_INST_DEST] != ' ' && !MONO_IS_STORE_MEMBASE (ins) && ctx->vreg_types [ins->dreg]) { if (ctx->is_vphi [ins->dreg]) /* vtypes */ values [ins->dreg] = addresses [ins->dreg]; else values [ins->dreg] = convert (ctx, values [ins->dreg], ctx->vreg_types [ins->dreg]); } /* Add stores for volatile/ref variables */ if (spec [MONO_INST_DEST] != ' ' && spec [MONO_INST_DEST] != 'v' && !MONO_IS_STORE_MEMBASE (ins)) { if (!skip_volatile_store) emit_volatile_store (ctx, ins->dreg); #ifdef TARGET_WASM if (vreg_is_ref (cfg, ins->dreg) && ctx->values [ins->dreg]) emit_gc_pin (ctx, builder, ins->dreg); #endif } } if (!ctx_ok (ctx)) return; if (!has_terminator && bb->next_bb && (bb == cfg->bb_entry || bb->in_count > 0)) { LLVMBuildBr (builder, get_bb (ctx, bb->next_bb)); } if (bb == cfg->bb_exit && sig->ret->type == MONO_TYPE_VOID) { emit_dbg_loc (ctx, builder, cfg->header->code + cfg->header->code_size - 1); LLVMBuildRetVoid (builder); } if (bb == cfg->bb_entry) ctx->last_alloca = LLVMGetLastInstruction (get_bb (ctx, cfg->bb_entry)); } /* * mono_llvm_check_method_supported: * * Do some quick checks to decide whether cfg->method can be compiled by LLVM, to avoid * compiling a method twice. */ void mono_llvm_check_method_supported (MonoCompile *cfg) { int i, j; #ifdef TARGET_WASM if (mono_method_signature_internal (cfg->method)->call_convention == MONO_CALL_VARARG) { cfg->exception_message = g_strdup ("vararg callconv"); cfg->disable_llvm = TRUE; return; } #endif if (cfg->llvm_only) return; if (cfg->method->save_lmf) { cfg->exception_message = g_strdup ("lmf"); cfg->disable_llvm = TRUE; } if (cfg->disable_llvm) return; /* * Nested clauses where one of the clauses is a finally clause are * not supported, because LLVM can't figure out the control flow, * probably because we resume exception handling by calling our * own function instead of using the 'resume' llvm instruction. */ for (i = 0; i < cfg->header->num_clauses; ++i) { for (j = 0; j < cfg->header->num_clauses; ++j) { MonoExceptionClause *clause1 = &cfg->header->clauses [i]; MonoExceptionClause *clause2 = &cfg->header->clauses [j]; // FIXME: Nested try clauses fail in some cases too, i.e.
#37273 if (i != j && clause1->try_offset >= clause2->try_offset && clause1->handler_offset <= clause2->handler_offset) { //(clause1->flags == MONO_EXCEPTION_CLAUSE_FINALLY || clause2->flags == MONO_EXCEPTION_CLAUSE_FINALLY)) { cfg->exception_message = g_strdup ("nested clauses"); cfg->disable_llvm = TRUE; break; } } } if (cfg->disable_llvm) return; /* FIXME: */ if (cfg->method->dynamic) { cfg->exception_message = g_strdup ("dynamic."); cfg->disable_llvm = TRUE; } if (cfg->disable_llvm) return; } static LLVMCallInfo* get_llvm_call_info (MonoCompile *cfg, MonoMethodSignature *sig) { LLVMCallInfo *linfo; int i; if (cfg->gsharedvt && cfg->llvm_only && mini_is_gsharedvt_variable_signature (sig)) { int i, n, pindex; /* * Gsharedvt methods have the following calling convention: * - all arguments are passed by ref, even non generic ones * - the return value is returned by ref too, using a vret * argument passed after 'this'. */ n = sig->param_count + sig->hasthis; linfo = (LLVMCallInfo*)mono_mempool_alloc0 (cfg->mempool, sizeof (LLVMCallInfo) + (sizeof (LLVMArgInfo) * n)); pindex = 0; if (sig->hasthis) linfo->args [pindex ++].storage = LLVMArgNormal; if (sig->ret->type != MONO_TYPE_VOID) { if (mini_is_gsharedvt_variable_type (sig->ret)) linfo->ret.storage = LLVMArgGsharedvtVariable; else if (mini_type_is_vtype (sig->ret)) linfo->ret.storage = LLVMArgGsharedvtFixedVtype; else linfo->ret.storage = LLVMArgGsharedvtFixed; linfo->vret_arg_index = pindex; } else { linfo->ret.storage = LLVMArgNone; } for (i = 0; i < sig->param_count; ++i) { if (m_type_is_byref (sig->params [i])) linfo->args [pindex].storage = LLVMArgNormal; else if (mini_is_gsharedvt_variable_type (sig->params [i])) linfo->args [pindex].storage = LLVMArgGsharedvtVariable; else if (mini_type_is_vtype (sig->params [i])) linfo->args [pindex].storage = LLVMArgGsharedvtFixedVtype; else linfo->args [pindex].storage = LLVMArgGsharedvtFixed; linfo->args [pindex].type = sig->params [i]; pindex ++; } return linfo; } linfo = mono_arch_get_llvm_call_info (cfg, sig); linfo->dummy_arg_pindex = -1; for (i = 0; i < sig->param_count; ++i) linfo->args [i + sig->hasthis].type = sig->params [i]; return linfo; } static void emit_method_inner (EmitContext *ctx); static void free_ctx (EmitContext *ctx) { GSList *l; g_free (ctx->values); g_free (ctx->addresses); g_free (ctx->vreg_types); g_free (ctx->is_vphi); g_free (ctx->vreg_cli_types); g_free (ctx->is_dead); g_free (ctx->unreachable); g_free (ctx->gc_var_indexes); g_ptr_array_free (ctx->phi_values, TRUE); g_free (ctx->bblocks); g_hash_table_destroy (ctx->region_to_handler); g_hash_table_destroy (ctx->clause_to_handler); g_hash_table_destroy (ctx->jit_callees); g_ptr_array_free (ctx->callsite_list, TRUE); g_free (ctx->method_name); g_ptr_array_free (ctx->bblock_list, TRUE); for (l = ctx->builders; l; l = l->next) { LLVMBuilderRef builder = (LLVMBuilderRef)l->data; LLVMDisposeBuilder (builder); } g_free (ctx); } static gboolean is_linkonce_method (MonoMethod *method) { #ifdef TARGET_WASM /* * Under wasm, linkonce works, so use it instead of the dedup pass for wrappers at least. * FIXME: Use for everything, i.e. can_dedup (). * FIXME: Fails System.Core tests * -> amodule->sorted_methods contains duplicates, screwing up jit tables. 
*/ // FIXME: This works, but the aot data for the methods is still kept, so size still increases #if 0 if (method->wrapper_type == MONO_WRAPPER_OTHER) { WrapperInfo *info = mono_marshal_get_wrapper_info (method); if (info->subtype == WRAPPER_SUBTYPE_GSHAREDVT_IN_SIG || info->subtype == WRAPPER_SUBTYPE_GSHAREDVT_OUT_SIG) return TRUE; } #endif #endif return FALSE; } /* * mono_llvm_emit_method: * * Emit LLVM IR from the mono IL, and compile it to native code using LLVM. */ void mono_llvm_emit_method (MonoCompile *cfg) { EmitContext *ctx; char *method_name; gboolean is_linkonce = FALSE; int i; if (cfg->skip) return; /* The code below might acquire the loader lock, so use it for global locking */ mono_loader_lock (); ctx = g_new0 (EmitContext, 1); ctx->cfg = cfg; ctx->mempool = cfg->mempool; /* * This maps vregs to the LLVM instruction defining them */ ctx->values = g_new0 (LLVMValueRef, cfg->next_vreg); /* * This maps vregs for volatile variables to the LLVM instruction defining their * address. */ ctx->addresses = g_new0 (LLVMValueRef, cfg->next_vreg); ctx->vreg_types = g_new0 (LLVMTypeRef, cfg->next_vreg); ctx->is_vphi = g_new0 (gboolean, cfg->next_vreg); ctx->vreg_cli_types = g_new0 (MonoType*, cfg->next_vreg); ctx->phi_values = g_ptr_array_sized_new (256); /* * This signals whether the vreg was defined by a phi node with no input vars * (i.e. all its input bblocks end with NOT_REACHABLE). */ ctx->is_dead = g_new0 (gboolean, cfg->next_vreg); /* Whether the bblock is unreachable */ ctx->unreachable = g_new0 (gboolean, cfg->max_block_num); ctx->bblock_list = g_ptr_array_sized_new (256); ctx->region_to_handler = g_hash_table_new (NULL, NULL); ctx->clause_to_handler = g_hash_table_new (NULL, NULL); ctx->callsite_list = g_ptr_array_new (); ctx->jit_callees = g_hash_table_new (NULL, NULL); if (cfg->compile_aot) { ctx->module = &aot_module; /* * Allow the linker to discard duplicate copies of wrappers, generic instances etc. by using the 'linkonce' * linkage for them. This requires the following: * - the method needs to have a unique mangled name * - llvmonly mode, since the code in aot-runtime.c would initialize got slots in the wrong aot image etc.
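* With 'linkonce' linkage the static linker keeps a single copy of identical definitions, roughly: * define linkonce void @<mangled name>(...) { ... } * (illustrative IR; the linkage is actually applied later via LLVMLinkOnceAnyLinkage when ctx->is_linkonce is set).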
*/ if (ctx->module->llvm_only && ctx->module->static_link && is_linkonce_method (cfg->method)) is_linkonce = TRUE; if (is_linkonce || mono_aot_is_externally_callable (cfg->method)) method_name = mono_aot_get_mangled_method_name (cfg->method); else method_name = mono_aot_get_method_name (cfg); cfg->llvm_method_name = g_strdup (method_name); } else { ctx->module = init_jit_module (); method_name = mono_method_full_name (cfg->method, TRUE); } ctx->method_name = method_name; ctx->is_linkonce = is_linkonce; if (cfg->compile_aot) { ctx->lmodule = ctx->module->lmodule; } else { ctx->lmodule = LLVMModuleCreateWithName (g_strdup_printf ("jit-module-%s", cfg->method->name)); } ctx->llvm_only = ctx->module->llvm_only; #ifdef TARGET_WASM ctx->emit_dummy_arg = TRUE; #endif emit_method_inner (ctx); if (!ctx_ok (ctx)) { if (ctx->lmethod) { /* Need to add unused phi nodes as they can be referenced by other values */ LLVMBasicBlockRef phi_bb = LLVMAppendBasicBlock (ctx->lmethod, "PHI_BB"); LLVMBuilderRef builder; builder = create_builder (ctx); LLVMPositionBuilderAtEnd (builder, phi_bb); for (i = 0; i < ctx->phi_values->len; ++i) { LLVMValueRef v = (LLVMValueRef)g_ptr_array_index (ctx->phi_values, i); if (LLVMGetInstructionParent (v) == NULL) LLVMInsertIntoBuilder (builder, v); } if (ctx->module->llvm_only && ctx->module->static_link && cfg->interp) { /* The caller will retry compilation */ LLVMDeleteFunction (ctx->lmethod); } else if (ctx->module->llvm_only && ctx->module->static_link) { // Keep a stub for the function since it might be called directly int nbbs = LLVMCountBasicBlocks (ctx->lmethod); LLVMBasicBlockRef *bblocks = g_new0 (LLVMBasicBlockRef, nbbs); LLVMGetBasicBlocks (ctx->lmethod, bblocks); for (int i = 0; i < nbbs; ++i) LLVMRemoveBasicBlockFromParent (bblocks [i]); LLVMBasicBlockRef entry_bb = LLVMAppendBasicBlock (ctx->lmethod, "ENTRY"); builder = create_builder (ctx); LLVMPositionBuilderAtEnd (builder, entry_bb); ctx->builder = builder; LLVMTypeRef sig = LLVMFunctionType0 (LLVMVoidType (), FALSE); LLVMValueRef callee = get_callee (ctx, sig, MONO_PATCH_INFO_JIT_ICALL_ADDR, GUINT_TO_POINTER (MONO_JIT_ICALL_mini_llvmonly_throw_nullref_exception)); LLVMBuildCall (builder, callee, NULL, 0, ""); LLVMBuildUnreachable (builder); /* Clean references to instructions inside the method */ for (int i = 0; i < ctx->callsite_list->len; ++i) { CallSite *callsite = (CallSite*)g_ptr_array_index (ctx->callsite_list, i); if (callsite->lmethod == ctx->lmethod) callsite->load = NULL; } } else { LLVMDeleteFunction (ctx->lmethod); } } } free_ctx (ctx); mono_loader_unlock (); } static void emit_method_inner (EmitContext *ctx) { MonoCompile *cfg = ctx->cfg; MonoMethodSignature *sig; MonoBasicBlock *bb; LLVMTypeRef method_type; LLVMValueRef method = NULL; LLVMValueRef *values = ctx->values; int i, max_block_num, bb_index; gboolean llvmonly_fail = FALSE; LLVMCallInfo *linfo; LLVMModuleRef lmodule = ctx->lmodule; BBInfo *bblocks; GPtrArray *bblock_list = ctx->bblock_list; MonoMethodHeader *header; MonoExceptionClause *clause; char **names; LLVMBuilderRef entry_builder = NULL; LLVMBasicBlockRef entry_bb = NULL; if (cfg->gsharedvt && !cfg->llvm_only) { set_failure (ctx, "gsharedvt"); return; } #if 0 { static int count = 0; count ++; char *llvm_count_str = g_getenv ("LLVM_COUNT"); if (llvm_count_str) { int lcount = atoi (llvm_count_str); g_free (llvm_count_str); if (count == lcount) { printf ("LAST: %s\n", mono_method_full_name (cfg->method, TRUE)); fflush (stdout); } if (count > lcount) { set_failure (ctx, "count"); 
return; } } } #endif // If we come upon one of the init_method wrappers, we need to find // the method that we have already emitted and tell LLVM that this // managed method info for the wrapper is associated with this method // we constructed ourselves from LLVM IR. // // This is necessary to unwind through the init_method, in the case that // it has to run a static cctor that throws an exception if (cfg->method->wrapper_type == MONO_WRAPPER_OTHER) { WrapperInfo *info = mono_marshal_get_wrapper_info (cfg->method); if (info->subtype == WRAPPER_SUBTYPE_AOT_INIT) { method = get_init_func (ctx->module, info->d.aot_init.subtype); ctx->lmethod = method; ctx->module->max_method_idx = MAX (ctx->module->max_method_idx, cfg->method_index); const char *init_name = mono_marshal_get_aot_init_wrapper_name (info->d.aot_init.subtype); ctx->method_name = g_strdup_printf ("%s_%s", ctx->module->global_prefix, init_name); ctx->cfg->asm_symbol = g_strdup (ctx->method_name); if (!cfg->llvm_only && ctx->module->external_symbols) { LLVMSetLinkage (method, LLVMExternalLinkage); LLVMSetVisibility (method, LLVMHiddenVisibility); } /* Not looked up at runtime */ g_hash_table_insert (ctx->module->no_method_table_lmethods, method, method); goto after_codegen; } else if (info->subtype == WRAPPER_SUBTYPE_LLVM_FUNC) { g_assert (info->d.llvm_func.subtype == LLVM_FUNC_WRAPPER_GC_POLL); if (cfg->compile_aot) { method = ctx->module->gc_poll_cold_wrapper; g_assert (method); } else { method = emit_icall_cold_wrapper (ctx->module, lmodule, MONO_JIT_ICALL_mono_threads_state_poll, FALSE); } ctx->lmethod = method; ctx->module->max_method_idx = MAX (ctx->module->max_method_idx, cfg->method_index); ctx->method_name = g_strdup (LLVMGetValueName (method)); //g_strdup_printf ("%s_%s", ctx->module->global_prefix, LLVMGetValueName (method)); ctx->cfg->asm_symbol = g_strdup (ctx->method_name); if (!cfg->llvm_only && ctx->module->external_symbols) { LLVMSetLinkage (method, LLVMExternalLinkage); LLVMSetVisibility (method, LLVMHiddenVisibility); } goto after_codegen; } } sig = mono_method_signature_internal (cfg->method); ctx->sig = sig; linfo = get_llvm_call_info (cfg, sig); ctx->linfo = linfo; if (!ctx_ok (ctx)) return; if (cfg->rgctx_var) linfo->rgctx_arg = TRUE; else if (needs_extra_arg (ctx, cfg->method)) linfo->dummy_arg = TRUE; ctx->method_type = method_type = sig_to_llvm_sig_full (ctx, sig, linfo); if (!ctx_ok (ctx)) return; method = LLVMAddFunction (lmodule, ctx->method_name, method_type); ctx->lmethod = method; if (!cfg->llvm_only) LLVMSetFunctionCallConv (method, LLVMMono1CallConv); /* If the method contains * (1) no calls (so it's a leaf method) * (2) and no loops, * we can skip the GC safepoint on method entry.
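* For example, a getter that compiles to a single field load has no call and no back edge, so it * can rely on its caller's polls; a loop, on the other hand, could run for an unbounded time * without ever reaching a poll. The poll itself boils down to a (cold) call to mono_threads_state_poll ().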
*/ gboolean requires_safepoint; requires_safepoint = cfg->has_calls; if (!requires_safepoint) { for (bb = cfg->bb_entry->next_bb; bb; bb = bb->next_bb) { if (bb->loop_body_start || (bb->flags & BB_EXCEPTION_HANDLER)) { requires_safepoint = TRUE; } } } if (cfg->method->wrapper_type) { if (cfg->method->wrapper_type == MONO_WRAPPER_ALLOC || cfg->method->wrapper_type == MONO_WRAPPER_WRITE_BARRIER) { requires_safepoint = FALSE; } else { WrapperInfo *info = mono_marshal_get_wrapper_info (cfg->method); switch (info->subtype) { case WRAPPER_SUBTYPE_GSHAREDVT_IN: case WRAPPER_SUBTYPE_GSHAREDVT_OUT: case WRAPPER_SUBTYPE_GSHAREDVT_IN_SIG: case WRAPPER_SUBTYPE_GSHAREDVT_OUT_SIG: /* Arguments are not used after the call */ requires_safepoint = FALSE; break; } } } ctx->has_safepoints = requires_safepoint; if (!cfg->llvm_only && mono_threads_are_safepoints_enabled () && requires_safepoint) { if (!cfg->compile_aot) { LLVMSetGC (method, "coreclr"); emit_gc_safepoint_poll (ctx->module, ctx->lmodule, cfg); } else { LLVMSetGC (method, "coreclr"); } } LLVMSetLinkage (method, LLVMPrivateLinkage); mono_llvm_add_func_attr (method, LLVM_ATTR_UW_TABLE); if (cfg->disable_omit_fp) mono_llvm_add_func_attr_nv (method, "frame-pointer", "all"); if (cfg->compile_aot) { if (mono_aot_is_externally_callable (cfg->method)) { LLVMSetLinkage (method, LLVMExternalLinkage); } else { LLVMSetLinkage (method, LLVMInternalLinkage); //all methods have internal visibility when doing llvm_only if (!cfg->llvm_only && ctx->module->external_symbols) { LLVMSetLinkage (method, LLVMExternalLinkage); LLVMSetVisibility (method, LLVMHiddenVisibility); } } if (ctx->is_linkonce) { LLVMSetLinkage (method, LLVMLinkOnceAnyLinkage); LLVMSetVisibility (method, LLVMDefaultVisibility); } } else { LLVMSetLinkage (method, LLVMExternalLinkage); } if (cfg->method->save_lmf && !cfg->llvm_only) { set_failure (ctx, "lmf"); return; } if (sig->pinvoke && cfg->method->wrapper_type != MONO_WRAPPER_RUNTIME_INVOKE && !cfg->llvm_only) { set_failure (ctx, "pinvoke signature"); return; } #ifdef TARGET_WASM if (ctx->module->interp && cfg->header->code_size > 100000 && !cfg->interp_entry_only) { /* Large methods slow down llvm too much */ set_failure (ctx, "il code too large."); return; } #endif header = cfg->header; for (i = 0; i < header->num_clauses; ++i) { clause = &header->clauses [i]; if (clause->flags != MONO_EXCEPTION_CLAUSE_FINALLY && clause->flags != MONO_EXCEPTION_CLAUSE_FAULT && clause->flags != MONO_EXCEPTION_CLAUSE_NONE) { if (cfg->llvm_only) { if (!cfg->deopt && !cfg->interp_entry_only) llvmonly_fail = TRUE; } else { set_failure (ctx, "non-finally/catch/fault clause."); return; } } } if (header->num_clauses || (cfg->method->iflags & METHOD_IMPL_ATTRIBUTE_NOINLINING) || cfg->no_inline) /* We can't handle inlined methods with clauses */ mono_llvm_add_func_attr (method, LLVM_ATTR_NO_INLINE); for (int i = 0; i < cfg->header->num_clauses; i++) { MonoExceptionClause *clause = &cfg->header->clauses [i]; if (clause->flags == MONO_EXCEPTION_CLAUSE_NONE || clause->flags == MONO_EXCEPTION_CLAUSE_FILTER) ctx->has_catch = TRUE; } if (linfo->rgctx_arg) { ctx->rgctx_arg = LLVMGetParam (method, linfo->rgctx_arg_pindex); ctx->rgctx_arg_pindex = linfo->rgctx_arg_pindex; /* * We mark the rgctx parameter with the inreg attribute, which is mapped to * MONO_ARCH_RGCTX_REG in the Mono calling convention in llvm, i.e. * CC_X86_64_Mono in X86CallingConv.td. 
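* An illustrative callee signature with the extra argument: * define void @m(i8* inreg %rgctx, ...) * so the rgctx value travels in the fixed register the rest of the runtime expects instead of on the stack.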
*/ if (!ctx->llvm_only) mono_llvm_add_param_attr (ctx->rgctx_arg, LLVM_ATTR_IN_REG); LLVMSetValueName (ctx->rgctx_arg, "rgctx"); } else { ctx->rgctx_arg_pindex = -1; } if (cfg->vret_addr) { values [cfg->vret_addr->dreg] = LLVMGetParam (method, linfo->vret_arg_pindex); LLVMSetValueName (values [cfg->vret_addr->dreg], "vret"); if (linfo->ret.storage == LLVMArgVtypeByRef) { mono_llvm_add_param_attr (LLVMGetParam (method, linfo->vret_arg_pindex), LLVM_ATTR_STRUCT_RET); mono_llvm_add_param_attr (LLVMGetParam (method, linfo->vret_arg_pindex), LLVM_ATTR_NO_ALIAS); } } if (sig->hasthis) { ctx->this_arg_pindex = linfo->this_arg_pindex; ctx->this_arg = LLVMGetParam (method, linfo->this_arg_pindex); values [cfg->args [0]->dreg] = ctx->this_arg; LLVMSetValueName (values [cfg->args [0]->dreg], "this"); } if (linfo->dummy_arg) LLVMSetValueName (LLVMGetParam (method, linfo->dummy_arg_pindex), "dummy_arg"); names = g_new (char *, sig->param_count); mono_method_get_param_names (cfg->method, (const char **) names); /* Set parameter names/attributes */ for (i = 0; i < sig->param_count; ++i) { LLVMArgInfo *ainfo = &linfo->args [i + sig->hasthis]; char *name; int pindex = ainfo->pindex + ainfo->ndummy_fpargs; int j; for (j = 0; j < ainfo->ndummy_fpargs; ++j) { name = g_strdup_printf ("dummy_%d_%d", i, j); LLVMSetValueName (LLVMGetParam (method, ainfo->pindex + j), name); g_free (name); } if (ainfo->storage == LLVMArgVtypeInReg && ainfo->pair_storage [0] == LLVMArgNone && ainfo->pair_storage [1] == LLVMArgNone) continue; values [cfg->args [i + sig->hasthis]->dreg] = LLVMGetParam (method, pindex); if (ainfo->storage == LLVMArgGsharedvtFixed || ainfo->storage == LLVMArgGsharedvtFixedVtype) { if (names [i] && names [i][0] != '\0') name = g_strdup_printf ("p_arg_%s", names [i]); else name = g_strdup_printf ("p_arg_%d", i); } else { if (names [i] && names [i][0] != '\0') name = g_strdup_printf ("arg_%s", names [i]); else name = g_strdup_printf ("arg_%d", i); } LLVMSetValueName (LLVMGetParam (method, pindex), name); g_free (name); if (ainfo->storage == LLVMArgVtypeByVal) mono_llvm_add_param_attr (LLVMGetParam (method, pindex), LLVM_ATTR_BY_VAL); if (ainfo->storage == LLVMArgVtypeByRef || ainfo->storage == LLVMArgVtypeAddr) { /* For OP_LDADDR */ cfg->args [i + sig->hasthis]->opcode = OP_VTARG_ADDR; } #ifdef TARGET_WASM if (ainfo->storage == LLVMArgVtypeByRef) { /* This causes llvm to make a copy of the value which is what we need */ mono_llvm_add_param_byval_attr (LLVMGetParam (method, pindex), LLVMGetElementType (LLVMTypeOf (LLVMGetParam (method, pindex)))); } #endif } g_free (names); if (ctx->module->emit_dwarf && cfg->compile_aot && mono_debug_enabled ()) { ctx->minfo = mono_debug_lookup_method (cfg->method); ctx->dbg_md = emit_dbg_subprogram (ctx, cfg, method, ctx->method_name); } max_block_num = 0; for (bb = cfg->bb_entry; bb; bb = bb->next_bb) max_block_num = MAX (max_block_num, bb->block_num); ctx->bblocks = bblocks = g_new0 (BBInfo, max_block_num + 1); /* Add branches between non-consecutive bblocks */ for (bb = cfg->bb_entry; bb; bb = bb->next_bb) { if (bb->last_ins && MONO_IS_COND_BRANCH_OP (bb->last_ins) && bb->next_bb != bb->last_ins->inst_false_bb) { MonoInst *inst = (MonoInst*)mono_mempool_alloc0 (cfg->mempool, sizeof (MonoInst)); inst->opcode = OP_BR; inst->inst_target_bb = bb->last_ins->inst_false_bb; mono_bblock_add_inst (bb, inst); } } /* * Make a first pass over the code to precreate PHI nodes/set INDIRECT flags. 
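* (A PHI at a loop head can be consumed by instructions that are emitted before all of its * predecessor blocks have been processed, so the node has to exist up front; the incoming * values are only filled in after every bblock has been emitted.)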
*/ for (bb = cfg->bb_entry; bb; bb = bb->next_bb) { MonoInst *ins; LLVMBuilderRef builder; char *dname; char dname_buf[128]; builder = create_builder (ctx); for (ins = bb->code; ins; ins = ins->next) { switch (ins->opcode) { case OP_PHI: case OP_FPHI: case OP_VPHI: case OP_XPHI: { LLVMTypeRef phi_type = llvm_type_to_stack_type (cfg, type_to_llvm_type (ctx, m_class_get_byval_arg (ins->klass))); if (!ctx_ok (ctx)) return; if (cfg->interp_entry_only) break; if (ins->opcode == OP_VPHI) { /* Treat valuetype PHI nodes as operating on the address itself */ g_assert (ins->klass); phi_type = LLVMPointerType (type_to_llvm_type (ctx, m_class_get_byval_arg (ins->klass)), 0); } /* * Have to precreate these, as they can be referenced by * earlier instructions. */ sprintf (dname_buf, "t%d", ins->dreg); dname = dname_buf; values [ins->dreg] = LLVMBuildPhi (builder, phi_type, dname); if (ins->opcode == OP_VPHI) ctx->addresses [ins->dreg] = values [ins->dreg]; g_ptr_array_add (ctx->phi_values, values [ins->dreg]); /* * Set the expected type of the incoming arguments since these have * to have the same type. */ for (i = 0; i < ins->inst_phi_args [0]; i++) { int sreg1 = ins->inst_phi_args [i + 1]; if (sreg1 != -1) { if (ins->opcode == OP_VPHI) ctx->is_vphi [sreg1] = TRUE; ctx->vreg_types [sreg1] = phi_type; } } break; } case OP_LDADDR: ((MonoInst*)ins->inst_p0)->flags |= MONO_INST_INDIRECT; break; default: break; } } } /* * Create an ordering for bblocks, use the depth first order first, then * put the exception handling bblocks last. */ for (bb_index = 0; bb_index < cfg->num_bblocks; ++bb_index) { bb = cfg->bblocks [bb_index]; if (!(bb->region != -1 && !MONO_BBLOCK_IS_IN_REGION (bb, MONO_REGION_TRY))) { g_ptr_array_add (bblock_list, bb); bblocks [bb->block_num].added = TRUE; } } for (bb = cfg->bb_entry; bb; bb = bb->next_bb) { if (!bblocks [bb->block_num].added) g_ptr_array_add (bblock_list, bb); } /* * Second pass: generate code. */ // Emit entry point entry_builder = create_builder (ctx); entry_bb = get_bb (ctx, cfg->bb_entry); LLVMPositionBuilderAtEnd (entry_builder, entry_bb); emit_entry_bb (ctx, entry_builder); if (llvmonly_fail) /* * In llvmonly mode, we want to emit an llvm method for every method even if it fails to compile, * so direct calls can be made from outside the assembly. */ goto after_codegen_1; for (bb = cfg->bb_entry; bb; bb = bb->next_bb) { int clause_index; char name [128]; if (ctx->cfg->interp_entry_only || !(bb->region != -1 && (bb->flags & BB_EXCEPTION_HANDLER))) continue; if (ctx->cfg->deopt && MONO_REGION_FLAGS (bb->region) == MONO_EXCEPTION_CLAUSE_FILTER) continue; clause_index = MONO_REGION_CLAUSE_INDEX (bb->region); g_hash_table_insert (ctx->region_to_handler, GUINT_TO_POINTER (mono_get_block_region_notry (cfg, bb->region)), bb); g_hash_table_insert (ctx->clause_to_handler, GINT_TO_POINTER (clause_index), bb); /* * Create a new bblock which CALL_HANDLER/landing pads can branch to, because branching to the * LLVM bblock containing a landing pad causes problems for the * LLVM optimizer passes. 
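* OP_CALL_HANDLER branches to this block (named e.g. BB7_CALL_HANDLER_TARGET for bblock 7) * instead of to the landing-pad block itself, keeping the landing pad free of ordinary * incoming edges.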
*/ sprintf (name, "BB%d_CALL_HANDLER_TARGET", bb->block_num); ctx->bblocks [bb->block_num].call_handler_target_bb = LLVMAppendBasicBlock (ctx->lmethod, name); } // Make landing pads first ctx->exc_meta = g_hash_table_new_full (NULL, NULL, NULL, NULL); if (ctx->llvm_only && !ctx->cfg->interp_entry_only) { size_t group_index = 0; while (group_index < cfg->header->num_clauses) { if (cfg->clause_is_dead [group_index]) { group_index ++; continue; } int count = 0; size_t cursor = group_index; while (cursor < cfg->header->num_clauses && CLAUSE_START (&cfg->header->clauses [cursor]) == CLAUSE_START (&cfg->header->clauses [group_index]) && CLAUSE_END (&cfg->header->clauses [cursor]) == CLAUSE_END (&cfg->header->clauses [group_index])) { count++; cursor++; } LLVMBasicBlockRef lpad_bb = emit_landing_pad (ctx, group_index, count); intptr_t key = CLAUSE_END (&cfg->header->clauses [group_index]); g_hash_table_insert (ctx->exc_meta, (gpointer)key, lpad_bb); group_index = cursor; } } for (bb_index = 0; bb_index < bblock_list->len; ++bb_index) { bb = (MonoBasicBlock*)g_ptr_array_index (bblock_list, bb_index); // Prune unreachable mono BBs. if (!(bb == cfg->bb_entry || bb->in_count > 0)) continue; process_bb (ctx, bb); if (!ctx_ok (ctx)) return; } g_hash_table_destroy (ctx->exc_meta); mono_memory_barrier (); /* Add incoming phi values */ for (bb = cfg->bb_entry; bb; bb = bb->next_bb) { GSList *l, *ins_list; ins_list = bblocks [bb->block_num].phi_nodes; for (l = ins_list; l; l = l->next) { PhiNode *node = (PhiNode*)l->data; MonoInst *phi = node->phi; int sreg1 = node->sreg; LLVMBasicBlockRef in_bb; if (sreg1 == -1) continue; in_bb = get_end_bb (ctx, node->in_bb); if (ctx->unreachable [node->in_bb->block_num]) continue; if (phi->opcode == OP_VPHI) { g_assert (LLVMTypeOf (ctx->addresses [sreg1]) == LLVMTypeOf (values [phi->dreg])); LLVMAddIncoming (values [phi->dreg], &ctx->addresses [sreg1], &in_bb, 1); } else { if (!values [sreg1]) { /* Can happen with values in EH clauses */ set_failure (ctx, "incoming phi sreg1"); return; } if (LLVMTypeOf (values [sreg1]) != LLVMTypeOf (values [phi->dreg])) { set_failure (ctx, "incoming phi arg type mismatch"); return; } g_assert (LLVMTypeOf (values [sreg1]) == LLVMTypeOf (values [phi->dreg])); LLVMAddIncoming (values [phi->dreg], &values [sreg1], &in_bb, 1); } } } /* Nullify empty phi instructions */ for (bb = cfg->bb_entry; bb; bb = bb->next_bb) { GSList *l, *ins_list; ins_list = bblocks [bb->block_num].phi_nodes; for (l = ins_list; l; l = l->next) { PhiNode *node = (PhiNode*)l->data; MonoInst *phi = node->phi; LLVMValueRef phi_ins = values [phi->dreg]; if (!phi_ins) /* Already removed */ continue; if (LLVMCountIncoming (phi_ins) == 0) { mono_llvm_replace_uses_of (phi_ins, LLVMConstNull (LLVMTypeOf (phi_ins))); LLVMInstructionEraseFromParent (phi_ins); values [phi->dreg] = NULL; } } } /* Create the SWITCH statements for ENDFINALLY instructions */ for (bb = cfg->bb_entry; bb; bb = bb->next_bb) { BBInfo *info = &bblocks [bb->block_num]; GSList *l; for (l = info->endfinally_switch_ins_list; l; l = l->next) { LLVMValueRef switch_ins = (LLVMValueRef)l->data; GSList *bb_list = info->call_handler_return_bbs; GSList *bb_list_iter; i = 0; for (bb_list_iter = bb_list; bb_list_iter; bb_list_iter = g_slist_next (bb_list_iter)) { LLVMAddCase (switch_ins, LLVMConstInt (LLVMInt32Type (), i + 1, FALSE), (LLVMBasicBlockRef)bb_list_iter->data); i ++; } } } ctx->module->max_method_idx = MAX (ctx->module->max_method_idx, cfg->method_index); after_codegen_1: if (llvmonly_fail) { /* * FIXME: 
Maybe fallback to interpreter */ static LLVMTypeRef sig; ctx->builder = create_builder (ctx); LLVMPositionBuilderAtEnd (ctx->builder, ctx->inited_bb); char *name = mono_method_get_full_name (cfg->method); int len = strlen (name); LLVMTypeRef type = LLVMArrayType (LLVMInt8Type (), len + 1); LLVMValueRef name_var = LLVMAddGlobal (ctx->lmodule, type, "missing_method_name"); LLVMSetVisibility (name_var, LLVMHiddenVisibility); LLVMSetLinkage (name_var, LLVMInternalLinkage); LLVMSetInitializer (name_var, mono_llvm_create_constant_data_array ((guint8*)name, len + 1)); mono_llvm_set_is_constant (name_var); g_free (name); if (!sig) sig = LLVMFunctionType1 (LLVMVoidType (), ctx->module->ptr_type, FALSE); LLVMValueRef callee = get_callee (ctx, sig, MONO_PATCH_INFO_JIT_ICALL_ADDR, GUINT_TO_POINTER (MONO_JIT_ICALL_mini_llvmonly_throw_aot_failed_exception)); LLVMValueRef args [] = { convert (ctx, name_var, ctx->module->ptr_type) }; LLVMBuildCall (ctx->builder, callee, args, 1, ""); LLVMBuildUnreachable (ctx->builder); } /* Initialize the method if needed */ if (cfg->compile_aot) { // FIXME: Add more shared got entries ctx->builder = create_builder (ctx); LLVMPositionBuilderAtEnd (ctx->builder, ctx->init_bb); // FIXME: beforefieldinit /* * NATIVE_TO_MANAGED methods might be called on a thread not attached to the runtime, so they are initialized when loaded * in load_method (). */ gboolean needs_init = ctx->cfg->got_access_count > 0; MonoMethod *cctor = NULL; if (!needs_init && (cctor = mono_class_get_cctor (cfg->method->klass))) { /* Needs init to run the cctor */ if (cfg->method->flags & METHOD_ATTRIBUTE_STATIC) needs_init = TRUE; if (cctor == cfg->method) needs_init = FALSE; // If we are a constructor, we need to init so the static // constructor gets called. if (!strcmp (cfg->method->name, ".ctor")) needs_init = TRUE; } if (cfg->method->wrapper_type == MONO_WRAPPER_NATIVE_TO_MANAGED) needs_init = FALSE; if (needs_init) emit_method_init (ctx); else LLVMBuildBr (ctx->builder, ctx->inited_bb); // Was observing LLVM moving field accesses into the caller's method // body before the init call (the inlined one), leading to NULL derefs // after the init_method returns (GOT is filled out though) if (needs_init) mono_llvm_add_func_attr (method, LLVM_ATTR_NO_INLINE); } if (mini_get_debug_options ()->llvm_disable_inlining) mono_llvm_add_func_attr (method, LLVM_ATTR_NO_INLINE); after_codegen: if (cfg->compile_aot) g_ptr_array_add (ctx->module->cfgs, cfg); if (cfg->llvm_only) { /* * Add the contents of ctx->callsite_list to module->callsite_list. * We can't do this earlier, as it contains llvm instructions which can be * freed if compilation fails. * FIXME: Get rid of this when all methods can be llvm compiled. 
*/ for (int i = 0; i < ctx->callsite_list->len; ++i) g_ptr_array_add (ctx->module->callsite_list, g_ptr_array_index (ctx->callsite_list, i)); } if (cfg->verbose_level > 1) { g_print ("\n*** Unoptimized LLVM IR for %s ***\n", mono_method_full_name (cfg->method, TRUE)); if (cfg->compile_aot) { mono_llvm_dump_value (method); } else { mono_llvm_dump_module (ctx->lmodule); } g_print ("***\n\n"); } if (cfg->compile_aot && !cfg->llvm_only) mark_as_used (ctx->module, method); if (!cfg->llvm_only) { LLVMValueRef md_args [16]; LLVMValueRef md_node; int method_index; if (cfg->compile_aot) method_index = mono_aot_get_method_index (cfg->orig_method); else method_index = 1; md_args [0] = LLVMMDString (ctx->method_name, strlen (ctx->method_name)); md_args [1] = LLVMConstInt (LLVMInt32Type (), method_index, FALSE); md_node = LLVMMDNode (md_args, 2); LLVMAddNamedMetadataOperand (lmodule, "mono.function_indexes", md_node); //LLVMSetMetadata (method, md_kind, LLVMMDNode (&md_arg, 1)); } if (cfg->compile_aot) { /* Don't generate native code, keep the LLVM IR */ if (cfg->verbose_level) { char *name = mono_method_get_full_name (cfg->method); printf ("%s emitted as %s\n", name, ctx->method_name); g_free (name); } #if 0 int err = LLVMVerifyFunction (ctx->lmethod, LLVMPrintMessageAction); if (err != 0) LLVMDumpValue (ctx->lmethod); g_assert (err == 0); #endif } else { //LLVMVerifyFunction (method, 0); llvm_jit_finalize_method (ctx); } if (ctx->module->method_to_lmethod) g_hash_table_insert (ctx->module->method_to_lmethod, cfg->method, ctx->lmethod); if (ctx->module->idx_to_lmethod) g_hash_table_insert (ctx->module->idx_to_lmethod, GINT_TO_POINTER (cfg->method_index), ctx->lmethod); if (ctx->llvm_only && m_class_is_valuetype (cfg->orig_method->klass) && !(cfg->orig_method->flags & METHOD_ATTRIBUTE_STATIC)) emit_unbox_tramp (ctx, ctx->method_name, ctx->method_type, ctx->lmethod, cfg->method_index); } /* * mono_llvm_create_vars: * * Same as mono_arch_create_vars () for LLVM. */ void mono_llvm_create_vars (MonoCompile *cfg) { MonoMethodSignature *sig; sig = mono_method_signature_internal (cfg->method); if (cfg->gsharedvt && cfg->llvm_only) { gboolean vretaddr = FALSE; if (mini_is_gsharedvt_variable_signature (sig) && sig->ret->type != MONO_TYPE_VOID) { vretaddr = TRUE; } else { MonoMethodSignature *sig = mono_method_signature_internal (cfg->method); LLVMCallInfo *linfo; linfo = get_llvm_call_info (cfg, sig); vretaddr = (linfo->ret.storage == LLVMArgVtypeRetAddr || linfo->ret.storage == LLVMArgVtypeByRef || linfo->ret.storage == LLVMArgGsharedvtFixed || linfo->ret.storage == LLVMArgGsharedvtVariable || linfo->ret.storage == LLVMArgGsharedvtFixedVtype); } if (vretaddr) { /* * Creating vret_addr forces CEE_SETRET to store the result into it, * so we don't have to generate any code in our OP_SETRET case. */ cfg->vret_addr = mono_compile_create_var (cfg, m_class_get_byval_arg (mono_get_intptr_class ()), OP_ARG); if (G_UNLIKELY (cfg->verbose_level > 1)) { printf ("vret_addr = "); mono_print_ins (cfg->vret_addr); } } } else { mono_arch_create_vars (cfg); } cfg->lmf_ir = TRUE; } /* * mono_llvm_emit_call: * * Same as mono_arch_emit_call () for LLVM. 
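* Arguments are only recorded here: scalars become plain moves (OP_MOVE/OP_FMOVE/OP_LMOVE/OP_RMOVE * into a fresh vreg), while vtype-like storages are wrapped in OP_LLVM_OUTARG_VT and lowered * when the call itself is emitted.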
*/ void mono_llvm_emit_call (MonoCompile *cfg, MonoCallInst *call) { MonoInst *in; MonoMethodSignature *sig; int i, n; LLVMArgInfo *ainfo; sig = call->signature; n = sig->param_count + sig->hasthis; if (sig->call_convention == MONO_CALL_VARARG) { cfg->exception_message = g_strdup ("varargs"); cfg->disable_llvm = TRUE; return; } call->cinfo = get_llvm_call_info (cfg, sig); if (cfg->disable_llvm) return; for (i = 0; i < n; ++i) { MonoInst *ins; ainfo = call->cinfo->args + i; in = call->args [i]; /* Simply remember the arguments */ switch (ainfo->storage) { case LLVMArgNormal: { MonoType *t = (sig->hasthis && i == 0) ? m_class_get_byval_arg (mono_get_intptr_class ()) : ainfo->type; int opcode; opcode = mono_type_to_regmove (cfg, t); if (opcode == OP_FMOVE) { MONO_INST_NEW (cfg, ins, OP_FMOVE); ins->dreg = mono_alloc_freg (cfg); } else if (opcode == OP_LMOVE) { MONO_INST_NEW (cfg, ins, OP_LMOVE); ins->dreg = mono_alloc_lreg (cfg); } else if (opcode == OP_RMOVE) { MONO_INST_NEW (cfg, ins, OP_RMOVE); ins->dreg = mono_alloc_freg (cfg); } else { MONO_INST_NEW (cfg, ins, OP_MOVE); ins->dreg = mono_alloc_ireg (cfg); } ins->sreg1 = in->dreg; break; } case LLVMArgVtypeByVal: case LLVMArgVtypeByRef: case LLVMArgVtypeInReg: case LLVMArgVtypeAddr: case LLVMArgVtypeAsScalar: case LLVMArgAsIArgs: case LLVMArgAsFpArgs: case LLVMArgGsharedvtVariable: case LLVMArgGsharedvtFixed: case LLVMArgGsharedvtFixedVtype: case LLVMArgWasmVtypeAsScalar: MONO_INST_NEW (cfg, ins, OP_LLVM_OUTARG_VT); ins->dreg = mono_alloc_ireg (cfg); ins->sreg1 = in->dreg; ins->inst_p0 = mono_mempool_alloc0 (cfg->mempool, sizeof (LLVMArgInfo)); memcpy (ins->inst_p0, ainfo, sizeof (LLVMArgInfo)); ins->inst_vtype = ainfo->type; ins->klass = mono_class_from_mono_type_internal (ainfo->type); break; default: cfg->exception_message = g_strdup ("ainfo->storage"); cfg->disable_llvm = TRUE; return; } if (!cfg->disable_llvm) { MONO_ADD_INS (cfg->cbb, ins); mono_call_inst_add_outarg_reg (cfg, call, ins->dreg, 0, FALSE); } } } static inline void add_func (LLVMModuleRef module, const char *name, LLVMTypeRef ret_type, LLVMTypeRef *param_types, int nparams) { LLVMAddFunction (module, name, LLVMFunctionType (ret_type, param_types, nparams, FALSE)); } static LLVMValueRef add_intrins (LLVMModuleRef module, IntrinsicId id, LLVMTypeRef *params, int nparams) { return mono_llvm_register_overloaded_intrinsic (module, id, params, nparams); } static LLVMValueRef add_intrins1 (LLVMModuleRef module, IntrinsicId id, LLVMTypeRef param1) { return mono_llvm_register_overloaded_intrinsic (module, id, &param1, 1); } static LLVMValueRef add_intrins2 (LLVMModuleRef module, IntrinsicId id, LLVMTypeRef param1, LLVMTypeRef param2) { LLVMTypeRef params [] = { param1, param2 }; return mono_llvm_register_overloaded_intrinsic (module, id, params, 2); } static LLVMValueRef add_intrins3 (LLVMModuleRef module, IntrinsicId id, LLVMTypeRef param1, LLVMTypeRef param2, LLVMTypeRef param3) { LLVMTypeRef params [] = { param1, param2, param3 }; return mono_llvm_register_overloaded_intrinsic (module, id, params, 3); } static void add_intrinsic (LLVMModuleRef module, int id) { /* Register simple intrinsics */ LLVMValueRef intrins = mono_llvm_register_intrinsic (module, (IntrinsicId)id); if (intrins) { g_hash_table_insert (intrins_id_to_intrins, GINT_TO_POINTER (id), intrins); return; } if (intrin_arm64_ovr [id] != 0) { llvm_ovr_tag_t spec = intrin_arm64_ovr [id]; for (int vw = 0; vw < INTRIN_vectorwidths; ++vw) { for (int ew = 0; ew < INTRIN_elementwidths; ++ew) { llvm_ovr_tag_t vec_bit = 
INTRIN_vector128 >> ((INTRIN_vectorwidths - 1) - vw); llvm_ovr_tag_t elem_bit = INTRIN_int8 << ew; llvm_ovr_tag_t test = vec_bit | elem_bit; if ((spec & test) == test) { uint8_t kind = intrin_kind [id]; LLVMTypeRef distinguishing_type = intrin_types [vw][ew]; if (kind == INTRIN_kind_ftoi && (elem_bit & (INTRIN_int32 | INTRIN_int64))) { /* * @llvm.aarch64.neon.fcvtas.v4i32.v4f32 * @llvm.aarch64.neon.fcvtas.v2i64.v2f64 */ intrins = add_intrins2 (module, id, distinguishing_type, intrin_types [vw][ew + 2]); } else if (kind == INTRIN_kind_widen) { /* * @llvm.aarch64.neon.saddlp.v2i64.v4i32 * @llvm.aarch64.neon.saddlp.v4i16.v8i8 */ intrins = add_intrins2 (module, id, distinguishing_type, intrin_types [vw][ew - 1]); } else if (kind == INTRIN_kind_widen_across) { /* * @llvm.aarch64.neon.saddlv.i64.v4i32 * @llvm.aarch64.neon.saddlv.i32.v8i16 * @llvm.aarch64.neon.saddlv.i32.v16i8 * i8/i16 return types for NEON intrinsics will make isel fail as of LLVM 9. */ int associated_prim = MAX(ew + 1, 2); LLVMTypeRef associated_scalar_type = intrin_types [0][associated_prim]; intrins = add_intrins2 (module, id, associated_scalar_type, distinguishing_type); } else if (kind == INTRIN_kind_across) { /* * @llvm.aarch64.neon.uaddv.i64.v4i64 * @llvm.aarch64.neon.uaddv.i32.v4i32 * @llvm.aarch64.neon.uaddv.i32.v8i16 * @llvm.aarch64.neon.uaddv.i32.v16i8 * i8/i16 return types for NEON intrinsics will make isel fail as of LLVM 9. */ int associated_prim = MAX(ew, 2); LLVMTypeRef associated_scalar_type = intrin_types [0][associated_prim]; intrins = add_intrins2 (module, id, associated_scalar_type, distinguishing_type); } else if (kind == INTRIN_kind_arm64_dot_prod) { /* * @llvm.aarch64.neon.sdot.v2i32.v8i8 * @llvm.aarch64.neon.sdot.v4i32.v16i8 */ LLVMTypeRef associated_type = intrin_types [vw][0]; intrins = add_intrins2 (module, id, distinguishing_type, associated_type); } else intrins = add_intrins1 (module, id, distinguishing_type); int key = key_from_id_and_tag (id, test); g_hash_table_insert (intrins_id_to_intrins, GINT_TO_POINTER (key), intrins); } } } return; } /* Register overloaded intrinsics */ switch (id) { #define INTRINS(intrin_name, llvm_id, arch) #define INTRINS_OVR(intrin_name, llvm_id, arch, llvm_type) case INTRINS_ ## intrin_name: intrins = add_intrins1(module, id, llvm_type); break; #define INTRINS_OVR_2_ARG(intrin_name, llvm_id, arch, llvm_type1, llvm_type2) case INTRINS_ ## intrin_name: intrins = add_intrins2(module, id, llvm_type1, llvm_type2); break; #define INTRINS_OVR_3_ARG(intrin_name, llvm_id, arch, llvm_type1, llvm_type2, llvm_type3) case INTRINS_ ## intrin_name: intrins = add_intrins3(module, id, llvm_type1, llvm_type2, llvm_type3); break; #define INTRINS_OVR_TAG(...) #define INTRINS_OVR_TAG_KIND(...) #include "llvm-intrinsics.h" default: g_assert_not_reached (); break; } g_assert (intrins); g_hash_table_insert (intrins_id_to_intrins, GINT_TO_POINTER (id), intrins); } static LLVMValueRef get_intrins_from_module (LLVMModuleRef lmodule, int id) { LLVMValueRef res; res = (LLVMValueRef)g_hash_table_lookup (intrins_id_to_intrins, GINT_TO_POINTER (id)); g_assert (res); return res; } static LLVMValueRef get_intrins (EmitContext *ctx, int id) { return get_intrins_from_module (ctx->lmodule, id); } static void add_intrinsics (LLVMModuleRef module) { int i; /* Emit declarations of intrinsics */ /* * It would be nicer to emit only the intrinsics actually used, but LLVM's Module * type doesn't seem to do any locking.
*/ for (i = 0; i < INTRINS_NUM; ++i) add_intrinsic (module, i); /* EH intrinsics */ add_func (module, "mono_personality", LLVMVoidType (), NULL, 0); add_func (module, "llvm_resume_unwind_trampoline", LLVMVoidType (), NULL, 0); } static void add_types (MonoLLVMModule *module) { module->ptr_type = LLVMPointerType (TARGET_SIZEOF_VOID_P == 8 ? LLVMInt64Type () : LLVMInt32Type (), 0); } void mono_llvm_init (gboolean enable_jit) { intrin_types [0][0] = i1_t = LLVMInt8Type (); intrin_types [0][1] = i2_t = LLVMInt16Type (); intrin_types [0][2] = i4_t = LLVMInt32Type (); intrin_types [0][3] = i8_t = LLVMInt64Type (); intrin_types [0][4] = r4_t = LLVMFloatType (); intrin_types [0][5] = r8_t = LLVMDoubleType (); intrin_types [1][0] = v64_i1_t = LLVMVectorType (LLVMInt8Type (), 8); intrin_types [1][1] = v64_i2_t = LLVMVectorType (LLVMInt16Type (), 4); intrin_types [1][2] = v64_i4_t = LLVMVectorType (LLVMInt32Type (), 2); intrin_types [1][3] = v64_i8_t = LLVMVectorType (LLVMInt64Type (), 1); intrin_types [1][4] = v64_r4_t = LLVMVectorType (LLVMFloatType (), 2); intrin_types [1][5] = v64_r8_t = LLVMVectorType (LLVMDoubleType (), 1); intrin_types [2][0] = v128_i1_t = sse_i1_t = type_to_sse_type (MONO_TYPE_I1); intrin_types [2][1] = v128_i2_t = sse_i2_t = type_to_sse_type (MONO_TYPE_I2); intrin_types [2][2] = v128_i4_t = sse_i4_t = type_to_sse_type (MONO_TYPE_I4); intrin_types [2][3] = v128_i8_t = sse_i8_t = type_to_sse_type (MONO_TYPE_I8); intrin_types [2][4] = v128_r4_t = sse_r4_t = type_to_sse_type (MONO_TYPE_R4); intrin_types [2][5] = v128_r8_t = sse_r8_t = type_to_sse_type (MONO_TYPE_R8); intrins_id_to_intrins = g_hash_table_new (NULL, NULL); void_func_t = LLVMFunctionType0 (LLVMVoidType (), FALSE); if (enable_jit) mono_llvm_jit_init (); } void mono_llvm_free_mem_manager (MonoJitMemoryManager *mem_manager) { MonoLLVMModule *module = (MonoLLVMModule*)mem_manager->llvm_module; int i; if (!module) return; g_hash_table_destroy (module->llvm_types); mono_llvm_dispose_ee (module->mono_ee); if (module->bb_names) { for (i = 0; i < module->bb_names_len; ++i) g_free (module->bb_names [i]); g_free (module->bb_names); } //LLVMDisposeModule (module->module); g_free (module); mem_manager->llvm_module = NULL; } void mono_llvm_create_aot_module (MonoAssembly *assembly, const char *global_prefix, int initial_got_size, LLVMModuleFlags flags) { MonoLLVMModule *module = &aot_module; gboolean emit_dwarf = (flags & LLVM_MODULE_FLAG_DWARF) ? 1 : 0; #ifdef TARGET_WIN32_MSVC gboolean emit_codeview = (flags & LLVM_MODULE_FLAG_CODEVIEW) ? 1 : 0; #endif gboolean static_link = (flags & LLVM_MODULE_FLAG_STATIC) ? 1 : 0; gboolean llvm_only = (flags & LLVM_MODULE_FLAG_LLVM_ONLY) ? 1 : 0; gboolean interp = (flags & LLVM_MODULE_FLAG_INTERP) ? 
1 : 0; /* Delete previous module */ g_hash_table_destroy (module->plt_entries); if (module->lmodule) LLVMDisposeModule (module->lmodule); memset (module, 0, sizeof (aot_module)); module->lmodule = LLVMModuleCreateWithName ("aot"); module->assembly = assembly; module->global_prefix = g_strdup (global_prefix); module->eh_frame_symbol = g_strdup_printf ("%s_eh_frame", global_prefix); module->get_method_symbol = g_strdup_printf ("%s_get_method", global_prefix); module->get_unbox_tramp_symbol = g_strdup_printf ("%s_get_unbox_tramp", global_prefix); module->init_aotconst_symbol = g_strdup_printf ("%s_init_aotconst", global_prefix); module->external_symbols = TRUE; module->emit_dwarf = emit_dwarf; module->static_link = static_link; module->llvm_only = llvm_only; module->interp = interp; /* The first few entries are reserved */ module->max_got_offset = initial_got_size; module->context = LLVMGetGlobalContext (); module->cfgs = g_ptr_array_new (); module->aotconst_vars = g_hash_table_new (NULL, NULL); module->llvm_types = g_hash_table_new (NULL, NULL); module->plt_entries = g_hash_table_new (g_str_hash, g_str_equal); module->plt_entries_ji = g_hash_table_new (NULL, NULL); module->direct_callables = g_hash_table_new (g_str_hash, g_str_equal); module->idx_to_lmethod = g_hash_table_new (NULL, NULL); module->method_to_lmethod = g_hash_table_new (NULL, NULL); module->method_to_call_info = g_hash_table_new (NULL, NULL); module->idx_to_unbox_tramp = g_hash_table_new (NULL, NULL); module->no_method_table_lmethods = g_hash_table_new (NULL, NULL); module->callsite_list = g_ptr_array_new (); if (llvm_only) /* clang ignores our debug info because it has an invalid version */ module->emit_dwarf = FALSE; add_intrinsics (module->lmodule); add_types (module); #ifdef MONO_ARCH_LLVM_TARGET_LAYOUT LLVMSetDataLayout (module->lmodule, MONO_ARCH_LLVM_TARGET_LAYOUT); #else g_assert_not_reached (); #endif #ifdef MONO_ARCH_LLVM_TARGET_TRIPLE LLVMSetTarget (module->lmodule, MONO_ARCH_LLVM_TARGET_TRIPLE); #endif if (module->emit_dwarf) { char *dir, *build_info, *s, *cu_name; module->di_builder = mono_llvm_create_di_builder (module->lmodule); // FIXME: dir = g_strdup ("."); build_info = mono_get_runtime_build_info (); s = g_strdup_printf ("Mono AOT Compiler %s (LLVM)", build_info); cu_name = g_path_get_basename (assembly->image->name); module->cu = mono_llvm_di_create_compile_unit (module->di_builder, cu_name, dir, s); g_free (dir); g_free (build_info); g_free (s); } #ifdef TARGET_WIN32_MSVC if (emit_codeview) { LLVMValueRef codeview_option_args[3]; codeview_option_args[0] = LLVMConstInt (LLVMInt32Type (), 2, FALSE); codeview_option_args[1] = LLVMMDString ("CodeView", 8); codeview_option_args[2] = LLVMConstInt (LLVMInt32Type (), 1, FALSE); LLVMAddNamedMetadataOperand (module->lmodule, "llvm.module.flags", LLVMMDNode (codeview_option_args, G_N_ELEMENTS (codeview_option_args))); } if (!static_link) { const char linker_options[] = "Linker Options"; const char *default_dynamic_lib_names[] = { "/DEFAULTLIB:msvcrt", "/DEFAULTLIB:ucrt.lib", "/DEFAULTLIB:vcruntime.lib" }; LLVMValueRef default_lib_args[G_N_ELEMENTS (default_dynamic_lib_names)]; LLVMValueRef default_lib_nodes[G_N_ELEMENTS(default_dynamic_lib_names)]; const char *default_lib_name = NULL; for (int i = 0; i < G_N_ELEMENTS (default_dynamic_lib_names); ++i) { const char *default_lib_name = default_dynamic_lib_names[i]; default_lib_args[i] = LLVMMDString (default_lib_name, strlen (default_lib_name)); default_lib_nodes[i] = LLVMMDNode (default_lib_args + i, 1); } 
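/* A sketch of the emitted metadata (exact node layout aside): !llvm.linker.options = !{!"/DEFAULTLIB:msvcrt", ...}; the MSVC toolchain turns these strings into linker directives in the object file. */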
LLVMAddNamedMetadataOperand (module->lmodule, "llvm.linker.options", LLVMMDNode (default_lib_args, G_N_ELEMENTS (default_lib_args))); } #endif { LLVMTypeRef got_type = LLVMArrayType (module->ptr_type, 16); module->dummy_got_var = LLVMAddGlobal (module->lmodule, got_type, "dummy_got"); module->got_idx_to_type = g_hash_table_new (NULL, NULL); LLVMSetInitializer (module->dummy_got_var, LLVMConstNull (got_type)); LLVMSetVisibility (module->dummy_got_var, LLVMHiddenVisibility); LLVMSetLinkage (module->dummy_got_var, LLVMInternalLinkage); } /* Add initialization array */ LLVMTypeRef inited_type = LLVMArrayType (LLVMInt8Type (), 0); module->inited_var = LLVMAddGlobal (aot_module.lmodule, inited_type, "mono_inited_tmp"); LLVMSetInitializer (module->inited_var, LLVMConstNull (inited_type)); create_aot_info_var (module); emit_gc_safepoint_poll (module, module->lmodule, NULL); emit_llvm_code_start (module); // Needs idx_to_lmethod emit_init_funcs (module); /* Add a dummy personality function */ if (!use_mono_personality_debug) { LLVMValueRef personality = LLVMAddFunction (module->lmodule, default_personality_name, LLVMFunctionType (LLVMInt32Type (), NULL, 0, TRUE)); LLVMSetLinkage (personality, LLVMExternalLinkage); //EMCC chokes if the personality function is referenced in the 'used' array #ifndef TARGET_WASM mark_as_used (module, personality); #endif } /* Add a reference to the c++ exception we throw/catch */ { LLVMTypeRef exc = LLVMPointerType (LLVMInt8Type (), 0); module->sentinel_exception = LLVMAddGlobal (module->lmodule, exc, "_ZTIPi"); LLVMSetLinkage (module->sentinel_exception, LLVMExternalLinkage); mono_llvm_set_is_constant (module->sentinel_exception); } } void mono_llvm_fixup_aot_module (void) { MonoLLVMModule *module = &aot_module; MonoMethod *method; /* * Replace GOT entries for directly callable methods with the methods themselves. * It would be easier to implement this by predefining all methods before compiling * their bodies, but that couldn't handle the case when a method fails to compile * with llvm.
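* Concretely, a callsite emitted as an indirect call through a GOT load, roughly: * %m = load i8*, i8** <got slot> ; call %m (...) * has the loaded value replaced with the llvm function itself, turning it into a direct call * the optimizer can see through.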
*/ GHashTable *specializable = g_hash_table_new (NULL, NULL); GHashTable *patches_to_null = g_hash_table_new (mono_patch_info_hash, mono_patch_info_equal); for (int sindex = 0; sindex < module->callsite_list->len; ++sindex) { CallSite *site = (CallSite*)g_ptr_array_index (module->callsite_list, sindex); method = site->method; LLVMValueRef lmethod = (LLVMValueRef)g_hash_table_lookup (module->method_to_lmethod, method); LLVMValueRef placeholder = (LLVMValueRef)site->load; LLVMValueRef load; if (placeholder == NULL) /* Method failed LLVM compilation */ continue; gboolean can_direct_call = FALSE; /* Replace sharable instances with their shared version */ if (!lmethod && method->is_inflated) { if (mono_method_is_generic_sharable_full (method, FALSE, TRUE, FALSE)) { ERROR_DECL (error); MonoMethod *shared = mini_get_shared_method_full (method, SHARE_MODE_NONE, error); if (is_ok (error)) { lmethod = (LLVMValueRef)g_hash_table_lookup (module->method_to_lmethod, shared); if (lmethod) method = shared; } } } if (lmethod && !m_method_is_synchronized (method)) { can_direct_call = TRUE; } else if (m_method_is_wrapper (method) && !method->is_inflated) { WrapperInfo *info = mono_marshal_get_wrapper_info (method); /* This is a call from the synchronized wrapper to the real method */ if (info->subtype == WRAPPER_SUBTYPE_SYNCHRONIZED_INNER) { method = info->d.synchronized.method; lmethod = (LLVMValueRef)g_hash_table_lookup (module->method_to_lmethod, method); if (lmethod) can_direct_call = TRUE; } } if (can_direct_call) { mono_llvm_replace_uses_of (placeholder, lmethod); if (mono_aot_can_specialize (method)) g_hash_table_insert (specializable, lmethod, method); g_hash_table_insert (patches_to_null, site->ji, site->ji); } else { // FIXME: LLVMBuilderRef builder = LLVMCreateBuilder (); LLVMPositionBuilderBefore (builder, placeholder); load = get_aotconst_module (module, builder, site->ji->type, site->ji->data.target, site->type, NULL, NULL); LLVMReplaceAllUsesWith (placeholder, load); } g_free (site); } mono_llvm_propagate_nonnull_final (specializable, module); g_hash_table_destroy (specializable); for (int i = 0; i < module->cfgs->len; ++i) { /* * Nullify the patches pointing to direct calls. This is needed to * avoid allocating extra got slots, which is a perf problem and it * makes module->max_got_offset invalid. * It would be better to just store the patch_info in CallSite, but * cfg->patch_info is copied in aot-compiler.c. 
*/ MonoCompile *cfg = (MonoCompile *)g_ptr_array_index (module->cfgs, i); for (MonoJumpInfo *patch_info = cfg->patch_info; patch_info; patch_info = patch_info->next) { if (patch_info->type == MONO_PATCH_INFO_METHOD) { if (g_hash_table_lookup (patches_to_null, patch_info)) { patch_info->type = MONO_PATCH_INFO_NONE; /* Nullify the call to init_method () if possible */ g_assert (cfg->got_access_count); cfg->got_access_count --; if (cfg->got_access_count == 0) { LLVMValueRef br = (LLVMValueRef)cfg->llvmonly_init_cond; if (br) LLVMSetSuccessor (br, 0, LLVMGetSuccessor (br, 1)); } } } } } g_hash_table_destroy (patches_to_null); } static LLVMValueRef llvm_array_from_uints (LLVMTypeRef el_type, guint32 *values, int nvalues) { int i; LLVMValueRef res, *vals; vals = g_new0 (LLVMValueRef, nvalues); for (i = 0; i < nvalues; ++i) vals [i] = LLVMConstInt (LLVMInt32Type (), values [i], FALSE); res = LLVMConstArray (LLVMInt32Type (), vals, nvalues); g_free (vals); return res; } static LLVMValueRef llvm_array_from_bytes (guint8 *values, int nvalues) { int i; LLVMValueRef res, *vals; vals = g_new0 (LLVMValueRef, nvalues); for (i = 0; i < nvalues; ++i) vals [i] = LLVMConstInt (LLVMInt8Type (), values [i], FALSE); res = LLVMConstArray (LLVMInt8Type (), vals, nvalues); g_free (vals); return res; } /* * mono_llvm_emit_aot_file_info: * * Emit the MonoAotFileInfo structure. * Same as emit_aot_file_info () in aot-compiler.c. */ void mono_llvm_emit_aot_file_info (MonoAotFileInfo *info, gboolean has_jitted_code) { MonoLLVMModule *module = &aot_module; /* Save these for later */ memcpy (&module->aot_info, info, sizeof (MonoAotFileInfo)); module->has_jitted_code = has_jitted_code; } /* * mono_llvm_emit_aot_data: * * Emit the binary data DATA pointed to by symbol SYMBOL. * Return the LLVM variable for the data. 
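* The data becomes an internal, hidden, constant byte-array global, roughly: * @<symbol> = internal hidden constant [<data_len> x i8] c"...", align 8 * (align 8 being the default used by mono_llvm_emit_aot_data ()).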
*/ gpointer mono_llvm_emit_aot_data_aligned (const char *symbol, guint8 *data, int data_len, int align) { MonoLLVMModule *module = &aot_module; LLVMTypeRef type; LLVMValueRef d; type = LLVMArrayType (LLVMInt8Type (), data_len); d = LLVMAddGlobal (module->lmodule, type, symbol); LLVMSetVisibility (d, LLVMHiddenVisibility); LLVMSetLinkage (d, LLVMInternalLinkage); LLVMSetInitializer (d, mono_llvm_create_constant_data_array (data, data_len)); if (align != 1) LLVMSetAlignment (d, align); mono_llvm_set_is_constant (d); return d; } gpointer mono_llvm_emit_aot_data (const char *symbol, guint8 *data, int data_len) { return mono_llvm_emit_aot_data_aligned (symbol, data, data_len, 8); } /* Add a reference to a global defined in JITted code */ static LLVMValueRef AddJitGlobal (MonoLLVMModule *module, LLVMTypeRef type, const char *name) { char *s; LLVMValueRef v; s = g_strdup_printf ("%s%s", module->global_prefix, name); v = LLVMAddGlobal (module->lmodule, LLVMInt8Type (), s); LLVMSetVisibility (v, LLVMHiddenVisibility); g_free (s); return v; } #define FILE_INFO_NUM_HEADER_FIELDS 2 #define FILE_INFO_NUM_SCALAR_FIELDS 23 #define FILE_INFO_NUM_ARRAY_FIELDS 5 #define FILE_INFO_NUM_AOTID_FIELDS 1 #define FILE_INFO_NFIELDS (FILE_INFO_NUM_HEADER_FIELDS + MONO_AOT_FILE_INFO_NUM_SYMBOLS + FILE_INFO_NUM_SCALAR_FIELDS + FILE_INFO_NUM_ARRAY_FIELDS + FILE_INFO_NUM_AOTID_FIELDS) static void create_aot_info_var (MonoLLVMModule *module) { LLVMTypeRef file_info_type; LLVMTypeRef *eltypes; LLVMValueRef info_var; int i, nfields, tindex; LLVMModuleRef lmodule = module->lmodule; /* Create an LLVM type to represent MonoAotFileInfo */ nfields = FILE_INFO_NFIELDS; eltypes = g_new (LLVMTypeRef, nfields); tindex = 0; eltypes [tindex ++] = LLVMInt32Type (); eltypes [tindex ++] = LLVMInt32Type (); /* Symbols */ for (i = 0; i < MONO_AOT_FILE_INFO_NUM_SYMBOLS; ++i) eltypes [tindex ++] = LLVMPointerType (LLVMInt8Type (), 0); /* Scalars */ for (i = 0; i < FILE_INFO_NUM_SCALAR_FIELDS; ++i) eltypes [tindex ++] = LLVMInt32Type (); /* Arrays */ eltypes [tindex ++] = LLVMArrayType (LLVMInt32Type (), MONO_AOT_TABLE_NUM); for (i = 0; i < FILE_INFO_NUM_ARRAY_FIELDS - 1; ++i) eltypes [tindex ++] = LLVMArrayType (LLVMInt32Type (), MONO_AOT_TRAMP_NUM); eltypes [tindex ++] = LLVMArrayType (LLVMInt8Type (), 16); g_assert (tindex == nfields); file_info_type = LLVMStructCreateNamed (module->context, "MonoAotFileInfo"); LLVMStructSetBody (file_info_type, eltypes, nfields, FALSE); info_var = LLVMAddGlobal (lmodule, file_info_type, "mono_aot_file_info"); module->info_var = info_var; module->info_var_eltypes = eltypes; } static void emit_aot_file_info (MonoLLVMModule *module) { LLVMTypeRef *eltypes, eltype; LLVMValueRef info_var; LLVMValueRef *fields; int i, nfields, tindex; MonoAotFileInfo *info; LLVMModuleRef lmodule = module->lmodule; info = &module->aot_info; info_var = module->info_var; eltypes = module->info_var_eltypes; nfields = FILE_INFO_NFIELDS; if (module->static_link) { LLVMSetVisibility (info_var, LLVMHiddenVisibility); LLVMSetLinkage (info_var, LLVMInternalLinkage); } #ifdef TARGET_WIN32 if (!module->static_link) { LLVMSetDLLStorageClass (info_var, LLVMDLLExportStorageClass); } #endif fields = g_new (LLVMValueRef, nfields); tindex = 0; fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->version, FALSE); fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->dummy, FALSE); /* Symbols */ /* * We use LLVMGetNamedGlobal () for symbol which are defined in LLVM code, and LLVMAddGlobal () * for symbols defined in the .s file emitted 
by the aot compiler. */ eltype = eltypes [tindex]; if (module->llvm_only) fields [tindex ++] = LLVMConstNull (eltype); else fields [tindex ++] = AddJitGlobal (module, eltype, "jit_got"); /* llc defines this directly */ if (!module->llvm_only) { fields [tindex ++] = LLVMAddGlobal (lmodule, eltype, module->eh_frame_symbol); fields [tindex ++] = LLVMConstNull (eltype); fields [tindex ++] = LLVMConstNull (eltype); } else { fields [tindex ++] = LLVMConstNull (eltype); fields [tindex ++] = module->get_method; fields [tindex ++] = module->get_unbox_tramp ? module->get_unbox_tramp : LLVMConstNull (eltype); } fields [tindex ++] = module->init_aotconst_func; if (module->has_jitted_code) { fields [tindex ++] = AddJitGlobal (module, eltype, "jit_code_start"); fields [tindex ++] = AddJitGlobal (module, eltype, "jit_code_end"); } else { fields [tindex ++] = LLVMConstNull (eltype); fields [tindex ++] = LLVMConstNull (eltype); } if (!module->llvm_only) fields [tindex ++] = AddJitGlobal (module, eltype, "method_addresses"); else fields [tindex ++] = LLVMConstNull (eltype); if (module->llvm_only && module->unbox_tramp_indexes) { fields [tindex ++] = module->unbox_tramp_indexes; fields [tindex ++] = module->unbox_trampolines; } else { fields [tindex ++] = LLVMConstNull (eltype); fields [tindex ++] = LLVMConstNull (eltype); } if (info->flags & MONO_AOT_FILE_FLAG_SEPARATE_DATA) { for (i = 0; i < MONO_AOT_TABLE_NUM; ++i) fields [tindex ++] = LLVMConstNull (eltype); } else { fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "blob"); fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "class_name_table"); fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "class_info_offsets"); fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "method_info_offsets"); fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "ex_info_offsets"); fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "extra_method_info_offsets"); fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "extra_method_table"); fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "got_info_offsets"); fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "llvm_got_info_offsets"); fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "image_table"); fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "weak_field_indexes"); fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "method_flags_table"); } /* Not needed (mem_end) */ fields [tindex ++] = LLVMConstNull (eltype); fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "assembly_guid"); fields [tindex ++] = LLVMGetNamedGlobal (lmodule, "runtime_version"); if (info->trampoline_size [0]) { fields [tindex ++] = AddJitGlobal (module, eltype, "specific_trampolines"); fields [tindex ++] = AddJitGlobal (module, eltype, "static_rgctx_trampolines"); fields [tindex ++] = AddJitGlobal (module, eltype, "imt_trampolines"); fields [tindex ++] = AddJitGlobal (module, eltype, "gsharedvt_arg_trampolines"); fields [tindex ++] = AddJitGlobal (module, eltype, "ftnptr_arg_trampolines"); fields [tindex ++] = AddJitGlobal (module, eltype, "unbox_arbitrary_trampolines"); } else { fields [tindex ++] = LLVMConstNull (eltype); fields [tindex ++] = LLVMConstNull (eltype); fields [tindex ++] = LLVMConstNull (eltype); fields [tindex ++] = LLVMConstNull (eltype); fields [tindex ++] = LLVMConstNull (eltype); fields [tindex ++] = LLVMConstNull (eltype); } if (module->static_link && !module->llvm_only) fields [tindex ++] = AddJitGlobal (module, eltype, "globals"); else fields [tindex ++] = LLVMConstNull (eltype); fields [tindex ++] = LLVMGetNamedGlobal (lmodule, 
"assembly_name"); if (!module->llvm_only) { fields [tindex ++] = AddJitGlobal (module, eltype, "plt"); fields [tindex ++] = AddJitGlobal (module, eltype, "plt_end"); fields [tindex ++] = AddJitGlobal (module, eltype, "unwind_info"); fields [tindex ++] = AddJitGlobal (module, eltype, "unbox_trampolines"); fields [tindex ++] = AddJitGlobal (module, eltype, "unbox_trampolines_end"); fields [tindex ++] = AddJitGlobal (module, eltype, "unbox_trampoline_addresses"); } else { fields [tindex ++] = LLVMConstNull (eltype); fields [tindex ++] = LLVMConstNull (eltype); fields [tindex ++] = LLVMConstNull (eltype); fields [tindex ++] = LLVMConstNull (eltype); fields [tindex ++] = LLVMConstNull (eltype); fields [tindex ++] = LLVMConstNull (eltype); } for (i = 0; i < MONO_AOT_FILE_INFO_NUM_SYMBOLS; ++i) { g_assert (fields [FILE_INFO_NUM_HEADER_FIELDS + i]); fields [FILE_INFO_NUM_HEADER_FIELDS + i] = LLVMConstBitCast (fields [FILE_INFO_NUM_HEADER_FIELDS + i], eltype); } /* Scalars */ fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->plt_got_offset_base, FALSE); fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->plt_got_info_offset_base, FALSE); fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->got_size, FALSE); fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->llvm_got_size, FALSE); fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->plt_size, FALSE); fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->nmethods, FALSE); fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->nextra_methods, FALSE); fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->flags, FALSE); fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->opts, FALSE); fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->simd_opts, FALSE); fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->gc_name_index, FALSE); fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->num_rgctx_fetch_trampolines, FALSE); fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->double_align, FALSE); fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->long_align, FALSE); fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->generic_tramp_num, FALSE); fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->card_table_shift_bits, FALSE); fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->card_table_mask, FALSE); fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->tramp_page_size, FALSE); fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->call_table_entry_size, FALSE); fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->nshared_got_entries, FALSE); fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), info->datafile_size, FALSE); fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), module->unbox_tramp_num, FALSE); fields [tindex ++] = LLVMConstInt (LLVMInt32Type (), module->unbox_tramp_elemsize, FALSE); /* Arrays */ fields [tindex ++] = llvm_array_from_uints (LLVMInt32Type (), info->table_offsets, MONO_AOT_TABLE_NUM); fields [tindex ++] = llvm_array_from_uints (LLVMInt32Type (), info->num_trampolines, MONO_AOT_TRAMP_NUM); fields [tindex ++] = llvm_array_from_uints (LLVMInt32Type (), info->trampoline_got_offset_base, MONO_AOT_TRAMP_NUM); fields [tindex ++] = llvm_array_from_uints (LLVMInt32Type (), info->trampoline_size, MONO_AOT_TRAMP_NUM); fields [tindex ++] = llvm_array_from_uints (LLVMInt32Type (), info->tramp_page_code_offsets, MONO_AOT_TRAMP_NUM); fields [tindex ++] = llvm_array_from_bytes (info->aotid, 16); g_assert (tindex == nfields); 
LLVMSetInitializer (info_var, LLVMConstNamedStruct (LLVMGetElementType (LLVMTypeOf (info_var)), fields, nfields)); if (module->static_link) { char *s, *p; LLVMValueRef var; s = g_strdup_printf ("mono_aot_module_%s_info", module->assembly->aname.name); /* Get rid of characters which cannot occur in symbols */ p = s; for (p = s; *p; ++p) { if (!(isalnum (*p) || *p == '_')) *p = '_'; } var = LLVMAddGlobal (module->lmodule, LLVMPointerType (LLVMInt8Type (), 0), s); g_free (s); LLVMSetInitializer (var, LLVMConstBitCast (LLVMGetNamedGlobal (module->lmodule, "mono_aot_file_info"), LLVMPointerType (LLVMInt8Type (), 0))); LLVMSetLinkage (var, LLVMExternalLinkage); } } typedef struct { LLVMValueRef lmethod; int argument; } NonnullPropWorkItem; static void mono_llvm_nonnull_state_update (EmitContext *ctx, LLVMValueRef lcall, MonoMethod *call_method, LLVMValueRef *args, int num_params) { if (mono_aot_can_specialize (call_method)) { int num_passed = LLVMGetNumArgOperands (lcall); g_assert (num_params <= num_passed); g_assert (ctx->module->method_to_call_info); GArray *call_site_union = (GArray *) g_hash_table_lookup (ctx->module->method_to_call_info, call_method); if (!call_site_union) { call_site_union = g_array_sized_new (FALSE, TRUE, sizeof (gint32), num_params); int zero = 0; for (int i = 0; i < num_params; i++) g_array_insert_val (call_site_union, i, zero); } for (int i = 0; i < num_params; i++) { if (mono_llvm_is_nonnull (args [i])) { g_assert (i < LLVMGetNumArgOperands (lcall)); mono_llvm_set_call_nonnull_arg (lcall, i); } else { gint32 *nullable_count = &g_array_index (call_site_union, gint32, i); *nullable_count = *nullable_count + 1; } } g_hash_table_insert (ctx->module->method_to_call_info, call_method, call_site_union); } } static void mono_llvm_propagate_nonnull_final (GHashTable *all_specializable, MonoLLVMModule *module) { // When we first traverse the mini IL, we mark the things that are // nonnull (the roots). Then, for all of the methods that can be specialized, we // see if their call sites have nonnull attributes. // If so, we mark the function's param. This param has uses to propagate // the attribute to. This propagation can trigger a need to mark more attributes // non-null, and so on and so forth. GSList *queue = NULL; GHashTableIter iter; LLVMValueRef lmethod; MonoMethod *method; g_hash_table_iter_init (&iter, all_specializable); while (g_hash_table_iter_next (&iter, (void**)&lmethod, (void**)&method)) { GArray *call_site_union = (GArray *) g_hash_table_lookup (module->method_to_call_info, method); // Basic sanity checking if (call_site_union) g_assert (call_site_union->len == LLVMCountParams (lmethod)); // Add root to work queue for (int i = 0; call_site_union && i < call_site_union->len; i++) { if (g_array_index (call_site_union, gint32, i) == 0) { NonnullPropWorkItem *item = g_malloc (sizeof (NonnullPropWorkItem)); item->lmethod = lmethod; item->argument = i; queue = g_slist_prepend (queue, item); } } } // This is essentially reference counting, and we are propagating // the refcount decrement here. We have less work to do than we may otherwise // because we are only working with a set of subgraphs of specializable functions. // // We rely on being able to see all of the references in the graph. // This is ensured by the function mono_aot_can_specialize. Everything in // all_specializable is a function that can be specialized, and is the resulting // node in the graph after all of the substitutions are done. 
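// For example: if a specializable method is called from two sites and only one of them passes a provably nonnull value for some argument, that argument's counter starts at 1 (one not-yet-nonnull call site). If propagation later proves the remaining site nonnull, the counter drops to 0 and the method is queued, so its parameter gets marked nonnull and its own calls are revisited.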
// // Anything disrupting the direct calls made with self-init will break this optimization. while (queue) { // Update the queue state. // Our only other per-iteration responsibility is now to free current NonnullPropWorkItem *current = (NonnullPropWorkItem *) queue->data; queue = queue->next; g_assert (current->argument < LLVMCountParams (current->lmethod)); // Does the actual leaf-node work here // Mark the function argument as nonnull for LLVM mono_llvm_set_func_nonnull_arg (current->lmethod, current->argument); // The rest of this is for propagating forward nullability changes // to calls that use the argument that is now nullable. // Get the actual LLVM value of the argument, so we can see which call instructions // used that argument LLVMValueRef caller_argument = LLVMGetParam (current->lmethod, current->argument); // Iterate over the calls using the newly-non-nullable argument GSList *calls = mono_llvm_calls_using (caller_argument); for (GSList *cursor = calls; cursor != NULL; cursor = cursor->next) { LLVMValueRef lcall = (LLVMValueRef) cursor->data; LLVMValueRef callee_lmethod = LLVMGetCalledValue (lcall); // If this wasn't a direct call for which mono_aot_can_specialize is true, // this lookup won't find a MonoMethod. MonoMethod *callee_method = (MonoMethod *) g_hash_table_lookup (all_specializable, callee_lmethod); if (!callee_method) continue; // Decrement number of nullable refs at that func's arg offset GArray *call_site_union = (GArray *) g_hash_table_lookup (module->method_to_call_info, callee_method); // It has module-local callers and is specializable, should have seen this call site // and inited this g_assert (call_site_union); // The function *definition* parameter arity should always be consistent int max_params = LLVMCountParams (callee_lmethod); if (call_site_union->len != max_params) { mono_llvm_dump_value (callee_lmethod); g_assert_not_reached (); } // Get the values that correspond to the parameters passed to the call // that used our argument LLVMValueRef *operands = mono_llvm_call_args (lcall); for (int call_argument = 0; call_argument < max_params; call_argument++) { // Every time we used the newly-non-nullable argument, decrement the nullable // refcount for that function. if (caller_argument == operands [call_argument]) { gint32 *nullable_count = &g_array_index (call_site_union, gint32, call_argument); g_assert (*nullable_count > 0); *nullable_count = *nullable_count - 1; // If we caused that callee's parameter to become newly nullable, add to work queue if (*nullable_count == 0) { NonnullPropWorkItem *item = g_malloc (sizeof (NonnullPropWorkItem)); item->lmethod = callee_lmethod; item->argument = call_argument; queue = g_slist_prepend (queue, item); } } } g_free (operands); // Update nullability refcount information for the callee now g_hash_table_insert (module->method_to_call_info, callee_method, call_site_union); } g_slist_free (calls); g_free (current); } } /* * Emit the aot module into the LLVM bitcode file FILENAME. */ void mono_llvm_emit_aot_module (const char *filename, const char *cu_name) { LLVMTypeRef inited_type; LLVMValueRef real_inited; MonoLLVMModule *module = &aot_module; emit_llvm_code_end (module); /* * Create the real init_var and replace all uses of the dummy variable with * the real one. 
inited_type = LLVMArrayType (LLVMInt8Type (), module->max_inited_idx + 1); real_inited = LLVMAddGlobal (module->lmodule, inited_type, "mono_inited"); LLVMSetInitializer (real_inited, LLVMConstNull (inited_type)); LLVMSetLinkage (real_inited, LLVMInternalLinkage); mono_llvm_replace_uses_of (module->inited_var, real_inited); LLVMDeleteGlobal (module->inited_var); /* Replace the dummy info_ variables with the real ones */ for (int i = 0; i < module->cfgs->len; ++i) { MonoCompile *cfg = (MonoCompile *)g_ptr_array_index (module->cfgs, i); // FIXME: Eliminate unused vars // FIXME: Speed this up if (cfg->llvm_dummy_info_var) { if (cfg->llvm_info_var) { mono_llvm_replace_uses_of (cfg->llvm_dummy_info_var, cfg->llvm_info_var); LLVMDeleteGlobal (cfg->llvm_dummy_info_var); } else { // FIXME: How can this happen ? LLVMSetInitializer (cfg->llvm_dummy_info_var, mono_llvm_create_constant_data_array (NULL, 0)); } } } if (module->llvm_only) { emit_get_method (&aot_module); emit_get_unbox_tramp (&aot_module); } emit_init_aotconst (module); emit_llvm_used (&aot_module); emit_dbg_info (&aot_module, filename, cu_name); emit_aot_file_info (&aot_module); /* Replace PLT entries for directly callable methods with the methods themselves */ { GHashTableIter iter; MonoJumpInfo *ji; LLVMValueRef callee; GHashTable *specializable = g_hash_table_new (NULL, NULL); g_hash_table_iter_init (&iter, module->plt_entries_ji); while (g_hash_table_iter_next (&iter, (void**)&ji, (void**)&callee)) { if (mono_aot_is_direct_callable (ji)) { LLVMValueRef lmethod; lmethod = (LLVMValueRef)g_hash_table_lookup (module->method_to_lmethod, ji->data.method); /* The types might not match because the caller might pass an rgctx */ if (lmethod && LLVMTypeOf (callee) == LLVMTypeOf (lmethod)) { mono_llvm_replace_uses_of (callee, lmethod); if (mono_aot_can_specialize (ji->data.method)) g_hash_table_insert (specializable, lmethod, ji->data.method); mono_aot_mark_unused_llvm_plt_entry (ji); } } } mono_llvm_propagate_nonnull_final (specializable, module); g_hash_table_destroy (specializable); } #if 0 { char *verifier_err; if (LLVMVerifyModule (module->lmodule, LLVMReturnStatusAction, &verifier_err)) { printf ("%s\n", verifier_err); g_assert_not_reached (); } } #endif /* Note: You can still dump an invalid bitcode file by running `llvm-dis` * under a debugger, setting a breakpoint on `LLVMVerifyModule`, and faking its * result to 0 (indicating success). */ LLVMWriteBitcodeToFile (module->lmodule, filename); } static LLVMValueRef md_string (const char *s) { return LLVMMDString (s, strlen (s)); } /* Debugging support */ static void emit_dbg_info (MonoLLVMModule *module, const char *filename, const char *cu_name) { LLVMModuleRef lmodule = module->lmodule; LLVMValueRef args [16], ver; /* * This can only be enabled when LLVM code is emitted into a separate object * file, since the AOT compiler also emits dwarf info, * and the abbrev indexes will not be correct since llvm has added its own * abbrevs. 
*/ if (!module->emit_dwarf) return; mono_llvm_di_builder_finalize (module->di_builder); args [0] = LLVMConstInt (LLVMInt32Type (), 2, FALSE); args [1] = LLVMMDString ("Dwarf Version", strlen ("Dwarf Version")); args [2] = LLVMConstInt (LLVMInt32Type (), 2, FALSE); ver = LLVMMDNode (args, 3); LLVMAddNamedMetadataOperand (lmodule, "llvm.module.flags", ver); args [0] = LLVMConstInt (LLVMInt32Type (), 2, FALSE); args [1] = LLVMMDString ("Debug Info Version", strlen ("Debug Info Version")); args [2] = LLVMConstInt (LLVMInt64Type (), 3, FALSE); ver = LLVMMDNode (args, 3); LLVMAddNamedMetadataOperand (lmodule, "llvm.module.flags", ver); } static LLVMValueRef emit_dbg_subprogram (EmitContext *ctx, MonoCompile *cfg, LLVMValueRef method, const char *name) { MonoLLVMModule *module = ctx->module; MonoDebugMethodInfo *minfo = ctx->minfo; char *source_file, *dir, *filename; MonoSymSeqPoint *sym_seq_points; int n_seq_points; if (!minfo) return NULL; mono_debug_get_seq_points (minfo, &source_file, NULL, NULL, &sym_seq_points, &n_seq_points); if (!source_file) source_file = g_strdup ("<unknown>"); dir = g_path_get_dirname (source_file); filename = g_path_get_basename (source_file); g_free (source_file); return (LLVMValueRef)mono_llvm_di_create_function (module->di_builder, module->cu, method, cfg->method->name, name, dir, filename, n_seq_points ? sym_seq_points [0].line : 1); } static void emit_dbg_loc (EmitContext *ctx, LLVMBuilderRef builder, const unsigned char *cil_code) { MonoCompile *cfg = ctx->cfg; if (ctx->minfo && cil_code && cil_code >= cfg->header->code && cil_code < cfg->header->code + cfg->header->code_size) { MonoDebugSourceLocation *loc; LLVMValueRef loc_md; loc = mono_debug_method_lookup_location (ctx->minfo, cil_code - cfg->header->code); if (loc) { loc_md = (LLVMValueRef)mono_llvm_di_create_location (ctx->module->di_builder, ctx->dbg_md, loc->row, loc->column); mono_llvm_di_set_location (builder, loc_md); mono_debug_free_source_location (loc); } } } static void emit_default_dbg_loc (EmitContext *ctx, LLVMBuilderRef builder) { if (ctx->minfo) { LLVMValueRef loc_md; loc_md = (LLVMValueRef)mono_llvm_di_create_location (ctx->module->di_builder, ctx->dbg_md, 0, 0); mono_llvm_di_set_location (builder, loc_md); } } /* DESIGN: - Emit LLVM IR from the mono IR using the LLVM C API. - The original arch specific code remains, so we can fall back to it if we run into something we can't handle. */ /* A partial list of issues: - Handling of opcodes which can throw exceptions. In the mono JIT, these are implemented using code like this: method: <compare> throw_pos: b<cond> ex_label <rest of code> ex_label: push throw_pos - method call <exception trampoline> The problematic part is push throw_pos - method, which cannot be represented in the LLVM IR, since it does not support label values. -> this can be implemented in AOT mode using inline asm + labels, but cannot be implemented in JIT mode ? -> a possible but slower implementation would use the normal exception throwing code but it would need to control the placement of the throw code (it needs to be exactly after the compare+branch). -> perhaps add a PC offset intrinsics ? - efficient implementation of .ovf opcodes. These are currently implemented as: <ins which sets the condition codes> b<cond> ex_label Some overflow opcodes are now supported by LLVM SVN. - exception handling, unwinding. - SSA is disabled for methods with exception handlers - How to obtain unwind info for LLVM compiled methods ? 
-> this is now solved by converting the unwind info generated by LLVM into our format. - LLVM uses the c++ exception handling framework, while we use our home-grown code, and couldn't use the c++ one: - it's not supported under VC++, other exotic platforms. - it might be impossible to support filter clauses with it. - trampolines. The trampolines need a predictable call sequence, since they need to disasm the calling code to obtain register numbers / offsets. LLVM currently generates this code in non-JIT mode: mov -0x98(%rax),%eax callq *%rax Here, the vtable pointer is lost. -> solution: use one vtable trampoline per class. - passing/receiving the IMT pointer/RGCTX. -> solution: pass them as normal arguments ? - argument passing. LLVM does not allow the specification of argument registers etc. This means that all calls are made according to the platform ABI. - passing/receiving vtypes. Vtypes passed/received in registers are handled by the front end by using a signature with scalar arguments, and loading the parts of the vtype into those arguments. Vtypes passed on the stack are handled using the 'byval' attribute. - ldaddr. Supported through alloca, we need to emit the load/store code. - types. The mono JIT uses pointer-sized iregs/double fregs, while LLVM uses precisely typed registers, so we have to keep track of the precise LLVM type of each vreg. This is made easier because the IR is already in SSA form. An additional problem is that our IR is not consistent with types, i.e. i32/i64 types are frequently used incorrectly. */ /* AOT SUPPORT: Emit LLVM bytecode into a .bc file, compile it using llc into a .s file, then link it with the file containing the methods emitted by the JIT and the AOT data structures. */ /* FIXME: Normalize some aspects of the mono IR to allow easier translation, like: * - each bblock should end with a branch * - setting the return value, making cfg->ret non-volatile * - avoid some transformations in the JIT which make it harder for us to generate * code. * - use pointer types to help optimizations. */ #else /* DISABLE_JIT */ void mono_llvm_cleanup (void) { } void mono_llvm_free_mem_manager (MonoJitMemoryManager *mem_manager) { } void mono_llvm_init (gboolean enable_jit) { } #endif /* DISABLE_JIT */ #if !defined(DISABLE_JIT) && !defined(MONO_CROSS_COMPILE) /* LLVM JIT support */ /* * decode_llvm_eh_info: * * Decode the EH table emitted by llvm in jit mode, and store * the result into cfg. */ static void decode_llvm_eh_info (EmitContext *ctx, gpointer eh_frame) { MonoCompile *cfg = ctx->cfg; guint8 *cie, *fde; int fde_len; MonoLLVMFDEInfo info; MonoJitExceptionInfo *ei; guint8 *p = (guint8*)eh_frame; int version, fde_count, fde_offset; guint32 ei_len, i, nested_len; gpointer *type_info; gint32 *table; guint8 *unw_info; /* * Decode the one element EH table emitted by the MonoException class * in llvm. 
*/ /* Similar to decode_llvm_mono_eh_frame () in aot-runtime.c */ version = *p; g_assert (version == 3); p ++; p ++; p = (guint8 *)ALIGN_PTR_TO (p, 4); fde_count = *(guint32*)p; p += 4; table = (gint32*)p; g_assert (fde_count <= 2); /* The first entry is the real method */ g_assert (table [0] == 1); fde_offset = table [1]; table += fde_count * 2; /* Extra entry */ cfg->code_len = table [0]; fde_len = table [1] - fde_offset; table += 2; fde = (guint8*)eh_frame + fde_offset; cie = (guint8*)table; /* Compute lengths */ mono_unwind_decode_llvm_mono_fde (fde, fde_len, cie, cfg->native_code, &info, NULL, NULL, NULL); ei = (MonoJitExceptionInfo *)g_malloc0 (info.ex_info_len * sizeof (MonoJitExceptionInfo)); type_info = (gpointer *)g_malloc0 (info.ex_info_len * sizeof (gpointer)); unw_info = (guint8*)g_malloc0 (info.unw_info_len); mono_unwind_decode_llvm_mono_fde (fde, fde_len, cie, cfg->native_code, &info, ei, type_info, unw_info); cfg->encoded_unwind_ops = unw_info; cfg->encoded_unwind_ops_len = info.unw_info_len; if (cfg->verbose_level > 1) mono_print_unwind_info (cfg->encoded_unwind_ops, cfg->encoded_unwind_ops_len); if (info.this_reg != -1) { cfg->llvm_this_reg = info.this_reg; cfg->llvm_this_offset = info.this_offset; } ei_len = info.ex_info_len; // Nested clauses are currently disabled nested_len = 0; cfg->llvm_ex_info = (MonoJitExceptionInfo*)mono_mempool_alloc0 (cfg->mempool, (ei_len + nested_len) * sizeof (MonoJitExceptionInfo)); cfg->llvm_ex_info_len = ei_len + nested_len; memcpy (cfg->llvm_ex_info, ei, ei_len * sizeof (MonoJitExceptionInfo)); /* Fill the rest of the information from the type info */ for (i = 0; i < ei_len; ++i) { gint32 clause_index = *(gint32*)type_info [i]; MonoExceptionClause *clause = &cfg->header->clauses [clause_index]; cfg->llvm_ex_info [i].flags = clause->flags; cfg->llvm_ex_info [i].data.catch_class = clause->data.catch_class; cfg->llvm_ex_info [i].clause_index = clause_index; } } static MonoLLVMModule* init_jit_module (void) { MonoJitMemoryManager *jit_mm; MonoLLVMModule *module; // FIXME: jit_mm = get_default_jit_mm (); if (jit_mm->llvm_module) return (MonoLLVMModule*)jit_mm->llvm_module; mono_loader_lock (); if (jit_mm->llvm_module) { mono_loader_unlock (); return (MonoLLVMModule*)jit_mm->llvm_module; } module = g_new0 (MonoLLVMModule, 1); module->context = LLVMGetGlobalContext (); module->mono_ee = (MonoEERef*)mono_llvm_create_ee (&module->ee); // This contains just the intrinsics module->lmodule = LLVMModuleCreateWithName ("jit-global-module"); add_intrinsics (module->lmodule); add_types (module); module->llvm_types = g_hash_table_new (NULL, NULL); mono_memory_barrier (); jit_mm->llvm_module = module; mono_loader_unlock (); return (MonoLLVMModule*)jit_mm->llvm_module; } static void llvm_jit_finalize_method (EmitContext *ctx) { MonoCompile *cfg = ctx->cfg; int nvars = g_hash_table_size (ctx->jit_callees); LLVMValueRef *callee_vars = g_new0 (LLVMValueRef, nvars); gpointer *callee_addrs = g_new0 (gpointer, nvars); GHashTableIter iter; LLVMValueRef var; MonoMethod *callee; gpointer eh_frame; int i; /* * Compute the addresses of the LLVM globals pointing to the * methods called by the current method. Pass it to the trampoline * code so it can update them after their corresponding method was * compiled. 
g_hash_table_iter_init (&iter, ctx->jit_callees); i = 0; while (g_hash_table_iter_next (&iter, NULL, (void**)&var)) callee_vars [i ++] = var; mono_llvm_optimize_method (ctx->lmethod); if (cfg->verbose_level > 1) { g_print ("\n*** Optimized LLVM IR for %s ***\n", mono_method_full_name (cfg->method, TRUE)); if (cfg->compile_aot) { mono_llvm_dump_value (ctx->lmethod); } else { mono_llvm_dump_module (ctx->lmodule); } g_print ("***\n\n"); } mono_codeman_enable_write (); cfg->native_code = (guint8*)mono_llvm_compile_method (ctx->module->mono_ee, cfg, ctx->lmethod, nvars, callee_vars, callee_addrs, &eh_frame); mono_llvm_remove_gc_safepoint_poll (ctx->lmodule); mono_codeman_disable_write (); decode_llvm_eh_info (ctx, eh_frame); // FIXME: MonoJitMemoryManager *jit_mm = get_default_jit_mm (); jit_mm_lock (jit_mm); if (!jit_mm->llvm_jit_callees) jit_mm->llvm_jit_callees = g_hash_table_new (NULL, NULL); g_hash_table_iter_init (&iter, ctx->jit_callees); i = 0; while (g_hash_table_iter_next (&iter, (void**)&callee, (void**)&var)) { GSList *addrs = (GSList*)g_hash_table_lookup (jit_mm->llvm_jit_callees, callee); addrs = g_slist_prepend (addrs, callee_addrs [i]); g_hash_table_insert (jit_mm->llvm_jit_callees, callee, addrs); i ++; } jit_mm_unlock (jit_mm); } #else static MonoLLVMModule* init_jit_module (void) { g_assert_not_reached (); } static void llvm_jit_finalize_method (EmitContext *ctx) { g_assert_not_reached (); } #endif static MonoCPUFeatures cpu_features; MonoCPUFeatures mono_llvm_get_cpu_features (void) { static const CpuFeatureAliasFlag flags_map [] = { #if defined(TARGET_X86) || defined(TARGET_AMD64) { "sse", MONO_CPU_X86_SSE }, { "sse2", MONO_CPU_X86_SSE2 }, { "pclmul", MONO_CPU_X86_PCLMUL }, { "aes", MONO_CPU_X86_AES }, { "sse3", MONO_CPU_X86_SSE3 }, { "ssse3", MONO_CPU_X86_SSSE3 }, { "sse4.1", MONO_CPU_X86_SSE41 }, { "sse4.2", MONO_CPU_X86_SSE42 }, { "popcnt", MONO_CPU_X86_POPCNT }, { "avx", MONO_CPU_X86_AVX }, { "avx2", MONO_CPU_X86_AVX2 }, { "fma", MONO_CPU_X86_FMA }, { "lzcnt", MONO_CPU_X86_LZCNT }, { "bmi", MONO_CPU_X86_BMI1 }, { "bmi2", MONO_CPU_X86_BMI2 }, #endif #if defined(TARGET_ARM64) { "crc", MONO_CPU_ARM64_CRC }, { "crypto", MONO_CPU_ARM64_CRYPTO }, { "neon", MONO_CPU_ARM64_NEON }, { "rdm", MONO_CPU_ARM64_RDM }, { "dotprod", MONO_CPU_ARM64_DP }, #endif #if defined(TARGET_WASM) { "simd", MONO_CPU_WASM_SIMD }, #endif // flags_map cannot be zero length in MSVC, so add useless dummy entry for arm32 #if defined(TARGET_ARM) && defined(HOST_WIN32) { "inited", MONO_CPU_INITED}, #endif }; if (!cpu_features) cpu_features = MONO_CPU_INITED | (MonoCPUFeatures)mono_llvm_check_cpu_features (flags_map, G_N_ELEMENTS (flags_map)); return cpu_features; }
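/* Usage sketch (illustrative, assumed caller): consumers would test individual feature bits of the cached value, e.g. if (mono_llvm_get_cpu_features () & MONO_CPU_X86_AVX2) { ... }. The probe runs only once: MONO_CPU_INITED is or-ed into cpu_features, so a zero value reliably means "not yet initialized". */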
1
dotnet/runtime
65,889
[Mono] Add SIMD intrinsics for Vector64/128 "All" and "Any" variants of GT/GE/LT/LE
Related to #64072 - `GreaterThanAll` - `GreaterThanAny` - `GreaterThanOrEqualAll` - `GreaterThanOrEqualAny` - `LessThanAll` - `LessThanAny` - `LessThanOrEqualAll` - `LessThanOrEqualAny`
simonrozsival
"2022-02-25T12:07:00Z"
"2022-03-09T14:24:14Z"
e32f3b61cd41e6a97ebe8f512ff673b63ff40640
cbcc616cf386b88e49f97f74182ffff241528179
[Mono] Add SIMD intrinsics for Vector64/128 "All" and "Any" variants of GT/GE/LT/LE. Related to #64072 - `GreaterThanAll` - `GreaterThanAny` - `GreaterThanOrEqualAll` - `GreaterThanOrEqualAny` - `LessThanAll` - `LessThanAny` - `LessThanOrEqualAll` - `LessThanOrEqualAny`
./src/mono/mono/mini/simd-intrinsics.c
/** * SIMD Intrinsics support for netcore. * Only LLVM is supported as a backend. */ #include <config.h> #include <mono/utils/mono-compiler.h> #include <mono/metadata/icall-decl.h> #include "mini.h" #include "mini-runtime.h" #include "ir-emit.h" #include "llvm-intrinsics-types.h" #ifdef ENABLE_LLVM #include "mini-llvm.h" #include "mini-llvm-cpp.h" #endif #include "mono/utils/bsearch.h" #include <mono/metadata/abi-details.h> #include <mono/metadata/reflection-internals.h> #include <mono/utils/mono-hwcap.h> #if defined (MONO_ARCH_SIMD_INTRINSICS) #if defined(DISABLE_JIT) void mono_simd_intrinsics_init (void) { } #else #define MSGSTRFIELD(line) MSGSTRFIELD1(line) #define MSGSTRFIELD1(line) str##line static const struct msgstr_t { #define METHOD(name) char MSGSTRFIELD(__LINE__) [sizeof (#name)]; #define METHOD2(str,name) char MSGSTRFIELD(__LINE__) [sizeof (str)]; #include "simd-methods.h" #undef METHOD #undef METHOD2 } method_names = { #define METHOD(name) #name, #define METHOD2(str,name) str, #include "simd-methods.h" #undef METHOD #undef METHOD2 }; enum { #define METHOD(name) SN_ ## name = offsetof (struct msgstr_t, MSGSTRFIELD(__LINE__)), #define METHOD2(str,name) SN_ ## name = offsetof (struct msgstr_t, MSGSTRFIELD(__LINE__)), #include "simd-methods.h" }; #define method_name(idx) ((const char*)&method_names + (idx)) static int register_size; #define None 0 typedef struct { uint16_t id; // One of the SN_ constants uint16_t default_op; // ins->opcode uint16_t default_instc0; // ins->inst_c0 uint16_t unsigned_op; uint16_t unsigned_instc0; uint16_t floating_op; uint16_t floating_instc0; } SimdIntrinsic; static const SimdIntrinsic unsupported [] = { {SN_get_IsSupported} }; void mono_simd_intrinsics_init (void) { register_size = 16; #if 0 if ((mini_get_cpu_features () & MONO_CPU_X86_AVX) != 0) register_size = 32; #endif /* Tell the class init code the size of the System.Numerics.Register type */ mono_simd_register_size = register_size; } MonoInst* mono_emit_simd_field_load (MonoCompile *cfg, MonoClassField *field, MonoInst *addr) { return NULL; } static int simd_intrinsic_compare_by_name (const void *key, const void *value) { return strcmp ((const char*)key, method_name (*(guint16*)value)); } static int simd_intrinsic_info_compare_by_name (const void *key, const void *value) { SimdIntrinsic *info = (SimdIntrinsic*)value; return strcmp ((const char*)key, method_name (info->id)); } static int lookup_intrins (guint16 *intrinsics, int size, MonoMethod *cmethod) { const guint16 *result = (const guint16 *)mono_binary_search (cmethod->name, intrinsics, size / sizeof (guint16), sizeof (guint16), &simd_intrinsic_compare_by_name); if (result == NULL) return -1; else return (int)*result; } static SimdIntrinsic* lookup_intrins_info (SimdIntrinsic *intrinsics, int size, MonoMethod *cmethod) { #if 0 for (int i = 0; i < (size / sizeof (SimdIntrinsic)) - 1; ++i) { const char *n1 = method_name (intrinsics [i].id); const char *n2 = method_name (intrinsics [i + 1].id); int len1 = strlen (n1); int len2 = strlen (n2); for (int j = 0; j < len1 && j < len2; ++j) { if (n1 [j] > n2 [j]) { printf ("%s %s\n", n1, n2); g_assert_not_reached (); } else if (n1 [j] < n2 [j]) { break; } } } #endif return (SimdIntrinsic *)mono_binary_search (cmethod->name, intrinsics, size / sizeof (SimdIntrinsic), sizeof (SimdIntrinsic), &simd_intrinsic_info_compare_by_name); } /* * Return a simd vreg for the simd value represented by SRC. * SRC is the 'this' argument to methods. * Set INDIRECT to TRUE if the value was loaded from memory. 
*/ static int load_simd_vreg_class (MonoCompile *cfg, MonoClass *klass, MonoInst *src, gboolean *indirect) { const char *spec = INS_INFO (src->opcode); if (indirect) *indirect = FALSE; if (src->opcode == OP_XMOVE) { return src->sreg1; } else if (src->opcode == OP_LDADDR) { int res = ((MonoInst*)src->inst_p0)->dreg; return res; } else if (spec [MONO_INST_DEST] == 'x') { return src->dreg; } else if (src->type == STACK_PTR || src->type == STACK_MP) { MonoInst *ins; if (indirect) *indirect = TRUE; MONO_INST_NEW (cfg, ins, OP_LOADX_MEMBASE); ins->klass = klass; ins->sreg1 = src->dreg; ins->type = STACK_VTYPE; ins->dreg = alloc_ireg (cfg); MONO_ADD_INS (cfg->cbb, ins); return ins->dreg; } g_warning ("load_simd_vreg:: could not infer source simd (%d) vreg for op", src->type); mono_print_ins (src); g_assert_not_reached (); } static int load_simd_vreg (MonoCompile *cfg, MonoMethod *cmethod, MonoInst *src, gboolean *indirect) { return load_simd_vreg_class (cfg, cmethod->klass, src, indirect); } /* Create and emit a SIMD instruction, dreg is auto-allocated */ static MonoInst* emit_simd_ins (MonoCompile *cfg, MonoClass *klass, int opcode, int sreg1, int sreg2) { const char *spec = INS_INFO (opcode); MonoInst *ins; MONO_INST_NEW (cfg, ins, opcode); if (spec [MONO_INST_DEST] == 'x') { ins->dreg = alloc_xreg (cfg); ins->type = STACK_VTYPE; } else if (spec [MONO_INST_DEST] == 'i') { ins->dreg = alloc_ireg (cfg); ins->type = STACK_I4; } else if (spec [MONO_INST_DEST] == 'l') { ins->dreg = alloc_lreg (cfg); ins->type = STACK_I8; } else if (spec [MONO_INST_DEST] == 'f') { ins->dreg = alloc_freg (cfg); ins->type = STACK_R8; } else if (spec [MONO_INST_DEST] == 'v') { ins->dreg = alloc_dreg (cfg, STACK_VTYPE); ins->type = STACK_VTYPE; } ins->sreg1 = sreg1; ins->sreg2 = sreg2; ins->klass = klass; MONO_ADD_INS (cfg->cbb, ins); return ins; } static MonoInst* emit_simd_ins_for_sig (MonoCompile *cfg, MonoClass *klass, int opcode, int instc0, int instc1, MonoMethodSignature *fsig, MonoInst **args) { g_assert (fsig->param_count <= 3); MonoInst* ins = emit_simd_ins (cfg, klass, opcode, fsig->param_count > 0 ? args [0]->dreg : -1, fsig->param_count > 1 ? args [1]->dreg : -1); if (instc0 != -1) ins->inst_c0 = instc0; if (instc1 != -1) ins->inst_c1 = instc1; if (fsig->param_count == 3) ins->sreg3 = args [2]->dreg; return ins; } static gboolean is_hw_intrinsics_class (MonoClass *klass, const char *name, gboolean *is_64bit) { const char *class_name = m_class_get_name (klass); if ((!strcmp (class_name, "X64") || !strcmp (class_name, "Arm64")) && m_class_get_nested_in (klass)) { *is_64bit = TRUE; return !strcmp (m_class_get_name (m_class_get_nested_in (klass)), name); } else { *is_64bit = FALSE; return !strcmp (class_name, name); } } static MonoTypeEnum get_underlying_type (MonoType* type) { MonoClass* klass = mono_class_from_mono_type_internal (type); if (type->type == MONO_TYPE_PTR) // e.g. int* => MONO_TYPE_I4 return m_class_get_byval_arg (m_class_get_element_class (klass))->type; else if (type->type == MONO_TYPE_GENERICINST) // e.g. Vector128<int> => MONO_TYPE_I4 return mono_class_get_context (klass)->class_inst->type_argv [0]->type; else return type->type; } static MonoInst* emit_xcompare (MonoCompile *cfg, MonoClass *klass, MonoTypeEnum etype, MonoInst *arg1, MonoInst *arg2) { MonoInst *ins; gboolean is_fp = etype == MONO_TYPE_R4 || etype == MONO_TYPE_R8; ins = emit_simd_ins (cfg, klass, is_fp ? 
OP_XCOMPARE_FP : OP_XCOMPARE, arg1->dreg, arg2->dreg); ins->inst_c0 = CMP_EQ; ins->inst_c1 = etype; return ins; } static MonoInst* emit_xequal (MonoCompile *cfg, MonoClass *klass, MonoInst *arg1, MonoInst *arg2) { return emit_simd_ins (cfg, klass, OP_XEQUAL, arg1->dreg, arg2->dreg); } static MonoInst* emit_not_xequal (MonoCompile *cfg, MonoClass *klass, MonoInst *arg1, MonoInst *arg2) { MonoInst *ins = emit_simd_ins (cfg, klass, OP_XEQUAL, arg1->dreg, arg2->dreg); int sreg = ins->dreg; int dreg = alloc_ireg (cfg); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, sreg, 0); EMIT_NEW_UNALU (cfg, ins, OP_CEQ, dreg, -1); return ins; } static MonoInst* emit_xzero (MonoCompile *cfg, MonoClass *klass) { return emit_simd_ins (cfg, klass, OP_XZERO, -1, -1); } static gboolean is_intrinsics_vector_type (MonoType *vector_type) { if (vector_type->type != MONO_TYPE_GENERICINST) return FALSE; MonoClass *klass = mono_class_from_mono_type_internal (vector_type); const char *name = m_class_get_name (klass); return !strcmp (name, "Vector64`1") || !strcmp (name, "Vector128`1") || !strcmp (name, "Vector256`1"); } static MonoType* get_vector_t_elem_type (MonoType *vector_type) { MonoClass *klass; MonoType *etype; g_assert (vector_type->type == MONO_TYPE_GENERICINST); klass = mono_class_from_mono_type_internal (vector_type); g_assert ( !strcmp (m_class_get_name (klass), "Vector`1") || !strcmp (m_class_get_name (klass), "Vector64`1") || !strcmp (m_class_get_name (klass), "Vector128`1") || !strcmp (m_class_get_name (klass), "Vector256`1")); etype = mono_class_get_context (klass)->class_inst->type_argv [0]; return etype; } static gboolean type_is_unsigned (MonoType *type) { MonoClass *klass = mono_class_from_mono_type_internal (type); MonoType *etype = mono_class_get_context (klass)->class_inst->type_argv [0]; switch (etype->type) { case MONO_TYPE_U1: case MONO_TYPE_U2: case MONO_TYPE_U4: case MONO_TYPE_U8: case MONO_TYPE_U: return TRUE; } return FALSE; } static gboolean type_is_float (MonoType *type) { MonoClass *klass = mono_class_from_mono_type_internal (type); MonoType *etype = mono_class_get_context (klass)->class_inst->type_argv [0]; switch (etype->type) { case MONO_TYPE_R4: case MONO_TYPE_R8: return TRUE; } return FALSE; } static int type_to_expand_op (MonoType *type) { switch (type->type) { case MONO_TYPE_I1: case MONO_TYPE_U1: return OP_EXPAND_I1; case MONO_TYPE_I2: case MONO_TYPE_U2: return OP_EXPAND_I2; case MONO_TYPE_I4: case MONO_TYPE_U4: return OP_EXPAND_I4; case MONO_TYPE_I8: case MONO_TYPE_U8: return OP_EXPAND_I8; case MONO_TYPE_R4: return OP_EXPAND_R4; case MONO_TYPE_R8: return OP_EXPAND_R8; case MONO_TYPE_I: case MONO_TYPE_U: #if TARGET_SIZEOF_VOID_P == 8 return OP_EXPAND_I8; #else return OP_EXPAND_I4; #endif default: g_assert_not_reached (); } } static int type_to_insert_op (MonoType *type) { switch (type->type) { case MONO_TYPE_I1: case MONO_TYPE_U1: return OP_INSERT_I1; case MONO_TYPE_I2: case MONO_TYPE_U2: return OP_INSERT_I2; case MONO_TYPE_I4: case MONO_TYPE_U4: return OP_INSERT_I4; case MONO_TYPE_I8: case MONO_TYPE_U8: return OP_INSERT_I8; case MONO_TYPE_R4: return OP_INSERT_R4; case MONO_TYPE_R8: return OP_INSERT_R8; case MONO_TYPE_I: case MONO_TYPE_U: #if TARGET_SIZEOF_VOID_P == 8 return OP_INSERT_I8; #else return OP_INSERT_I4; #endif default: g_assert_not_reached (); } } typedef struct { const char *name; MonoCPUFeatures feature; const SimdIntrinsic *intrinsics; int intrinsics_size; gboolean jit_supported; } IntrinGroup; typedef MonoInst * (* EmitIntrinsicFn) ( MonoCompile *cfg, 
MonoMethodSignature *fsig, MonoInst **args, MonoClass *klass, const IntrinGroup *intrin_group, const SimdIntrinsic *info, int id, MonoTypeEnum arg0_type, gboolean is_64bit); static const IntrinGroup unsupported_intrin_group [] = { { "", 0, unsupported, sizeof (unsupported) }, }; static MonoInst * emit_hardware_intrinsics ( MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **args, const IntrinGroup *groups, int groups_size_bytes, EmitIntrinsicFn custom_emit) { MonoClass *klass = cmethod->klass; const IntrinGroup *intrin_group = unsupported_intrin_group; gboolean is_64bit = FALSE; int groups_size = groups_size_bytes / sizeof (groups [0]); for (int i = 0; i < groups_size; ++i) { const IntrinGroup *group = &groups [i]; if (is_hw_intrinsics_class (klass, group->name, &is_64bit)) { intrin_group = group; break; } } gboolean supported = FALSE; MonoTypeEnum arg0_type = fsig->param_count > 0 ? get_underlying_type (fsig->params [0]) : MONO_TYPE_VOID; int id = -1; uint16_t op = 0; uint16_t c0 = 0; const SimdIntrinsic *intrinsics = intrin_group->intrinsics; int intrinsics_size = intrin_group->intrinsics_size; MonoCPUFeatures feature = intrin_group->feature; const SimdIntrinsic *info = lookup_intrins_info ((SimdIntrinsic *) intrinsics, intrinsics_size, cmethod); { if (!info) goto support_probe_complete; id = info->id; // Hardware intrinsics are LLVM-only. if (!COMPILE_LLVM (cfg) && !intrin_group->jit_supported) goto support_probe_complete; if (intrin_group->intrinsics == unsupported) supported = FALSE; else if (feature) supported = (mini_get_cpu_features (cfg) & feature) != 0; else supported = TRUE; op = info->default_op; c0 = info->default_instc0; gboolean is_unsigned = FALSE; gboolean is_float = FALSE; switch (arg0_type) { case MONO_TYPE_U1: case MONO_TYPE_U2: case MONO_TYPE_U4: case MONO_TYPE_U8: case MONO_TYPE_U: is_unsigned = TRUE; break; case MONO_TYPE_R4: case MONO_TYPE_R8: is_float = TRUE; break; } if (is_unsigned && info->unsigned_op != 0) { op = info->unsigned_op; c0 = info->unsigned_instc0; } else if (is_float && info->floating_op != 0) { op = info->floating_op; c0 = info->floating_instc0; } } support_probe_complete: if (id == SN_get_IsSupported) { MonoInst *ins = NULL; EMIT_NEW_ICONST (cfg, ins, supported ? 
1 : 0); return ins; } if (!supported) { // Can't emit non-supported llvm intrinsics if (cfg->method != cmethod) { // Keep the original call so we end up in the intrinsic method return NULL; } else { // Emit an exception from the intrinsic method mono_emit_jit_icall (cfg, mono_throw_platform_not_supported, NULL); return NULL; } } if (op != 0) return emit_simd_ins_for_sig (cfg, klass, op, c0, arg0_type, fsig, args); return custom_emit (cfg, fsig, args, klass, intrin_group, info, id, arg0_type, is_64bit); } static MonoInst * emit_vector_create_elementwise ( MonoCompile *cfg, MonoMethodSignature *fsig, MonoType *vtype, MonoType *etype, MonoInst **args) { int op = type_to_insert_op (etype); MonoClass *vklass = mono_class_from_mono_type_internal (vtype); MonoInst *ins = emit_xzero (cfg, vklass); for (int i = 0; i < fsig->param_count; ++i) { ins = emit_simd_ins (cfg, vklass, op, ins->dreg, args [i]->dreg); ins->inst_c0 = i; } return ins; } #if defined(TARGET_AMD64) || defined(TARGET_ARM64) static int type_to_xinsert_op (MonoTypeEnum type) { switch (type) { case MONO_TYPE_I1: case MONO_TYPE_U1: return OP_XINSERT_I1; case MONO_TYPE_I2: case MONO_TYPE_U2: return OP_XINSERT_I2; case MONO_TYPE_I4: case MONO_TYPE_U4: return OP_XINSERT_I4; case MONO_TYPE_I8: case MONO_TYPE_U8: return OP_XINSERT_I8; case MONO_TYPE_R4: return OP_XINSERT_R4; case MONO_TYPE_R8: return OP_XINSERT_R8; case MONO_TYPE_I: case MONO_TYPE_U: #if TARGET_SIZEOF_VOID_P == 8 return OP_XINSERT_I8; #else return OP_XINSERT_I4; #endif default: g_assert_not_reached (); } } static int type_to_xextract_op (MonoTypeEnum type) { switch (type) { case MONO_TYPE_I1: case MONO_TYPE_U1: return OP_XEXTRACT_I1; case MONO_TYPE_I2: case MONO_TYPE_U2: return OP_XEXTRACT_I2; case MONO_TYPE_I4: case MONO_TYPE_U4: return OP_XEXTRACT_I4; case MONO_TYPE_I8: case MONO_TYPE_U8: return OP_XEXTRACT_I8; case MONO_TYPE_R4: return OP_XEXTRACT_R4; case MONO_TYPE_R8: return OP_XEXTRACT_R8; case MONO_TYPE_I: case MONO_TYPE_U: #if TARGET_SIZEOF_VOID_P == 8 return OP_XEXTRACT_I8; #else return OP_XEXTRACT_I4; #endif default: g_assert_not_reached (); } } static int type_to_extract_op (MonoTypeEnum type) { switch (type) { case MONO_TYPE_I1: case MONO_TYPE_U1: return OP_EXTRACT_I1; case MONO_TYPE_I2: case MONO_TYPE_U2: return OP_EXTRACT_I2; case MONO_TYPE_I4: case MONO_TYPE_U4: return OP_EXTRACT_I4; case MONO_TYPE_I8: case MONO_TYPE_U8: return OP_EXTRACT_I8; case MONO_TYPE_R4: return OP_EXTRACT_R4; case MONO_TYPE_R8: return OP_EXTRACT_R8; case MONO_TYPE_I: case MONO_TYPE_U: #if TARGET_SIZEOF_VOID_P == 8 return OP_EXTRACT_I8; #else return OP_EXTRACT_I4; #endif default: g_assert_not_reached (); } } static guint16 sri_vector_methods [] = { SN_Abs, SN_Add, SN_AndNot, SN_As, SN_AsByte, SN_AsDouble, SN_AsInt16, SN_AsInt32, SN_AsInt64, SN_AsSByte, SN_AsSingle, SN_AsUInt16, SN_AsUInt32, SN_AsUInt64, SN_AsVector128, SN_AsVector2, SN_AsVector256, SN_AsVector3, SN_AsVector4, SN_BitwiseAnd, SN_BitwiseOr, SN_Ceiling, SN_ConditionalSelect, SN_ConvertToDouble, SN_ConvertToInt32, SN_ConvertToUInt32, SN_Create, SN_CreateScalar, SN_CreateScalarUnsafe, SN_Divide, SN_Equals, SN_EqualsAll, SN_EqualsAny, SN_Floor, SN_GetElement, SN_GetLower, SN_GetUpper, SN_GreaterThan, SN_GreaterThanOrEqual, SN_LessThan, SN_LessThanOrEqual, SN_Max, SN_Min, SN_Multiply, SN_Negate, SN_OnesComplement, SN_Sqrt, SN_Subtract, SN_ToScalar, SN_ToVector128, SN_ToVector128Unsafe, SN_ToVector256, SN_ToVector256Unsafe, SN_WithElement, SN_Xor, }; /* nint and nuint haven't been enabled yet for System.Runtime.Intrinsics. 
* Remove this once support has been added. */ #define MONO_TYPE_IS_INTRINSICS_VECTOR_PRIMITIVE(t) ((MONO_TYPE_IS_VECTOR_PRIMITIVE(t)) && ((t)->type != MONO_TYPE_I) && ((t)->type != MONO_TYPE_U)) static gboolean is_elementwise_create_overload (MonoMethodSignature *fsig, MonoType *ret_type) { uint16_t param_count = fsig->param_count; if (param_count < 1) return FALSE; MonoType *type = fsig->params [0]; if (!MONO_TYPE_IS_INTRINSICS_VECTOR_PRIMITIVE (type)) return FALSE; if (!mono_metadata_type_equal (ret_type, type)) return FALSE; for (uint16_t i = 1; i < param_count; ++i) if (!mono_metadata_type_equal (type, fsig->params [i])) return FALSE; return TRUE; } static gboolean is_create_from_half_vectors_overload (MonoMethodSignature *fsig) { if (fsig->param_count != 2) return FALSE; if (!is_intrinsics_vector_type (fsig->params [0])) return FALSE; return mono_metadata_type_equal (fsig->params [0], fsig->params [1]); } static gboolean is_element_type_primitive (MonoType *vector_type) { MonoType *element_type = get_vector_t_elem_type (vector_type); return MONO_TYPE_IS_INTRINSICS_VECTOR_PRIMITIVE (element_type); } static MonoInst* emit_sri_vector (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **args) { if (!COMPILE_LLVM (cfg)) return NULL; int id = lookup_intrins (sri_vector_methods, sizeof (sri_vector_methods), cmethod); if (id == -1) return NULL; if (!strcmp (m_class_get_name (cfg->method->klass), "Vector256")) return NULL; // TODO: Fix Vector256.WithUpper/WithLower MonoClass *klass = cmethod->klass; MonoTypeEnum arg0_type = fsig->param_count > 0 ? get_underlying_type (fsig->params [0]) : MONO_TYPE_VOID; switch (id) { case SN_Abs: { if (!is_element_type_primitive (fsig->params [0])) return NULL; #ifdef TARGET_ARM64 switch (arg0_type) { case MONO_TYPE_U1: case MONO_TYPE_U2: case MONO_TYPE_U4: case MONO_TYPE_U8: case MONO_TYPE_U: return NULL; } gboolean is_float = arg0_type == MONO_TYPE_R4 || arg0_type == MONO_TYPE_R8; int iid = is_float ? 
INTRINS_AARCH64_ADV_SIMD_FABS : INTRINS_AARCH64_ADV_SIMD_ABS; return emit_simd_ins_for_sig (cfg, klass, OP_XOP_OVR_X_X, iid, arg0_type, fsig, args); #else return NULL; #endif } case SN_Add: case SN_Divide: case SN_Max: case SN_Min: case SN_Multiply: case SN_Subtract: { if (!is_element_type_primitive (fsig->params [0])) return NULL; int instc0 = -1; if (arg0_type == MONO_TYPE_R4 || arg0_type == MONO_TYPE_R8) { switch (id) { case SN_Add: instc0 = OP_FADD; break; case SN_Divide: instc0 = OP_FDIV; break; case SN_Max: instc0 = OP_FMAX; break; case SN_Min: instc0 = OP_FMIN; break; case SN_Multiply: instc0 = OP_FMUL; break; case SN_Subtract: instc0 = OP_FSUB; break; default: g_assert_not_reached (); } } else { switch (id) { case SN_Add: instc0 = OP_IADD; break; case SN_Divide: return NULL; case SN_Max: instc0 = OP_IMAX; break; case SN_Min: instc0 = OP_IMIN; break; case SN_Multiply: instc0 = OP_IMUL; break; case SN_Subtract: instc0 = OP_ISUB; break; default: g_assert_not_reached (); } } return emit_simd_ins_for_sig (cfg, klass, OP_XBINOP, instc0, arg0_type, fsig, args); } case SN_AndNot: if (!is_element_type_primitive (fsig->params [0])) return NULL; #ifdef TARGET_ARM64 return emit_simd_ins_for_sig (cfg, klass, OP_ARM64_BIC, -1, arg0_type, fsig, args); #else return NULL; #endif case SN_BitwiseAnd: case SN_BitwiseOr: case SN_Xor: { if (!is_element_type_primitive (fsig->params [0])) return NULL; int instc0 = -1; switch (id) { case SN_BitwiseAnd: instc0 = XBINOP_FORCEINT_AND; break; case SN_BitwiseOr: instc0 = XBINOP_FORCEINT_OR; break; case SN_Xor: instc0 = XBINOP_FORCEINT_XOR; break; default: g_assert_not_reached (); } return emit_simd_ins_for_sig (cfg, klass, OP_XBINOP_FORCEINT, instc0, arg0_type, fsig, args); } case SN_As: case SN_AsByte: case SN_AsDouble: case SN_AsInt16: case SN_AsInt32: case SN_AsInt64: case SN_AsSByte: case SN_AsSingle: case SN_AsUInt16: case SN_AsUInt32: case SN_AsUInt64: { if (!is_element_type_primitive (fsig->ret) || !is_element_type_primitive (fsig->params [0])) return NULL; return emit_simd_ins (cfg, klass, OP_XCAST, args [0]->dreg, -1); } case SN_Ceiling: case SN_Floor: { #ifdef TARGET_ARM64 if ((arg0_type != MONO_TYPE_R4) && (arg0_type != MONO_TYPE_R8)) return NULL; int ceil_or_floor = id == SN_Ceiling ? INTRINS_AARCH64_ADV_SIMD_FRINTP : INTRINS_AARCH64_ADV_SIMD_FRINTM; return emit_simd_ins_for_sig (cfg, klass, OP_XOP_OVR_X_X, ceil_or_floor, arg0_type, fsig, args); #else return NULL; #endif } case SN_ConditionalSelect: { if (!is_element_type_primitive (fsig->params [0])) return NULL; #ifdef TARGET_ARM64 return emit_simd_ins_for_sig (cfg, klass, OP_ARM64_BSL, -1, arg0_type, fsig, args); #else return NULL; #endif } case SN_ConvertToDouble: { #ifdef TARGET_ARM64 if ((arg0_type != MONO_TYPE_I8) && (arg0_type != MONO_TYPE_U8)) return NULL; MonoClass *arg_class = mono_class_from_mono_type_internal (fsig->params [0]); int size = mono_class_value_size (arg_class, NULL); int op = -1; if (size == 8) op = arg0_type == MONO_TYPE_I8 ? OP_ARM64_SCVTF_SCALAR : OP_ARM64_UCVTF_SCALAR; else op = arg0_type == MONO_TYPE_I8 ? OP_ARM64_SCVTF : OP_ARM64_UCVTF; return emit_simd_ins_for_sig (cfg, klass, op, -1, arg0_type, fsig, args); #else return NULL; #endif } case SN_ConvertToInt32: case SN_ConvertToUInt32: { #ifdef TARGET_ARM64 if (arg0_type != MONO_TYPE_R4) return NULL; int op = id == SN_ConvertToInt32 ? 
OP_ARM64_FCVTZS : OP_ARM64_FCVTZU; return emit_simd_ins_for_sig (cfg, klass, op, -1, arg0_type, fsig, args); #else return NULL; #endif } case SN_Create: { MonoType *etype = get_vector_t_elem_type (fsig->ret); if (fsig->param_count == 1 && mono_metadata_type_equal (fsig->params [0], etype)) return emit_simd_ins (cfg, klass, type_to_expand_op (etype), args [0]->dreg, -1); else if (is_create_from_half_vectors_overload (fsig)) return emit_simd_ins (cfg, klass, OP_XCONCAT, args [0]->dreg, args [1]->dreg); else if (is_elementwise_create_overload (fsig, etype)) return emit_vector_create_elementwise (cfg, fsig, fsig->ret, etype, args); break; } case SN_CreateScalar: return emit_simd_ins_for_sig (cfg, klass, OP_CREATE_SCALAR, -1, arg0_type, fsig, args); case SN_CreateScalarUnsafe: return emit_simd_ins_for_sig (cfg, klass, OP_CREATE_SCALAR_UNSAFE, -1, arg0_type, fsig, args); case SN_Equals: case SN_EqualsAll: case SN_EqualsAny: { if (!is_element_type_primitive (fsig->params [0])) return NULL; switch (id) { case SN_Equals: return emit_xcompare (cfg, klass, arg0_type, args [0], args [1]); case SN_EqualsAll: return emit_xequal (cfg, klass, args [0], args [1]); case SN_EqualsAny: { MonoClass *arg_class = mono_class_from_mono_type_internal (fsig->params [0]); MonoInst *cmp_eq = emit_xcompare (cfg, arg_class, arg0_type, args [0], args [1]); MonoInst *zero = emit_xzero (cfg, arg_class); return emit_not_xequal (cfg, arg_class, cmp_eq, zero); } default: g_assert_not_reached (); } } case SN_GetElement: { if (!is_element_type_primitive (fsig->params [0])) return NULL; MonoClass *arg_class = mono_class_from_mono_type_internal (fsig->params [0]); MonoType *etype = mono_class_get_context (arg_class)->class_inst->type_argv [0]; int size = mono_class_value_size (arg_class, NULL); int esize = mono_class_value_size (mono_class_from_mono_type_internal (etype), NULL); int elems = size / esize; MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, args [1]->dreg, elems); MONO_EMIT_NEW_COND_EXC (cfg, GE_UN, "ArgumentOutOfRangeException"); int extract_op = type_to_xextract_op (arg0_type); return emit_simd_ins_for_sig (cfg, klass, extract_op, -1, arg0_type, fsig, args); } case SN_GetLower: case SN_GetUpper: { if (!is_element_type_primitive (fsig->params [0])) return NULL; int op = id == SN_GetLower ? OP_XLOWER : OP_XUPPER; return emit_simd_ins_for_sig (cfg, klass, op, 0, arg0_type, fsig, args); } case SN_GreaterThan: case SN_GreaterThanOrEqual: case SN_LessThan: case SN_LessThanOrEqual: { if (!is_element_type_primitive (fsig->params [0])) return NULL; gboolean is_unsigned = type_is_unsigned (fsig->params [0]); MonoInst *ins = emit_xcompare (cfg, klass, arg0_type, args [0], args [1]); switch (id) { case SN_GreaterThan: ins->inst_c0 = is_unsigned ? CMP_GT_UN : CMP_GT; break; case SN_GreaterThanOrEqual: ins->inst_c0 = is_unsigned ? CMP_GE_UN : CMP_GE; break; case SN_LessThan: ins->inst_c0 = is_unsigned ? CMP_LT_UN : CMP_LT; break; case SN_LessThanOrEqual: ins->inst_c0 = is_unsigned ? CMP_LE_UN : CMP_LE; break; default: g_assert_not_reached (); } return ins; } case SN_Negate: case SN_OnesComplement: { if (!is_element_type_primitive (fsig->params [0])) return NULL; #ifdef TARGET_ARM64 int op = id == SN_Negate ? 
	case SN_Negate:
	case SN_OnesComplement: {
		if (!is_element_type_primitive (fsig->params [0]))
			return NULL;
#ifdef TARGET_ARM64
		int op = id == SN_Negate ? OP_ARM64_XNEG : OP_ARM64_MVN;
		return emit_simd_ins_for_sig (cfg, klass, op, -1, arg0_type, fsig, args);
#else
		return NULL;
#endif
	}
	case SN_Sqrt: {
		if (!is_element_type_primitive (fsig->params [0]))
			return NULL;
#ifdef TARGET_ARM64
		if ((arg0_type != MONO_TYPE_R4) && (arg0_type != MONO_TYPE_R8))
			return NULL;
		return emit_simd_ins_for_sig (cfg, klass, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FSQRT, arg0_type, fsig, args);
#else
		return NULL;
#endif
	}
	case SN_ToScalar: {
		if (!is_element_type_primitive (fsig->params [0]))
			return NULL;
		int extract_op = type_to_extract_op (arg0_type);
		return emit_simd_ins_for_sig (cfg, klass, extract_op, 0, arg0_type, fsig, args);
	}
	case SN_ToVector128:
	case SN_ToVector128Unsafe: {
		if (!is_element_type_primitive (fsig->params [0]))
			return NULL;
		int op = id == SN_ToVector128 ? OP_XWIDEN : OP_XWIDEN_UNSAFE;
		return emit_simd_ins_for_sig (cfg, klass, op, 0, arg0_type, fsig, args);
	}
	case SN_WithElement: {
		if (!is_element_type_primitive (fsig->params [0]))
			return NULL;
		MonoClass *arg_class = mono_class_from_mono_type_internal (fsig->params [0]);
		MonoType *etype = mono_class_get_context (arg_class)->class_inst->type_argv [0];
		int size = mono_class_value_size (arg_class, NULL);
		int esize = mono_class_value_size (mono_class_from_mono_type_internal (etype), NULL);
		int elems = size / esize;
		MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, args [1]->dreg, elems);
		MONO_EMIT_NEW_COND_EXC (cfg, GE_UN, "ArgumentOutOfRangeException");
		int insert_op = type_to_xinsert_op (arg0_type);
		MonoInst *ins = emit_simd_ins (cfg, klass, insert_op, args [0]->dreg, args [2]->dreg);
		ins->sreg3 = args [1]->dreg;
		ins->inst_c1 = arg0_type;
		return ins;
	}
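	/*
	 * WithElement above produces a three-operand insert node: sreg1 is the
	 * source vector, sreg2 the replacement value, sreg3 the element index, and
	 * inst_c1 records the element type. Illustrative managed input (a sketch):
	 *
	 *   Vector128<int> v2 = v1.WithElement (2, 42); // bounds check, then XINSERT_I4
	 */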
	case SN_WithLower:
	case SN_WithUpper: {
		if (!is_element_type_primitive (fsig->params [0]))
			return NULL;
		int op = id == SN_WithLower ? OP_XINSERT_LOWER : OP_XINSERT_UPPER;
		return emit_simd_ins_for_sig (cfg, klass, op, 0, arg0_type, fsig, args);
	}
	default:
		break;
	}

	return NULL;
}

static guint16 vector64_vector128_t_methods [] = {
	SN_Equals,
	SN_get_AllBitsSet,
	SN_get_Count,
	SN_get_IsSupported,
	SN_get_Zero,
	SN_op_Addition,
	SN_op_Equality,
	SN_op_Inequality,
	SN_op_Subtraction,
};
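/*
 * Like the larger tables below, this list is consumed by lookup_intrins, so
 * its entries must stay sorted by method name (see the sort note above the
 * AdvSimd table). A hypothetical new entry, say SN_op_Multiply, would have to
 * be inserted between SN_op_Inequality and SN_op_Subtraction for the lookup
 * to keep working.
 */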
static MonoInst*
emit_vector64_vector128_t (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **args)
{
	int id = lookup_intrins (vector64_vector128_t_methods, sizeof (vector64_vector128_t_methods), cmethod);
	if (id == -1)
		return NULL;

	MonoClass *klass = cmethod->klass;
	MonoType *type = m_class_get_byval_arg (klass);
	MonoType *etype = mono_class_get_context (klass)->class_inst->type_argv [0];
	int size = mono_class_value_size (klass, NULL);
	int esize = mono_class_value_size (mono_class_from_mono_type_internal (etype), NULL);
	g_assert (size > 0);
	g_assert (esize > 0);
	int len = size / esize;

	if (!MONO_TYPE_IS_INTRINSICS_VECTOR_PRIMITIVE (etype))
		return NULL;

	if (cfg->verbose_level > 1) {
		char *name = mono_method_full_name (cmethod, TRUE);
		printf (" SIMD intrinsic %s\n", name);
		g_free (name);
	}

	switch (id) {
	case SN_get_IsSupported: {
		MonoInst *ins = NULL;
		EMIT_NEW_ICONST (cfg, ins, 1);
		return ins;
	}
	default:
		break;
	}

	if (!COMPILE_LLVM (cfg))
		return NULL;

	switch (id) {
	case SN_get_Count: {
		MonoInst *ins = NULL;
		if (!(fsig->param_count == 0 && fsig->ret->type == MONO_TYPE_I4))
			break;
		EMIT_NEW_ICONST (cfg, ins, len);
		return ins;
	}
	case SN_get_Zero: {
		return emit_xzero (cfg, klass);
	}
	case SN_get_AllBitsSet: {
		MonoInst *ins = emit_xzero (cfg, klass);
		return emit_xcompare (cfg, klass, etype->type, ins, ins);
	}
	case SN_Equals: {
		if (fsig->param_count == 1 && fsig->ret->type == MONO_TYPE_BOOLEAN && mono_metadata_type_equal (fsig->params [0], type)) {
			int sreg1 = load_simd_vreg (cfg, cmethod, args [0], NULL);
			return emit_simd_ins (cfg, klass, OP_XEQUAL, sreg1, args [1]->dreg);
		}
		break;
	}
	case SN_op_Addition:
	case SN_op_Subtraction: {
		if (!(fsig->param_count == 2 && mono_metadata_type_equal (fsig->ret, type) && mono_metadata_type_equal (fsig->params [0], type) && mono_metadata_type_equal (fsig->params [1], type)))
			return NULL;
		MonoInst *ins = emit_simd_ins (cfg, klass, OP_XBINOP, args [0]->dreg, args [1]->dreg);
		ins->inst_c1 = etype->type;
		if (etype->type == MONO_TYPE_R4 || etype->type == MONO_TYPE_R8)
			ins->inst_c0 = id == SN_op_Addition ? OP_FADD : OP_FSUB;
		else
			ins->inst_c0 = id == SN_op_Addition ? OP_IADD : OP_ISUB;
		return ins;
	}
	case SN_op_Equality:
	case SN_op_Inequality:
		g_assert (fsig->param_count == 2 && fsig->ret->type == MONO_TYPE_BOOLEAN && mono_metadata_type_equal (fsig->params [0], type) && mono_metadata_type_equal (fsig->params [1], type));
		switch (id) {
		case SN_op_Equality:
			return emit_xequal (cfg, klass, args [0], args [1]);
		case SN_op_Inequality:
			return emit_not_xequal (cfg, klass, args [0], args [1]);
		default:
			g_assert_not_reached ();
		}
	default:
		break;
	}

	return NULL;
}

#endif // defined(TARGET_AMD64) || defined(TARGET_ARM64)

#ifdef TARGET_AMD64

static guint16 vector_methods [] = {
	SN_ConvertToDouble,
	SN_ConvertToInt32,
	SN_ConvertToInt64,
	SN_ConvertToSingle,
	SN_ConvertToUInt32,
	SN_ConvertToUInt64,
	SN_Narrow,
	SN_Widen,
	SN_get_IsHardwareAccelerated,
};

static MonoInst*
emit_sys_numerics_vector (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **args)
{
	MonoInst *ins;
	gboolean supported = FALSE;
	int id;
	MonoType *etype;

	id = lookup_intrins (vector_methods, sizeof (vector_methods), cmethod);
	if (id == -1)
		return NULL;

	//printf ("%s\n", mono_method_full_name (cmethod, 1));

#ifdef MONO_ARCH_SIMD_INTRINSICS
	supported = TRUE;
#endif

	if (cfg->verbose_level > 1) {
		char *name = mono_method_full_name (cmethod, TRUE);
		printf (" SIMD intrinsic %s\n", name);
		g_free (name);
	}

	switch (id) {
	case SN_get_IsHardwareAccelerated:
		EMIT_NEW_ICONST (cfg, ins, supported ? 1 : 0);
		ins->type = STACK_I4;
		return ins;
	case SN_ConvertToInt32:
		etype = get_vector_t_elem_type (fsig->params [0]);
		g_assert (etype->type == MONO_TYPE_R4);
		return emit_simd_ins (cfg, mono_class_from_mono_type_internal (fsig->ret), OP_CVTPS2DQ, args [0]->dreg, -1);
	case SN_ConvertToSingle:
		etype = get_vector_t_elem_type (fsig->params [0]);
		g_assert (etype->type == MONO_TYPE_I4 || etype->type == MONO_TYPE_U4);
		// FIXME:
		if (etype->type == MONO_TYPE_U4)
			return NULL;
		return emit_simd_ins (cfg, mono_class_from_mono_type_internal (fsig->ret), OP_CVTDQ2PS, args [0]->dreg, -1);
	case SN_ConvertToDouble:
	case SN_ConvertToInt64:
	case SN_ConvertToUInt32:
	case SN_ConvertToUInt64:
	case SN_Narrow:
	case SN_Widen:
		// FIXME:
		break;
	default:
		break;
	}

	return NULL;
}

static guint16 vector_t_methods [] = {
	SN_ctor,
	SN_CopyTo,
	SN_Equals,
	SN_GreaterThan,
	SN_GreaterThanOrEqual,
	SN_LessThan,
	SN_LessThanOrEqual,
	SN_Max,
	SN_Min,
	SN_get_AllBitsSet,
	SN_get_Count,
	SN_get_Item,
	SN_get_One,
	SN_get_Zero,
	SN_op_Addition,
	SN_op_BitwiseAnd,
	SN_op_BitwiseOr,
	SN_op_Division,
	SN_op_Equality,
	SN_op_ExclusiveOr,
	SN_op_Explicit,
	SN_op_Inequality,
	SN_op_Multiply,
	SN_op_Subtraction
};

static MonoInst*
emit_sys_numerics_vector_t (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **args)
{
	MonoInst *ins;
	MonoType *type, *etype;
	MonoClass *klass;
	int size, len, id;
	gboolean is_unsigned;

	static const float r4_one = 1.0f;
	static const double r8_one = 1.0;

	id = lookup_intrins (vector_t_methods, sizeof (vector_t_methods), cmethod);
	if (id == -1)
		return NULL;

	klass = cmethod->klass;
	type = m_class_get_byval_arg (klass);
	etype = mono_class_get_context (klass)->class_inst->type_argv [0];
	size = mono_class_value_size (mono_class_from_mono_type_internal (etype), NULL);
	g_assert (size);
	len = register_size / size;

	if (!MONO_TYPE_IS_PRIMITIVE (etype) || etype->type == MONO_TYPE_CHAR || etype->type == MONO_TYPE_BOOLEAN)
		return NULL;

	if (cfg->verbose_level > 1) {
		char *name = mono_method_full_name (cmethod, TRUE);
		printf (" SIMD intrinsic %s\n", name);
		g_free (name);
	}

	switch (id) {
	case SN_get_Count:
		if (!(fsig->param_count == 0 &&
fsig->ret->type == MONO_TYPE_I4)) break; EMIT_NEW_ICONST (cfg, ins, len); return ins; case SN_get_Zero: g_assert (fsig->param_count == 0 && mono_metadata_type_equal (fsig->ret, type)); return emit_xzero (cfg, klass); case SN_get_One: { g_assert (fsig->param_count == 0 && mono_metadata_type_equal (fsig->ret, type)); MonoInst *one = NULL; int expand_opcode = type_to_expand_op (etype); MONO_INST_NEW (cfg, one, -1); switch (expand_opcode) { case OP_EXPAND_R4: one->opcode = OP_R4CONST; one->type = STACK_R4; one->inst_p0 = (void *) &r4_one; break; case OP_EXPAND_R8: one->opcode = OP_R8CONST; one->type = STACK_R8; one->inst_p0 = (void *) &r8_one; break; default: one->opcode = OP_ICONST; one->type = STACK_I4; one->inst_c0 = 1; break; } one->dreg = alloc_dreg (cfg, (MonoStackType)one->type); MONO_ADD_INS (cfg->cbb, one); return emit_simd_ins (cfg, klass, expand_opcode, one->dreg, -1); } case SN_get_AllBitsSet: { /* Compare a zero vector with itself */ ins = emit_xzero (cfg, klass); return emit_xcompare (cfg, klass, etype->type, ins, ins); } case SN_get_Item: { if (!COMPILE_LLVM (cfg)) return NULL; MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, args [1]->dreg, len); MONO_EMIT_NEW_COND_EXC (cfg, GE_UN, "ArgumentOutOfRangeException"); MonoTypeEnum ty = etype->type; int opcode = type_to_xextract_op (ty); int src1 = load_simd_vreg (cfg, cmethod, args [0], NULL); MonoInst *ins = emit_simd_ins (cfg, klass, opcode, src1, args [1]->dreg); ins->inst_c1 = ty; return ins; } case SN_ctor: if (fsig->param_count == 1 && mono_metadata_type_equal (fsig->params [0], etype)) { int dreg = load_simd_vreg (cfg, cmethod, args [0], NULL); int opcode = type_to_expand_op (etype); ins = emit_simd_ins (cfg, klass, opcode, args [1]->dreg, -1); ins->dreg = dreg; return ins; } if ((fsig->param_count == 1 || fsig->param_count == 2) && (fsig->params [0]->type == MONO_TYPE_SZARRAY)) { MonoInst *array_ins = args [1]; MonoInst *index_ins; MonoInst *ldelema_ins; MonoInst *var; int end_index_reg; if (args [0]->opcode != OP_LDADDR) return NULL; /* .ctor (T[]) or .ctor (T[], index) */ if (fsig->param_count == 2) { index_ins = args [2]; } else { EMIT_NEW_ICONST (cfg, index_ins, 0); } /* Emit bounds check for the index (index >= 0) */ mini_emit_bounds_check_offset (cfg, array_ins->dreg, MONO_STRUCT_OFFSET (MonoArray, max_length), index_ins->dreg, "ArgumentOutOfRangeException"); /* Emit bounds check for the end (index + len - 1 < array length) */ end_index_reg = alloc_ireg (cfg); EMIT_NEW_BIALU_IMM (cfg, ins, OP_IADD_IMM, end_index_reg, index_ins->dreg, len - 1); mini_emit_bounds_check_offset (cfg, array_ins->dreg, MONO_STRUCT_OFFSET (MonoArray, max_length), end_index_reg, "ArgumentOutOfRangeException"); /* Load the array slice into the simd reg */ ldelema_ins = mini_emit_ldelema_1_ins (cfg, mono_class_from_mono_type_internal (etype), array_ins, index_ins, FALSE, FALSE); g_assert (args [0]->opcode == OP_LDADDR); var = (MonoInst*)args [0]->inst_p0; EMIT_NEW_LOAD_MEMBASE (cfg, ins, OP_LOADX_MEMBASE, var->dreg, ldelema_ins->dreg, 0); ins->klass = cmethod->klass; return args [0]; } break; case SN_CopyTo: if ((fsig->param_count == 1 || fsig->param_count == 2) && (fsig->params [0]->type == MONO_TYPE_SZARRAY)) { MonoInst *array_ins = args [1]; MonoInst *index_ins; MonoInst *ldelema_ins; int val_vreg, end_index_reg; val_vreg = load_simd_vreg (cfg, cmethod, args [0], NULL); /* CopyTo (T[]) or CopyTo (T[], index) */ if (fsig->param_count == 2) { index_ins = args [2]; } else { EMIT_NEW_ICONST (cfg, index_ins, 0); } /* CopyTo () does complicated 
argument checks */ mini_emit_bounds_check_offset (cfg, array_ins->dreg, MONO_STRUCT_OFFSET (MonoArray, max_length), index_ins->dreg, "ArgumentOutOfRangeException"); end_index_reg = alloc_ireg (cfg); int len_reg = alloc_ireg (cfg); MONO_EMIT_NEW_LOAD_MEMBASE_OP_FLAGS (cfg, OP_LOADI4_MEMBASE, len_reg, array_ins->dreg, MONO_STRUCT_OFFSET (MonoArray, max_length), MONO_INST_INVARIANT_LOAD); EMIT_NEW_BIALU (cfg, ins, OP_ISUB, end_index_reg, len_reg, index_ins->dreg); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, end_index_reg, len); MONO_EMIT_NEW_COND_EXC (cfg, LT, "ArgumentException"); /* Load the array slice into the simd reg */ ldelema_ins = mini_emit_ldelema_1_ins (cfg, mono_class_from_mono_type_internal (etype), array_ins, index_ins, FALSE, FALSE); EMIT_NEW_STORE_MEMBASE (cfg, ins, OP_STOREX_MEMBASE, ldelema_ins->dreg, 0, val_vreg); ins->klass = cmethod->klass; return ins; } break; case SN_Equals: if (fsig->param_count == 1 && fsig->ret->type == MONO_TYPE_BOOLEAN && mono_metadata_type_equal (fsig->params [0], type)) { int sreg1 = load_simd_vreg (cfg, cmethod, args [0], NULL); return emit_simd_ins (cfg, klass, OP_XEQUAL, sreg1, args [1]->dreg); } else if (fsig->param_count == 2 && mono_metadata_type_equal (fsig->ret, type) && mono_metadata_type_equal (fsig->params [0], type) && mono_metadata_type_equal (fsig->params [1], type)) { /* Per element equality */ return emit_xcompare (cfg, klass, etype->type, args [0], args [1]); } break; case SN_op_Equality: case SN_op_Inequality: g_assert (fsig->param_count == 2 && fsig->ret->type == MONO_TYPE_BOOLEAN && mono_metadata_type_equal (fsig->params [0], type) && mono_metadata_type_equal (fsig->params [1], type)); switch (id) { case SN_op_Equality: return emit_xequal (cfg, klass, args [0], args [1]); case SN_op_Inequality: return emit_not_xequal (cfg, klass, args [0], args [1]); default: g_assert_not_reached (); } case SN_GreaterThan: case SN_GreaterThanOrEqual: case SN_LessThan: case SN_LessThanOrEqual: g_assert (fsig->param_count == 2 && mono_metadata_type_equal (fsig->ret, type) && mono_metadata_type_equal (fsig->params [0], type) && mono_metadata_type_equal (fsig->params [1], type)); is_unsigned = etype->type == MONO_TYPE_U1 || etype->type == MONO_TYPE_U2 || etype->type == MONO_TYPE_U4 || etype->type == MONO_TYPE_U8 || etype->type == MONO_TYPE_U; ins = emit_xcompare (cfg, klass, etype->type, args [0], args [1]); switch (id) { case SN_GreaterThan: ins->inst_c0 = is_unsigned ? CMP_GT_UN : CMP_GT; break; case SN_GreaterThanOrEqual: ins->inst_c0 = is_unsigned ? CMP_GE_UN : CMP_GE; break; case SN_LessThan: ins->inst_c0 = is_unsigned ? CMP_LT_UN : CMP_LT; break; case SN_LessThanOrEqual: ins->inst_c0 = is_unsigned ? 
			CMP_LE_UN : CMP_LE;
			break;
		default:
			g_assert_not_reached ();
		}
		return ins;
	case SN_op_Explicit:
		return emit_simd_ins (cfg, klass, OP_XCAST, args [0]->dreg, -1);
	case SN_op_Addition:
	case SN_op_Subtraction:
	case SN_op_Division:
	case SN_op_Multiply:
	case SN_op_BitwiseAnd:
	case SN_op_BitwiseOr:
	case SN_op_ExclusiveOr:
	case SN_Max:
	case SN_Min:
		if (!(fsig->param_count == 2 && mono_metadata_type_equal (fsig->ret, type) && mono_metadata_type_equal (fsig->params [0], type) && mono_metadata_type_equal (fsig->params [1], type)))
			return NULL;
		ins = emit_simd_ins (cfg, klass, OP_XBINOP, args [0]->dreg, args [1]->dreg);
		ins->inst_c1 = etype->type;
		if (etype->type == MONO_TYPE_R4 || etype->type == MONO_TYPE_R8) {
			switch (id) {
			case SN_op_Addition:
				ins->inst_c0 = OP_FADD;
				break;
			case SN_op_Subtraction:
				ins->inst_c0 = OP_FSUB;
				break;
			case SN_op_Multiply:
				ins->inst_c0 = OP_FMUL;
				break;
			case SN_op_Division:
				ins->inst_c0 = OP_FDIV;
				break;
			case SN_Max:
				ins->inst_c0 = OP_FMAX;
				break;
			case SN_Min:
				ins->inst_c0 = OP_FMIN;
				break;
			default:
				NULLIFY_INS (ins);
				return NULL;
			}
		} else {
			switch (id) {
			case SN_op_Addition:
				ins->inst_c0 = OP_IADD;
				break;
			case SN_op_Subtraction:
				ins->inst_c0 = OP_ISUB;
				break;
			/*
			case SN_op_Division:
				ins->inst_c0 = OP_IDIV;
				break;
			case SN_op_Multiply:
				ins->inst_c0 = OP_IMUL;
				break;
			*/
			case SN_op_BitwiseAnd:
				ins->inst_c0 = OP_IAND;
				break;
			case SN_op_BitwiseOr:
				ins->inst_c0 = OP_IOR;
				break;
			case SN_op_ExclusiveOr:
				ins->inst_c0 = OP_IXOR;
				break;
			case SN_Max:
				ins->inst_c0 = OP_IMAX;
				break;
			case SN_Min:
				ins->inst_c0 = OP_IMIN;
				break;
			default:
				NULLIFY_INS (ins);
				return NULL;
			}
		}
		return ins;
	default:
		break;
	}

	return NULL;
}

#endif // TARGET_AMD64

#ifdef TARGET_ARM64

static SimdIntrinsic armbase_methods [] = {
	{SN_LeadingSignCount},
	{SN_LeadingZeroCount},
	{SN_MultiplyHigh},
	{SN_ReverseElementBits},
	{SN_get_IsSupported},
};

static SimdIntrinsic crc32_methods [] = {
	{SN_ComputeCrc32},
	{SN_ComputeCrc32C},
	{SN_get_IsSupported}
};

static SimdIntrinsic crypto_aes_methods [] = {
	{SN_Decrypt, OP_XOP_X_X_X, INTRINS_AARCH64_AESD},
	{SN_Encrypt, OP_XOP_X_X_X, INTRINS_AARCH64_AESE},
	{SN_InverseMixColumns, OP_XOP_X_X, INTRINS_AARCH64_AESIMC},
	{SN_MixColumns, OP_XOP_X_X, INTRINS_AARCH64_AESMC},
	{SN_PolynomialMultiplyWideningLower},
	{SN_PolynomialMultiplyWideningUpper},
	{SN_get_IsSupported},
};

static SimdIntrinsic sha1_methods [] = {
	{SN_FixedRotate, OP_XOP_X_X, INTRINS_AARCH64_SHA1H},
	{SN_HashUpdateChoose, OP_XOP_X_X_X_X, INTRINS_AARCH64_SHA1C},
	{SN_HashUpdateMajority, OP_XOP_X_X_X_X, INTRINS_AARCH64_SHA1M},
	{SN_HashUpdateParity, OP_XOP_X_X_X_X, INTRINS_AARCH64_SHA1P},
	{SN_ScheduleUpdate0, OP_XOP_X_X_X_X, INTRINS_AARCH64_SHA1SU0},
	{SN_ScheduleUpdate1, OP_XOP_X_X_X, INTRINS_AARCH64_SHA1SU1},
	{SN_get_IsSupported}
};

static SimdIntrinsic sha256_methods [] = {
	{SN_HashUpdate1, OP_XOP_X_X_X_X, INTRINS_AARCH64_SHA256H},
	{SN_HashUpdate2, OP_XOP_X_X_X_X, INTRINS_AARCH64_SHA256H2},
	{SN_ScheduleUpdate0, OP_XOP_X_X_X, INTRINS_AARCH64_SHA256SU0},
	{SN_ScheduleUpdate1, OP_XOP_X_X_X_X, INTRINS_AARCH64_SHA256SU1},
	{SN_get_IsSupported}
};

// This table must be kept in sorted order. ASCII } is sorted after alphanumeric
// characters, so blind use of your editor's "sort lines" facility will
// mis-order the lines.
//
// In Vim you can use `sort /.*{[0-9A-z]*/ r` to sort this table.
static SimdIntrinsic advsimd_methods [] = { {SN_Abs, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_ABS, None, None, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FABS}, {SN_AbsSaturate, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_SQABS}, {SN_AbsSaturateScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_SQABS}, {SN_AbsScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_ABS, None, None, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FABS}, {SN_AbsoluteCompareGreaterThan}, {SN_AbsoluteCompareGreaterThanOrEqual}, {SN_AbsoluteCompareGreaterThanOrEqualScalar}, {SN_AbsoluteCompareGreaterThanScalar}, {SN_AbsoluteCompareLessThan}, {SN_AbsoluteCompareLessThanOrEqual}, {SN_AbsoluteCompareLessThanOrEqualScalar}, {SN_AbsoluteCompareLessThanScalar}, {SN_AbsoluteDifference, OP_ARM64_SABD, None, OP_ARM64_UABD, None, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_FABD}, {SN_AbsoluteDifferenceAdd, OP_ARM64_SABA, None, OP_ARM64_UABA}, {SN_AbsoluteDifferenceScalar, OP_XOP_OVR_SCALAR_X_X_X, INTRINS_AARCH64_ADV_SIMD_FABD_SCALAR}, {SN_AbsoluteDifferenceWideningLower, OP_ARM64_SABDL, None, OP_ARM64_UABDL}, {SN_AbsoluteDifferenceWideningLowerAndAdd, OP_ARM64_SABAL, None, OP_ARM64_UABAL}, {SN_AbsoluteDifferenceWideningUpper, OP_ARM64_SABDL2, None, OP_ARM64_UABDL2}, {SN_AbsoluteDifferenceWideningUpperAndAdd, OP_ARM64_SABAL2, None, OP_ARM64_UABAL2}, {SN_Add, OP_XBINOP, OP_IADD, None, None, OP_XBINOP, OP_FADD}, {SN_AddAcross, OP_ARM64_XHORIZ, INTRINS_AARCH64_ADV_SIMD_SADDV, OP_ARM64_XHORIZ, INTRINS_AARCH64_ADV_SIMD_UADDV}, {SN_AddAcrossWidening, OP_ARM64_SADDLV, None, OP_ARM64_UADDLV}, {SN_AddHighNarrowingLower, OP_ARM64_ADDHN}, {SN_AddHighNarrowingUpper, OP_ARM64_ADDHN2}, {SN_AddPairwise, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_ADDP, None, None, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_FADDP}, {SN_AddPairwiseScalar, OP_ARM64_ADDP_SCALAR, None, None, None, OP_ARM64_FADDP_SCALAR}, {SN_AddPairwiseWidening, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_SADDLP, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_UADDLP}, {SN_AddPairwiseWideningAndAdd, OP_ARM64_SADALP, None, OP_ARM64_UADALP}, {SN_AddPairwiseWideningAndAddScalar, OP_ARM64_SADALP, None, OP_ARM64_UADALP}, {SN_AddPairwiseWideningScalar, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_SADDLP, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_UADDLP}, {SN_AddRoundedHighNarrowingLower, OP_ARM64_RADDHN}, {SN_AddRoundedHighNarrowingUpper, OP_ARM64_RADDHN2}, {SN_AddSaturate}, {SN_AddSaturateScalar}, {SN_AddScalar, OP_XBINOP_SCALAR, OP_IADD, None, None, OP_XBINOP_SCALAR, OP_FADD}, {SN_AddWideningLower, OP_ARM64_SADD, None, OP_ARM64_UADD}, {SN_AddWideningUpper, OP_ARM64_SADD2, None, OP_ARM64_UADD2}, {SN_And, OP_XBINOP_FORCEINT, XBINOP_FORCEINT_AND}, {SN_BitwiseClear, OP_ARM64_BIC}, {SN_BitwiseSelect, OP_ARM64_BSL}, {SN_Ceiling, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FRINTP}, {SN_CeilingScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FRINTP}, {SN_CompareEqual, OP_XCOMPARE, CMP_EQ, OP_XCOMPARE, CMP_EQ, OP_XCOMPARE_FP, CMP_EQ}, {SN_CompareEqualScalar, OP_XCOMPARE_SCALAR, CMP_EQ, OP_XCOMPARE_SCALAR, CMP_EQ, OP_XCOMPARE_FP_SCALAR, CMP_EQ}, {SN_CompareGreaterThan, OP_XCOMPARE, CMP_GT, OP_XCOMPARE, CMP_GT_UN, OP_XCOMPARE_FP, CMP_GT}, {SN_CompareGreaterThanOrEqual, OP_XCOMPARE, CMP_GE, OP_XCOMPARE, CMP_GE_UN, OP_XCOMPARE_FP, CMP_GE}, {SN_CompareGreaterThanOrEqualScalar, OP_XCOMPARE_SCALAR, CMP_GE, OP_XCOMPARE_SCALAR, CMP_GE_UN, OP_XCOMPARE_FP_SCALAR, CMP_GE}, {SN_CompareGreaterThanScalar, OP_XCOMPARE_SCALAR, CMP_GT, OP_XCOMPARE_SCALAR, CMP_GT_UN, OP_XCOMPARE_FP_SCALAR, CMP_GT}, {SN_CompareLessThan, 
OP_XCOMPARE, CMP_LT, OP_XCOMPARE, CMP_LT_UN, OP_XCOMPARE_FP, CMP_LT}, {SN_CompareLessThanOrEqual, OP_XCOMPARE, CMP_LE, OP_XCOMPARE, CMP_LE_UN, OP_XCOMPARE_FP, CMP_LE}, {SN_CompareLessThanOrEqualScalar, OP_XCOMPARE_SCALAR, CMP_LE, OP_XCOMPARE_SCALAR, CMP_LE_UN, OP_XCOMPARE_FP_SCALAR, CMP_LE}, {SN_CompareLessThanScalar, OP_XCOMPARE_SCALAR, CMP_LT, OP_XCOMPARE_SCALAR, CMP_LT_UN, OP_XCOMPARE_FP_SCALAR, CMP_LT}, {SN_CompareTest, OP_ARM64_CMTST}, {SN_CompareTestScalar, OP_ARM64_CMTST}, {SN_ConvertToDouble, OP_ARM64_SCVTF, None, OP_ARM64_UCVTF, None, OP_ARM64_FCVTL}, {SN_ConvertToDoubleScalar, OP_ARM64_SCVTF_SCALAR, None, OP_ARM64_UCVTF_SCALAR}, {SN_ConvertToDoubleUpper, OP_ARM64_FCVTL2}, {SN_ConvertToInt32RoundAwayFromZero, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTAS}, {SN_ConvertToInt32RoundAwayFromZeroScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTAS}, {SN_ConvertToInt32RoundToEven, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTNS}, {SN_ConvertToInt32RoundToEvenScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTNS}, {SN_ConvertToInt32RoundToNegativeInfinity, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTMS}, {SN_ConvertToInt32RoundToNegativeInfinityScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTMS}, {SN_ConvertToInt32RoundToPositiveInfinity, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTPS}, {SN_ConvertToInt32RoundToPositiveInfinityScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTPS}, {SN_ConvertToInt32RoundToZero, OP_ARM64_FCVTZS}, {SN_ConvertToInt32RoundToZeroScalar, OP_ARM64_FCVTZS_SCALAR}, {SN_ConvertToInt64RoundAwayFromZero, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTAS}, {SN_ConvertToInt64RoundAwayFromZeroScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTAS}, {SN_ConvertToInt64RoundToEven, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTNS}, {SN_ConvertToInt64RoundToEvenScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTNS}, {SN_ConvertToInt64RoundToNegativeInfinity, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTMS}, {SN_ConvertToInt64RoundToNegativeInfinityScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTMS}, {SN_ConvertToInt64RoundToPositiveInfinity, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTPS}, {SN_ConvertToInt64RoundToPositiveInfinityScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTPS}, {SN_ConvertToInt64RoundToZero, OP_ARM64_FCVTZS}, {SN_ConvertToInt64RoundToZeroScalar, OP_ARM64_FCVTZS_SCALAR}, {SN_ConvertToSingle, OP_ARM64_SCVTF, None, OP_ARM64_UCVTF}, {SN_ConvertToSingleLower, OP_ARM64_FCVTN}, {SN_ConvertToSingleRoundToOddLower, OP_ARM64_FCVTXN}, {SN_ConvertToSingleRoundToOddUpper, OP_ARM64_FCVTXN2}, {SN_ConvertToSingleScalar, OP_ARM64_SCVTF_SCALAR, None, OP_ARM64_UCVTF_SCALAR}, {SN_ConvertToSingleUpper, OP_ARM64_FCVTN2}, {SN_ConvertToUInt32RoundAwayFromZero, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTAU}, {SN_ConvertToUInt32RoundAwayFromZeroScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTAU}, {SN_ConvertToUInt32RoundToEven, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTNU}, {SN_ConvertToUInt32RoundToEvenScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTNU}, {SN_ConvertToUInt32RoundToNegativeInfinity, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTMU}, {SN_ConvertToUInt32RoundToNegativeInfinityScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTMU}, {SN_ConvertToUInt32RoundToPositiveInfinity, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTPU}, {SN_ConvertToUInt32RoundToPositiveInfinityScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTPU}, 
{SN_ConvertToUInt32RoundToZero, OP_ARM64_FCVTZU}, {SN_ConvertToUInt32RoundToZeroScalar, OP_ARM64_FCVTZU_SCALAR}, {SN_ConvertToUInt64RoundAwayFromZero, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTAU}, {SN_ConvertToUInt64RoundAwayFromZeroScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTAU}, {SN_ConvertToUInt64RoundToEven, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTNU}, {SN_ConvertToUInt64RoundToEvenScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTNU}, {SN_ConvertToUInt64RoundToNegativeInfinity, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTMU}, {SN_ConvertToUInt64RoundToNegativeInfinityScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTMU}, {SN_ConvertToUInt64RoundToPositiveInfinity, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTPU}, {SN_ConvertToUInt64RoundToPositiveInfinityScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTPU}, {SN_ConvertToUInt64RoundToZero, OP_ARM64_FCVTZU}, {SN_ConvertToUInt64RoundToZeroScalar, OP_ARM64_FCVTZU_SCALAR}, {SN_Divide, OP_XBINOP, OP_FDIV}, {SN_DivideScalar, OP_XBINOP_SCALAR, OP_FDIV}, {SN_DuplicateSelectedScalarToVector128}, {SN_DuplicateSelectedScalarToVector64}, {SN_DuplicateToVector128}, {SN_DuplicateToVector64}, {SN_Extract}, {SN_ExtractNarrowingLower, OP_ARM64_XTN}, {SN_ExtractNarrowingSaturateLower, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_SQXTN, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_UQXTN}, {SN_ExtractNarrowingSaturateScalar, OP_ARM64_XNARROW_SCALAR, INTRINS_AARCH64_ADV_SIMD_SQXTN, OP_ARM64_XNARROW_SCALAR, INTRINS_AARCH64_ADV_SIMD_UQXTN}, {SN_ExtractNarrowingSaturateUnsignedLower, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_SQXTUN}, {SN_ExtractNarrowingSaturateUnsignedScalar, OP_ARM64_XNARROW_SCALAR, INTRINS_AARCH64_ADV_SIMD_SQXTUN}, {SN_ExtractNarrowingSaturateUnsignedUpper, OP_ARM64_SQXTUN2}, {SN_ExtractNarrowingSaturateUpper, OP_ARM64_SQXTN2, None, OP_ARM64_UQXTN2}, {SN_ExtractNarrowingUpper, OP_ARM64_XTN2}, {SN_ExtractVector128, OP_ARM64_EXT}, {SN_ExtractVector64, OP_ARM64_EXT}, {SN_Floor, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FRINTM}, {SN_FloorScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FRINTM}, {SN_FusedAddHalving, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_SHADD, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_UHADD}, {SN_FusedAddRoundedHalving, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_SRHADD, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_URHADD}, {SN_FusedMultiplyAdd, OP_ARM64_FMADD}, {SN_FusedMultiplyAddByScalar, OP_ARM64_FMADD_BYSCALAR}, {SN_FusedMultiplyAddBySelectedScalar}, {SN_FusedMultiplyAddNegatedScalar, OP_ARM64_FNMADD_SCALAR}, {SN_FusedMultiplyAddScalar, OP_ARM64_FMADD_SCALAR}, {SN_FusedMultiplyAddScalarBySelectedScalar}, {SN_FusedMultiplySubtract, OP_ARM64_FMSUB}, {SN_FusedMultiplySubtractByScalar, OP_ARM64_FMSUB_BYSCALAR}, {SN_FusedMultiplySubtractBySelectedScalar}, {SN_FusedMultiplySubtractNegatedScalar, OP_ARM64_FNMSUB_SCALAR}, {SN_FusedMultiplySubtractScalar, OP_ARM64_FMSUB_SCALAR}, {SN_FusedMultiplySubtractScalarBySelectedScalar}, {SN_FusedSubtractHalving, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_SHSUB, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_UHSUB}, {SN_Insert}, {SN_InsertScalar}, {SN_InsertSelectedScalar}, {SN_LeadingSignCount, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_CLS}, {SN_LeadingZeroCount, OP_ARM64_CLZ}, {SN_LoadAndInsertScalar, OP_ARM64_LD1_INSERT}, {SN_LoadAndReplicateToVector128, OP_ARM64_LD1R}, {SN_LoadAndReplicateToVector64, OP_ARM64_LD1R}, {SN_LoadPairScalarVector64, OP_ARM64_LDP_SCALAR}, {SN_LoadPairScalarVector64NonTemporal, OP_ARM64_LDNP_SCALAR}, 
{SN_LoadPairVector128, OP_ARM64_LDP}, {SN_LoadPairVector128NonTemporal, OP_ARM64_LDNP}, {SN_LoadPairVector64, OP_ARM64_LDP}, {SN_LoadPairVector64NonTemporal, OP_ARM64_LDNP}, {SN_LoadVector128, OP_ARM64_LD1}, {SN_LoadVector64, OP_ARM64_LD1}, {SN_Max, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_SMAX, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_UMAX, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_FMAX}, {SN_MaxAcross, OP_ARM64_XHORIZ, INTRINS_AARCH64_ADV_SIMD_SMAXV, OP_ARM64_XHORIZ, INTRINS_AARCH64_ADV_SIMD_UMAXV, OP_ARM64_XHORIZ, INTRINS_AARCH64_ADV_SIMD_FMAXV}, {SN_MaxNumber, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_FMAXNM}, {SN_MaxNumberAcross, OP_ARM64_XHORIZ, INTRINS_AARCH64_ADV_SIMD_FMAXNMV}, {SN_MaxNumberPairwise, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_FMAXNMP}, {SN_MaxNumberPairwiseScalar, OP_ARM64_XHORIZ, INTRINS_AARCH64_ADV_SIMD_FMAXNMV}, {SN_MaxNumberScalar, OP_XOP_OVR_SCALAR_X_X_X, INTRINS_AARCH64_ADV_SIMD_FMAXNM}, {SN_MaxPairwise, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_SMAXP, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_UMAXP, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_FMAXP}, {SN_MaxPairwiseScalar, OP_ARM64_XHORIZ, INTRINS_AARCH64_ADV_SIMD_FMAXV}, {SN_MaxScalar, OP_XOP_OVR_SCALAR_X_X_X, INTRINS_AARCH64_ADV_SIMD_FMAX}, {SN_Min, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_SMIN, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_UMIN, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_FMIN}, {SN_MinAcross, OP_ARM64_XHORIZ, INTRINS_AARCH64_ADV_SIMD_SMINV, OP_ARM64_XHORIZ, INTRINS_AARCH64_ADV_SIMD_UMINV, OP_ARM64_XHORIZ, INTRINS_AARCH64_ADV_SIMD_FMINV}, {SN_MinNumber, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_FMINNM}, {SN_MinNumberAcross, OP_ARM64_XHORIZ, INTRINS_AARCH64_ADV_SIMD_FMINNMV}, {SN_MinNumberPairwise, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_FMINNMP}, {SN_MinNumberPairwiseScalar, OP_ARM64_XHORIZ, INTRINS_AARCH64_ADV_SIMD_FMINNMV}, {SN_MinNumberScalar, OP_XOP_OVR_SCALAR_X_X_X, INTRINS_AARCH64_ADV_SIMD_FMINNM}, {SN_MinPairwise, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_SMINP, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_UMINP, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_FMINP}, {SN_MinPairwiseScalar, OP_ARM64_XHORIZ, INTRINS_AARCH64_ADV_SIMD_FMINV}, {SN_MinScalar, OP_XOP_OVR_SCALAR_X_X_X, INTRINS_AARCH64_ADV_SIMD_FMIN}, {SN_Multiply, OP_XBINOP, OP_IMUL, None, None, OP_XBINOP, OP_FMUL}, {SN_MultiplyAdd, OP_ARM64_MLA}, {SN_MultiplyAddByScalar, OP_ARM64_MLA_SCALAR}, {SN_MultiplyAddBySelectedScalar}, {SN_MultiplyByScalar, OP_XBINOP_BYSCALAR, OP_IMUL, None, None, OP_XBINOP_BYSCALAR, OP_FMUL}, {SN_MultiplyBySelectedScalar}, {SN_MultiplyBySelectedScalarWideningLower}, {SN_MultiplyBySelectedScalarWideningLowerAndAdd}, {SN_MultiplyBySelectedScalarWideningLowerAndSubtract}, {SN_MultiplyBySelectedScalarWideningUpper}, {SN_MultiplyBySelectedScalarWideningUpperAndAdd}, {SN_MultiplyBySelectedScalarWideningUpperAndSubtract}, {SN_MultiplyDoublingByScalarSaturateHigh, OP_XOP_OVR_BYSCALAR_X_X_X, INTRINS_AARCH64_ADV_SIMD_SQDMULH}, {SN_MultiplyDoublingBySelectedScalarSaturateHigh}, {SN_MultiplyDoublingSaturateHigh, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_SQDMULH}, {SN_MultiplyDoublingSaturateHighScalar, OP_XOP_OVR_SCALAR_X_X_X, INTRINS_AARCH64_ADV_SIMD_SQDMULH}, {SN_MultiplyDoublingScalarBySelectedScalarSaturateHigh}, {SN_MultiplyDoublingWideningAndAddSaturateScalar, OP_ARM64_SQDMLAL_SCALAR}, {SN_MultiplyDoublingWideningAndSubtractSaturateScalar, OP_ARM64_SQDMLSL_SCALAR}, {SN_MultiplyDoublingWideningLowerAndAddSaturate, OP_ARM64_SQDMLAL}, {SN_MultiplyDoublingWideningLowerAndSubtractSaturate, 
OP_ARM64_SQDMLSL}, {SN_MultiplyDoublingWideningLowerByScalarAndAddSaturate, OP_ARM64_SQDMLAL_BYSCALAR}, {SN_MultiplyDoublingWideningLowerByScalarAndSubtractSaturate, OP_ARM64_SQDMLSL_BYSCALAR}, {SN_MultiplyDoublingWideningLowerBySelectedScalarAndAddSaturate}, {SN_MultiplyDoublingWideningLowerBySelectedScalarAndSubtractSaturate}, {SN_MultiplyDoublingWideningSaturateLower, OP_ARM64_SQDMULL}, {SN_MultiplyDoublingWideningSaturateLowerByScalar, OP_ARM64_SQDMULL_BYSCALAR}, {SN_MultiplyDoublingWideningSaturateLowerBySelectedScalar}, {SN_MultiplyDoublingWideningSaturateScalar, OP_ARM64_SQDMULL_SCALAR}, {SN_MultiplyDoublingWideningSaturateScalarBySelectedScalar}, {SN_MultiplyDoublingWideningSaturateUpper, OP_ARM64_SQDMULL2}, {SN_MultiplyDoublingWideningSaturateUpperByScalar, OP_ARM64_SQDMULL2_BYSCALAR}, {SN_MultiplyDoublingWideningSaturateUpperBySelectedScalar}, {SN_MultiplyDoublingWideningScalarBySelectedScalarAndAddSaturate}, {SN_MultiplyDoublingWideningScalarBySelectedScalarAndSubtractSaturate}, {SN_MultiplyDoublingWideningUpperAndAddSaturate, OP_ARM64_SQDMLAL2}, {SN_MultiplyDoublingWideningUpperAndSubtractSaturate, OP_ARM64_SQDMLSL2}, {SN_MultiplyDoublingWideningUpperByScalarAndAddSaturate, OP_ARM64_SQDMLAL2_BYSCALAR}, {SN_MultiplyDoublingWideningUpperByScalarAndSubtractSaturate, OP_ARM64_SQDMLSL2_BYSCALAR}, {SN_MultiplyDoublingWideningUpperBySelectedScalarAndAddSaturate}, {SN_MultiplyDoublingWideningUpperBySelectedScalarAndSubtractSaturate}, {SN_MultiplyExtended, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_FMULX}, {SN_MultiplyExtendedByScalar, OP_XOP_OVR_BYSCALAR_X_X_X, INTRINS_AARCH64_ADV_SIMD_FMULX}, {SN_MultiplyExtendedBySelectedScalar}, {SN_MultiplyExtendedScalar, OP_XOP_OVR_SCALAR_X_X_X, INTRINS_AARCH64_ADV_SIMD_FMULX}, {SN_MultiplyExtendedScalarBySelectedScalar}, {SN_MultiplyRoundedDoublingByScalarSaturateHigh, OP_XOP_OVR_BYSCALAR_X_X_X, INTRINS_AARCH64_ADV_SIMD_SQRDMULH}, {SN_MultiplyRoundedDoublingBySelectedScalarSaturateHigh}, {SN_MultiplyRoundedDoublingSaturateHigh, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_SQRDMULH}, {SN_MultiplyRoundedDoublingSaturateHighScalar, OP_XOP_OVR_SCALAR_X_X_X, INTRINS_AARCH64_ADV_SIMD_SQRDMULH}, {SN_MultiplyRoundedDoublingScalarBySelectedScalarSaturateHigh}, {SN_MultiplyScalar, OP_XBINOP_SCALAR, OP_FMUL}, {SN_MultiplyScalarBySelectedScalar, OP_ARM64_FMUL_SEL}, {SN_MultiplySubtract, OP_ARM64_MLS}, {SN_MultiplySubtractByScalar, OP_ARM64_MLS_SCALAR}, {SN_MultiplySubtractBySelectedScalar}, {SN_MultiplyWideningLower, OP_ARM64_SMULL, None, OP_ARM64_UMULL}, {SN_MultiplyWideningLowerAndAdd, OP_ARM64_SMLAL, None, OP_ARM64_UMLAL}, {SN_MultiplyWideningLowerAndSubtract, OP_ARM64_SMLSL, None, OP_ARM64_UMLSL}, {SN_MultiplyWideningUpper, OP_ARM64_SMULL2, None, OP_ARM64_UMULL2}, {SN_MultiplyWideningUpperAndAdd, OP_ARM64_SMLAL2, None, OP_ARM64_UMLAL2}, {SN_MultiplyWideningUpperAndSubtract, OP_ARM64_SMLSL2, None, OP_ARM64_UMLSL2}, {SN_Negate, OP_ARM64_XNEG}, {SN_NegateSaturate, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_SQNEG}, {SN_NegateSaturateScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_SQNEG}, {SN_NegateScalar, OP_ARM64_XNEG_SCALAR}, {SN_Not, OP_ARM64_MVN}, {SN_Or, OP_XBINOP_FORCEINT, XBINOP_FORCEINT_OR}, {SN_OrNot, OP_XBINOP_FORCEINT, XBINOP_FORCEINT_ORNOT}, {SN_PolynomialMultiply, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_PMUL}, {SN_PolynomialMultiplyWideningLower, OP_ARM64_PMULL}, {SN_PolynomialMultiplyWideningUpper, OP_ARM64_PMULL2}, {SN_PopCount, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_CNT}, {SN_ReciprocalEstimate, None, None, OP_XOP_OVR_X_X, 
INTRINS_AARCH64_ADV_SIMD_URECPE, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FRECPE}, {SN_ReciprocalEstimateScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FRECPE}, {SN_ReciprocalExponentScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FRECPX}, {SN_ReciprocalSquareRootEstimate, None, None, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_URSQRTE, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FRSQRTE}, {SN_ReciprocalSquareRootEstimateScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FRSQRTE}, {SN_ReciprocalSquareRootStep, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_FRSQRTS}, {SN_ReciprocalSquareRootStepScalar, OP_XOP_OVR_SCALAR_X_X_X, INTRINS_AARCH64_ADV_SIMD_FRSQRTS}, {SN_ReciprocalStep, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_FRECPS}, {SN_ReciprocalStepScalar, OP_XOP_OVR_SCALAR_X_X_X, INTRINS_AARCH64_ADV_SIMD_FRECPS}, {SN_ReverseElement16, OP_ARM64_REVN, 16}, {SN_ReverseElement32, OP_ARM64_REVN, 32}, {SN_ReverseElement8, OP_ARM64_REVN, 8}, {SN_ReverseElementBits, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_RBIT}, {SN_RoundAwayFromZero, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FRINTA}, {SN_RoundAwayFromZeroScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FRINTA}, {SN_RoundToNearest, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FRINTN}, {SN_RoundToNearestScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FRINTN}, {SN_RoundToNegativeInfinity, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FRINTM}, {SN_RoundToNegativeInfinityScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FRINTM}, {SN_RoundToPositiveInfinity, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FRINTP}, {SN_RoundToPositiveInfinityScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FRINTP}, {SN_RoundToZero, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FRINTZ}, {SN_RoundToZeroScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FRINTZ}, {SN_ShiftArithmetic, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_SSHL}, {SN_ShiftArithmeticRounded, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_SRSHL}, {SN_ShiftArithmeticRoundedSaturate, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_SQRSHL}, {SN_ShiftArithmeticRoundedSaturateScalar, OP_XOP_OVR_SCALAR_X_X_X, INTRINS_AARCH64_ADV_SIMD_SQRSHL}, {SN_ShiftArithmeticRoundedScalar, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_SRSHL}, {SN_ShiftArithmeticSaturate, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_SQSHL}, {SN_ShiftArithmeticSaturateScalar, OP_XOP_OVR_SCALAR_X_X_X, INTRINS_AARCH64_ADV_SIMD_SQSHL}, {SN_ShiftArithmeticScalar, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_SSHL}, {SN_ShiftLeftAndInsert, OP_ARM64_SLI}, {SN_ShiftLeftAndInsertScalar, OP_ARM64_SLI}, {SN_ShiftLeftLogical, OP_ARM64_SHL}, {SN_ShiftLeftLogicalSaturate}, {SN_ShiftLeftLogicalSaturateScalar}, {SN_ShiftLeftLogicalSaturateUnsigned, OP_ARM64_SQSHLU}, {SN_ShiftLeftLogicalSaturateUnsignedScalar, OP_ARM64_SQSHLU_SCALAR}, {SN_ShiftLeftLogicalScalar, OP_ARM64_SHL}, {SN_ShiftLeftLogicalWideningLower, OP_ARM64_SSHLL, None, OP_ARM64_USHLL}, {SN_ShiftLeftLogicalWideningUpper, OP_ARM64_SSHLL2, None, OP_ARM64_USHLL2}, {SN_ShiftLogical, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_USHL}, {SN_ShiftLogicalRounded, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_URSHL}, {SN_ShiftLogicalRoundedSaturate, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_UQRSHL}, {SN_ShiftLogicalRoundedSaturateScalar, OP_XOP_OVR_SCALAR_X_X_X, INTRINS_AARCH64_ADV_SIMD_UQRSHL}, {SN_ShiftLogicalRoundedScalar, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_URSHL}, {SN_ShiftLogicalSaturate, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_UQSHL}, {SN_ShiftLogicalSaturateScalar, 
OP_XOP_OVR_SCALAR_X_X_X, INTRINS_AARCH64_ADV_SIMD_UQSHL}, {SN_ShiftLogicalScalar, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_USHL}, {SN_ShiftRightAndInsert, OP_ARM64_SRI}, {SN_ShiftRightAndInsertScalar, OP_ARM64_SRI}, {SN_ShiftRightArithmetic, OP_ARM64_SSHR}, {SN_ShiftRightArithmeticAdd, OP_ARM64_SSRA}, {SN_ShiftRightArithmeticAddScalar, OP_ARM64_SSRA}, {SN_ShiftRightArithmeticNarrowingSaturateLower, OP_ARM64_XNSHIFT, INTRINS_AARCH64_ADV_SIMD_SQSHRN}, {SN_ShiftRightArithmeticNarrowingSaturateScalar, OP_ARM64_XNSHIFT_SCALAR, INTRINS_AARCH64_ADV_SIMD_SQSHRN}, {SN_ShiftRightArithmeticNarrowingSaturateUnsignedLower, OP_ARM64_XNSHIFT, INTRINS_AARCH64_ADV_SIMD_SQSHRUN}, {SN_ShiftRightArithmeticNarrowingSaturateUnsignedScalar, OP_ARM64_XNSHIFT_SCALAR, INTRINS_AARCH64_ADV_SIMD_SQSHRUN}, {SN_ShiftRightArithmeticNarrowingSaturateUnsignedUpper, OP_ARM64_XNSHIFT2, INTRINS_AARCH64_ADV_SIMD_SQSHRUN}, {SN_ShiftRightArithmeticNarrowingSaturateUpper, OP_ARM64_XNSHIFT2, INTRINS_AARCH64_ADV_SIMD_SQSHRN}, {SN_ShiftRightArithmeticRounded, OP_ARM64_SRSHR}, {SN_ShiftRightArithmeticRoundedAdd, OP_ARM64_SRSRA}, {SN_ShiftRightArithmeticRoundedAddScalar, OP_ARM64_SRSRA}, {SN_ShiftRightArithmeticRoundedNarrowingSaturateLower, OP_ARM64_XNSHIFT, INTRINS_AARCH64_ADV_SIMD_SQRSHRN}, {SN_ShiftRightArithmeticRoundedNarrowingSaturateScalar, OP_ARM64_XNSHIFT_SCALAR, INTRINS_AARCH64_ADV_SIMD_SQRSHRN}, {SN_ShiftRightArithmeticRoundedNarrowingSaturateUnsignedLower, OP_ARM64_XNSHIFT, INTRINS_AARCH64_ADV_SIMD_SQRSHRUN}, {SN_ShiftRightArithmeticRoundedNarrowingSaturateUnsignedScalar, OP_ARM64_XNSHIFT_SCALAR, INTRINS_AARCH64_ADV_SIMD_SQRSHRUN}, {SN_ShiftRightArithmeticRoundedNarrowingSaturateUnsignedUpper, OP_ARM64_XNSHIFT2, INTRINS_AARCH64_ADV_SIMD_SQRSHRUN}, {SN_ShiftRightArithmeticRoundedNarrowingSaturateUpper, OP_ARM64_XNSHIFT2, INTRINS_AARCH64_ADV_SIMD_SQRSHRN}, {SN_ShiftRightArithmeticRoundedScalar, OP_ARM64_SRSHR}, {SN_ShiftRightArithmeticScalar, OP_ARM64_SSHR}, {SN_ShiftRightLogical, OP_ARM64_USHR}, {SN_ShiftRightLogicalAdd, OP_ARM64_USRA}, {SN_ShiftRightLogicalAddScalar, OP_ARM64_USRA}, {SN_ShiftRightLogicalNarrowingLower, OP_ARM64_SHRN}, {SN_ShiftRightLogicalNarrowingSaturateLower, OP_ARM64_XNSHIFT, INTRINS_AARCH64_ADV_SIMD_UQSHRN}, {SN_ShiftRightLogicalNarrowingSaturateScalar, OP_ARM64_XNSHIFT_SCALAR, INTRINS_AARCH64_ADV_SIMD_UQSHRN}, {SN_ShiftRightLogicalNarrowingSaturateUpper, OP_ARM64_XNSHIFT2, INTRINS_AARCH64_ADV_SIMD_UQSHRN}, {SN_ShiftRightLogicalNarrowingUpper, OP_ARM64_SHRN2}, {SN_ShiftRightLogicalRounded, OP_ARM64_URSHR}, {SN_ShiftRightLogicalRoundedAdd, OP_ARM64_URSRA}, {SN_ShiftRightLogicalRoundedAddScalar, OP_ARM64_URSRA}, {SN_ShiftRightLogicalRoundedNarrowingLower, OP_ARM64_XNSHIFT, INTRINS_AARCH64_ADV_SIMD_RSHRN}, {SN_ShiftRightLogicalRoundedNarrowingSaturateLower, OP_ARM64_XNSHIFT, INTRINS_AARCH64_ADV_SIMD_UQRSHRN}, {SN_ShiftRightLogicalRoundedNarrowingSaturateScalar, OP_ARM64_XNSHIFT_SCALAR, INTRINS_AARCH64_ADV_SIMD_UQRSHRN}, {SN_ShiftRightLogicalRoundedNarrowingSaturateUpper, OP_ARM64_XNSHIFT2, INTRINS_AARCH64_ADV_SIMD_UQRSHRN}, {SN_ShiftRightLogicalRoundedNarrowingUpper, OP_ARM64_XNSHIFT2, INTRINS_AARCH64_ADV_SIMD_RSHRN}, {SN_ShiftRightLogicalRoundedScalar, OP_ARM64_URSHR}, {SN_ShiftRightLogicalScalar, OP_ARM64_USHR}, {SN_SignExtendWideningLower, OP_ARM64_SXTL}, {SN_SignExtendWideningUpper, OP_ARM64_SXTL2}, {SN_Sqrt, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FSQRT}, {SN_SqrtScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FSQRT}, {SN_Store, OP_ARM64_ST1}, {SN_StorePair, OP_ARM64_STP}, 
	{SN_StorePairNonTemporal, OP_ARM64_STNP},
	{SN_StorePairScalar, OP_ARM64_STP_SCALAR},
	{SN_StorePairScalarNonTemporal, OP_ARM64_STNP_SCALAR},
	{SN_StoreSelectedScalar, OP_ARM64_ST1_SCALAR},
	{SN_Subtract, OP_XBINOP, OP_ISUB, None, None, OP_XBINOP, OP_FSUB},
	{SN_SubtractHighNarrowingLower, OP_ARM64_SUBHN},
	{SN_SubtractHighNarrowingUpper, OP_ARM64_SUBHN2},
	{SN_SubtractRoundedHighNarrowingLower, OP_ARM64_RSUBHN},
	{SN_SubtractRoundedHighNarrowingUpper, OP_ARM64_RSUBHN2},
	{SN_SubtractSaturate, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_SQSUB, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_UQSUB},
	{SN_SubtractSaturateScalar, OP_XOP_OVR_SCALAR_X_X_X, INTRINS_AARCH64_ADV_SIMD_SQSUB, OP_XOP_OVR_SCALAR_X_X_X, INTRINS_AARCH64_ADV_SIMD_UQSUB},
	{SN_SubtractScalar, OP_XBINOP_SCALAR, OP_ISUB, None, None, OP_XBINOP_SCALAR, OP_FSUB},
	{SN_SubtractWideningLower, OP_ARM64_SSUB, None, OP_ARM64_USUB},
	{SN_SubtractWideningUpper, OP_ARM64_SSUB2, None, OP_ARM64_USUB2},
	{SN_TransposeEven, OP_ARM64_TRN1},
	{SN_TransposeOdd, OP_ARM64_TRN2},
	{SN_UnzipEven, OP_ARM64_UZP1},
	{SN_UnzipOdd, OP_ARM64_UZP2},
	{SN_VectorTableLookup, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_TBL1},
	{SN_VectorTableLookupExtension, OP_XOP_OVR_X_X_X_X, INTRINS_AARCH64_ADV_SIMD_TBX1},
	{SN_Xor, OP_XBINOP_FORCEINT, XBINOP_FORCEINT_XOR},
	{SN_ZeroExtendWideningLower, OP_ARM64_UXTL},
	{SN_ZeroExtendWideningUpper, OP_ARM64_UXTL2},
	{SN_ZipHigh, OP_ARM64_ZIP2},
	{SN_ZipLow, OP_ARM64_ZIP1},
	{SN_get_IsSupported},
};

static const SimdIntrinsic rdm_methods [] = {
	{SN_MultiplyRoundedDoublingAndAddSaturateHigh, OP_ARM64_SQRDMLAH},
	{SN_MultiplyRoundedDoublingAndAddSaturateHighScalar, OP_ARM64_SQRDMLAH_SCALAR},
	{SN_MultiplyRoundedDoublingAndSubtractSaturateHigh, OP_ARM64_SQRDMLSH},
	{SN_MultiplyRoundedDoublingAndSubtractSaturateHighScalar, OP_ARM64_SQRDMLSH_SCALAR},
	{SN_MultiplyRoundedDoublingBySelectedScalarAndAddSaturateHigh},
	{SN_MultiplyRoundedDoublingBySelectedScalarAndSubtractSaturateHigh},
	{SN_MultiplyRoundedDoublingScalarBySelectedScalarAndAddSaturateHigh},
	{SN_MultiplyRoundedDoublingScalarBySelectedScalarAndSubtractSaturateHigh},
	{SN_get_IsSupported},
};

static const SimdIntrinsic dp_methods [] = {
	{SN_DotProduct, OP_XOP_OVR_X_X_X_X, INTRINS_AARCH64_ADV_SIMD_SDOT, OP_XOP_OVR_X_X_X_X, INTRINS_AARCH64_ADV_SIMD_UDOT},
	{SN_DotProductBySelectedQuadruplet},
	{SN_get_IsSupported},
};

static const IntrinGroup supported_arm_intrinsics [] = {
	{ "AdvSimd", MONO_CPU_ARM64_NEON, advsimd_methods, sizeof (advsimd_methods) },
	{ "Aes", MONO_CPU_ARM64_CRYPTO, crypto_aes_methods, sizeof (crypto_aes_methods) },
	{ "ArmBase", MONO_CPU_ARM64_BASE, armbase_methods, sizeof (armbase_methods) },
	{ "Crc32", MONO_CPU_ARM64_CRC, crc32_methods, sizeof (crc32_methods) },
	{ "Dp", MONO_CPU_ARM64_DP, dp_methods, sizeof (dp_methods) },
	{ "Rdm", MONO_CPU_ARM64_RDM, rdm_methods, sizeof (rdm_methods) },
	{ "Sha1", MONO_CPU_ARM64_CRYPTO, sha1_methods, sizeof (sha1_methods) },
	{ "Sha256", MONO_CPU_ARM64_CRYPTO, sha256_methods, sizeof (sha256_methods) },
};
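/*
 * Dispatch happens in two steps: the caller resolves the intrinsic class name
 * against supported_arm_intrinsics to pick an IntrinGroup (and its MONO_CPU_*
 * feature flag), then looks the method up in that group's SimdIntrinsic table
 * before landing in emit_arm64_intrinsics below. A typical managed entry
 * point, as an illustrative sketch:
 *
 *   if (ArmBase.IsSupported)
 *       lz = ArmBase.LeadingZeroCount (x); // lowered below to OP_LZCNT32/64
 */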
static MonoInst*
emit_arm64_intrinsics (
	MonoCompile *cfg, MonoMethodSignature *fsig, MonoInst **args,
	MonoClass *klass, const IntrinGroup *intrin_group,
	const SimdIntrinsic *info, int id, MonoTypeEnum arg0_type,
	gboolean is_64bit)
{
	MonoCPUFeatures feature = intrin_group->feature;

	gboolean arg0_i32 = (arg0_type == MONO_TYPE_I4) || (arg0_type == MONO_TYPE_U4);
#if TARGET_SIZEOF_VOID_P == 4
	arg0_i32 = arg0_i32 || (arg0_type == MONO_TYPE_I) || (arg0_type == MONO_TYPE_U);
#endif

	if (feature == MONO_CPU_ARM64_BASE) {
		switch (id) {
		case SN_LeadingZeroCount:
			return emit_simd_ins_for_sig (cfg, klass, arg0_i32 ? OP_LZCNT32 : OP_LZCNT64, 0, arg0_type, fsig, args);
		case SN_LeadingSignCount:
			return emit_simd_ins_for_sig (cfg, klass, arg0_i32 ? OP_LSCNT32 : OP_LSCNT64, 0, arg0_type, fsig, args);
		case SN_MultiplyHigh:
			return emit_simd_ins_for_sig (cfg, klass,
				(arg0_type == MONO_TYPE_I8 ? OP_ARM64_SMULH : OP_ARM64_UMULH), 0, arg0_type, fsig, args);
		case SN_ReverseElementBits:
			return emit_simd_ins_for_sig (cfg, klass,
				(is_64bit ? OP_XOP_I8_I8 : OP_XOP_I4_I4),
				(is_64bit ? INTRINS_BITREVERSE_I64 : INTRINS_BITREVERSE_I32),
				arg0_type, fsig, args);
		default:
			g_assert_not_reached (); // if a new API is added we need to either implement it or change IsSupported to false
		}
	}

	if (feature == MONO_CPU_ARM64_CRC) {
		switch (id) {
		case SN_ComputeCrc32:
		case SN_ComputeCrc32C: {
			IntrinsicId op = (IntrinsicId)0;
			gboolean is_c = info->id == SN_ComputeCrc32C;
			switch (get_underlying_type (fsig->params [1])) {
			case MONO_TYPE_U1:
				op = is_c ? INTRINS_AARCH64_CRC32CB : INTRINS_AARCH64_CRC32B;
				break;
			case MONO_TYPE_U2:
				op = is_c ? INTRINS_AARCH64_CRC32CH : INTRINS_AARCH64_CRC32H;
				break;
			case MONO_TYPE_U4:
				op = is_c ? INTRINS_AARCH64_CRC32CW : INTRINS_AARCH64_CRC32W;
				break;
			case MONO_TYPE_U8:
				op = is_c ? INTRINS_AARCH64_CRC32CX : INTRINS_AARCH64_CRC32X;
				break;
			default:
				g_assert_not_reached ();
				break;
			}
			return emit_simd_ins_for_sig (cfg, klass, is_64bit ? OP_XOP_I4_I4_I8 : OP_XOP_I4_I4_I4, op, arg0_type, fsig, args);
		}
		default:
			g_assert_not_reached (); // if a new API is added we need to either implement it or change IsSupported to false
		}
	}

	if (feature == MONO_CPU_ARM64_NEON) {
		switch (id) {
		case SN_AbsoluteCompareGreaterThan:
		case SN_AbsoluteCompareGreaterThanOrEqual:
		case SN_AbsoluteCompareLessThan:
		case SN_AbsoluteCompareLessThanOrEqual:
		case SN_AbsoluteCompareGreaterThanScalar:
		case SN_AbsoluteCompareGreaterThanOrEqualScalar:
		case SN_AbsoluteCompareLessThanScalar:
		case SN_AbsoluteCompareLessThanOrEqualScalar: {
			gboolean reverse_args = FALSE;
			gboolean use_geq = FALSE;
			gboolean scalar = FALSE;
			MonoInst *cmp_args [] = { args [0], args [1] };
			switch (id) {
			case SN_AbsoluteCompareGreaterThanScalar:
				scalar = TRUE;
				/* fall through */
			case SN_AbsoluteCompareGreaterThan:
				break;
			case SN_AbsoluteCompareGreaterThanOrEqualScalar:
				scalar = TRUE;
				/* fall through */
			case SN_AbsoluteCompareGreaterThanOrEqual:
				use_geq = TRUE;
				break;
			case SN_AbsoluteCompareLessThanScalar:
				scalar = TRUE;
				/* fall through */
			case SN_AbsoluteCompareLessThan:
				reverse_args = TRUE;
				break;
			case SN_AbsoluteCompareLessThanOrEqualScalar:
				scalar = TRUE;
				/* fall through */
			case SN_AbsoluteCompareLessThanOrEqual:
				reverse_args = TRUE;
				use_geq = TRUE;
				break;
			}
			if (reverse_args) {
				cmp_args [0] = args [1];
				cmp_args [1] = args [0];
			}
			int iid = use_geq ? INTRINS_AARCH64_ADV_SIMD_FACGE : INTRINS_AARCH64_ADV_SIMD_FACGT;
			return emit_simd_ins_for_sig (cfg, klass, OP_ARM64_ABSCOMPARE, iid, scalar, fsig, cmp_args);
		}
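		/*
		 * AddSaturate below has to pick between four AArch64 intrinsics because
		 * the two operands may differ in signedness: UQADD (both unsigned),
		 * SQADD (both signed), and the mixed-sign USQADD/SUQADD forms.
		 * Illustrative mapping, as a sketch:
		 *
		 *   AdvSimd.AddSaturate (Vector64<byte>,  Vector64<byte>)  -> uqadd
		 *   AdvSimd.AddSaturate (Vector64<sbyte>, Vector64<sbyte>) -> sqadd
		 */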
		case SN_AddSaturate:
		case SN_AddSaturateScalar: {
			gboolean arg0_unsigned = type_is_unsigned (fsig->params [0]);
			gboolean arg1_unsigned = type_is_unsigned (fsig->params [1]);
			int iid = 0;
			if (arg0_unsigned && arg1_unsigned)
				iid = INTRINS_AARCH64_ADV_SIMD_UQADD;
			else if (arg0_unsigned && !arg1_unsigned)
				iid = INTRINS_AARCH64_ADV_SIMD_USQADD;
			else if (!arg0_unsigned && arg1_unsigned)
				iid = INTRINS_AARCH64_ADV_SIMD_SUQADD;
			else
				iid = INTRINS_AARCH64_ADV_SIMD_SQADD;
			int op = id == SN_AddSaturateScalar ? OP_XOP_OVR_SCALAR_X_X_X : OP_XOP_OVR_X_X_X;
			return emit_simd_ins_for_sig (cfg, klass, op, iid, arg0_type, fsig, args);
		}
		case SN_DuplicateSelectedScalarToVector128:
		case SN_DuplicateSelectedScalarToVector64:
		case SN_DuplicateToVector64:
		case SN_DuplicateToVector128: {
			MonoClass *ret_klass = mono_class_from_mono_type_internal (fsig->ret);
			MonoType *rtype = get_vector_t_elem_type (fsig->ret);
			int scalar_src_reg = args [0]->dreg;
			switch (id) {
			case SN_DuplicateSelectedScalarToVector128:
			case SN_DuplicateSelectedScalarToVector64: {
				MonoInst *ins = emit_simd_ins (cfg, ret_klass, type_to_xextract_op (rtype->type), args [0]->dreg, args [1]->dreg);
				ins->inst_c1 = arg0_type;
				scalar_src_reg = ins->dreg;
				break;
			}
			}
			return emit_simd_ins (cfg, ret_klass, type_to_expand_op (rtype), scalar_src_reg, -1);
		}
		case SN_Extract: {
			int extract_op = type_to_xextract_op (arg0_type);
			MonoInst *ins = emit_simd_ins (cfg, klass, extract_op, args [0]->dreg, args [1]->dreg);
			ins->inst_c1 = arg0_type;
			return ins;
		}
		case SN_InsertSelectedScalar:
		case SN_InsertScalar:
		case SN_Insert: {
			MonoClass *ret_klass = mono_class_from_mono_type_internal (fsig->ret);
			int insert_op = 0;
			int extract_op = 0;
			switch (arg0_type) {
			case MONO_TYPE_I1:
			case MONO_TYPE_U1:
				insert_op = OP_XINSERT_I1;
				extract_op = OP_EXTRACT_I1;
				break;
			case MONO_TYPE_I2:
			case MONO_TYPE_U2:
				insert_op = OP_XINSERT_I2;
				extract_op = OP_EXTRACT_I2;
				break;
			case MONO_TYPE_I4:
			case MONO_TYPE_U4:
				insert_op = OP_XINSERT_I4;
				extract_op = OP_EXTRACT_I4;
				break;
			case MONO_TYPE_I8:
			case MONO_TYPE_U8:
				insert_op = OP_XINSERT_I8;
				extract_op = OP_EXTRACT_I8;
				break;
			case MONO_TYPE_R4:
				insert_op = OP_XINSERT_R4;
				extract_op = OP_EXTRACT_R4;
				break;
			case MONO_TYPE_R8:
				insert_op = OP_XINSERT_R8;
				extract_op = OP_EXTRACT_R8;
				break;
			case MONO_TYPE_I:
			case MONO_TYPE_U:
#if TARGET_SIZEOF_VOID_P == 8
				insert_op = OP_XINSERT_I8;
				extract_op = OP_EXTRACT_I8;
#else
				insert_op = OP_XINSERT_I4;
				extract_op = OP_EXTRACT_I4;
#endif
				break;
			default:
				g_assert_not_reached ();
			}
			int val_src_reg = args [2]->dreg;
			switch (id) {
			case SN_InsertSelectedScalar: {
				MonoInst *scalar = emit_simd_ins (cfg, klass, OP_ARM64_SELECT_SCALAR, args [2]->dreg, args [3]->dreg);
				val_src_reg = scalar->dreg;
				// fallthrough
			}
			case SN_InsertScalar: {
				MonoInst *ins = emit_simd_ins (cfg, klass, extract_op, val_src_reg, -1);
				ins->inst_c0 = 0;
				ins->inst_c1 = arg0_type;
				val_src_reg = ins->dreg;
				break;
			}
			}
			MonoInst *ins = emit_simd_ins (cfg, ret_klass, insert_op, args [0]->dreg, val_src_reg);
			ins->sreg3 = args [1]->dreg;
			ins->inst_c1 = arg0_type;
			return ins;
		}
		case SN_ShiftLeftLogicalSaturate:
		case SN_ShiftLeftLogicalSaturateScalar: {
			MonoClass *ret_klass = mono_class_from_mono_type_internal (fsig->ret);
			MonoType *etype = get_vector_t_elem_type (fsig->ret);
			gboolean is_unsigned = type_is_unsigned (fsig->ret);
			gboolean scalar = id == SN_ShiftLeftLogicalSaturateScalar;
			int s2v = scalar ? OP_CREATE_SCALAR_UNSAFE : type_to_expand_op (etype);
			int xop = scalar ? OP_XOP_OVR_SCALAR_X_X_X : OP_XOP_OVR_X_X_X;
			int iid = is_unsigned ?
INTRINS_AARCH64_ADV_SIMD_UQSHL : INTRINS_AARCH64_ADV_SIMD_SQSHL; MonoInst *shift_vector = emit_simd_ins (cfg, ret_klass, s2v, args [1]->dreg, -1); shift_vector->inst_c1 = etype->type; MonoInst *ret = emit_simd_ins (cfg, ret_klass, xop, args [0]->dreg, shift_vector->dreg); ret->inst_c0 = iid; ret->inst_c1 = etype->type; return ret; } case SN_MultiplyRoundedDoublingBySelectedScalarSaturateHigh: case SN_MultiplyRoundedDoublingScalarBySelectedScalarSaturateHigh: case SN_MultiplyDoublingScalarBySelectedScalarSaturateHigh: case SN_MultiplyDoublingWideningSaturateScalarBySelectedScalar: case SN_MultiplyExtendedBySelectedScalar: case SN_MultiplyExtendedScalarBySelectedScalar: case SN_MultiplyBySelectedScalar: case SN_MultiplyBySelectedScalarWideningLower: case SN_MultiplyBySelectedScalarWideningUpper: case SN_MultiplyDoublingBySelectedScalarSaturateHigh: case SN_MultiplyDoublingWideningSaturateLowerBySelectedScalar: case SN_MultiplyDoublingWideningSaturateUpperBySelectedScalar: { MonoClass *ret_klass = mono_class_from_mono_type_internal (fsig->ret); gboolean is_unsigned = type_is_unsigned (fsig->ret); gboolean is_float = type_is_float (fsig->ret); int opcode = 0; int c0 = 0; switch (id) { case SN_MultiplyRoundedDoublingBySelectedScalarSaturateHigh: opcode = OP_XOP_OVR_BYSCALAR_X_X_X; c0 = INTRINS_AARCH64_ADV_SIMD_SQRDMULH; break; case SN_MultiplyRoundedDoublingScalarBySelectedScalarSaturateHigh: opcode = OP_XOP_OVR_SCALAR_X_X_X; c0 = INTRINS_AARCH64_ADV_SIMD_SQRDMULH; break; case SN_MultiplyDoublingScalarBySelectedScalarSaturateHigh: opcode = OP_XOP_OVR_SCALAR_X_X_X; c0 = INTRINS_AARCH64_ADV_SIMD_SQDMULH; break; case SN_MultiplyDoublingWideningSaturateScalarBySelectedScalar: opcode = OP_ARM64_SQDMULL_SCALAR; break; case SN_MultiplyExtendedBySelectedScalar: opcode = OP_XOP_OVR_BYSCALAR_X_X_X; c0 = INTRINS_AARCH64_ADV_SIMD_FMULX; break; case SN_MultiplyExtendedScalarBySelectedScalar: opcode = OP_XOP_OVR_SCALAR_X_X_X; c0 = INTRINS_AARCH64_ADV_SIMD_FMULX; break; case SN_MultiplyBySelectedScalar: opcode = OP_XBINOP_BYSCALAR; c0 = OP_IMUL; break; case SN_MultiplyBySelectedScalarWideningLower: opcode = OP_ARM64_SMULL_SCALAR; break; case SN_MultiplyBySelectedScalarWideningUpper: opcode = OP_ARM64_SMULL2_SCALAR; break; case SN_MultiplyDoublingBySelectedScalarSaturateHigh: opcode = OP_XOP_OVR_BYSCALAR_X_X_X; c0 = INTRINS_AARCH64_ADV_SIMD_SQDMULH; break; case SN_MultiplyDoublingWideningSaturateLowerBySelectedScalar: opcode = OP_ARM64_SQDMULL_BYSCALAR; break; case SN_MultiplyDoublingWideningSaturateUpperBySelectedScalar: opcode = OP_ARM64_SQDMULL2_BYSCALAR; break; default: g_assert_not_reached(); } if (is_unsigned) switch (opcode) { case OP_ARM64_SMULL_SCALAR: opcode = OP_ARM64_UMULL_SCALAR; break; case OP_ARM64_SMULL2_SCALAR: opcode = OP_ARM64_UMULL2_SCALAR; break; } if (is_float) switch (opcode) { case OP_XBINOP_BYSCALAR: c0 = OP_FMUL; } MonoInst *scalar = emit_simd_ins (cfg, ret_klass, OP_ARM64_SELECT_SCALAR, args [1]->dreg, args [2]->dreg); MonoInst *ret = emit_simd_ins (cfg, ret_klass, opcode, args [0]->dreg, scalar->dreg); ret->inst_c0 = c0; ret->inst_c1 = arg0_type; return ret; } case SN_FusedMultiplyAddBySelectedScalar: case SN_FusedMultiplyAddScalarBySelectedScalar: case SN_FusedMultiplySubtractBySelectedScalar: case SN_FusedMultiplySubtractScalarBySelectedScalar: case SN_MultiplyDoublingWideningScalarBySelectedScalarAndAddSaturate: case SN_MultiplyDoublingWideningScalarBySelectedScalarAndSubtractSaturate: case SN_MultiplyAddBySelectedScalar: case SN_MultiplySubtractBySelectedScalar: case 
SN_MultiplyBySelectedScalarWideningLowerAndAdd: case SN_MultiplyBySelectedScalarWideningLowerAndSubtract: case SN_MultiplyBySelectedScalarWideningUpperAndAdd: case SN_MultiplyBySelectedScalarWideningUpperAndSubtract: case SN_MultiplyDoublingWideningLowerBySelectedScalarAndAddSaturate: case SN_MultiplyDoublingWideningLowerBySelectedScalarAndSubtractSaturate: case SN_MultiplyDoublingWideningUpperBySelectedScalarAndAddSaturate: case SN_MultiplyDoublingWideningUpperBySelectedScalarAndSubtractSaturate: { MonoClass *ret_klass = mono_class_from_mono_type_internal (fsig->ret); gboolean is_unsigned = type_is_unsigned (fsig->ret); int opcode = 0; switch (id) { case SN_FusedMultiplyAddBySelectedScalar: opcode = OP_ARM64_FMADD_BYSCALAR; break; case SN_FusedMultiplyAddScalarBySelectedScalar: opcode = OP_ARM64_FMADD_SCALAR; break; case SN_FusedMultiplySubtractBySelectedScalar: opcode = OP_ARM64_FMSUB_BYSCALAR; break; case SN_FusedMultiplySubtractScalarBySelectedScalar: opcode = OP_ARM64_FMSUB_SCALAR; break; case SN_MultiplyDoublingWideningScalarBySelectedScalarAndAddSaturate: opcode = OP_ARM64_SQDMLAL_SCALAR; break; case SN_MultiplyDoublingWideningScalarBySelectedScalarAndSubtractSaturate: opcode = OP_ARM64_SQDMLSL_SCALAR; break; case SN_MultiplyAddBySelectedScalar: opcode = OP_ARM64_MLA_SCALAR; break; case SN_MultiplySubtractBySelectedScalar: opcode = OP_ARM64_MLS_SCALAR; break; case SN_MultiplyBySelectedScalarWideningLowerAndAdd: opcode = OP_ARM64_SMLAL_SCALAR; break; case SN_MultiplyBySelectedScalarWideningLowerAndSubtract: opcode = OP_ARM64_SMLSL_SCALAR; break; case SN_MultiplyBySelectedScalarWideningUpperAndAdd: opcode = OP_ARM64_SMLAL2_SCALAR; break; case SN_MultiplyBySelectedScalarWideningUpperAndSubtract: opcode = OP_ARM64_SMLSL2_SCALAR; break; case SN_MultiplyDoublingWideningLowerBySelectedScalarAndAddSaturate: opcode = OP_ARM64_SQDMLAL_BYSCALAR; break; case SN_MultiplyDoublingWideningLowerBySelectedScalarAndSubtractSaturate: opcode = OP_ARM64_SQDMLSL_BYSCALAR; break; case SN_MultiplyDoublingWideningUpperBySelectedScalarAndAddSaturate: opcode = OP_ARM64_SQDMLAL2_BYSCALAR; break; case SN_MultiplyDoublingWideningUpperBySelectedScalarAndSubtractSaturate: opcode = OP_ARM64_SQDMLSL2_BYSCALAR; break; default: g_assert_not_reached(); } if (is_unsigned) switch (opcode) { case OP_ARM64_SMLAL_SCALAR: opcode = OP_ARM64_UMLAL_SCALAR; break; case OP_ARM64_SMLSL_SCALAR: opcode = OP_ARM64_UMLSL_SCALAR; break; case OP_ARM64_SMLAL2_SCALAR: opcode = OP_ARM64_UMLAL2_SCALAR; break; case OP_ARM64_SMLSL2_SCALAR: opcode = OP_ARM64_UMLSL2_SCALAR; break; } MonoInst *scalar = emit_simd_ins (cfg, ret_klass, OP_ARM64_SELECT_SCALAR, args [2]->dreg, args [3]->dreg); MonoInst *ret = emit_simd_ins (cfg, ret_klass, opcode, args [0]->dreg, args [1]->dreg); ret->sreg3 = scalar->dreg; return ret; } default: g_assert_not_reached (); } } if (feature == MONO_CPU_ARM64_CRYPTO) { switch (id) { case SN_PolynomialMultiplyWideningLower: return emit_simd_ins_for_sig (cfg, klass, OP_XOP_X_X_X, INTRINS_AARCH64_PMULL64, 0, fsig, args); case SN_PolynomialMultiplyWideningUpper: return emit_simd_ins_for_sig (cfg, klass, OP_XOP_X_X_X, INTRINS_AARCH64_PMULL64, 1, fsig, args); default: g_assert_not_reached (); } } if (feature == MONO_CPU_ARM64_RDM) { switch (id) { case SN_MultiplyRoundedDoublingBySelectedScalarAndAddSaturateHigh: case SN_MultiplyRoundedDoublingBySelectedScalarAndSubtractSaturateHigh: case SN_MultiplyRoundedDoublingScalarBySelectedScalarAndAddSaturateHigh: case 
SN_MultiplyRoundedDoublingScalarBySelectedScalarAndSubtractSaturateHigh: { MonoClass *ret_klass = mono_class_from_mono_type_internal (fsig->ret); int opcode = 0; switch (id) { case SN_MultiplyRoundedDoublingBySelectedScalarAndAddSaturateHigh: opcode = OP_ARM64_SQRDMLAH_BYSCALAR; break; case SN_MultiplyRoundedDoublingBySelectedScalarAndSubtractSaturateHigh: opcode = OP_ARM64_SQRDMLSH_BYSCALAR; break; case SN_MultiplyRoundedDoublingScalarBySelectedScalarAndAddSaturateHigh: opcode = OP_ARM64_SQRDMLAH_SCALAR; break; case SN_MultiplyRoundedDoublingScalarBySelectedScalarAndSubtractSaturateHigh: opcode = OP_ARM64_SQRDMLSH_SCALAR; break; } MonoInst *scalar = emit_simd_ins (cfg, ret_klass, OP_ARM64_SELECT_SCALAR, args [2]->dreg, args [3]->dreg); MonoInst *ret = emit_simd_ins (cfg, ret_klass, opcode, args [0]->dreg, args [1]->dreg); ret->inst_c1 = arg0_type; ret->sreg3 = scalar->dreg; return ret; } default: g_assert_not_reached (); } } if (feature == MONO_CPU_ARM64_DP) { switch (id) { case SN_DotProductBySelectedQuadruplet: { MonoClass *ret_klass = mono_class_from_mono_type_internal (fsig->ret); MonoClass *arg_klass = mono_class_from_mono_type_internal (fsig->params [1]); MonoClass *quad_klass = mono_class_from_mono_type_internal (fsig->params [2]); gboolean is_unsigned = type_is_unsigned (fsig->ret); int iid = is_unsigned ? INTRINS_AARCH64_ADV_SIMD_UDOT : INTRINS_AARCH64_ADV_SIMD_SDOT; MonoInst *quad = emit_simd_ins (cfg, arg_klass, OP_ARM64_SELECT_QUAD, args [2]->dreg, args [3]->dreg); quad->data.op [1].klass = quad_klass; MonoInst *ret = emit_simd_ins (cfg, ret_klass, OP_XOP_OVR_X_X_X_X, args [0]->dreg, args [1]->dreg); ret->sreg3 = quad->dreg; ret->inst_c0 = iid; return ret; } default: g_assert_not_reached (); } } return NULL; } #endif // TARGET_ARM64 #ifdef TARGET_AMD64 static SimdIntrinsic sse_methods [] = { {SN_Add, OP_XBINOP, OP_FADD}, {SN_AddScalar, OP_SSE_ADDSS}, {SN_And, OP_SSE_AND}, {SN_AndNot, OP_SSE_ANDN}, {SN_CompareEqual, OP_XCOMPARE_FP, CMP_EQ}, {SN_CompareGreaterThan, OP_XCOMPARE_FP,CMP_GT}, {SN_CompareGreaterThanOrEqual, OP_XCOMPARE_FP, CMP_GE}, {SN_CompareLessThan, OP_XCOMPARE_FP, CMP_LT}, {SN_CompareLessThanOrEqual, OP_XCOMPARE_FP, CMP_LE}, {SN_CompareNotEqual, OP_XCOMPARE_FP, CMP_NE}, {SN_CompareNotGreaterThan, OP_XCOMPARE_FP, CMP_LE_UN}, {SN_CompareNotGreaterThanOrEqual, OP_XCOMPARE_FP, CMP_LT_UN}, {SN_CompareNotLessThan, OP_XCOMPARE_FP, CMP_GE_UN}, {SN_CompareNotLessThanOrEqual, OP_XCOMPARE_FP, CMP_GT_UN}, {SN_CompareOrdered, OP_XCOMPARE_FP, CMP_ORD}, {SN_CompareScalarEqual, OP_SSE_CMPSS, CMP_EQ}, {SN_CompareScalarGreaterThan, OP_SSE_CMPSS, CMP_GT}, {SN_CompareScalarGreaterThanOrEqual, OP_SSE_CMPSS, CMP_GE}, {SN_CompareScalarLessThan, OP_SSE_CMPSS, CMP_LT}, {SN_CompareScalarLessThanOrEqual, OP_SSE_CMPSS, CMP_LE}, {SN_CompareScalarNotEqual, OP_SSE_CMPSS, CMP_NE}, {SN_CompareScalarNotGreaterThan, OP_SSE_CMPSS, CMP_LE_UN}, {SN_CompareScalarNotGreaterThanOrEqual, OP_SSE_CMPSS, CMP_LT_UN}, {SN_CompareScalarNotLessThan, OP_SSE_CMPSS, CMP_GE_UN}, {SN_CompareScalarNotLessThanOrEqual, OP_SSE_CMPSS, CMP_GT_UN}, {SN_CompareScalarOrdered, OP_SSE_CMPSS, CMP_ORD}, {SN_CompareScalarOrderedEqual, OP_SSE_COMISS, CMP_EQ}, {SN_CompareScalarOrderedGreaterThan, OP_SSE_COMISS, CMP_GT}, {SN_CompareScalarOrderedGreaterThanOrEqual, OP_SSE_COMISS, CMP_GE}, {SN_CompareScalarOrderedLessThan, OP_SSE_COMISS, CMP_LT}, {SN_CompareScalarOrderedLessThanOrEqual, OP_SSE_COMISS, CMP_LE}, {SN_CompareScalarOrderedNotEqual, OP_SSE_COMISS, CMP_NE}, {SN_CompareScalarUnordered, OP_SSE_CMPSS, CMP_UNORD}, 
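/*
 * Each SimdIntrinsic row pairs a method name with a default opcode and
 * inst_c0 value. For example, the {SN_Add, OP_XBINOP, OP_FADD} row above
 * means Sse.Add lowers to a single OP_XBINOP instruction whose inst_c0 is
 * OP_FADD, i.e. one packed float addition; rows that carry only an id fall
 * through to the hand-written emitter instead.
 */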
{SN_CompareScalarUnorderedEqual, OP_SSE_UCOMISS, CMP_EQ}, {SN_CompareScalarUnorderedGreaterThan, OP_SSE_UCOMISS, CMP_GT}, {SN_CompareScalarUnorderedGreaterThanOrEqual, OP_SSE_UCOMISS, CMP_GE}, {SN_CompareScalarUnorderedLessThan, OP_SSE_UCOMISS, CMP_LT}, {SN_CompareScalarUnorderedLessThanOrEqual, OP_SSE_UCOMISS, CMP_LE}, {SN_CompareScalarUnorderedNotEqual, OP_SSE_UCOMISS, CMP_NE}, {SN_CompareUnordered, OP_XCOMPARE_FP, CMP_UNORD}, {SN_ConvertScalarToVector128Single}, {SN_ConvertToInt32, OP_XOP_I4_X, INTRINS_SSE_CVTSS2SI}, {SN_ConvertToInt32WithTruncation, OP_XOP_I4_X, INTRINS_SSE_CVTTSS2SI}, {SN_ConvertToInt64, OP_XOP_I8_X, INTRINS_SSE_CVTSS2SI64}, {SN_ConvertToInt64WithTruncation, OP_XOP_I8_X, INTRINS_SSE_CVTTSS2SI64}, {SN_Divide, OP_XBINOP, OP_FDIV}, {SN_DivideScalar, OP_SSE_DIVSS}, {SN_LoadAlignedVector128, OP_SSE_LOADU, 16 /* alignment */}, {SN_LoadHigh, OP_SSE_MOVHPS_LOAD}, {SN_LoadLow, OP_SSE_MOVLPS_LOAD}, {SN_LoadScalarVector128, OP_SSE_MOVSS}, {SN_LoadVector128, OP_SSE_LOADU, 1 /* alignment */}, {SN_Max, OP_XOP_X_X_X, INTRINS_SSE_MAXPS}, {SN_MaxScalar, OP_XOP_X_X_X, INTRINS_SSE_MAXSS}, {SN_Min, OP_XOP_X_X_X, INTRINS_SSE_MINPS}, {SN_MinScalar, OP_XOP_X_X_X, INTRINS_SSE_MINSS}, {SN_MoveHighToLow, OP_SSE_MOVEHL}, {SN_MoveLowToHigh, OP_SSE_MOVELH}, {SN_MoveMask, OP_SSE_MOVMSK}, {SN_MoveScalar, OP_SSE_MOVS2}, {SN_Multiply, OP_XBINOP, OP_FMUL}, {SN_MultiplyScalar, OP_SSE_MULSS}, {SN_Or, OP_SSE_OR}, {SN_Prefetch0, OP_SSE_PREFETCHT0}, {SN_Prefetch1, OP_SSE_PREFETCHT1}, {SN_Prefetch2, OP_SSE_PREFETCHT2}, {SN_PrefetchNonTemporal, OP_SSE_PREFETCHNTA}, {SN_Reciprocal, OP_XOP_X_X, INTRINS_SSE_RCP_PS}, {SN_ReciprocalScalar}, {SN_ReciprocalSqrt, OP_XOP_X_X, INTRINS_SSE_RSQRT_PS}, {SN_ReciprocalSqrtScalar}, {SN_Shuffle}, {SN_Sqrt, OP_XOP_X_X, INTRINS_SSE_SQRT_PS}, {SN_SqrtScalar}, {SN_Store, OP_SSE_STORE, 1 /* alignment */}, {SN_StoreAligned, OP_SSE_STORE, 16 /* alignment */}, {SN_StoreAlignedNonTemporal, OP_SSE_MOVNTPS, 16 /* alignment */}, {SN_StoreFence, OP_XOP, INTRINS_SSE_SFENCE}, {SN_StoreHigh, OP_SSE_MOVHPS_STORE}, {SN_StoreLow, OP_SSE_MOVLPS_STORE}, {SN_StoreScalar, OP_SSE_MOVSS_STORE}, {SN_Subtract, OP_XBINOP, OP_FSUB}, {SN_SubtractScalar, OP_SSE_SUBSS}, {SN_UnpackHigh, OP_SSE_UNPACKHI}, {SN_UnpackLow, OP_SSE_UNPACKLO}, {SN_Xor, OP_SSE_XOR}, {SN_get_IsSupported} }; static SimdIntrinsic sse2_methods [] = { {SN_Add}, {SN_AddSaturate, OP_SSE2_ADDS}, {SN_AddScalar, OP_SSE2_ADDSD}, {SN_And, OP_SSE_AND}, {SN_AndNot, OP_SSE_ANDN}, {SN_Average}, {SN_CompareEqual}, {SN_CompareGreaterThan}, {SN_CompareGreaterThanOrEqual, OP_XCOMPARE_FP, CMP_GE}, {SN_CompareLessThan}, {SN_CompareLessThanOrEqual, OP_XCOMPARE_FP, CMP_LE}, {SN_CompareNotEqual, OP_XCOMPARE_FP, CMP_NE}, {SN_CompareNotGreaterThan, OP_XCOMPARE_FP, CMP_LE_UN}, {SN_CompareNotGreaterThanOrEqual, OP_XCOMPARE_FP, CMP_LT_UN}, {SN_CompareNotLessThan, OP_XCOMPARE_FP, CMP_GE_UN}, {SN_CompareNotLessThanOrEqual, OP_XCOMPARE_FP, CMP_GT_UN}, {SN_CompareOrdered, OP_XCOMPARE_FP, CMP_ORD}, {SN_CompareScalarEqual, OP_SSE2_CMPSD, CMP_EQ}, {SN_CompareScalarGreaterThan, OP_SSE2_CMPSD, CMP_GT}, {SN_CompareScalarGreaterThanOrEqual, OP_SSE2_CMPSD, CMP_GE}, {SN_CompareScalarLessThan, OP_SSE2_CMPSD, CMP_LT}, {SN_CompareScalarLessThanOrEqual, OP_SSE2_CMPSD, CMP_LE}, {SN_CompareScalarNotEqual, OP_SSE2_CMPSD, CMP_NE}, {SN_CompareScalarNotGreaterThan, OP_SSE2_CMPSD, CMP_LE_UN}, {SN_CompareScalarNotGreaterThanOrEqual, OP_SSE2_CMPSD, CMP_LT_UN}, {SN_CompareScalarNotLessThan, OP_SSE2_CMPSD, CMP_GE_UN}, {SN_CompareScalarNotLessThanOrEqual, OP_SSE2_CMPSD, CMP_GT_UN}, 
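/*
 * Note on the CMP_*_UN predicates above: CompareNotGreaterThan cannot be
 * emitted as the complement of CMP_GT for floats because of NaN, so it maps
 * to the unordered predicate CMP_LE_UN, which is true when a <= b or when
 * either operand is NaN, matching the x86 CMPPS/CMPSD encodings.
 */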
	{SN_CompareScalarOrdered, OP_SSE2_CMPSD, CMP_ORD},
	{SN_CompareScalarOrderedEqual, OP_SSE2_COMISD, CMP_EQ},
	{SN_CompareScalarOrderedGreaterThan, OP_SSE2_COMISD, CMP_GT},
	{SN_CompareScalarOrderedGreaterThanOrEqual, OP_SSE2_COMISD, CMP_GE},
	{SN_CompareScalarOrderedLessThan, OP_SSE2_COMISD, CMP_LT},
	{SN_CompareScalarOrderedLessThanOrEqual, OP_SSE2_COMISD, CMP_LE},
	{SN_CompareScalarOrderedNotEqual, OP_SSE2_COMISD, CMP_NE},
	{SN_CompareScalarUnordered, OP_SSE2_CMPSD, CMP_UNORD},
	{SN_CompareScalarUnorderedEqual, OP_SSE2_UCOMISD, CMP_EQ},
	{SN_CompareScalarUnorderedGreaterThan, OP_SSE2_UCOMISD, CMP_GT},
	{SN_CompareScalarUnorderedGreaterThanOrEqual, OP_SSE2_UCOMISD, CMP_GE},
	{SN_CompareScalarUnorderedLessThan, OP_SSE2_UCOMISD, CMP_LT},
	{SN_CompareScalarUnorderedLessThanOrEqual, OP_SSE2_UCOMISD, CMP_LE},
	{SN_CompareScalarUnorderedNotEqual, OP_SSE2_UCOMISD, CMP_NE},
	{SN_CompareUnordered, OP_XCOMPARE_FP, CMP_UNORD},
	{SN_ConvertScalarToVector128Double},
	{SN_ConvertScalarToVector128Int32},
	{SN_ConvertScalarToVector128Int64},
	{SN_ConvertScalarToVector128Single, OP_XOP_X_X_X, INTRINS_SSE_CVTSD2SS},
	{SN_ConvertScalarToVector128UInt32},
	{SN_ConvertScalarToVector128UInt64},
	{SN_ConvertToInt32},
	{SN_ConvertToInt32WithTruncation, OP_XOP_I4_X, INTRINS_SSE_CVTTSD2SI},
	{SN_ConvertToInt64},
	{SN_ConvertToInt64WithTruncation, OP_XOP_I8_X, INTRINS_SSE_CVTTSD2SI64},
	{SN_ConvertToUInt32},
	{SN_ConvertToUInt64},
	{SN_ConvertToVector128Double},
	{SN_ConvertToVector128Int32},
	{SN_ConvertToVector128Int32WithTruncation},
	{SN_ConvertToVector128Single},
	{SN_Divide, OP_XBINOP, OP_FDIV},
	{SN_DivideScalar, OP_SSE2_DIVSD},
	{SN_Extract},
	{SN_Insert},
	{SN_LoadAlignedVector128},
	{SN_LoadFence, OP_XOP, INTRINS_SSE_LFENCE},
	{SN_LoadHigh, OP_SSE2_MOVHPD_LOAD},
	{SN_LoadLow, OP_SSE2_MOVLPD_LOAD},
	{SN_LoadScalarVector128},
	{SN_LoadVector128},
	{SN_MaskMove, OP_SSE2_MASKMOVDQU},
	{SN_Max},
	{SN_MaxScalar, OP_XOP_X_X_X, INTRINS_SSE_MAXSD},
	{SN_MemoryFence, OP_XOP, INTRINS_SSE_MFENCE},
	{SN_Min}, // FIXME:
	{SN_MinScalar, OP_XOP_X_X_X, INTRINS_SSE_MINSD},
	{SN_MoveMask, OP_SSE_MOVMSK},
	{SN_MoveScalar},
	{SN_Multiply},
	{SN_MultiplyAddAdjacent, OP_XOP_X_X_X, INTRINS_SSE_PMADDWD},
	{SN_MultiplyHigh},
	{SN_MultiplyLow, OP_PMULW},
	{SN_MultiplyScalar, OP_SSE2_MULSD},
	{SN_Or, OP_SSE_OR},
	{SN_PackSignedSaturate},
	{SN_PackUnsignedSaturate},
	{SN_ShiftLeftLogical},
	{SN_ShiftLeftLogical128BitLane},
	{SN_ShiftRightArithmetic},
	{SN_ShiftRightLogical},
	{SN_ShiftRightLogical128BitLane},
	{SN_Shuffle},
	{SN_ShuffleHigh},
	{SN_ShuffleLow},
	{SN_Sqrt, OP_XOP_X_X, INTRINS_SSE_SQRT_PD},
	{SN_SqrtScalar},
	{SN_Store, OP_SSE_STORE, 1 /* alignment */},
	{SN_StoreAligned, OP_SSE_STORE, 16 /* alignment */},
	{SN_StoreAlignedNonTemporal, OP_SSE_MOVNTPS, 16 /* alignment */},
	{SN_StoreHigh, OP_SSE2_MOVHPD_STORE},
	{SN_StoreLow, OP_SSE2_MOVLPD_STORE},
	{SN_StoreNonTemporal, OP_SSE_MOVNTPS, 1 /* alignment */},
	{SN_StoreScalar, OP_SSE_STORES},
	{SN_Subtract},
	{SN_SubtractSaturate, OP_SSE2_SUBS},
	{SN_SubtractScalar, OP_SSE2_SUBSD},
	{SN_SumAbsoluteDifferences, OP_XOP_X_X_X, INTRINS_SSE_PSADBW},
	{SN_UnpackHigh, OP_SSE_UNPACKHI},
	{SN_UnpackLow, OP_SSE_UNPACKLO},
	{SN_Xor, OP_SSE_XOR},
	{SN_get_IsSupported}
};

static SimdIntrinsic sse3_methods [] = {
	{SN_AddSubtract},
	{SN_HorizontalAdd},
	{SN_HorizontalSubtract},
	{SN_LoadAndDuplicateToVector128, OP_SSE3_MOVDDUP_MEM},
	{SN_LoadDquVector128, OP_XOP_X_I, INTRINS_SSE_LDU_DQ},
	{SN_MoveAndDuplicate, OP_SSE3_MOVDDUP},
	{SN_MoveHighAndDuplicate, OP_SSE3_MOVSHDUP},
	{SN_MoveLowAndDuplicate, OP_SSE3_MOVSLDUP},
	{SN_get_IsSupported}
};

static SimdIntrinsic ssse3_methods
[] = { {SN_Abs, OP_SSSE3_ABS}, {SN_AlignRight}, {SN_HorizontalAdd}, {SN_HorizontalAddSaturate, OP_XOP_X_X_X, INTRINS_SSE_PHADDSW}, {SN_HorizontalSubtract}, {SN_HorizontalSubtractSaturate, OP_XOP_X_X_X, INTRINS_SSE_PHSUBSW}, {SN_MultiplyAddAdjacent, OP_XOP_X_X_X, INTRINS_SSE_PMADDUBSW}, {SN_MultiplyHighRoundScale, OP_XOP_X_X_X, INTRINS_SSE_PMULHRSW}, {SN_Shuffle, OP_SSSE3_SHUFFLE}, {SN_Sign}, {SN_get_IsSupported} }; static SimdIntrinsic sse41_methods [] = { {SN_Blend}, {SN_BlendVariable}, {SN_Ceiling, OP_SSE41_ROUNDP, 10 /*round mode*/}, {SN_CeilingScalar, 0, 10 /*round mode*/}, {SN_CompareEqual, OP_XCOMPARE, CMP_EQ}, {SN_ConvertToVector128Int16, OP_SSE_CVTII, MONO_TYPE_I2}, {SN_ConvertToVector128Int32, OP_SSE_CVTII, MONO_TYPE_I4}, {SN_ConvertToVector128Int64, OP_SSE_CVTII, MONO_TYPE_I8}, {SN_DotProduct}, {SN_Extract}, {SN_Floor, OP_SSE41_ROUNDP, 9 /*round mode*/}, {SN_FloorScalar, 0, 9 /*round mode*/}, {SN_Insert}, {SN_LoadAlignedVector128NonTemporal, OP_SSE41_LOADANT}, {SN_Max, OP_XBINOP, OP_IMAX}, {SN_Min, OP_XBINOP, OP_IMIN}, {SN_MinHorizontal, OP_XOP_X_X, INTRINS_SSE_PHMINPOSUW}, {SN_MultipleSumAbsoluteDifferences}, {SN_Multiply, OP_SSE41_MUL}, {SN_MultiplyLow, OP_SSE41_MULLO}, {SN_PackUnsignedSaturate, OP_XOP_X_X_X, INTRINS_SSE_PACKUSDW}, {SN_RoundCurrentDirection, OP_SSE41_ROUNDP, 4 /*round mode*/}, {SN_RoundCurrentDirectionScalar, 0, 4 /*round mode*/}, {SN_RoundToNearestInteger, OP_SSE41_ROUNDP, 8 /*round mode*/}, {SN_RoundToNearestIntegerScalar, 0, 8 /*round mode*/}, {SN_RoundToNegativeInfinity, OP_SSE41_ROUNDP, 9 /*round mode*/}, {SN_RoundToNegativeInfinityScalar, 0, 9 /*round mode*/}, {SN_RoundToPositiveInfinity, OP_SSE41_ROUNDP, 10 /*round mode*/}, {SN_RoundToPositiveInfinityScalar, 0, 10 /*round mode*/}, {SN_RoundToZero, OP_SSE41_ROUNDP, 11 /*round mode*/}, {SN_RoundToZeroScalar, 0, 11 /*round mode*/}, {SN_TestC, OP_XOP_I4_X_X, INTRINS_SSE_TESTC}, {SN_TestNotZAndNotC, OP_XOP_I4_X_X, INTRINS_SSE_TESTNZ}, {SN_TestZ, OP_XOP_I4_X_X, INTRINS_SSE_TESTZ}, {SN_get_IsSupported} }; static SimdIntrinsic sse42_methods [] = { {SN_CompareGreaterThan, OP_XCOMPARE, CMP_GT}, {SN_Crc32}, {SN_get_IsSupported} }; static SimdIntrinsic pclmulqdq_methods [] = { {SN_CarrylessMultiply}, {SN_get_IsSupported} }; static SimdIntrinsic aes_methods [] = { {SN_Decrypt, OP_XOP_X_X_X, INTRINS_AESNI_AESDEC}, {SN_DecryptLast, OP_XOP_X_X_X, INTRINS_AESNI_AESDECLAST}, {SN_Encrypt, OP_XOP_X_X_X, INTRINS_AESNI_AESENC}, {SN_EncryptLast, OP_XOP_X_X_X, INTRINS_AESNI_AESENCLAST}, {SN_InverseMixColumns, OP_XOP_X_X, INTRINS_AESNI_AESIMC}, {SN_KeygenAssist}, {SN_get_IsSupported} }; static SimdIntrinsic popcnt_methods [] = { {SN_PopCount}, {SN_get_IsSupported} }; static SimdIntrinsic lzcnt_methods [] = { {SN_LeadingZeroCount}, {SN_get_IsSupported} }; static SimdIntrinsic bmi1_methods [] = { {SN_AndNot}, {SN_BitFieldExtract}, {SN_ExtractLowestSetBit}, {SN_GetMaskUpToLowestSetBit}, {SN_ResetLowestSetBit}, {SN_TrailingZeroCount}, {SN_get_IsSupported} }; static SimdIntrinsic bmi2_methods [] = { {SN_MultiplyNoFlags}, {SN_ParallelBitDeposit}, {SN_ParallelBitExtract}, {SN_ZeroHighBits}, {SN_get_IsSupported} }; static SimdIntrinsic x86base_methods [] = { {SN_BitScanForward}, {SN_BitScanReverse}, {SN_get_IsSupported} }; static const IntrinGroup supported_x86_intrinsics [] = { { "Aes", MONO_CPU_X86_AES, aes_methods, sizeof (aes_methods) }, { "Avx", MONO_CPU_X86_AVX, unsupported, sizeof (unsupported) }, { "Avx2", MONO_CPU_X86_AVX2, unsupported, sizeof (unsupported) }, { "AvxVnni", 0, unsupported, sizeof (unsupported) }, { "Bmi1", 
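/*
 * Every row in this table ties one ISA class to the MonoCPUFeatures bit
 * that must be present before its intrinsics are emitted; rows that point
 * at `unsupported` only ever report get_IsSupported as false. The trailing
 * TRUE on the Lzcnt and Popcnt rows sets jit_supported, so those two groups
 * are also usable without the LLVM backend.
 */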
MONO_CPU_X86_BMI1, bmi1_methods, sizeof (bmi1_methods) }, { "Bmi2", MONO_CPU_X86_BMI2, bmi2_methods, sizeof (bmi2_methods) }, { "Fma", MONO_CPU_X86_FMA, unsupported, sizeof (unsupported) }, { "Lzcnt", MONO_CPU_X86_LZCNT, lzcnt_methods, sizeof (lzcnt_methods), TRUE }, { "Pclmulqdq", MONO_CPU_X86_PCLMUL, pclmulqdq_methods, sizeof (pclmulqdq_methods) }, { "Popcnt", MONO_CPU_X86_POPCNT, popcnt_methods, sizeof (popcnt_methods), TRUE }, { "Sse", MONO_CPU_X86_SSE, sse_methods, sizeof (sse_methods) }, { "Sse2", MONO_CPU_X86_SSE2, sse2_methods, sizeof (sse2_methods) }, { "Sse3", MONO_CPU_X86_SSE3, sse3_methods, sizeof (sse3_methods) }, { "Sse41", MONO_CPU_X86_SSE41, sse41_methods, sizeof (sse41_methods) }, { "Sse42", MONO_CPU_X86_SSE42, sse42_methods, sizeof (sse42_methods) }, { "Ssse3", MONO_CPU_X86_SSSE3, ssse3_methods, sizeof (ssse3_methods) }, { "X86Base", 0, x86base_methods, sizeof (x86base_methods) }, }; static MonoInst* emit_x86_intrinsics ( MonoCompile *cfg, MonoMethodSignature *fsig, MonoInst **args, MonoClass *klass, const IntrinGroup *intrin_group, const SimdIntrinsic *info, int id, MonoTypeEnum arg0_type, gboolean is_64bit) { MonoCPUFeatures feature = intrin_group->feature; const SimdIntrinsic *intrinsics = intrin_group->intrinsics; if (feature == MONO_CPU_X86_SSE) { switch (id) { case SN_Shuffle: return emit_simd_ins_for_sig (cfg, klass, OP_SSE_SHUFPS, 0, arg0_type, fsig, args); case SN_ConvertScalarToVector128Single: { int op = 0; switch (fsig->params [1]->type) { case MONO_TYPE_I4: op = OP_SSE_CVTSI2SS; break; case MONO_TYPE_I8: op = OP_SSE_CVTSI2SS64; break; default: g_assert_not_reached (); break; } return emit_simd_ins_for_sig (cfg, klass, op, 0, 0, fsig, args); } case SN_ReciprocalScalar: case SN_ReciprocalSqrtScalar: case SN_SqrtScalar: { int op = 0; switch (id) { case SN_ReciprocalScalar: op = OP_SSE_RCPSS; break; case SN_ReciprocalSqrtScalar: op = OP_SSE_RSQRTSS; break; case SN_SqrtScalar: op = OP_SSE_SQRTSS; break; }; if (fsig->param_count == 1) return emit_simd_ins (cfg, klass, op, args [0]->dreg, args[0]->dreg); else if (fsig->param_count == 2) return emit_simd_ins (cfg, klass, op, args [0]->dreg, args[1]->dreg); else g_assert_not_reached (); break; } case SN_LoadScalarVector128: return NULL; default: return NULL; } } if (feature == MONO_CPU_X86_SSE2) { switch (id) { case SN_Subtract: return emit_simd_ins_for_sig (cfg, klass, OP_XBINOP, arg0_type == MONO_TYPE_R8 ? OP_FSUB : OP_ISUB, arg0_type, fsig, args); case SN_Add: return emit_simd_ins_for_sig (cfg, klass, OP_XBINOP, arg0_type == MONO_TYPE_R8 ? OP_FADD : OP_IADD, arg0_type, fsig, args); case SN_Average: if (arg0_type == MONO_TYPE_U1) return emit_simd_ins_for_sig (cfg, klass, OP_PAVGB_UN, -1, arg0_type, fsig, args); else if (arg0_type == MONO_TYPE_U2) return emit_simd_ins_for_sig (cfg, klass, OP_PAVGW_UN, -1, arg0_type, fsig, args); else return NULL; case SN_CompareNotEqual: return emit_simd_ins_for_sig (cfg, klass, arg0_type == MONO_TYPE_R8 ? OP_XCOMPARE_FP : OP_XCOMPARE, CMP_NE, arg0_type, fsig, args); case SN_CompareEqual: return emit_simd_ins_for_sig (cfg, klass, arg0_type == MONO_TYPE_R8 ? OP_XCOMPARE_FP : OP_XCOMPARE, CMP_EQ, arg0_type, fsig, args); case SN_CompareGreaterThan: return emit_simd_ins_for_sig (cfg, klass, arg0_type == MONO_TYPE_R8 ? OP_XCOMPARE_FP : OP_XCOMPARE, CMP_GT, arg0_type, fsig, args); case SN_CompareLessThan: return emit_simd_ins_for_sig (cfg, klass, arg0_type == MONO_TYPE_R8 ? 
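/*
 * The *Scalar cases above follow one pattern: the one-argument overload
 * passes args [0] as both sources (the upper lanes are copied from the
 * input itself), while the two-argument overload takes the upper lanes from
 * args [0] and operates on args [1]. A rough scalar model (illustrative
 * only):
 *
 *     dst [0] = op (src [0]);
 *     for (i = 1; i < n; ++i)
 *         dst [i] = upper [i];
 */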
OP_XCOMPARE_FP : OP_XCOMPARE, CMP_LT, arg0_type, fsig, args); case SN_ConvertToInt32: if (arg0_type == MONO_TYPE_R8) return emit_simd_ins_for_sig (cfg, klass, OP_XOP_I4_X, INTRINS_SSE_CVTSD2SI, arg0_type, fsig, args); else if (arg0_type == MONO_TYPE_I4) return emit_simd_ins_for_sig (cfg, klass, OP_EXTRACT_I4, 0, arg0_type, fsig, args); else return NULL; case SN_ConvertToInt64: if (arg0_type == MONO_TYPE_R8) return emit_simd_ins_for_sig (cfg, klass, OP_XOP_I8_X, INTRINS_SSE_CVTSD2SI64, arg0_type, fsig, args); else if (arg0_type == MONO_TYPE_I8) return emit_simd_ins_for_sig (cfg, klass, OP_EXTRACT_I8, 0 /*element index*/, arg0_type, fsig, args); else g_assert_not_reached (); break; case SN_ConvertScalarToVector128Double: { int op = OP_SSE2_CVTSS2SD; switch (fsig->params [1]->type) { case MONO_TYPE_I4: op = OP_SSE2_CVTSI2SD; break; case MONO_TYPE_I8: op = OP_SSE2_CVTSI2SD64; break; } return emit_simd_ins_for_sig (cfg, klass, op, 0, 0, fsig, args); } case SN_ConvertScalarToVector128Int32: case SN_ConvertScalarToVector128Int64: case SN_ConvertScalarToVector128UInt32: case SN_ConvertScalarToVector128UInt64: return emit_simd_ins_for_sig (cfg, klass, OP_CREATE_SCALAR, -1, arg0_type, fsig, args); case SN_ConvertToUInt32: return emit_simd_ins_for_sig (cfg, klass, OP_EXTRACT_I4, 0 /*element index*/, arg0_type, fsig, args); case SN_ConvertToUInt64: return emit_simd_ins_for_sig (cfg, klass, OP_EXTRACT_I8, 0 /*element index*/, arg0_type, fsig, args); case SN_ConvertToVector128Double: if (arg0_type == MONO_TYPE_R4) return emit_simd_ins_for_sig (cfg, klass, OP_CVTPS2PD, 0, arg0_type, fsig, args); else if (arg0_type == MONO_TYPE_I4) return emit_simd_ins_for_sig (cfg, klass, OP_CVTDQ2PD, 0, arg0_type, fsig, args); else return NULL; case SN_ConvertToVector128Int32: if (arg0_type == MONO_TYPE_R4) return emit_simd_ins_for_sig (cfg, klass, OP_CVTPS2DQ, 0, arg0_type, fsig, args); else if (arg0_type == MONO_TYPE_R8) return emit_simd_ins_for_sig (cfg, klass, OP_CVTPD2DQ, 0, arg0_type, fsig, args); else return NULL; case SN_ConvertToVector128Int32WithTruncation: if (arg0_type == MONO_TYPE_R4) return emit_simd_ins_for_sig (cfg, klass, OP_CVTTPS2DQ, 0, arg0_type, fsig, args); else if (arg0_type == MONO_TYPE_R8) return emit_simd_ins_for_sig (cfg, klass, OP_CVTTPD2DQ, 0, arg0_type, fsig, args); else return NULL; case SN_ConvertToVector128Single: if (arg0_type == MONO_TYPE_I4) return emit_simd_ins_for_sig (cfg, klass, OP_CVTDQ2PS, 0, arg0_type, fsig, args); else if (arg0_type == MONO_TYPE_R8) return emit_simd_ins_for_sig (cfg, klass, OP_CVTPD2PS, 0, arg0_type, fsig, args); else return NULL; case SN_LoadAlignedVector128: return emit_simd_ins_for_sig (cfg, klass, OP_SSE_LOADU, 16 /*alignment*/, arg0_type, fsig, args); case SN_LoadVector128: return emit_simd_ins_for_sig (cfg, klass, OP_SSE_LOADU, 1 /*alignment*/, arg0_type, fsig, args); case SN_MoveScalar: return emit_simd_ins_for_sig (cfg, klass, fsig->param_count == 2 ? 
OP_SSE_MOVS2 : OP_SSE_MOVS, -1, arg0_type, fsig, args); case SN_Max: switch (arg0_type) { case MONO_TYPE_U1: return emit_simd_ins_for_sig (cfg, klass, OP_PMAXB_UN, 0, arg0_type, fsig, args); case MONO_TYPE_I2: return emit_simd_ins_for_sig (cfg, klass, OP_PMAXW, 0, arg0_type, fsig, args); case MONO_TYPE_R8: return emit_simd_ins_for_sig (cfg, klass, OP_XOP_X_X_X, INTRINS_SSE_MAXPD, arg0_type, fsig, args); default: g_assert_not_reached (); break; } break; case SN_Min: switch (arg0_type) { case MONO_TYPE_U1: return emit_simd_ins_for_sig (cfg, klass, OP_PMINB_UN, 0, arg0_type, fsig, args); case MONO_TYPE_I2: return emit_simd_ins_for_sig (cfg, klass, OP_PMINW, 0, arg0_type, fsig, args); case MONO_TYPE_R8: return emit_simd_ins_for_sig (cfg, klass, OP_XOP_X_X_X, INTRINS_SSE_MINPD, arg0_type, fsig, args); default: g_assert_not_reached (); break; } break; case SN_Multiply: if (arg0_type == MONO_TYPE_U4) return emit_simd_ins_for_sig (cfg, klass, OP_SSE2_PMULUDQ, 0, arg0_type, fsig, args); else if (arg0_type == MONO_TYPE_R8) return emit_simd_ins_for_sig (cfg, klass, OP_MULPD, 0, arg0_type, fsig, args); else g_assert_not_reached (); case SN_MultiplyHigh: if (arg0_type == MONO_TYPE_I2) return emit_simd_ins_for_sig (cfg, klass, OP_XOP_X_X_X, INTRINS_SSE_PMULHW, arg0_type, fsig, args); else if (arg0_type == MONO_TYPE_U2) return emit_simd_ins_for_sig (cfg, klass, OP_XOP_X_X_X, INTRINS_SSE_PMULHUW, arg0_type, fsig, args); else g_assert_not_reached (); case SN_PackSignedSaturate: if (arg0_type == MONO_TYPE_I2) return emit_simd_ins_for_sig (cfg, klass, OP_XOP_X_X_X, INTRINS_SSE_PACKSSWB, arg0_type, fsig, args); else if (arg0_type == MONO_TYPE_I4) return emit_simd_ins_for_sig (cfg, klass, OP_XOP_X_X_X, INTRINS_SSE_PACKSSDW, arg0_type, fsig, args); else g_assert_not_reached (); case SN_PackUnsignedSaturate: return emit_simd_ins_for_sig (cfg, klass, OP_SSE2_PACKUS, -1, arg0_type, fsig, args); case SN_Extract: g_assert (arg0_type == MONO_TYPE_U2); return emit_simd_ins_for_sig (cfg, klass, OP_XEXTRACT_I4, 0, arg0_type, fsig, args); case SN_Insert: g_assert (arg0_type == MONO_TYPE_I2 || arg0_type == MONO_TYPE_U2); return emit_simd_ins_for_sig (cfg, klass, OP_XINSERT_I2, 0, arg0_type, fsig, args); case SN_ShiftRightLogical: { gboolean is_imm = fsig->params [1]->type == MONO_TYPE_U1; IntrinsicId op = (IntrinsicId)0; switch (arg0_type) { case MONO_TYPE_I2: case MONO_TYPE_U2: op = is_imm ? INTRINS_SSE_PSRLI_W : INTRINS_SSE_PSRL_W; break; case MONO_TYPE_I4: case MONO_TYPE_U4: op = is_imm ? INTRINS_SSE_PSRLI_D : INTRINS_SSE_PSRL_D; break; case MONO_TYPE_I8: case MONO_TYPE_U8: op = is_imm ? INTRINS_SSE_PSRLI_Q : INTRINS_SSE_PSRL_Q; break; default: g_assert_not_reached (); break; } return emit_simd_ins_for_sig (cfg, klass, is_imm ? OP_XOP_X_X_I4 : OP_XOP_X_X_X, op, arg0_type, fsig, args); } case SN_ShiftRightArithmetic: { gboolean is_imm = fsig->params [1]->type == MONO_TYPE_U1; IntrinsicId op = (IntrinsicId)0; switch (arg0_type) { case MONO_TYPE_I2: case MONO_TYPE_U2: op = is_imm ? INTRINS_SSE_PSRAI_W : INTRINS_SSE_PSRA_W; break; case MONO_TYPE_I4: case MONO_TYPE_U4: op = is_imm ? INTRINS_SSE_PSRAI_D : INTRINS_SSE_PSRA_D; break; default: g_assert_not_reached (); break; } return emit_simd_ins_for_sig (cfg, klass, is_imm ? OP_XOP_X_X_I4 : OP_XOP_X_X_X, op, arg0_type, fsig, args); } case SN_ShiftLeftLogical: { gboolean is_imm = fsig->params [1]->type == MONO_TYPE_U1; IntrinsicId op = (IntrinsicId)0; switch (arg0_type) { case MONO_TYPE_I2: case MONO_TYPE_U2: op = is_imm ? 
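/*
 * All three shift families dispatch on the count operand: a byte count
 * (MONO_TYPE_U1) selects the immediate form (the INTRINS_SSE_PSxLI_*
 * intrinsics via OP_XOP_X_X_I4), anything else the vector-count form (the
 * INTRINS_SSE_PSxL_* intrinsics via OP_XOP_X_X_X). So Sse2.ShiftLeftLogical
 * with a byte count on 16-bit lanes picks INTRINS_SSE_PSLLI_W.
 */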
INTRINS_SSE_PSLLI_W : INTRINS_SSE_PSLL_W; break; case MONO_TYPE_I4: case MONO_TYPE_U4: op = is_imm ? INTRINS_SSE_PSLLI_D : INTRINS_SSE_PSLL_D; break; case MONO_TYPE_I8: case MONO_TYPE_U8: op = is_imm ? INTRINS_SSE_PSLLI_Q : INTRINS_SSE_PSLL_Q; break; default: g_assert_not_reached (); break; } return emit_simd_ins_for_sig (cfg, klass, is_imm ? OP_XOP_X_X_I4 : OP_XOP_X_X_X, op, arg0_type, fsig, args); } case SN_ShiftLeftLogical128BitLane: return emit_simd_ins_for_sig (cfg, klass, OP_SSE2_PSLLDQ, 0, arg0_type, fsig, args); case SN_ShiftRightLogical128BitLane: return emit_simd_ins_for_sig (cfg, klass, OP_SSE2_PSRLDQ, 0, arg0_type, fsig, args); case SN_Shuffle: { if (fsig->param_count == 2) { g_assert (arg0_type == MONO_TYPE_I4 || arg0_type == MONO_TYPE_U4); return emit_simd_ins_for_sig (cfg, klass, OP_SSE2_PSHUFD, 0, arg0_type, fsig, args); } else if (fsig->param_count == 3) { g_assert (arg0_type == MONO_TYPE_R8); return emit_simd_ins_for_sig (cfg, klass, OP_SSE2_SHUFPD, 0, arg0_type, fsig, args); } else { g_assert_not_reached (); break; } } case SN_ShuffleHigh: g_assert (fsig->param_count == 2); return emit_simd_ins_for_sig (cfg, klass, OP_SSE2_PSHUFHW, 0, arg0_type, fsig, args); case SN_ShuffleLow: g_assert (fsig->param_count == 2); return emit_simd_ins_for_sig (cfg, klass, OP_SSE2_PSHUFLW, 0, arg0_type, fsig, args); case SN_SqrtScalar: { if (fsig->param_count == 1) return emit_simd_ins (cfg, klass, OP_SSE2_SQRTSD, args [0]->dreg, args[0]->dreg); else if (fsig->param_count == 2) return emit_simd_ins (cfg, klass, OP_SSE2_SQRTSD, args [0]->dreg, args[1]->dreg); else { g_assert_not_reached (); break; } } case SN_LoadScalarVector128: { int op = 0; switch (arg0_type) { case MONO_TYPE_I4: case MONO_TYPE_U4: op = OP_SSE2_MOVD; break; case MONO_TYPE_I8: case MONO_TYPE_U8: op = OP_SSE2_MOVQ; break; case MONO_TYPE_R8: op = OP_SSE2_MOVUPD; break; default: g_assert_not_reached(); break; } return emit_simd_ins_for_sig (cfg, klass, op, 0, 0, fsig, args); } default: return NULL; } } if (feature == MONO_CPU_X86_SSE3) { switch (id) { case SN_AddSubtract: if (arg0_type == MONO_TYPE_R4) return emit_simd_ins_for_sig (cfg, klass, OP_XOP_X_X_X, INTRINS_SSE_ADDSUBPS, arg0_type, fsig, args); else if (arg0_type == MONO_TYPE_R8) return emit_simd_ins_for_sig (cfg, klass, OP_XOP_X_X_X, INTRINS_SSE_ADDSUBPD, arg0_type, fsig, args); else g_assert_not_reached (); break; case SN_HorizontalAdd: if (arg0_type == MONO_TYPE_R4) return emit_simd_ins_for_sig (cfg, klass, OP_XOP_X_X_X, INTRINS_SSE_HADDPS, arg0_type, fsig, args); else if (arg0_type == MONO_TYPE_R8) return emit_simd_ins_for_sig (cfg, klass, OP_XOP_X_X_X, INTRINS_SSE_HADDPD, arg0_type, fsig, args); else g_assert_not_reached (); break; case SN_HorizontalSubtract: if (arg0_type == MONO_TYPE_R4) return emit_simd_ins_for_sig (cfg, klass, OP_XOP_X_X_X, INTRINS_SSE_HSUBPS, arg0_type, fsig, args); else if (arg0_type == MONO_TYPE_R8) return emit_simd_ins_for_sig (cfg, klass, OP_XOP_X_X_X, INTRINS_SSE_HSUBPD, arg0_type, fsig, args); else g_assert_not_reached (); break; default: g_assert_not_reached (); break; } } if (feature == MONO_CPU_X86_SSSE3) { switch (id) { case SN_AlignRight: return emit_simd_ins_for_sig (cfg, klass, OP_SSSE3_ALIGNR, 0, arg0_type, fsig, args); case SN_HorizontalAdd: if (arg0_type == MONO_TYPE_I2) return emit_simd_ins_for_sig (cfg, klass, OP_XOP_X_X_X, INTRINS_SSE_PHADDW, arg0_type, fsig, args); return emit_simd_ins_for_sig (cfg, klass, OP_XOP_X_X_X, INTRINS_SSE_PHADDD, arg0_type, fsig, args); case SN_HorizontalSubtract: if (arg0_type == MONO_TYPE_I2) 
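/*
 * The horizontal ops are pairwise across both inputs. A rough scalar model
 * for the PHADDW-style lowering (illustrative only):
 *
 *     for (i = 0; i < 4; ++i) dst [i]     = a [2*i] + a [2*i + 1];
 *     for (i = 0; i < 4; ++i) dst [i + 4] = b [2*i] + b [2*i + 1];
 */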
return emit_simd_ins_for_sig (cfg, klass, OP_XOP_X_X_X, INTRINS_SSE_PHSUBW, arg0_type, fsig, args); return emit_simd_ins_for_sig (cfg, klass, OP_XOP_X_X_X, INTRINS_SSE_PHSUBD, arg0_type, fsig, args); case SN_Sign: if (arg0_type == MONO_TYPE_I1) return emit_simd_ins_for_sig (cfg, klass, OP_XOP_X_X_X, INTRINS_SSE_PSIGNB, arg0_type, fsig, args); if (arg0_type == MONO_TYPE_I2) return emit_simd_ins_for_sig (cfg, klass, OP_XOP_X_X_X, INTRINS_SSE_PSIGNW, arg0_type, fsig, args); return emit_simd_ins_for_sig (cfg, klass, OP_XOP_X_X_X, INTRINS_SSE_PSIGND, arg0_type, fsig, args); default: g_assert_not_reached (); break; } } if (feature == MONO_CPU_X86_SSE41) { switch (id) { case SN_DotProduct: { int op = 0; switch (arg0_type) { case MONO_TYPE_R4: op = OP_SSE41_DPPS; break; case MONO_TYPE_R8: op = OP_SSE41_DPPD; break; default: g_assert_not_reached (); break; } return emit_simd_ins_for_sig (cfg, klass, op, 0, arg0_type, fsig, args); } case SN_MultipleSumAbsoluteDifferences: return emit_simd_ins_for_sig (cfg, klass, OP_SSE41_MPSADBW, 0, arg0_type, fsig, args); case SN_Blend: return emit_simd_ins_for_sig (cfg, klass, OP_SSE41_BLEND, 0, arg0_type, fsig, args); case SN_BlendVariable: return emit_simd_ins_for_sig (cfg, klass, OP_SSE41_BLENDV, -1, arg0_type, fsig, args); case SN_Extract: { int op = 0; switch (arg0_type) { case MONO_TYPE_U1: op = OP_XEXTRACT_I1; break; case MONO_TYPE_U4: case MONO_TYPE_I4: op = OP_XEXTRACT_I4; break; case MONO_TYPE_U8: case MONO_TYPE_I8: op = OP_XEXTRACT_I8; break; case MONO_TYPE_R4: op = OP_XEXTRACT_R4; break; case MONO_TYPE_I: case MONO_TYPE_U: #if TARGET_SIZEOF_VOID_P == 8 op = OP_XEXTRACT_I8; #else op = OP_XEXTRACT_I4; #endif break; default: g_assert_not_reached(); break; } return emit_simd_ins_for_sig (cfg, klass, op, 0, arg0_type, fsig, args); } case SN_Insert: { int op = arg0_type == MONO_TYPE_R4 ? OP_SSE41_INSERTPS : type_to_xinsert_op (arg0_type); return emit_simd_ins_for_sig (cfg, klass, op, -1, arg0_type, fsig, args); } case SN_CeilingScalar: case SN_FloorScalar: case SN_RoundCurrentDirectionScalar: case SN_RoundToNearestIntegerScalar: case SN_RoundToNegativeInfinityScalar: case SN_RoundToPositiveInfinityScalar: case SN_RoundToZeroScalar: if (fsig->param_count == 2) { return emit_simd_ins_for_sig (cfg, klass, OP_SSE41_ROUNDS, info->default_instc0, arg0_type, fsig, args); } else { MonoInst* ins = emit_simd_ins (cfg, klass, OP_SSE41_ROUNDS, args [0]->dreg, args [0]->dreg); ins->inst_c0 = info->default_instc0; ins->inst_c1 = arg0_type; return ins; } break; default: g_assert_not_reached (); break; } } if (feature == MONO_CPU_X86_SSE42) { switch (id) { case SN_Crc32: { MonoTypeEnum arg1_type = get_underlying_type (fsig->params [1]); return emit_simd_ins_for_sig (cfg, klass, arg1_type == MONO_TYPE_U8 ? OP_SSE42_CRC64 : OP_SSE42_CRC32, arg1_type, arg0_type, fsig, args); } default: g_assert_not_reached (); break; } } if (feature == MONO_CPU_X86_PCLMUL) { switch (id) { case SN_CarrylessMultiply: { return emit_simd_ins_for_sig (cfg, klass, OP_PCLMULQDQ, 0, arg0_type, fsig, args); } default: g_assert_not_reached (); break; } } if (feature == MONO_CPU_X86_AES) { switch (id) { case SN_KeygenAssist: { return emit_simd_ins_for_sig (cfg, klass, OP_AES_KEYGENASSIST, 0, arg0_type, fsig, args); } default: g_assert_not_reached (); break; } } MonoInst *ins = NULL; if (feature == MONO_CPU_X86_POPCNT) { switch (id) { case SN_PopCount: MONO_INST_NEW (cfg, ins, is_64bit ? OP_POPCNT64 : OP_POPCNT32); ins->dreg = is_64bit ? 
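/*
 * The "round mode" constants used above with OP_SSE41_ROUNDP and
 * OP_SSE41_ROUNDS are ROUNDPS/ROUNDSD imm8 encodings: 4 = use the MXCSR
 * rounding mode (RoundCurrentDirection), 8 = nearest even, 9 = toward
 * negative infinity (Floor), 10 = toward positive infinity (Ceiling),
 * 11 = toward zero (RoundToZero); bit 3 in the 8..11 values additionally
 * suppresses the precision exception.
 */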
alloc_lreg (cfg) : alloc_ireg (cfg);
		ins->sreg1 = args [0]->dreg;
		ins->type = is_64bit ? STACK_I8 : STACK_I4;
		MONO_ADD_INS (cfg->cbb, ins);
		return ins;
	default:
		return NULL;
	}
}

if (feature == MONO_CPU_X86_LZCNT) {
	switch (id) {
	case SN_LeadingZeroCount:
		return emit_simd_ins_for_sig (cfg, klass, is_64bit ? OP_LZCNT64 : OP_LZCNT32, 0, arg0_type, fsig, args);
	default:
		return NULL;
	}
}
if (feature == MONO_CPU_X86_BMI1) {
	switch (id) {
	case SN_AndNot: {
		// (a ^ -1) & b
		// LLVM replaces it with `andn`
		int tmp_reg = is_64bit ? alloc_lreg (cfg) : alloc_ireg (cfg);
		int result_reg = is_64bit ? alloc_lreg (cfg) : alloc_ireg (cfg);
		EMIT_NEW_BIALU_IMM (cfg, ins, is_64bit ? OP_LXOR_IMM : OP_IXOR_IMM, tmp_reg, args [0]->dreg, -1);
		EMIT_NEW_BIALU (cfg, ins, is_64bit ? OP_LAND : OP_IAND, result_reg, tmp_reg, args [1]->dreg);
		return ins;
	}
	case SN_BitFieldExtract: {
		int ctlreg = args [1]->dreg;
		if (fsig->param_count == 2) {
		} else if (fsig->param_count == 3) {
			MonoInst *ins = NULL;
			/* This intrinsic is also implemented in managed code.
			 * TODO: remove this if cross-AOT-assembly inlining works
			 */
			int startreg = args [1]->dreg;
			int lenreg = args [2]->dreg;
			int dreg1 = alloc_ireg (cfg);
			EMIT_NEW_BIALU_IMM (cfg, ins, OP_SHL_IMM, dreg1, lenreg, 8);
			int dreg2 = alloc_ireg (cfg);
			EMIT_NEW_BIALU (cfg, ins, OP_IOR, dreg2, startreg, dreg1);
			ctlreg = dreg2;
		} else {
			g_assert_not_reached ();
		}
		return emit_simd_ins (cfg, klass, is_64bit ? OP_BMI1_BEXTR64 : OP_BMI1_BEXTR32, args [0]->dreg, ctlreg);
	}
	case SN_GetMaskUpToLowestSetBit: {
		// x ^ (x - 1)
		// LLVM replaces it with `blsmsk`
		int tmp_reg = is_64bit ? alloc_lreg (cfg) : alloc_ireg (cfg);
		int result_reg = is_64bit ? alloc_lreg (cfg) : alloc_ireg (cfg);
		EMIT_NEW_BIALU_IMM (cfg, ins, is_64bit ? OP_LSUB_IMM : OP_ISUB_IMM, tmp_reg, args [0]->dreg, 1);
		EMIT_NEW_BIALU (cfg, ins, is_64bit ? OP_LXOR : OP_IXOR, result_reg, args [0]->dreg, tmp_reg);
		return ins;
	}
	case SN_ResetLowestSetBit: {
		// x & (x - 1)
		// LLVM replaces it with `blsr`
		int tmp_reg = is_64bit ? alloc_lreg (cfg) : alloc_ireg (cfg);
		int result_reg = is_64bit ? alloc_lreg (cfg) : alloc_ireg (cfg);
		EMIT_NEW_BIALU_IMM (cfg, ins, is_64bit ? OP_LSUB_IMM : OP_ISUB_IMM, tmp_reg, args [0]->dreg, 1);
		EMIT_NEW_BIALU (cfg, ins, is_64bit ? OP_LAND : OP_IAND, result_reg, args [0]->dreg, tmp_reg);
		return ins;
	}
	case SN_ExtractLowestSetBit: {
		// x & (0 - x)
		// LLVM replaces it with `blsi`
		int tmp_reg = is_64bit ? alloc_lreg (cfg) : alloc_ireg (cfg);
		int result_reg = is_64bit ? alloc_lreg (cfg) : alloc_ireg (cfg);
		int zero_reg = is_64bit ? alloc_lreg (cfg) : alloc_ireg (cfg);
		MONO_EMIT_NEW_ICONST (cfg, zero_reg, 0);
		EMIT_NEW_BIALU (cfg, ins, is_64bit ? OP_LSUB : OP_ISUB, tmp_reg, zero_reg, args [0]->dreg);
		EMIT_NEW_BIALU (cfg, ins, is_64bit ? OP_LAND : OP_IAND, result_reg, args [0]->dreg, tmp_reg);
		return ins;
	}
	case SN_TrailingZeroCount:
		MONO_INST_NEW (cfg, ins, is_64bit ? OP_CTTZ64 : OP_CTTZ32);
		ins->dreg = is_64bit ? alloc_lreg (cfg) : alloc_ireg (cfg);
		ins->sreg1 = args [0]->dreg;
		ins->type = is_64bit ? STACK_I8 : STACK_I4;
		MONO_ADD_INS (cfg->cbb, ins);
		return ins;
	default:
		g_assert_not_reached ();
	}
}
if (feature == MONO_CPU_X86_BMI2) {
	switch (id) {
	case SN_MultiplyNoFlags: {
		int op = 0;
		if (fsig->param_count == 2) {
			op = is_64bit ? OP_MULX_H64 : OP_MULX_H32;
		} else if (fsig->param_count == 3) {
			op = is_64bit ? OP_MULX_HL64 : OP_MULX_HL32;
		} else {
			g_assert_not_reached ();
		}
		return emit_simd_ins_for_sig (cfg, klass, op, 0, 0, fsig, args);
	}
	case SN_ZeroHighBits:
		MONO_INST_NEW (cfg, ins, is_64bit ?
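/*
 * The BMI1 fallbacks above are plain bit identities that LLVM later
 * pattern-matches into blsi/blsr/blsmsk/andn. As standalone C (a sketch,
 * not code used by the JIT):
 *
 *     uint32_t extract_lowest_set_bit (uint32_t x)    { return x & (0 - x); }
 *     uint32_t reset_lowest_set_bit (uint32_t x)      { return x & (x - 1); }
 *     uint32_t mask_up_to_lowest_set_bit (uint32_t x) { return x ^ (x - 1); }
 *     uint32_t and_not (uint32_t a, uint32_t b)       { return ~a & b; }
 */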
OP_BZHI64 : OP_BZHI32); ins->dreg = is_64bit ? alloc_lreg (cfg) : alloc_ireg (cfg); ins->sreg1 = args [0]->dreg; ins->sreg2 = args [1]->dreg; ins->type = is_64bit ? STACK_I8 : STACK_I4; MONO_ADD_INS (cfg->cbb, ins); return ins; case SN_ParallelBitExtract: MONO_INST_NEW (cfg, ins, is_64bit ? OP_PEXT64 : OP_PEXT32); ins->dreg = is_64bit ? alloc_lreg (cfg) : alloc_ireg (cfg); ins->sreg1 = args [0]->dreg; ins->sreg2 = args [1]->dreg; ins->type = is_64bit ? STACK_I8 : STACK_I4; MONO_ADD_INS (cfg->cbb, ins); return ins; case SN_ParallelBitDeposit: MONO_INST_NEW (cfg, ins, is_64bit ? OP_PDEP64 : OP_PDEP32); ins->dreg = is_64bit ? alloc_lreg (cfg) : alloc_ireg (cfg); ins->sreg1 = args [0]->dreg; ins->sreg2 = args [1]->dreg; ins->type = is_64bit ? STACK_I8 : STACK_I4; MONO_ADD_INS (cfg->cbb, ins); return ins; default: g_assert_not_reached (); } } if (intrinsics == x86base_methods) { switch (id) { case SN_BitScanForward: MONO_INST_NEW (cfg, ins, is_64bit ? OP_X86_BSF64 : OP_X86_BSF32); ins->dreg = is_64bit ? alloc_lreg (cfg) : alloc_ireg (cfg); ins->sreg1 = args [0]->dreg; ins->type = is_64bit ? STACK_I8 : STACK_I4; MONO_ADD_INS (cfg->cbb, ins); return ins; case SN_BitScanReverse: MONO_INST_NEW (cfg, ins, is_64bit ? OP_X86_BSR64 : OP_X86_BSR32); ins->dreg = is_64bit ? alloc_lreg (cfg) : alloc_ireg (cfg); ins->sreg1 = args [0]->dreg; ins->type = is_64bit ? STACK_I8 : STACK_I4; MONO_ADD_INS (cfg->cbb, ins); return ins; default: g_assert_not_reached (); } } return NULL; } static guint16 vector_256_t_methods [] = { SN_get_Count, }; static MonoInst* emit_vector256_t (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **args) { MonoInst *ins; MonoType *etype; MonoClass *klass; int size, len, id; id = lookup_intrins (vector_256_t_methods, sizeof (vector_256_t_methods), cmethod); if (id == -1) return NULL; klass = cmethod->klass; etype = mono_class_get_context (klass)->class_inst->type_argv [0]; size = mono_class_value_size (mono_class_from_mono_type_internal (etype), NULL); g_assert (size); len = 32 / size; if (!MONO_TYPE_IS_PRIMITIVE (etype) || etype->type == MONO_TYPE_CHAR || etype->type == MONO_TYPE_BOOLEAN || etype->type == MONO_TYPE_I || etype->type == MONO_TYPE_U) return NULL; if (cfg->verbose_level > 1) { char *name = mono_method_full_name (cmethod, TRUE); printf (" SIMD intrinsic %s\n", name); g_free (name); } switch (id) { case SN_get_Count: if (!(fsig->param_count == 0 && fsig->ret->type == MONO_TYPE_I4)) break; EMIT_NEW_ICONST (cfg, ins, len); return ins; default: break; } return NULL; } static MonoInst* emit_amd64_intrinsics (const char *class_ns, const char *class_name, MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **args) { if (!strcmp (class_ns, "System.Runtime.Intrinsics.X86")) { return emit_hardware_intrinsics (cfg, cmethod, fsig, args, supported_x86_intrinsics, sizeof (supported_x86_intrinsics), emit_x86_intrinsics); } if (!strcmp (class_ns, "System.Runtime.Intrinsics")) { if (!strcmp (class_name, "Vector256`1")) return emit_vector256_t (cfg, cmethod, fsig, args); } if (!strcmp (class_ns, "System.Numerics")) { if (!strcmp (class_name, "Vector")) return emit_sys_numerics_vector (cfg, cmethod, fsig, args); if (!strcmp (class_name, "Vector`1")) return emit_sys_numerics_vector_t (cfg, cmethod, fsig, args); } return NULL; } #endif // !TARGET_ARM64 #ifdef TARGET_ARM64 static MonoInst* emit_simd_intrinsics (const char *class_ns, const char *class_name, MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **args) { // 
// FIXME: implement Vector64<T>, Vector128<T> and Vector<T> for Arm64
	if (!strcmp (class_ns, "System.Runtime.Intrinsics.Arm")) {
		return emit_hardware_intrinsics (cfg, cmethod, fsig, args,
			supported_arm_intrinsics, sizeof (supported_arm_intrinsics), emit_arm64_intrinsics);
	}
	return NULL;
}
#elif TARGET_AMD64
// TODO: test and enable for x86 too
static MonoInst*
emit_simd_intrinsics (const char *class_ns, const char *class_name, MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **args)
{
	MonoInst *simd_inst = emit_amd64_intrinsics (class_ns, class_name, cfg, cmethod, fsig, args);
	if (simd_inst != NULL)
		cfg->uses_simd_intrinsics |= MONO_CFG_USES_SIMD_INTRINSICS;
	return simd_inst;
}
#else
static MonoInst*
emit_simd_intrinsics (const char *class_ns, const char *class_name, MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **args)
{
	return NULL;
}
#endif

MonoInst*
mono_emit_simd_intrinsics (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **args)
{
	const char *class_name;
	const char *class_ns;
	MonoImage *image = m_class_get_image (cmethod->klass);

	if (image != mono_get_corlib ())
		return NULL;

	class_ns = m_class_get_name_space (cmethod->klass);
	class_name = m_class_get_name (cmethod->klass);

	// If cmethod->klass is nested, the namespace is on the enclosing class.
	if (m_class_get_nested_in (cmethod->klass))
		class_ns = m_class_get_name_space (m_class_get_nested_in (cmethod->klass));

#if defined(TARGET_ARM64) || defined(TARGET_AMD64)
	if (!strcmp (class_ns, "System.Runtime.Intrinsics")) {
		if (!strcmp (class_name, "Vector128") || !strcmp (class_name, "Vector64"))
			return emit_sri_vector (cfg, cmethod, fsig, args);
		if (!strcmp (class_name, "Vector128`1") || !strcmp (class_name, "Vector64`1"))
			return emit_vector64_vector128_t (cfg, cmethod, fsig, args);
	}
#endif // defined(TARGET_ARM64) || defined(TARGET_AMD64)

#if defined(TARGET_ARM64)
	if (!strcmp (class_ns, "System.Numerics") && !strcmp (class_name, "Vector"))
		return emit_sri_vector (cfg, cmethod, fsig, args);
#endif // defined(TARGET_ARM64)

	return emit_simd_intrinsics (class_ns, class_name, cfg, cmethod, fsig, args);
}

/*
 * Windows x64 value type ABI uses reg/stack references (ArgValuetypeAddrInIReg/ArgValuetypeAddrOnStack)
 * for function arguments. When using SIMD intrinsics, arguments optimized into OP_ARG need to be decomposed
 * into corresponding SIMD LOADX/STOREX instructions.
*/ #if defined(TARGET_WIN32) && defined(TARGET_AMD64) static gboolean decompose_vtype_opt_uses_simd_intrinsics (MonoCompile *cfg, MonoInst *ins) { if (cfg->uses_simd_intrinsics & MONO_CFG_USES_SIMD_INTRINSICS) return TRUE; switch (ins->opcode) { case OP_XMOVE: case OP_XZERO: case OP_XPHI: case OP_LOADX_MEMBASE: case OP_LOADX_ALIGNED_MEMBASE: case OP_STOREX_MEMBASE: case OP_STOREX_ALIGNED_MEMBASE_REG: return TRUE; default: return FALSE; } } static void decompose_vtype_opt_load_arg (MonoCompile *cfg, MonoBasicBlock *bb, MonoInst *ins, gint32 *sreg_int32) { guint32 *sreg = (guint32*)sreg_int32; MonoInst *src_var = get_vreg_to_inst (cfg, *sreg); if (src_var && src_var->opcode == OP_ARG && src_var->klass && MONO_CLASS_IS_SIMD (cfg, src_var->klass)) { MonoInst *varload_ins, *load_ins; NEW_VARLOADA (cfg, varload_ins, src_var, src_var->inst_vtype); mono_bblock_insert_before_ins (bb, ins, varload_ins); MONO_INST_NEW (cfg, load_ins, OP_LOADX_MEMBASE); load_ins->klass = src_var->klass; load_ins->type = STACK_VTYPE; load_ins->sreg1 = varload_ins->dreg; load_ins->dreg = alloc_xreg (cfg); mono_bblock_insert_after_ins (bb, varload_ins, load_ins); *sreg = load_ins->dreg; } } static void decompose_vtype_opt_store_arg (MonoCompile *cfg, MonoBasicBlock *bb, MonoInst *ins, gint32 *dreg_int32) { guint32 *dreg = (guint32*)dreg_int32; MonoInst *dest_var = get_vreg_to_inst (cfg, *dreg); if (dest_var && dest_var->opcode == OP_ARG && dest_var->klass && MONO_CLASS_IS_SIMD (cfg, dest_var->klass)) { MonoInst *varload_ins, *store_ins; *dreg = alloc_xreg (cfg); NEW_VARLOADA (cfg, varload_ins, dest_var, dest_var->inst_vtype); mono_bblock_insert_after_ins (bb, ins, varload_ins); MONO_INST_NEW (cfg, store_ins, OP_STOREX_MEMBASE); store_ins->klass = dest_var->klass; store_ins->type = STACK_VTYPE; store_ins->sreg1 = *dreg; store_ins->dreg = varload_ins->dreg; mono_bblock_insert_after_ins (bb, varload_ins, store_ins); } } void mono_simd_decompose_intrinsic (MonoCompile *cfg, MonoBasicBlock *bb, MonoInst *ins) { if ((cfg->opt & MONO_OPT_SIMD) && decompose_vtype_opt_uses_simd_intrinsics(cfg, ins)) { const char *spec = INS_INFO (ins->opcode); if (spec [MONO_INST_SRC1] == 'x') decompose_vtype_opt_load_arg (cfg, bb, ins, &(ins->sreg1)); if (spec [MONO_INST_SRC2] == 'x') decompose_vtype_opt_load_arg (cfg, bb, ins, &(ins->sreg2)); if (spec [MONO_INST_SRC3] == 'x') decompose_vtype_opt_load_arg (cfg, bb, ins, &(ins->sreg3)); if (spec [MONO_INST_DEST] == 'x') decompose_vtype_opt_store_arg (cfg, bb, ins, &(ins->dreg)); } } #else void mono_simd_decompose_intrinsic (MonoCompile *cfg, MonoBasicBlock *bb, MonoInst *ins) { } #endif /*defined(TARGET_WIN32) && defined(TARGET_AMD64)*/ void mono_simd_simplify_indirection (MonoCompile *cfg) { } #endif /* DISABLE_JIT */ #endif /* MONO_ARCH_SIMD_INTRINSICS */ #if defined(TARGET_AMD64) void ves_icall_System_Runtime_Intrinsics_X86_X86Base___cpuidex (int abcd[4], int function_id, int subfunction_id) { #ifndef MONO_CROSS_COMPILE mono_hwcap_x86_call_cpuidex (function_id, subfunction_id, &abcd [0], &abcd [1], &abcd [2], &abcd [3]); #endif } #endif MONO_EMPTY_SOURCE_FILE (simd_intrinsics_netcore);
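/*
 * The __cpuidex icall above writes EAX/EBX/ECX/EDX for a given leaf and
 * subleaf into abcd [0..3]. A hedged native-side usage sketch (the leaf-7
 * bit position comes from the standard CPUID layout, not from this file):
 *
 *     int regs [4];
 *     ves_icall_System_Runtime_Intrinsics_X86_X86Base___cpuidex (regs, 7, 0);
 *     gboolean has_bmi1 = (regs [1] >> 3) & 1;   // EBX bit 3 on leaf 7
 */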
/** * SIMD Intrinsics support for netcore. * Only LLVM is supported as a backend. */ #include <config.h> #include <mono/utils/mono-compiler.h> #include <mono/metadata/icall-decl.h> #include "mini.h" #include "mini-runtime.h" #include "ir-emit.h" #include "llvm-intrinsics-types.h" #ifdef ENABLE_LLVM #include "mini-llvm.h" #include "mini-llvm-cpp.h" #endif #include "mono/utils/bsearch.h" #include <mono/metadata/abi-details.h> #include <mono/metadata/reflection-internals.h> #include <mono/utils/mono-hwcap.h> #if defined (MONO_ARCH_SIMD_INTRINSICS) #if defined(DISABLE_JIT) void mono_simd_intrinsics_init (void) { } #else #define MSGSTRFIELD(line) MSGSTRFIELD1(line) #define MSGSTRFIELD1(line) str##line static const struct msgstr_t { #define METHOD(name) char MSGSTRFIELD(__LINE__) [sizeof (#name)]; #define METHOD2(str,name) char MSGSTRFIELD(__LINE__) [sizeof (str)]; #include "simd-methods.h" #undef METHOD #undef METHOD2 } method_names = { #define METHOD(name) #name, #define METHOD2(str,name) str, #include "simd-methods.h" #undef METHOD #undef METHOD2 }; enum { #define METHOD(name) SN_ ## name = offsetof (struct msgstr_t, MSGSTRFIELD(__LINE__)), #define METHOD2(str,name) SN_ ## name = offsetof (struct msgstr_t, MSGSTRFIELD(__LINE__)), #include "simd-methods.h" }; #define method_name(idx) ((const char*)&method_names + (idx)) static int register_size; #define None 0 typedef struct { uint16_t id; // One of the SN_ constants uint16_t default_op; // ins->opcode uint16_t default_instc0; // ins->inst_c0 uint16_t unsigned_op; uint16_t unsigned_instc0; uint16_t floating_op; uint16_t floating_instc0; } SimdIntrinsic; static const SimdIntrinsic unsupported [] = { {SN_get_IsSupported} }; void mono_simd_intrinsics_init (void) { register_size = 16; #if 0 if ((mini_get_cpu_features () & MONO_CPU_X86_AVX) != 0) register_size = 32; #endif /* Tell the class init code the size of the System.Numerics.Register type */ mono_simd_register_size = register_size; } MonoInst* mono_emit_simd_field_load (MonoCompile *cfg, MonoClassField *field, MonoInst *addr) { return NULL; } static int simd_intrinsic_compare_by_name (const void *key, const void *value) { return strcmp ((const char*)key, method_name (*(guint16*)value)); } static int simd_intrinsic_info_compare_by_name (const void *key, const void *value) { SimdIntrinsic *info = (SimdIntrinsic*)value; return strcmp ((const char*)key, method_name (info->id)); } static int lookup_intrins (guint16 *intrinsics, int size, MonoMethod *cmethod) { const guint16 *result = (const guint16 *)mono_binary_search (cmethod->name, intrinsics, size / sizeof (guint16), sizeof (guint16), &simd_intrinsic_compare_by_name); if (result == NULL) return -1; else return (int)*result; } static SimdIntrinsic* lookup_intrins_info (SimdIntrinsic *intrinsics, int size, MonoMethod *cmethod) { #if 0 for (int i = 0; i < (size / sizeof (SimdIntrinsic)) - 1; ++i) { const char *n1 = method_name (intrinsics [i].id); const char *n2 = method_name (intrinsics [i + 1].id); int len1 = strlen (n1); int len2 = strlen (n2); for (int j = 0; j < len1 && j < len2; ++j) { if (n1 [j] > n2 [j]) { printf ("%s %s\n", n1, n2); g_assert_not_reached (); } else if (n1 [j] < n2 [j]) { break; } } } #endif return (SimdIntrinsic *)mono_binary_search (cmethod->name, intrinsics, size / sizeof (SimdIntrinsic), sizeof (SimdIntrinsic), &simd_intrinsic_info_compare_by_name); } /* * Return a simd vreg for the simd value represented by SRC. * SRC is the 'this' argument to methods. * Set INDIRECT to TRUE if the value was loaded from memory. 
*/ static int load_simd_vreg_class (MonoCompile *cfg, MonoClass *klass, MonoInst *src, gboolean *indirect) { const char *spec = INS_INFO (src->opcode); if (indirect) *indirect = FALSE; if (src->opcode == OP_XMOVE) { return src->sreg1; } else if (src->opcode == OP_LDADDR) { int res = ((MonoInst*)src->inst_p0)->dreg; return res; } else if (spec [MONO_INST_DEST] == 'x') { return src->dreg; } else if (src->type == STACK_PTR || src->type == STACK_MP) { MonoInst *ins; if (indirect) *indirect = TRUE; MONO_INST_NEW (cfg, ins, OP_LOADX_MEMBASE); ins->klass = klass; ins->sreg1 = src->dreg; ins->type = STACK_VTYPE; ins->dreg = alloc_ireg (cfg); MONO_ADD_INS (cfg->cbb, ins); return ins->dreg; } g_warning ("load_simd_vreg:: could not infer source simd (%d) vreg for op", src->type); mono_print_ins (src); g_assert_not_reached (); } static int load_simd_vreg (MonoCompile *cfg, MonoMethod *cmethod, MonoInst *src, gboolean *indirect) { return load_simd_vreg_class (cfg, cmethod->klass, src, indirect); } /* Create and emit a SIMD instruction, dreg is auto-allocated */ static MonoInst* emit_simd_ins (MonoCompile *cfg, MonoClass *klass, int opcode, int sreg1, int sreg2) { const char *spec = INS_INFO (opcode); MonoInst *ins; MONO_INST_NEW (cfg, ins, opcode); if (spec [MONO_INST_DEST] == 'x') { ins->dreg = alloc_xreg (cfg); ins->type = STACK_VTYPE; } else if (spec [MONO_INST_DEST] == 'i') { ins->dreg = alloc_ireg (cfg); ins->type = STACK_I4; } else if (spec [MONO_INST_DEST] == 'l') { ins->dreg = alloc_lreg (cfg); ins->type = STACK_I8; } else if (spec [MONO_INST_DEST] == 'f') { ins->dreg = alloc_freg (cfg); ins->type = STACK_R8; } else if (spec [MONO_INST_DEST] == 'v') { ins->dreg = alloc_dreg (cfg, STACK_VTYPE); ins->type = STACK_VTYPE; } ins->sreg1 = sreg1; ins->sreg2 = sreg2; ins->klass = klass; MONO_ADD_INS (cfg->cbb, ins); return ins; } static MonoInst* emit_simd_ins_for_sig (MonoCompile *cfg, MonoClass *klass, int opcode, int instc0, int instc1, MonoMethodSignature *fsig, MonoInst **args) { g_assert (fsig->param_count <= 3); MonoInst* ins = emit_simd_ins (cfg, klass, opcode, fsig->param_count > 0 ? args [0]->dreg : -1, fsig->param_count > 1 ? args [1]->dreg : -1); if (instc0 != -1) ins->inst_c0 = instc0; if (instc1 != -1) ins->inst_c1 = instc1; if (fsig->param_count == 3) ins->sreg3 = args [2]->dreg; return ins; } static gboolean is_hw_intrinsics_class (MonoClass *klass, const char *name, gboolean *is_64bit) { const char *class_name = m_class_get_name (klass); if ((!strcmp (class_name, "X64") || !strcmp (class_name, "Arm64")) && m_class_get_nested_in (klass)) { *is_64bit = TRUE; return !strcmp (m_class_get_name (m_class_get_nested_in (klass)), name); } else { *is_64bit = FALSE; return !strcmp (class_name, name); } } static MonoTypeEnum get_underlying_type (MonoType* type) { MonoClass* klass = mono_class_from_mono_type_internal (type); if (type->type == MONO_TYPE_PTR) // e.g. int* => MONO_TYPE_I4 return m_class_get_byval_arg (m_class_get_element_class (klass))->type; else if (type->type == MONO_TYPE_GENERICINST) // e.g. Vector128<int> => MONO_TYPE_I4 return mono_class_get_context (klass)->class_inst->type_argv [0]->type; else return type->type; } static gboolean type_enum_is_unsigned (MonoTypeEnum type); static gboolean type_enum_is_float (MonoTypeEnum type); static MonoInst* emit_xcompare (MonoCompile *cfg, MonoClass *klass, MonoTypeEnum etype, MonoInst *arg1, MonoInst *arg2) { MonoInst *ins; int opcode = type_enum_is_float (etype) ? 
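/*
 * emit_simd_ins above derives the destination register class from the
 * opcode's spec string: 'x' allocates an xreg (STACK_VTYPE), 'i' an ireg
 * (STACK_I4), 'l' an lreg (STACK_I8), 'f' an freg (STACK_R8) and 'v' a
 * vtype dreg, so callers never have to pick destination registers by hand.
 */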
OP_XCOMPARE_FP : OP_XCOMPARE; ins = emit_simd_ins (cfg, klass, opcode, arg1->dreg, arg2->dreg); ins->inst_c0 = CMP_EQ; ins->inst_c1 = etype; return ins; } static MonoInst* emit_xcompare_for_intrinsic (MonoCompile *cfg, MonoClass *klass, int intrinsic_id, MonoTypeEnum etype, MonoInst *arg1, MonoInst *arg2) { MonoInst *ins = emit_xcompare (cfg, klass, etype, arg1, arg2); gboolean is_unsigned = type_enum_is_unsigned (etype); switch (intrinsic_id) { case SN_GreaterThan: case SN_GreaterThanAll: case SN_GreaterThanAny: ins->inst_c0 = is_unsigned ? CMP_GT_UN : CMP_GT; break; case SN_GreaterThanOrEqual: case SN_GreaterThanOrEqualAll: case SN_GreaterThanOrEqualAny: ins->inst_c0 = is_unsigned ? CMP_GE_UN : CMP_GE; break; case SN_LessThan: case SN_LessThanAll: case SN_LessThanAny: ins->inst_c0 = is_unsigned ? CMP_LT_UN : CMP_LT; break; case SN_LessThanOrEqual: case SN_LessThanOrEqualAll: case SN_LessThanOrEqualAny: ins->inst_c0 = is_unsigned ? CMP_LE_UN : CMP_LE; break; default: g_assert_not_reached (); } return ins; } static MonoInst* emit_xequal (MonoCompile *cfg, MonoClass *klass, MonoInst *arg1, MonoInst *arg2) { return emit_simd_ins (cfg, klass, OP_XEQUAL, arg1->dreg, arg2->dreg); } static MonoInst* emit_not_xequal (MonoCompile *cfg, MonoClass *klass, MonoInst *arg1, MonoInst *arg2) { MonoInst *ins = emit_xequal (cfg, klass, arg1, arg2); int sreg = ins->dreg; int dreg = alloc_ireg (cfg); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, sreg, 0); EMIT_NEW_UNALU (cfg, ins, OP_CEQ, dreg, -1); return ins; } static MonoInst* emit_xzero (MonoCompile *cfg, MonoClass *klass) { return emit_simd_ins (cfg, klass, OP_XZERO, -1, -1); } static MonoInst* emit_xones (MonoCompile *cfg, MonoClass *klass) { return emit_simd_ins (cfg, klass, OP_XONES, -1, -1); } static gboolean is_intrinsics_vector_type (MonoType *vector_type) { if (vector_type->type != MONO_TYPE_GENERICINST) return FALSE; MonoClass *klass = mono_class_from_mono_type_internal (vector_type); const char *name = m_class_get_name (klass); return !strcmp (name, "Vector64`1") || !strcmp (name, "Vector128`1") || !strcmp (name, "Vector256`1"); } static MonoType* get_vector_t_elem_type (MonoType *vector_type) { MonoClass *klass; MonoType *etype; g_assert (vector_type->type == MONO_TYPE_GENERICINST); klass = mono_class_from_mono_type_internal (vector_type); g_assert ( !strcmp (m_class_get_name (klass), "Vector`1") || !strcmp (m_class_get_name (klass), "Vector64`1") || !strcmp (m_class_get_name (klass), "Vector128`1") || !strcmp (m_class_get_name (klass), "Vector256`1")); etype = mono_class_get_context (klass)->class_inst->type_argv [0]; return etype; } static gboolean type_is_unsigned (MonoType *type) { MonoClass *klass = mono_class_from_mono_type_internal (type); MonoType *etype = mono_class_get_context (klass)->class_inst->type_argv [0]; return type_enum_is_unsigned (etype->type); } static gboolean type_enum_is_unsigned (MonoTypeEnum type) { switch (type) { case MONO_TYPE_U1: case MONO_TYPE_U2: case MONO_TYPE_U4: case MONO_TYPE_U8: case MONO_TYPE_U: return TRUE; } return FALSE; } static gboolean type_is_float (MonoType *type) { MonoClass *klass = mono_class_from_mono_type_internal (type); MonoType *etype = mono_class_get_context (klass)->class_inst->type_argv [0]; return type_enum_is_float (etype->type); } static gboolean type_enum_is_float (MonoTypeEnum type) { return type == MONO_TYPE_R4 || type == MONO_TYPE_R8; } static int type_to_expand_op (MonoType *type) { switch (type->type) { case MONO_TYPE_I1: case MONO_TYPE_U1: return OP_EXPAND_I1; case 
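/*
 * type_to_expand_op (continued below) selects the broadcast opcode per
 * element type; MONO_TYPE_I/MONO_TYPE_U collapse to the I8 or I4 form based
 * on TARGET_SIZEOF_VOID_P, so a nint vector presumably expands exactly like
 * the matching fixed-width integer vector on that target.
 */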
MONO_TYPE_I2: case MONO_TYPE_U2: return OP_EXPAND_I2; case MONO_TYPE_I4: case MONO_TYPE_U4: return OP_EXPAND_I4; case MONO_TYPE_I8: case MONO_TYPE_U8: return OP_EXPAND_I8; case MONO_TYPE_R4: return OP_EXPAND_R4; case MONO_TYPE_R8: return OP_EXPAND_R8; case MONO_TYPE_I: case MONO_TYPE_U: #if TARGET_SIZEOF_VOID_P == 8 return OP_EXPAND_I8; #else return OP_EXPAND_I4; #endif default: g_assert_not_reached (); } } static int type_to_insert_op (MonoType *type) { switch (type->type) { case MONO_TYPE_I1: case MONO_TYPE_U1: return OP_INSERT_I1; case MONO_TYPE_I2: case MONO_TYPE_U2: return OP_INSERT_I2; case MONO_TYPE_I4: case MONO_TYPE_U4: return OP_INSERT_I4; case MONO_TYPE_I8: case MONO_TYPE_U8: return OP_INSERT_I8; case MONO_TYPE_R4: return OP_INSERT_R4; case MONO_TYPE_R8: return OP_INSERT_R8; case MONO_TYPE_I: case MONO_TYPE_U: #if TARGET_SIZEOF_VOID_P == 8 return OP_INSERT_I8; #else return OP_INSERT_I4; #endif default: g_assert_not_reached (); } } typedef struct { const char *name; MonoCPUFeatures feature; const SimdIntrinsic *intrinsics; int intrinsics_size; gboolean jit_supported; } IntrinGroup; typedef MonoInst * (* EmitIntrinsicFn) ( MonoCompile *cfg, MonoMethodSignature *fsig, MonoInst **args, MonoClass *klass, const IntrinGroup *intrin_group, const SimdIntrinsic *info, int id, MonoTypeEnum arg0_type, gboolean is_64bit); static const IntrinGroup unsupported_intrin_group [] = { { "", 0, unsupported, sizeof (unsupported) }, }; static MonoInst * emit_hardware_intrinsics ( MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **args, const IntrinGroup *groups, int groups_size_bytes, EmitIntrinsicFn custom_emit) { MonoClass *klass = cmethod->klass; const IntrinGroup *intrin_group = unsupported_intrin_group; gboolean is_64bit = FALSE; int groups_size = groups_size_bytes / sizeof (groups [0]); for (int i = 0; i < groups_size; ++i) { const IntrinGroup *group = &groups [i]; if (is_hw_intrinsics_class (klass, group->name, &is_64bit)) { intrin_group = group; break; } } gboolean supported = FALSE; MonoTypeEnum arg0_type = fsig->param_count > 0 ? get_underlying_type (fsig->params [0]) : MONO_TYPE_VOID; int id = -1; uint16_t op = 0; uint16_t c0 = 0; const SimdIntrinsic *intrinsics = intrin_group->intrinsics; int intrinsics_size = intrin_group->intrinsics_size; MonoCPUFeatures feature = intrin_group->feature; const SimdIntrinsic *info = lookup_intrins_info ((SimdIntrinsic *) intrinsics, intrinsics_size, cmethod); { if (!info) goto support_probe_complete; id = info->id; // Hardware intrinsics are LLVM-only. if (!COMPILE_LLVM (cfg) && !intrin_group->jit_supported) goto support_probe_complete; if (intrin_group->intrinsics == unsupported) supported = FALSE; else if (feature) supported = (mini_get_cpu_features (cfg) & feature) != 0; else supported = TRUE; op = info->default_op; c0 = info->default_instc0; gboolean is_unsigned = FALSE; gboolean is_float = FALSE; switch (arg0_type) { case MONO_TYPE_U1: case MONO_TYPE_U2: case MONO_TYPE_U4: case MONO_TYPE_U8: case MONO_TYPE_U: is_unsigned = TRUE; break; case MONO_TYPE_R4: case MONO_TYPE_R8: is_float = TRUE; break; } if (is_unsigned && info->unsigned_op != 0) { op = info->unsigned_op; c0 = info->unsigned_instc0; } else if (is_float && info->floating_op != 0) { op = info->floating_op; c0 = info->floating_instc0; } } support_probe_complete: if (id == SN_get_IsSupported) { MonoInst *ins = NULL; EMIT_NEW_ICONST (cfg, ins, supported ? 
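/*
 * get_IsSupported is answered with a constant (the ICONST just below), so a
 * managed guard such as `if (Sse41.IsSupported) ...` (C# shown only as an
 * illustration) can be folded by later passes into a single unconditional
 * path once the JIT knows the CPU features.
 */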
1 : 0); return ins; } if (!supported) { // Can't emit non-supported llvm intrinsics if (cfg->method != cmethod) { // Keep the original call so we end up in the intrinsic method return NULL; } else { // Emit an exception from the intrinsic method mono_emit_jit_icall (cfg, mono_throw_platform_not_supported, NULL); return NULL; } } if (op != 0) return emit_simd_ins_for_sig (cfg, klass, op, c0, arg0_type, fsig, args); return custom_emit (cfg, fsig, args, klass, intrin_group, info, id, arg0_type, is_64bit); } static MonoInst * emit_vector_create_elementwise ( MonoCompile *cfg, MonoMethodSignature *fsig, MonoType *vtype, MonoType *etype, MonoInst **args) { int op = type_to_insert_op (etype); MonoClass *vklass = mono_class_from_mono_type_internal (vtype); MonoInst *ins = emit_xzero (cfg, vklass); for (int i = 0; i < fsig->param_count; ++i) { ins = emit_simd_ins (cfg, vklass, op, ins->dreg, args [i]->dreg); ins->inst_c0 = i; } return ins; } #if defined(TARGET_AMD64) || defined(TARGET_ARM64) static int type_to_xinsert_op (MonoTypeEnum type) { switch (type) { case MONO_TYPE_I1: case MONO_TYPE_U1: return OP_XINSERT_I1; case MONO_TYPE_I2: case MONO_TYPE_U2: return OP_XINSERT_I2; case MONO_TYPE_I4: case MONO_TYPE_U4: return OP_XINSERT_I4; case MONO_TYPE_I8: case MONO_TYPE_U8: return OP_XINSERT_I8; case MONO_TYPE_R4: return OP_XINSERT_R4; case MONO_TYPE_R8: return OP_XINSERT_R8; case MONO_TYPE_I: case MONO_TYPE_U: #if TARGET_SIZEOF_VOID_P == 8 return OP_XINSERT_I8; #else return OP_XINSERT_I4; #endif default: g_assert_not_reached (); } } static int type_to_xextract_op (MonoTypeEnum type) { switch (type) { case MONO_TYPE_I1: case MONO_TYPE_U1: return OP_XEXTRACT_I1; case MONO_TYPE_I2: case MONO_TYPE_U2: return OP_XEXTRACT_I2; case MONO_TYPE_I4: case MONO_TYPE_U4: return OP_XEXTRACT_I4; case MONO_TYPE_I8: case MONO_TYPE_U8: return OP_XEXTRACT_I8; case MONO_TYPE_R4: return OP_XEXTRACT_R4; case MONO_TYPE_R8: return OP_XEXTRACT_R8; case MONO_TYPE_I: case MONO_TYPE_U: #if TARGET_SIZEOF_VOID_P == 8 return OP_XEXTRACT_I8; #else return OP_XEXTRACT_I4; #endif default: g_assert_not_reached (); } } static int type_to_extract_op (MonoTypeEnum type) { switch (type) { case MONO_TYPE_I1: case MONO_TYPE_U1: return OP_EXTRACT_I1; case MONO_TYPE_I2: case MONO_TYPE_U2: return OP_EXTRACT_I2; case MONO_TYPE_I4: case MONO_TYPE_U4: return OP_EXTRACT_I4; case MONO_TYPE_I8: case MONO_TYPE_U8: return OP_EXTRACT_I8; case MONO_TYPE_R4: return OP_EXTRACT_R4; case MONO_TYPE_R8: return OP_EXTRACT_R8; case MONO_TYPE_I: case MONO_TYPE_U: #if TARGET_SIZEOF_VOID_P == 8 return OP_EXTRACT_I8; #else return OP_EXTRACT_I4; #endif default: g_assert_not_reached (); } } static guint16 sri_vector_methods [] = { SN_Abs, SN_Add, SN_AndNot, SN_As, SN_AsByte, SN_AsDouble, SN_AsInt16, SN_AsInt32, SN_AsInt64, SN_AsSByte, SN_AsSingle, SN_AsUInt16, SN_AsUInt32, SN_AsUInt64, SN_AsVector128, SN_AsVector2, SN_AsVector256, SN_AsVector3, SN_AsVector4, SN_BitwiseAnd, SN_BitwiseOr, SN_Ceiling, SN_ConditionalSelect, SN_ConvertToDouble, SN_ConvertToInt32, SN_ConvertToUInt32, SN_Create, SN_CreateScalar, SN_CreateScalarUnsafe, SN_Divide, SN_Equals, SN_EqualsAll, SN_EqualsAny, SN_Floor, SN_GetElement, SN_GetLower, SN_GetUpper, SN_GreaterThan, SN_GreaterThanAll, SN_GreaterThanAny, SN_GreaterThanOrEqual, SN_GreaterThanOrEqualAll, SN_GreaterThanOrEqualAny, SN_LessThan, SN_LessThanAll, SN_LessThanAny, SN_LessThanOrEqual, SN_LessThanOrEqualAll, SN_LessThanOrEqualAny, SN_Max, SN_Min, SN_Multiply, SN_Negate, SN_OnesComplement, SN_Sqrt, SN_Subtract, SN_ToScalar, 
SN_ToVector128, SN_ToVector128Unsafe, SN_ToVector256, SN_ToVector256Unsafe, SN_WithElement, SN_Xor, }; /* nint and nuint haven't been enabled yet for System.Runtime.Intrinsics. * Remove this once support has been added. */ #define MONO_TYPE_IS_INTRINSICS_VECTOR_PRIMITIVE(t) ((MONO_TYPE_IS_VECTOR_PRIMITIVE(t)) && ((t)->type != MONO_TYPE_I) && ((t)->type != MONO_TYPE_U)) static gboolean is_elementwise_create_overload (MonoMethodSignature *fsig, MonoType *ret_type) { uint16_t param_count = fsig->param_count; if (param_count < 1) return FALSE; MonoType *type = fsig->params [0]; if (!MONO_TYPE_IS_INTRINSICS_VECTOR_PRIMITIVE (type)) return FALSE; if (!mono_metadata_type_equal (ret_type, type)) return FALSE; for (uint16_t i = 1; i < param_count; ++i) if (!mono_metadata_type_equal (type, fsig->params [i])) return FALSE; return TRUE; } static gboolean is_create_from_half_vectors_overload (MonoMethodSignature *fsig) { if (fsig->param_count != 2) return FALSE; if (!is_intrinsics_vector_type (fsig->params [0])) return FALSE; return mono_metadata_type_equal (fsig->params [0], fsig->params [1]); } static gboolean is_element_type_primitive (MonoType *vector_type) { MonoType *element_type = get_vector_t_elem_type (vector_type); return MONO_TYPE_IS_INTRINSICS_VECTOR_PRIMITIVE (element_type); } static MonoInst* emit_sri_vector (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **args) { if (!COMPILE_LLVM (cfg)) return NULL; int id = lookup_intrins (sri_vector_methods, sizeof (sri_vector_methods), cmethod); if (id == -1) return NULL; if (!strcmp (m_class_get_name (cfg->method->klass), "Vector256")) return NULL; // TODO: Fix Vector256.WithUpper/WithLower MonoClass *klass = cmethod->klass; MonoTypeEnum arg0_type = fsig->param_count > 0 ? get_underlying_type (fsig->params [0]) : MONO_TYPE_VOID; switch (id) { case SN_Abs: { if (!is_element_type_primitive (fsig->params [0])) return NULL; #ifdef TARGET_ARM64 if (type_enum_is_unsigned (arg0_type)) return NULL; int iid = type_enum_is_float (arg0_type) ? 
INTRINS_AARCH64_ADV_SIMD_FABS : INTRINS_AARCH64_ADV_SIMD_ABS; return emit_simd_ins_for_sig (cfg, klass, OP_XOP_OVR_X_X, iid, arg0_type, fsig, args); #else return NULL; #endif } case SN_Add: case SN_Divide: case SN_Max: case SN_Min: case SN_Multiply: case SN_Subtract: { if (!is_element_type_primitive (fsig->params [0])) return NULL; int instc0 = -1; if (type_enum_is_float (arg0_type)) { switch (id) { case SN_Add: instc0 = OP_FADD; break; case SN_Divide: instc0 = OP_FDIV; break; case SN_Max: instc0 = OP_FMAX; break; case SN_Min: instc0 = OP_FMIN; break; case SN_Multiply: instc0 = OP_FMUL; break; case SN_Subtract: instc0 = OP_FSUB; break; default: g_assert_not_reached (); } } else { switch (id) { case SN_Add: instc0 = OP_IADD; break; case SN_Divide: return NULL; case SN_Max: instc0 = OP_IMAX; break; case SN_Min: instc0 = OP_IMIN; break; case SN_Multiply: instc0 = OP_IMUL; break; case SN_Subtract: instc0 = OP_ISUB; break; default: g_assert_not_reached (); } } return emit_simd_ins_for_sig (cfg, klass, OP_XBINOP, instc0, arg0_type, fsig, args); } case SN_AndNot: if (!is_element_type_primitive (fsig->params [0])) return NULL; #ifdef TARGET_ARM64 return emit_simd_ins_for_sig (cfg, klass, OP_ARM64_BIC, -1, arg0_type, fsig, args); #else return NULL; #endif case SN_BitwiseAnd: case SN_BitwiseOr: case SN_Xor: { if (!is_element_type_primitive (fsig->params [0])) return NULL; int instc0 = -1; switch (id) { case SN_BitwiseAnd: instc0 = XBINOP_FORCEINT_AND; break; case SN_BitwiseOr: instc0 = XBINOP_FORCEINT_OR; break; case SN_Xor: instc0 = XBINOP_FORCEINT_XOR; break; default: g_assert_not_reached (); } return emit_simd_ins_for_sig (cfg, klass, OP_XBINOP_FORCEINT, instc0, arg0_type, fsig, args); } case SN_As: case SN_AsByte: case SN_AsDouble: case SN_AsInt16: case SN_AsInt32: case SN_AsInt64: case SN_AsSByte: case SN_AsSingle: case SN_AsUInt16: case SN_AsUInt32: case SN_AsUInt64: { if (!is_element_type_primitive (fsig->ret) || !is_element_type_primitive (fsig->params [0])) return NULL; return emit_simd_ins (cfg, klass, OP_XCAST, args [0]->dreg, -1); } case SN_Ceiling: case SN_Floor: { #ifdef TARGET_ARM64 if (!type_enum_is_float (arg0_type)) return NULL; int ceil_or_floor = id == SN_Ceiling ? INTRINS_AARCH64_ADV_SIMD_FRINTP : INTRINS_AARCH64_ADV_SIMD_FRINTM; return emit_simd_ins_for_sig (cfg, klass, OP_XOP_OVR_X_X, ceil_or_floor, arg0_type, fsig, args); #else return NULL; #endif } case SN_ConditionalSelect: { if (!is_element_type_primitive (fsig->params [0])) return NULL; #ifdef TARGET_ARM64 return emit_simd_ins_for_sig (cfg, klass, OP_ARM64_BSL, -1, arg0_type, fsig, args); #else return NULL; #endif } case SN_ConvertToDouble: { #ifdef TARGET_ARM64 if ((arg0_type != MONO_TYPE_I8) && (arg0_type != MONO_TYPE_U8)) return NULL; MonoClass *arg_class = mono_class_from_mono_type_internal (fsig->params [0]); int size = mono_class_value_size (arg_class, NULL); int op = -1; if (size == 8) op = arg0_type == MONO_TYPE_I8 ? OP_ARM64_SCVTF_SCALAR : OP_ARM64_UCVTF_SCALAR; else op = arg0_type == MONO_TYPE_I8 ? OP_ARM64_SCVTF : OP_ARM64_UCVTF; return emit_simd_ins_for_sig (cfg, klass, op, -1, arg0_type, fsig, args); #else return NULL; #endif } case SN_ConvertToInt32: case SN_ConvertToUInt32: { #ifdef TARGET_ARM64 if (arg0_type != MONO_TYPE_R4) return NULL; int op = id == SN_ConvertToInt32 ? 
OP_ARM64_FCVTZS : OP_ARM64_FCVTZU; return emit_simd_ins_for_sig (cfg, klass, op, -1, arg0_type, fsig, args); #else return NULL; #endif } case SN_Create: { MonoType *etype = get_vector_t_elem_type (fsig->ret); if (fsig->param_count == 1 && mono_metadata_type_equal (fsig->params [0], etype)) return emit_simd_ins (cfg, klass, type_to_expand_op (etype), args [0]->dreg, -1); else if (is_create_from_half_vectors_overload (fsig)) return emit_simd_ins (cfg, klass, OP_XCONCAT, args [0]->dreg, args [1]->dreg); else if (is_elementwise_create_overload (fsig, etype)) return emit_vector_create_elementwise (cfg, fsig, fsig->ret, etype, args); break; } case SN_CreateScalar: return emit_simd_ins_for_sig (cfg, klass, OP_CREATE_SCALAR, -1, arg0_type, fsig, args); case SN_CreateScalarUnsafe: return emit_simd_ins_for_sig (cfg, klass, OP_CREATE_SCALAR_UNSAFE, -1, arg0_type, fsig, args); case SN_Equals: case SN_EqualsAll: case SN_EqualsAny: { if (!is_element_type_primitive (fsig->params [0])) return NULL; switch (id) { case SN_Equals: return emit_xcompare (cfg, klass, arg0_type, args [0], args [1]); case SN_EqualsAll: return emit_xequal (cfg, klass, args [0], args [1]); case SN_EqualsAny: { MonoClass *arg_class = mono_class_from_mono_type_internal (fsig->params [0]); MonoInst *cmp_eq = emit_xcompare (cfg, arg_class, arg0_type, args [0], args [1]); MonoInst *zero = emit_xzero (cfg, arg_class); return emit_not_xequal (cfg, arg_class, cmp_eq, zero); } default: g_assert_not_reached (); } } case SN_GetElement: { if (!is_element_type_primitive (fsig->params [0])) return NULL; MonoClass *arg_class = mono_class_from_mono_type_internal (fsig->params [0]); MonoType *etype = mono_class_get_context (arg_class)->class_inst->type_argv [0]; int size = mono_class_value_size (arg_class, NULL); int esize = mono_class_value_size (mono_class_from_mono_type_internal (etype), NULL); int elems = size / esize; MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, args [1]->dreg, elems); MONO_EMIT_NEW_COND_EXC (cfg, GE_UN, "ArgumentOutOfRangeException"); int extract_op = type_to_xextract_op (arg0_type); return emit_simd_ins_for_sig (cfg, klass, extract_op, -1, arg0_type, fsig, args); } case SN_GetLower: case SN_GetUpper: { if (!is_element_type_primitive (fsig->params [0])) return NULL; int op = id == SN_GetLower ? 
OP_XLOWER : OP_XUPPER; return emit_simd_ins_for_sig (cfg, klass, op, 0, arg0_type, fsig, args); } case SN_GreaterThan: case SN_GreaterThanOrEqual: case SN_LessThan: case SN_LessThanOrEqual: { if (!is_element_type_primitive (fsig->params [0])) return NULL; return emit_xcompare_for_intrinsic (cfg, klass, id, arg0_type, args [0], args [1]); } case SN_GreaterThanAll: case SN_GreaterThanAny: case SN_GreaterThanOrEqualAll: case SN_GreaterThanOrEqualAny: case SN_LessThanAll: case SN_LessThanAny: case SN_LessThanOrEqualAll: case SN_LessThanOrEqualAny: { if (!is_element_type_primitive (fsig->params [0])) return NULL; g_assert (fsig->param_count == 2 && fsig->ret->type == MONO_TYPE_BOOLEAN && mono_metadata_type_equal (fsig->params [0], fsig->params [1])); MonoInst *cmp = emit_xcompare_for_intrinsic (cfg, klass, id, arg0_type, args [0], args [1]); MonoClass *arg_class = mono_class_from_mono_type_internal (fsig->params [0]); switch (id) { case SN_GreaterThanAll: case SN_GreaterThanOrEqualAll: case SN_LessThanAll: case SN_LessThanOrEqualAll: { // for floating point numbers all ones is NaN and so // they must be treated differently than integer types if (type_enum_is_float (arg0_type)) { MonoInst *zero = emit_xzero (cfg, arg_class); MonoInst *inverted_cmp = emit_xcompare (cfg, klass, arg0_type, cmp, zero); return emit_xequal (cfg, klass, inverted_cmp, zero); } MonoInst *ones = emit_xones (cfg, arg_class); return emit_xequal (cfg, klass, cmp, ones); } case SN_GreaterThanAny: case SN_GreaterThanOrEqualAny: case SN_LessThanAny: case SN_LessThanOrEqualAny: { MonoInst *zero = emit_xzero (cfg, arg_class); return emit_not_xequal (cfg, klass, cmp, zero); } default: g_assert_not_reached (); } } case SN_Negate: case SN_OnesComplement: { if (!is_element_type_primitive (fsig->params [0])) return NULL; #ifdef TARGET_ARM64 int op = id == SN_Negate ? OP_ARM64_XNEG : OP_ARM64_MVN; return emit_simd_ins_for_sig (cfg, klass, op, -1, arg0_type, fsig, args); #else return NULL; #endif } case SN_Sqrt: { if (!is_element_type_primitive (fsig->params [0])) return NULL; #ifdef TARGET_ARM64 if (!type_enum_is_float (arg0_type)) return NULL; return emit_simd_ins_for_sig (cfg, klass, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FSQRT, arg0_type, fsig, args); #else return NULL; #endif } case SN_ToScalar: { if (!is_element_type_primitive (fsig->params [0])) return NULL; int extract_op = type_to_extract_op (arg0_type); return emit_simd_ins_for_sig (cfg, klass, extract_op, 0, arg0_type, fsig, args); } case SN_ToVector128: case SN_ToVector128Unsafe: { if (!is_element_type_primitive (fsig->params [0])) return NULL; int op = id == SN_ToVector128 ? 
OP_XWIDEN : OP_XWIDEN_UNSAFE; return emit_simd_ins_for_sig (cfg, klass, op, 0, arg0_type, fsig, args); } case SN_WithElement: { if (!is_element_type_primitive (fsig->params [0])) return NULL; MonoClass *arg_class = mono_class_from_mono_type_internal (fsig->params [0]); MonoType *etype = mono_class_get_context (arg_class)->class_inst->type_argv [0]; int size = mono_class_value_size (arg_class, NULL); int esize = mono_class_value_size (mono_class_from_mono_type_internal (etype), NULL); int elems = size / esize; MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, args [1]->dreg, elems); MONO_EMIT_NEW_COND_EXC (cfg, GE_UN, "ArgumentOutOfRangeException"); int insert_op = type_to_xinsert_op (arg0_type); MonoInst *ins = emit_simd_ins (cfg, klass, insert_op, args [0]->dreg, args [2]->dreg); ins->sreg3 = args [1]->dreg; ins->inst_c1 = arg0_type; return ins; } case SN_WithLower: case SN_WithUpper: { if (!is_element_type_primitive (fsig->params [0])) return NULL; int op = id == SN_WithLower ? OP_XINSERT_LOWER : OP_XINSERT_UPPER; return emit_simd_ins_for_sig (cfg, klass, op, 0, arg0_type, fsig, args); } default: break; } return NULL; } static guint16 vector64_vector128_t_methods [] = { SN_Equals, SN_get_AllBitsSet, SN_get_Count, SN_get_IsSupported, SN_get_Zero, SN_op_Addition, SN_op_Equality, SN_op_Inequality, SN_op_Subtraction, }; static MonoInst* emit_vector64_vector128_t (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **args) { int id = lookup_intrins (vector64_vector128_t_methods, sizeof (vector64_vector128_t_methods), cmethod); if (id == -1) return NULL; MonoClass *klass = cmethod->klass; MonoType *type = m_class_get_byval_arg (klass); MonoType *etype = mono_class_get_context (klass)->class_inst->type_argv [0]; int size = mono_class_value_size (klass, NULL); int esize = mono_class_value_size (mono_class_from_mono_type_internal (etype), NULL); g_assert (size > 0); g_assert (esize > 0); int len = size / esize; if (!MONO_TYPE_IS_INTRINSICS_VECTOR_PRIMITIVE (etype)) return NULL; if (cfg->verbose_level > 1) { char *name = mono_method_full_name (cmethod, TRUE); printf (" SIMD intrinsic %s\n", name); g_free (name); } switch (id) { case SN_get_IsSupported: { MonoInst *ins = NULL; EMIT_NEW_ICONST (cfg, ins, 1); return ins; } default: break; } if (!COMPILE_LLVM (cfg)) return NULL; switch (id) { case SN_get_Count: { MonoInst *ins = NULL; if (!(fsig->param_count == 0 && fsig->ret->type == MONO_TYPE_I4)) break; EMIT_NEW_ICONST (cfg, ins, len); return ins; } case SN_get_Zero: { return emit_xzero (cfg, klass); } case SN_get_AllBitsSet: { return emit_xones (cfg, klass); } case SN_Equals: { if (fsig->param_count == 1 && fsig->ret->type == MONO_TYPE_BOOLEAN && mono_metadata_type_equal (fsig->params [0], type)) { int sreg1 = load_simd_vreg (cfg, cmethod, args [0], NULL); return emit_simd_ins (cfg, klass, OP_XEQUAL, sreg1, args [1]->dreg); } break; } case SN_op_Addition: case SN_op_Subtraction: { if (!(fsig->param_count == 2 && mono_metadata_type_equal (fsig->ret, type) && mono_metadata_type_equal (fsig->params [0], type) && mono_metadata_type_equal (fsig->params [1], type))) return NULL; MonoInst *ins = emit_simd_ins (cfg, klass, OP_XBINOP, args [0]->dreg, args [1]->dreg); ins->inst_c1 = etype->type; if (etype->type == MONO_TYPE_R4 || etype->type == MONO_TYPE_R8) ins->inst_c0 = id == SN_op_Addition ? OP_FADD : OP_FSUB; else ins->inst_c0 = id == SN_op_Addition ?
OP_IADD : OP_ISUB; return ins; } case SN_op_Equality: case SN_op_Inequality: g_assert (fsig->param_count == 2 && fsig->ret->type == MONO_TYPE_BOOLEAN && mono_metadata_type_equal (fsig->params [0], type) && mono_metadata_type_equal (fsig->params [1], type)); switch (id) { case SN_op_Equality: return emit_xequal (cfg, klass, args [0], args [1]); case SN_op_Inequality: return emit_not_xequal (cfg, klass, args [0], args [1]); default: g_assert_not_reached (); } default: break; } return NULL; } #endif // defined(TARGET_AMD64) || defined(TARGET_ARM64) #ifdef TARGET_AMD64 static guint16 vector_methods [] = { SN_ConvertToDouble, SN_ConvertToInt32, SN_ConvertToInt64, SN_ConvertToSingle, SN_ConvertToUInt32, SN_ConvertToUInt64, SN_Narrow, SN_Widen, SN_get_IsHardwareAccelerated, }; static MonoInst* emit_sys_numerics_vector (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **args) { MonoInst *ins; gboolean supported = FALSE; int id; MonoType *etype; id = lookup_intrins (vector_methods, sizeof (vector_methods), cmethod); if (id == -1) return NULL; //printf ("%s\n", mono_method_full_name (cmethod, 1)); #ifdef MONO_ARCH_SIMD_INTRINSICS supported = TRUE; #endif if (cfg->verbose_level > 1) { char *name = mono_method_full_name (cmethod, TRUE); printf (" SIMD intrinsic %s\n", name); g_free (name); } switch (id) { case SN_get_IsHardwareAccelerated: EMIT_NEW_ICONST (cfg, ins, supported ? 1 : 0); ins->type = STACK_I4; return ins; case SN_ConvertToInt32: etype = get_vector_t_elem_type (fsig->params [0]); g_assert (etype->type == MONO_TYPE_R4); return emit_simd_ins (cfg, mono_class_from_mono_type_internal (fsig->ret), OP_CVTPS2DQ, args [0]->dreg, -1); case SN_ConvertToSingle: etype = get_vector_t_elem_type (fsig->params [0]); g_assert (etype->type == MONO_TYPE_I4 || etype->type == MONO_TYPE_U4); // FIXME: if (etype->type == MONO_TYPE_U4) return NULL; return emit_simd_ins (cfg, mono_class_from_mono_type_internal (fsig->ret), OP_CVTDQ2PS, args [0]->dreg, -1); case SN_ConvertToDouble: case SN_ConvertToInt64: case SN_ConvertToUInt32: case SN_ConvertToUInt64: case SN_Narrow: case SN_Widen: // FIXME: break; default: break; } return NULL; } static guint16 vector_t_methods [] = { SN_ctor, SN_CopyTo, SN_Equals, SN_GreaterThan, SN_GreaterThanOrEqual, SN_LessThan, SN_LessThanOrEqual, SN_Max, SN_Min, SN_get_AllBitsSet, SN_get_Count, SN_get_Item, SN_get_One, SN_get_Zero, SN_op_Addition, SN_op_BitwiseAnd, SN_op_BitwiseOr, SN_op_Division, SN_op_Equality, SN_op_ExclusiveOr, SN_op_Explicit, SN_op_Inequality, SN_op_Multiply, SN_op_Subtraction }; static MonoInst* emit_sys_numerics_vector_t (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **args) { MonoInst *ins; MonoType *type, *etype; MonoClass *klass; int size, len, id; gboolean is_unsigned; static const float r4_one = 1.0f; static const double r8_one = 1.0; id = lookup_intrins (vector_t_methods, sizeof (vector_t_methods), cmethod); if (id == -1) return NULL; klass = cmethod->klass; type = m_class_get_byval_arg (klass); etype = mono_class_get_context (klass)->class_inst->type_argv [0]; size = mono_class_value_size (mono_class_from_mono_type_internal (etype), NULL); g_assert (size); len = register_size / size; if (!MONO_TYPE_IS_PRIMITIVE (etype) || etype->type == MONO_TYPE_CHAR || etype->type == MONO_TYPE_BOOLEAN) return NULL; if (cfg->verbose_level > 1) { char *name = mono_method_full_name (cmethod, TRUE); printf (" SIMD intrinsic %s\n", name); g_free (name); } switch (id) { case SN_get_Count: if (!(fsig->param_count == 0 && 
fsig->ret->type == MONO_TYPE_I4)) break; EMIT_NEW_ICONST (cfg, ins, len); return ins; case SN_get_Zero: g_assert (fsig->param_count == 0 && mono_metadata_type_equal (fsig->ret, type)); return emit_xzero (cfg, klass); case SN_get_One: { g_assert (fsig->param_count == 0 && mono_metadata_type_equal (fsig->ret, type)); MonoInst *one = NULL; int expand_opcode = type_to_expand_op (etype); MONO_INST_NEW (cfg, one, -1); switch (expand_opcode) { case OP_EXPAND_R4: one->opcode = OP_R4CONST; one->type = STACK_R4; one->inst_p0 = (void *) &r4_one; break; case OP_EXPAND_R8: one->opcode = OP_R8CONST; one->type = STACK_R8; one->inst_p0 = (void *) &r8_one; break; default: one->opcode = OP_ICONST; one->type = STACK_I4; one->inst_c0 = 1; break; } one->dreg = alloc_dreg (cfg, (MonoStackType)one->type); MONO_ADD_INS (cfg->cbb, one); return emit_simd_ins (cfg, klass, expand_opcode, one->dreg, -1); } case SN_get_AllBitsSet: { return emit_xones (cfg, klass); } case SN_get_Item: { if (!COMPILE_LLVM (cfg)) return NULL; MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, args [1]->dreg, len); MONO_EMIT_NEW_COND_EXC (cfg, GE_UN, "ArgumentOutOfRangeException"); MonoTypeEnum ty = etype->type; int opcode = type_to_xextract_op (ty); int src1 = load_simd_vreg (cfg, cmethod, args [0], NULL); MonoInst *ins = emit_simd_ins (cfg, klass, opcode, src1, args [1]->dreg); ins->inst_c1 = ty; return ins; } case SN_ctor: if (fsig->param_count == 1 && mono_metadata_type_equal (fsig->params [0], etype)) { int dreg = load_simd_vreg (cfg, cmethod, args [0], NULL); int opcode = type_to_expand_op (etype); ins = emit_simd_ins (cfg, klass, opcode, args [1]->dreg, -1); ins->dreg = dreg; return ins; } if ((fsig->param_count == 1 || fsig->param_count == 2) && (fsig->params [0]->type == MONO_TYPE_SZARRAY)) { MonoInst *array_ins = args [1]; MonoInst *index_ins; MonoInst *ldelema_ins; MonoInst *var; int end_index_reg; if (args [0]->opcode != OP_LDADDR) return NULL; /* .ctor (T[]) or .ctor (T[], index) */ if (fsig->param_count == 2) { index_ins = args [2]; } else { EMIT_NEW_ICONST (cfg, index_ins, 0); } /* Emit bounds check for the index (index >= 0) */ mini_emit_bounds_check_offset (cfg, array_ins->dreg, MONO_STRUCT_OFFSET (MonoArray, max_length), index_ins->dreg, "ArgumentOutOfRangeException"); /* Emit bounds check for the end (index + len - 1 < array length) */ end_index_reg = alloc_ireg (cfg); EMIT_NEW_BIALU_IMM (cfg, ins, OP_IADD_IMM, end_index_reg, index_ins->dreg, len - 1); mini_emit_bounds_check_offset (cfg, array_ins->dreg, MONO_STRUCT_OFFSET (MonoArray, max_length), end_index_reg, "ArgumentOutOfRangeException"); /* Load the array slice into the simd reg */ ldelema_ins = mini_emit_ldelema_1_ins (cfg, mono_class_from_mono_type_internal (etype), array_ins, index_ins, FALSE, FALSE); g_assert (args [0]->opcode == OP_LDADDR); var = (MonoInst*)args [0]->inst_p0; EMIT_NEW_LOAD_MEMBASE (cfg, ins, OP_LOADX_MEMBASE, var->dreg, ldelema_ins->dreg, 0); ins->klass = cmethod->klass; return args [0]; } break; case SN_CopyTo: if ((fsig->param_count == 1 || fsig->param_count == 2) && (fsig->params [0]->type == MONO_TYPE_SZARRAY)) { MonoInst *array_ins = args [1]; MonoInst *index_ins; MonoInst *ldelema_ins; int val_vreg, end_index_reg; val_vreg = load_simd_vreg (cfg, cmethod, args [0], NULL); /* CopyTo (T[]) or CopyTo (T[], index) */ if (fsig->param_count == 2) { index_ins = args [2]; } else { EMIT_NEW_ICONST (cfg, index_ins, 0); } /* CopyTo () does complicated argument checks */ mini_emit_bounds_check_offset (cfg, array_ins->dreg, MONO_STRUCT_OFFSET 
(MonoArray, max_length), index_ins->dreg, "ArgumentOutOfRangeException"); end_index_reg = alloc_ireg (cfg); int len_reg = alloc_ireg (cfg); MONO_EMIT_NEW_LOAD_MEMBASE_OP_FLAGS (cfg, OP_LOADI4_MEMBASE, len_reg, array_ins->dreg, MONO_STRUCT_OFFSET (MonoArray, max_length), MONO_INST_INVARIANT_LOAD); EMIT_NEW_BIALU (cfg, ins, OP_ISUB, end_index_reg, len_reg, index_ins->dreg); MONO_EMIT_NEW_BIALU_IMM (cfg, OP_COMPARE_IMM, -1, end_index_reg, len); MONO_EMIT_NEW_COND_EXC (cfg, LT, "ArgumentException"); /* Load the array slice into the simd reg */ ldelema_ins = mini_emit_ldelema_1_ins (cfg, mono_class_from_mono_type_internal (etype), array_ins, index_ins, FALSE, FALSE); EMIT_NEW_STORE_MEMBASE (cfg, ins, OP_STOREX_MEMBASE, ldelema_ins->dreg, 0, val_vreg); ins->klass = cmethod->klass; return ins; } break; case SN_Equals: if (fsig->param_count == 1 && fsig->ret->type == MONO_TYPE_BOOLEAN && mono_metadata_type_equal (fsig->params [0], type)) { int sreg1 = load_simd_vreg (cfg, cmethod, args [0], NULL); return emit_simd_ins (cfg, klass, OP_XEQUAL, sreg1, args [1]->dreg); } else if (fsig->param_count == 2 && mono_metadata_type_equal (fsig->ret, type) && mono_metadata_type_equal (fsig->params [0], type) && mono_metadata_type_equal (fsig->params [1], type)) { /* Per element equality */ return emit_xcompare (cfg, klass, etype->type, args [0], args [1]); } break; case SN_op_Equality: case SN_op_Inequality: g_assert (fsig->param_count == 2 && fsig->ret->type == MONO_TYPE_BOOLEAN && mono_metadata_type_equal (fsig->params [0], type) && mono_metadata_type_equal (fsig->params [1], type)); switch (id) { case SN_op_Equality: return emit_xequal (cfg, klass, args [0], args [1]); case SN_op_Inequality: return emit_not_xequal (cfg, klass, args [0], args [1]); default: g_assert_not_reached (); } case SN_GreaterThan: case SN_GreaterThanOrEqual: case SN_LessThan: case SN_LessThanOrEqual: g_assert (fsig->param_count == 2 && mono_metadata_type_equal (fsig->ret, type) && mono_metadata_type_equal (fsig->params [0], type) && mono_metadata_type_equal (fsig->params [1], type)); is_unsigned = etype->type == MONO_TYPE_U1 || etype->type == MONO_TYPE_U2 || etype->type == MONO_TYPE_U4 || etype->type == MONO_TYPE_U8 || etype->type == MONO_TYPE_U; ins = emit_xcompare (cfg, klass, etype->type, args [0], args [1]); switch (id) { case SN_GreaterThan: ins->inst_c0 = is_unsigned ? CMP_GT_UN : CMP_GT; break; case SN_GreaterThanOrEqual: ins->inst_c0 = is_unsigned ? CMP_GE_UN : CMP_GE; break; case SN_LessThan: ins->inst_c0 = is_unsigned ? CMP_LT_UN : CMP_LT; break; case SN_LessThanOrEqual: ins->inst_c0 = is_unsigned ? 
CMP_LE_UN : CMP_LE; break; default: g_assert_not_reached (); } return ins; case SN_op_Explicit: return emit_simd_ins (cfg, klass, OP_XCAST, args [0]->dreg, -1); case SN_op_Addition: case SN_op_Subtraction: case SN_op_Division: case SN_op_Multiply: case SN_op_BitwiseAnd: case SN_op_BitwiseOr: case SN_op_ExclusiveOr: case SN_Max: case SN_Min: if (!(fsig->param_count == 2 && mono_metadata_type_equal (fsig->ret, type) && mono_metadata_type_equal (fsig->params [0], type) && mono_metadata_type_equal (fsig->params [1], type))) return NULL; ins = emit_simd_ins (cfg, klass, OP_XBINOP, args [0]->dreg, args [1]->dreg); ins->inst_c1 = etype->type; if (type_enum_is_float (etype->type)) { switch (id) { case SN_op_Addition: ins->inst_c0 = OP_FADD; break; case SN_op_Subtraction: ins->inst_c0 = OP_FSUB; break; case SN_op_Multiply: ins->inst_c0 = OP_FMUL; break; case SN_op_Division: ins->inst_c0 = OP_FDIV; break; case SN_Max: ins->inst_c0 = OP_FMAX; break; case SN_Min: ins->inst_c0 = OP_FMIN; break; default: NULLIFY_INS (ins); return NULL; } } else { switch (id) { case SN_op_Addition: ins->inst_c0 = OP_IADD; break; case SN_op_Subtraction: ins->inst_c0 = OP_ISUB; break; /* case SN_op_Division: ins->inst_c0 = OP_IDIV; break; case SN_op_Multiply: ins->inst_c0 = OP_IMUL; break; */ case SN_op_BitwiseAnd: ins->inst_c0 = OP_IAND; break; case SN_op_BitwiseOr: ins->inst_c0 = OP_IOR; break; case SN_op_ExclusiveOr: ins->inst_c0 = OP_IXOR; break; case SN_Max: ins->inst_c0 = OP_IMAX; break; case SN_Min: ins->inst_c0 = OP_IMIN; break; default: NULLIFY_INS (ins); return NULL; } } return ins; default: break; } return NULL; } #endif // TARGET_AMD64 #ifdef TARGET_ARM64 static SimdIntrinsic armbase_methods [] = { {SN_LeadingSignCount}, {SN_LeadingZeroCount}, {SN_MultiplyHigh}, {SN_ReverseElementBits}, {SN_get_IsSupported}, }; static SimdIntrinsic crc32_methods [] = { {SN_ComputeCrc32}, {SN_ComputeCrc32C}, {SN_get_IsSupported} }; static SimdIntrinsic crypto_aes_methods [] = { {SN_Decrypt, OP_XOP_X_X_X, INTRINS_AARCH64_AESD}, {SN_Encrypt, OP_XOP_X_X_X, INTRINS_AARCH64_AESE}, {SN_InverseMixColumns, OP_XOP_X_X, INTRINS_AARCH64_AESIMC}, {SN_MixColumns, OP_XOP_X_X, INTRINS_AARCH64_AESMC}, {SN_PolynomialMultiplyWideningLower}, {SN_PolynomialMultiplyWideningUpper}, {SN_get_IsSupported}, }; static SimdIntrinsic sha1_methods [] = { {SN_FixedRotate, OP_XOP_X_X, INTRINS_AARCH64_SHA1H}, {SN_HashUpdateChoose, OP_XOP_X_X_X_X, INTRINS_AARCH64_SHA1C}, {SN_HashUpdateMajority, OP_XOP_X_X_X_X, INTRINS_AARCH64_SHA1M}, {SN_HashUpdateParity, OP_XOP_X_X_X_X, INTRINS_AARCH64_SHA1P}, {SN_ScheduleUpdate0, OP_XOP_X_X_X_X, INTRINS_AARCH64_SHA1SU0}, {SN_ScheduleUpdate1, OP_XOP_X_X_X, INTRINS_AARCH64_SHA1SU1}, {SN_get_IsSupported} }; static SimdIntrinsic sha256_methods [] = { {SN_HashUpdate1, OP_XOP_X_X_X_X, INTRINS_AARCH64_SHA256H}, {SN_HashUpdate2, OP_XOP_X_X_X_X, INTRINS_AARCH64_SHA256H2}, {SN_ScheduleUpdate0, OP_XOP_X_X_X, INTRINS_AARCH64_SHA256SU0}, {SN_ScheduleUpdate1, OP_XOP_X_X_X_X, INTRINS_AARCH64_SHA256SU1}, {SN_get_IsSupported} }; // This table must be kept in sorted order. ASCII } is sorted after alphanumeric // characters, so blind use of your editor's "sort lines" facility will // mis-order the lines. // // In Vim you can use `sort /.*{[0-9A-z]*/ r` to sort this table. 
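/* A note on row layout (inferred from how emit_hardware_intrinsics () consumes these tables, not from an authoritative spec): each SimdIntrinsic row reads as {name, default_op, default_instc0, unsigned_op, unsigned_instc0, floating_op, floating_instc0}. Omitted trailing columns are zero, so a row that names only the method falls through to the custom_emit callback (emit_arm64_intrinsics () below), while a zero unsigned_op/floating_op keeps the default_op pair for that element type. */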
static SimdIntrinsic advsimd_methods [] = { {SN_Abs, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_ABS, None, None, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FABS}, {SN_AbsSaturate, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_SQABS}, {SN_AbsSaturateScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_SQABS}, {SN_AbsScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_ABS, None, None, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FABS}, {SN_AbsoluteCompareGreaterThan}, {SN_AbsoluteCompareGreaterThanOrEqual}, {SN_AbsoluteCompareGreaterThanOrEqualScalar}, {SN_AbsoluteCompareGreaterThanScalar}, {SN_AbsoluteCompareLessThan}, {SN_AbsoluteCompareLessThanOrEqual}, {SN_AbsoluteCompareLessThanOrEqualScalar}, {SN_AbsoluteCompareLessThanScalar}, {SN_AbsoluteDifference, OP_ARM64_SABD, None, OP_ARM64_UABD, None, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_FABD}, {SN_AbsoluteDifferenceAdd, OP_ARM64_SABA, None, OP_ARM64_UABA}, {SN_AbsoluteDifferenceScalar, OP_XOP_OVR_SCALAR_X_X_X, INTRINS_AARCH64_ADV_SIMD_FABD_SCALAR}, {SN_AbsoluteDifferenceWideningLower, OP_ARM64_SABDL, None, OP_ARM64_UABDL}, {SN_AbsoluteDifferenceWideningLowerAndAdd, OP_ARM64_SABAL, None, OP_ARM64_UABAL}, {SN_AbsoluteDifferenceWideningUpper, OP_ARM64_SABDL2, None, OP_ARM64_UABDL2}, {SN_AbsoluteDifferenceWideningUpperAndAdd, OP_ARM64_SABAL2, None, OP_ARM64_UABAL2}, {SN_Add, OP_XBINOP, OP_IADD, None, None, OP_XBINOP, OP_FADD}, {SN_AddAcross, OP_ARM64_XHORIZ, INTRINS_AARCH64_ADV_SIMD_SADDV, OP_ARM64_XHORIZ, INTRINS_AARCH64_ADV_SIMD_UADDV}, {SN_AddAcrossWidening, OP_ARM64_SADDLV, None, OP_ARM64_UADDLV}, {SN_AddHighNarrowingLower, OP_ARM64_ADDHN}, {SN_AddHighNarrowingUpper, OP_ARM64_ADDHN2}, {SN_AddPairwise, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_ADDP, None, None, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_FADDP}, {SN_AddPairwiseScalar, OP_ARM64_ADDP_SCALAR, None, None, None, OP_ARM64_FADDP_SCALAR}, {SN_AddPairwiseWidening, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_SADDLP, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_UADDLP}, {SN_AddPairwiseWideningAndAdd, OP_ARM64_SADALP, None, OP_ARM64_UADALP}, {SN_AddPairwiseWideningAndAddScalar, OP_ARM64_SADALP, None, OP_ARM64_UADALP}, {SN_AddPairwiseWideningScalar, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_SADDLP, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_UADDLP}, {SN_AddRoundedHighNarrowingLower, OP_ARM64_RADDHN}, {SN_AddRoundedHighNarrowingUpper, OP_ARM64_RADDHN2}, {SN_AddSaturate}, {SN_AddSaturateScalar}, {SN_AddScalar, OP_XBINOP_SCALAR, OP_IADD, None, None, OP_XBINOP_SCALAR, OP_FADD}, {SN_AddWideningLower, OP_ARM64_SADD, None, OP_ARM64_UADD}, {SN_AddWideningUpper, OP_ARM64_SADD2, None, OP_ARM64_UADD2}, {SN_And, OP_XBINOP_FORCEINT, XBINOP_FORCEINT_AND}, {SN_BitwiseClear, OP_ARM64_BIC}, {SN_BitwiseSelect, OP_ARM64_BSL}, {SN_Ceiling, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FRINTP}, {SN_CeilingScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FRINTP}, {SN_CompareEqual, OP_XCOMPARE, CMP_EQ, OP_XCOMPARE, CMP_EQ, OP_XCOMPARE_FP, CMP_EQ}, {SN_CompareEqualScalar, OP_XCOMPARE_SCALAR, CMP_EQ, OP_XCOMPARE_SCALAR, CMP_EQ, OP_XCOMPARE_FP_SCALAR, CMP_EQ}, {SN_CompareGreaterThan, OP_XCOMPARE, CMP_GT, OP_XCOMPARE, CMP_GT_UN, OP_XCOMPARE_FP, CMP_GT}, {SN_CompareGreaterThanOrEqual, OP_XCOMPARE, CMP_GE, OP_XCOMPARE, CMP_GE_UN, OP_XCOMPARE_FP, CMP_GE}, {SN_CompareGreaterThanOrEqualScalar, OP_XCOMPARE_SCALAR, CMP_GE, OP_XCOMPARE_SCALAR, CMP_GE_UN, OP_XCOMPARE_FP_SCALAR, CMP_GE}, {SN_CompareGreaterThanScalar, OP_XCOMPARE_SCALAR, CMP_GT, OP_XCOMPARE_SCALAR, CMP_GT_UN, OP_XCOMPARE_FP_SCALAR, CMP_GT}, {SN_CompareLessThan, 
OP_XCOMPARE, CMP_LT, OP_XCOMPARE, CMP_LT_UN, OP_XCOMPARE_FP, CMP_LT}, {SN_CompareLessThanOrEqual, OP_XCOMPARE, CMP_LE, OP_XCOMPARE, CMP_LE_UN, OP_XCOMPARE_FP, CMP_LE}, {SN_CompareLessThanOrEqualScalar, OP_XCOMPARE_SCALAR, CMP_LE, OP_XCOMPARE_SCALAR, CMP_LE_UN, OP_XCOMPARE_FP_SCALAR, CMP_LE}, {SN_CompareLessThanScalar, OP_XCOMPARE_SCALAR, CMP_LT, OP_XCOMPARE_SCALAR, CMP_LT_UN, OP_XCOMPARE_FP_SCALAR, CMP_LT}, {SN_CompareTest, OP_ARM64_CMTST}, {SN_CompareTestScalar, OP_ARM64_CMTST}, {SN_ConvertToDouble, OP_ARM64_SCVTF, None, OP_ARM64_UCVTF, None, OP_ARM64_FCVTL}, {SN_ConvertToDoubleScalar, OP_ARM64_SCVTF_SCALAR, None, OP_ARM64_UCVTF_SCALAR}, {SN_ConvertToDoubleUpper, OP_ARM64_FCVTL2}, {SN_ConvertToInt32RoundAwayFromZero, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTAS}, {SN_ConvertToInt32RoundAwayFromZeroScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTAS}, {SN_ConvertToInt32RoundToEven, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTNS}, {SN_ConvertToInt32RoundToEvenScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTNS}, {SN_ConvertToInt32RoundToNegativeInfinity, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTMS}, {SN_ConvertToInt32RoundToNegativeInfinityScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTMS}, {SN_ConvertToInt32RoundToPositiveInfinity, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTPS}, {SN_ConvertToInt32RoundToPositiveInfinityScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTPS}, {SN_ConvertToInt32RoundToZero, OP_ARM64_FCVTZS}, {SN_ConvertToInt32RoundToZeroScalar, OP_ARM64_FCVTZS_SCALAR}, {SN_ConvertToInt64RoundAwayFromZero, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTAS}, {SN_ConvertToInt64RoundAwayFromZeroScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTAS}, {SN_ConvertToInt64RoundToEven, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTNS}, {SN_ConvertToInt64RoundToEvenScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTNS}, {SN_ConvertToInt64RoundToNegativeInfinity, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTMS}, {SN_ConvertToInt64RoundToNegativeInfinityScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTMS}, {SN_ConvertToInt64RoundToPositiveInfinity, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTPS}, {SN_ConvertToInt64RoundToPositiveInfinityScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTPS}, {SN_ConvertToInt64RoundToZero, OP_ARM64_FCVTZS}, {SN_ConvertToInt64RoundToZeroScalar, OP_ARM64_FCVTZS_SCALAR}, {SN_ConvertToSingle, OP_ARM64_SCVTF, None, OP_ARM64_UCVTF}, {SN_ConvertToSingleLower, OP_ARM64_FCVTN}, {SN_ConvertToSingleRoundToOddLower, OP_ARM64_FCVTXN}, {SN_ConvertToSingleRoundToOddUpper, OP_ARM64_FCVTXN2}, {SN_ConvertToSingleScalar, OP_ARM64_SCVTF_SCALAR, None, OP_ARM64_UCVTF_SCALAR}, {SN_ConvertToSingleUpper, OP_ARM64_FCVTN2}, {SN_ConvertToUInt32RoundAwayFromZero, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTAU}, {SN_ConvertToUInt32RoundAwayFromZeroScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTAU}, {SN_ConvertToUInt32RoundToEven, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTNU}, {SN_ConvertToUInt32RoundToEvenScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTNU}, {SN_ConvertToUInt32RoundToNegativeInfinity, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTMU}, {SN_ConvertToUInt32RoundToNegativeInfinityScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTMU}, {SN_ConvertToUInt32RoundToPositiveInfinity, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTPU}, {SN_ConvertToUInt32RoundToPositiveInfinityScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTPU}, 
{SN_ConvertToUInt32RoundToZero, OP_ARM64_FCVTZU}, {SN_ConvertToUInt32RoundToZeroScalar, OP_ARM64_FCVTZU_SCALAR}, {SN_ConvertToUInt64RoundAwayFromZero, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTAU}, {SN_ConvertToUInt64RoundAwayFromZeroScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTAU}, {SN_ConvertToUInt64RoundToEven, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTNU}, {SN_ConvertToUInt64RoundToEvenScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTNU}, {SN_ConvertToUInt64RoundToNegativeInfinity, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTMU}, {SN_ConvertToUInt64RoundToNegativeInfinityScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTMU}, {SN_ConvertToUInt64RoundToPositiveInfinity, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTPU}, {SN_ConvertToUInt64RoundToPositiveInfinityScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FCVTPU}, {SN_ConvertToUInt64RoundToZero, OP_ARM64_FCVTZU}, {SN_ConvertToUInt64RoundToZeroScalar, OP_ARM64_FCVTZU_SCALAR}, {SN_Divide, OP_XBINOP, OP_FDIV}, {SN_DivideScalar, OP_XBINOP_SCALAR, OP_FDIV}, {SN_DuplicateSelectedScalarToVector128}, {SN_DuplicateSelectedScalarToVector64}, {SN_DuplicateToVector128}, {SN_DuplicateToVector64}, {SN_Extract}, {SN_ExtractNarrowingLower, OP_ARM64_XTN}, {SN_ExtractNarrowingSaturateLower, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_SQXTN, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_UQXTN}, {SN_ExtractNarrowingSaturateScalar, OP_ARM64_XNARROW_SCALAR, INTRINS_AARCH64_ADV_SIMD_SQXTN, OP_ARM64_XNARROW_SCALAR, INTRINS_AARCH64_ADV_SIMD_UQXTN}, {SN_ExtractNarrowingSaturateUnsignedLower, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_SQXTUN}, {SN_ExtractNarrowingSaturateUnsignedScalar, OP_ARM64_XNARROW_SCALAR, INTRINS_AARCH64_ADV_SIMD_SQXTUN}, {SN_ExtractNarrowingSaturateUnsignedUpper, OP_ARM64_SQXTUN2}, {SN_ExtractNarrowingSaturateUpper, OP_ARM64_SQXTN2, None, OP_ARM64_UQXTN2}, {SN_ExtractNarrowingUpper, OP_ARM64_XTN2}, {SN_ExtractVector128, OP_ARM64_EXT}, {SN_ExtractVector64, OP_ARM64_EXT}, {SN_Floor, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FRINTM}, {SN_FloorScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FRINTM}, {SN_FusedAddHalving, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_SHADD, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_UHADD}, {SN_FusedAddRoundedHalving, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_SRHADD, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_URHADD}, {SN_FusedMultiplyAdd, OP_ARM64_FMADD}, {SN_FusedMultiplyAddByScalar, OP_ARM64_FMADD_BYSCALAR}, {SN_FusedMultiplyAddBySelectedScalar}, {SN_FusedMultiplyAddNegatedScalar, OP_ARM64_FNMADD_SCALAR}, {SN_FusedMultiplyAddScalar, OP_ARM64_FMADD_SCALAR}, {SN_FusedMultiplyAddScalarBySelectedScalar}, {SN_FusedMultiplySubtract, OP_ARM64_FMSUB}, {SN_FusedMultiplySubtractByScalar, OP_ARM64_FMSUB_BYSCALAR}, {SN_FusedMultiplySubtractBySelectedScalar}, {SN_FusedMultiplySubtractNegatedScalar, OP_ARM64_FNMSUB_SCALAR}, {SN_FusedMultiplySubtractScalar, OP_ARM64_FMSUB_SCALAR}, {SN_FusedMultiplySubtractScalarBySelectedScalar}, {SN_FusedSubtractHalving, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_SHSUB, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_UHSUB}, {SN_Insert}, {SN_InsertScalar}, {SN_InsertSelectedScalar}, {SN_LeadingSignCount, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_CLS}, {SN_LeadingZeroCount, OP_ARM64_CLZ}, {SN_LoadAndInsertScalar, OP_ARM64_LD1_INSERT}, {SN_LoadAndReplicateToVector128, OP_ARM64_LD1R}, {SN_LoadAndReplicateToVector64, OP_ARM64_LD1R}, {SN_LoadPairScalarVector64, OP_ARM64_LDP_SCALAR}, {SN_LoadPairScalarVector64NonTemporal, OP_ARM64_LDNP_SCALAR}, 
{SN_LoadPairVector128, OP_ARM64_LDP}, {SN_LoadPairVector128NonTemporal, OP_ARM64_LDNP}, {SN_LoadPairVector64, OP_ARM64_LDP}, {SN_LoadPairVector64NonTemporal, OP_ARM64_LDNP}, {SN_LoadVector128, OP_ARM64_LD1}, {SN_LoadVector64, OP_ARM64_LD1}, {SN_Max, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_SMAX, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_UMAX, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_FMAX}, {SN_MaxAcross, OP_ARM64_XHORIZ, INTRINS_AARCH64_ADV_SIMD_SMAXV, OP_ARM64_XHORIZ, INTRINS_AARCH64_ADV_SIMD_UMAXV, OP_ARM64_XHORIZ, INTRINS_AARCH64_ADV_SIMD_FMAXV}, {SN_MaxNumber, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_FMAXNM}, {SN_MaxNumberAcross, OP_ARM64_XHORIZ, INTRINS_AARCH64_ADV_SIMD_FMAXNMV}, {SN_MaxNumberPairwise, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_FMAXNMP}, {SN_MaxNumberPairwiseScalar, OP_ARM64_XHORIZ, INTRINS_AARCH64_ADV_SIMD_FMAXNMV}, {SN_MaxNumberScalar, OP_XOP_OVR_SCALAR_X_X_X, INTRINS_AARCH64_ADV_SIMD_FMAXNM}, {SN_MaxPairwise, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_SMAXP, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_UMAXP, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_FMAXP}, {SN_MaxPairwiseScalar, OP_ARM64_XHORIZ, INTRINS_AARCH64_ADV_SIMD_FMAXV}, {SN_MaxScalar, OP_XOP_OVR_SCALAR_X_X_X, INTRINS_AARCH64_ADV_SIMD_FMAX}, {SN_Min, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_SMIN, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_UMIN, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_FMIN}, {SN_MinAcross, OP_ARM64_XHORIZ, INTRINS_AARCH64_ADV_SIMD_SMINV, OP_ARM64_XHORIZ, INTRINS_AARCH64_ADV_SIMD_UMINV, OP_ARM64_XHORIZ, INTRINS_AARCH64_ADV_SIMD_FMINV}, {SN_MinNumber, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_FMINNM}, {SN_MinNumberAcross, OP_ARM64_XHORIZ, INTRINS_AARCH64_ADV_SIMD_FMINNMV}, {SN_MinNumberPairwise, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_FMINNMP}, {SN_MinNumberPairwiseScalar, OP_ARM64_XHORIZ, INTRINS_AARCH64_ADV_SIMD_FMINNMV}, {SN_MinNumberScalar, OP_XOP_OVR_SCALAR_X_X_X, INTRINS_AARCH64_ADV_SIMD_FMINNM}, {SN_MinPairwise, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_SMINP, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_UMINP, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_FMINP}, {SN_MinPairwiseScalar, OP_ARM64_XHORIZ, INTRINS_AARCH64_ADV_SIMD_FMINV}, {SN_MinScalar, OP_XOP_OVR_SCALAR_X_X_X, INTRINS_AARCH64_ADV_SIMD_FMIN}, {SN_Multiply, OP_XBINOP, OP_IMUL, None, None, OP_XBINOP, OP_FMUL}, {SN_MultiplyAdd, OP_ARM64_MLA}, {SN_MultiplyAddByScalar, OP_ARM64_MLA_SCALAR}, {SN_MultiplyAddBySelectedScalar}, {SN_MultiplyByScalar, OP_XBINOP_BYSCALAR, OP_IMUL, None, None, OP_XBINOP_BYSCALAR, OP_FMUL}, {SN_MultiplyBySelectedScalar}, {SN_MultiplyBySelectedScalarWideningLower}, {SN_MultiplyBySelectedScalarWideningLowerAndAdd}, {SN_MultiplyBySelectedScalarWideningLowerAndSubtract}, {SN_MultiplyBySelectedScalarWideningUpper}, {SN_MultiplyBySelectedScalarWideningUpperAndAdd}, {SN_MultiplyBySelectedScalarWideningUpperAndSubtract}, {SN_MultiplyDoublingByScalarSaturateHigh, OP_XOP_OVR_BYSCALAR_X_X_X, INTRINS_AARCH64_ADV_SIMD_SQDMULH}, {SN_MultiplyDoublingBySelectedScalarSaturateHigh}, {SN_MultiplyDoublingSaturateHigh, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_SQDMULH}, {SN_MultiplyDoublingSaturateHighScalar, OP_XOP_OVR_SCALAR_X_X_X, INTRINS_AARCH64_ADV_SIMD_SQDMULH}, {SN_MultiplyDoublingScalarBySelectedScalarSaturateHigh}, {SN_MultiplyDoublingWideningAndAddSaturateScalar, OP_ARM64_SQDMLAL_SCALAR}, {SN_MultiplyDoublingWideningAndSubtractSaturateScalar, OP_ARM64_SQDMLSL_SCALAR}, {SN_MultiplyDoublingWideningLowerAndAddSaturate, OP_ARM64_SQDMLAL}, {SN_MultiplyDoublingWideningLowerAndSubtractSaturate, 
OP_ARM64_SQDMLSL}, {SN_MultiplyDoublingWideningLowerByScalarAndAddSaturate, OP_ARM64_SQDMLAL_BYSCALAR}, {SN_MultiplyDoublingWideningLowerByScalarAndSubtractSaturate, OP_ARM64_SQDMLSL_BYSCALAR}, {SN_MultiplyDoublingWideningLowerBySelectedScalarAndAddSaturate}, {SN_MultiplyDoublingWideningLowerBySelectedScalarAndSubtractSaturate}, {SN_MultiplyDoublingWideningSaturateLower, OP_ARM64_SQDMULL}, {SN_MultiplyDoublingWideningSaturateLowerByScalar, OP_ARM64_SQDMULL_BYSCALAR}, {SN_MultiplyDoublingWideningSaturateLowerBySelectedScalar}, {SN_MultiplyDoublingWideningSaturateScalar, OP_ARM64_SQDMULL_SCALAR}, {SN_MultiplyDoublingWideningSaturateScalarBySelectedScalar}, {SN_MultiplyDoublingWideningSaturateUpper, OP_ARM64_SQDMULL2}, {SN_MultiplyDoublingWideningSaturateUpperByScalar, OP_ARM64_SQDMULL2_BYSCALAR}, {SN_MultiplyDoublingWideningSaturateUpperBySelectedScalar}, {SN_MultiplyDoublingWideningScalarBySelectedScalarAndAddSaturate}, {SN_MultiplyDoublingWideningScalarBySelectedScalarAndSubtractSaturate}, {SN_MultiplyDoublingWideningUpperAndAddSaturate, OP_ARM64_SQDMLAL2}, {SN_MultiplyDoublingWideningUpperAndSubtractSaturate, OP_ARM64_SQDMLSL2}, {SN_MultiplyDoublingWideningUpperByScalarAndAddSaturate, OP_ARM64_SQDMLAL2_BYSCALAR}, {SN_MultiplyDoublingWideningUpperByScalarAndSubtractSaturate, OP_ARM64_SQDMLSL2_BYSCALAR}, {SN_MultiplyDoublingWideningUpperBySelectedScalarAndAddSaturate}, {SN_MultiplyDoublingWideningUpperBySelectedScalarAndSubtractSaturate}, {SN_MultiplyExtended, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_FMULX}, {SN_MultiplyExtendedByScalar, OP_XOP_OVR_BYSCALAR_X_X_X, INTRINS_AARCH64_ADV_SIMD_FMULX}, {SN_MultiplyExtendedBySelectedScalar}, {SN_MultiplyExtendedScalar, OP_XOP_OVR_SCALAR_X_X_X, INTRINS_AARCH64_ADV_SIMD_FMULX}, {SN_MultiplyExtendedScalarBySelectedScalar}, {SN_MultiplyRoundedDoublingByScalarSaturateHigh, OP_XOP_OVR_BYSCALAR_X_X_X, INTRINS_AARCH64_ADV_SIMD_SQRDMULH}, {SN_MultiplyRoundedDoublingBySelectedScalarSaturateHigh}, {SN_MultiplyRoundedDoublingSaturateHigh, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_SQRDMULH}, {SN_MultiplyRoundedDoublingSaturateHighScalar, OP_XOP_OVR_SCALAR_X_X_X, INTRINS_AARCH64_ADV_SIMD_SQRDMULH}, {SN_MultiplyRoundedDoublingScalarBySelectedScalarSaturateHigh}, {SN_MultiplyScalar, OP_XBINOP_SCALAR, OP_FMUL}, {SN_MultiplyScalarBySelectedScalar, OP_ARM64_FMUL_SEL}, {SN_MultiplySubtract, OP_ARM64_MLS}, {SN_MultiplySubtractByScalar, OP_ARM64_MLS_SCALAR}, {SN_MultiplySubtractBySelectedScalar}, {SN_MultiplyWideningLower, OP_ARM64_SMULL, None, OP_ARM64_UMULL}, {SN_MultiplyWideningLowerAndAdd, OP_ARM64_SMLAL, None, OP_ARM64_UMLAL}, {SN_MultiplyWideningLowerAndSubtract, OP_ARM64_SMLSL, None, OP_ARM64_UMLSL}, {SN_MultiplyWideningUpper, OP_ARM64_SMULL2, None, OP_ARM64_UMULL2}, {SN_MultiplyWideningUpperAndAdd, OP_ARM64_SMLAL2, None, OP_ARM64_UMLAL2}, {SN_MultiplyWideningUpperAndSubtract, OP_ARM64_SMLSL2, None, OP_ARM64_UMLSL2}, {SN_Negate, OP_ARM64_XNEG}, {SN_NegateSaturate, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_SQNEG}, {SN_NegateSaturateScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_SQNEG}, {SN_NegateScalar, OP_ARM64_XNEG_SCALAR}, {SN_Not, OP_ARM64_MVN}, {SN_Or, OP_XBINOP_FORCEINT, XBINOP_FORCEINT_OR}, {SN_OrNot, OP_XBINOP_FORCEINT, XBINOP_FORCEINT_ORNOT}, {SN_PolynomialMultiply, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_PMUL}, {SN_PolynomialMultiplyWideningLower, OP_ARM64_PMULL}, {SN_PolynomialMultiplyWideningUpper, OP_ARM64_PMULL2}, {SN_PopCount, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_CNT}, {SN_ReciprocalEstimate, None, None, OP_XOP_OVR_X_X, 
INTRINS_AARCH64_ADV_SIMD_URECPE, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FRECPE}, {SN_ReciprocalEstimateScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FRECPE}, {SN_ReciprocalExponentScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FRECPX}, {SN_ReciprocalSquareRootEstimate, None, None, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_URSQRTE, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FRSQRTE}, {SN_ReciprocalSquareRootEstimateScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FRSQRTE}, {SN_ReciprocalSquareRootStep, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_FRSQRTS}, {SN_ReciprocalSquareRootStepScalar, OP_XOP_OVR_SCALAR_X_X_X, INTRINS_AARCH64_ADV_SIMD_FRSQRTS}, {SN_ReciprocalStep, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_FRECPS}, {SN_ReciprocalStepScalar, OP_XOP_OVR_SCALAR_X_X_X, INTRINS_AARCH64_ADV_SIMD_FRECPS}, {SN_ReverseElement16, OP_ARM64_REVN, 16}, {SN_ReverseElement32, OP_ARM64_REVN, 32}, {SN_ReverseElement8, OP_ARM64_REVN, 8}, {SN_ReverseElementBits, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_RBIT}, {SN_RoundAwayFromZero, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FRINTA}, {SN_RoundAwayFromZeroScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FRINTA}, {SN_RoundToNearest, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FRINTN}, {SN_RoundToNearestScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FRINTN}, {SN_RoundToNegativeInfinity, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FRINTM}, {SN_RoundToNegativeInfinityScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FRINTM}, {SN_RoundToPositiveInfinity, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FRINTP}, {SN_RoundToPositiveInfinityScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FRINTP}, {SN_RoundToZero, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FRINTZ}, {SN_RoundToZeroScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FRINTZ}, {SN_ShiftArithmetic, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_SSHL}, {SN_ShiftArithmeticRounded, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_SRSHL}, {SN_ShiftArithmeticRoundedSaturate, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_SQRSHL}, {SN_ShiftArithmeticRoundedSaturateScalar, OP_XOP_OVR_SCALAR_X_X_X, INTRINS_AARCH64_ADV_SIMD_SQRSHL}, {SN_ShiftArithmeticRoundedScalar, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_SRSHL}, {SN_ShiftArithmeticSaturate, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_SQSHL}, {SN_ShiftArithmeticSaturateScalar, OP_XOP_OVR_SCALAR_X_X_X, INTRINS_AARCH64_ADV_SIMD_SQSHL}, {SN_ShiftArithmeticScalar, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_SSHL}, {SN_ShiftLeftAndInsert, OP_ARM64_SLI}, {SN_ShiftLeftAndInsertScalar, OP_ARM64_SLI}, {SN_ShiftLeftLogical, OP_ARM64_SHL}, {SN_ShiftLeftLogicalSaturate}, {SN_ShiftLeftLogicalSaturateScalar}, {SN_ShiftLeftLogicalSaturateUnsigned, OP_ARM64_SQSHLU}, {SN_ShiftLeftLogicalSaturateUnsignedScalar, OP_ARM64_SQSHLU_SCALAR}, {SN_ShiftLeftLogicalScalar, OP_ARM64_SHL}, {SN_ShiftLeftLogicalWideningLower, OP_ARM64_SSHLL, None, OP_ARM64_USHLL}, {SN_ShiftLeftLogicalWideningUpper, OP_ARM64_SSHLL2, None, OP_ARM64_USHLL2}, {SN_ShiftLogical, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_USHL}, {SN_ShiftLogicalRounded, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_URSHL}, {SN_ShiftLogicalRoundedSaturate, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_UQRSHL}, {SN_ShiftLogicalRoundedSaturateScalar, OP_XOP_OVR_SCALAR_X_X_X, INTRINS_AARCH64_ADV_SIMD_UQRSHL}, {SN_ShiftLogicalRoundedScalar, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_URSHL}, {SN_ShiftLogicalSaturate, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_UQSHL}, {SN_ShiftLogicalSaturateScalar, 
OP_XOP_OVR_SCALAR_X_X_X, INTRINS_AARCH64_ADV_SIMD_UQSHL}, {SN_ShiftLogicalScalar, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_USHL}, {SN_ShiftRightAndInsert, OP_ARM64_SRI}, {SN_ShiftRightAndInsertScalar, OP_ARM64_SRI}, {SN_ShiftRightArithmetic, OP_ARM64_SSHR}, {SN_ShiftRightArithmeticAdd, OP_ARM64_SSRA}, {SN_ShiftRightArithmeticAddScalar, OP_ARM64_SSRA}, {SN_ShiftRightArithmeticNarrowingSaturateLower, OP_ARM64_XNSHIFT, INTRINS_AARCH64_ADV_SIMD_SQSHRN}, {SN_ShiftRightArithmeticNarrowingSaturateScalar, OP_ARM64_XNSHIFT_SCALAR, INTRINS_AARCH64_ADV_SIMD_SQSHRN}, {SN_ShiftRightArithmeticNarrowingSaturateUnsignedLower, OP_ARM64_XNSHIFT, INTRINS_AARCH64_ADV_SIMD_SQSHRUN}, {SN_ShiftRightArithmeticNarrowingSaturateUnsignedScalar, OP_ARM64_XNSHIFT_SCALAR, INTRINS_AARCH64_ADV_SIMD_SQSHRUN}, {SN_ShiftRightArithmeticNarrowingSaturateUnsignedUpper, OP_ARM64_XNSHIFT2, INTRINS_AARCH64_ADV_SIMD_SQSHRUN}, {SN_ShiftRightArithmeticNarrowingSaturateUpper, OP_ARM64_XNSHIFT2, INTRINS_AARCH64_ADV_SIMD_SQSHRN}, {SN_ShiftRightArithmeticRounded, OP_ARM64_SRSHR}, {SN_ShiftRightArithmeticRoundedAdd, OP_ARM64_SRSRA}, {SN_ShiftRightArithmeticRoundedAddScalar, OP_ARM64_SRSRA}, {SN_ShiftRightArithmeticRoundedNarrowingSaturateLower, OP_ARM64_XNSHIFT, INTRINS_AARCH64_ADV_SIMD_SQRSHRN}, {SN_ShiftRightArithmeticRoundedNarrowingSaturateScalar, OP_ARM64_XNSHIFT_SCALAR, INTRINS_AARCH64_ADV_SIMD_SQRSHRN}, {SN_ShiftRightArithmeticRoundedNarrowingSaturateUnsignedLower, OP_ARM64_XNSHIFT, INTRINS_AARCH64_ADV_SIMD_SQRSHRUN}, {SN_ShiftRightArithmeticRoundedNarrowingSaturateUnsignedScalar, OP_ARM64_XNSHIFT_SCALAR, INTRINS_AARCH64_ADV_SIMD_SQRSHRUN}, {SN_ShiftRightArithmeticRoundedNarrowingSaturateUnsignedUpper, OP_ARM64_XNSHIFT2, INTRINS_AARCH64_ADV_SIMD_SQRSHRUN}, {SN_ShiftRightArithmeticRoundedNarrowingSaturateUpper, OP_ARM64_XNSHIFT2, INTRINS_AARCH64_ADV_SIMD_SQRSHRN}, {SN_ShiftRightArithmeticRoundedScalar, OP_ARM64_SRSHR}, {SN_ShiftRightArithmeticScalar, OP_ARM64_SSHR}, {SN_ShiftRightLogical, OP_ARM64_USHR}, {SN_ShiftRightLogicalAdd, OP_ARM64_USRA}, {SN_ShiftRightLogicalAddScalar, OP_ARM64_USRA}, {SN_ShiftRightLogicalNarrowingLower, OP_ARM64_SHRN}, {SN_ShiftRightLogicalNarrowingSaturateLower, OP_ARM64_XNSHIFT, INTRINS_AARCH64_ADV_SIMD_UQSHRN}, {SN_ShiftRightLogicalNarrowingSaturateScalar, OP_ARM64_XNSHIFT_SCALAR, INTRINS_AARCH64_ADV_SIMD_UQSHRN}, {SN_ShiftRightLogicalNarrowingSaturateUpper, OP_ARM64_XNSHIFT2, INTRINS_AARCH64_ADV_SIMD_UQSHRN}, {SN_ShiftRightLogicalNarrowingUpper, OP_ARM64_SHRN2}, {SN_ShiftRightLogicalRounded, OP_ARM64_URSHR}, {SN_ShiftRightLogicalRoundedAdd, OP_ARM64_URSRA}, {SN_ShiftRightLogicalRoundedAddScalar, OP_ARM64_URSRA}, {SN_ShiftRightLogicalRoundedNarrowingLower, OP_ARM64_XNSHIFT, INTRINS_AARCH64_ADV_SIMD_RSHRN}, {SN_ShiftRightLogicalRoundedNarrowingSaturateLower, OP_ARM64_XNSHIFT, INTRINS_AARCH64_ADV_SIMD_UQRSHRN}, {SN_ShiftRightLogicalRoundedNarrowingSaturateScalar, OP_ARM64_XNSHIFT_SCALAR, INTRINS_AARCH64_ADV_SIMD_UQRSHRN}, {SN_ShiftRightLogicalRoundedNarrowingSaturateUpper, OP_ARM64_XNSHIFT2, INTRINS_AARCH64_ADV_SIMD_UQRSHRN}, {SN_ShiftRightLogicalRoundedNarrowingUpper, OP_ARM64_XNSHIFT2, INTRINS_AARCH64_ADV_SIMD_RSHRN}, {SN_ShiftRightLogicalRoundedScalar, OP_ARM64_URSHR}, {SN_ShiftRightLogicalScalar, OP_ARM64_USHR}, {SN_SignExtendWideningLower, OP_ARM64_SXTL}, {SN_SignExtendWideningUpper, OP_ARM64_SXTL2}, {SN_Sqrt, OP_XOP_OVR_X_X, INTRINS_AARCH64_ADV_SIMD_FSQRT}, {SN_SqrtScalar, OP_XOP_OVR_SCALAR_X_X, INTRINS_AARCH64_ADV_SIMD_FSQRT}, {SN_Store, OP_ARM64_ST1}, {SN_StorePair, OP_ARM64_STP}, 
{SN_StorePairNonTemporal, OP_ARM64_STNP}, {SN_StorePairScalar, OP_ARM64_STP_SCALAR}, {SN_StorePairScalarNonTemporal, OP_ARM64_STNP_SCALAR}, {SN_StoreSelectedScalar, OP_ARM64_ST1_SCALAR}, {SN_Subtract, OP_XBINOP, OP_ISUB, None, None, OP_XBINOP, OP_FSUB}, {SN_SubtractHighNarrowingLower, OP_ARM64_SUBHN}, {SN_SubtractHighNarrowingUpper, OP_ARM64_SUBHN2}, {SN_SubtractRoundedHighNarrowingLower, OP_ARM64_RSUBHN}, {SN_SubtractRoundedHighNarrowingUpper, OP_ARM64_RSUBHN2}, {SN_SubtractSaturate, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_SQSUB, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_UQSUB}, {SN_SubtractSaturateScalar, OP_XOP_OVR_SCALAR_X_X_X, INTRINS_AARCH64_ADV_SIMD_SQSUB, OP_XOP_OVR_SCALAR_X_X_X, INTRINS_AARCH64_ADV_SIMD_UQSUB}, {SN_SubtractScalar, OP_XBINOP_SCALAR, OP_ISUB, None, None, OP_XBINOP_SCALAR, OP_FSUB}, {SN_SubtractWideningLower, OP_ARM64_SSUB, None, OP_ARM64_USUB}, {SN_SubtractWideningUpper, OP_ARM64_SSUB2, None, OP_ARM64_USUB2}, {SN_TransposeEven, OP_ARM64_TRN1}, {SN_TransposeOdd, OP_ARM64_TRN2}, {SN_UnzipEven, OP_ARM64_UZP1}, {SN_UnzipOdd, OP_ARM64_UZP2}, {SN_VectorTableLookup, OP_XOP_OVR_X_X_X, INTRINS_AARCH64_ADV_SIMD_TBL1}, {SN_VectorTableLookupExtension, OP_XOP_OVR_X_X_X_X, INTRINS_AARCH64_ADV_SIMD_TBX1}, {SN_Xor, OP_XBINOP_FORCEINT, XBINOP_FORCEINT_XOR}, {SN_ZeroExtendWideningLower, OP_ARM64_UXTL}, {SN_ZeroExtendWideningUpper, OP_ARM64_UXTL2}, {SN_ZipHigh, OP_ARM64_ZIP2}, {SN_ZipLow, OP_ARM64_ZIP1}, {SN_get_IsSupported}, }; static const SimdIntrinsic rdm_methods [] = { {SN_MultiplyRoundedDoublingAndAddSaturateHigh, OP_ARM64_SQRDMLAH}, {SN_MultiplyRoundedDoublingAndAddSaturateHighScalar, OP_ARM64_SQRDMLAH_SCALAR}, {SN_MultiplyRoundedDoublingAndSubtractSaturateHigh, OP_ARM64_SQRDMLSH}, {SN_MultiplyRoundedDoublingAndSubtractSaturateHighScalar, OP_ARM64_SQRDMLSH_SCALAR}, {SN_MultiplyRoundedDoublingBySelectedScalarAndAddSaturateHigh}, {SN_MultiplyRoundedDoublingBySelectedScalarAndSubtractSaturateHigh}, {SN_MultiplyRoundedDoublingScalarBySelectedScalarAndAddSaturateHigh}, {SN_MultiplyRoundedDoublingScalarBySelectedScalarAndSubtractSaturateHigh}, {SN_get_IsSupported}, }; static const SimdIntrinsic dp_methods [] = { {SN_DotProduct, OP_XOP_OVR_X_X_X_X, INTRINS_AARCH64_ADV_SIMD_SDOT, OP_XOP_OVR_X_X_X_X, INTRINS_AARCH64_ADV_SIMD_UDOT}, {SN_DotProductBySelectedQuadruplet}, {SN_get_IsSupported}, }; static const IntrinGroup supported_arm_intrinsics [] = { { "AdvSimd", MONO_CPU_ARM64_NEON, advsimd_methods, sizeof (advsimd_methods) }, { "Aes", MONO_CPU_ARM64_CRYPTO, crypto_aes_methods, sizeof (crypto_aes_methods) }, { "ArmBase", MONO_CPU_ARM64_BASE, armbase_methods, sizeof (armbase_methods) }, { "Crc32", MONO_CPU_ARM64_CRC, crc32_methods, sizeof (crc32_methods) }, { "Dp", MONO_CPU_ARM64_DP, dp_methods, sizeof (dp_methods) }, { "Rdm", MONO_CPU_ARM64_RDM, rdm_methods, sizeof (rdm_methods) }, { "Sha1", MONO_CPU_ARM64_CRYPTO, sha1_methods, sizeof (sha1_methods) }, { "Sha256", MONO_CPU_ARM64_CRYPTO, sha256_methods, sizeof (sha256_methods) }, }; static MonoInst* emit_arm64_intrinsics ( MonoCompile *cfg, MonoMethodSignature *fsig, MonoInst **args, MonoClass *klass, const IntrinGroup *intrin_group, const SimdIntrinsic *info, int id, MonoTypeEnum arg0_type, gboolean is_64bit) { MonoCPUFeatures feature = intrin_group->feature; gboolean arg0_i32 = (arg0_type == MONO_TYPE_I4) || (arg0_type == MONO_TYPE_U4); #if TARGET_SIZEOF_VOID_P == 4 arg0_i32 = arg0_i32 || (arg0_type == MONO_TYPE_I) || (arg0_type == MONO_TYPE_U); #endif if (feature == MONO_CPU_ARM64_BASE) { switch (id) { case SN_LeadingZeroCount: 
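/* ArmBase methods operate on scalar integer registers; arg0_i32 (computed above) selects between the 32-bit and 64-bit opcode, so nint/nuint arguments map to the pointer-sized variant. */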
return emit_simd_ins_for_sig (cfg, klass, arg0_i32 ? OP_LZCNT32 : OP_LZCNT64, 0, arg0_type, fsig, args); case SN_LeadingSignCount: return emit_simd_ins_for_sig (cfg, klass, arg0_i32 ? OP_LSCNT32 : OP_LSCNT64, 0, arg0_type, fsig, args); case SN_MultiplyHigh: return emit_simd_ins_for_sig (cfg, klass, (arg0_type == MONO_TYPE_I8 ? OP_ARM64_SMULH : OP_ARM64_UMULH), 0, arg0_type, fsig, args); case SN_ReverseElementBits: return emit_simd_ins_for_sig (cfg, klass, (is_64bit ? OP_XOP_I8_I8 : OP_XOP_I4_I4), (is_64bit ? INTRINS_BITREVERSE_I64 : INTRINS_BITREVERSE_I32), arg0_type, fsig, args); default: g_assert_not_reached (); // if a new API is added we need to either implement it or change IsSupported to false } } if (feature == MONO_CPU_ARM64_CRC) { switch (id) { case SN_ComputeCrc32: case SN_ComputeCrc32C: { IntrinsicId op = (IntrinsicId)0; gboolean is_c = info->id == SN_ComputeCrc32C; switch (get_underlying_type (fsig->params [1])) { case MONO_TYPE_U1: op = is_c ? INTRINS_AARCH64_CRC32CB : INTRINS_AARCH64_CRC32B; break; case MONO_TYPE_U2: op = is_c ? INTRINS_AARCH64_CRC32CH : INTRINS_AARCH64_CRC32H; break; case MONO_TYPE_U4: op = is_c ? INTRINS_AARCH64_CRC32CW : INTRINS_AARCH64_CRC32W; break; case MONO_TYPE_U8: op = is_c ? INTRINS_AARCH64_CRC32CX : INTRINS_AARCH64_CRC32X; break; default: g_assert_not_reached (); break; } return emit_simd_ins_for_sig (cfg, klass, is_64bit ? OP_XOP_I4_I4_I8 : OP_XOP_I4_I4_I4, op, arg0_type, fsig, args); } default: g_assert_not_reached (); // if a new API is added we need to either implement it or change IsSupported to false } } if (feature == MONO_CPU_ARM64_NEON) { switch (id) { case SN_AbsoluteCompareGreaterThan: case SN_AbsoluteCompareGreaterThanOrEqual: case SN_AbsoluteCompareLessThan: case SN_AbsoluteCompareLessThanOrEqual: case SN_AbsoluteCompareGreaterThanScalar: case SN_AbsoluteCompareGreaterThanOrEqualScalar: case SN_AbsoluteCompareLessThanScalar: case SN_AbsoluteCompareLessThanOrEqualScalar: { gboolean reverse_args = FALSE; gboolean use_geq = FALSE; gboolean scalar = FALSE; MonoInst *cmp_args [] = { args [0], args [1] }; switch (id) { case SN_AbsoluteCompareGreaterThanScalar: scalar = TRUE; case SN_AbsoluteCompareGreaterThan: break; case SN_AbsoluteCompareGreaterThanOrEqualScalar: scalar = TRUE; case SN_AbsoluteCompareGreaterThanOrEqual: use_geq = TRUE; break; case SN_AbsoluteCompareLessThanScalar: scalar = TRUE; case SN_AbsoluteCompareLessThan: reverse_args = TRUE; break; case SN_AbsoluteCompareLessThanOrEqualScalar: scalar = TRUE; case SN_AbsoluteCompareLessThanOrEqual: reverse_args = TRUE; use_geq = TRUE; break; } if (reverse_args) { cmp_args [0] = args [1]; cmp_args [1] = args [0]; } int iid = use_geq ? INTRINS_AARCH64_ADV_SIMD_FACGE : INTRINS_AARCH64_ADV_SIMD_FACGT; return emit_simd_ins_for_sig (cfg, klass, OP_ARM64_ABSCOMPARE, iid, scalar, fsig, cmp_args); } case SN_AddSaturate: case SN_AddSaturateScalar: { gboolean arg0_unsigned = type_is_unsigned (fsig->params [0]); gboolean arg1_unsigned = type_is_unsigned (fsig->params [1]); int iid = 0; if (arg0_unsigned && arg1_unsigned) iid = INTRINS_AARCH64_ADV_SIMD_UQADD; else if (arg0_unsigned && !arg1_unsigned) iid = INTRINS_AARCH64_ADV_SIMD_USQADD; else if (!arg0_unsigned && arg1_unsigned) iid = INTRINS_AARCH64_ADV_SIMD_SUQADD; else iid = INTRINS_AARCH64_ADV_SIMD_SQADD; int op = id == SN_AddSaturateScalar ? 
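/* Sketch of the AddSaturate selection just above (restating the code, since the
 * mapping is easy to misread): the two operands may differ in signedness, so one
 * of four AArch64 intrinsics is chosen:
 *   unsigned + unsigned -> uqadd      signed + signed   -> sqadd
 *   unsigned + signed   -> usqadd     signed + unsigned -> suqadd
 * The ternary interrupted by this comment only switches between the vector and
 * the scalar form of the chosen intrinsic. */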
OP_XOP_OVR_SCALAR_X_X_X : OP_XOP_OVR_X_X_X; return emit_simd_ins_for_sig (cfg, klass, op, iid, arg0_type, fsig, args); } case SN_DuplicateSelectedScalarToVector128: case SN_DuplicateSelectedScalarToVector64: case SN_DuplicateToVector64: case SN_DuplicateToVector128: { MonoClass *ret_klass = mono_class_from_mono_type_internal (fsig->ret); MonoType *rtype = get_vector_t_elem_type (fsig->ret); int scalar_src_reg = args [0]->dreg; switch (id) { case SN_DuplicateSelectedScalarToVector128: case SN_DuplicateSelectedScalarToVector64: { MonoInst *ins = emit_simd_ins (cfg, ret_klass, type_to_xextract_op (rtype->type), args [0]->dreg, args [1]->dreg); ins->inst_c1 = arg0_type; scalar_src_reg = ins->dreg; break; } } return emit_simd_ins (cfg, ret_klass, type_to_expand_op (rtype), scalar_src_reg, -1); } case SN_Extract: { int extract_op = type_to_xextract_op (arg0_type); MonoInst *ins = emit_simd_ins (cfg, klass, extract_op, args [0]->dreg, args [1]->dreg); ins->inst_c1 = arg0_type; return ins; } case SN_InsertSelectedScalar: case SN_InsertScalar: case SN_Insert: { MonoClass *ret_klass = mono_class_from_mono_type_internal (fsig->ret); int insert_op = 0; int extract_op = 0; switch (arg0_type) { case MONO_TYPE_I1: case MONO_TYPE_U1: insert_op = OP_XINSERT_I1; extract_op = OP_EXTRACT_I1; break; case MONO_TYPE_I2: case MONO_TYPE_U2: insert_op = OP_XINSERT_I2; extract_op = OP_EXTRACT_I2; break; case MONO_TYPE_I4: case MONO_TYPE_U4: insert_op = OP_XINSERT_I4; extract_op = OP_EXTRACT_I4; break; case MONO_TYPE_I8: case MONO_TYPE_U8: insert_op = OP_XINSERT_I8; extract_op = OP_EXTRACT_I8; break; case MONO_TYPE_R4: insert_op = OP_XINSERT_R4; extract_op = OP_EXTRACT_R4; break; case MONO_TYPE_R8: insert_op = OP_XINSERT_R8; extract_op = OP_EXTRACT_R8; break; case MONO_TYPE_I: case MONO_TYPE_U: #if TARGET_SIZEOF_VOID_P == 8 insert_op = OP_XINSERT_I8; extract_op = OP_EXTRACT_I8; #else insert_op = OP_XINSERT_I4; extract_op = OP_EXTRACT_I4; #endif break; default: g_assert_not_reached (); } int val_src_reg = args [2]->dreg; switch (id) { case SN_InsertSelectedScalar: { MonoInst *scalar = emit_simd_ins (cfg, klass, OP_ARM64_SELECT_SCALAR, args [2]->dreg, args [3]->dreg); val_src_reg = scalar->dreg; // fallthrough } case SN_InsertScalar: { MonoInst *ins = emit_simd_ins (cfg, klass, extract_op, val_src_reg, -1); ins->inst_c0 = 0; ins->inst_c1 = arg0_type; val_src_reg = ins->dreg; break; } } MonoInst *ins = emit_simd_ins (cfg, ret_klass, insert_op, args [0]->dreg, val_src_reg); ins->sreg3 = args [1]->dreg; ins->inst_c1 = arg0_type; return ins; } case SN_ShiftLeftLogicalSaturate: case SN_ShiftLeftLogicalSaturateScalar: { MonoClass *ret_klass = mono_class_from_mono_type_internal (fsig->ret); MonoType *etype = get_vector_t_elem_type (fsig->ret); gboolean is_unsigned = type_is_unsigned (fsig->ret); gboolean scalar = id == SN_ShiftLeftLogicalSaturateScalar; int s2v = scalar ? OP_CREATE_SCALAR_UNSAFE : type_to_expand_op (etype); int xop = scalar ? OP_XOP_OVR_SCALAR_X_X_X : OP_XOP_OVR_X_X_X; int iid = is_unsigned ? 
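/* The register forms of sqshl/uqshl used here take the shift count as a vector,
 * so the managed scalar count is materialised first: s2v above is
 * OP_CREATE_SCALAR_UNSAFE for the Scalar variant and a full-lane splat
 * (type_to_expand_op) otherwise; the iid completed after this comment then
 * applies the saturating register shift. */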
INTRINS_AARCH64_ADV_SIMD_UQSHL : INTRINS_AARCH64_ADV_SIMD_SQSHL; MonoInst *shift_vector = emit_simd_ins (cfg, ret_klass, s2v, args [1]->dreg, -1); shift_vector->inst_c1 = etype->type; MonoInst *ret = emit_simd_ins (cfg, ret_klass, xop, args [0]->dreg, shift_vector->dreg); ret->inst_c0 = iid; ret->inst_c1 = etype->type; return ret; } case SN_MultiplyRoundedDoublingBySelectedScalarSaturateHigh: case SN_MultiplyRoundedDoublingScalarBySelectedScalarSaturateHigh: case SN_MultiplyDoublingScalarBySelectedScalarSaturateHigh: case SN_MultiplyDoublingWideningSaturateScalarBySelectedScalar: case SN_MultiplyExtendedBySelectedScalar: case SN_MultiplyExtendedScalarBySelectedScalar: case SN_MultiplyBySelectedScalar: case SN_MultiplyBySelectedScalarWideningLower: case SN_MultiplyBySelectedScalarWideningUpper: case SN_MultiplyDoublingBySelectedScalarSaturateHigh: case SN_MultiplyDoublingWideningSaturateLowerBySelectedScalar: case SN_MultiplyDoublingWideningSaturateUpperBySelectedScalar: { MonoClass *ret_klass = mono_class_from_mono_type_internal (fsig->ret); gboolean is_unsigned = type_is_unsigned (fsig->ret); gboolean is_float = type_is_float (fsig->ret); int opcode = 0; int c0 = 0; switch (id) { case SN_MultiplyRoundedDoublingBySelectedScalarSaturateHigh: opcode = OP_XOP_OVR_BYSCALAR_X_X_X; c0 = INTRINS_AARCH64_ADV_SIMD_SQRDMULH; break; case SN_MultiplyRoundedDoublingScalarBySelectedScalarSaturateHigh: opcode = OP_XOP_OVR_SCALAR_X_X_X; c0 = INTRINS_AARCH64_ADV_SIMD_SQRDMULH; break; case SN_MultiplyDoublingScalarBySelectedScalarSaturateHigh: opcode = OP_XOP_OVR_SCALAR_X_X_X; c0 = INTRINS_AARCH64_ADV_SIMD_SQDMULH; break; case SN_MultiplyDoublingWideningSaturateScalarBySelectedScalar: opcode = OP_ARM64_SQDMULL_SCALAR; break; case SN_MultiplyExtendedBySelectedScalar: opcode = OP_XOP_OVR_BYSCALAR_X_X_X; c0 = INTRINS_AARCH64_ADV_SIMD_FMULX; break; case SN_MultiplyExtendedScalarBySelectedScalar: opcode = OP_XOP_OVR_SCALAR_X_X_X; c0 = INTRINS_AARCH64_ADV_SIMD_FMULX; break; case SN_MultiplyBySelectedScalar: opcode = OP_XBINOP_BYSCALAR; c0 = OP_IMUL; break; case SN_MultiplyBySelectedScalarWideningLower: opcode = OP_ARM64_SMULL_SCALAR; break; case SN_MultiplyBySelectedScalarWideningUpper: opcode = OP_ARM64_SMULL2_SCALAR; break; case SN_MultiplyDoublingBySelectedScalarSaturateHigh: opcode = OP_XOP_OVR_BYSCALAR_X_X_X; c0 = INTRINS_AARCH64_ADV_SIMD_SQDMULH; break; case SN_MultiplyDoublingWideningSaturateLowerBySelectedScalar: opcode = OP_ARM64_SQDMULL_BYSCALAR; break; case SN_MultiplyDoublingWideningSaturateUpperBySelectedScalar: opcode = OP_ARM64_SQDMULL2_BYSCALAR; break; default: g_assert_not_reached(); } if (is_unsigned) switch (opcode) { case OP_ARM64_SMULL_SCALAR: opcode = OP_ARM64_UMULL_SCALAR; break; case OP_ARM64_SMULL2_SCALAR: opcode = OP_ARM64_UMULL2_SCALAR; break; } if (is_float) switch (opcode) { case OP_XBINOP_BYSCALAR: c0 = OP_FMUL; } MonoInst *scalar = emit_simd_ins (cfg, ret_klass, OP_ARM64_SELECT_SCALAR, args [1]->dreg, args [2]->dreg); MonoInst *ret = emit_simd_ins (cfg, ret_klass, opcode, args [0]->dreg, scalar->dreg); ret->inst_c0 = c0; ret->inst_c1 = arg0_type; return ret; } case SN_FusedMultiplyAddBySelectedScalar: case SN_FusedMultiplyAddScalarBySelectedScalar: case SN_FusedMultiplySubtractBySelectedScalar: case SN_FusedMultiplySubtractScalarBySelectedScalar: case SN_MultiplyDoublingWideningScalarBySelectedScalarAndAddSaturate: case SN_MultiplyDoublingWideningScalarBySelectedScalarAndSubtractSaturate: case SN_MultiplyAddBySelectedScalar: case SN_MultiplySubtractBySelectedScalar: case 
SN_MultiplyBySelectedScalarWideningLowerAndAdd: case SN_MultiplyBySelectedScalarWideningLowerAndSubtract: case SN_MultiplyBySelectedScalarWideningUpperAndAdd: case SN_MultiplyBySelectedScalarWideningUpperAndSubtract: case SN_MultiplyDoublingWideningLowerBySelectedScalarAndAddSaturate: case SN_MultiplyDoublingWideningLowerBySelectedScalarAndSubtractSaturate: case SN_MultiplyDoublingWideningUpperBySelectedScalarAndAddSaturate: case SN_MultiplyDoublingWideningUpperBySelectedScalarAndSubtractSaturate: { MonoClass *ret_klass = mono_class_from_mono_type_internal (fsig->ret); gboolean is_unsigned = type_is_unsigned (fsig->ret); int opcode = 0; switch (id) { case SN_FusedMultiplyAddBySelectedScalar: opcode = OP_ARM64_FMADD_BYSCALAR; break; case SN_FusedMultiplyAddScalarBySelectedScalar: opcode = OP_ARM64_FMADD_SCALAR; break; case SN_FusedMultiplySubtractBySelectedScalar: opcode = OP_ARM64_FMSUB_BYSCALAR; break; case SN_FusedMultiplySubtractScalarBySelectedScalar: opcode = OP_ARM64_FMSUB_SCALAR; break; case SN_MultiplyDoublingWideningScalarBySelectedScalarAndAddSaturate: opcode = OP_ARM64_SQDMLAL_SCALAR; break; case SN_MultiplyDoublingWideningScalarBySelectedScalarAndSubtractSaturate: opcode = OP_ARM64_SQDMLSL_SCALAR; break; case SN_MultiplyAddBySelectedScalar: opcode = OP_ARM64_MLA_SCALAR; break; case SN_MultiplySubtractBySelectedScalar: opcode = OP_ARM64_MLS_SCALAR; break; case SN_MultiplyBySelectedScalarWideningLowerAndAdd: opcode = OP_ARM64_SMLAL_SCALAR; break; case SN_MultiplyBySelectedScalarWideningLowerAndSubtract: opcode = OP_ARM64_SMLSL_SCALAR; break; case SN_MultiplyBySelectedScalarWideningUpperAndAdd: opcode = OP_ARM64_SMLAL2_SCALAR; break; case SN_MultiplyBySelectedScalarWideningUpperAndSubtract: opcode = OP_ARM64_SMLSL2_SCALAR; break; case SN_MultiplyDoublingWideningLowerBySelectedScalarAndAddSaturate: opcode = OP_ARM64_SQDMLAL_BYSCALAR; break; case SN_MultiplyDoublingWideningLowerBySelectedScalarAndSubtractSaturate: opcode = OP_ARM64_SQDMLSL_BYSCALAR; break; case SN_MultiplyDoublingWideningUpperBySelectedScalarAndAddSaturate: opcode = OP_ARM64_SQDMLAL2_BYSCALAR; break; case SN_MultiplyDoublingWideningUpperBySelectedScalarAndSubtractSaturate: opcode = OP_ARM64_SQDMLSL2_BYSCALAR; break; default: g_assert_not_reached(); } if (is_unsigned) switch (opcode) { case OP_ARM64_SMLAL_SCALAR: opcode = OP_ARM64_UMLAL_SCALAR; break; case OP_ARM64_SMLSL_SCALAR: opcode = OP_ARM64_UMLSL_SCALAR; break; case OP_ARM64_SMLAL2_SCALAR: opcode = OP_ARM64_UMLAL2_SCALAR; break; case OP_ARM64_SMLSL2_SCALAR: opcode = OP_ARM64_UMLSL2_SCALAR; break; } MonoInst *scalar = emit_simd_ins (cfg, ret_klass, OP_ARM64_SELECT_SCALAR, args [2]->dreg, args [3]->dreg); MonoInst *ret = emit_simd_ins (cfg, ret_klass, opcode, args [0]->dreg, args [1]->dreg); ret->sreg3 = scalar->dreg; return ret; } default: g_assert_not_reached (); } } if (feature == MONO_CPU_ARM64_CRYPTO) { switch (id) { case SN_PolynomialMultiplyWideningLower: return emit_simd_ins_for_sig (cfg, klass, OP_XOP_X_X_X, INTRINS_AARCH64_PMULL64, 0, fsig, args); case SN_PolynomialMultiplyWideningUpper: return emit_simd_ins_for_sig (cfg, klass, OP_XOP_X_X_X, INTRINS_AARCH64_PMULL64, 1, fsig, args); default: g_assert_not_reached (); } } if (feature == MONO_CPU_ARM64_RDM) { switch (id) { case SN_MultiplyRoundedDoublingBySelectedScalarAndAddSaturateHigh: case SN_MultiplyRoundedDoublingBySelectedScalarAndSubtractSaturateHigh: case SN_MultiplyRoundedDoublingScalarBySelectedScalarAndAddSaturateHigh: case 
SN_MultiplyRoundedDoublingScalarBySelectedScalarAndSubtractSaturateHigh: { MonoClass *ret_klass = mono_class_from_mono_type_internal (fsig->ret); int opcode = 0; switch (id) { case SN_MultiplyRoundedDoublingBySelectedScalarAndAddSaturateHigh: opcode = OP_ARM64_SQRDMLAH_BYSCALAR; break; case SN_MultiplyRoundedDoublingBySelectedScalarAndSubtractSaturateHigh: opcode = OP_ARM64_SQRDMLSH_BYSCALAR; break; case SN_MultiplyRoundedDoublingScalarBySelectedScalarAndAddSaturateHigh: opcode = OP_ARM64_SQRDMLAH_SCALAR; break; case SN_MultiplyRoundedDoublingScalarBySelectedScalarAndSubtractSaturateHigh: opcode = OP_ARM64_SQRDMLSH_SCALAR; break; } MonoInst *scalar = emit_simd_ins (cfg, ret_klass, OP_ARM64_SELECT_SCALAR, args [2]->dreg, args [3]->dreg); MonoInst *ret = emit_simd_ins (cfg, ret_klass, opcode, args [0]->dreg, args [1]->dreg); ret->inst_c1 = arg0_type; ret->sreg3 = scalar->dreg; return ret; } default: g_assert_not_reached (); } } if (feature == MONO_CPU_ARM64_DP) { switch (id) { case SN_DotProductBySelectedQuadruplet: { MonoClass *ret_klass = mono_class_from_mono_type_internal (fsig->ret); MonoClass *arg_klass = mono_class_from_mono_type_internal (fsig->params [1]); MonoClass *quad_klass = mono_class_from_mono_type_internal (fsig->params [2]); gboolean is_unsigned = type_is_unsigned (fsig->ret); int iid = is_unsigned ? INTRINS_AARCH64_ADV_SIMD_UDOT : INTRINS_AARCH64_ADV_SIMD_SDOT; MonoInst *quad = emit_simd_ins (cfg, arg_klass, OP_ARM64_SELECT_QUAD, args [2]->dreg, args [3]->dreg); quad->data.op [1].klass = quad_klass; MonoInst *ret = emit_simd_ins (cfg, ret_klass, OP_XOP_OVR_X_X_X_X, args [0]->dreg, args [1]->dreg); ret->sreg3 = quad->dreg; ret->inst_c0 = iid; return ret; } default: g_assert_not_reached (); } } return NULL; } #endif // TARGET_ARM64 #ifdef TARGET_AMD64 static SimdIntrinsic sse_methods [] = { {SN_Add, OP_XBINOP, OP_FADD}, {SN_AddScalar, OP_SSE_ADDSS}, {SN_And, OP_SSE_AND}, {SN_AndNot, OP_SSE_ANDN}, {SN_CompareEqual, OP_XCOMPARE_FP, CMP_EQ}, {SN_CompareGreaterThan, OP_XCOMPARE_FP,CMP_GT}, {SN_CompareGreaterThanOrEqual, OP_XCOMPARE_FP, CMP_GE}, {SN_CompareLessThan, OP_XCOMPARE_FP, CMP_LT}, {SN_CompareLessThanOrEqual, OP_XCOMPARE_FP, CMP_LE}, {SN_CompareNotEqual, OP_XCOMPARE_FP, CMP_NE}, {SN_CompareNotGreaterThan, OP_XCOMPARE_FP, CMP_LE_UN}, {SN_CompareNotGreaterThanOrEqual, OP_XCOMPARE_FP, CMP_LT_UN}, {SN_CompareNotLessThan, OP_XCOMPARE_FP, CMP_GE_UN}, {SN_CompareNotLessThanOrEqual, OP_XCOMPARE_FP, CMP_GT_UN}, {SN_CompareOrdered, OP_XCOMPARE_FP, CMP_ORD}, {SN_CompareScalarEqual, OP_SSE_CMPSS, CMP_EQ}, {SN_CompareScalarGreaterThan, OP_SSE_CMPSS, CMP_GT}, {SN_CompareScalarGreaterThanOrEqual, OP_SSE_CMPSS, CMP_GE}, {SN_CompareScalarLessThan, OP_SSE_CMPSS, CMP_LT}, {SN_CompareScalarLessThanOrEqual, OP_SSE_CMPSS, CMP_LE}, {SN_CompareScalarNotEqual, OP_SSE_CMPSS, CMP_NE}, {SN_CompareScalarNotGreaterThan, OP_SSE_CMPSS, CMP_LE_UN}, {SN_CompareScalarNotGreaterThanOrEqual, OP_SSE_CMPSS, CMP_LT_UN}, {SN_CompareScalarNotLessThan, OP_SSE_CMPSS, CMP_GE_UN}, {SN_CompareScalarNotLessThanOrEqual, OP_SSE_CMPSS, CMP_GT_UN}, {SN_CompareScalarOrdered, OP_SSE_CMPSS, CMP_ORD}, {SN_CompareScalarOrderedEqual, OP_SSE_COMISS, CMP_EQ}, {SN_CompareScalarOrderedGreaterThan, OP_SSE_COMISS, CMP_GT}, {SN_CompareScalarOrderedGreaterThanOrEqual, OP_SSE_COMISS, CMP_GE}, {SN_CompareScalarOrderedLessThan, OP_SSE_COMISS, CMP_LT}, {SN_CompareScalarOrderedLessThanOrEqual, OP_SSE_COMISS, CMP_LE}, {SN_CompareScalarOrderedNotEqual, OP_SSE_COMISS, CMP_NE}, {SN_CompareScalarUnordered, OP_SSE_CMPSS, CMP_UNORD}, 
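/* Assumed reading of the CMP_*_UN rows above: the CompareNot* methods use
 * unordered predicates because NotGreaterThan(x, y) must also report true when
 * either operand is NaN, which matches the CMPSS/CMPPS "not greater than"
 * (i.e. less-or-equal-or-unordered) encodings. */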
{SN_CompareScalarUnorderedEqual, OP_SSE_UCOMISS, CMP_EQ}, {SN_CompareScalarUnorderedGreaterThan, OP_SSE_UCOMISS, CMP_GT}, {SN_CompareScalarUnorderedGreaterThanOrEqual, OP_SSE_UCOMISS, CMP_GE}, {SN_CompareScalarUnorderedLessThan, OP_SSE_UCOMISS, CMP_LT}, {SN_CompareScalarUnorderedLessThanOrEqual, OP_SSE_UCOMISS, CMP_LE}, {SN_CompareScalarUnorderedNotEqual, OP_SSE_UCOMISS, CMP_NE}, {SN_CompareUnordered, OP_XCOMPARE_FP, CMP_UNORD}, {SN_ConvertScalarToVector128Single}, {SN_ConvertToInt32, OP_XOP_I4_X, INTRINS_SSE_CVTSS2SI}, {SN_ConvertToInt32WithTruncation, OP_XOP_I4_X, INTRINS_SSE_CVTTSS2SI}, {SN_ConvertToInt64, OP_XOP_I8_X, INTRINS_SSE_CVTSS2SI64}, {SN_ConvertToInt64WithTruncation, OP_XOP_I8_X, INTRINS_SSE_CVTTSS2SI64}, {SN_Divide, OP_XBINOP, OP_FDIV}, {SN_DivideScalar, OP_SSE_DIVSS}, {SN_LoadAlignedVector128, OP_SSE_LOADU, 16 /* alignment */}, {SN_LoadHigh, OP_SSE_MOVHPS_LOAD}, {SN_LoadLow, OP_SSE_MOVLPS_LOAD}, {SN_LoadScalarVector128, OP_SSE_MOVSS}, {SN_LoadVector128, OP_SSE_LOADU, 1 /* alignment */}, {SN_Max, OP_XOP_X_X_X, INTRINS_SSE_MAXPS}, {SN_MaxScalar, OP_XOP_X_X_X, INTRINS_SSE_MAXSS}, {SN_Min, OP_XOP_X_X_X, INTRINS_SSE_MINPS}, {SN_MinScalar, OP_XOP_X_X_X, INTRINS_SSE_MINSS}, {SN_MoveHighToLow, OP_SSE_MOVEHL}, {SN_MoveLowToHigh, OP_SSE_MOVELH}, {SN_MoveMask, OP_SSE_MOVMSK}, {SN_MoveScalar, OP_SSE_MOVS2}, {SN_Multiply, OP_XBINOP, OP_FMUL}, {SN_MultiplyScalar, OP_SSE_MULSS}, {SN_Or, OP_SSE_OR}, {SN_Prefetch0, OP_SSE_PREFETCHT0}, {SN_Prefetch1, OP_SSE_PREFETCHT1}, {SN_Prefetch2, OP_SSE_PREFETCHT2}, {SN_PrefetchNonTemporal, OP_SSE_PREFETCHNTA}, {SN_Reciprocal, OP_XOP_X_X, INTRINS_SSE_RCP_PS}, {SN_ReciprocalScalar}, {SN_ReciprocalSqrt, OP_XOP_X_X, INTRINS_SSE_RSQRT_PS}, {SN_ReciprocalSqrtScalar}, {SN_Shuffle}, {SN_Sqrt, OP_XOP_X_X, INTRINS_SSE_SQRT_PS}, {SN_SqrtScalar}, {SN_Store, OP_SSE_STORE, 1 /* alignment */}, {SN_StoreAligned, OP_SSE_STORE, 16 /* alignment */}, {SN_StoreAlignedNonTemporal, OP_SSE_MOVNTPS, 16 /* alignment */}, {SN_StoreFence, OP_XOP, INTRINS_SSE_SFENCE}, {SN_StoreHigh, OP_SSE_MOVHPS_STORE}, {SN_StoreLow, OP_SSE_MOVLPS_STORE}, {SN_StoreScalar, OP_SSE_MOVSS_STORE}, {SN_Subtract, OP_XBINOP, OP_FSUB}, {SN_SubtractScalar, OP_SSE_SUBSS}, {SN_UnpackHigh, OP_SSE_UNPACKHI}, {SN_UnpackLow, OP_SSE_UNPACKLO}, {SN_Xor, OP_SSE_XOR}, {SN_get_IsSupported} }; static SimdIntrinsic sse2_methods [] = { {SN_Add}, {SN_AddSaturate, OP_SSE2_ADDS}, {SN_AddScalar, OP_SSE2_ADDSD}, {SN_And, OP_SSE_AND}, {SN_AndNot, OP_SSE_ANDN}, {SN_Average}, {SN_CompareEqual}, {SN_CompareGreaterThan}, {SN_CompareGreaterThanOrEqual, OP_XCOMPARE_FP, CMP_GE}, {SN_CompareLessThan}, {SN_CompareLessThanOrEqual, OP_XCOMPARE_FP, CMP_LE}, {SN_CompareNotEqual, OP_XCOMPARE_FP, CMP_NE}, {SN_CompareNotGreaterThan, OP_XCOMPARE_FP, CMP_LE_UN}, {SN_CompareNotGreaterThanOrEqual, OP_XCOMPARE_FP, CMP_LT_UN}, {SN_CompareNotLessThan, OP_XCOMPARE_FP, CMP_GE_UN}, {SN_CompareNotLessThanOrEqual, OP_XCOMPARE_FP, CMP_GT_UN}, {SN_CompareOrdered, OP_XCOMPARE_FP, CMP_ORD}, {SN_CompareScalarEqual, OP_SSE2_CMPSD, CMP_EQ}, {SN_CompareScalarGreaterThan, OP_SSE2_CMPSD, CMP_GT}, {SN_CompareScalarGreaterThanOrEqual, OP_SSE2_CMPSD, CMP_GE}, {SN_CompareScalarLessThan, OP_SSE2_CMPSD, CMP_LT}, {SN_CompareScalarLessThanOrEqual, OP_SSE2_CMPSD, CMP_LE}, {SN_CompareScalarNotEqual, OP_SSE2_CMPSD, CMP_NE}, {SN_CompareScalarNotGreaterThan, OP_SSE2_CMPSD, CMP_LE_UN}, {SN_CompareScalarNotGreaterThanOrEqual, OP_SSE2_CMPSD, CMP_LT_UN}, {SN_CompareScalarNotLessThan, OP_SSE2_CMPSD, CMP_GE_UN}, {SN_CompareScalarNotLessThanOrEqual, OP_SSE2_CMPSD, CMP_GT_UN}, 
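/* Rows that carry only a method name (e.g. {SN_Add}, {SN_Multiply}, {SN_Shuffle})
 * intentionally have no opcode: the table lookup still succeeds, and emission
 * falls through to the hand-written switch in emit_x86_intrinsics below, where
 * the opcode depends on the element type or the argument count. */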
{SN_CompareScalarOrdered, OP_SSE2_CMPSD, CMP_ORD}, {SN_CompareScalarOrderedEqual, OP_SSE2_COMISD, CMP_EQ}, {SN_CompareScalarOrderedGreaterThan, OP_SSE2_COMISD, CMP_GT}, {SN_CompareScalarOrderedGreaterThanOrEqual, OP_SSE2_COMISD, CMP_GE}, {SN_CompareScalarOrderedLessThan, OP_SSE2_COMISD, CMP_LT}, {SN_CompareScalarOrderedLessThanOrEqual, OP_SSE2_COMISD, CMP_LE}, {SN_CompareScalarOrderedNotEqual, OP_SSE2_COMISD, CMP_NE}, {SN_CompareScalarUnordered, OP_SSE2_CMPSD, CMP_UNORD}, {SN_CompareScalarUnorderedEqual, OP_SSE2_UCOMISD, CMP_EQ}, {SN_CompareScalarUnorderedGreaterThan, OP_SSE2_UCOMISD, CMP_GT}, {SN_CompareScalarUnorderedGreaterThanOrEqual, OP_SSE2_UCOMISD, CMP_GE}, {SN_CompareScalarUnorderedLessThan, OP_SSE2_UCOMISD, CMP_LT}, {SN_CompareScalarUnorderedLessThanOrEqual, OP_SSE2_UCOMISD, CMP_LE}, {SN_CompareScalarUnorderedNotEqual, OP_SSE2_UCOMISD, CMP_NE}, {SN_CompareUnordered, OP_XCOMPARE_FP, CMP_UNORD}, {SN_ConvertScalarToVector128Double}, {SN_ConvertScalarToVector128Int32}, {SN_ConvertScalarToVector128Int64}, {SN_ConvertScalarToVector128Single, OP_XOP_X_X_X, INTRINS_SSE_CVTSD2SS}, {SN_ConvertScalarToVector128UInt32}, {SN_ConvertScalarToVector128UInt64}, {SN_ConvertToInt32}, {SN_ConvertToInt32WithTruncation, OP_XOP_I4_X, INTRINS_SSE_CVTTSD2SI}, {SN_ConvertToInt64}, {SN_ConvertToInt64WithTruncation, OP_XOP_I8_X, INTRINS_SSE_CVTTSD2SI64}, {SN_ConvertToUInt32}, {SN_ConvertToUInt64}, {SN_ConvertToVector128Double}, {SN_ConvertToVector128Int32}, {SN_ConvertToVector128Int32WithTruncation}, {SN_ConvertToVector128Single}, {SN_Divide, OP_XBINOP, OP_FDIV}, {SN_DivideScalar, OP_SSE2_DIVSD}, {SN_Extract}, {SN_Insert}, {SN_LoadAlignedVector128}, {SN_LoadFence, OP_XOP, INTRINS_SSE_LFENCE}, {SN_LoadHigh, OP_SSE2_MOVHPD_LOAD}, {SN_LoadLow, OP_SSE2_MOVLPD_LOAD}, {SN_LoadScalarVector128}, {SN_LoadVector128}, {SN_MaskMove, OP_SSE2_MASKMOVDQU}, {SN_Max}, {SN_MaxScalar, OP_XOP_X_X_X, INTRINS_SSE_MAXSD}, {SN_MemoryFence, OP_XOP, INTRINS_SSE_MFENCE}, {SN_Min}, // FIXME: {SN_MinScalar, OP_XOP_X_X_X, INTRINS_SSE_MINSD}, {SN_MoveMask, OP_SSE_MOVMSK}, {SN_MoveScalar}, {SN_Multiply}, {SN_MultiplyAddAdjacent, OP_XOP_X_X_X, INTRINS_SSE_PMADDWD}, {SN_MultiplyHigh}, {SN_MultiplyLow, OP_PMULW}, {SN_MultiplyScalar, OP_SSE2_MULSD}, {SN_Or, OP_SSE_OR}, {SN_PackSignedSaturate}, {SN_PackUnsignedSaturate}, {SN_ShiftLeftLogical}, {SN_ShiftLeftLogical128BitLane}, {SN_ShiftRightArithmetic}, {SN_ShiftRightLogical}, {SN_ShiftRightLogical128BitLane}, {SN_Shuffle}, {SN_ShuffleHigh}, {SN_ShuffleLow}, {SN_Sqrt, OP_XOP_X_X, INTRINS_SSE_SQRT_PD}, {SN_SqrtScalar}, {SN_Store, OP_SSE_STORE, 1 /* alignment */}, {SN_StoreAligned, OP_SSE_STORE, 16 /* alignment */}, {SN_StoreAlignedNonTemporal, OP_SSE_MOVNTPS, 16 /* alignment */}, {SN_StoreHigh, OP_SSE2_MOVHPD_STORE}, {SN_StoreLow, OP_SSE2_MOVLPD_STORE}, {SN_StoreNonTemporal, OP_SSE_MOVNTPS, 1 /* alignment */}, {SN_StoreScalar, OP_SSE_STORES}, {SN_Subtract}, {SN_SubtractSaturate, OP_SSE2_SUBS}, {SN_SubtractScalar, OP_SSE2_SUBSD}, {SN_SumAbsoluteDifferences, OP_XOP_X_X_X, INTRINS_SSE_PSADBW}, {SN_UnpackHigh, OP_SSE_UNPACKHI}, {SN_UnpackLow, OP_SSE_UNPACKLO}, {SN_Xor, OP_SSE_XOR}, {SN_get_IsSupported} }; static SimdIntrinsic sse3_methods [] = { {SN_AddSubtract}, {SN_HorizontalAdd}, {SN_HorizontalSubtract}, {SN_LoadAndDuplicateToVector128, OP_SSE3_MOVDDUP_MEM}, {SN_LoadDquVector128, OP_XOP_X_I, INTRINS_SSE_LDU_DQ}, {SN_MoveAndDuplicate, OP_SSE3_MOVDDUP}, {SN_MoveHighAndDuplicate, OP_SSE3_MOVSHDUP}, {SN_MoveLowAndDuplicate, OP_SSE3_MOVSLDUP}, {SN_get_IsSupported} }; static SimdIntrinsic ssse3_methods 
[] = { {SN_Abs, OP_SSSE3_ABS}, {SN_AlignRight}, {SN_HorizontalAdd}, {SN_HorizontalAddSaturate, OP_XOP_X_X_X, INTRINS_SSE_PHADDSW}, {SN_HorizontalSubtract}, {SN_HorizontalSubtractSaturate, OP_XOP_X_X_X, INTRINS_SSE_PHSUBSW}, {SN_MultiplyAddAdjacent, OP_XOP_X_X_X, INTRINS_SSE_PMADDUBSW}, {SN_MultiplyHighRoundScale, OP_XOP_X_X_X, INTRINS_SSE_PMULHRSW}, {SN_Shuffle, OP_SSSE3_SHUFFLE}, {SN_Sign}, {SN_get_IsSupported} }; static SimdIntrinsic sse41_methods [] = { {SN_Blend}, {SN_BlendVariable}, {SN_Ceiling, OP_SSE41_ROUNDP, 10 /*round mode*/}, {SN_CeilingScalar, 0, 10 /*round mode*/}, {SN_CompareEqual, OP_XCOMPARE, CMP_EQ}, {SN_ConvertToVector128Int16, OP_SSE_CVTII, MONO_TYPE_I2}, {SN_ConvertToVector128Int32, OP_SSE_CVTII, MONO_TYPE_I4}, {SN_ConvertToVector128Int64, OP_SSE_CVTII, MONO_TYPE_I8}, {SN_DotProduct}, {SN_Extract}, {SN_Floor, OP_SSE41_ROUNDP, 9 /*round mode*/}, {SN_FloorScalar, 0, 9 /*round mode*/}, {SN_Insert}, {SN_LoadAlignedVector128NonTemporal, OP_SSE41_LOADANT}, {SN_Max, OP_XBINOP, OP_IMAX}, {SN_Min, OP_XBINOP, OP_IMIN}, {SN_MinHorizontal, OP_XOP_X_X, INTRINS_SSE_PHMINPOSUW}, {SN_MultipleSumAbsoluteDifferences}, {SN_Multiply, OP_SSE41_MUL}, {SN_MultiplyLow, OP_SSE41_MULLO}, {SN_PackUnsignedSaturate, OP_XOP_X_X_X, INTRINS_SSE_PACKUSDW}, {SN_RoundCurrentDirection, OP_SSE41_ROUNDP, 4 /*round mode*/}, {SN_RoundCurrentDirectionScalar, 0, 4 /*round mode*/}, {SN_RoundToNearestInteger, OP_SSE41_ROUNDP, 8 /*round mode*/}, {SN_RoundToNearestIntegerScalar, 0, 8 /*round mode*/}, {SN_RoundToNegativeInfinity, OP_SSE41_ROUNDP, 9 /*round mode*/}, {SN_RoundToNegativeInfinityScalar, 0, 9 /*round mode*/}, {SN_RoundToPositiveInfinity, OP_SSE41_ROUNDP, 10 /*round mode*/}, {SN_RoundToPositiveInfinityScalar, 0, 10 /*round mode*/}, {SN_RoundToZero, OP_SSE41_ROUNDP, 11 /*round mode*/}, {SN_RoundToZeroScalar, 0, 11 /*round mode*/}, {SN_TestC, OP_XOP_I4_X_X, INTRINS_SSE_TESTC}, {SN_TestNotZAndNotC, OP_XOP_I4_X_X, INTRINS_SSE_TESTNZ}, {SN_TestZ, OP_XOP_I4_X_X, INTRINS_SSE_TESTZ}, {SN_get_IsSupported} }; static SimdIntrinsic sse42_methods [] = { {SN_CompareGreaterThan, OP_XCOMPARE, CMP_GT}, {SN_Crc32}, {SN_get_IsSupported} }; static SimdIntrinsic pclmulqdq_methods [] = { {SN_CarrylessMultiply}, {SN_get_IsSupported} }; static SimdIntrinsic aes_methods [] = { {SN_Decrypt, OP_XOP_X_X_X, INTRINS_AESNI_AESDEC}, {SN_DecryptLast, OP_XOP_X_X_X, INTRINS_AESNI_AESDECLAST}, {SN_Encrypt, OP_XOP_X_X_X, INTRINS_AESNI_AESENC}, {SN_EncryptLast, OP_XOP_X_X_X, INTRINS_AESNI_AESENCLAST}, {SN_InverseMixColumns, OP_XOP_X_X, INTRINS_AESNI_AESIMC}, {SN_KeygenAssist}, {SN_get_IsSupported} }; static SimdIntrinsic popcnt_methods [] = { {SN_PopCount}, {SN_get_IsSupported} }; static SimdIntrinsic lzcnt_methods [] = { {SN_LeadingZeroCount}, {SN_get_IsSupported} }; static SimdIntrinsic bmi1_methods [] = { {SN_AndNot}, {SN_BitFieldExtract}, {SN_ExtractLowestSetBit}, {SN_GetMaskUpToLowestSetBit}, {SN_ResetLowestSetBit}, {SN_TrailingZeroCount}, {SN_get_IsSupported} }; static SimdIntrinsic bmi2_methods [] = { {SN_MultiplyNoFlags}, {SN_ParallelBitDeposit}, {SN_ParallelBitExtract}, {SN_ZeroHighBits}, {SN_get_IsSupported} }; static SimdIntrinsic x86base_methods [] = { {SN_BitScanForward}, {SN_BitScanReverse}, {SN_get_IsSupported} }; static const IntrinGroup supported_x86_intrinsics [] = { { "Aes", MONO_CPU_X86_AES, aes_methods, sizeof (aes_methods) }, { "Avx", MONO_CPU_X86_AVX, unsupported, sizeof (unsupported) }, { "Avx2", MONO_CPU_X86_AVX2, unsupported, sizeof (unsupported) }, { "AvxVnni", 0, unsupported, sizeof (unsupported) }, { "Bmi1", 
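/* The small integers in the sse41_methods rows above are ROUNDPS/ROUNDPD
 * immediates: bits 1:0 select the rounding mode, bit 2 = 0 means "use the
 * immediate mode", and bit 3 suppresses precision exceptions. Hence
 * 8 = to nearest, 9 = toward -inf (Floor), 10 = toward +inf (Ceiling),
 * 11 = toward zero, and 4 = follow the current MXCSR direction. */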
MONO_CPU_X86_BMI1, bmi1_methods, sizeof (bmi1_methods) }, { "Bmi2", MONO_CPU_X86_BMI2, bmi2_methods, sizeof (bmi2_methods) }, { "Fma", MONO_CPU_X86_FMA, unsupported, sizeof (unsupported) }, { "Lzcnt", MONO_CPU_X86_LZCNT, lzcnt_methods, sizeof (lzcnt_methods), TRUE }, { "Pclmulqdq", MONO_CPU_X86_PCLMUL, pclmulqdq_methods, sizeof (pclmulqdq_methods) }, { "Popcnt", MONO_CPU_X86_POPCNT, popcnt_methods, sizeof (popcnt_methods), TRUE }, { "Sse", MONO_CPU_X86_SSE, sse_methods, sizeof (sse_methods) }, { "Sse2", MONO_CPU_X86_SSE2, sse2_methods, sizeof (sse2_methods) }, { "Sse3", MONO_CPU_X86_SSE3, sse3_methods, sizeof (sse3_methods) }, { "Sse41", MONO_CPU_X86_SSE41, sse41_methods, sizeof (sse41_methods) }, { "Sse42", MONO_CPU_X86_SSE42, sse42_methods, sizeof (sse42_methods) }, { "Ssse3", MONO_CPU_X86_SSSE3, ssse3_methods, sizeof (ssse3_methods) }, { "X86Base", 0, x86base_methods, sizeof (x86base_methods) }, }; static MonoInst* emit_x86_intrinsics ( MonoCompile *cfg, MonoMethodSignature *fsig, MonoInst **args, MonoClass *klass, const IntrinGroup *intrin_group, const SimdIntrinsic *info, int id, MonoTypeEnum arg0_type, gboolean is_64bit) { MonoCPUFeatures feature = intrin_group->feature; const SimdIntrinsic *intrinsics = intrin_group->intrinsics; if (feature == MONO_CPU_X86_SSE) { switch (id) { case SN_Shuffle: return emit_simd_ins_for_sig (cfg, klass, OP_SSE_SHUFPS, 0, arg0_type, fsig, args); case SN_ConvertScalarToVector128Single: { int op = 0; switch (fsig->params [1]->type) { case MONO_TYPE_I4: op = OP_SSE_CVTSI2SS; break; case MONO_TYPE_I8: op = OP_SSE_CVTSI2SS64; break; default: g_assert_not_reached (); break; } return emit_simd_ins_for_sig (cfg, klass, op, 0, 0, fsig, args); } case SN_ReciprocalScalar: case SN_ReciprocalSqrtScalar: case SN_SqrtScalar: { int op = 0; switch (id) { case SN_ReciprocalScalar: op = OP_SSE_RCPSS; break; case SN_ReciprocalSqrtScalar: op = OP_SSE_RSQRTSS; break; case SN_SqrtScalar: op = OP_SSE_SQRTSS; break; }; if (fsig->param_count == 1) return emit_simd_ins (cfg, klass, op, args [0]->dreg, args[0]->dreg); else if (fsig->param_count == 2) return emit_simd_ins (cfg, klass, op, args [0]->dreg, args[1]->dreg); else g_assert_not_reached (); break; } case SN_LoadScalarVector128: return NULL; default: return NULL; } } if (feature == MONO_CPU_X86_SSE2) { switch (id) { case SN_Subtract: return emit_simd_ins_for_sig (cfg, klass, OP_XBINOP, arg0_type == MONO_TYPE_R8 ? OP_FSUB : OP_ISUB, arg0_type, fsig, args); case SN_Add: return emit_simd_ins_for_sig (cfg, klass, OP_XBINOP, arg0_type == MONO_TYPE_R8 ? OP_FADD : OP_IADD, arg0_type, fsig, args); case SN_Average: if (arg0_type == MONO_TYPE_U1) return emit_simd_ins_for_sig (cfg, klass, OP_PAVGB_UN, -1, arg0_type, fsig, args); else if (arg0_type == MONO_TYPE_U2) return emit_simd_ins_for_sig (cfg, klass, OP_PAVGW_UN, -1, arg0_type, fsig, args); else return NULL; case SN_CompareNotEqual: return emit_simd_ins_for_sig (cfg, klass, arg0_type == MONO_TYPE_R8 ? OP_XCOMPARE_FP : OP_XCOMPARE, CMP_NE, arg0_type, fsig, args); case SN_CompareEqual: return emit_simd_ins_for_sig (cfg, klass, arg0_type == MONO_TYPE_R8 ? OP_XCOMPARE_FP : OP_XCOMPARE, CMP_EQ, arg0_type, fsig, args); case SN_CompareGreaterThan: return emit_simd_ins_for_sig (cfg, klass, arg0_type == MONO_TYPE_R8 ? OP_XCOMPARE_FP : OP_XCOMPARE, CMP_GT, arg0_type, fsig, args); case SN_CompareLessThan: return emit_simd_ins_for_sig (cfg, klass, arg0_type == MONO_TYPE_R8 ? 
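/* One managed Compare* name covers both integer and floating-point element
 * types in this switch, so the opcode is picked from arg0_type; illustrative
 * mapping (assumed call shapes):
 *   Sse2.CompareLessThan (Vector128<double> ...) -> OP_XCOMPARE_FP, CMP_LT
 *   Sse2.CompareLessThan (Vector128<int> ...)    -> OP_XCOMPARE,    CMP_LT
 */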
OP_XCOMPARE_FP : OP_XCOMPARE, CMP_LT, arg0_type, fsig, args); case SN_ConvertToInt32: if (arg0_type == MONO_TYPE_R8) return emit_simd_ins_for_sig (cfg, klass, OP_XOP_I4_X, INTRINS_SSE_CVTSD2SI, arg0_type, fsig, args); else if (arg0_type == MONO_TYPE_I4) return emit_simd_ins_for_sig (cfg, klass, OP_EXTRACT_I4, 0, arg0_type, fsig, args); else return NULL; case SN_ConvertToInt64: if (arg0_type == MONO_TYPE_R8) return emit_simd_ins_for_sig (cfg, klass, OP_XOP_I8_X, INTRINS_SSE_CVTSD2SI64, arg0_type, fsig, args); else if (arg0_type == MONO_TYPE_I8) return emit_simd_ins_for_sig (cfg, klass, OP_EXTRACT_I8, 0 /*element index*/, arg0_type, fsig, args); else g_assert_not_reached (); break; case SN_ConvertScalarToVector128Double: { int op = OP_SSE2_CVTSS2SD; switch (fsig->params [1]->type) { case MONO_TYPE_I4: op = OP_SSE2_CVTSI2SD; break; case MONO_TYPE_I8: op = OP_SSE2_CVTSI2SD64; break; } return emit_simd_ins_for_sig (cfg, klass, op, 0, 0, fsig, args); } case SN_ConvertScalarToVector128Int32: case SN_ConvertScalarToVector128Int64: case SN_ConvertScalarToVector128UInt32: case SN_ConvertScalarToVector128UInt64: return emit_simd_ins_for_sig (cfg, klass, OP_CREATE_SCALAR, -1, arg0_type, fsig, args); case SN_ConvertToUInt32: return emit_simd_ins_for_sig (cfg, klass, OP_EXTRACT_I4, 0 /*element index*/, arg0_type, fsig, args); case SN_ConvertToUInt64: return emit_simd_ins_for_sig (cfg, klass, OP_EXTRACT_I8, 0 /*element index*/, arg0_type, fsig, args); case SN_ConvertToVector128Double: if (arg0_type == MONO_TYPE_R4) return emit_simd_ins_for_sig (cfg, klass, OP_CVTPS2PD, 0, arg0_type, fsig, args); else if (arg0_type == MONO_TYPE_I4) return emit_simd_ins_for_sig (cfg, klass, OP_CVTDQ2PD, 0, arg0_type, fsig, args); else return NULL; case SN_ConvertToVector128Int32: if (arg0_type == MONO_TYPE_R4) return emit_simd_ins_for_sig (cfg, klass, OP_CVTPS2DQ, 0, arg0_type, fsig, args); else if (arg0_type == MONO_TYPE_R8) return emit_simd_ins_for_sig (cfg, klass, OP_CVTPD2DQ, 0, arg0_type, fsig, args); else return NULL; case SN_ConvertToVector128Int32WithTruncation: if (arg0_type == MONO_TYPE_R4) return emit_simd_ins_for_sig (cfg, klass, OP_CVTTPS2DQ, 0, arg0_type, fsig, args); else if (arg0_type == MONO_TYPE_R8) return emit_simd_ins_for_sig (cfg, klass, OP_CVTTPD2DQ, 0, arg0_type, fsig, args); else return NULL; case SN_ConvertToVector128Single: if (arg0_type == MONO_TYPE_I4) return emit_simd_ins_for_sig (cfg, klass, OP_CVTDQ2PS, 0, arg0_type, fsig, args); else if (arg0_type == MONO_TYPE_R8) return emit_simd_ins_for_sig (cfg, klass, OP_CVTPD2PS, 0, arg0_type, fsig, args); else return NULL; case SN_LoadAlignedVector128: return emit_simd_ins_for_sig (cfg, klass, OP_SSE_LOADU, 16 /*alignment*/, arg0_type, fsig, args); case SN_LoadVector128: return emit_simd_ins_for_sig (cfg, klass, OP_SSE_LOADU, 1 /*alignment*/, arg0_type, fsig, args); case SN_MoveScalar: return emit_simd_ins_for_sig (cfg, klass, fsig->param_count == 2 ? 
OP_SSE_MOVS2 : OP_SSE_MOVS, -1, arg0_type, fsig, args); case SN_Max: switch (arg0_type) { case MONO_TYPE_U1: return emit_simd_ins_for_sig (cfg, klass, OP_PMAXB_UN, 0, arg0_type, fsig, args); case MONO_TYPE_I2: return emit_simd_ins_for_sig (cfg, klass, OP_PMAXW, 0, arg0_type, fsig, args); case MONO_TYPE_R8: return emit_simd_ins_for_sig (cfg, klass, OP_XOP_X_X_X, INTRINS_SSE_MAXPD, arg0_type, fsig, args); default: g_assert_not_reached (); break; } break; case SN_Min: switch (arg0_type) { case MONO_TYPE_U1: return emit_simd_ins_for_sig (cfg, klass, OP_PMINB_UN, 0, arg0_type, fsig, args); case MONO_TYPE_I2: return emit_simd_ins_for_sig (cfg, klass, OP_PMINW, 0, arg0_type, fsig, args); case MONO_TYPE_R8: return emit_simd_ins_for_sig (cfg, klass, OP_XOP_X_X_X, INTRINS_SSE_MINPD, arg0_type, fsig, args); default: g_assert_not_reached (); break; } break; case SN_Multiply: if (arg0_type == MONO_TYPE_U4) return emit_simd_ins_for_sig (cfg, klass, OP_SSE2_PMULUDQ, 0, arg0_type, fsig, args); else if (arg0_type == MONO_TYPE_R8) return emit_simd_ins_for_sig (cfg, klass, OP_MULPD, 0, arg0_type, fsig, args); else g_assert_not_reached (); case SN_MultiplyHigh: if (arg0_type == MONO_TYPE_I2) return emit_simd_ins_for_sig (cfg, klass, OP_XOP_X_X_X, INTRINS_SSE_PMULHW, arg0_type, fsig, args); else if (arg0_type == MONO_TYPE_U2) return emit_simd_ins_for_sig (cfg, klass, OP_XOP_X_X_X, INTRINS_SSE_PMULHUW, arg0_type, fsig, args); else g_assert_not_reached (); case SN_PackSignedSaturate: if (arg0_type == MONO_TYPE_I2) return emit_simd_ins_for_sig (cfg, klass, OP_XOP_X_X_X, INTRINS_SSE_PACKSSWB, arg0_type, fsig, args); else if (arg0_type == MONO_TYPE_I4) return emit_simd_ins_for_sig (cfg, klass, OP_XOP_X_X_X, INTRINS_SSE_PACKSSDW, arg0_type, fsig, args); else g_assert_not_reached (); case SN_PackUnsignedSaturate: return emit_simd_ins_for_sig (cfg, klass, OP_SSE2_PACKUS, -1, arg0_type, fsig, args); case SN_Extract: g_assert (arg0_type == MONO_TYPE_U2); return emit_simd_ins_for_sig (cfg, klass, OP_XEXTRACT_I4, 0, arg0_type, fsig, args); case SN_Insert: g_assert (arg0_type == MONO_TYPE_I2 || arg0_type == MONO_TYPE_U2); return emit_simd_ins_for_sig (cfg, klass, OP_XINSERT_I2, 0, arg0_type, fsig, args); case SN_ShiftRightLogical: { gboolean is_imm = fsig->params [1]->type == MONO_TYPE_U1; IntrinsicId op = (IntrinsicId)0; switch (arg0_type) { case MONO_TYPE_I2: case MONO_TYPE_U2: op = is_imm ? INTRINS_SSE_PSRLI_W : INTRINS_SSE_PSRL_W; break; case MONO_TYPE_I4: case MONO_TYPE_U4: op = is_imm ? INTRINS_SSE_PSRLI_D : INTRINS_SSE_PSRL_D; break; case MONO_TYPE_I8: case MONO_TYPE_U8: op = is_imm ? INTRINS_SSE_PSRLI_Q : INTRINS_SSE_PSRL_Q; break; default: g_assert_not_reached (); break; } return emit_simd_ins_for_sig (cfg, klass, is_imm ? OP_XOP_X_X_I4 : OP_XOP_X_X_X, op, arg0_type, fsig, args); } case SN_ShiftRightArithmetic: { gboolean is_imm = fsig->params [1]->type == MONO_TYPE_U1; IntrinsicId op = (IntrinsicId)0; switch (arg0_type) { case MONO_TYPE_I2: case MONO_TYPE_U2: op = is_imm ? INTRINS_SSE_PSRAI_W : INTRINS_SSE_PSRA_W; break; case MONO_TYPE_I4: case MONO_TYPE_U4: op = is_imm ? INTRINS_SSE_PSRAI_D : INTRINS_SSE_PSRA_D; break; default: g_assert_not_reached (); break; } return emit_simd_ins_for_sig (cfg, klass, is_imm ? OP_XOP_X_X_I4 : OP_XOP_X_X_X, op, arg0_type, fsig, args); } case SN_ShiftLeftLogical: { gboolean is_imm = fsig->params [1]->type == MONO_TYPE_U1; IntrinsicId op = (IntrinsicId)0; switch (arg0_type) { case MONO_TYPE_I2: case MONO_TYPE_U2: op = is_imm ? 
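/* For the three shift families here, is_imm checks whether the managed count
 * parameter is a byte: the byte overloads map to the immediate encodings
 * (pslli/psrli/psrai via OP_XOP_X_X_I4) while the Vector128 count overloads
 * map to the xmm-count forms (psll/psrl/psra via OP_XOP_X_X_X). */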
INTRINS_SSE_PSLLI_W : INTRINS_SSE_PSLL_W; break; case MONO_TYPE_I4: case MONO_TYPE_U4: op = is_imm ? INTRINS_SSE_PSLLI_D : INTRINS_SSE_PSLL_D; break; case MONO_TYPE_I8: case MONO_TYPE_U8: op = is_imm ? INTRINS_SSE_PSLLI_Q : INTRINS_SSE_PSLL_Q; break; default: g_assert_not_reached (); break; } return emit_simd_ins_for_sig (cfg, klass, is_imm ? OP_XOP_X_X_I4 : OP_XOP_X_X_X, op, arg0_type, fsig, args); } case SN_ShiftLeftLogical128BitLane: return emit_simd_ins_for_sig (cfg, klass, OP_SSE2_PSLLDQ, 0, arg0_type, fsig, args); case SN_ShiftRightLogical128BitLane: return emit_simd_ins_for_sig (cfg, klass, OP_SSE2_PSRLDQ, 0, arg0_type, fsig, args); case SN_Shuffle: { if (fsig->param_count == 2) { g_assert (arg0_type == MONO_TYPE_I4 || arg0_type == MONO_TYPE_U4); return emit_simd_ins_for_sig (cfg, klass, OP_SSE2_PSHUFD, 0, arg0_type, fsig, args); } else if (fsig->param_count == 3) { g_assert (arg0_type == MONO_TYPE_R8); return emit_simd_ins_for_sig (cfg, klass, OP_SSE2_SHUFPD, 0, arg0_type, fsig, args); } else { g_assert_not_reached (); break; } } case SN_ShuffleHigh: g_assert (fsig->param_count == 2); return emit_simd_ins_for_sig (cfg, klass, OP_SSE2_PSHUFHW, 0, arg0_type, fsig, args); case SN_ShuffleLow: g_assert (fsig->param_count == 2); return emit_simd_ins_for_sig (cfg, klass, OP_SSE2_PSHUFLW, 0, arg0_type, fsig, args); case SN_SqrtScalar: { if (fsig->param_count == 1) return emit_simd_ins (cfg, klass, OP_SSE2_SQRTSD, args [0]->dreg, args[0]->dreg); else if (fsig->param_count == 2) return emit_simd_ins (cfg, klass, OP_SSE2_SQRTSD, args [0]->dreg, args[1]->dreg); else { g_assert_not_reached (); break; } } case SN_LoadScalarVector128: { int op = 0; switch (arg0_type) { case MONO_TYPE_I4: case MONO_TYPE_U4: op = OP_SSE2_MOVD; break; case MONO_TYPE_I8: case MONO_TYPE_U8: op = OP_SSE2_MOVQ; break; case MONO_TYPE_R8: op = OP_SSE2_MOVUPD; break; default: g_assert_not_reached(); break; } return emit_simd_ins_for_sig (cfg, klass, op, 0, 0, fsig, args); } default: return NULL; } } if (feature == MONO_CPU_X86_SSE3) { switch (id) { case SN_AddSubtract: if (arg0_type == MONO_TYPE_R4) return emit_simd_ins_for_sig (cfg, klass, OP_XOP_X_X_X, INTRINS_SSE_ADDSUBPS, arg0_type, fsig, args); else if (arg0_type == MONO_TYPE_R8) return emit_simd_ins_for_sig (cfg, klass, OP_XOP_X_X_X, INTRINS_SSE_ADDSUBPD, arg0_type, fsig, args); else g_assert_not_reached (); break; case SN_HorizontalAdd: if (arg0_type == MONO_TYPE_R4) return emit_simd_ins_for_sig (cfg, klass, OP_XOP_X_X_X, INTRINS_SSE_HADDPS, arg0_type, fsig, args); else if (arg0_type == MONO_TYPE_R8) return emit_simd_ins_for_sig (cfg, klass, OP_XOP_X_X_X, INTRINS_SSE_HADDPD, arg0_type, fsig, args); else g_assert_not_reached (); break; case SN_HorizontalSubtract: if (arg0_type == MONO_TYPE_R4) return emit_simd_ins_for_sig (cfg, klass, OP_XOP_X_X_X, INTRINS_SSE_HSUBPS, arg0_type, fsig, args); else if (arg0_type == MONO_TYPE_R8) return emit_simd_ins_for_sig (cfg, klass, OP_XOP_X_X_X, INTRINS_SSE_HSUBPD, arg0_type, fsig, args); else g_assert_not_reached (); break; default: g_assert_not_reached (); break; } } if (feature == MONO_CPU_X86_SSSE3) { switch (id) { case SN_AlignRight: return emit_simd_ins_for_sig (cfg, klass, OP_SSSE3_ALIGNR, 0, arg0_type, fsig, args); case SN_HorizontalAdd: if (arg0_type == MONO_TYPE_I2) return emit_simd_ins_for_sig (cfg, klass, OP_XOP_X_X_X, INTRINS_SSE_PHADDW, arg0_type, fsig, args); return emit_simd_ins_for_sig (cfg, klass, OP_XOP_X_X_X, INTRINS_SSE_PHADDD, arg0_type, fsig, args); case SN_HorizontalSubtract: if (arg0_type == MONO_TYPE_I2) 
return emit_simd_ins_for_sig (cfg, klass, OP_XOP_X_X_X, INTRINS_SSE_PHSUBW, arg0_type, fsig, args); return emit_simd_ins_for_sig (cfg, klass, OP_XOP_X_X_X, INTRINS_SSE_PHSUBD, arg0_type, fsig, args); case SN_Sign: if (arg0_type == MONO_TYPE_I1) return emit_simd_ins_for_sig (cfg, klass, OP_XOP_X_X_X, INTRINS_SSE_PSIGNB, arg0_type, fsig, args); if (arg0_type == MONO_TYPE_I2) return emit_simd_ins_for_sig (cfg, klass, OP_XOP_X_X_X, INTRINS_SSE_PSIGNW, arg0_type, fsig, args); return emit_simd_ins_for_sig (cfg, klass, OP_XOP_X_X_X, INTRINS_SSE_PSIGND, arg0_type, fsig, args); default: g_assert_not_reached (); break; } } if (feature == MONO_CPU_X86_SSE41) { switch (id) { case SN_DotProduct: { int op = 0; switch (arg0_type) { case MONO_TYPE_R4: op = OP_SSE41_DPPS; break; case MONO_TYPE_R8: op = OP_SSE41_DPPD; break; default: g_assert_not_reached (); break; } return emit_simd_ins_for_sig (cfg, klass, op, 0, arg0_type, fsig, args); } case SN_MultipleSumAbsoluteDifferences: return emit_simd_ins_for_sig (cfg, klass, OP_SSE41_MPSADBW, 0, arg0_type, fsig, args); case SN_Blend: return emit_simd_ins_for_sig (cfg, klass, OP_SSE41_BLEND, 0, arg0_type, fsig, args); case SN_BlendVariable: return emit_simd_ins_for_sig (cfg, klass, OP_SSE41_BLENDV, -1, arg0_type, fsig, args); case SN_Extract: { int op = 0; switch (arg0_type) { case MONO_TYPE_U1: op = OP_XEXTRACT_I1; break; case MONO_TYPE_U4: case MONO_TYPE_I4: op = OP_XEXTRACT_I4; break; case MONO_TYPE_U8: case MONO_TYPE_I8: op = OP_XEXTRACT_I8; break; case MONO_TYPE_R4: op = OP_XEXTRACT_R4; break; case MONO_TYPE_I: case MONO_TYPE_U: #if TARGET_SIZEOF_VOID_P == 8 op = OP_XEXTRACT_I8; #else op = OP_XEXTRACT_I4; #endif break; default: g_assert_not_reached(); break; } return emit_simd_ins_for_sig (cfg, klass, op, 0, arg0_type, fsig, args); } case SN_Insert: { int op = arg0_type == MONO_TYPE_R4 ? OP_SSE41_INSERTPS : type_to_xinsert_op (arg0_type); return emit_simd_ins_for_sig (cfg, klass, op, -1, arg0_type, fsig, args); } case SN_CeilingScalar: case SN_FloorScalar: case SN_RoundCurrentDirectionScalar: case SN_RoundToNearestIntegerScalar: case SN_RoundToNegativeInfinityScalar: case SN_RoundToPositiveInfinityScalar: case SN_RoundToZeroScalar: if (fsig->param_count == 2) { return emit_simd_ins_for_sig (cfg, klass, OP_SSE41_ROUNDS, info->default_instc0, arg0_type, fsig, args); } else { MonoInst* ins = emit_simd_ins (cfg, klass, OP_SSE41_ROUNDS, args [0]->dreg, args [0]->dreg); ins->inst_c0 = info->default_instc0; ins->inst_c1 = arg0_type; return ins; } break; default: g_assert_not_reached (); break; } } if (feature == MONO_CPU_X86_SSE42) { switch (id) { case SN_Crc32: { MonoTypeEnum arg1_type = get_underlying_type (fsig->params [1]); return emit_simd_ins_for_sig (cfg, klass, arg1_type == MONO_TYPE_U8 ? OP_SSE42_CRC64 : OP_SSE42_CRC32, arg1_type, arg0_type, fsig, args); } default: g_assert_not_reached (); break; } } if (feature == MONO_CPU_X86_PCLMUL) { switch (id) { case SN_CarrylessMultiply: { return emit_simd_ins_for_sig (cfg, klass, OP_PCLMULQDQ, 0, arg0_type, fsig, args); } default: g_assert_not_reached (); break; } } if (feature == MONO_CPU_X86_AES) { switch (id) { case SN_KeygenAssist: { return emit_simd_ins_for_sig (cfg, klass, OP_AES_KEYGENASSIST, 0, arg0_type, fsig, args); } default: g_assert_not_reached (); break; } } MonoInst *ins = NULL; if (feature == MONO_CPU_X86_POPCNT) { switch (id) { case SN_PopCount: MONO_INST_NEW (cfg, ins, is_64bit ? OP_POPCNT64 : OP_POPCNT32); ins->dreg = is_64bit ? 
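/* Scalar (non-vector) intrinsics such as PopCount are emitted with the raw
 * MONO_INST_NEW pattern rather than emit_simd_ins_for_sig: allocate an int or
 * long vreg for the result, wire up sreg1, set the evaluation-stack type and
 * append the instruction to the current bblock, exactly as the code split by
 * this comment does. */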
alloc_lreg (cfg) : alloc_ireg (cfg); ins->sreg1 = args [0]->dreg; ins->type = is_64bit ? STACK_I8 : STACK_I4; MONO_ADD_INS (cfg->cbb, ins); return ins; default: return NULL; } } if (feature == MONO_CPU_X86_LZCNT) { switch (id) { case SN_LeadingZeroCount: return emit_simd_ins_for_sig (cfg, klass, is_64bit ? OP_LZCNT64 : OP_LZCNT32, 0, arg0_type, fsig, args); default: return NULL; } } if (feature == MONO_CPU_X86_BMI1) { switch (id) { case SN_AndNot: { // (a ^ -1) & b // LLVM replaces it with `andn` int tmp_reg = is_64bit ? alloc_lreg (cfg) : alloc_ireg (cfg); int result_reg = is_64bit ? alloc_lreg (cfg) : alloc_ireg (cfg); EMIT_NEW_BIALU_IMM (cfg, ins, is_64bit ? OP_LXOR_IMM : OP_IXOR_IMM, tmp_reg, args [0]->dreg, -1); EMIT_NEW_BIALU (cfg, ins, is_64bit ? OP_LAND : OP_IAND, result_reg, tmp_reg, args [1]->dreg); return ins; } case SN_BitFieldExtract: { int ctlreg = args [1]->dreg; if (fsig->param_count == 2) { } else if (fsig->param_count == 3) { MonoInst *ins = NULL; /* This intrinsic is also implemented in managed code. * TODO: remove this if cross-AOT-assembly inlining works */ int startreg = args [1]->dreg; int lenreg = args [2]->dreg; int dreg1 = alloc_ireg (cfg); EMIT_NEW_BIALU_IMM (cfg, ins, OP_SHL_IMM, dreg1, lenreg, 8); int dreg2 = alloc_ireg (cfg); EMIT_NEW_BIALU (cfg, ins, OP_IOR, dreg2, startreg, dreg1); ctlreg = dreg2; } else { g_assert_not_reached (); } return emit_simd_ins (cfg, klass, is_64bit ? OP_BMI1_BEXTR64 : OP_BMI1_BEXTR32, args [0]->dreg, ctlreg); } case SN_GetMaskUpToLowestSetBit: { // x ^ (x - 1) // LLVM replaces it with `blsmsk` int tmp_reg = is_64bit ? alloc_lreg (cfg) : alloc_ireg (cfg); int result_reg = is_64bit ? alloc_lreg (cfg) : alloc_ireg (cfg); EMIT_NEW_BIALU_IMM (cfg, ins, is_64bit ? OP_LSUB_IMM : OP_ISUB_IMM, tmp_reg, args [0]->dreg, 1); EMIT_NEW_BIALU (cfg, ins, is_64bit ? OP_LXOR : OP_IXOR, result_reg, args [0]->dreg, tmp_reg); return ins; } case SN_ResetLowestSetBit: { // x & (x - 1) // LLVM replaces it with `blsr` int tmp_reg = is_64bit ? alloc_lreg (cfg) : alloc_ireg (cfg); int result_reg = is_64bit ? alloc_lreg (cfg) : alloc_ireg (cfg); EMIT_NEW_BIALU_IMM (cfg, ins, is_64bit ? OP_LSUB_IMM : OP_ISUB_IMM, tmp_reg, args [0]->dreg, 1); EMIT_NEW_BIALU (cfg, ins, is_64bit ? OP_LAND : OP_IAND, result_reg, args [0]->dreg, tmp_reg); return ins; } case SN_ExtractLowestSetBit: { // x & (0 - x) // LLVM replaces it with `blsi` int tmp_reg = is_64bit ? alloc_lreg (cfg) : alloc_ireg (cfg); int result_reg = is_64bit ? alloc_lreg (cfg) : alloc_ireg (cfg); int zero_reg = is_64bit ? alloc_lreg (cfg) : alloc_ireg (cfg); MONO_EMIT_NEW_ICONST (cfg, zero_reg, 0); EMIT_NEW_BIALU (cfg, ins, is_64bit ? OP_LSUB : OP_ISUB, tmp_reg, zero_reg, args [0]->dreg); EMIT_NEW_BIALU (cfg, ins, is_64bit ? OP_LAND : OP_IAND, result_reg, args [0]->dreg, tmp_reg); return ins; } case SN_TrailingZeroCount: MONO_INST_NEW (cfg, ins, is_64bit ? OP_CTTZ64 : OP_CTTZ32); ins->dreg = is_64bit ? alloc_lreg (cfg) : alloc_ireg (cfg); ins->sreg1 = args [0]->dreg; ins->type = is_64bit ? STACK_I8 : STACK_I4; MONO_ADD_INS (cfg->cbb, ins); return ins; default: g_assert_not_reached (); } } if (feature == MONO_CPU_X86_BMI2) { switch (id) { case SN_MultiplyNoFlags: { int op = 0; if (fsig->param_count == 2) { op = is_64bit ? OP_MULX_H64 : OP_MULX_H32; } else if (fsig->param_count == 3) { op = is_64bit ? OP_MULX_HL64 : OP_MULX_HL32; } else { g_assert_not_reached (); } return emit_simd_ins_for_sig (cfg, klass, op, 0, 0, fsig, args); } case SN_ZeroHighBits: MONO_INST_NEW (cfg, ins, is_64bit ? 
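/* In the BMI1 BitFieldExtract case above, the three-argument overload packs its
 * control word in IR as (length << 8) | start, matching the BEXTR operand
 * layout (start in bits 7:0, length in bits 15:8); the two-argument overload
 * passes an already-packed control register through unchanged. */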
OP_BZHI64 : OP_BZHI32); ins->dreg = is_64bit ? alloc_lreg (cfg) : alloc_ireg (cfg); ins->sreg1 = args [0]->dreg; ins->sreg2 = args [1]->dreg; ins->type = is_64bit ? STACK_I8 : STACK_I4; MONO_ADD_INS (cfg->cbb, ins); return ins; case SN_ParallelBitExtract: MONO_INST_NEW (cfg, ins, is_64bit ? OP_PEXT64 : OP_PEXT32); ins->dreg = is_64bit ? alloc_lreg (cfg) : alloc_ireg (cfg); ins->sreg1 = args [0]->dreg; ins->sreg2 = args [1]->dreg; ins->type = is_64bit ? STACK_I8 : STACK_I4; MONO_ADD_INS (cfg->cbb, ins); return ins; case SN_ParallelBitDeposit: MONO_INST_NEW (cfg, ins, is_64bit ? OP_PDEP64 : OP_PDEP32); ins->dreg = is_64bit ? alloc_lreg (cfg) : alloc_ireg (cfg); ins->sreg1 = args [0]->dreg; ins->sreg2 = args [1]->dreg; ins->type = is_64bit ? STACK_I8 : STACK_I4; MONO_ADD_INS (cfg->cbb, ins); return ins; default: g_assert_not_reached (); } } if (intrinsics == x86base_methods) { switch (id) { case SN_BitScanForward: MONO_INST_NEW (cfg, ins, is_64bit ? OP_X86_BSF64 : OP_X86_BSF32); ins->dreg = is_64bit ? alloc_lreg (cfg) : alloc_ireg (cfg); ins->sreg1 = args [0]->dreg; ins->type = is_64bit ? STACK_I8 : STACK_I4; MONO_ADD_INS (cfg->cbb, ins); return ins; case SN_BitScanReverse: MONO_INST_NEW (cfg, ins, is_64bit ? OP_X86_BSR64 : OP_X86_BSR32); ins->dreg = is_64bit ? alloc_lreg (cfg) : alloc_ireg (cfg); ins->sreg1 = args [0]->dreg; ins->type = is_64bit ? STACK_I8 : STACK_I4; MONO_ADD_INS (cfg->cbb, ins); return ins; default: g_assert_not_reached (); } } return NULL; } static guint16 vector_256_t_methods [] = { SN_get_Count, }; static MonoInst* emit_vector256_t (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **args) { MonoInst *ins; MonoType *etype; MonoClass *klass; int size, len, id; id = lookup_intrins (vector_256_t_methods, sizeof (vector_256_t_methods), cmethod); if (id == -1) return NULL; klass = cmethod->klass; etype = mono_class_get_context (klass)->class_inst->type_argv [0]; size = mono_class_value_size (mono_class_from_mono_type_internal (etype), NULL); g_assert (size); len = 32 / size; if (!MONO_TYPE_IS_PRIMITIVE (etype) || etype->type == MONO_TYPE_CHAR || etype->type == MONO_TYPE_BOOLEAN || etype->type == MONO_TYPE_I || etype->type == MONO_TYPE_U) return NULL; if (cfg->verbose_level > 1) { char *name = mono_method_full_name (cmethod, TRUE); printf (" SIMD intrinsic %s\n", name); g_free (name); } switch (id) { case SN_get_Count: if (!(fsig->param_count == 0 && fsig->ret->type == MONO_TYPE_I4)) break; EMIT_NEW_ICONST (cfg, ins, len); return ins; default: break; } return NULL; } static MonoInst* emit_amd64_intrinsics (const char *class_ns, const char *class_name, MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **args) { if (!strcmp (class_ns, "System.Runtime.Intrinsics.X86")) { return emit_hardware_intrinsics (cfg, cmethod, fsig, args, supported_x86_intrinsics, sizeof (supported_x86_intrinsics), emit_x86_intrinsics); } if (!strcmp (class_ns, "System.Runtime.Intrinsics")) { if (!strcmp (class_name, "Vector256`1")) return emit_vector256_t (cfg, cmethod, fsig, args); } if (!strcmp (class_ns, "System.Numerics")) { if (!strcmp (class_name, "Vector")) return emit_sys_numerics_vector (cfg, cmethod, fsig, args); if (!strcmp (class_name, "Vector`1")) return emit_sys_numerics_vector_t (cfg, cmethod, fsig, args); } return NULL; } #endif // !TARGET_ARM64 #ifdef TARGET_ARM64 static MonoInst* emit_simd_intrinsics (const char *class_ns, const char *class_name, MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **args) { // 
FIXME: implement Vector64<T>, Vector128<T> and Vector<T> for Arm64 if (!strcmp (class_ns, "System.Runtime.Intrinsics.Arm")) { return emit_hardware_intrinsics(cfg, cmethod, fsig, args, supported_arm_intrinsics, sizeof (supported_arm_intrinsics), emit_arm64_intrinsics); } return NULL; } #elif TARGET_AMD64 // TODO: test and enable for x86 too static MonoInst* emit_simd_intrinsics (const char *class_ns, const char *class_name, MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **args) { MonoInst *simd_inst = emit_amd64_intrinsics (class_ns, class_name, cfg, cmethod, fsig, args); if (simd_inst != NULL) cfg->uses_simd_intrinsics |= MONO_CFG_USES_SIMD_INTRINSICS; return simd_inst; } #else static MonoInst* emit_simd_intrinsics (const char *class_ns, const char *class_name, MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **args) { return NULL; } #endif MonoInst* mono_emit_simd_intrinsics (MonoCompile *cfg, MonoMethod *cmethod, MonoMethodSignature *fsig, MonoInst **args) { const char *class_name; const char *class_ns; MonoImage *image = m_class_get_image (cmethod->klass); if (image != mono_get_corlib ()) return NULL; class_ns = m_class_get_name_space (cmethod->klass); class_name = m_class_get_name (cmethod->klass); // If cmethod->klass is nested, the namespace is on the enclosing class. if (m_class_get_nested_in (cmethod->klass)) class_ns = m_class_get_name_space (m_class_get_nested_in (cmethod->klass)); #if defined(TARGET_ARM64) || defined(TARGET_AMD64) if (!strcmp (class_ns, "System.Runtime.Intrinsics")) { if (!strcmp (class_name, "Vector128") || !strcmp (class_name, "Vector64")) return emit_sri_vector (cfg, cmethod, fsig, args); } if (!strcmp (class_ns, "System.Runtime.Intrinsics")) { if (!strcmp (class_name, "Vector128`1") || !strcmp (class_name, "Vector64`1")) return emit_vector64_vector128_t (cfg, cmethod, fsig, args); } #endif // defined(TARGET_ARM64) || defined(TARGET_AMD64) #if defined(TARGET_ARM64) if (!strcmp (class_ns, "System.Numerics") && !strcmp (class_name, "Vector")){ return emit_sri_vector (cfg, cmethod, fsig, args); } #endif // defined(TARGET_ARM64) return emit_simd_intrinsics (class_ns, class_name, cfg, cmethod, fsig, args); } /* * Windows x64 value type ABI uses reg/stack references (ArgValuetypeAddrInIReg/ArgValuetypeAddrOnStack) * for function arguments. When using SIMD intrinsics, arguments optimized into OP_ARG need to be decomposed * into corresponding SIMD LOADX/STOREX instructions.
*/ #if defined(TARGET_WIN32) && defined(TARGET_AMD64) static gboolean decompose_vtype_opt_uses_simd_intrinsics (MonoCompile *cfg, MonoInst *ins) { if (cfg->uses_simd_intrinsics & MONO_CFG_USES_SIMD_INTRINSICS) return TRUE; switch (ins->opcode) { case OP_XMOVE: case OP_XZERO: case OP_XPHI: case OP_LOADX_MEMBASE: case OP_LOADX_ALIGNED_MEMBASE: case OP_STOREX_MEMBASE: case OP_STOREX_ALIGNED_MEMBASE_REG: return TRUE; default: return FALSE; } } static void decompose_vtype_opt_load_arg (MonoCompile *cfg, MonoBasicBlock *bb, MonoInst *ins, gint32 *sreg_int32) { guint32 *sreg = (guint32*)sreg_int32; MonoInst *src_var = get_vreg_to_inst (cfg, *sreg); if (src_var && src_var->opcode == OP_ARG && src_var->klass && MONO_CLASS_IS_SIMD (cfg, src_var->klass)) { MonoInst *varload_ins, *load_ins; NEW_VARLOADA (cfg, varload_ins, src_var, src_var->inst_vtype); mono_bblock_insert_before_ins (bb, ins, varload_ins); MONO_INST_NEW (cfg, load_ins, OP_LOADX_MEMBASE); load_ins->klass = src_var->klass; load_ins->type = STACK_VTYPE; load_ins->sreg1 = varload_ins->dreg; load_ins->dreg = alloc_xreg (cfg); mono_bblock_insert_after_ins (bb, varload_ins, load_ins); *sreg = load_ins->dreg; } } static void decompose_vtype_opt_store_arg (MonoCompile *cfg, MonoBasicBlock *bb, MonoInst *ins, gint32 *dreg_int32) { guint32 *dreg = (guint32*)dreg_int32; MonoInst *dest_var = get_vreg_to_inst (cfg, *dreg); if (dest_var && dest_var->opcode == OP_ARG && dest_var->klass && MONO_CLASS_IS_SIMD (cfg, dest_var->klass)) { MonoInst *varload_ins, *store_ins; *dreg = alloc_xreg (cfg); NEW_VARLOADA (cfg, varload_ins, dest_var, dest_var->inst_vtype); mono_bblock_insert_after_ins (bb, ins, varload_ins); MONO_INST_NEW (cfg, store_ins, OP_STOREX_MEMBASE); store_ins->klass = dest_var->klass; store_ins->type = STACK_VTYPE; store_ins->sreg1 = *dreg; store_ins->dreg = varload_ins->dreg; mono_bblock_insert_after_ins (bb, varload_ins, store_ins); } } void mono_simd_decompose_intrinsic (MonoCompile *cfg, MonoBasicBlock *bb, MonoInst *ins) { if ((cfg->opt & MONO_OPT_SIMD) && decompose_vtype_opt_uses_simd_intrinsics(cfg, ins)) { const char *spec = INS_INFO (ins->opcode); if (spec [MONO_INST_SRC1] == 'x') decompose_vtype_opt_load_arg (cfg, bb, ins, &(ins->sreg1)); if (spec [MONO_INST_SRC2] == 'x') decompose_vtype_opt_load_arg (cfg, bb, ins, &(ins->sreg2)); if (spec [MONO_INST_SRC3] == 'x') decompose_vtype_opt_load_arg (cfg, bb, ins, &(ins->sreg3)); if (spec [MONO_INST_DEST] == 'x') decompose_vtype_opt_store_arg (cfg, bb, ins, &(ins->dreg)); } } #else void mono_simd_decompose_intrinsic (MonoCompile *cfg, MonoBasicBlock *bb, MonoInst *ins) { } #endif /*defined(TARGET_WIN32) && defined(TARGET_AMD64)*/ void mono_simd_simplify_indirection (MonoCompile *cfg) { } #endif /* DISABLE_JIT */ #endif /* MONO_ARCH_SIMD_INTRINSICS */ #if defined(TARGET_AMD64) void ves_icall_System_Runtime_Intrinsics_X86_X86Base___cpuidex (int abcd[4], int function_id, int subfunction_id) { #ifndef MONO_CROSS_COMPILE mono_hwcap_x86_call_cpuidex (function_id, subfunction_id, &abcd [0], &abcd [1], &abcd [2], &abcd [3]); #endif } #endif MONO_EMPTY_SOURCE_FILE (simd_intrinsics_netcore);
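/* Sketch of the Windows-x64 decompose pass above (pseudo-IR, assumed for
 * illustration only). A SIMD argument backed by OP_ARG and used by value is
 * rewritten so the xreg is (re)loaded from the argument's address:
 *
 *   before:  XBINOP          x1 <- arg_vreg, x2
 *   after:   varloada        t  <- &arg
 *            LOADX_MEMBASE   x3 <- [t]
 *            XBINOP          x1 <- x3, x2
 *
 * Writes to such an argument get the mirrored STOREX_MEMBASE treatment in
 * decompose_vtype_opt_store_arg. */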
1
dotnet/runtime
65,889
[Mono] Add SIMD intrinsics for Vector64/128 "All" and "Any" variants of GT/GE/LT/LE
Related to #64072 - `GreaterThanAll` - `GreaterThanAny` - `GreaterThanOrEqualAll` - `GreaterThanOrEqualAny` - `LessThanAll` - `LessThanAny` - `LessThanOrEqualAll` - `LessThanOrEqualAny`
simonrozsival
"2022-02-25T12:07:00Z"
"2022-03-09T14:24:14Z"
e32f3b61cd41e6a97ebe8f512ff673b63ff40640
cbcc616cf386b88e49f97f74182ffff241528179
[Mono] Add SIMD intrinsics for Vector64/128 "All" and "Any" variants of GT/GE/LT/LE. Related to #64072 - `GreaterThanAll` - `GreaterThanAny` - `GreaterThanOrEqualAll` - `GreaterThanOrEqualAny` - `LessThanAll` - `LessThanAny` - `LessThanOrEqualAll` - `LessThanOrEqualAny`
./src/mono/mono/mini/simd-methods.h
METHOD2(".ctor", ctor) METHOD(CopyTo) METHOD(Equals) METHOD(GreaterThan) METHOD(GreaterThanOrEqual) METHOD(LessThan) METHOD(LessThanOrEqual) METHOD(Min) METHOD(Max) METHOD(MinScalar) METHOD(MaxScalar) METHOD(PopCount) METHOD(LeadingZeroCount) METHOD(get_Count) METHOD(get_IsHardwareAccelerated) METHOD(get_IsSupported) METHOD(get_AllBitsSet) METHOD(get_Item) METHOD(get_One) METHOD(get_Zero) METHOD(op_Addition) METHOD(op_BitwiseAnd) METHOD(op_BitwiseOr) METHOD(op_Division) METHOD(op_Equality) METHOD(op_ExclusiveOr) METHOD(op_Explicit) METHOD(op_Inequality) METHOD(op_Multiply) METHOD(op_Subtraction) // Vector METHOD(ConvertToInt32) METHOD(ConvertToInt32WithTruncation) METHOD(ConvertToUInt32) METHOD(ConvertToInt64) METHOD(ConvertToInt64WithTruncation) METHOD(ConvertToUInt64) METHOD(ConvertToSingle) METHOD(ConvertToDouble) METHOD(Narrow) METHOD(Widen) // Vector64, Vector128, Vector256 METHOD(As) METHOD(AsByte) METHOD(AsDouble) METHOD(AsInt16) METHOD(AsInt32) METHOD(AsInt64) METHOD(AsSByte) METHOD(AsSingle) METHOD(AsUInt16) METHOD(AsUInt32) METHOD(AsUInt64) METHOD(AsVector128) METHOD(AsVector2) METHOD(AsVector256) METHOD(AsVector3) METHOD(AsVector4) METHOD(BitwiseAnd) METHOD(BitwiseOr) METHOD(Create) METHOD(CreateScalar) METHOD(CreateScalarUnsafe) METHOD(ConditionalSelect) METHOD(EqualsAll) METHOD(EqualsAny) METHOD(GetElement) METHOD(GetLower) METHOD(GetUpper) METHOD(ToScalar) METHOD(ToVector128) METHOD(ToVector128Unsafe) METHOD(ToVector256) METHOD(ToVector256Unsafe) METHOD(WithElement) METHOD(WithLower) METHOD(WithUpper) // Bmi1 METHOD(AndNot) METHOD(BitFieldExtract) METHOD(ExtractLowestSetBit) METHOD(GetMaskUpToLowestSetBit) METHOD(ResetLowestSetBit) METHOD(TrailingZeroCount) // Bmi2 METHOD(ZeroHighBits) METHOD(MultiplyNoFlags) METHOD(ParallelBitDeposit) METHOD(ParallelBitExtract) // Sse METHOD(Add) METHOD(CompareGreaterThanOrEqual) METHOD(CompareLessThanOrEqual) METHOD(CompareNotEqual) METHOD(CompareNotGreaterThan) METHOD(CompareNotGreaterThanOrEqual) METHOD(CompareNotLessThan) METHOD(CompareNotLessThanOrEqual) METHOD(CompareScalarGreaterThan) METHOD(CompareScalarGreaterThanOrEqual) METHOD(CompareScalarLessThan) METHOD(CompareScalarLessThanOrEqual) METHOD(CompareScalarNotEqual) METHOD(CompareScalarNotGreaterThan) METHOD(CompareScalarNotGreaterThanOrEqual) METHOD(CompareScalarNotLessThan) METHOD(CompareScalarNotLessThanOrEqual) METHOD(CompareScalarOrderedEqual) METHOD(CompareScalarOrderedGreaterThan) METHOD(CompareScalarOrderedGreaterThanOrEqual) METHOD(CompareScalarOrderedLessThan) METHOD(CompareScalarOrderedLessThanOrEqual) METHOD(CompareScalarOrderedNotEqual) METHOD(CompareScalarUnorderedEqual) METHOD(CompareScalarUnorderedGreaterThan) METHOD(CompareScalarUnorderedGreaterThanOrEqual) METHOD(CompareScalarUnorderedLessThan) METHOD(CompareScalarUnorderedLessThanOrEqual) METHOD(CompareScalarUnorderedNotEqual) METHOD(CompareOrdered) METHOD(CompareUnordered) METHOD(CompareScalarOrdered) METHOD(CompareScalarUnordered) METHOD(ConvertScalarToVector128Single) METHOD(Divide) METHOD(DivideScalar) METHOD(Store) METHOD(StoreFence) METHOD(StoreHigh) METHOD(StoreLow) METHOD(Subtract) METHOD(SubtractScalar) METHOD(CompareEqual) METHOD(Extract) METHOD(LoadHigh) METHOD(LoadLow) METHOD(LoadVector128) METHOD(LoadScalarVector128) METHOD(MoveHighToLow) METHOD(MoveLowToHigh) METHOD(MoveMask) METHOD(MoveScalar) METHOD(Multiply) METHOD(MultiplyAddAdjacent) METHOD(MultiplyScalar) METHOD(Shuffle) METHOD(UnpackHigh) METHOD(UnpackLow) METHOD(Prefetch0) METHOD(Prefetch1) METHOD(Prefetch2) METHOD(PrefetchNonTemporal) 
METHOD(Reciprocal) METHOD(ReciprocalScalar) METHOD(ReciprocalSqrt) METHOD(ReciprocalSqrtScalar) METHOD(Sqrt) METHOD(SqrtScalar) // Sse2 METHOD(AddSaturate) METHOD(AddScalar) METHOD(And) METHOD(Average) METHOD(Or) METHOD(LoadAlignedVector128) METHOD(Xor) METHOD(CompareGreaterThan) METHOD(CompareScalarEqual) METHOD(ConvertScalarToVector128Double) METHOD(ConvertScalarToVector128Int32) METHOD(ConvertScalarToVector128Int64) METHOD(ConvertScalarToVector128UInt32) METHOD(ConvertScalarToVector128UInt64) METHOD(ConvertToVector128Double) METHOD(ConvertToVector128Int32) METHOD(ConvertToVector128Int32WithTruncation) METHOD(ConvertToVector128Single) METHOD(MaskMove) METHOD(MultiplyHigh) METHOD(MultiplyLow) METHOD(PackSignedSaturate) METHOD(PackUnsignedSaturate) METHOD(ShuffleHigh) METHOD(ShuffleLow) METHOD(SubtractSaturate) METHOD(SumAbsoluteDifferences) METHOD(StoreScalar) METHOD(StoreAligned) METHOD(StoreAlignedNonTemporal) METHOD(StoreNonTemporal) METHOD(ShiftLeftLogical) METHOD(ShiftLeftLogical128BitLane) METHOD(ShiftRightArithmetic) METHOD(ShiftRightLogical) METHOD(ShiftRightLogical128BitLane) METHOD(CompareLessThan) METHOD(LoadFence) METHOD(MemoryFence) // Sse3 METHOD(HorizontalAdd) METHOD(HorizontalSubtract) METHOD(AddSubtract) METHOD(LoadAndDuplicateToVector128) METHOD(LoadDquVector128) METHOD(MoveAndDuplicate) METHOD(MoveHighAndDuplicate) METHOD(MoveLowAndDuplicate) // Ssse3 METHOD(Abs) // Also used by ARM64 METHOD(AlignRight) METHOD(HorizontalAddSaturate) METHOD(HorizontalSubtractSaturate) METHOD(MultiplyHighRoundScale) METHOD(Sign) // Sse41 METHOD(Blend) METHOD(BlendVariable) METHOD(Ceiling) METHOD(CeilingScalar) METHOD(ConvertToVector128Int16) METHOD(ConvertToVector128Int64) METHOD(Floor) METHOD(FloorScalar) METHOD(Insert) METHOD(LoadAlignedVector128NonTemporal) METHOD(RoundCurrentDirectionScalar) METHOD(RoundToNearestInteger) METHOD(RoundToNearestIntegerScalar) METHOD(RoundToNegativeInfinity) METHOD(RoundToNegativeInfinityScalar) METHOD(RoundToPositiveInfinity) METHOD(RoundToPositiveInfinityScalar) METHOD(RoundToZero) METHOD(RoundToZeroScalar) METHOD(RoundCurrentDirection) METHOD(MinHorizontal) METHOD(TestC) METHOD(TestNotZAndNotC) METHOD(TestZ) METHOD(DotProduct) METHOD(MultipleSumAbsoluteDifferences) // Sse42 METHOD(Crc32) // Aes METHOD(Decrypt) METHOD(DecryptLast) METHOD(Encrypt) METHOD(EncryptLast) METHOD(InverseMixColumns) METHOD(KeygenAssist) METHOD(PolynomialMultiplyWideningLower) METHOD(PolynomialMultiplyWideningUpper) // Pclmulqdq METHOD(CarrylessMultiply) // ArmBase METHOD(LeadingSignCount) METHOD(ReverseElementBits) // Crc32 METHOD(ComputeCrc32) METHOD(ComputeCrc32C) // X86Base METHOD(BitScanForward) METHOD(BitScanReverse) // Crypto METHOD(FixedRotate) METHOD(HashUpdateChoose) METHOD(HashUpdateMajority) METHOD(HashUpdateParity) METHOD(HashUpdate1) METHOD(HashUpdate2) METHOD(ScheduleUpdate0) METHOD(ScheduleUpdate1) METHOD(MixColumns) // AdvSimd METHOD(AbsSaturate) METHOD(AbsSaturateScalar) METHOD(AbsScalar) METHOD(AbsoluteCompareGreaterThan) METHOD(AbsoluteCompareGreaterThanOrEqual) METHOD(AbsoluteCompareGreaterThanOrEqualScalar) METHOD(AbsoluteCompareGreaterThanScalar) METHOD(AbsoluteCompareLessThan) METHOD(AbsoluteCompareLessThanOrEqual) METHOD(AbsoluteCompareLessThanOrEqualScalar) METHOD(AbsoluteCompareLessThanScalar) METHOD(AbsoluteDifference) METHOD(AbsoluteDifferenceAdd) METHOD(AbsoluteDifferenceScalar) METHOD(AbsoluteDifferenceWideningLower) METHOD(AbsoluteDifferenceWideningLowerAndAdd) METHOD(AbsoluteDifferenceWideningUpper) METHOD(AbsoluteDifferenceWideningUpperAndAdd) 
METHOD(AddAcross) METHOD(AddAcrossWidening) METHOD(AddHighNarrowingLower) METHOD(AddHighNarrowingUpper) METHOD(AddPairwise) METHOD(AddPairwiseScalar) METHOD(AddPairwiseWidening) METHOD(AddPairwiseWideningAndAdd) METHOD(AddPairwiseWideningAndAddScalar) METHOD(AddPairwiseWideningScalar) METHOD(AddRoundedHighNarrowingLower) METHOD(AddRoundedHighNarrowingUpper) METHOD(AddSaturateScalar) METHOD(AddWideningLower) METHOD(AddWideningUpper) METHOD(BitwiseClear) METHOD(BitwiseSelect) METHOD(CompareEqualScalar) METHOD(CompareGreaterThanOrEqualScalar) METHOD(CompareGreaterThanScalar) METHOD(CompareLessThanOrEqualScalar) METHOD(CompareLessThanScalar) METHOD(CompareTest) METHOD(CompareTestScalar) METHOD(ConvertToDoubleScalar) METHOD(ConvertToDoubleUpper) METHOD(ConvertToInt32RoundAwayFromZero) METHOD(ConvertToInt32RoundAwayFromZeroScalar) METHOD(ConvertToInt32RoundToEven) METHOD(ConvertToInt32RoundToEvenScalar) METHOD(ConvertToInt32RoundToNegativeInfinity) METHOD(ConvertToInt32RoundToNegativeInfinityScalar) METHOD(ConvertToInt32RoundToPositiveInfinity) METHOD(ConvertToInt32RoundToPositiveInfinityScalar) METHOD(ConvertToInt32RoundToZero) METHOD(ConvertToInt32RoundToZeroScalar) METHOD(ConvertToInt64RoundAwayFromZero) METHOD(ConvertToInt64RoundAwayFromZeroScalar) METHOD(ConvertToInt64RoundToEven) METHOD(ConvertToInt64RoundToEvenScalar) METHOD(ConvertToInt64RoundToNegativeInfinity) METHOD(ConvertToInt64RoundToNegativeInfinityScalar) METHOD(ConvertToInt64RoundToPositiveInfinity) METHOD(ConvertToInt64RoundToPositiveInfinityScalar) METHOD(ConvertToInt64RoundToZero) METHOD(ConvertToInt64RoundToZeroScalar) METHOD(ConvertToSingleLower) METHOD(ConvertToSingleRoundToOddLower) METHOD(ConvertToSingleRoundToOddUpper) METHOD(ConvertToSingleScalar) METHOD(ConvertToSingleUpper) METHOD(ConvertToUInt32RoundAwayFromZero) METHOD(ConvertToUInt32RoundAwayFromZeroScalar) METHOD(ConvertToUInt32RoundToEven) METHOD(ConvertToUInt32RoundToEvenScalar) METHOD(ConvertToUInt32RoundToNegativeInfinity) METHOD(ConvertToUInt32RoundToNegativeInfinityScalar) METHOD(ConvertToUInt32RoundToPositiveInfinity) METHOD(ConvertToUInt32RoundToPositiveInfinityScalar) METHOD(ConvertToUInt32RoundToZero) METHOD(ConvertToUInt32RoundToZeroScalar) METHOD(ConvertToUInt64RoundAwayFromZero) METHOD(ConvertToUInt64RoundAwayFromZeroScalar) METHOD(ConvertToUInt64RoundToEven) METHOD(ConvertToUInt64RoundToEvenScalar) METHOD(ConvertToUInt64RoundToNegativeInfinity) METHOD(ConvertToUInt64RoundToNegativeInfinityScalar) METHOD(ConvertToUInt64RoundToPositiveInfinity) METHOD(ConvertToUInt64RoundToPositiveInfinityScalar) METHOD(ConvertToUInt64RoundToZero) METHOD(ConvertToUInt64RoundToZeroScalar) METHOD(DuplicateSelectedScalarToVector128) METHOD(DuplicateSelectedScalarToVector64) METHOD(DuplicateToVector128) METHOD(DuplicateToVector64) METHOD(ExtractNarrowingLower) METHOD(ExtractNarrowingSaturateLower) METHOD(ExtractNarrowingSaturateScalar) METHOD(ExtractNarrowingSaturateUnsignedLower) METHOD(ExtractNarrowingSaturateUnsignedScalar) METHOD(ExtractNarrowingSaturateUnsignedUpper) METHOD(ExtractNarrowingSaturateUpper) METHOD(ExtractNarrowingUpper) METHOD(ExtractVector128) METHOD(ExtractVector64) METHOD(FusedAddHalving) METHOD(FusedAddRoundedHalving) METHOD(FusedMultiplyAdd) METHOD(FusedMultiplyAddByScalar) METHOD(FusedMultiplyAddBySelectedScalar) METHOD(FusedMultiplyAddNegatedScalar) METHOD(FusedMultiplyAddScalar) METHOD(FusedMultiplyAddScalarBySelectedScalar) METHOD(FusedMultiplySubtract) METHOD(FusedMultiplySubtractByScalar) METHOD(FusedMultiplySubtractBySelectedScalar) 
METHOD(FusedMultiplySubtractNegatedScalar) METHOD(FusedMultiplySubtractScalar) METHOD(FusedMultiplySubtractScalarBySelectedScalar) METHOD(FusedSubtractHalving) METHOD(InsertScalar) METHOD(InsertSelectedScalar) METHOD(LoadAndInsertScalar) METHOD(LoadAndReplicateToVector128) METHOD(LoadAndReplicateToVector64) METHOD(LoadPairScalarVector64) METHOD(LoadPairScalarVector64NonTemporal) METHOD(LoadPairVector128) METHOD(LoadPairVector128NonTemporal) METHOD(LoadPairVector64) METHOD(LoadPairVector64NonTemporal) METHOD(LoadVector64) METHOD(MaxAcross) METHOD(MaxNumber) METHOD(MaxNumberAcross) METHOD(MaxNumberPairwise) METHOD(MaxNumberPairwiseScalar) METHOD(MaxNumberScalar) METHOD(MaxPairwise) METHOD(MaxPairwiseScalar) METHOD(MinAcross) METHOD(MinNumber) METHOD(MinNumberAcross) METHOD(MinNumberPairwise) METHOD(MinNumberPairwiseScalar) METHOD(MinNumberScalar) METHOD(MinPairwise) METHOD(MinPairwiseScalar) METHOD(MultiplyAdd) METHOD(MultiplyAddByScalar) METHOD(MultiplyAddBySelectedScalar) METHOD(MultiplyByScalar) METHOD(MultiplyBySelectedScalar) METHOD(MultiplyBySelectedScalarWideningLower) METHOD(MultiplyBySelectedScalarWideningLowerAndAdd) METHOD(MultiplyBySelectedScalarWideningLowerAndSubtract) METHOD(MultiplyBySelectedScalarWideningUpper) METHOD(MultiplyBySelectedScalarWideningUpperAndAdd) METHOD(MultiplyBySelectedScalarWideningUpperAndSubtract) METHOD(MultiplyDoublingByScalarSaturateHigh) METHOD(MultiplyDoublingBySelectedScalarSaturateHigh) METHOD(MultiplyDoublingSaturateHigh) METHOD(MultiplyDoublingSaturateHighScalar) METHOD(MultiplyDoublingScalarBySelectedScalarSaturateHigh) METHOD(MultiplyDoublingWideningAndAddSaturateScalar) METHOD(MultiplyDoublingWideningAndSubtractSaturateScalar) METHOD(MultiplyDoublingWideningLowerAndAddSaturate) METHOD(MultiplyDoublingWideningLowerAndSubtractSaturate) METHOD(MultiplyDoublingWideningLowerByScalarAndAddSaturate) METHOD(MultiplyDoublingWideningLowerByScalarAndSubtractSaturate) METHOD(MultiplyDoublingWideningLowerBySelectedScalarAndAddSaturate) METHOD(MultiplyDoublingWideningLowerBySelectedScalarAndSubtractSaturate) METHOD(MultiplyDoublingWideningSaturateLower) METHOD(MultiplyDoublingWideningSaturateLowerByScalar) METHOD(MultiplyDoublingWideningSaturateLowerBySelectedScalar) METHOD(MultiplyDoublingWideningSaturateScalar) METHOD(MultiplyDoublingWideningSaturateScalarBySelectedScalar) METHOD(MultiplyDoublingWideningSaturateUpper) METHOD(MultiplyDoublingWideningSaturateUpperByScalar) METHOD(MultiplyDoublingWideningSaturateUpperBySelectedScalar) METHOD(MultiplyDoublingWideningScalarBySelectedScalarAndAddSaturate) METHOD(MultiplyDoublingWideningScalarBySelectedScalarAndSubtractSaturate) METHOD(MultiplyDoublingWideningUpperAndAddSaturate) METHOD(MultiplyDoublingWideningUpperAndSubtractSaturate) METHOD(MultiplyDoublingWideningUpperByScalarAndAddSaturate) METHOD(MultiplyDoublingWideningUpperByScalarAndSubtractSaturate) METHOD(MultiplyDoublingWideningUpperBySelectedScalarAndAddSaturate) METHOD(MultiplyDoublingWideningUpperBySelectedScalarAndSubtractSaturate) METHOD(MultiplyExtended) METHOD(MultiplyExtendedByScalar) METHOD(MultiplyExtendedBySelectedScalar) METHOD(MultiplyExtendedScalar) METHOD(MultiplyExtendedScalarBySelectedScalar) METHOD(MultiplyRoundedDoublingByScalarSaturateHigh) METHOD(MultiplyRoundedDoublingBySelectedScalarSaturateHigh) METHOD(MultiplyRoundedDoublingSaturateHigh) METHOD(MultiplyRoundedDoublingSaturateHighScalar) METHOD(MultiplyRoundedDoublingScalarBySelectedScalarSaturateHigh) METHOD(MultiplyScalarBySelectedScalar) METHOD(MultiplySubtract) 
METHOD(MultiplySubtractByScalar) METHOD(MultiplySubtractBySelectedScalar) METHOD(MultiplyWideningLower) METHOD(MultiplyWideningLowerAndAdd) METHOD(MultiplyWideningLowerAndSubtract) METHOD(MultiplyWideningUpper) METHOD(MultiplyWideningUpperAndAdd) METHOD(MultiplyWideningUpperAndSubtract) METHOD(Negate) METHOD(NegateSaturate) METHOD(NegateSaturateScalar) METHOD(NegateScalar) METHOD(Not) METHOD(OrNot) METHOD(OnesComplement) METHOD(PolynomialMultiply) METHOD(ReciprocalEstimate) METHOD(ReciprocalEstimateScalar) METHOD(ReciprocalExponentScalar) METHOD(ReciprocalSquareRootEstimate) METHOD(ReciprocalSquareRootEstimateScalar) METHOD(ReciprocalSquareRootStep) METHOD(ReciprocalSquareRootStepScalar) METHOD(ReciprocalStep) METHOD(ReciprocalStepScalar) METHOD(ReverseElement16) METHOD(ReverseElement32) METHOD(ReverseElement8) METHOD(RoundAwayFromZero) METHOD(RoundAwayFromZeroScalar) METHOD(RoundToNearest) METHOD(RoundToNearestScalar) METHOD(ShiftArithmetic) METHOD(ShiftArithmeticRounded) METHOD(ShiftArithmeticRoundedSaturate) METHOD(ShiftArithmeticRoundedSaturateScalar) METHOD(ShiftArithmeticRoundedScalar) METHOD(ShiftArithmeticSaturate) METHOD(ShiftArithmeticSaturateScalar) METHOD(ShiftArithmeticScalar) METHOD(ShiftLeftAndInsert) METHOD(ShiftLeftAndInsertScalar) METHOD(ShiftLeftLogicalSaturate) METHOD(ShiftLeftLogicalSaturateScalar) METHOD(ShiftLeftLogicalSaturateUnsigned) METHOD(ShiftLeftLogicalSaturateUnsignedScalar) METHOD(ShiftLeftLogicalScalar) METHOD(ShiftLeftLogicalWideningLower) METHOD(ShiftLeftLogicalWideningUpper) METHOD(ShiftLogical) METHOD(ShiftLogicalRounded) METHOD(ShiftLogicalRoundedSaturate) METHOD(ShiftLogicalRoundedSaturateScalar) METHOD(ShiftLogicalRoundedScalar) METHOD(ShiftLogicalSaturate) METHOD(ShiftLogicalSaturateScalar) METHOD(ShiftLogicalScalar) METHOD(ShiftRightAndInsert) METHOD(ShiftRightAndInsertScalar) METHOD(ShiftRightArithmeticAdd) METHOD(ShiftRightArithmeticAddScalar) METHOD(ShiftRightArithmeticNarrowingSaturateLower) METHOD(ShiftRightArithmeticNarrowingSaturateScalar) METHOD(ShiftRightArithmeticNarrowingSaturateUnsignedLower) METHOD(ShiftRightArithmeticNarrowingSaturateUnsignedScalar) METHOD(ShiftRightArithmeticNarrowingSaturateUnsignedUpper) METHOD(ShiftRightArithmeticNarrowingSaturateUpper) METHOD(ShiftRightArithmeticRounded) METHOD(ShiftRightArithmeticRoundedAdd) METHOD(ShiftRightArithmeticRoundedAddScalar) METHOD(ShiftRightArithmeticRoundedNarrowingSaturateLower) METHOD(ShiftRightArithmeticRoundedNarrowingSaturateScalar) METHOD(ShiftRightArithmeticRoundedNarrowingSaturateUnsignedLower) METHOD(ShiftRightArithmeticRoundedNarrowingSaturateUnsignedScalar) METHOD(ShiftRightArithmeticRoundedNarrowingSaturateUnsignedUpper) METHOD(ShiftRightArithmeticRoundedNarrowingSaturateUpper) METHOD(ShiftRightArithmeticRoundedScalar) METHOD(ShiftRightArithmeticScalar) METHOD(ShiftRightLogicalAdd) METHOD(ShiftRightLogicalAddScalar) METHOD(ShiftRightLogicalNarrowingLower) METHOD(ShiftRightLogicalNarrowingSaturateLower) METHOD(ShiftRightLogicalNarrowingSaturateScalar) METHOD(ShiftRightLogicalNarrowingSaturateUpper) METHOD(ShiftRightLogicalNarrowingUpper) METHOD(ShiftRightLogicalRounded) METHOD(ShiftRightLogicalRoundedAdd) METHOD(ShiftRightLogicalRoundedAddScalar) METHOD(ShiftRightLogicalRoundedNarrowingLower) METHOD(ShiftRightLogicalRoundedNarrowingSaturateLower) METHOD(ShiftRightLogicalRoundedNarrowingSaturateScalar) METHOD(ShiftRightLogicalRoundedNarrowingSaturateUpper) METHOD(ShiftRightLogicalRoundedNarrowingUpper) METHOD(ShiftRightLogicalRoundedScalar) METHOD(ShiftRightLogicalScalar) 
METHOD(SignExtendWideningLower) METHOD(SignExtendWideningUpper) METHOD(StorePair) METHOD(StorePairNonTemporal) METHOD(StorePairScalar) METHOD(StorePairScalarNonTemporal) METHOD(StoreSelectedScalar) METHOD(SubtractHighNarrowingLower) METHOD(SubtractHighNarrowingUpper) METHOD(SubtractRoundedHighNarrowingLower) METHOD(SubtractRoundedHighNarrowingUpper) METHOD(SubtractSaturateScalar) METHOD(SubtractWideningLower) METHOD(SubtractWideningUpper) METHOD(TransposeEven) METHOD(TransposeOdd) METHOD(UnzipEven) METHOD(UnzipOdd) METHOD(VectorTableLookup) METHOD(VectorTableLookupExtension) METHOD(ZeroExtendWideningLower) METHOD(ZeroExtendWideningUpper) METHOD(ZipHigh) METHOD(ZipLow) // Arm.Rdm METHOD(MultiplyRoundedDoublingAndAddSaturateHigh) METHOD(MultiplyRoundedDoublingAndSubtractSaturateHigh) METHOD(MultiplyRoundedDoublingBySelectedScalarAndAddSaturateHigh) METHOD(MultiplyRoundedDoublingBySelectedScalarAndSubtractSaturateHigh) // Arm.Rdm.Arm64 METHOD(MultiplyRoundedDoublingAndAddSaturateHighScalar) METHOD(MultiplyRoundedDoublingAndSubtractSaturateHighScalar) METHOD(MultiplyRoundedDoublingScalarBySelectedScalarAndAddSaturateHigh) METHOD(MultiplyRoundedDoublingScalarBySelectedScalarAndSubtractSaturateHigh) // Arm.Dp METHOD(DotProductBySelectedQuadruplet)
METHOD2(".ctor", ctor) METHOD(CopyTo) METHOD(Equals) METHOD(GreaterThan) METHOD(GreaterThanAll) METHOD(GreaterThanAny) METHOD(GreaterThanOrEqual) METHOD(GreaterThanOrEqualAll) METHOD(GreaterThanOrEqualAny) METHOD(LessThan) METHOD(LessThanAll) METHOD(LessThanAny) METHOD(LessThanOrEqual) METHOD(LessThanOrEqualAll) METHOD(LessThanOrEqualAny) METHOD(Min) METHOD(Max) METHOD(MinScalar) METHOD(MaxScalar) METHOD(PopCount) METHOD(LeadingZeroCount) METHOD(get_Count) METHOD(get_IsHardwareAccelerated) METHOD(get_IsSupported) METHOD(get_AllBitsSet) METHOD(get_Item) METHOD(get_One) METHOD(get_Zero) METHOD(op_Addition) METHOD(op_BitwiseAnd) METHOD(op_BitwiseOr) METHOD(op_Division) METHOD(op_Equality) METHOD(op_ExclusiveOr) METHOD(op_Explicit) METHOD(op_Inequality) METHOD(op_Multiply) METHOD(op_Subtraction) // Vector METHOD(ConvertToInt32) METHOD(ConvertToInt32WithTruncation) METHOD(ConvertToUInt32) METHOD(ConvertToInt64) METHOD(ConvertToInt64WithTruncation) METHOD(ConvertToUInt64) METHOD(ConvertToSingle) METHOD(ConvertToDouble) METHOD(Narrow) METHOD(Widen) // Vector64, Vector128, Vector256 METHOD(As) METHOD(AsByte) METHOD(AsDouble) METHOD(AsInt16) METHOD(AsInt32) METHOD(AsInt64) METHOD(AsSByte) METHOD(AsSingle) METHOD(AsUInt16) METHOD(AsUInt32) METHOD(AsUInt64) METHOD(AsVector128) METHOD(AsVector2) METHOD(AsVector256) METHOD(AsVector3) METHOD(AsVector4) METHOD(BitwiseAnd) METHOD(BitwiseOr) METHOD(Create) METHOD(CreateScalar) METHOD(CreateScalarUnsafe) METHOD(ConditionalSelect) METHOD(EqualsAll) METHOD(EqualsAny) METHOD(GetElement) METHOD(GetLower) METHOD(GetUpper) METHOD(ToScalar) METHOD(ToVector128) METHOD(ToVector128Unsafe) METHOD(ToVector256) METHOD(ToVector256Unsafe) METHOD(WithElement) METHOD(WithLower) METHOD(WithUpper) // Bmi1 METHOD(AndNot) METHOD(BitFieldExtract) METHOD(ExtractLowestSetBit) METHOD(GetMaskUpToLowestSetBit) METHOD(ResetLowestSetBit) METHOD(TrailingZeroCount) // Bmi2 METHOD(ZeroHighBits) METHOD(MultiplyNoFlags) METHOD(ParallelBitDeposit) METHOD(ParallelBitExtract) // Sse METHOD(Add) METHOD(CompareGreaterThanOrEqual) METHOD(CompareLessThanOrEqual) METHOD(CompareNotEqual) METHOD(CompareNotGreaterThan) METHOD(CompareNotGreaterThanOrEqual) METHOD(CompareNotLessThan) METHOD(CompareNotLessThanOrEqual) METHOD(CompareScalarGreaterThan) METHOD(CompareScalarGreaterThanOrEqual) METHOD(CompareScalarLessThan) METHOD(CompareScalarLessThanOrEqual) METHOD(CompareScalarNotEqual) METHOD(CompareScalarNotGreaterThan) METHOD(CompareScalarNotGreaterThanOrEqual) METHOD(CompareScalarNotLessThan) METHOD(CompareScalarNotLessThanOrEqual) METHOD(CompareScalarOrderedEqual) METHOD(CompareScalarOrderedGreaterThan) METHOD(CompareScalarOrderedGreaterThanOrEqual) METHOD(CompareScalarOrderedLessThan) METHOD(CompareScalarOrderedLessThanOrEqual) METHOD(CompareScalarOrderedNotEqual) METHOD(CompareScalarUnorderedEqual) METHOD(CompareScalarUnorderedGreaterThan) METHOD(CompareScalarUnorderedGreaterThanOrEqual) METHOD(CompareScalarUnorderedLessThan) METHOD(CompareScalarUnorderedLessThanOrEqual) METHOD(CompareScalarUnorderedNotEqual) METHOD(CompareOrdered) METHOD(CompareUnordered) METHOD(CompareScalarOrdered) METHOD(CompareScalarUnordered) METHOD(ConvertScalarToVector128Single) METHOD(Divide) METHOD(DivideScalar) METHOD(Store) METHOD(StoreFence) METHOD(StoreHigh) METHOD(StoreLow) METHOD(Subtract) METHOD(SubtractScalar) METHOD(CompareEqual) METHOD(Extract) METHOD(LoadHigh) METHOD(LoadLow) METHOD(LoadVector128) METHOD(LoadScalarVector128) METHOD(MoveHighToLow) METHOD(MoveLowToHigh) METHOD(MoveMask) METHOD(MoveScalar) 
METHOD(Multiply) METHOD(MultiplyAddAdjacent) METHOD(MultiplyScalar) METHOD(Shuffle) METHOD(UnpackHigh) METHOD(UnpackLow) METHOD(Prefetch0) METHOD(Prefetch1) METHOD(Prefetch2) METHOD(PrefetchNonTemporal) METHOD(Reciprocal) METHOD(ReciprocalScalar) METHOD(ReciprocalSqrt) METHOD(ReciprocalSqrtScalar) METHOD(Sqrt) METHOD(SqrtScalar) // Sse2 METHOD(AddSaturate) METHOD(AddScalar) METHOD(And) METHOD(Average) METHOD(Or) METHOD(LoadAlignedVector128) METHOD(Xor) METHOD(CompareGreaterThan) METHOD(CompareScalarEqual) METHOD(ConvertScalarToVector128Double) METHOD(ConvertScalarToVector128Int32) METHOD(ConvertScalarToVector128Int64) METHOD(ConvertScalarToVector128UInt32) METHOD(ConvertScalarToVector128UInt64) METHOD(ConvertToVector128Double) METHOD(ConvertToVector128Int32) METHOD(ConvertToVector128Int32WithTruncation) METHOD(ConvertToVector128Single) METHOD(MaskMove) METHOD(MultiplyHigh) METHOD(MultiplyLow) METHOD(PackSignedSaturate) METHOD(PackUnsignedSaturate) METHOD(ShuffleHigh) METHOD(ShuffleLow) METHOD(SubtractSaturate) METHOD(SumAbsoluteDifferences) METHOD(StoreScalar) METHOD(StoreAligned) METHOD(StoreAlignedNonTemporal) METHOD(StoreNonTemporal) METHOD(ShiftLeftLogical) METHOD(ShiftLeftLogical128BitLane) METHOD(ShiftRightArithmetic) METHOD(ShiftRightLogical) METHOD(ShiftRightLogical128BitLane) METHOD(CompareLessThan) METHOD(LoadFence) METHOD(MemoryFence) // Sse3 METHOD(HorizontalAdd) METHOD(HorizontalSubtract) METHOD(AddSubtract) METHOD(LoadAndDuplicateToVector128) METHOD(LoadDquVector128) METHOD(MoveAndDuplicate) METHOD(MoveHighAndDuplicate) METHOD(MoveLowAndDuplicate) // Ssse3 METHOD(Abs) // Also used by ARM64 METHOD(AlignRight) METHOD(HorizontalAddSaturate) METHOD(HorizontalSubtractSaturate) METHOD(MultiplyHighRoundScale) METHOD(Sign) // Sse41 METHOD(Blend) METHOD(BlendVariable) METHOD(Ceiling) METHOD(CeilingScalar) METHOD(ConvertToVector128Int16) METHOD(ConvertToVector128Int64) METHOD(Floor) METHOD(FloorScalar) METHOD(Insert) METHOD(LoadAlignedVector128NonTemporal) METHOD(RoundCurrentDirectionScalar) METHOD(RoundToNearestInteger) METHOD(RoundToNearestIntegerScalar) METHOD(RoundToNegativeInfinity) METHOD(RoundToNegativeInfinityScalar) METHOD(RoundToPositiveInfinity) METHOD(RoundToPositiveInfinityScalar) METHOD(RoundToZero) METHOD(RoundToZeroScalar) METHOD(RoundCurrentDirection) METHOD(MinHorizontal) METHOD(TestC) METHOD(TestNotZAndNotC) METHOD(TestZ) METHOD(DotProduct) METHOD(MultipleSumAbsoluteDifferences) // Sse42 METHOD(Crc32) // Aes METHOD(Decrypt) METHOD(DecryptLast) METHOD(Encrypt) METHOD(EncryptLast) METHOD(InverseMixColumns) METHOD(KeygenAssist) METHOD(PolynomialMultiplyWideningLower) METHOD(PolynomialMultiplyWideningUpper) // Pclmulqdq METHOD(CarrylessMultiply) // ArmBase METHOD(LeadingSignCount) METHOD(ReverseElementBits) // Crc32 METHOD(ComputeCrc32) METHOD(ComputeCrc32C) // X86Base METHOD(BitScanForward) METHOD(BitScanReverse) // Crypto METHOD(FixedRotate) METHOD(HashUpdateChoose) METHOD(HashUpdateMajority) METHOD(HashUpdateParity) METHOD(HashUpdate1) METHOD(HashUpdate2) METHOD(ScheduleUpdate0) METHOD(ScheduleUpdate1) METHOD(MixColumns) // AdvSimd METHOD(AbsSaturate) METHOD(AbsSaturateScalar) METHOD(AbsScalar) METHOD(AbsoluteCompareGreaterThan) METHOD(AbsoluteCompareGreaterThanOrEqual) METHOD(AbsoluteCompareGreaterThanOrEqualScalar) METHOD(AbsoluteCompareGreaterThanScalar) METHOD(AbsoluteCompareLessThan) METHOD(AbsoluteCompareLessThanOrEqual) METHOD(AbsoluteCompareLessThanOrEqualScalar) METHOD(AbsoluteCompareLessThanScalar) METHOD(AbsoluteDifference) METHOD(AbsoluteDifferenceAdd) 
METHOD(AbsoluteDifferenceScalar) METHOD(AbsoluteDifferenceWideningLower) METHOD(AbsoluteDifferenceWideningLowerAndAdd) METHOD(AbsoluteDifferenceWideningUpper) METHOD(AbsoluteDifferenceWideningUpperAndAdd) METHOD(AddAcross) METHOD(AddAcrossWidening) METHOD(AddHighNarrowingLower) METHOD(AddHighNarrowingUpper) METHOD(AddPairwise) METHOD(AddPairwiseScalar) METHOD(AddPairwiseWidening) METHOD(AddPairwiseWideningAndAdd) METHOD(AddPairwiseWideningAndAddScalar) METHOD(AddPairwiseWideningScalar) METHOD(AddRoundedHighNarrowingLower) METHOD(AddRoundedHighNarrowingUpper) METHOD(AddSaturateScalar) METHOD(AddWideningLower) METHOD(AddWideningUpper) METHOD(BitwiseClear) METHOD(BitwiseSelect) METHOD(CompareEqualScalar) METHOD(CompareGreaterThanOrEqualScalar) METHOD(CompareGreaterThanScalar) METHOD(CompareLessThanOrEqualScalar) METHOD(CompareLessThanScalar) METHOD(CompareTest) METHOD(CompareTestScalar) METHOD(ConvertToDoubleScalar) METHOD(ConvertToDoubleUpper) METHOD(ConvertToInt32RoundAwayFromZero) METHOD(ConvertToInt32RoundAwayFromZeroScalar) METHOD(ConvertToInt32RoundToEven) METHOD(ConvertToInt32RoundToEvenScalar) METHOD(ConvertToInt32RoundToNegativeInfinity) METHOD(ConvertToInt32RoundToNegativeInfinityScalar) METHOD(ConvertToInt32RoundToPositiveInfinity) METHOD(ConvertToInt32RoundToPositiveInfinityScalar) METHOD(ConvertToInt32RoundToZero) METHOD(ConvertToInt32RoundToZeroScalar) METHOD(ConvertToInt64RoundAwayFromZero) METHOD(ConvertToInt64RoundAwayFromZeroScalar) METHOD(ConvertToInt64RoundToEven) METHOD(ConvertToInt64RoundToEvenScalar) METHOD(ConvertToInt64RoundToNegativeInfinity) METHOD(ConvertToInt64RoundToNegativeInfinityScalar) METHOD(ConvertToInt64RoundToPositiveInfinity) METHOD(ConvertToInt64RoundToPositiveInfinityScalar) METHOD(ConvertToInt64RoundToZero) METHOD(ConvertToInt64RoundToZeroScalar) METHOD(ConvertToSingleLower) METHOD(ConvertToSingleRoundToOddLower) METHOD(ConvertToSingleRoundToOddUpper) METHOD(ConvertToSingleScalar) METHOD(ConvertToSingleUpper) METHOD(ConvertToUInt32RoundAwayFromZero) METHOD(ConvertToUInt32RoundAwayFromZeroScalar) METHOD(ConvertToUInt32RoundToEven) METHOD(ConvertToUInt32RoundToEvenScalar) METHOD(ConvertToUInt32RoundToNegativeInfinity) METHOD(ConvertToUInt32RoundToNegativeInfinityScalar) METHOD(ConvertToUInt32RoundToPositiveInfinity) METHOD(ConvertToUInt32RoundToPositiveInfinityScalar) METHOD(ConvertToUInt32RoundToZero) METHOD(ConvertToUInt32RoundToZeroScalar) METHOD(ConvertToUInt64RoundAwayFromZero) METHOD(ConvertToUInt64RoundAwayFromZeroScalar) METHOD(ConvertToUInt64RoundToEven) METHOD(ConvertToUInt64RoundToEvenScalar) METHOD(ConvertToUInt64RoundToNegativeInfinity) METHOD(ConvertToUInt64RoundToNegativeInfinityScalar) METHOD(ConvertToUInt64RoundToPositiveInfinity) METHOD(ConvertToUInt64RoundToPositiveInfinityScalar) METHOD(ConvertToUInt64RoundToZero) METHOD(ConvertToUInt64RoundToZeroScalar) METHOD(DuplicateSelectedScalarToVector128) METHOD(DuplicateSelectedScalarToVector64) METHOD(DuplicateToVector128) METHOD(DuplicateToVector64) METHOD(ExtractNarrowingLower) METHOD(ExtractNarrowingSaturateLower) METHOD(ExtractNarrowingSaturateScalar) METHOD(ExtractNarrowingSaturateUnsignedLower) METHOD(ExtractNarrowingSaturateUnsignedScalar) METHOD(ExtractNarrowingSaturateUnsignedUpper) METHOD(ExtractNarrowingSaturateUpper) METHOD(ExtractNarrowingUpper) METHOD(ExtractVector128) METHOD(ExtractVector64) METHOD(FusedAddHalving) METHOD(FusedAddRoundedHalving) METHOD(FusedMultiplyAdd) METHOD(FusedMultiplyAddByScalar) METHOD(FusedMultiplyAddBySelectedScalar) 
METHOD(FusedMultiplyAddNegatedScalar) METHOD(FusedMultiplyAddScalar) METHOD(FusedMultiplyAddScalarBySelectedScalar) METHOD(FusedMultiplySubtract) METHOD(FusedMultiplySubtractByScalar) METHOD(FusedMultiplySubtractBySelectedScalar) METHOD(FusedMultiplySubtractNegatedScalar) METHOD(FusedMultiplySubtractScalar) METHOD(FusedMultiplySubtractScalarBySelectedScalar) METHOD(FusedSubtractHalving) METHOD(InsertScalar) METHOD(InsertSelectedScalar) METHOD(LoadAndInsertScalar) METHOD(LoadAndReplicateToVector128) METHOD(LoadAndReplicateToVector64) METHOD(LoadPairScalarVector64) METHOD(LoadPairScalarVector64NonTemporal) METHOD(LoadPairVector128) METHOD(LoadPairVector128NonTemporal) METHOD(LoadPairVector64) METHOD(LoadPairVector64NonTemporal) METHOD(LoadVector64) METHOD(MaxAcross) METHOD(MaxNumber) METHOD(MaxNumberAcross) METHOD(MaxNumberPairwise) METHOD(MaxNumberPairwiseScalar) METHOD(MaxNumberScalar) METHOD(MaxPairwise) METHOD(MaxPairwiseScalar) METHOD(MinAcross) METHOD(MinNumber) METHOD(MinNumberAcross) METHOD(MinNumberPairwise) METHOD(MinNumberPairwiseScalar) METHOD(MinNumberScalar) METHOD(MinPairwise) METHOD(MinPairwiseScalar) METHOD(MultiplyAdd) METHOD(MultiplyAddByScalar) METHOD(MultiplyAddBySelectedScalar) METHOD(MultiplyByScalar) METHOD(MultiplyBySelectedScalar) METHOD(MultiplyBySelectedScalarWideningLower) METHOD(MultiplyBySelectedScalarWideningLowerAndAdd) METHOD(MultiplyBySelectedScalarWideningLowerAndSubtract) METHOD(MultiplyBySelectedScalarWideningUpper) METHOD(MultiplyBySelectedScalarWideningUpperAndAdd) METHOD(MultiplyBySelectedScalarWideningUpperAndSubtract) METHOD(MultiplyDoublingByScalarSaturateHigh) METHOD(MultiplyDoublingBySelectedScalarSaturateHigh) METHOD(MultiplyDoublingSaturateHigh) METHOD(MultiplyDoublingSaturateHighScalar) METHOD(MultiplyDoublingScalarBySelectedScalarSaturateHigh) METHOD(MultiplyDoublingWideningAndAddSaturateScalar) METHOD(MultiplyDoublingWideningAndSubtractSaturateScalar) METHOD(MultiplyDoublingWideningLowerAndAddSaturate) METHOD(MultiplyDoublingWideningLowerAndSubtractSaturate) METHOD(MultiplyDoublingWideningLowerByScalarAndAddSaturate) METHOD(MultiplyDoublingWideningLowerByScalarAndSubtractSaturate) METHOD(MultiplyDoublingWideningLowerBySelectedScalarAndAddSaturate) METHOD(MultiplyDoublingWideningLowerBySelectedScalarAndSubtractSaturate) METHOD(MultiplyDoublingWideningSaturateLower) METHOD(MultiplyDoublingWideningSaturateLowerByScalar) METHOD(MultiplyDoublingWideningSaturateLowerBySelectedScalar) METHOD(MultiplyDoublingWideningSaturateScalar) METHOD(MultiplyDoublingWideningSaturateScalarBySelectedScalar) METHOD(MultiplyDoublingWideningSaturateUpper) METHOD(MultiplyDoublingWideningSaturateUpperByScalar) METHOD(MultiplyDoublingWideningSaturateUpperBySelectedScalar) METHOD(MultiplyDoublingWideningScalarBySelectedScalarAndAddSaturate) METHOD(MultiplyDoublingWideningScalarBySelectedScalarAndSubtractSaturate) METHOD(MultiplyDoublingWideningUpperAndAddSaturate) METHOD(MultiplyDoublingWideningUpperAndSubtractSaturate) METHOD(MultiplyDoublingWideningUpperByScalarAndAddSaturate) METHOD(MultiplyDoublingWideningUpperByScalarAndSubtractSaturate) METHOD(MultiplyDoublingWideningUpperBySelectedScalarAndAddSaturate) METHOD(MultiplyDoublingWideningUpperBySelectedScalarAndSubtractSaturate) METHOD(MultiplyExtended) METHOD(MultiplyExtendedByScalar) METHOD(MultiplyExtendedBySelectedScalar) METHOD(MultiplyExtendedScalar) METHOD(MultiplyExtendedScalarBySelectedScalar) METHOD(MultiplyRoundedDoublingByScalarSaturateHigh) METHOD(MultiplyRoundedDoublingBySelectedScalarSaturateHigh) 
METHOD(MultiplyRoundedDoublingSaturateHigh) METHOD(MultiplyRoundedDoublingSaturateHighScalar) METHOD(MultiplyRoundedDoublingScalarBySelectedScalarSaturateHigh) METHOD(MultiplyScalarBySelectedScalar) METHOD(MultiplySubtract) METHOD(MultiplySubtractByScalar) METHOD(MultiplySubtractBySelectedScalar) METHOD(MultiplyWideningLower) METHOD(MultiplyWideningLowerAndAdd) METHOD(MultiplyWideningLowerAndSubtract) METHOD(MultiplyWideningUpper) METHOD(MultiplyWideningUpperAndAdd) METHOD(MultiplyWideningUpperAndSubtract) METHOD(Negate) METHOD(NegateSaturate) METHOD(NegateSaturateScalar) METHOD(NegateScalar) METHOD(Not) METHOD(OrNot) METHOD(OnesComplement) METHOD(PolynomialMultiply) METHOD(ReciprocalEstimate) METHOD(ReciprocalEstimateScalar) METHOD(ReciprocalExponentScalar) METHOD(ReciprocalSquareRootEstimate) METHOD(ReciprocalSquareRootEstimateScalar) METHOD(ReciprocalSquareRootStep) METHOD(ReciprocalSquareRootStepScalar) METHOD(ReciprocalStep) METHOD(ReciprocalStepScalar) METHOD(ReverseElement16) METHOD(ReverseElement32) METHOD(ReverseElement8) METHOD(RoundAwayFromZero) METHOD(RoundAwayFromZeroScalar) METHOD(RoundToNearest) METHOD(RoundToNearestScalar) METHOD(ShiftArithmetic) METHOD(ShiftArithmeticRounded) METHOD(ShiftArithmeticRoundedSaturate) METHOD(ShiftArithmeticRoundedSaturateScalar) METHOD(ShiftArithmeticRoundedScalar) METHOD(ShiftArithmeticSaturate) METHOD(ShiftArithmeticSaturateScalar) METHOD(ShiftArithmeticScalar) METHOD(ShiftLeftAndInsert) METHOD(ShiftLeftAndInsertScalar) METHOD(ShiftLeftLogicalSaturate) METHOD(ShiftLeftLogicalSaturateScalar) METHOD(ShiftLeftLogicalSaturateUnsigned) METHOD(ShiftLeftLogicalSaturateUnsignedScalar) METHOD(ShiftLeftLogicalScalar) METHOD(ShiftLeftLogicalWideningLower) METHOD(ShiftLeftLogicalWideningUpper) METHOD(ShiftLogical) METHOD(ShiftLogicalRounded) METHOD(ShiftLogicalRoundedSaturate) METHOD(ShiftLogicalRoundedSaturateScalar) METHOD(ShiftLogicalRoundedScalar) METHOD(ShiftLogicalSaturate) METHOD(ShiftLogicalSaturateScalar) METHOD(ShiftLogicalScalar) METHOD(ShiftRightAndInsert) METHOD(ShiftRightAndInsertScalar) METHOD(ShiftRightArithmeticAdd) METHOD(ShiftRightArithmeticAddScalar) METHOD(ShiftRightArithmeticNarrowingSaturateLower) METHOD(ShiftRightArithmeticNarrowingSaturateScalar) METHOD(ShiftRightArithmeticNarrowingSaturateUnsignedLower) METHOD(ShiftRightArithmeticNarrowingSaturateUnsignedScalar) METHOD(ShiftRightArithmeticNarrowingSaturateUnsignedUpper) METHOD(ShiftRightArithmeticNarrowingSaturateUpper) METHOD(ShiftRightArithmeticRounded) METHOD(ShiftRightArithmeticRoundedAdd) METHOD(ShiftRightArithmeticRoundedAddScalar) METHOD(ShiftRightArithmeticRoundedNarrowingSaturateLower) METHOD(ShiftRightArithmeticRoundedNarrowingSaturateScalar) METHOD(ShiftRightArithmeticRoundedNarrowingSaturateUnsignedLower) METHOD(ShiftRightArithmeticRoundedNarrowingSaturateUnsignedScalar) METHOD(ShiftRightArithmeticRoundedNarrowingSaturateUnsignedUpper) METHOD(ShiftRightArithmeticRoundedNarrowingSaturateUpper) METHOD(ShiftRightArithmeticRoundedScalar) METHOD(ShiftRightArithmeticScalar) METHOD(ShiftRightLogicalAdd) METHOD(ShiftRightLogicalAddScalar) METHOD(ShiftRightLogicalNarrowingLower) METHOD(ShiftRightLogicalNarrowingSaturateLower) METHOD(ShiftRightLogicalNarrowingSaturateScalar) METHOD(ShiftRightLogicalNarrowingSaturateUpper) METHOD(ShiftRightLogicalNarrowingUpper) METHOD(ShiftRightLogicalRounded) METHOD(ShiftRightLogicalRoundedAdd) METHOD(ShiftRightLogicalRoundedAddScalar) METHOD(ShiftRightLogicalRoundedNarrowingLower) METHOD(ShiftRightLogicalRoundedNarrowingSaturateLower) 
METHOD(ShiftRightLogicalRoundedNarrowingSaturateScalar) METHOD(ShiftRightLogicalRoundedNarrowingSaturateUpper) METHOD(ShiftRightLogicalRoundedNarrowingUpper) METHOD(ShiftRightLogicalRoundedScalar) METHOD(ShiftRightLogicalScalar) METHOD(SignExtendWideningLower) METHOD(SignExtendWideningUpper) METHOD(StorePair) METHOD(StorePairNonTemporal) METHOD(StorePairScalar) METHOD(StorePairScalarNonTemporal) METHOD(StoreSelectedScalar) METHOD(SubtractHighNarrowingLower) METHOD(SubtractHighNarrowingUpper) METHOD(SubtractRoundedHighNarrowingLower) METHOD(SubtractRoundedHighNarrowingUpper) METHOD(SubtractSaturateScalar) METHOD(SubtractWideningLower) METHOD(SubtractWideningUpper) METHOD(TransposeEven) METHOD(TransposeOdd) METHOD(UnzipEven) METHOD(UnzipOdd) METHOD(VectorTableLookup) METHOD(VectorTableLookupExtension) METHOD(ZeroExtendWideningLower) METHOD(ZeroExtendWideningUpper) METHOD(ZipHigh) METHOD(ZipLow) // Arm.Rdm METHOD(MultiplyRoundedDoublingAndAddSaturateHigh) METHOD(MultiplyRoundedDoublingAndSubtractSaturateHigh) METHOD(MultiplyRoundedDoublingBySelectedScalarAndAddSaturateHigh) METHOD(MultiplyRoundedDoublingBySelectedScalarAndSubtractSaturateHigh) // Arm.Rdm.Arm64 METHOD(MultiplyRoundedDoublingAndAddSaturateHighScalar) METHOD(MultiplyRoundedDoublingAndSubtractSaturateHighScalar) METHOD(MultiplyRoundedDoublingScalarBySelectedScalarAndAddSaturateHigh) METHOD(MultiplyRoundedDoublingScalarBySelectedScalarAndSubtractSaturateHigh) // Arm.Dp METHOD(DotProductBySelectedQuadruplet)
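The entries this PR adds to the method table above (GreaterThanAll/GreaterThanAny and the OrEqual/LessThan variants) name reductions of a lane-wise comparison to a single boolean: the All form is true only when every lane satisfies the predicate, the Any form when at least one lane does. A scalar C model of those semantics over four int32 lanes, illustrative only: the JIT lowers these to a vector compare plus an across-lane reduction, not a scalar loop.

#include <stdbool.h>
#include <stdio.h>

/* Scalar model of GreaterThanAll: every lane of a must exceed its lane of b. */
static bool greater_than_all (const int a[4], const int b[4])
{
	for (int i = 0; i < 4; i++)
		if (!(a[i] > b[i]))
			return false;
	return true;
}

/* Scalar model of GreaterThanAny: at least one lane of a exceeds its lane of b. */
static bool greater_than_any (const int a[4], const int b[4])
{
	for (int i = 0; i < 4; i++)
		if (a[i] > b[i])
			return true;
	return false;
}

int main (void)
{
	int a[4] = { 5, 6, 7, 8 };
	int b[4] = { 1, 2, 3, 9 };
	/* Lane 3 fails the predicate, so all=0; lanes 0-2 satisfy it, so any=1. */
	printf ("all=%d any=%d\n", greater_than_all (a, b), greater_than_any (a, b));
	return 0;
}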
1
dotnet/runtime
65,889
[Mono] Add SIMD intrinsics for Vector64/128 "All" and "Any" variants of GT/GE/LT/LE
Related to #64072 - `GreaterThanAll` - `GreaterThanAny` - `GreaterThanOrEqualAll` - `GreaterThanOrEqualAny` - `LessThanAll` - `LessThanAny` - `LessThanOrEqualAll` - `LessThanOrEqualAny`
simonrozsival
"2022-02-25T12:07:00Z"
"2022-03-09T14:24:14Z"
e32f3b61cd41e6a97ebe8f512ff673b63ff40640
cbcc616cf386b88e49f97f74182ffff241528179
[Mono] Add SIMD intrinsics for Vector64/128 "All" and "Any" variants of GT/GE/LT/LE. Related to #64072 - `GreaterThanAll` - `GreaterThanAny` - `GreaterThanOrEqualAll` - `GreaterThanOrEqualAny` - `LessThanAll` - `LessThanAny` - `LessThanOrEqualAll` - `LessThanOrEqualAny`
./src/tests/BuildWasmApps/testassets/native-libs/variadic.c
#include <stdarg.h> int sum(int n, ...) { int result = 0; va_list ptr; va_start(ptr, n); for (int i = 0; i < n; i++) result += va_arg(ptr, int); va_end(ptr); return result; }
#include <stdarg.h> int sum(int n, ...) { int result = 0; va_list ptr; va_start(ptr, n); for (int i = 0; i < n; i++) result += va_arg(ptr, int); va_end(ptr); return result; }
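This test asset is untouched by the PR, which is why the before and after snapshots are identical. For reference, its variadic sum walks the n trailing arguments with va_start/va_arg/va_end; the driver below is a hypothetical caller, not part of the asset.

#include <stdarg.h>
#include <stdio.h>

/* Same helper as in variadic.c: adds the n trailing int arguments. */
int sum(int n, ...)
{
	int result = 0;
	va_list ptr;
	va_start(ptr, n);
	for (int i = 0; i < n; i++)
		result += va_arg(ptr, int);
	va_end(ptr);
	return result;
}

int main(void)
{
	printf("%d\n", sum(3, 1, 2, 3)); /* prints 6 */
	return 0;
}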
-1
dotnet/runtime
65,889
[Mono] Add SIMD intrinsics for Vector64/128 "All" and "Any" variants of GT/GE/LT/LE
Related to #64072 - `GreaterThanAll` - `GreaterThanAny` - `GreaterThanOrEqualAll` - `GreaterThanOrEqualAny` - `LessThanAll` - `LessThanAny` - `LessThanOrEqualAll` - `LessThanOrEqualAny`
simonrozsival
"2022-02-25T12:07:00Z"
"2022-03-09T14:24:14Z"
e32f3b61cd41e6a97ebe8f512ff673b63ff40640
cbcc616cf386b88e49f97f74182ffff241528179
[Mono] Add SIMD intrinsics for Vector64/128 "All" and "Any" variants of GT/GE/LT/LE. Related to #64072 - `GreaterThanAll` - `GreaterThanAny` - `GreaterThanOrEqualAll` - `GreaterThanOrEqualAny` - `LessThanAll` - `LessThanAny` - `LessThanOrEqualAll` - `LessThanOrEqualAny`
./src/coreclr/pal/inc/rt/oaidl.h
// Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. // // // =========================================================================== // File: oaidl.h // // =========================================================================== #ifndef __OAIDL_H__ #define __OAIDL_H__ #include "rpc.h" #include "rpcndr.h" #include "unknwn.h" typedef struct tagEXCEPINFO { WORD wCode; WORD wReserved; BSTR bstrSource; BSTR bstrDescription; BSTR bstrHelpFile; DWORD dwHelpContext; PVOID pvReserved; HRESULT (__stdcall *pfnDeferredFillIn)(struct tagEXCEPINFO *); SCODE scode; } EXCEPINFO, * LPEXCEPINFO; typedef interface IErrorInfo IErrorInfo; typedef /* [unique] */ IErrorInfo *LPERRORINFO; EXTERN_C const IID IID_IErrorInfo; interface IErrorInfo : public IUnknown { public: virtual HRESULT STDMETHODCALLTYPE GetGUID( /* [out] */ GUID *pGUID) = 0; virtual HRESULT STDMETHODCALLTYPE GetSource( /* [out] */ BSTR *pBstrSource) = 0; virtual HRESULT STDMETHODCALLTYPE GetDescription( /* [out] */ BSTR *pBstrDescription) = 0; virtual HRESULT STDMETHODCALLTYPE GetHelpFile( /* [out] */ BSTR *pBstrHelpFile) = 0; virtual HRESULT STDMETHODCALLTYPE GetHelpContext( /* [out] */ DWORD *pdwHelpContext) = 0; }; typedef interface ICreateErrorInfo ICreateErrorInfo; EXTERN_C const IID IID_ICreateErrorInfo; typedef /* [unique] */ ICreateErrorInfo *LPCREATEERRORINFO; interface ICreateErrorInfo : public IUnknown { public: virtual HRESULT STDMETHODCALLTYPE SetGUID( /* [in] */ REFGUID rguid) = 0; virtual HRESULT STDMETHODCALLTYPE SetSource( /* [in] */ LPOLESTR szSource) = 0; virtual HRESULT STDMETHODCALLTYPE SetDescription( /* [in] */ LPOLESTR szDescription) = 0; virtual HRESULT STDMETHODCALLTYPE SetHelpFile( /* [in] */ LPOLESTR szHelpFile) = 0; virtual HRESULT STDMETHODCALLTYPE SetHelpContext( /* [in] */ DWORD dwHelpContext) = 0; }; STDAPI SetErrorInfo(ULONG dwReserved, IErrorInfo FAR* perrinfo); STDAPI GetErrorInfo(ULONG dwReserved, IErrorInfo FAR* FAR* pperrinfo); STDAPI CreateErrorInfo(ICreateErrorInfo FAR* FAR* pperrinfo); typedef interface ISupportErrorInfo ISupportErrorInfo; typedef /* [unique] */ ISupportErrorInfo *LPSUPPORTERRORINFO; EXTERN_C const IID IID_ISupportErrorInfo; interface ISupportErrorInfo : public IUnknown { public: virtual HRESULT STDMETHODCALLTYPE InterfaceSupportsErrorInfo( /* [in] */ REFIID riid) = 0; }; #endif //__OAIDL_H__
// Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. // // // =========================================================================== // File: oaidl.h // // =========================================================================== #ifndef __OAIDL_H__ #define __OAIDL_H__ #include "rpc.h" #include "rpcndr.h" #include "unknwn.h" typedef struct tagEXCEPINFO { WORD wCode; WORD wReserved; BSTR bstrSource; BSTR bstrDescription; BSTR bstrHelpFile; DWORD dwHelpContext; PVOID pvReserved; HRESULT (__stdcall *pfnDeferredFillIn)(struct tagEXCEPINFO *); SCODE scode; } EXCEPINFO, * LPEXCEPINFO; typedef interface IErrorInfo IErrorInfo; typedef /* [unique] */ IErrorInfo *LPERRORINFO; EXTERN_C const IID IID_IErrorInfo; interface IErrorInfo : public IUnknown { public: virtual HRESULT STDMETHODCALLTYPE GetGUID( /* [out] */ GUID *pGUID) = 0; virtual HRESULT STDMETHODCALLTYPE GetSource( /* [out] */ BSTR *pBstrSource) = 0; virtual HRESULT STDMETHODCALLTYPE GetDescription( /* [out] */ BSTR *pBstrDescription) = 0; virtual HRESULT STDMETHODCALLTYPE GetHelpFile( /* [out] */ BSTR *pBstrHelpFile) = 0; virtual HRESULT STDMETHODCALLTYPE GetHelpContext( /* [out] */ DWORD *pdwHelpContext) = 0; }; typedef interface ICreateErrorInfo ICreateErrorInfo; EXTERN_C const IID IID_ICreateErrorInfo; typedef /* [unique] */ ICreateErrorInfo *LPCREATEERRORINFO; interface ICreateErrorInfo : public IUnknown { public: virtual HRESULT STDMETHODCALLTYPE SetGUID( /* [in] */ REFGUID rguid) = 0; virtual HRESULT STDMETHODCALLTYPE SetSource( /* [in] */ LPOLESTR szSource) = 0; virtual HRESULT STDMETHODCALLTYPE SetDescription( /* [in] */ LPOLESTR szDescription) = 0; virtual HRESULT STDMETHODCALLTYPE SetHelpFile( /* [in] */ LPOLESTR szHelpFile) = 0; virtual HRESULT STDMETHODCALLTYPE SetHelpContext( /* [in] */ DWORD dwHelpContext) = 0; }; STDAPI SetErrorInfo(ULONG dwReserved, IErrorInfo FAR* perrinfo); STDAPI GetErrorInfo(ULONG dwReserved, IErrorInfo FAR* FAR* pperrinfo); STDAPI CreateErrorInfo(ICreateErrorInfo FAR* FAR* pperrinfo); typedef interface ISupportErrorInfo ISupportErrorInfo; typedef /* [unique] */ ISupportErrorInfo *LPSUPPORTERRORINFO; EXTERN_C const IID IID_ISupportErrorInfo; interface ISupportErrorInfo : public IUnknown { public: virtual HRESULT STDMETHODCALLTYPE InterfaceSupportsErrorInfo( /* [in] */ REFIID riid) = 0; }; #endif //__OAIDL_H__
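One non-obvious convention in the EXCEPINFO struct above is deferred fill-in: a thrower may leave the string fields empty and supply pfnDeferredFillIn, which the consumer invokes to populate them lazily before reading. A simplified sketch of that pattern, using hypothetical stand-in types so it compiles outside the PAL headers:

#include <stdio.h>
#include <wchar.h>

typedef long HRESULT;                 /* stand-in for the PAL's HRESULT */

struct excepinfo_min {                /* trimmed-down tagEXCEPINFO; real struct uses BSTR */
	unsigned short wCode;
	const wchar_t *bstrDescription;
	HRESULT (*pfnDeferredFillIn)(struct excepinfo_min *);
};

/* Thrower-side callback: fills the expensive string fields only when asked. */
static HRESULT fill_in (struct excepinfo_min *p)
{
	p->bstrDescription = L"deferred description";
	return 0; /* S_OK */
}

int main (void)
{
	struct excepinfo_min ei = { 1001, NULL, fill_in };

	/* Consumer side: trigger the lazy fill-in before reading the strings. */
	if (ei.bstrDescription == NULL && ei.pfnDeferredFillIn != NULL)
		ei.pfnDeferredFillIn (&ei);

	wprintf (L"code=%u desc=%ls\n", (unsigned) ei.wCode, ei.bstrDescription);
	return 0;
}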
-1
dotnet/runtime
65,889
[Mono] Add SIMD intrinsics for Vector64/128 "All" and "Any" variants of GT/GE/LT/LE
Related to #64072 - `GreaterThanAll` - `GreaterThanAny` - `GreaterThanOrEqualAll` - `GreaterThanOrEqualAny` - `LessThanAll` - `LessThanAny` - `LessThanOrEqualAll` - `LessThanOrEqualAny`
simonrozsival
"2022-02-25T12:07:00Z"
"2022-03-09T14:24:14Z"
e32f3b61cd41e6a97ebe8f512ff673b63ff40640
cbcc616cf386b88e49f97f74182ffff241528179
[Mono] Add SIMD intrinsics for Vector64/128 "All" and "Any" variants of GT/GE/LT/LE. Related to #64072 - `GreaterThanAll` - `GreaterThanAny` - `GreaterThanOrEqualAll` - `GreaterThanOrEqualAny` - `LessThanAll` - `LessThanAny` - `LessThanOrEqualAll` - `LessThanOrEqualAny`
./src/tests/profiler/native/rejitprofiler/sigparse.h
// Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. #ifndef __PROFILER_SIGNATURE_PARSER__ #define __PROFILER_SIGNATURE_PARSER__ /* Sig ::= MethodDefSig | MethodRefSig | StandAloneMethodSig | FieldSig | PropertySig | LocalVarSig MethodDefSig ::= [[HASTHIS] [EXPLICITTHIS]] (DEFAULT|VARARG|GENERIC GenParamCount) ParamCount RetType Param* MethodRefSig ::= [[HASTHIS] [EXPLICITTHIS]] VARARG ParamCount RetType Param* [SENTINEL Param+] StandAloneMethodSig ::= [[HASTHIS] [EXPLICITTHIS]] (DEFAULT|VARARG|C|STDCALL|THISCALL|FASTCALL) ParamCount RetType Param* [SENTINEL Param+] FieldSig ::= FIELD CustomMod* Type PropertySig ::= PROPERTY [HASTHIS] ParamCount CustomMod* Type Param* LocalVarSig ::= LOCAL_SIG Count (TYPEDBYREF | ([CustomMod] [Constraint])* [BYREF] Type)+ ------------- CustomMod ::= ( CMOD_OPT | CMOD_REQD ) ( TypeDefEncoded | TypeRefEncoded ) Constraint ::= #define ELEMENT_TYPE_PINNED Param ::= CustomMod* ( TYPEDBYREF | [BYREF] Type ) RetType ::= CustomMod* ( VOID | TYPEDBYREF | [BYREF] Type ) Type ::= ( BOOLEAN | CHAR | I1 | U1 | U2 | U2 | I4 | U4 | I8 | U8 | R4 | R8 | I | U | | VALUETYPE TypeDefOrRefEncoded | CLASS TypeDefOrRefEncoded | STRING | OBJECT | PTR CustomMod* VOID | PTR CustomMod* Type | FNPTR MethodDefSig | FNPTR MethodRefSig | ARRAY Type ArrayShape | SZARRAY CustomMod* Type | GENERICINST (CLASS | VALUETYPE) TypeDefOrRefEncoded GenArgCount Type* | VAR Number | MVAR Number ArrayShape ::= Rank NumSizes Size* NumLoBounds LoBound* TypeDefOrRefEncoded ::= TypeDefEncoded | TypeRefEncoded TypeDefEncoded ::= 32-bit-3-part-encoding-for-typedefs-and-typerefs TypeRefEncoded ::= 32-bit-3-part-encoding-for-typedefs-and-typerefs ParamCount ::= 29-bit-encoded-integer GenArgCount ::= 29-bit-encoded-integer Count ::= 29-bit-encoded-integer Rank ::= 29-bit-encoded-integer NumSizes ::= 29-bit-encoded-integer Size ::= 29-bit-encoded-integer NumLoBounds ::= 29-bit-encoded-integer LoBounds ::= 29-bit-encoded-integer Number ::= 29-bit-encoded-integer */ #define ELEMENT_TYPE_END 0x00 //Marks end of a list #define ELEMENT_TYPE_VOID 0x01 #define ELEMENT_TYPE_BOOLEAN 0x02 #define ELEMENT_TYPE_CHAR 0x03 #define ELEMENT_TYPE_I1 0x04 #define ELEMENT_TYPE_U1 0x05 #define ELEMENT_TYPE_I2 0x06 #define ELEMENT_TYPE_U2 0x07 #define ELEMENT_TYPE_I4 0x08 #define ELEMENT_TYPE_U4 0x09 #define ELEMENT_TYPE_I8 0x0a #define ELEMENT_TYPE_U8 0x0b #define ELEMENT_TYPE_R4 0x0c #define ELEMENT_TYPE_R8 0x0d #define ELEMENT_TYPE_STRING 0x0e #define ELEMENT_TYPE_PTR 0x0f // Followed by type #define ELEMENT_TYPE_BYREF 0x10 // Followed by type #define ELEMENT_TYPE_VALUETYPE 0x11 // Followed by TypeDef or TypeRef token #define ELEMENT_TYPE_CLASS 0x12 // Followed by TypeDef or TypeRef token #define ELEMENT_TYPE_VAR 0x13 // Generic parameter in a generic type definition, represented as number #define ELEMENT_TYPE_ARRAY 0x14 // type rank boundsCount bound1 ... loCount lo1 ... #define ELEMENT_TYPE_GENERICINST 0x15 // Generic type instantiation. Followed by type type-arg-count type-1 ... 
type-n #define ELEMENT_TYPE_TYPEDBYREF 0x16 #define ELEMENT_TYPE_I 0x18 // System.IntPtr #define ELEMENT_TYPE_U 0x19 // System.UIntPtr #define ELEMENT_TYPE_FNPTR 0x1b // Followed by full method signature #define ELEMENT_TYPE_OBJECT 0x1c // System.Object #define ELEMENT_TYPE_SZARRAY 0x1d // Single-dim array with 0 lower bound #define ELEMENT_TYPE_MVAR 0x1e // Generic parameter in a generic method definition,represented as number #define ELEMENT_TYPE_CMOD_REQD 0x1f // Required modifier : followed by a TypeDef or TypeRef token #define ELEMENT_TYPE_CMOD_OPT 0x20 // Optional modifier : followed by a TypeDef or TypeRef token #define ELEMENT_TYPE_INTERNAL 0x21 // Implemented within the CLI #define ELEMENT_TYPE_MODIFIER 0x40 // Or'd with following element types #define ELEMENT_TYPE_SENTINEL 0x41 // Sentinel for vararg method signature #define ELEMENT_TYPE_PINNED 0x45 // Denotes a local variable that points at a pinned object #define SIG_METHOD_DEFAULT 0x00 // default calling convention #define SIG_METHOD_C 0x01 // C calling convention #define SIG_METHOD_STDCALL 0x02 // Stdcall calling convention #define SIG_METHOD_THISCALL 0x03 // thiscall calling convention #define SIG_METHOD_FASTCALL 0x04 // fastcall calling convention #define SIG_METHOD_VARARG 0x05 // vararg calling convention #define SIG_FIELD 0x06 // encodes a field #define SIG_LOCAL_SIG 0x07 // used for the .locals directive #define SIG_PROPERTY 0x08 // used to encode a property #define SIG_GENERIC 0x10 // used to indicate that the method has one or more generic parameters. #define SIG_HASTHIS 0x20 // used to encode the keyword instance in the calling convention #define SIG_EXPLICITTHIS 0x40 // used to encode the keyword explicit in the calling convention #define SIG_INDEX_TYPE_TYPEDEF 0x00 // ParseTypeDefOrRefEncoded returns this as the out index type for typedefs #define SIG_INDEX_TYPE_TYPEREF 0x01 // ParseTypeDefOrRefEncoded returns this as the out index type for typerefs #define SIG_INDEX_TYPE_TYPESPEC 0x02 // ParseTypeDefOrRefEncoded returns this as the out index type for typespecs typedef unsigned char sig_byte; typedef unsigned char sig_elem_type; typedef unsigned char sig_index_type; typedef unsigned int sig_index; typedef unsigned int sig_count; typedef unsigned int sig_mem_number; class SigParser { private: sig_byte *pbBase; sig_byte *pbCur; sig_byte *pbEnd; public: bool Parse(sig_byte *blob, sig_count len); private: bool ParseByte(sig_byte *pbOut); bool ParseNumber(sig_count *pOut); bool ParseTypeDefOrRefEncoded(sig_index_type *pOutIndexType, sig_index *pOutIndex); bool ParseMethod(sig_elem_type); bool ParseField(sig_elem_type); bool ParseProperty(sig_elem_type); bool ParseLocals(sig_elem_type); bool ParseLocal(); bool ParseOptionalCustomMods(); bool ParseOptionalCustomModsOrConstraint(); bool ParseCustomMod(); bool ParseRetType(); bool ParseType(); bool ParseParam(); bool ParseArrayShape(); protected: // subtype these methods to create your parser side-effects //---------------------------------------------------- // a method with given elem_type virtual void NotifyBeginMethod(sig_elem_type elem_type) = 0; virtual void NotifyEndMethod() = 0; // the method has a this pointer virtual void NotifyHasThis() = 0; // total parameters for the method virtual void NotifyParamCount(sig_count) = 0; // starting a return type virtual void NotifyBeginRetType() = 0; virtual void NotifyEndRetType() = 0; // starting a parameter virtual void NotifyBeginParam() = 0; virtual void NotifyEndParam() = 0; // sentinel indication the location of the "..." 
in the method signature virtual void NotifySentinal() = 0; // number of generic parameters in this method signature (if any) virtual void NotifyGenericParamCount(sig_count) = 0; //---------------------------------------------------- // a field with given elem_type virtual void NotifyBeginField(sig_elem_type elem_type) = 0; virtual void NotifyEndField() = 0; //---------------------------------------------------- // a block of locals with given elem_type (always just LOCAL_SIG for now) virtual void NotifyBeginLocals(sig_elem_type elem_type) = 0; virtual void NotifyEndLocals() = 0; // count of locals with a block virtual void NotifyLocalsCount(sig_count) = 0; // starting a new local within a local block virtual void NotifyBeginLocal() = 0; virtual void NotifyEndLocal() = 0; // the only constraint available to locals at the moment is ELEMENT_TYPE_PINNED virtual void NotifyConstraint(sig_elem_type elem_type) = 0; //---------------------------------------------------- // a property with given element type virtual void NotifyBeginProperty(sig_elem_type elem_type) = 0; virtual void NotifyEndProperty() = 0; //---------------------------------------------------- // starting array shape information for array types virtual void NotifyBeginArrayShape() = 0; virtual void NotifyEndArrayShape() = 0; // array rank (total number of dimensions) virtual void NotifyRank(sig_count) = 0; // number of dimensions with specified sizes followed by the size of each virtual void NotifyNumSizes(sig_count) = 0; virtual void NotifySize(sig_count) = 0; // BUG BUG lower bounds can be negative, how can this be encoded? // number of dimensions with specified lower bounds followed by lower bound of each virtual void NotifyNumLoBounds(sig_count) = 0; virtual void NotifyLoBound(sig_count) = 0; //---------------------------------------------------- // starting a normal type (occurs in many contexts such as param, field, local, etc) virtual void NotifyBeginType() = 0; virtual void NotifyEndType() = 0; virtual void NotifyTypedByref() = 0; // the type has the 'byref' modifier on it -- this normally proceeds the type definition in the context // the type is used, so for instance a parameter might have the byref modifier on it // so this happens before the BeginType in that context virtual void NotifyByref() = 0; // the type is "VOID" (this has limited uses, function returns and void pointer) virtual void NotifyVoid() = 0; // the type has the indicated custom modifiers (which can be optional or required) virtual void NotifyCustomMod(sig_elem_type cmod, sig_index_type indexType, sig_index index) = 0; // the type is a simple type, the elem_type defines it fully virtual void NotifyTypeSimple(sig_elem_type elem_type) = 0; // the type is specified by the given index of the given index type (normally a type index in the type metadata) // this callback is normally qualified by other ones such as NotifyTypeClass or NotifyTypeValueType virtual void NotifyTypeDefOrRef(sig_index_type indexType, int index) = 0; // the type is an instance of a generic // elem_type indicates value_type or class // indexType and index indicate the metadata for the type in question // number indicates the number of type specifications for the generic types that will follow virtual void NotifyTypeGenericInst(sig_elem_type elem_type, sig_index_type indexType, sig_index index, sig_mem_number number) = 0; // the type is the type of the nth generic type parameter for the class virtual void NotifyTypeGenericTypeVariable(sig_mem_number number) = 0; // the type is the type 
of the nth generic type parameter for the member virtual void NotifyTypeGenericMemberVariable(sig_mem_number number) = 0; // the type will be a value type virtual void NotifyTypeValueType() = 0; // the type will be a class virtual void NotifyTypeClass() = 0; // the type is a pointer to a type (nested type notifications follow) virtual void NotifyTypePointer() = 0; // the type is a function pointer, followed by the type of the function virtual void NotifyTypeFunctionPointer() = 0; // the type is an array, this is followed by the array shape, see above, as well as modifiers and element type virtual void NotifyTypeArray() = 0; // the type is a simple zero-based array, this has no shape but does have custom modifiers and element type virtual void NotifyTypeSzArray() = 0; }; //---------------------------------------------------- #endif // __PROFILER_SIGNATURE_PARSER__
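The 29-bit-encoded-integer productions in the grammar comment above are ECMA-335 (II.23.2) compressed unsigned integers, which ParseNumber and ParseTypeDefOrRefEncoded must decode from the signature blob. A self-contained sketch of that 1/2/4-byte scheme; this decoder is mine, not the file's:

#include <stdio.h>

typedef unsigned char sig_byte;
typedef unsigned int  sig_count;

/* Decode an ECMA-335 compressed unsigned integer.
   Returns the number of bytes consumed, or 0 on malformed/short input. */
static int decode_compressed_uint (const sig_byte *p, const sig_byte *end, sig_count *out)
{
	if (p >= end)
		return 0;
	if ((p[0] & 0x80) == 0) {            /* 1 byte:  0xxxxxxx            */
		*out = p[0];
		return 1;
	}
	if ((p[0] & 0xC0) == 0x80) {         /* 2 bytes: 10xxxxxx + 8 bits   */
		if (end - p < 2) return 0;
		*out = ((sig_count)(p[0] & 0x3F) << 8) | p[1];
		return 2;
	}
	if ((p[0] & 0xE0) == 0xC0) {         /* 4 bytes: 110xxxxx + 24 bits  */
		if (end - p < 4) return 0;
		*out = ((sig_count)(p[0] & 0x1F) << 24) | ((sig_count)p[1] << 16)
		     | ((sig_count)p[2] << 8) | p[3];
		return 4;
	}
	return 0;                            /* 111xxxxx is not a valid prefix */
}

int main (void)
{
	sig_byte one[] = { 0x03 };           /* decodes to 3      */
	sig_byte two[] = { 0xBF, 0xFF };     /* decodes to 0x3FFF */
	sig_count v;
	decode_compressed_uint (one, one + 1, &v); printf ("%u\n", v);
	decode_compressed_uint (two, two + 2, &v); printf ("%u\n", v);
	return 0;
}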
// Licensed to the .NET Foundation under one or more agreements.
// The .NET Foundation licenses this file to you under the MIT license.

#ifndef __PROFILER_SIGNATURE_PARSER__
#define __PROFILER_SIGNATURE_PARSER__

/*

Sig ::= MethodDefSig | MethodRefSig | StandAloneMethodSig | FieldSig | PropertySig | LocalVarSig

MethodDefSig ::= [[HASTHIS] [EXPLICITTHIS]] (DEFAULT|VARARG|GENERIC GenParamCount) ParamCount RetType Param*

MethodRefSig ::= [[HASTHIS] [EXPLICITTHIS]] VARARG ParamCount RetType Param* [SENTINEL Param+]

StandAloneMethodSig ::= [[HASTHIS] [EXPLICITTHIS]] (DEFAULT|VARARG|C|STDCALL|THISCALL|FASTCALL)
                        ParamCount RetType Param* [SENTINEL Param+]

FieldSig ::= FIELD CustomMod* Type

PropertySig ::= PROPERTY [HASTHIS] ParamCount CustomMod* Type Param*

LocalVarSig ::= LOCAL_SIG Count (TYPEDBYREF | ([CustomMod] [Constraint])* [BYREF] Type)+

-------------

CustomMod ::= ( CMOD_OPT | CMOD_REQD ) ( TypeDefEncoded | TypeRefEncoded )

Constraint ::= #define ELEMENT_TYPE_PINNED

Param ::= CustomMod* ( TYPEDBYREF | [BYREF] Type )

RetType ::= CustomMod* ( VOID | TYPEDBYREF | [BYREF] Type )

Type ::= ( BOOLEAN | CHAR | I1 | U1 | U2 | U2 | I4 | U4 | I8 | U8 | R4 | R8 | I | U |
         | VALUETYPE TypeDefOrRefEncoded
         | CLASS TypeDefOrRefEncoded
         | STRING
         | OBJECT
         | PTR CustomMod* VOID
         | PTR CustomMod* Type
         | FNPTR MethodDefSig
         | FNPTR MethodRefSig
         | ARRAY Type ArrayShape
         | SZARRAY CustomMod* Type
         | GENERICINST (CLASS | VALUETYPE) TypeDefOrRefEncoded GenArgCount Type*
         | VAR Number
         | MVAR Number

ArrayShape ::= Rank NumSizes Size* NumLoBounds LoBound*

TypeDefOrRefEncoded ::= TypeDefEncoded | TypeRefEncoded
TypeDefEncoded ::= 32-bit-3-part-encoding-for-typedefs-and-typerefs
TypeRefEncoded ::= 32-bit-3-part-encoding-for-typedefs-and-typerefs

ParamCount ::= 29-bit-encoded-integer
GenArgCount ::= 29-bit-encoded-integer
Count ::= 29-bit-encoded-integer
Rank ::= 29-bit-encoded-integer
NumSizes ::= 29-bit-encoded-integer
Size ::= 29-bit-encoded-integer
NumLoBounds ::= 29-bit-encoded-integer
LoBounds ::= 29-bit-encoded-integer
Number ::= 29-bit-encoded-integer

*/

#define ELEMENT_TYPE_END         0x00 // Marks end of a list
#define ELEMENT_TYPE_VOID        0x01
#define ELEMENT_TYPE_BOOLEAN     0x02
#define ELEMENT_TYPE_CHAR        0x03
#define ELEMENT_TYPE_I1          0x04
#define ELEMENT_TYPE_U1          0x05
#define ELEMENT_TYPE_I2          0x06
#define ELEMENT_TYPE_U2          0x07
#define ELEMENT_TYPE_I4          0x08
#define ELEMENT_TYPE_U4          0x09
#define ELEMENT_TYPE_I8          0x0a
#define ELEMENT_TYPE_U8          0x0b
#define ELEMENT_TYPE_R4          0x0c
#define ELEMENT_TYPE_R8          0x0d
#define ELEMENT_TYPE_STRING      0x0e
#define ELEMENT_TYPE_PTR         0x0f // Followed by type
#define ELEMENT_TYPE_BYREF       0x10 // Followed by type
#define ELEMENT_TYPE_VALUETYPE   0x11 // Followed by TypeDef or TypeRef token
#define ELEMENT_TYPE_CLASS       0x12 // Followed by TypeDef or TypeRef token
#define ELEMENT_TYPE_VAR         0x13 // Generic parameter in a generic type definition, represented as number
#define ELEMENT_TYPE_ARRAY       0x14 // type rank boundsCount bound1 ... loCount lo1 ...
#define ELEMENT_TYPE_GENERICINST 0x15 // Generic type instantiation. Followed by type type-arg-count type-1 ... type-n
#define ELEMENT_TYPE_TYPEDBYREF  0x16
#define ELEMENT_TYPE_I           0x18 // System.IntPtr
#define ELEMENT_TYPE_U           0x19 // System.UIntPtr
#define ELEMENT_TYPE_FNPTR       0x1b // Followed by full method signature
#define ELEMENT_TYPE_OBJECT      0x1c // System.Object
#define ELEMENT_TYPE_SZARRAY     0x1d // Single-dim array with 0 lower bound
#define ELEMENT_TYPE_MVAR        0x1e // Generic parameter in a generic method definition, represented as number
#define ELEMENT_TYPE_CMOD_REQD   0x1f // Required modifier : followed by a TypeDef or TypeRef token
#define ELEMENT_TYPE_CMOD_OPT    0x20 // Optional modifier : followed by a TypeDef or TypeRef token
#define ELEMENT_TYPE_INTERNAL    0x21 // Implemented within the CLI
#define ELEMENT_TYPE_MODIFIER    0x40 // Or'd with following element types
#define ELEMENT_TYPE_SENTINEL    0x41 // Sentinel for vararg method signature
#define ELEMENT_TYPE_PINNED      0x45 // Denotes a local variable that points at a pinned object

#define SIG_METHOD_DEFAULT  0x00 // default calling convention
#define SIG_METHOD_C        0x01 // C calling convention
#define SIG_METHOD_STDCALL  0x02 // Stdcall calling convention
#define SIG_METHOD_THISCALL 0x03 // thiscall calling convention
#define SIG_METHOD_FASTCALL 0x04 // fastcall calling convention
#define SIG_METHOD_VARARG   0x05 // vararg calling convention

#define SIG_FIELD           0x06 // encodes a field
#define SIG_LOCAL_SIG       0x07 // used for the .locals directive
#define SIG_PROPERTY        0x08 // used to encode a property

#define SIG_GENERIC         0x10 // used to indicate that the method has one or more generic parameters.
#define SIG_HASTHIS         0x20 // used to encode the keyword instance in the calling convention
#define SIG_EXPLICITTHIS    0x40 // used to encode the keyword explicit in the calling convention

#define SIG_INDEX_TYPE_TYPEDEF  0x00 // ParseTypeDefOrRefEncoded returns this as the out index type for typedefs
#define SIG_INDEX_TYPE_TYPEREF  0x01 // ParseTypeDefOrRefEncoded returns this as the out index type for typerefs
#define SIG_INDEX_TYPE_TYPESPEC 0x02 // ParseTypeDefOrRefEncoded returns this as the out index type for typespecs

typedef unsigned char sig_byte;
typedef unsigned char sig_elem_type;
typedef unsigned char sig_index_type;
typedef unsigned int  sig_index;
typedef unsigned int  sig_count;
typedef unsigned int  sig_mem_number;

class SigParser
{
private:
    sig_byte *pbBase;
    sig_byte *pbCur;
    sig_byte *pbEnd;

public:
    bool Parse(sig_byte *blob, sig_count len);

private:
    bool ParseByte(sig_byte *pbOut);
    bool ParseNumber(sig_count *pOut);
    bool ParseTypeDefOrRefEncoded(sig_index_type *pOutIndexType, sig_index *pOutIndex);

    bool ParseMethod(sig_elem_type);
    bool ParseField(sig_elem_type);
    bool ParseProperty(sig_elem_type);
    bool ParseLocals(sig_elem_type);
    bool ParseLocal();
    bool ParseOptionalCustomMods();
    bool ParseOptionalCustomModsOrConstraint();
    bool ParseCustomMod();
    bool ParseRetType();
    bool ParseType();
    bool ParseParam();
    bool ParseArrayShape();

protected:
    // subtype these methods to create your parser side-effects

    //----------------------------------------------------
    // a method with given elem_type
    virtual void NotifyBeginMethod(sig_elem_type elem_type) = 0;
    virtual void NotifyEndMethod() = 0;

    // the method has a this pointer
    virtual void NotifyHasThis() = 0;

    // total parameters for the method
    virtual void NotifyParamCount(sig_count) = 0;

    // starting a return type
    virtual void NotifyBeginRetType() = 0;
    virtual void NotifyEndRetType() = 0;

    // starting a parameter
    virtual void NotifyBeginParam() = 0;
    virtual void NotifyEndParam() = 0;

    // sentinel indication the location of the "..." in the method signature
    virtual void NotifySentinal() = 0;

    // number of generic parameters in this method signature (if any)
    virtual void NotifyGenericParamCount(sig_count) = 0;

    //----------------------------------------------------
    // a field with given elem_type
    virtual void NotifyBeginField(sig_elem_type elem_type) = 0;
    virtual void NotifyEndField() = 0;

    //----------------------------------------------------
    // a block of locals with given elem_type (always just LOCAL_SIG for now)
    virtual void NotifyBeginLocals(sig_elem_type elem_type) = 0;
    virtual void NotifyEndLocals() = 0;

    // count of locals with a block
    virtual void NotifyLocalsCount(sig_count) = 0;

    // starting a new local within a local block
    virtual void NotifyBeginLocal() = 0;
    virtual void NotifyEndLocal() = 0;

    // the only constraint available to locals at the moment is ELEMENT_TYPE_PINNED
    virtual void NotifyConstraint(sig_elem_type elem_type) = 0;

    //----------------------------------------------------
    // a property with given element type
    virtual void NotifyBeginProperty(sig_elem_type elem_type) = 0;
    virtual void NotifyEndProperty() = 0;

    //----------------------------------------------------
    // starting array shape information for array types
    virtual void NotifyBeginArrayShape() = 0;
    virtual void NotifyEndArrayShape() = 0;

    // array rank (total number of dimensions)
    virtual void NotifyRank(sig_count) = 0;

    // number of dimensions with specified sizes followed by the size of each
    virtual void NotifyNumSizes(sig_count) = 0;
    virtual void NotifySize(sig_count) = 0;

    // BUG BUG lower bounds can be negative, how can this be encoded?
    // number of dimensions with specified lower bounds followed by lower bound of each
    virtual void NotifyNumLoBounds(sig_count) = 0;
    virtual void NotifyLoBound(sig_count) = 0;

    //----------------------------------------------------
    // starting a normal type (occurs in many contexts such as param, field, local, etc)
    virtual void NotifyBeginType() = 0;
    virtual void NotifyEndType() = 0;

    virtual void NotifyTypedByref() = 0;

    // the type has the 'byref' modifier on it -- this normally proceeds the type definition in the context
    // the type is used, so for instance a parameter might have the byref modifier on it
    // so this happens before the BeginType in that context
    virtual void NotifyByref() = 0;

    // the type is "VOID" (this has limited uses, function returns and void pointer)
    virtual void NotifyVoid() = 0;

    // the type has the indicated custom modifiers (which can be optional or required)
    virtual void NotifyCustomMod(sig_elem_type cmod, sig_index_type indexType, sig_index index) = 0;

    // the type is a simple type, the elem_type defines it fully
    virtual void NotifyTypeSimple(sig_elem_type elem_type) = 0;

    // the type is specified by the given index of the given index type (normally a type index in the type metadata)
    // this callback is normally qualified by other ones such as NotifyTypeClass or NotifyTypeValueType
    virtual void NotifyTypeDefOrRef(sig_index_type indexType, int index) = 0;

    // the type is an instance of a generic
    // elem_type indicates value_type or class
    // indexType and index indicate the metadata for the type in question
    // number indicates the number of type specifications for the generic types that will follow
    virtual void NotifyTypeGenericInst(sig_elem_type elem_type, sig_index_type indexType, sig_index index, sig_mem_number number) = 0;

    // the type is the type of the nth generic type parameter for the class
    virtual void NotifyTypeGenericTypeVariable(sig_mem_number number) = 0;

    // the type is the type of the nth generic type parameter for the member
    virtual void NotifyTypeGenericMemberVariable(sig_mem_number number) = 0;

    // the type will be a value type
    virtual void NotifyTypeValueType() = 0;

    // the type will be a class
    virtual void NotifyTypeClass() = 0;

    // the type is a pointer to a type (nested type notifications follow)
    virtual void NotifyTypePointer() = 0;

    // the type is a function pointer, followed by the type of the function
    virtual void NotifyTypeFunctionPointer() = 0;

    // the type is an array, this is followed by the array shape, see above, as well as modifiers and element type
    virtual void NotifyTypeArray() = 0;

    // the type is a simple zero-based array, this has no shape but does have custom modifiers and element type
    virtual void NotifyTypeSzArray() = 0;
};

//----------------------------------------------------

#endif // __PROFILER_SIGNATURE_PARSER__
-1
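The grammar in the header above leans on the ECMA-335 compressed integer format: every `ParamCount`, `GenArgCount`, `Rank`, and similar field is a "29-bit-encoded-integer" that `SigParser::ParseNumber` consumes. As a rough sketch of that decoding (`DecodeCompressedUInt` is a hypothetical stand-alone helper written for illustration, not a function from the header), per ECMA-335 II.23.2:

```cpp
// Hypothetical model of the compressed unsigned integer decoding that
// SigParser::ParseNumber performs. The leading bits of the first byte
// select the encoded width:
//   0xxxxxxx                            -> 1 byte,  values 0 .. 0x7F
//   10xxxxxx xxxxxxxx                   -> 2 bytes, values 0 .. 0x3FFF
//   110xxxxx xxxxxxxx xxxxxxxx xxxxxxxx -> 4 bytes, values 0 .. 0x1FFFFFFF
static bool DecodeCompressedUInt(const unsigned char **ppCur, const unsigned char *pEnd, unsigned int *pOut)
{
    const unsigned char *p = *ppCur;
    if (p >= pEnd)
        return false;

    unsigned char b1 = *p++;
    if ((b1 & 0x80) == 0)                       // one-byte form
    {
        *pOut = b1;
    }
    else if ((b1 & 0x40) == 0)                  // two-byte form: 10xxxxxx
    {
        if (p >= pEnd)
            return false;
        *pOut = ((b1 & 0x3Fu) << 8) | *p++;
    }
    else if ((b1 & 0x20) == 0)                  // four-byte form: 110xxxxx
    {
        if (pEnd - p < 3)
            return false;
        *pOut = ((b1 & 0x1Fu) << 24) | (p[0] << 16) | (p[1] << 8) | p[2];
        p += 3;
    }
    else
    {
        return false;                           // 111xxxxx is not a valid length prefix
    }

    *ppCur = p;
    return true;
}
```

The 1/2/4-byte prefix scheme is also why the grammar caps every count at 29 bits: only the low 29 bits of the four-byte form carry the value.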
dotnet/runtime
65,889
[Mono] Add SIMD intrinsics for Vector64/128 "All" and "Any" variants of GT/GE/LT/LE
Related to #64072 - `GreaterThanAll` - `GreaterThanAny` - `GreaterThanOrEqualAll` - `GreaterThanOrEqualAny` - `LessThanAll` - `LessThanAny` - `LessThanOrEqualAll` - `LessThanOrEqualAny`
simonrozsival
"2022-02-25T12:07:00Z"
"2022-03-09T14:24:14Z"
e32f3b61cd41e6a97ebe8f512ff673b63ff40640
cbcc616cf386b88e49f97f74182ffff241528179
[Mono] Add SIMD intrinsics for Vector64/128 "All" and "Any" variants of GT/GE/LT/LE. Related to #64072 - `GreaterThanAll` - `GreaterThanAny` - `GreaterThanOrEqualAll` - `GreaterThanOrEqualAny` - `LessThanAll` - `LessThanAny` - `LessThanOrEqualAll` - `LessThanOrEqualAny`
./src/coreclr/pal/src/libunwind/src/ia64/Lscript.c
#define UNW_LOCAL_ONLY
#include <libunwind.h>
#if defined(UNW_LOCAL_ONLY) && !defined(UNW_REMOTE_ONLY)
#include "Gscript.c"
#endif
#define UNW_LOCAL_ONLY
#include <libunwind.h>
#if defined(UNW_LOCAL_ONLY) && !defined(UNW_REMOTE_ONLY)
#include "Gscript.c"
#endif
-1
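For the PR described in the record above, the "All"/"Any" suffix decides how the lane-wise comparison result collapses to a single boolean. The following is a scalar model of the semantics only — the actual Mono change lowers these intrinsics in the JIT to SIMD compare instructions plus a lane reduction, not a loop like this:

```cpp
#include <cstddef>

// GreaterThanAll(a, b): true iff every lane of a is greater than the
// corresponding lane of b.
template <typename T, size_t N>
bool GreaterThanAll(const T (&left)[N], const T (&right)[N])
{
    for (size_t i = 0; i < N; ++i)
        if (!(left[i] > right[i]))
            return false;   // a single failing lane falsifies "All"
    return true;
}

// GreaterThanAny(a, b): true iff at least one lane of a is greater than the
// corresponding lane of b.
template <typename T, size_t N>
bool GreaterThanAny(const T (&left)[N], const T (&right)[N])
{
    for (size_t i = 0; i < N; ++i)
        if (left[i] > right[i])
            return true;    // a single passing lane satisfies "Any"
    return false;
}
```

The remaining six intrinsics in the list follow the same two patterns with `>=`, `<`, and `<=` in place of `>`.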
dotnet/runtime
65,889
[Mono] Add SIMD intrinsics for Vector64/128 "All" and "Any" variants of GT/GE/LT/LE
Related to #64072 - `GreaterThanAll` - `GreaterThanAny` - `GreaterThanOrEqualAll` - `GreaterThanOrEqualAny` - `LessThanAll` - `LessThanAny` - `LessThanOrEqualAll` - `LessThanOrEqualAny`
simonrozsival
"2022-02-25T12:07:00Z"
"2022-03-09T14:24:14Z"
e32f3b61cd41e6a97ebe8f512ff673b63ff40640
cbcc616cf386b88e49f97f74182ffff241528179
[Mono] Add SIMD intrinsics for Vector64/128 "All" and "Any" variants of GT/GE/LT/LE. Related to #64072 - `GreaterThanAll` - `GreaterThanAny` - `GreaterThanOrEqualAll` - `GreaterThanOrEqualAny` - `LessThanAll` - `LessThanAny` - `LessThanOrEqualAll` - `LessThanOrEqualAny`
./src/mono/mono/metadata/w32event-win32.c
/**
 * \file
 * Runtime support for managed Event on Win32
 *
 * Author:
 *	Ludovic Henry (luhenry@microsoft.com)
 *
 * Licensed under the MIT license. See LICENSE file in the project root for full license information.
 */

#include "w32event.h"

#include <windows.h>
#include <winbase.h>
#include <mono/metadata/handle.h>
#include <mono/utils/mono-error-internals.h>
#include "icall-decl.h"

void
mono_w32event_init (void)
{
}

gpointer
mono_w32event_create (gboolean manual, gboolean initial)
{
	return CreateEvent (NULL, manual, initial, NULL);
}

gboolean
mono_w32event_close (gpointer handle)
{
	return CloseHandle (handle);
}

void
mono_w32event_set (gpointer handle)
{
	SetEvent (handle);
}
/**
 * \file
 * Runtime support for managed Event on Win32
 *
 * Author:
 *	Ludovic Henry (luhenry@microsoft.com)
 *
 * Licensed under the MIT license. See LICENSE file in the project root for full license information.
 */

#include "w32event.h"

#include <windows.h>
#include <winbase.h>
#include <mono/metadata/handle.h>
#include <mono/utils/mono-error-internals.h>
#include "icall-decl.h"

void
mono_w32event_init (void)
{
}

gpointer
mono_w32event_create (gboolean manual, gboolean initial)
{
	return CreateEvent (NULL, manual, initial, NULL);
}

gboolean
mono_w32event_close (gpointer handle)
{
	return CloseHandle (handle);
}

void
mono_w32event_set (gpointer handle)
{
	SetEvent (handle);
}
-1
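The `w32event-win32.c` record above is a thin shim over the Win32 event API. A hypothetical call sequence, assuming `w32event.h` declares the `mono_w32event_*` functions and pulls in the glib typedefs (`gpointer`, `gboolean`, `FALSE`) — `signal_once` itself is made up for illustration:

```c
#include "w32event.h"

static void
signal_once (void)
{
	/* auto-reset event (manual == FALSE), initially non-signaled */
	gpointer event = mono_w32event_create (FALSE, FALSE);
	if (!event)
		return;

	mono_w32event_set (event);   /* wake any waiter */
	mono_w32event_close (event); /* release the underlying Win32 handle */
}
```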