licenses (sequence, 1-3 entries) | version (string, 677 classes) | tree_hash (string, 40 chars) | path (string, 1 class) | type (string, 2 classes) | size (string, 2-8 chars) | text (string, 25-67.1M chars) | package_name (string, 2-41 chars) | repo (string, 33-86 chars) |
---|---|---|---|---|---|---|---|---|
[
"MIT"
] | 0.9.4 | 36b450a98090c6b6c57cc84fa647013402c84650 | docs | 2206 | ```@meta
CurrentModule = EarthSciData
```
# EarthSciData: Earth Science Data Loaders and Interpolators
Documentation for [EarthSciData](https://github.com/EarthSciML/EarthSciData.jl).
## Installation
```julia
using Pkg
Pkg.add("EarthSciData")
```
## Feature Summary
This package contains data loaders for use with the [EarthSciML](https://earthsci.dev/) ecosystem.
## Feature List
* Loader for [GEOS-FP](https://gmao.gsfc.nasa.gov/GMAO_products/NRT_products.php) data.
* Loader for [2016 NEI](https://gaftp.epa.gov/Air/) emissions data.
* Data outputters:
* [`NetCDFOutputter`](@ref)
## Contributing
* Please refer to the
[SciML ColPrac: Contributor's Guide on Collaborative Practices for Community Packages](https://github.com/SciML/ColPrac/blob/master/README.md)
for guidance on PRs, issues, and other matters relating to contributing.
## Reproducibility
```@raw html
<details><summary>The documentation of this EarthSciML package was built using these direct dependencies,</summary>
```
```@example
using Pkg # hide
Pkg.status() # hide
```
```@raw html
</details>
```
```@raw html
<details><summary>and using this machine and Julia version.</summary>
```
```@example
using InteractiveUtils # hide
versioninfo() # hide
```
```@raw html
</details>
```
```@raw html
<details><summary>A more complete overview of all dependencies and their versions is also provided.</summary>
```
```@example
using Pkg # hide
Pkg.status(;mode = PKGMODE_MANIFEST) # hide
```
```@raw html
</details>
```
```@raw html
You can also download the
<a href="
```
```@eval
using TOML
using Markdown
version = TOML.parse(read("../../Project.toml",String))["version"]
name = TOML.parse(read("../../Project.toml",String))["name"]
link = Markdown.MD("https://github.com/EarthSciML/"*name*".jl/tree/gh-pages/v"*version*"/assets/Manifest.toml")
```
```@raw html
">manifest</a> file and the
<a href="
```
```@eval
using TOML
using Markdown
version = TOML.parse(read("../../Project.toml",String))["version"]
name = TOML.parse(read("../../Project.toml",String))["name"]
link = Markdown.MD("https://github.com/EarthSciML/"*name*".jl/tree/gh-pages/v"*version*"/assets/Project.toml")
```
```@raw html
">project</a> file.
``` | EarthSciData | https://github.com/EarthSciML/EarthSciData.jl.git |
|
[
"MIT"
] | 0.9.4 | 36b450a98090c6b6c57cc84fa647013402c84650 | docs | 1351 | # 2016 US EPA National Emissions Inventory (NEI) data
This package provides a data loader for CMAQ-formatted monthly US National Emissions Inventory data for the year 2016: [`NEI2016MonthlyEmis`](@ref).
## Download Configuration
Because of an issue with the EPA's FTP server from which the data is downloaded, you may need to set the following environment variable before using this loader:
In Julia:
```julia
ENV["JULIA_NO_VERIFY_HOSTS"] = "gaftp.epa.gov"
```
or in a bash shell:
```bash
export JULIA_NO_VERIFY_HOSTS=gaftp.epa.gov
```
## Equations
This is what its equation system looks like:
```@example nei2016
using EarthSciData, ModelingToolkit, DynamicQuantities, DataFrames
using ModelingToolkit: t
using DynamicQuantities: dimension
@parameters lat, [unit=u"rad"], lon, [unit=u"rad"], lev [unit=u"rad"]
emis, emis_updater = NEI2016MonthlyEmis("mrggrid_withbeis_withrwc", lon, lat, lev)
```
## Variables
Here are the variables in tabular format:
```@example nei2016
table(vars) = DataFrame(
:Name => [string(Symbolics.tosymbol(v, escape=false)) for v ∈ vars],
:Units => [dimension(ModelingToolkit.get_unit(v)) for v ∈ vars],
:Description => [ModelingToolkit.getdescription(v) for v in vars],
)
table(unknowns(emis))
```
## Parameters
Finally, here are the parameters in tabular format:
```@example nei2016
table(parameters(emis))
``` | EarthSciData | https://github.com/EarthSciML/EarthSciData.jl.git |
|
[
"MIT"
] | 0.1.0 | a320d7857f292ebcb970b71c628dadf5a6e6e6e6 | code | 68 | module CStructures
include("layout.jl")
include("cstruct.jl")
end
| CStructures | https://github.com/KlausC/CStructures.jl.git |
|
[
"MIT"
] | 0.1.0 | a320d7857f292ebcb970b71c628dadf5a6e6e6e6 | code | 9753 |
export Caccessor, CStruct, CVector, CStructAccess, CStructGuarded
import Base: length, size, pointer, show, unsafe_convert, Fix1
import Base: propertynames, getproperty, setproperty!, getindex, setindex!
abstract type CStructAccess{T} end
"""
CStruct{T}(p::Ptr)
Given a C-type pointer `p` to a C-struct and the equivalent Julia struct
with the same memory layout `T`, provide read and write access to the fields.
`T` must be a bits type.
Example:
struct T <: Layout
a::Cint
b::Cdouble
end
a = Vector{UInt8}(undef, 100)
p = pointer(a) # usually the data are coming from C
cs = CStruct{T}(p)
cs.a = 1234
cs.b = 3.5
"""
struct CStruct{T} <: CStructAccess{T}
pointer::Ptr{Nothing}
function CStruct{T}(p::Ptr) where T
isbitstype(T) || throw(ArgumentError("$T is not a bitstype"))
new{T}(p)
end
CStruct{T}(data) where T = CStruct{T}(pointer(data))
CStruct(data) = CStruct(pointer(data))
CStruct(p::Ptr{T}) where T = CStruct{T}(p)
end
struct CStructGuarded{T,D} <: CStructAccess{T}
cs::CStruct{T}
guard::Vector{D}
function CStructGuarded{T}(data::Vector{D}) where {T,D<:Union{Integer,Ptr}}
new{T,D}(CStruct{T}(data), data)
end
end
CStructGuarded(::Type{T}, src=()) where T = CStructGuarded{T}(Cserialize(T, src))
"""
CVector
An `AbstractVector` type for Julia objects used to access the elements of C vectors,
which are backed by plain C memory. The memory layout is described by `Layout` structs.
"""
struct CVector{T} <: AbstractVector{T}
pointer::Ptr{Nothing}
length::Int
function CVector{T}(p::Ptr, length::Integer=-1) where T
isbitstype(T) || throw(ArgumentError("$T is not a bitstype"))
new{T}(p, length)
end
end
# accessing the fields represented by CStruct
# to access the pointer use function `pointer`
propertynames(::CStruct{T}) where T = fieldnames(T)
propertynames(::CStructGuarded{T}) where T = fieldnames(T)
function getproperty(cs::CStruct{T}, field::Symbol) where T
fp = pointer_for_field(cs, field)
get_from_pointer(fp, cs)
end
getproperty(sg::CStructGuarded, field::Symbol) = getproperty(getfield(sg, :cs), field)
function setproperty!(cs::CStruct{T}, field::Symbol, v) where T
fp = pointer_for_field(cs, field)
set_at_pointer!(fp, v)
end
setproperty!(sg::CStructGuarded, field::Symbol, v) = setproperty!(getfield(sg, :cs), field, v)
function getindex(cv::CVector{T}, i::Integer) where T
p = pointer_for_index(cv, i)
get_from_pointer(p, cv)
end
function getindex(cv::CVector{T}, r::OrdinalRange) where T
[getindex(cv, i) for i in r]
end
function setindex!(cv::CVector{T}, v, i::Integer) where T
p = pointer_for_index(cv, i)
set_at_pointer!(p, v)
end
size(cv::CVector) = (length(cv),)
"""
pointer(::Union{CStruct,CVector})
length(::CVector)
get the internal fields of accessors
"""
pointer(cs::CStruct) = getfield(cs, :pointer)
pointer(cs::CVector) = getfield(cs, :pointer)
length(cv::CVector) = getfield(cv, :length)
pointer(sg::CStructGuarded) = pointer(getfield(sg, :cs))
function show(io::IO, x::CStructAccess{T}) where T
show(io, typeof(x))
print(io, '(')
nf = length(T.types)
if !Base.show_circular(io, x)
recur_io = IOContext(io, Pair{Symbol,Any}(:SHOWN_SET, x),
Pair{Symbol,Any}(:typeinfo, Any))
for i in 1:nf
f = fieldname(T, i)
show(recur_io, getproperty(x, f))
if i < nf
print(io, ", ")
end
end
end
print(io, ')')
end
function show(io::IO, x::CVector{T}) where T
show(io, typeof(x))
print(io, '[')
nf = length(x)
if nf < 0
print(io, "#= unknown length =#")
elseif !Base.show_circular(io, x)
recur_io = IOContext(io, Pair{Symbol,Any}(:SHOWN_SET, x),
Pair{Symbol,Any}(:typeinfo, Any))
for i in 1:nf
show(recur_io, getindex(x, i))
if i < nf
print(io, ", ")
end
end
end
print(io, ']')
end
"""
get_from_pointer(::Ptr{T}, parent)
For bits types, simply load the value and convert it to a Julia value if required.
For struct types, create a `CStruct` accessor.
For vector types, create a `CVector` accessor.
"""
function get_from_pointer(fp::Ptr{FT}, parent) where FT <: Ptr
v = unsafe_load(fp)
v == C_NULL ? nothing : get_from_pointer(v, parent)
end
function get_from_pointer(fp::Ptr{FT}, parent) where {T,FT<:LVector{T}}
CVector{T}(fp, get_length(FT, parent))
end
function get_from_pointer(fp::Ptr{FT}, parent) where FT <: Layout
CStruct{FT}(fp)
end
function get_from_pointer(fp::Ptr{FT}, parent) where FT <: Cstring
v = unsafe_load(fp)
v == Cstring(C_NULL) ? "" : unsafe_string(Ptr{UInt8}(v))
end
function get_from_pointer(fp::Ptr{FT}, parent) where FT
if FT <: Nothing
fp
elseif isbitstype(FT)
unsafe_load(fp)
else
throw(ArgumentError("not supported layout type: $FT"))
end
end
"""
set_at_pointer!(::Ptr, value)
Convert `value` to a C primitive or composed object and store its bytes at the given memory position.
"""
function set_at_pointer!(fp::Ptr{FT}, v) where FT
w = unsafe_convert(FT, Base.cconvert(FT, v))
unsafe_store!(fp, w)
end
"""
pointer_for_field(cs::CStruct{T}, fieldname) where T
For `cs` return pointer to member field `fieldname`.
The pointer has type `Ptr{fieldtype(T, i)}` with `i` the number of the field
within struct type `T`.
"""
function pointer_for_field(cs::CStruct{T}, field::Symbol) where T
i = findfirst(Fix1(isequal, field), fieldnames(T))
i === nothing && throw(ArgumentError("type $T has no field $field"))
Ptr{fieldtype(T, i)}(getfield(cs, :pointer) + fieldoffset(T, i))
end
function pointer_for_index(cv::CVector{T}, i::Integer) where T
Ptr{T}(getfield(cv, :pointer) + sizeof(T) * (i - 1))
end
unsafe_convert(::Type{Ptr{T}}, cs::CStructAccess{T}) where T = Ptr{T}(pointer(cs))
unsafe_convert(::Type{Ptr{Vector{T}}}, cs::CVector{T}) where T = Ptr{Vector{T}}(pointer(cs))
"""
p = pointer(a::Vector{T})::Ptr{T}
return pointer to `a[1]`. The existence of the resulting Ptr will not protect the object
from garbage collection, so you must ensure that the object remains referenced for the whole
time that the Ptr will be used.
The condition `a[i] === unsafe_load(p, i)` is usually true.
Given `p` it is possible to access arbitrary bits data by byte offset and type `S` using
`unsafe_load(Ptr{S}(p + offset))`.
This function is mainly used to simulate a C memory in the data
area of vector `a`.
"""
pointer_from_vector_obs(a::Vector{T}) where T = unsafe_convert(Ptr{T}, a)
export default_value, default_type, construct
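# The helpers below produce default instances for arbitrary (possibly abstract) field types:
# `default_type` maps an abstract numeric/string/array type to a concrete default,
# `default_value` builds a zero-like instance, and `construct` assembles a struct
# (mutable or immutable) from field values by writing them directly into its memory.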
function default_type(::Type{T}) where T
isconcretetype(T) || throw(ArgumentError("no default type defined for $T"))
T
end
default_type(::Type{<:AbstractArray{T,N}}) where {T,N} = Array{T,N}
default_type(::Type{T}) where T<:Real = isconcretetype(T) ? T : Bool
default_type(::Type{T}) where T<:AbstractIrrational = isconcretetype(T) ? T : Irrational
default_type(::Type{T}) where T<:AbstractFloat = isconcretetype(T) ? T : Float64
default_type(::Type{T}) where T<:Signed = isconcretetype(T) ? T : Int
default_type(::Type{T}) where T<:Unsigned = isconcretetype(T) ? T : UInt
default_type(::Type{T}) where T<:Rational = isconcretetype(T) ? T : Rational{default_type(Signed)}
default_type(::Type{T}) where {S,T<:Rational{S}} = isconcretetype(T) ? T : Rational{default_type(S)}
default_type(::Type{T}) where T<:Complex = Complex{default_type(Real)}
default_type(::Type{T}) where {S,T<:Complex{S}} = Complex{default_type(S)}
default_type(::Type{T}) where T<:AbstractString = isconcretetype(T) ? T : String
default_value(::Type{T}) where T = _default_value(default_type(T))
default_value(::Type{A}) where {T,N,A<:AbstractArray{T,N}} = default_type(A)(undef,zeros(Int,N)...)
default_value(::Type{T}) where T<:Number = default_type(T)(0)
default_value(::Type{T}) where T<:AbstractIrrational = default_type(T)(ℯ)
default_value(::Type{T}) where {S,T<:Complex{S}} = default_type(T)(default_value(S))
default_value(::Type{T}) where T<:AbstractString = default_type(T)("")
function _default_value(::Type{T}) where T
@assert isconcretetype(T)
ft = fieldtypes(T)
fv = default_value.(ft)
construct(T, fv...)
end
default_value(::Type{T}, v) where T<:Union{Number,AbstractString} = convert(T, v)
function default_value(::Type{T}, v) where {S,T<:AbstractArray{S}}
convert(T, [default_value(S, x) for x in v])
end
function default_value(::Type{T}, v) where T
f = fieldnames(T)
n = length(f)
r = Vector{Any}(undef, n)
for i in 1:n
fn = fieldname(T, i)
ft = fieldtype(T, i)
r[i] = hasproperty(v, fn) ? default_value(ft, getproperty(v, fn)) : default_value(ft)
end
construct(T, r...)
end
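# `construct` dispatches on mutability: immutable values are materialized by writing the
# field bytes into a one-element Vector{T} buffer, mutable ones by writing through
# `pointer_from_objref` into an instance obtained from the type's first constructor method.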
construct(::Type{T}, args...) where T = construct(Val(!ismutabletype(T)), T, args...)
function construct(::Val{true}, ::Type{T}, args...) where T
r = Vector{T}(undef, 1)
p = pointer(r)
_construct!(T, p, args)
r[1]
end
function construct(::Val{false}, ::Type{T}, args...) where T
r = _construct_any(T)
p = pointer_from_objref(r)
_construct!(T, p, args)
r
end
function _construct!(::Type{T}, p::Ptr, args) where T
n = min(length(args), fieldcount(T))
for i = 1:n
off = fieldoffset(T, i)
ft = fieldtype(T, i)
v = convert(ft, args[i])
q = p + off
if isbitstype(ft)
unsafe_store!(Ptr{ft}(q), v)
else
unsafe_store!(Ptr{Ptr{Nothing}}(q), pointer_from_objref(v))
end
end
end
function _construct_any(::Type{T}) where T
m = first(methods(T))
at = m.sig.types[2:end]
T(default_value.(at)...)
end
| CStructures | https://github.com/KlausC/CStructures.jl.git |
|
[
"MIT"
] | 0.1.0 | a320d7857f292ebcb970b71c628dadf5a6e6e6e6 | code | 11556 | export Layout, LForwardReference, LFixedVector, LVarVector
export is_layout_fixed, is_layout_variable, simple_size, total_size, Cserialize
"""
Layout
All structs used to describe the memory layout (of a C-data structure) need to be
subtypes of this.
Some controlling objects used in such templates to describe vectors and pointers
also have this type.
A `Layout` structure and a memory pointer are needed to construct a `Caccessor` object.
"""
abstract type Layout end
abstract type LVector{T} <: Layout end
Base.eltype(::Type{<:LVector{T}}) where T = T
# Layout Elements
"""
LFixedVector{T,N}
Denote a fixed size vector with element type `T` and size `N`.
"""
struct LFixedVector{T,N} <: LVector{T}
p::NTuple{N,T}
end
get_length(::Type{LFixedVector{T,N}}, ::Any) where {T,N} = N
Base.eltype(::Type{LFixedVector{T,N}}) where {T,N} = T
"""
LVarVector{T,F}
Denote a variable length vector with element type `T` in a template.
`F` is a function that calculates the length of the vector, given the
accessor object containing the vector.
Example:
struct A <: Layout
len::Int
vec::LVarVector{Float64, (x) -> x.len}
end
"""
struct LVarVector{T,F} <: LVector{T}
p::NTuple{1,T}
end
function get_length(::Type{LVarVector{T,F}}, x) where {T,F}
F isa Symbol ? getproperty(x, F) :
F isa Integer ? getindex(x, F) :
F isa Function ? F(x) :
0
end
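# set_length!: counterpart of get_length. When the length of a variable vector is stored in
# a field (Symbol `F`) of the parent struct `S` or at an index (Integer `F`) of a fixed
# vector, write the new length `v` back into the parent's memory at pointer `p`.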
function set_length!(::Type{LVarVector{T,F}}, t::Tuple{U,Ptr}, v) where {T,F,U}
S, p = t
ix = 1
G = Nothing
if F isa Symbol
# setproperty!(x, v, F)
S <: Layout || return v
i = findfirst(isequal(F), fieldnames(S))
i === nothing && return v
G = fieldtype(S, i)
p += fieldoffset(S, i)
elseif F isa Integer
# setindex!(x, v, F)
S <: LFixedVector || return v
G = eltype(S)
ix = Int(F)
ix <= 0 && return v
end
G === Nothing && return v
vv = G(v)
unsafe_store!(Ptr{G}(p), vv, ix)
vv
end
Base.eltype(::Type{LVarVector{T,F}}) where {T,F} = T
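# LForwardReference{M,L}: a pointer-sized placeholder for a layout type named by the symbol
# `L`, resolved lazily in the module that defines `M`; this allows forward and self references.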
struct LForwardReference{M,L} <: Layout
p::Ptr{Nothing}
end
Base.eltype(::Type{LForwardReference{M,L}}) where {M,L} = M.name.module.eval(L)
const TEMPLATE_FIXED = true
const TEMPLATE_VAR = false
"""
is_layout_variable(type)
Does the layout described by `type` have a variable size
(for example, a variable-sized vector in the last field of a struct)?
"""
is_layout_variable(T::Type, deep::Bool=false) = !is_layout_fixed(T, deep)
"""
is_layout_fixed(type)
Does the layout described by `type` have a fixed size?
"""
is_layout_fixed(T::Type, deep::Bool=false) = is_layout_fixed(T, deep, Dict())
function is_layout_fixed(::Type{T}, deep::Bool, dup) where T
isprimitivetype(T) || throw(ArgumentError("$T is not a supported layout type"))
TEMPLATE_FIXED
end
function is_layout_fixed(::Type{S}, deep::Bool, dup) where {T,S<:Ptr{T}}
T <: Ptr && throw(ArgumentError("$S is not a supported layout type"))
get!(dup, S) do
dup[S] = TEMPLATE_FIXED
d = is_layout_fixed(T, deep, dup)
deep ? d : TEMPLATE_FIXED
end
end
function is_layout_fixed(::Type{S}, deep::Bool, dup) where {S<:LForwardReference}
is_layout_fixed(Ptr{eltype(S)}, deep, dup)
end
function is_layout_fixed(::Type{S}, deep::Bool, dup) where {T,N,S<:LFixedVector{T,N}}
get!(dup, S) do
dup[S] = TEMPLATE_FIXED
k = is_layout_fixed(T, deep, dup)
if N > 1 && k == TEMPLATE_VAR
throw(ArgumentError("$S with variable length elements"))
end
N == 0 ? TEMPLATE_FIXED : k
end
end
function is_layout_fixed(::Type{S}, deep::Bool, dup) where {T,S<:LVarVector{T}}
get!(dup, S) do
dup[S] = TEMPLATE_VAR
is_layout_fixed(T, deep, dup)
TEMPLATE_VAR
end
end
function is_layout_fixed(::Type{T}, deep::Bool, dup) where {T<:Layout}
get!(dup, T) do
k = dup[T] = TEMPLATE_FIXED
if !isbitstype(T)
text = isconcretetype(T) ? "bits" : "concrete"
throw(ArgumentError("$T is not a $text type struct"))
end
fields = fieldnames(T)
n = length(fields)
for i = 1:n
f = fields[i]
F = fieldtype(T, f)
k = is_layout_fixed(F, deep, dup)
if i < n && k == TEMPLATE_VAR
throw(ArgumentError("$F has variable length in '$T.$f' - not last field"))
end
end
k
end
end
function align(p::Integer, s::Integer=sizeof(Ptr)) # s must be 2^n
t = s - 1
(p + t ) & ~t
end
function align(p::Integer, ::Type{T}) where T
align(p, Base.aligned_sizeof(T))
end
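# simple_size: byte length of the fixed part of a layout; total_size: additionally follows
# pointers and counts the referenced data. `veclens` supplies the lengths of any LVarVector
# fields in the order they are encountered.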
simple_size(T::Type, veclens) = blength(T, veclens, Val(false))
total_size(T::Type, veclens) = blength(T, veclens, Val(true))
function blength(::Type{T}, veclens, v::Val{P}) where {P,F,N,T<:LFixedVector{F,N}}
s = sizeof(T)
j = 0
for _ = 1:N
j, s = blength_helper(F, veclens, j, s, v, T)
end
if j < length(veclens)
throw(ArgumentError("too many variable length specifiers for $T only $j are needed"))
end
s
end
function blength(::Type{T}, veclens, v::Val{P}) where {P,S,T<:LVarVector{S}}
isempty(veclens) && return 0
n = first(veclens)
n == 0 && return 0
n < 0 && throw(ArgumentError("negative vector length '$n' not allowed"))
vl(i) = i < length(veclens) ? veclens[i+1] : ()
sum(blength(S, vl(i), v) for i = 1:n)
end
blength(::Type{Ptr{T}}, veclens, v::Val{P}) where {P,T} = P ? blength(T, veclens, v) : 0
function blength(::Type{T}, veclens, v::Val{P}) where {P,T<:Layout}
s = sizeof(T)
j = 0
for i = 1:fieldcount(T)
F = fieldtype(T, i)
j, s = blength_helper(F, veclens, j, s, v, T)
end
if j < length(veclens)
throw(ArgumentError("too many variable length specifiers for $T- only $j are needed"))
end
s
end
blength(::Type{T}, veclens, ::Val) where T = sizeof(T)
function blength_helper(::Type{F}, veclens, j, s, v::Val{P}, T) where {P,F}
if is_layout_variable(F, true)
j += 1
if j > length(veclens)
throw(ArgumentError("not enough variable length specifiers for $T"))
end
vl = veclens[j]
else
vl = ()
end
al = Base.datatype_alignment(F)
s = align(s, al)
s += F <: LVarVector ? blength(F, vl, v) :
P && F <: Union{Ptr,LForwardReference} ? blength(eltype(F), vl, v) : 0
j, s
end
"""
Cserialize(::Type{T}, source::Any)
Convert the julia object `source` into a byte vector to be used in C.
The process is controlled by the layout type recursively.
The resulting vector contains only data described in `T`.
The fields, vector elements, or bits data required by `T` are taken from `source`
where a corresponding part is available. All other data are filled with 0-bytes.
If `T` is a structure, corresponding fields in `source` are matched by name.
If `T` is a vector, corresponding elements in `source` are matched by index.
If `T` is a `Ptr{S}`, the space for a pointer is reserved and filled with
the offset integer (of same size), while the `S` object is appended at the end
of the serialization stream.
Finally all offset integers are replaced by actual pointers.
"""
function Cserialize(::Type{T}, src) where T
buf = UInt8[]
rea = Int[]
off = Cserialize!(T, src, buf, 0, rea)
resize!(buf, off)
relocate!(buf, rea)
end
function Cserialize!(::Type{T}, src, buf, off, rea::Vector{Int}) where T
ctx = Tuple{Integer,Type,Any}[]
noff = _Cserialize!(T, src, buf, off, rea, ctx)
for (poff, F, src) in ctx
noff = align(noff)
ensure!(buf, poff, sizeof(Ptr))
p = pointer(buf) + poff
q = noff
Base.unsafe_store!(Ptr{Ptr{F}}(p), Ptr{F}(q))
noff = Cserialize!(F, src, buf, noff, rea)
end
align(noff)
end
function _Cserialize!(::Type{T}, src::S, buf::Vector{UInt8}, off::Integer, rea, ctx) where {T<:Layout,S}
ensure!(buf, off, sizeof(T))
noff = off
for i = 1:fieldcount(T)
F = fieldtype(T, i)
f = fieldname(T, i)
x = fieldoffset(T, i)
if hasproperty(src, f)
noff = _Cserialize!(F, getproperty(src, f), buf, off + x, rea, ctx)
elseif F <: LVarVector
throw(ArgumentError("need src vector $f to determine length"))
else
noff = off + x + Base.aligned_sizeof(F)
end
end
noff
end
function _Cserialize!(::Type{<:LFixedVector{T,N}}, src::AbstractVector, buf::Vector{UInt8}, off::Integer, rea, ctx) where {T,N}
as = Base.aligned_sizeof(T)
ensure!(buf, off, as * N)
n = length(src)
for i = 1:min(N, n)
off = _Cserialize!(T, src[i], buf, off, rea, ctx)
off = align(off, T)
end
if N > n
off += as * (N - n)
end
off
end
function _Cserialize!(::Type{S}, src::AbstractVector, buf::Vector{UInt8}, off::Integer, rea, ctx) where {T,F,S<:LVarVector{T,F}}
for i = 1:length(src)
off = _Cserialize!(T, src[i], buf, off, rea, ctx)
off = align(off, T)
end
off
end
function _Cserialize!(::Type{Cstring}, src::Nothing, buf::Vector{UInt8}, off::Integer, rea, ctx)
alignptr(off)
end
function _Cserialize!(::Type{Cstring}, src, buf::Vector{UInt8}, off::Integer, rea, ctx)
s = string(src)
n = length(s) + 1
v = codeunits(s)
pushall!(rea, ctx, off, LFixedVector{UInt8,n}, v)
alignptr(off)
end
function _Cserialize!(::Type{T}, src, buf::Vector{UInt8}, off::Integer, rea, ctx) where T
if isbitstype(T)
s = Base.aligned_sizeof(T)
ensure!(buf, off, s)
p = pointer(buf) + off
Base.unsafe_store!(Ptr{T}(p), convert(T, src))
off + s
else
throw(ArgumentError("cannot serialize type $T"))
end
end
function _Cserialize!(::Type{Ptr{T}}, src::Nothing, buf::Vector{UInt8}, off::Integer, rea, ctx) where T
alignptr(off)
end
function _Cserialize!(::Type{Ptr{T}}, src, buf::Vector{UInt8}, off::Integer, rea, ctx) where T
pushall!(rea, ctx, off, T, src)
alignptr(off)
end
alignptr(off) = align(off + sizeof(Ptr))
"""
pushall!(relocs::Vector{Int}, ctx::Vector{Tuple}, offset, Type, value)
push! the `offset` to `relocs` and the tuple `(offset, Type, value)` to `ctx`.
The `relocs` are finally used to replace offset values by pointers.
The `ctx` is used to defer processing for later serialization.
"""
function pushall!(rea::Vector{<:Integer}, ctx::Vector{<:Tuple}, off, T, v)
push!(rea, off)
push!(ctx, (off, T, v))
end
"""
ensure!(buf::Vector, off, size)
Ensure that the size of `buf` is at least `off + size` by maybe resizing `buf`.
Added space is filled with zero bytes.
"""
function ensure!(buf::Vector{UInt8}, off::Integer, siz::Integer)
n = sizeof(buf)
m = off + siz
if n < m
resize!(buf, m)
for i = n+1:m
buf[i] = 0
end
end
buf
end
"""
relocate!(buffer::Vector, offsets)
In vector `buffer`, at the byte offsets stored in `offsets`, offset values (into `buffer`) are
stored as `Int` values. They are replaced by `Ptr` values into the data area of `buffer`.
It is essential that the data area is not changed after this process; that means no
`resize!`, `push!`, etc. are allowed after this final fix-up of the pointer values to be used in
C calls.
"""
function relocate!(buf::AbstractVector, rea::AbstractVector{<:Integer})
p0 = pointer(buf)
for off in rea
p = p0 + off
q = unsafe_load(Ptr{UInt}(p)) + p0
unsafe_store!(Ptr{Ptr{UInt8}}(p), q)
end
buf
end
| CStructures | https://github.com/KlausC/CStructures.jl.git |
|
[
"MIT"
] | 0.1.0 | a320d7857f292ebcb970b71c628dadf5a6e6e6e6 | code | 2566 |
struct A1 <: Layout
a::Bool
b::Cint
c::Float64
d::Cstring
end
@testset "field access" begin
a = fill(UInt64(0), 3)
p = pointer(a)
cs = CStruct{A1}(p)
@test cs.a == 0
@test cs.b == 0
@test cs.c == 0
@test cs.d == ""
v1 = true
v2 = 0x12345678
v3 = 47.11
v4 = "hallo"
cs.a = v1
cs.b = v2
cs.c = v3
cs.d = v4
@test cs.a == v1
@test cs.b == v2
@test cs.c == v3
@test cs.d == v4
end
@testset "index access" begin
a = fill(UInt64(0), 100)
p = pointer(a)
cv = CVector{Int}(p, 3)
@test length(cv) == 3
cv[1:3] .= (1, 2, 3)
@test cv[2] == 2
@test cv[[1,3]] == [1, 3]
end
struct A2 <: Layout
a::Ptr{A2}
end
@testset "self-referencing" begin
a = fill(UInt8(0), 100)
p = pointer(a)
cs = CStruct{A2}(p)
@test cs.a === nothing
io = IOBuffer()
show(io, cs)
@test String(take!(io)) == "CStruct{A2}(nothing)"
cs.a = cs
@test cs.a === cs
show(io, cs)
@test String(take!(io)) == "CStruct{A2}(CStruct{A2}(#= circular reference @-1 =#))"
end
struct A3 <: Layout
len::Int
vec::LVarVector{Float64, (x) -> x.len}
end
@testset "variable vector at end of struct" begin
a = fill(Int(0), 1024)
p = pointer(a)
LEN = 25
cs = CStruct{A3}(p)
cs.len = LEN
@test cs.vec isa CVector{Float64}
@test length(cs.vec) == cs.len == LEN
end
struct A4 <: Layout
len::Int
vec::Ptr{LVarVector{Float64, (x) -> x.len}}
end
@testset "pointer to variable vector" begin
a = fill(Int(0), 1024)
p = pointer(a)
a[2] = p + 32
LEN = 25
cs = CStruct{A4}(p)
cs.len = LEN
@test cs.vec isa CVector{Float64}
@test length(cs.vec) == cs.len == LEN
end
struct B <: Layout
a::Int
b::LVarVector{Float64, :a}
end
@testset "variable length vector in struct" begin
@test_throws ArgumentError Cserialize(B, ())
cs = CStruct{B}(Cserialize(B, (a=0, b=Float64[])))
@test length(cs.b) == cs.a == 0
cs = CStruct{B}(Cserialize(B, (a=2, b=[1.0; 2; 3])))
@test length(cs.b) == cs.a == 2
cs.a = 3
@test length(cs.b) == cs.a == 3
end
struct I1
a::Int8
b::Int16
I1() = new(1, 1)
end
mutable struct M1
a::Int8
b::Float64
M1() = new(2, 2)
end
struct I2
a::Int16
b::M1
I2() = new(3, M1())
end
mutable struct M2
a::Int16
b::M1
M2() = new(4, M1())
end
@testset "construct function $T" for (T, a1, a2) in ((I1, 12, 12), (M1, 12, 12), (I2, 12, M1()), (M2, 12, M1()))
@test construct(T, a1, a2) isa T
end
| CStructures | https://github.com/KlausC/CStructures.jl.git |
|
[
"MIT"
] | 0.1.0 | a320d7857f292ebcb970b71c628dadf5a6e6e6e6 | code | 1347 |
struct L0
a::Int
end
struct L1 <: Layout
a::Int
end
struct L2 <: Layout
a::LForwardReference{L2, :L3}
end
struct L3 <: Layout
a::L2
end
struct L4 <: Layout
a::LForwardReference{L4, :L_not_defined}
end
struct L5 <: Layout
a::LVarVector{Int,1}
end
struct L6 <: Layout
a::LVarVector{Int,1}
b::Int
end
struct L7 <: Layout
a::Ptr{Ptr{Int}}
end
@testset "simple layout templates" begin
@test_throws ArgumentError is_layout_fixed(L0)
@test_throws ArgumentError is_layout_fixed(Vector{Int})
@test is_layout_fixed(Float64)
@test is_layout_fixed(L1)
@test is_layout_fixed(Ptr{L1})
@test is_layout_fixed(Ptr{L5})
@test is_layout_fixed(LFixedVector{Int,10})
@test_throws ArgumentError is_layout_fixed(LFixedVector{Int})
@test is_layout_variable(LVarVector{Int,1})
@test is_layout_fixed(L2)
@test_throws UndefVarError is_layout_fixed(L4)
@test !is_layout_fixed(L5)
@test_throws ArgumentError is_layout_fixed(L6)
@test_throws ArgumentError is_layout_fixed(L7)
@test is_layout_fixed(LFixedVector{L1,1})
@test is_layout_fixed(LFixedVector{L5,0})
@test !is_layout_fixed(LFixedVector{L5,1})
@test_throws ArgumentError !is_layout_fixed(LFixedVector{L5,2})
@test !is_layout_fixed(LVarVector{L1,1})
@test !is_layout_fixed(LVarVector{L5,0})
end
| CStructures | https://github.com/KlausC/CStructures.jl.git |
|
[
"MIT"
] | 0.1.0 | a320d7857f292ebcb970b71c628dadf5a6e6e6e6 | code | 114 | using CStructures
using Test
@testset "CStructures" begin
include("cstruct.jl")
include("layout.jl")
end
| CStructures | https://github.com/KlausC/CStructures.jl.git |
|
[
"MIT"
] | 0.1.0 | a320d7857f292ebcb970b71c628dadf5a6e6e6e6 | docs | 4152 | # CStructures
[](https://github.com/KlausC/CStructures.jl/actions/workflows/CI.yml?query=branch%3Amain)
[](https://codecov.io/gh/KlausC/CStructures.jl)
## Purpose
C data structures are different from Julia structures in that they do not keep type information.
While most primitive types and bitstype structures of Julia have a memory layout identical to the corresponding
C data, complexities arise when pointers and strings are embedded.
This package handles the situation where pointer-containing C data generated in C has to be accessed from Julia code.
It is also possible in Julia to construct a byte array, which can be processed by a C program given the
layout of the C-structures.
This package was fundamental to implementing the [FuseApi](https://github.com/KlausC/FuseApi.jl) package.
## Installation
```
]add CStructures
```
## Usage
```julia
using CStructures

struct LayoutType <: Layout
    field::Int
end

cs = CStruct{LayoutType}(c_data)    # `c_data`: a pointer or byte buffer obtained from C
cs.field = cs.field + 1

se = Cserialize(LayoutType, (field = 42,))
ccall((:cf, :libc), Cvoid, (Ptr{LayoutType},), se)
cg = CStructGuarded{LayoutType}(se)
cg.field = 43
```
## Layout Elements
A data layout is described by a Julia bitstype struct, a fixed or variable vector descriptor, a `Cstring`, or a
reference descriptor.
### Bits Types
All primitive types defined in Julia that have an identical C representation can be used as layout descriptions.
That includes immutable structs of such types, which have the `isbitstype` attribute.
### String Type
The special type `Cstring` is used to represent a `char*` pointer. It occupies the space of any `Ptr`.
### Reference Types
To describe `C`-pointers to primitive objects or C-structures, the `Ptr{T}` notion is used.
Here `T` is a Julia type name. It needs to be defined in the code before the usage.
To support referencing types that will be defined later, as is possible in C, the
special construct `LForwardReference{:S}` was introduced; it uses the symbolized name `:S` of
the referenced type, which can be defined later. This feature will become obsolete as soon as Julia
supports forward references ([PR#32658](https://github.com/JuliaLang/julia/pull/32658)).
Element type `T` must be a reference type.
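For illustration, a minimal sketch of a forward reference, adapted directly from the package's test suite (`L2`/`L3` are the names used there):
```julia
struct L2 <: Layout
    a::LForwardReference{L2, :L3}   # pointer-sized slot; `:L3` is resolved when accessed
end
struct L3 <: Layout
    a::L2
end
```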
### Vector Types
A fixed length vector of `N` elements of type `T` is denoted `LFixedVector{T,N}`. It has the size of
`NTuple{N,T}`, where `T` is any of the supported types. A pointer to a vector is `Ptr{LFixedVector{T,N}}`
or `LForwardReference{LFixedVector{T,N}}`.
A variable length vector is denoted `LVarVector{T,F}`, with the same restrictions on `T` as for fixed vectors.
It can be embedded as the last element of a layout structure or as the element type of a reference.
The actual length is calculated by `F(x)`, where `x` is the structure in which the vector is referenced.
Typically `F = (x) -> x.fieldname`, i.e. the vector length is stored in an integer field of the same structure.
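For example (the `A3` layout is taken from the package's tests; `Header4` is a hypothetical name used only for illustration):
```julia
struct Header4 <: Layout
    bytes::LFixedVector{UInt8, 4}             # fixed-size field, like `uint8_t bytes[4]` in C
end
struct A3 <: Layout
    len::Int
    vec::LVarVector{Float64, (x) -> x.len}    # variable tail; its length is read from `len`
end
```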
### Layout Types
A bitstype struct that is a subtype of `Layout` is composed of fields that have a memory layout identical to
a corresponding C struct.
The fields may be any of the mentioned layout types, but not directly self-referential
(only via `Ptr` or `LForwardReference`).
## Accessor Objects
### Accessing C-Data
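Given a pointer obtained from C, wrap it in a `CStruct{T}` or `CVector{T}` as shown in the Usage section above; the `Commandline` example below walks through a complete case.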
### Serializing Julia-Data according to Layout
Complex `Layout` types generally have no Julia instances; instead, `Cserialize` produces a byte vector laid out according to the template.
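As a rough sketch (reusing the `A3` layout shown above and only the exported API), serializing a named tuple and reading it back looks like:
```julia
bytes = Cserialize(A3, (len = 3, vec = [1.0, 2.0, 3.0]))  # Vector{UInt8} laid out for C
cs = CStructGuarded{A3}(bytes)    # guarded accessor keeps `bytes` alive with the pointer
cs.len      # 3
cs.vec[2]   # 2.0
```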
## Example
```julia
struct Commandline <: Layout
    argc::Cint
    argv::Ptr{LVarVector{Cstring, (x) -> x.argc + 1}}
end
# Note: `argv[1:argc]` corresponds to argv[0] ... argv[argc-1] in C, and `argv[argc+1] == C_NULL`.
```
Assuming a C function returns a pointer to a C structure with the layout of `Commandline`, the corresponding
C code could look like:
```c
struct Commandline {
    size_t argc;
    char** argv;
};
```
In Julia, it could then be accessed as:
```julia
p = ccall(:argfunction, Ptr{Commandline}, ())
cline = CStruct(p)
cline.argc::Cint
cline.argv[i]::Union{String,Nothing}
```
| CStructures | https://github.com/KlausC/CStructures.jl.git |
|
[
"MIT"
] | 0.4.2 | efe21f9d2c9b928ec77d9cbbea50b648d274a6a9 | code | 282 | using ArtifactUtils, Artifacts
add_artifact!(
"../Artifacts.toml",
"refractiveindex.info",
"https://github.com/polyanskiy/refractiveindex.info-database/archive/v2024-08-14.tar.gz",
# lazy=true, # need to use LazyArtifacts for this
force=true,
clear=false,
)
| RefractiveIndex | https://github.com/stillyslalom/RefractiveIndex.jl.git |
|
[
"MIT"
] | 0.4.2 | efe21f9d2c9b928ec77d9cbbea50b648d274a6a9 | code | 681 | using RefractiveIndex
using Documenter
DocMeta.setdocmeta!(RefractiveIndex, :DocTestSetup, :(using RefractiveIndex); recursive=true)
makedocs(;
modules=[RefractiveIndex],
authors="Alex Ames <alexander.m.ames@gmail.com> and contributors",
repo="https://github.com/stillyslalom/RefractiveIndex.jl/blob/{commit}{path}#{line}",
sitename="RefractiveIndex.jl",
format=Documenter.HTML(;
prettyurls=get(ENV, "CI", "false") == "true",
canonical="https://stillyslalom.github.io/RefractiveIndex.jl",
assets=String[],
),
pages=[
"Home" => "index.md",
],
)
deploydocs(;
repo="github.com/stillyslalom/RefractiveIndex.jl",
)
| RefractiveIndex | https://github.com/stillyslalom/RefractiveIndex.jl.git |
|
[
"MIT"
] | 0.4.2 | efe21f9d2c9b928ec77d9cbbea50b648d274a6a9 | code | 9211 | module RefractiveIndex
using HTTP.URIs: unescapeuri
using PrecompileTools
using DelimitedFiles: readdlm
using Serialization
using Scratch
using Pkg.Artifacts
using YAML
# using Interpolations
# using Interpolations: deduplicate_knots!
using BasicInterpolators
using Unitful: @u_str, uparse, uconvert, ustrip, AbstractQuantity
import Base: getindex, show
export RefractiveMaterial, dispersion, extinction, showmetadata, specifications
const RI_INFO_ROOT = Ref{String}()
const RI_LIB = Dict{Tuple{String, String, String}, NamedTuple{(:name, :path), Tuple{String, String}}}()
const DB_VERSION = "refractiveindex.info-database-2024-08-14"
const DB_INDEX_CACHE_PATH = joinpath(@get_scratch!(DB_VERSION), "RI_index_cache.jls")
RI_INFO_ROOT[] = joinpath(artifact"refractiveindex.info", DB_VERSION, "database")
include("init.jl")
include("dispersionformulas.jl")
_init_cache()
copy!(RI_LIB, Serialization.deserialize(DB_INDEX_CACHE_PATH))
struct RefractiveMaterial{DF<:DispersionFormula}
name::String
reference::String
comment::String
dispersion::DF
λrange::Tuple{Float64, Float64}
specs::Dict{Symbol, Any}
data::Dict{Symbol, Any}
end
const DISPERSIONFORMULAE = Dict(
"formula 1" => Sellmeier,
"formula 2" => Sellmeier2,
"formula 3" => Polynomial,
"formula 4" => RIInfo,
"formula 5" => Cauchy,
"formula 6" => Gases,
"formula 7" => Herzberger,
"formula 8" => Retro,
"formula 9" => Exotic,
"tabulated nk" => TabulatedNK,
"tabulated n" => TabulatedN,
"tabulated k" => TabulatedK,
)
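# Parse a whitespace-separated coefficient string from the YAML database into an NTuple of Float64.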
function str2tuple(str)
arr = parse.(Float64, split(str))
ntuple(i -> arr[i], length(arr))
end
function DispersionFormula(data)
DF = DISPERSIONFORMULAE[data[:type]]
if haskey(data, :coefficients)
λrange = str2tuple(data[:wavelength_range])
return DF(str2tuple(data[:coefficients])), λrange
else
raw = readdlm(IOBuffer(data[:data]), Float64)
λrange = extrema(@view raw[:, 1])
return DF(raw), λrange
end
end
"""
RefractiveMaterial(shelf, book, page)
Load the refractive index data for the material corresponding to the specified
shelf, book, and page within the [refractiveindex.info](https://refractiveindex.info/) database. The data
can be queried by calling the returned `RefractiveMaterial` object at a given wavelength.
In the case of database entries with multiple types of dispersion data (e.g. both
raw dispersion data and dispersion formula coefficients), a vector of `RefractiveMaterial`s
is returned for each data type.
# Examples
```julia-repl
julia> MgLiTaO3 = RefractiveMaterial("other", "Mg-LiTaO3", "Moutzouris-o")
"Mg-LiTaO3 (Moutzouris et al. 2011: n(o) 0.450-1.551 µm; 8 mol.% Mg)"
julia> MgLiTaO3(0.45) # default unit is microns
2.2373000025056826
julia> using Unitful
julia> MgLiTaO3(450u"nm") # auto-conversion from generic Unitful.jl length units
2.2373000025056826
julia> MgLiTaO3(450e-9, "m") # strings can be used to specify units (parsing is cached)
2.2373000025056826
julia> Hikari_F1 = RefractiveMaterial("glass", "HIKARI-F", "F1")
2-element Vector{RefractiveMaterial}:
HIKARI-F (F1) - Polynomial
HIKARI-F (F1) - TabulatedK
```
"""
function RefractiveMaterial(shelf, book, page)
metadata = RI_LIB[(shelf, book, page)]
path = joinpath(RI_INFO_ROOT[], "data-nk", metadata.path)
isfile(path) || @error "Specified material does not exist"
yaml = YAML.load_file(path; dicttype=Dict{Symbol, Any})
reference = get(yaml, :REFERENCES, "")
comment = get(yaml, :COMMENTS, "")
specs = get(yaml, :SPECS, Dict{Symbol, Any}())
data = get(yaml, :DATA, Dict{Symbol, String}[])
if length(data) == 1
DF, λrange = DispersionFormula(only(data))
return RefractiveMaterial(
string(book, " ($(metadata.name))"),
reference,
comment,
DF,
λrange,
specs,
only(data)
)
else
return map(data) do datum
DF, λrange = DispersionFormula(datum)
RefractiveMaterial(
string(book, " ($(metadata.name))"),
reference,
comment,
DF,
λrange,
specs,
datum
)
end
end
end
"""
RefractiveMaterial(url::String)
Extracts the shelf, book, and page from a refractiveindex.info URL and loads
the corresponding data from the local database (does not require an active internet connection).
!!! warning
The refractiveindex.info website is regularly updated and may contain materials not yet
available in the local copy of the database, which is updated on a roughly annual basis.
Future versions of this package may allow these new entries to be automatically downloaded
on demand.
# Examples
```julia-repl
julia> Ar = RefractiveMaterial("https://refractiveindex.info/?shelf=main&book=Ar&page=Peck-15C")
"Ar (Peck and Fisher 1964: n 0.47-2.06 µm; 15 °C)"
julia> showmetadata(Ar)
Name: Ar (Peck and Fisher 1964: n 0.47–2.06 µm; 15 °C)
Reference: E. R. Peck and D. J. Fisher. Dispersion of argon, <a href="https://doi.org/10.1364/JOSA.54.001362"><i>J. Opt. Soc. Am.</i> <b>54</b>, 1362-1364 (1964)</a>
Comments: 15 °C, 760 torr (101.325 kPa)
Dispersion Formula: Gases
Wavelength Range: (0.4679, 2.0587)
Specifications: Dict{Symbol, Any}(:temperature => "15 °C", :wavelength_vacuum => true, :pressure => "101325 Pa", :n_absolute => true)
```
"""
function RefractiveMaterial(url::String)
ue_url = unescapeuri(url)
r = r"refractiveindex.info\/\?shelf=(?'shelf'\w+)&book=(?'book'.*)&page=(?'page'.*)"
m = match(r, ue_url)
isnothing(m) && @error "Invalid refractiveindex.info url"
RefractiveMaterial(String(m["shelf"]),
String(m["book"]),
String(m["page"]))
end
show(io::IO, ::MIME"text/plain", m::RefractiveMaterial{DF}) where {DF} = print(io, m.name, " - ", nameof(typeof(m.dispersion)))
"""
showmetadata(rm::RefractiveMaterial)
Prints the metadata for the material `rm` to the terminal.
# Examples
```julia-repl
julia> Ar = RefractiveMaterial("main", "Ar", "Peck-15C")
Ar (Peck and Fisher 1964: n 0.47–2.06 µm; 15 °C) - Gases
julia> showmetadata(Ar)
Name: Ar (Peck and Fisher 1964: n 0.47–2.06 µm; 15 °C)
Reference: E. R. Peck and D. J. Fisher. Dispersion of argon, <a href="https://doi.org/10.1364/JOSA.54.001362"><i>J. Opt. Soc. Am.</i> <b>54</b>, 1362-1364 (1964)</a>
Comments: 15 °C, 760 torr (101.325 kPa)
Dispersion Formula: Gases
Wavelength Range: (0.4679, 2.0587)
Specifications: Dict{Symbol, Any}(:temperature => "15 °C", :wavelength_vacuum => true, :pressure => "101325 Pa", :n_absolute => true)
```
"""
function showmetadata(rm::RefractiveMaterial)
println("Name: ", rm.name)
println("Reference: ", rm.reference)
println("Comments: ", rm.comment)
println("Dispersion Formula: ", nameof(typeof(rm.dispersion)))
println("Wavelength Range: ", rm.λrange)
println("Specifications: ", rm.specs)
end
"""
specifications(rm::RefractiveMaterial)
Returns a `Dict` containing the measurement specifications for the material `rm`.
# Examples
```julia-repl
julia> using Unitful
julia> specs = specifications(Ar)
Dict{Symbol, Any} with 4 entries:
:temperature => "15 °C"
:wavelength_vacuum => true
:pressure => "101325 Pa"
:n_absolute => true
julia> T, P = [uparse(replace(specs[s], ' ' => '*')) for s in (:temperature, :pressure)]
2-element Vector{Quantity{Int64}}:
15 °C
101325 Pa
```
"""
function specifications(rm::RefractiveMaterial)
rm.specs
end
"""
dispersion(m::RefractiveMaterial, λ::Float64)
Returns the refractive index of the material `m` at the wavelength `λ` (in microns). An error is thrown if the material does not have refractive index data.
"""
dispersion(m::RefractiveMaterial, λ::Float64) = m.dispersion(λ)
dispersion(m::RefractiveMaterial{T}, λ::Float64) where {T <: Union{TabulatedN, TabulatedNK}} = m.dispersion.n(λ)
dispersion(m::RefractiveMaterial{TabulatedK}, λ::Float64) = throw(ArgumentError("Material does not have refractive index data"))
"""
extinction(m::RefractiveMaterial, λ::Float64)
Returns the extinction coefficient of the material `m` at the wavelength `λ` (in microns). An error is thrown if the material does not have extinction data.
"""
extinction(m::RefractiveMaterial{T}, λ::Float64) where {T <: Union{TabulatedK, TabulatedNK}} = m.dispersion.k(λ)
extinction(m::RefractiveMaterial, λ::Float64) = throw(ArgumentError("Material does not have extinction data"))
(m::RefractiveMaterial)(λ::Float64) = dispersion(m, λ)
(m::RefractiveMaterial)(λ::AbstractQuantity) = dispersion(m, ustrip(Float64, u"μm", λ))
const DIM_TO_MICRON = Dict("nm" => 1e-3, "um" => 1.0, "mm" => 1e3, "cm" => 1e4, "m" => 1e6)
_to_micron(dim) = get!(DIM_TO_MICRON, dim) do
ustrip(Float64, u"μm", 1.0*uparse(dim))::Float64
end
# ustrip(Float64, uparse(dim), 1.0u"μm")
(m::RefractiveMaterial)(λ, dim::String) = m(λ*_to_micron(dim))#*_dim_to_micron(dim))
# (m::RefractiveMaterial{T})(λ::Float64) where {T <: Union{TabulatedN, TabulatedNK}} = m.dispersion.n(λ)
include("precompile.jl")
end
| RefractiveIndex | https://github.com/stillyslalom/RefractiveIndex.jl.git |
|
[
"MIT"
] | 0.4.2 | efe21f9d2c9b928ec77d9cbbea50b648d274a6a9 | code | 3785 | abstract type DispersionFormula end
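# Each concrete subtype below implements one of the refractiveindex.info dispersion formulas
# ("formula 1" ... "formula 9", see DISPERSIONFORMULAE in RefractiveIndex.jl) or a tabulated
# n/k interpolation; instances are callable on a wavelength given in microns.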
getindex(d::DispersionFormula, i) = getindex(d.coefs, i)
struct Sellmeier{N} <: DispersionFormula
coefs::NTuple{N,Float64}
end
function (c::Sellmeier{N})(λ) where {N}
rhs = c[1]
for i = 2:2:N
rhs += c[i]*λ^2 / (λ^2 - c[i+1]^2)
end
return sqrt(rhs + 1)
end
struct Sellmeier2{N} <: DispersionFormula
coefs::NTuple{N,Float64}
end
function (c::Sellmeier2{N})(λ) where {N}
rhs = c[1]
for i = 2:2:N
rhs += c[i]*λ^2 / (λ^2 - c[i+1])
end
return sqrt(rhs + 1)
end
struct Polynomial{N} <: DispersionFormula
coefs::NTuple{N,Float64}
end
function (c::Polynomial{N})(λ) where {N}
rhs = c[1]
for i = 2:2:N
rhs += c[i]*λ^c[i+1]
end
return sqrt(rhs)
end
struct RIInfo{N} <: DispersionFormula
coefs::NTuple{N,Float64}
end
function (c::RIInfo{N})(λ) where {N}
rhs = c[1]
for i = 2:4:min(N, 9)
rhs += (c[i]*λ^c[i+1]) / (λ^2 - c[i+2]^c[i+3])
end
for i = 10:2:N
rhs += c[i]*λ^c[i+1]
end
return sqrt(rhs)
end
struct Cauchy{N} <: DispersionFormula
coefs::NTuple{N,Float64}
end
function (c::Cauchy{N})(λ) where {N}
rhs = c[1]
for i = 2:2:N
rhs += c[i]*λ^c[i+1]
end
return rhs
end
struct Gases{N} <: DispersionFormula
coefs::NTuple{N,Float64}
end
function (c::Gases{N})(λ) where {N}
rhs = c[1]
for i = 2:2:N
rhs += c[i] / (c[i+1] - 1/λ^2)
end
return rhs + 1
end
struct Herzberger{N} <: DispersionFormula
coefs::NTuple{N,Float64}
end
function (c::Herzberger{N})(λ) where {N}
rhs = c[1]
rhs += c[2] / (λ^2 - 0.028)
rhs += c[3] * (1/(λ^2 - 0.028))^2
for i = 4:N
pow = 2*(i - 3)
rhs += c[i]*λ^pow
end
return rhs
end
struct Retro{N} <: DispersionFormula
coefs::NTuple{N,Float64}
end
function (c::Retro{N})(λ) where {N}
rhs = c[1] + c[2]*λ^2 / (λ^2 - c[3]) + c[4]*λ^2
return sqrt((-2rhs - 1) / (rhs - 1))
end
struct Exotic{N} <: DispersionFormula
coefs::NTuple{N,Float64}
end
function (c::Exotic{N})(λ) where {N}
rhs = c[1] + c[2]/(λ^2 - c[3]) + c[4]*(λ - c[5]) / ((λ - c[5])^2 + c[6])
return sqrt(rhs)
end
abstract type Tabulated <: DispersionFormula end
# _linear_itp(knots, values) = extrapolate(interpolate((deduplicate_knots!(knots),), values, Gridded(Linear())), Throw())
# const ITP_TYPE = typeof(_linear_itp([1.0, 2.0], [1.0, 2.0]))
_linear_itp(knots, values) = LinearInterpolator(knots, values, WeakBoundaries())
const ITP_TYPE = LinearInterpolator{Float64, WeakBoundaries}
function _fix_sorting(raw)
# several entries are not sorted by wavelength, so we need to sort them
if !issorted(@views raw[:, 1])
raw = sortslices(raw, dims=1, by=first)
end
# workaround for two bad entries with only one wavelength:
# ("other", "CR-39", "poly") => (name = "Polymer; n 0.58929 µm", path = "other/commercial plastics/CR-39/poly.yml")
# ("other", "CR-39", "mono") => (name = "Monomer; n 0.58929 µm", path = "other/commercial plastics/CR-39/mono.yml")
if size(raw, 1) == 1
raw = [raw; raw]
end
return raw
end
struct TabulatedNK <: Tabulated
n::ITP_TYPE
k::ITP_TYPE
end
function TabulatedNK(raw::Matrix{Float64})
raw = _fix_sorting(raw)
λ = raw[:, 1]
n = raw[:, 2]
k = raw[:, 3]
TabulatedNK(_linear_itp(λ, n), _linear_itp(λ, k))
end
struct TabulatedN <: Tabulated
n::ITP_TYPE
end
function TabulatedN(raw::Matrix{Float64})
raw = _fix_sorting(raw)
λ = raw[:, 1]
n = raw[:, 2]
TabulatedN(_linear_itp(λ, n))
end
struct TabulatedK <: Tabulated
k::ITP_TYPE
end
function TabulatedK(raw::Matrix{Float64})
raw = _fix_sorting(raw)
λ = raw[:, 1]
k = raw[:, 2]
TabulatedK(_linear_itp(λ, k))
end
| RefractiveIndex | https://github.com/stillyslalom/RefractiveIndex.jl.git |
|
[
"MIT"
] | 0.4.2 | efe21f9d2c9b928ec77d9cbbea50b648d274a6a9 | code | 759 | function _init_cache()
if !isfile(DB_INDEX_CACHE_PATH)
lib = YAML.load_file(joinpath(RI_INFO_ROOT[], "catalog-nk.yml"), dicttype=Dict{String, Any})
for shelf in lib
shelfname = shelf["SHELF"]
for book in shelf["content"]
haskey(book, "DIVIDER") && continue
bookname = book["BOOK"]
for page in book["content"]
haskey(page, "DIVIDER") && continue
pagename = string(page["PAGE"])
RI_LIB[(shelfname, bookname, pagename)] = (name = page["name"], path=page["data"])
end
end
end
Serialization.serialize(DB_INDEX_CACHE_PATH, RI_LIB)
end
end
function __init__()
end
| RefractiveIndex | https://github.com/stillyslalom/RefractiveIndex.jl.git |
|
[
"MIT"
] | 0.4.2 | efe21f9d2c9b928ec77d9cbbea50b648d274a6a9 | code | 1274 | ## Precompilation
@setup_workload begin
# Putting some things in `setup` can reduce the size of the
# precompile file and potentially make loading faster.
function midrange(material)
λmin, λmax = material.λrange
return λmin + 0.5(λmax - λmin)
end
function exercise(material)
@show material
material(midrange(material))
end
@compile_workload begin
redirect_stdout(devnull) do
exercise(RefractiveMaterial("main", "Ar", "Grace-liquid-90K"))
exercise(RefractiveMaterial("main", "CdTe", "Marple"))
exercise(RefractiveMaterial("other", "Mg-LiTaO3", "Moutzouris-o"))
exercise(RefractiveMaterial("main", "ZnTe", "Li"))
exercise(RefractiveMaterial("main", "SF6", "Vukovic"))
exercise(RefractiveMaterial("main", "He", "Mansfield"))
exercise(RefractiveMaterial("main", "Si", "Edwards"))
exercise(RefractiveMaterial("main", "AgBr", "Schröter"))
exercise(RefractiveMaterial("organic", "urea","Rosker-e"))
exercise(RefractiveMaterial("main", "ZnO", "Stelling"))
exercise(RefractiveMaterial("https://refractiveindex.info/?shelf=main&book=MgAl2O4&page=Tropf"))
end
end
end | RefractiveIndex | https://github.com/stillyslalom/RefractiveIndex.jl.git |
|
[
"MIT"
] | 0.4.2 | efe21f9d2c9b928ec77d9cbbea50b648d274a6a9 | code | 2305 | using RefractiveIndex
using Test
using Aqua
function midrange(material)
λmin, λmax = material.λrange
return λmin + 0.5(λmax - λmin)
end
function testRM(material, n_ref)
n = material(midrange(material))
# Compare only fractional parts
isapprox(n_ref % 1, n % 1, rtol=1e-3)
end
@testset "RefractiveIndex.jl" begin
@testset "Dispersion formulas" begin
# Sellmeier
@test testRM(RefractiveMaterial("main", "Ar", "Grace-liquid-90K"), 1.2281)
# Sellmeier-2
@test testRM(RefractiveMaterial("main", "CdTe", "Marple"), 2.7273)
# Polynomial
@test testRM(RefractiveMaterial("other", "Mg-LiTaO3", "Moutzouris-o"), 2.1337)
# RefractiveIndex.INFO
@test testRM(RefractiveMaterial("main", "ZnTe", "Li"), 2.6605)
# Cauchy
@test testRM(RefractiveMaterial("main", "SF6", "Vukovic"), 1.00072071)
# Gases
@test testRM(RefractiveMaterial("main", "He", "Mansfield"), 1.000034724)
# Herzberger
@test testRM(RefractiveMaterial("main", "Si", "Edwards"), 3.4208)
# Retro
@test testRM(RefractiveMaterial("main", "AgBr", "Schröter"), 2.2600)
# Exotic
@test testRM(RefractiveMaterial("organic", "urea","Rosker-e"), 1.6000)
end
@testset "Tabular data" begin
# RefractiveNK
@test testRM(RefractiveMaterial("main", "ZnO", "Stelling"), 1.5970)
end
@testset "Multiple dispersion data entries" begin # (https://github.com/stillyslalom/RefractiveIndex.jl/issues/14)
# Hikari-F1 (Polynomial, TabulatedK)
HikariF1 = @test_nowarn RefractiveMaterial("glass", "HIKARI-F", "F1")
@test length(HikariF1) == 2
@test isapprox(extinction(HikariF1[2], 0.35), 4.5265e-7, rtol=1e-3)
end
@testset "Unit parsing" begin
MgLiTaO3 = RefractiveMaterial("other", "Mg-LiTaO3", "Moutzouris-o")
@test MgLiTaO3(450e-9, "m") ≈ 2.2373000025056826
@test MgLiTaO3(1.771653543e-5, "inch") ≈ 2.2373000025056826
end
end
@testset "Database" begin
# Load all database entries
for (shelf, book, page) in keys(RefractiveIndex.RI_LIB)
@test_nowarn RefractiveMaterial(shelf, book, page)
end
end
@testset "Aqua" begin
Aqua.test_all(RefractiveIndex; ambiguities=false)
end
| RefractiveIndex | https://github.com/stillyslalom/RefractiveIndex.jl.git |
|
[
"MIT"
] | 0.4.2 | efe21f9d2c9b928ec77d9cbbea50b648d274a6a9 | docs | 1793 | # RefractiveIndex
[](https://stillyslalom.github.io/RefractiveIndex.jl/stable)
[](https://stillyslalom.github.io/RefractiveIndex.jl/dev)
[](https://github.com/stillyslalom/RefractiveIndex.jl/actions)
Provides an offline interface to [refractiveindex.info](http://refractiveindex.info).
### Examples
```
julia> MgLiTaO3 = RefractiveMaterial("other", "Mg-LiTaO3", "Moutzouris-o")
Mg-LiTaO3 (Moutzouris et al. 2011: n(o) 0.450–1.551 µm; 8 mol.% Mg) - Polynomial
julia> MgLiTaO3(0.45) # default unit is microns
2.2373000025056826
julia> using Unitful
julia> MgLiTaO3(450u"nm") # auto-conversion from generic Unitful.jl length units
2.2373000025056826
julia> MgLiTaO3(450e-9, "m") # strings can be used to specify units (parsing is cached)
2.2373000025056826
julia> Ar = RefractiveMaterial("https://refractiveindex.info/?shelf=main&book=Ar&page=Peck-15C")
Ar (Peck and Fisher 1964: n 0.47–2.06 µm; 15 °C) - Gases
julia> Ar(532, "nm")
1.0002679711455778
```
In the case of database entries with multiple types of dispersion data (e.g. both raw dispersion data and dispersion formula coefficients), a vector of `RefractiveMaterial`s is returned for each data type:
```julia
julia> RefractiveMaterial("glass", "HIKARI-F", "F1")
2-element Vector{RefractiveMaterial}:
HIKARI-F (F1) - Polynomial
HIKARI-F (F1) - TabulatedK
```
The database is currently limited to dispersion and extinction ('n-k') data. Future versions of the package may include the new [n₂](https://refractiveindex.info/n2) (nonlinear index) database - please file an issue if this functionality is important to you.
| RefractiveIndex | https://github.com/stillyslalom/RefractiveIndex.jl.git |
|
[
"MIT"
] | 0.4.2 | efe21f9d2c9b928ec77d9cbbea50b648d274a6a9 | docs | 125 | ```@meta
CurrentModule = RefractiveIndex
```
# RefractiveIndex
```@index
```
```@autodocs
Modules = [RefractiveIndex]
```
| RefractiveIndex | https://github.com/stillyslalom/RefractiveIndex.jl.git |
|
[
"MIT"
] | 0.2.0 | 86d52c13ec76988df8b02ff8c247eac6cedd0021 | code | 2778 | const PROJECT_DIR = (@__DIR__) |> dirname
const TORCH_LIB_DIR = joinpath(PROJECT_DIR, "csrc/libtorch/lib")
const TORCH_LIB_BUILD_DIR = joinpath(PROJECT_DIR, "deps/lib")
const JULIA_THC_GENERATOR = joinpath(PROJECT_DIR, "src/thc/thc-generator.jl")
function build_locally()
LIBTORCH_URL = if Sys.islinux()
"https://download.pytorch.org/libtorch/cpu/libtorch-cxx11-abi-shared-with-deps-1.4.0%2Bcpu.zip"
elseif Sys.isapple()
"https://download.pytorch.org/libtorch/cpu/libtorch-macos-1.4.0.zip"
# elseif Sys.iswindows()
# "https://download.pytorch.org/libtorch/cpu/libtorch-win-shared-with-deps-1.4.0.zip"
else
error("Your OS $(Sys.MACHINE) is not supported.")
end
if !isdir(TORCH_LIB_DIR)
zipfile = download(LIBTORCH_URL)
cd(joinpath(PROJECT_DIR, "csrc")) do
run(`unzip $(zipfile)`)
end
isdir(TORCH_LIB_DIR) || error("Failed to get libtorch.")
end
isdir(TORCH_LIB_BUILD_DIR) || mkdir(TORCH_LIB_BUILD_DIR)
cd(TORCH_LIB_BUILD_DIR) do
cmd_cmake = `cmake -DCMAKE_PREFIX_PATH=$(joinpath(PROJECT_DIR, "csrc/libtorch")) ../../csrc`
run(cmd_cmake)
run(`make torch_capi`)
end
end
function include_remote_script(version_str)
# build_script_url = "https://github.com/TuringLang/ThArrays.jl/releases/download/v$(version_str)/build_TorchCAPIDylib.v$(version_str).jl"
# download, un tar
dest = "libtorch_capi.$(version_str).tar.gz"
tarball_url = if Sys.islinux()
"https://github.com/TuringLang/ThArrays.jl/releases/download/v$(version_str)/TorchCAPIDylib.v$(version_str).x86_64-linux-gnu-gcc8.tar.gz"
elseif Sys.isapple()
"https://github.com/TuringLang/ThArrays.jl/releases/download/v$(version_str)/TorchCAPIDylib.v$(version_str).x86_64-apple-darwin14.tar.gz"
else
error("Your OS $(Sys.MACHINE) is not supported.")
end
try
tarball = download(tarball_url, dest)
cd(@__DIR__) do
run(`tar zxvf $(tarball)`)
end
catch
@warn "download $(tarball_url) failed."
return false
end
return true
end
function get_version_str()
path = joinpath(@__DIR__, "../Project.toml")
version_reg = r"version\s*=\s*\"(.*)\""
open(path) do file
lines = readlines(file)
for line in lines
m = match(version_reg, line)
if isa(m, RegexMatch) return m.captures[1] end
end
end
end
version_str = get_version_str() |> strip |> (x) -> lstrip(x, ['v'])
if !isempty(get(ENV, "THARRAYS_DEV", "")) || !include_remote_script(version_str)
@warn "try to build libtorch_capi locally."
build_locally()
end
JULIA_EXE = joinpath(Sys.BINDIR, "julia")
run(`$(JULIA_EXE) $(JULIA_THC_GENERATOR)`)
| ThArrays | https://github.com/TuringLang/ThArrays.jl.git |
|
[
"MIT"
] | 0.2.0 | 86d52c13ec76988df8b02ff8c247eac6cedd0021 | code | 1966 | # Note that this script can accept some limited command-line arguments, run
# `julia build_tarballs.jl --help` to see a usage message.
using Pkg
using BinaryBuilder
name = "TorchCAPIDylib"
version_str = Pkg.TOML.parsefile(joinpath(@__DIR__, "../Project.toml"))["version"] |> strip |> (x) -> lstrip(x, ['v'])
version = VersionNumber(version_str)
# see https://github.com/JuliaPackaging/BinaryBuilder.jl/issues/336
ENV["CI_COMMIT_TAG"] = ENV["TRAVIS_TAG"] = "v" * version_str
event_file = get(ENV, "GITHUB_EVENT_PATH", "")
# run(`cat $event_file`)
# Collection of sources required to build Libtask
function get_commit_id()
ref = "HEAD"
gaction = get(ENV, "GITHUB_ACTIONS", "")
if !isempty(gaction)
# .pull_request.head.sha, .release.tag_name,
ref = readlines(`jq --raw-output '.pull_request.head.sha' $event_file`)[1]
if ref == "null"
ref = readlines(`jq --raw-output '.release.tag_name' $event_file`)[1]
end
end
if ref == "null"
ref = "HEAD"
end
return readlines(`git rev-parse $ref`)[1]
end
sources = [
"https://github.com/TuringLang/ThArrays.jl.git" => get_commit_id(),
]
# Bash recipe for building across all platforms
script = read(joinpath(dirname(@__FILE__), "build_dylib.sh"), String)
# These are the platforms we will build for by default, unless further
# platforms are passed in on the command line
platforms = [
Linux(:x86_64, libc=:glibc, compiler_abi=CompilerABI(:gcc8)),
# MacOS(:x86_64), # can't build it on MacOS SDK
]
# The products that we will ensure are always built
products(prefix) = [
LibraryProduct(prefix, "libtorch_capi", :libtorch_capi)
]
# Dependencies that must be installed before this package can be built
dependencies = [
]
# Build the tarballs, and possibly a `build.jl` as well.
# build_file = "products/build_$(name).v$(version_str).jl"
build_tarballs(ARGS, name, version, sources, script, platforms, products, dependencies)
| ThArrays | https://github.com/TuringLang/ThArrays.jl.git |
|
[
"MIT"
] | 0.2.0 | 86d52c13ec76988df8b02ff8c247eac6cedd0021 | code | 471 | using Documenter, ThArrays
makedocs(
modules=[ThArrays],
sitename="ThArrays",
pages = [
"Home" => "index.md",
"Tensor" => "tensor.md",
"AD" => "ad.md",
"TorchScript" => "torchscript.md",
"Reference" => "reference.md",
],
format = Documenter.HTML(prettyurls = haskey(ENV, "GITHUB_EVENT_PATH")))
deploydocs(
repo = "github.com/TuringLang/ThArrays.jl.git",
devbranch = "master",
devurl = "dev",
)
| ThArrays | https://github.com/TuringLang/ThArrays.jl.git |
|
[
"MIT"
] | 0.2.0 | 86d52c13ec76988df8b02ff8c247eac6cedd0021 | code | 1395 | module ThArrays
using Libdl
using Requires
export TorchNumber, Tensor, Scalar, eltype_id,
ThC, ThAD, TrackerAD, ThJIT,
Device, CPU, CUDA, to, on
const PROJECT_DIR = (@__DIR__) |> dirname
function __init__()
push!(Libdl.DL_LOAD_PATH, joinpath(PROJECT_DIR, "deps/lib"))
Libdl.dlopen(joinpath(PROJECT_DIR, "deps/lib/libtorch_capi"))
@async handle_error_in_julia()
@require Tracker = "9f7883ad-71c0-57eb-9f7f-b5c9e6d3789c" @eval include("compat/tracker.jl")
end
function handle_error_in_julia()
err_handler = "jl_error"
ccall((:set_error_handler, :libtorch_capi),
Cvoid, (Cstring, Csize_t),
pointer(err_handler), length(err_handler))
end
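# TYPE_MAP maps the supported Julia element types to the integer dtype codes
# expected by the libtorch C wrapper; REVERSE_TYPE_MAP recovers the Julia type
# from a code reported back by the C API.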
const TYPE_MAP = Dict{Type, Int8}(
### float
Float16 => 5,
Float32 => 6,
Float64 => 7,
### bool and char
Bool => 11,
# Char => 1, # Char in Julia is not a single byte
### int
Int8 => 1,
# UInt8 => 1,
Int16 => 2,
# UInt16 => 2,
Int32 => 3,
# UInt32 => 3,
Int64 => 4,
# UInt64 => 4,
# Int128 => ?,
# UInt128 => ?,
)
const REVERSE_TYPE_MAP = Dict(reverse(p) for p in TYPE_MAP)
TorchNumber = Union{Float16, Float32, Float64,
Bool,
Int8, Int16, Int32, Int64}
include("tensor.jl")
include("scalar.jl")
include("thc/thc.jl")
include("common-methods.jl")
include("ad.jl")
include("th-jit.jl")
end
| ThArrays | https://github.com/TuringLang/ThArrays.jl.git |
|
[
"MIT"
] | 0.2.0 | 86d52c13ec76988df8b02ff8c247eac6cedd0021 | code | 2380 | module ThAD
using MacroTools: @forward
using ..ThArrays: Tensor, Scalar, TorchNumber
using ..ThC
import ..ThC: grad, requires_grad!
function has_grad(a::Tensor)
ret = ccall((:tensor_method_has_grad, :libtorch_capi),
Cint, (Ptr{Cvoid},), a.pointer)
return ret != 0
end
function get_grad(a::Tensor, default=nothing)
if has_grad(a)
return grad(a)
else
default == nothing ? ThC.zeros_like(a) : default
end
end
function backward(a::Tensor, d::Union{Ptr{Nothing}, Tensor}=C_NULL;
keep_graph::Bool=false, create_graph::Bool=false)
if d isa Tensor
d = d.pointer
end
ccall((:tensor_method_backward, :libtorch_capi),
Ptr{Cvoid}, (Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint),
a.pointer, d, keep_graph, create_graph)
nothing
end
reset_grad!(t::Tensor) = ThC.zero!(ThC.grad(t))
requires_grad!(t::Tensor, r::Bool) = requires_grad!(t, r ? 1 : 0)
function gradient(f, data...; d::Union{Ptr{Nothing}, Tensor}=C_NULL)
tensors = map(d -> Tensor(d, requires_grad=true), data)
return gradient(f, tensors...; d=d)
end
function gradient(f, tensors::Vararg{Tensor}; d::Union{Ptr{Nothing}, Tensor}=C_NULL)
result = f(tensors...)
backward(result, d)
return ThC.grad.(tensors)
end
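# Minimal usage sketch (illustrative, not part of the original source):
#   f(x) = sum(x .* x)
#   (g,) = gradient(f, [1.0, 2.0, 3.0])
# Each input array is wrapped in a `Tensor` with `requires_grad=true`, the
# forward pass is run, `backward` is called on the (scalar) result, and the
# accumulated gradients are returned, one per input.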
# tracker compatible API
struct Params
order::Vector{Any}
params::IdDict{Any, Bool}
Params() = new([], IdDict())
end
@forward Params.order Base.iterate, Base.length
function Base.push!(ps::Params, x)
if !haskey(ps.params, x)
push!(ps.order, x)
ps.params[x] = true
end
return ps
end
Base.push!(ps::Params, x...) = (foreach(x -> push!(ps, x), x); ps)
Params(xs) = push!(Params(), xs...)
data(t::Tensor) = convert(Array, t)
data(t::Tensor{T, 0}) where T = t[]
param(x) = x
param(x::Number) = Tensor(float(x); requires_grad=true)
param(xs::AbstractArray) = Tensor(float.(xs); requires_grad=true)
function forward(f, ps::Params)
y = f()
back = (d) -> begin
g = IdDict()
# reset grad!
foreach((t) -> has_grad(t) && reset_grad!(t), ps)
backward(y, param(d))
foreach((t) -> g[t] = ThC.grad(t), ps)
return g
end
return data(y), back
end
function forward(f, args...)
args = param.(args)
y, back = forward(()->f(args...), Params(args))
y, (d) -> getindex.(Ref(back(d)), args)
end
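# Tracker-style entry points above (illustrative sketch): `forward` re-runs
# `f` on `param`-wrapped inputs and returns the plain value plus a pullback,
# e.g.
#   y, back = forward(x -> sum(x .* x), rand(3))
#   (dx,) = back(1)
# The pullback resets the stored gradients, calls `backward` with the seed,
# and reads the per-parameter gradients back from libtorch.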
end
| ThArrays | https://github.com/TuringLang/ThArrays.jl.git |
|
[
"MIT"
] | 0.2.0 | 86d52c13ec76988df8b02ff8c247eac6cedd0021 | code | 3527 | # broadcast
Base.Broadcast.broadcasted(f, t::Tensor, args...) = f(t, args...)
Base.Broadcast.broadcasted(::typeof(Base.:*), a::Tensor, b::Tensor) = ThC.mul(a, b)
# operators
Base.:+(r::TorchNumber, t::Tensor) = ThC.add1(t, r)
Base.:+(t::Tensor, r::TorchNumber) = r + t
Base.:+(a::Tensor{T, N}, b::Tensor{T, N}) where {T, N} = ThC.opt_add(a, b)
Base.:-(r::TorchNumber, t::Tensor) = ThC.ones_like(t) * r - t
Base.:-(t::Tensor, r::TorchNumber) = ThC.sub1(t, r)
Base.:-(a::Tensor{T, N}, b::Tensor{T, N}) where {T, N} = ThC.sub(a, b)
Base.:-(a::Tensor) = 0 - a
Base.:*(r::TorchNumber, t::Tensor) = ThC.mul1(t, r)
Base.:*(t::Tensor, r::TorchNumber) = r * t
Base.:*(a::Tensor{T, N}, b::Tensor{T, N}) where {T, N} = ThC.mm(a, b)
Base.:/(n::TorchNumber, t::Tensor) = ThC.ones_like(t) * n / t
Base.:/(t::Tensor, n::TorchNumber) = ThC.div1(t, n)
Base.:/(a::Tensor{T, N}, b::Tensor{T, N}) where {T, N} = ThC.div(a, b)
Base.div(n::TorchNumber, t::Tensor, r::RoundingMode=RoundToZero) = n / t
Base.div(t::Tensor, n::TorchNumber, r::RoundingMode=RoundToZero) = t / n
Base.div(a::Tensor{T, N}, b::Tensor{T, N}, r::RoundingMode=RoundToZero) where {T, N} = div(a, b)
Base.:^(t::Tensor, r::TorchNumber) = ThC.pow(t, r)
Base.:(==)(t1::Tensor, t2::Tensor) = ThArrays.ThC.all(ThArrays.ThC.eq1(t1, t2))[]
function Base.ones(::Type{Tensor{T}}, I::Vararg{Int}; dev::Device=CPU()) where T
dims = Int64[I...]
ThC.ones(dims, eltype_id(T), convert(Int, dev))
end
function Base.zeros(::Type{Tensor{T}}, I::Vararg{Int}; dev::Device=CPU()) where T
dims = Int64[I...]
ThC.zeros(dims, eltype_id(T), convert(Int, dev))
end
function Base.rand(::Type{Tensor{T}}, I::Vararg{Int}; dev::Device=CPU()) where T
dims = Int64[I...]
ThC.rand(dims, eltype_id(T), convert(Int, dev))
end
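# e.g. (illustrative): `zeros(Tensor{Float32}, 2, 3)` allocates a 2×3 Float32
# tensor on the CPU; pass `dev=CUDA(0)` to place it on the first GPU instead.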
ThC.eye(::Type{T}, n::Int64; dev::Device=CPU()) where T =
ThC.eye(n, eltype_id(T), convert(Int, dev))
ThC.eye(::Type{Tensor{T}}, n::Int64; dev::Device=CPU()) where T =
ThC.eye(T, n, dev=dev)
Base.sum(t::Tensor{T}) where T = ThC.sum(t, eltype_id(T))
Base.view(t::Tensor{T}, I...) where T = error("Not implemented yet.")
Base.transpose(t::Tensor{T, 2}) where T = ThC.t(t)
Base.adjoint(t::Tensor) = error("Not implemented yet.")
# LinearAlgebra.det(t::Tensor) = error("Not implement yet.")
# LinearAlgebra.logdet(t::Tensor) = error("Not implement yet.")
# LinearAlgebra.logabsdet(t::Tensor) = error("Not implement yet.")
# Base.repeat(t::Tensor; kw...) = error("Not implement yet.")
# Base.reshape(t::Tensor, dims...) = error("Not implement yet.")
# Base.permutedims(t::Tensor, perm) = error("Not implement yet.")
# Base.PermutedDimsArray(t::Tensor, perm) = error("Not implement yet.")
# Base.reverse(t::Tensor; dims) = error("Not implement yet.")
# Base.reverse(t::Tensor) = error("Not implement yet.")
# Base.reverse(t::Tensor, start, stop) = error("Not implement yet.")
# Base.inv(t::Tensor) = error("Not implement yet.")
# Base.:\(a::Tensor, b::Tensor) = error("Not implement yet.")
# Base.prod(t::Tensor, dim) = error("Not implement yet.")
# Base.prod(t::Tensor) = error("Not implement yet.")
# Base.prod(f::Union{Function, Type}, t::Tensor) = error("Not implement yet.")
# Statistics.mean(t::Tensor; dims = :) = error("Not implement yet.")
# Base.maximum(t::Tensor; dims = :) = error("Not implement yet.")
# Base.minimum(t::Tensor; dims = :) = error("Not implement yet.")
# Base.dot(a::Tensor, b::Tensor) = error("Not implement yet.")
# LinearAlgebra.diagm(...)
# NNlib
# softmax, logsoftmax, depthwiseconv, conv, ∇conv_data, maxpool, meanpool
| ThArrays | https://github.com/TuringLang/ThArrays.jl.git |
|
[
"MIT"
] | 0.2.0 | 86d52c13ec76988df8b02ff8c247eac6cedd0021 | code | 1195 | mutable struct Scalar{T}
type::Type
pointer::Ptr
function Scalar{T}(p::Ptr) where T
if !haskey(TYPE_MAP, T)
error("Type $T is not supported.")
end
ret = new(T, p)
finalizer(ret) do s
ccall((:scalar_destroy, :libtorch_capi),
Cvoid, (Ptr{Cvoid},),
s.pointer)
end
return ret
end
end
function Scalar{T}(s::U) where {T<:TorchNumber, U<:TorchNumber}
if !haskey(TYPE_MAP, T)
error("Type $T is not supported.")
end
data = T[convert(T, s)]
ptr = ccall((:scalar_from_data, :libtorch_capi),
Ptr{Cvoid}, (Ptr{Cvoid}, Cchar),
data, TYPE_MAP[T])
Scalar{T}(ptr)
end
Scalar(s::T) where T = Scalar{T}(s)
function value(s::Scalar{T}) where T
data = T[zero(T)]
ccall((:scalar_value, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Cchar, Ptr{Cvoid}),
s.pointer, TYPE_MAP[T], data)
return data[1]
end
Base.getindex(s::Scalar) = value(s)
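# e.g. (illustrative): `Scalar(2.5)[]` round-trips a Julia number through a
# libtorch scalar and reads it back via `value`.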
function Base.show(io::IO, s::Scalar{T}) where {T}
write(io, "PyTorch.Scalar{$T} = $(value(s))\n")
end
function Base.display(s::Scalar)
show(stdout, s)
end
| ThArrays | https://github.com/TuringLang/ThArrays.jl.git |
|
[
"MIT"
] | 0.2.0 | 86d52c13ec76988df8b02ff8c247eac6cedd0021 | code | 8374 | mutable struct Tensor{T, N} <: AbstractArray{T, N}
type::Type
ndims::Int64
pointer::Ptr
data::Union{Nothing,Array{T, N}}
function Tensor{T, N}(p::Ptr, data) where {T, N}
if !haskey(TYPE_MAP, T)
error("Type $T is not supported.")
end
ret = new(T, N, p, data)
finalizer(ret) do t
ccall((:tensor_destroy, :libtorch_capi),
Cvoid, (Ptr{Cvoid},),
t.pointer)
end
return ret
end
end
Base.setproperty!(::Tensor, ::Symbol, x) = error("Can't change field of Tensor.")
function Tensor{T}(array::Array{U, N};
detach=false, requires_grad=false) where {T, U, N}
if !haskey(TYPE_MAP, T)
error("Type $T is not supported.")
end
dims = collect(size(array))
stri = collect(strides(array))
if T != U
array = convert.(T, array)
end
grad = requires_grad ? 1 : 0
copy_data = detach ? 1 : 0
ptr = ccall((:tensor_from_data, :libtorch_capi),
Ptr{Cvoid},
(Ptr{Cvoid}, Csize_t, Cchar,
Ptr{Clonglong}, Ptr{Clonglong}, Csize_t, Cint, Cint),
array, sizeof(array), TYPE_MAP[T], dims, stri, N, copy_data, grad)
Tensor{T, N}(ptr, detach ? nothing : array)
end
function Tensor(array::Array{T, N}; detach=false, requires_grad=false) where {T, N}
Tensor{T}(array, detach=detach, requires_grad=requires_grad)
end
# 0-dim Tensor
function Tensor(s::Int64; requires_grad=false)
grad = requires_grad ? 1 : 0
ptr = ccall((:tensor_int64_0dim, :libtorch_capi),
Ptr{Cvoid},
(Clonglong, Cint), s, grad)
Tensor{Int64, 0}(ptr, nothing)
end
function Tensor(s::T; requires_grad=false) where {T <: TorchNumber}
data = T[s]
grad = requires_grad ? 1 : 0
ptr = ccall((:tensor_from_data, :libtorch_capi),
Ptr{Cvoid},
(Ptr{Cvoid}, Csize_t, Cchar,
Ptr{Clonglong}, Ptr{Clonglong}, Csize_t, Cint, Cint),
data, sizeof(T), TYPE_MAP[T], C_NULL, C_NULL, 0, 1, grad)
Tensor{T, 0}(ptr, nothing)
end
function Tensor(a0::Array{T, 0}; requires_grad=false) where {T <: TorchNumber}
Tensor(a0[], requires_grad=requires_grad)
end
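# Construction sketch (illustrative): `Tensor(rand(2, 3))` wraps the Julia
# array without copying (the array is kept alive in the `data` field), while
# `detach=true` copies the buffer into libtorch-owned memory; plain numbers
# and 0-dim arrays become 0-dimensional tensors.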
function tensor_from_ptr(p::Ptr)
n_dims = ccall((:tensor_method_ndimension, :libtorch_capi),
Clonglong, (Ptr{Cvoid},),
p)
# sizes = zeros(Int64, n_dims)
# ccall((:tensor_method_sizes, :libtorch_capi),
# Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
# p, sizes)
dtype = ccall((:tensor_method_dtype, :libtorch_capi),
Cchar, (Ptr{Cvoid},),
p)
Tensor{REVERSE_TYPE_MAP[dtype], n_dims}(p, nothing)
end
function Base.convert(::Type{Array}, t::Tensor{T, N}) where {T, N}
if t.data != nothing
return t.data
end
dims = size(t)
ret = Array{T, N}(undef, reverse(dims))
ccall((:tensor_method_data_copy, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Csize_t),
t.pointer, ret, sizeof(T) * prod(dims))
if strides(t)[1] != 1
return permutedims(ret, collect(N:-1:1))
end
return reshape(ret, dims)
end
Base.convert(::Type{T}, x::Tensor{T, 0}) where T = x[]
function Base.string(t::Tensor)
str = ccall((:tensor_to_string, :libtorch_capi),
Ptr{UInt8}, (Ptr{Cvoid},),
t.pointer)
ret = unsafe_string(str)
ccall(:free, Cvoid, (Ptr{Cvoid},), str)
return ret
end
function Base.show(io::IO, t::Tensor{T, N}) where {T, N}
write(io, "PyTorch.Tensor{$T, $N}:\n")
write(io, string(t))
write(io, "\n")
end
function Base.display(t::Tensor)
show(stdout, t)
end
# array interface
Base.eltype(::Type{Tensor{T}}) where {T} = Tensor{T, 0}
Base.ndims(t::Tensor{T, N}) where {T, N} = N
eltype_id(::Tensor{T}) where {T} = Int(TYPE_MAP[T])
eltype_id(::Type{T}) where {T <: TorchNumber} = Int(TYPE_MAP[T])
function Base.strides(t::Tensor)
n_dims = ccall((:tensor_method_ndimension, :libtorch_capi),
Clonglong, (Ptr{Cvoid},),
t.pointer)
strides = zeros(Int64, n_dims)
ccall((:tensor_method_strides, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
t.pointer, strides)
strides
end
function Base.size(t::Tensor{T, N}) where {T, N}
n_dims = ccall((:tensor_method_ndimension, :libtorch_capi),
Clonglong, (Ptr{Cvoid},),
t.pointer)
@assert N == n_dims "Dimension mismatch!"
sizes = zeros(Int64, n_dims)
ccall((:tensor_method_sizes, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
t.pointer, sizes)
tuple(sizes...)
end
function _tensor_indices(t::Tensor, I)
indices = collect.(to_indices(t, I))
shape = collect(Iterators.flatten(size.(indices)))
indices = map(x -> x .- 1, indices)
collect(indices), shape
end
_to_dim_0(t::Tensor) = ThC.opt_reshape(t, Int64[])
_to_dim_1_1(t::Tensor) = ThC.opt_reshape(t, [1, 1])
function Base.getindex(t::Tensor, I...)
ts, shape = _tensor_indices(t, I)
ret = t
for i in 1:length(ts)
ret = ThC.opt_index_select(ret, i - 1, ts[i])
end
all(x -> x == 1, size(ret)) && shape == Union{}[] && return _to_dim_0(ret)
ThC.opt_reshape(ret, shape)
end
Base.getindex(t::Tensor{T}) where T = item(t)
Base.getindex(t::Tensor, i::Int64) = t[eachindex(t)[i]]
Base.getindex(t::Tensor{T, 1}, i::Int64) where T =
ThC.opt_index_select(t, 0, (i - 1)) |> _to_dim_0
function Base.getindex(t::Tensor, I::UnitRange{Int64})
t = vcat(map(i->_to_dim_1_1(t[i]), eachindex(t)[I])...)
ThC.opt_reshape(t, [length(t)])
end
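# Indexing sketch (illustrative): `t[2, 1:3]` is realised with `index_select`
# along each dimension (indices are shifted to 0-based for the C side), so it
# returns a new Tensor rather than a view.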
function Base.setindex!(t::Tensor{T}, v::Tensor{T}, I...) where T
@assert length(I) > 0 "no indices given"
@assert(!any(i -> i isa StepRange, I),
"StepRange indices are not supported in Tensor assignment")
ts, _1 = _tensor_indices(t, I)
ret = t
for i in 1:(length(ts) - 1)
ret = ThC.narrow(ret, i - 1, ts[i][1], length(ts[i]))
end
dshape = length.(ts)
ThC.index_copy!(ret, length(ts) - 1, Tensor(ts[end]), reshape(v, dshape))
v
end
Base.setindex!(t::Tensor{T}, v::Array, I...) where T =
setindex!(t, Tensor{T}(v), I...)
Base.setindex!(t::Tensor{T}, v::TorchNumber, i::Int64) where T =
setindex!(t, Tensor{T}([v]), (eachindex(t)[i].I)...)
Base.setindex!(t::Tensor{T, 1}, v::TorchNumber, i::Int64) where T =
setindex!(t, Tensor{T}([v]), i)
function Base.setindex!(t::Tensor{T}, v::Array, I::UnitRange{Int64}) where T
indices = eachindex(t)[I]
@assert length(v) == length(indices)
for idx in 1:length(v)
setindex!(t, Tensor{T}(v[[idx]]), (indices[idx].I)...)
end
end
function Base.iterate(t::Tensor, state=(eachindex(t),))
y = iterate(state...)
y === nothing && return nothing
t[y[1]], (state[1], Base.tail(y)...)
end
Base.cat(I::Vararg{Tensor}; dims) = cat(collect(I), dims)
Base.vcat(I::Vararg{Tensor}) = cat(collect(I), 0)
Base.hcat(I::Vararg{Tensor}) = cat(collect(I), 1)
function Base.hvcat(rows::Tuple{Vararg{Int}}, I::Vararg{Tensor,N}) where N
ts = Iterators.Stateful(I)
hs = map(n -> collect(Iterators.take(ts, n)), rows)
hs = [hcat(t...) for t in hs]
vcat(hs...)
end
# methods
function item(t::Tensor{T,N}) where {T,N}
@assert(N == 0 || prod(size(t)) == 1,
"The Tensor must contain only one element.")
data = T[zero(T)]
ccall((:tensor_method_item, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Cchar, Ptr{Cvoid}),
t.pointer, TYPE_MAP[T], data)
return data[1]
end
# devices
abstract type Device end
struct CPU <: Device end
struct CUDA <: Device
index::Int
end
Base.convert(::Type{Int}, ::CPU) = -1
Base.convert(::Type{Int}, d::CUDA) = d.index
to(t::Tensor, d::Device) = ThC.to(t, convert(Int, d))
to(t::Tensor, ::Type{T}) where T <: TorchNumber = ThC.to2(t, eltype_id(T), 0, 0)
to(t::Tensor, ::Type{T}, d::Device) where T <: TorchNumber = to(t, d, T)
to(t::Tensor, d::Device, ::Type{T}) where T <: TorchNumber =
ThC.to4(t, convert(Int, d), eltype_id(T), 0, 0)
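# e.g. (illustrative): `to(t, CUDA(0))` moves a tensor to the first GPU,
# `to(t, Float32)` casts its element type, and `on(t)` reports the device a
# tensor currently lives on.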
function on(t::Tensor)
data = Int64[0, 0]
ccall((:tensor_method_device, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}), t.pointer, data)
data[1] == -1 && return CPU()
return CUDA(data[2])
end
| ThArrays | https://github.com/TuringLang/ThArrays.jl.git |
|
[
"MIT"
] | 0.2.0 | 86d52c13ec76988df8b02ff8c247eac6cedd0021 | code | 1602 | module ThJIT
using ..ThArrays
mutable struct CompilationUnit
mod::Ptr{Nothing}
owner::Ptr{Nothing}
function CompilationUnit(m::Ptr{Nothing}, o::Ptr{Nothing})
ret = new(m, o)
finalizer(ret) do cu
ccall((:cunit_destroy, :libtorch_capi),
Cvoid, (Ptr{Cvoid},),
cu.owner)
end
end
end
function compile(code::AbstractString)
fields = [Ptr{Nothing}(0), Ptr{Nothing}(0)]
cu = ccall((:cunit_compile, :libtorch_capi),
Ptr{Cvoid}, (Ptr{Cvoid}, Cstring),
fields, pointer(code))
CompilationUnit(fields[1], fields[2])
end
function run_method(cu::CompilationUnit,
method::AbstractString,
args::Vararg{Tensor})
ptrs = map(x -> x.pointer, collect(args))
tr = ccall((:cunit_run_method, :libtorch_capi),
Ptr{Cvoid}, (Ptr{Cvoid}, Cstring, Ptr{Cvoid}, Cint),
cu.mod, pointer(method), ptrs, length(args))
return ThArrays.tensor_from_ptr(tr)
end
struct Function
cu::CompilationUnit
method::AbstractString
end
function (f::Function)(args::Vararg{Tensor})
run_method(f.cu, f.method, args...)
end
get_method(cu::CompilationUnit, method::AbstractString) = Function(cu, method)
Base.getindex(cu::CompilationUnit, method::AbstractString) =
Function(cu, method)
Base.getindex(cu::CompilationUnit, method::Symbol) =
Function(cu, string(method))
function Base.getproperty(cu::CompilationUnit, p::Symbol)
p in fieldnames(CompilationUnit) && return getfield(cu, p)
return cu[p]
end
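# Usage sketch (illustrative, assumes a valid TorchScript snippet):
#   cu = compile("def add(a, b):\n    return a + b\n")
#   cu.add(Tensor([1.0]), Tensor([2.0]))
# Methods of a compiled unit can be looked up with `get_method`, indexing, or
# property access, and then called like ordinary Julia functions on Tensors.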
end
| ThArrays | https://github.com/TuringLang/ThArrays.jl.git |
|
[
"MIT"
] | 0.2.0 | 86d52c13ec76988df8b02ff8c247eac6cedd0021 | code | 6060 | module TrackerAD
using ..ThArrays
using Tracker
using Tracker: Tracked, Call, data, track
## Tensor
ThArrays.Tensor(x::Tracker.TrackedArray) = Tensor(data(x), requires_grad=true)
## TrackedTensor
struct TrackedTensor{T, N} <: AbstractArray{T, N}
tracker::Tracked{Tensor{T, N}}
data::Tensor{T, N}
grad::Tensor{T, N}
TrackedTensor{T, N}(
t::Tracked{Tensor{T, N}},
data::Tensor{T, N}) where {T, N} = new(t, data)
TrackedTensor{T, N}(
t::Tracked{Tensor{T, N}},
data::Tensor{T, N}, grad::Tensor{T, N}) where {T, N} = new(t, data, grad)
end
Tracker.data(x::TrackedTensor) = x.data
Tracker.tracker(x::TrackedTensor) = x.tracker
Tracker.track(c::Tracker.Call, x::Tensor) = TrackedTensor(c, x)
Tracker.track(c::Tracker.Call, x::TrackedTensor) = TrackedTensor(c, data(x))
function TrackedTensor(c::Tracker.Call, t::Tensor{T, N}) where {T, N}
TrackedTensor{T, N}(Tracker.Tracked{Tensor{T, N}}(c), t)
end
function TrackedTensor(c::Tracker.Call, t::Tensor{T, N}, d::Tensor{T, N}) where {T, N}
TrackedTensor{T, N}(Tracker.Tracked{Tensor{T, N}}(c, d), t, d)
end
function TrackedTensor(t::Tensor)
TrackedTensor(Tracker.Call(), t, ThC.zeros_like(t))
end
Base.eltype(x::Type{<:TrackedTensor{T}}) where T <: Real = TrackedTensor{T, 0}
function Base.show(io::IO, x::TrackedTensor)
show(io, data(x))
print(io, "(tracked Tensor)")
end
Base.copy(x::TrackedTensor) = x
Base.setindex!(xs::TrackedTensor, v, i...; kwargs...) =
error("Can't differentiate `setindex!`")
## Fallthrough methods
for f in :[Base.size, Base.ndims, Base.collect].args
@eval @inline $f(x::TrackedTensor, a...) = $f(data(x), a...)
end
## patches to Tracker.jl
Tracker.param(x::Tensor) = TrackedTensor(ThC.requires_grad!(x, true))
Tracker.init_grad(x::Tensor) = ThC.zeros_like(x)
Tracker.zero_grad!(x::Tensor) = (x .= 0)
"""
const __FORWARD_RESULT = IdDict{Any, Any}()
Tracker.collectmemaybe(x::TrackedTensor) = begin
__FORWARD_RESULT[Tracker.tracker(x)] = x
x
end
function Tracker.back(g::Tracker.Grads, x::Tracker.Tracked{Tensor{T, N}}, Δ) where {T, N}
if haskey(__FORWARD_RESULT, x)
ThAD.backward(data(__FORWARD_RESULT[x]), Tensor(float.(Δ)))
delete!(__FORWARD_RESULT, x)
end
x.isleaf && (Tracker.accum!(g, x, Δ); return)
ref = x.ref -= 1
if ref > 0 || haskey(g, x)
Tracker.accum!(g, x, Δ)
ref == 0 && Tracker.back_(g, x.f, g[x])
else
ref == 0 && Tracker.back_(g, x.f, Δ)
end
return
end
# we use `_tr` instead of the above patch now
"""
Tracker.collectmemaybe(x::TrackedTensor) = _tr(x)
## Switches
_th(x) = track(_th, x)
Tracker.@grad function _th(x)
r = TrackedTensor(Tensor(x, requires_grad=true))
r, (d) -> begin
(ThC.ones_like(data(r)) .* d,)
end
end
_th(x::Tracker.TrackedArray) = track(_th, x)
Tracker.@grad function _th(x::Tracker.TrackedArray)
r = TrackedTensor(Tensor(data(x), requires_grad=true))
r, (d) -> begin
(ThC.ones_like(data(r)) .* d,)
end
end
_tr(x) = track(_tr, x)
Tracker.@grad function _tr(x)
x, (d) -> begin
(ones(size(x)) .* d,)
end
end
_tr(x::TrackedTensor{T, 0}) where {T} = track(_tr, x)
Tracker.@grad function _tr(x::TrackedTensor{T, 0}) where {T}
r = convert(T, data(x))
r, (d) -> begin
ThAD.backward(data(x), Tensor(float(d)))
(float(d),)
end
end
_tr(x::TrackedTensor{T, N}) where {T, N} = track(_tr, x)
Tracker.@grad function _tr(x::TrackedTensor{T, N}) where {T, N}
r = convert(Array, data(x))
r, (d) -> begin
ThAD.backward(data(x), Tensor(d))
(ones(size(r)) .* d,)
end
end
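# Taken together (illustrative summary): `_th` lifts a Tracker-tracked value
# onto a Torch tensor and `_tr` brings it back, with each switch forwarding
# the incoming adjoint to the other AD system, so a Tracker-differentiated
# program can route part of its computation through libtorch.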
## Methods and Grads
Base.Broadcast.broadcasted(f, t::TrackedTensor, args...) = track(Base.Broadcast.broadcasted, f, t, args...)
Tracker.@grad function Base.Broadcast.broadcasted(f, t::TrackedTensor, args...)
r = Base.Broadcast.broadcasted(f, data(t), data.(args)...)
r, (d) -> begin
grads = map(args) do arg
(arg isa TrackedTensor) ? ThAD.get_grad(data(arg), d) : nothing
end
(nothing, ThAD.get_grad(data(t), d), grads...)
end
end
macro grad_for_tensor(name)
esc(quote
$name(t::TrackedTensor, args...) = track($name, t, args...)
Tracker.@grad function $name(t::TrackedTensor, args...)
r = $name(data(t), data.(args)...)
r, (d) -> begin
grads = map(args) do arg
(arg isa TrackedTensor) ? ThAD.get_grad(data(arg), d) : nothing
end
(ThAD.get_grad(data(t), d), grads...)
end
end
end)
end
#
# Methods in src/thc/thc.jl; they can be extracted with the command:
# perl -n -e \
# 'if(m/import (Base\..*)/){ $i++; print "$1, "; print "\n" unless $i % 5;}' \
# src/thc/thc.jl
#
for name in :[Base.abs, Base.acos, Base.all, Base.angle, Base.any,
Base.argmax, Base.argmin, Base.asin, Base.atan, Base.cat,
Base.ceil, Base.clamp, Base.clamp!, Base.coalesce, Base.conj,
Base.cos, Base.cosh, Base.cumprod, Base.cumsum, Base.detach,
Base.empty, Base.exp, Base.expm1, Base.fill!, Base.floor,
Base.imag, Base.isfinite, Base.isnan, Base.log, Base.log10,
Base.log1p, Base.log2, Base.max, Base.min, Base.mv,
Base.ones, Base.prod, Base.put!, Base.rand, Base.randn,
Base.range, Base.real, Base.repeat, Base.reshape, Base.resize!,
Base.round, Base.sign, Base.sin, Base.sinh, Base.sort,
Base.split, Base.sqrt, Base.sum, Base.tan, Base.tanh,
Base.transpose, Base.trunc, Base.values, Base.view, Base.zeros,
].args
@eval @grad_for_tensor($name)
end
#
# Methods in src/tensor.jl
#
for name in :[Base.getindex, Base.cat, Base.hcat, Base.vcat].args
@eval @grad_for_tensor($name)
end
#
# Methods in src/common-methods.jl
#
for name in :[Base.:+, Base.:-, Base.:*, Base.:/, Base.div, Base.:^,
Base.adjoint,].args
@eval @grad_for_tensor($name)
end
end
| ThArrays | https://github.com/TuringLang/ThArrays.jl.git |
|
[
"MIT"
] | 0.2.0 | 86d52c13ec76988df8b02ff8c247eac6cedd0021 | code | 8241 |
"""
This file generates Julia methods according to C++ functions like:
void atg_abs_out(tensor *out__, tensor out, tensor self) {
...
}
->
export abs_out
function abs_out(out::Tensor, self::Tensor)
outputs = Int[0]
ccall((:atg_abs_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
pointer(outputs), out.pointer, self.pointer)
return tensor_from_ptr(Ptr{Cvoid}(outputs[1]))
end
"""
const PROJECT_DIR = (@__DIR__) |> dirname |> dirname
const CPP_API_FILE = joinpath(PROJECT_DIR, "csrc", "torch_api_generated.cpp.h")
const JUL_API_FILE = joinpath(PROJECT_DIR, "src", "thc", "thc.jl")
const FUNC_SIG_REG = r"(\w+)\s+(\*?atg_\w+)\((.+)\)\s*{"
const JULIA_KEYWORDS = Set(["function", "end"])
const C_TYPE_MAP = Dict(
"void" => "Cvoid",
"tensor*" => "Ptr{Cvoid}",
"tensor" => "Ptr{Cvoid}",
"scalar" => "Ptr{Cvoid}",
"int" => "Cint",
"int*" => "Ptr{Cvoid}",
"int64_t" => "Clonglong",
"int64_t*" => "Ptr{Cvoid}",
"double" => "Cdouble",
)
const J_TYPE_MAP = Dict(
"void" => "Any",
"tensor*" => "Array{Tensor{T,N}}",
"tensor" => "Tensor",
"scalar" => "TorchNumber",
"int" => "Int",
"int*" => "Array{Int}",
"int64_t" => "Int64",
"int64_t*" => "Array{Int64}",
"double" => "Float64",
)
struct APIFunction
cpp_signature::String
func_name::String
return_type::String
output_count::Int
args::Vector{Pair{String, String}} # name -> type
function APIFunction(m::RegexMatch, n::Int)
csig = strip(m.match, [' ', '{', '\n'])
args = parse_args(m[3])
ret_type = m[1]
fname = m[2]
if fname[1] == '*'
fname = fname[6:end]
ret_type *= '*'
else
fname = fname[5:end]
end
new(csig, fname, ret_type, n, args)
end
end
function parse_args(args::AbstractString)
arg_list = strip.(split(args, ','))
arg_pairs = map(arg_list) do arg
info = strip.(split(arg)) # type, name
if info[2][1] == '*'
info[1] *= '*'
info[2] = info[2][2:end]
end
info[2] in JULIA_KEYWORDS && (info[2] *= "_")
info[2] => info[1]
end
arg_pairs
end
function julia_source(f::APIFunction)
if length(f.args) < 1
@warn "E1: can't translate function [$(f.cpp_signature)], ignored."
return "# $(f.func_name) ignored"
end
for arg in f.args
if !haskey(C_TYPE_MAP, arg.second)
@warn "E2: can't translate function [$(f.cpp_signature)], ignored."
return "# $(f.func_name) ignored"
end
end
lines = [""]
# in-place op: pow_ -> pow!, pow_1 -> pow1!, ...
jl_fname = f.func_name
suffix_m = match(r"(\w+)_(\d*)$", jl_fname)
suffix_m != nothing && (jl_fname = "$(suffix_m[1])$(suffix_m[2])!")
if Symbol(jl_fname) in names(Base)
if !in(jl_fname, ["div"])
push!(lines, "import Base.$(jl_fname)")
end
end
push!(lines, doc(f, jl_fname)) # docs
start = f.args[1].first == "out__" ? 2 : 1
para_type = any(x -> x.second == "tensor*", f.args[start:end]) ?
" where {T,N}" : ""
ccall_ret = start == 1 ? "Ptr{Int}" : "Cvoid"
push!(lines, "function $(jl_fname)($(julia_args(f)))$(para_type)")
push!(lines, julia_locals(f))
push!(lines, " __cret = ccall((:atg_$(f.func_name), :libtorch_capi),")
push!(lines, " $(ccall_ret), ($(ccall_args(f))),")
push!(lines, " $(ccall_julia_args(f)))")
push!(lines, return_statement(f))
push!(lines, "end")
return join(lines, "\n")
end
function doc(f::APIFunction, jl_fname::AbstractString)
cpp_sig = replace(f.cpp_signature, "_" => "\\\\_")
lines = ["\n"]
push!(lines, "\"\"\"")
push!(lines, " $(jl_fname)($(julia_args(f)))")
push!(lines, "")
push!(lines, " Wrapper of C++ function $(cpp_sig)")
push!(lines, "\"\"\"")
join(lines, "\n")
end
function julia_args(f::APIFunction)
args = []
start = f.args[1].first == "out__" ? 2 : 1
for i in start:length(f.args)
p = f.args[i]
if endswith(p.first, "_len") && endswith(f.args[i-1].first, "_data")
nothing
else
push!(args, "$(p.first)::$(J_TYPE_MAP[p.second])")
end
end
join(args, ", ")
end
function julia_locals(f::APIFunction)
lines = []
for i in 1:length(f.args)
p = f.args[i]
if endswith(p.first, "_len") && endswith(f.args[i-1].first, "_data")
push!(lines, " $(p.first) = length($(f.args[i-1].first))")
elseif p.second == "scalar"
push!(lines, " $(p.first)_s_ = Scalar($(p.first))")
elseif p.second == "tensor*"
if p.first == "out__"
output_init = join(repeat(["0"], f.output_count), ", ")
push!(lines, " outputs__ = Int[$(output_init)]")
else
push!(lines, " $(p.first)_ta_ = map(x->x.pointer, $(p.first))")
end
end
end
join(lines, "\n")
end
function ccall_args(f::APIFunction)
length(f.args) == 1 && return C_TYPE_MAP[f.args[1].second] * ","
args = map(p -> C_TYPE_MAP[p.second], f.args)
join(args, ", ")
end
function ccall_julia_args(f::APIFunction)
args = map(f.args) do p
p.second == "tensor*" && p.first == "out__" && return "outputs__"
p.second == "tensor*" && return "$(p.first)_ta_"
p.second == "tensor" && return "$(p.first).pointer"
p.second == "scalar" && return "$(p.first)_s_.pointer"
return p.first
end
join(args, ", ")
end
function return_statement(f::APIFunction)
if match(r"_\d*$", f.func_name) != nothing
return " return self"
elseif f.return_type == "void" && f.args[1].first == "out__"
lines = []
for i in 1:f.output_count
push!(lines,
" __o_$(i) = tensor_from_ptr(Ptr{Cvoid}(outputs__[$(i)]))")
end
push!(lines,
" return " * join(map(x-> "__o_$x", 1:f.output_count), ", "))
return join(lines, "\n")
elseif f.return_type == "tensor*"
lines = []
push!(lines, " ptrs__, i__ = Int[], 1")
push!(lines, " while true")
push!(lines, " ptr__ = unsafe_load(__cret, i__)")
push!(lines, " ptr__ == 0 && break")
push!(lines, " push!(ptrs__, ptr__)")
push!(lines, " i__ += 1")
push!(lines, " end")
push!(lines, " ccall(:free, Cvoid, (Ptr{Cvoid},), __cret)")
push!(lines, " return map(x -> tensor_from_ptr(Ptr{Nothing}(x)), ptrs__)")
return join(lines, "\n")
end
return ""
end
function main()
count = 0
source_lines = readlines(CPP_API_FILE)
output = open(JUL_API_FILE, "w")
func_match = nothing
output_count = 0
write(output, "# !!! THIS FILE IS AUTO-GENERATED, PLEASE DO NOT MODIFY. !!!\n\n")
write(output, "module ThC\n") # module start
write(output, "using ..ThArrays: Tensor, Scalar, TorchNumber, tensor_from_ptr\n")
for line in source_lines
m = match(FUNC_SIG_REG, line)
if m != nothing # start of a function
if func_match != nothing # deal with the previous function
f = APIFunction(func_match, output_count)
write(output, julia_source(f))
output_count = 0
count += 1
end
func_match = m
end
if func_match != nothing # in a function
if match(r"out__\[\d+\]\s*=\s*new", line) != nothing
output_count += 1
elseif match(r"out__\[\D+\]\s*=\s*new", line) != nothing
output_count = 1
end
end
end
if func_match != nothing # the last function
f = APIFunction(func_match, output_count)
write(output, julia_source(f))
count += 1
end
write(output, "\n")
write(output, "include(\"thc-opt.jl\")\n")
write(output, "\n")
write(output, "end\n") # module end
close(output)
@info "$(count) methods generated!\n"
end
main()
| ThArrays | https://github.com/TuringLang/ThArrays.jl.git |
|
[
"MIT"
] | 0.2.0 | 86d52c13ec76988df8b02ff8c247eac6cedd0021 | code | 1521 | function opt_add(self::Tensor{T, N}, other::Tensor{T, N}) where {T, N}
outputs__ = Int[0]
__cret = ccall((:atg_add, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other.pointer)
return Tensor{T, N}(Ptr{Cvoid}(outputs__[1]), nothing)
end
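# Note (illustrative): the `opt_*` helpers in this file mirror the
# auto-generated ThC wrappers but build the result `Tensor{T, N}` directly
# from the known type parameters instead of querying dtype/ndims back through
# `tensor_from_ptr`.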
function opt_index_select(self::Tensor{T, N}, dim::Int64, index::Int64) where {T, N}
ptr = ccall((:tensor_method_index_select_int64, :libtorch_capi),
Ptr{Cvoid}, (Ptr{Cvoid}, Clonglong, Clonglong),
self.pointer, dim, index)
return Tensor{T, N}(ptr, nothing)
end
function opt_index_select(self::Tensor{T, N}, dim::Int64, i::Array{Int64}) where {T, N}
return opt_index_select(self, dim, Tensor(i))
end
function opt_index_select(self::Tensor{T, N}, dim::Int64, index::Tensor) where {T, N}
outputs__ = Int[0]
__cret = ccall((:atg_index_select, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Ptr{Cvoid}),
outputs__, self.pointer, dim, index.pointer)
return Tensor{T, N}(Ptr{Cvoid}(outputs__[1]), nothing)
end
function opt_reshape(self::Tensor{T}, shape_data::Array{Int64}) where T
outputs__ = Int[0]
shape_len = length(shape_data)
__cret = ccall((:atg_reshape, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, shape_data, shape_len)
return Tensor{T, shape_len}(Ptr{Cvoid}(outputs__[1]), nothing)
end
| ThArrays | https://github.com/TuringLang/ThArrays.jl.git |
|
[
"MIT"
] | 0.2.0 | 86d52c13ec76988df8b02ff8c247eac6cedd0021 | code | 631370 | # !!! THIS FILE IS AUTO-GENERATED, PLEASE DO NOT MODIFY. !!!
module ThC
using ..ThArrays: Tensor, Scalar, TorchNumber, tensor_from_ptr
import Base.abs
"""
abs(self::Tensor)
Wrapper of C++ function void atg\\_abs(tensor *out\\_\\_, tensor self)
"""
function abs(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_abs, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
abs!(self::Tensor)
Wrapper of C++ function void atg\\_abs\\_(tensor *out\\_\\_, tensor self)
"""
function abs!(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_abs_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
return self
end
"""
abs_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_abs\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function abs_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_abs_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
import Base.acos
"""
acos(self::Tensor)
Wrapper of C++ function void atg\\_acos(tensor *out\\_\\_, tensor self)
"""
function acos(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_acos, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
acos!(self::Tensor)
Wrapper of C++ function void atg\\_acos\\_(tensor *out\\_\\_, tensor self)
"""
function acos!(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_acos_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
return self
end
"""
acos_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_acos\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function acos_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_acos_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
adaptive_avg_pool1d(self::Tensor, output_size_data::Array{Int64})
Wrapper of C++ function void atg\\_adaptive\\_avg\\_pool1d(tensor *out\\_\\_, tensor self, int64\\_t *output\\_size\\_data, int output\\_size\\_len)
"""
function adaptive_avg_pool1d(self::Tensor, output_size_data::Array{Int64})
outputs__ = Int[0]
output_size_len = length(output_size_data)
__cret = ccall((:atg_adaptive_avg_pool1d, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, output_size_data, output_size_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
adaptive_avg_pool2d(self::Tensor, output_size_data::Array{Int64})
Wrapper of C++ function void atg\\_adaptive\\_avg\\_pool2d(tensor *out\\_\\_, tensor self, int64\\_t *output\\_size\\_data, int output\\_size\\_len)
"""
function adaptive_avg_pool2d(self::Tensor, output_size_data::Array{Int64})
outputs__ = Int[0]
output_size_len = length(output_size_data)
__cret = ccall((:atg_adaptive_avg_pool2d, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, output_size_data, output_size_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
adaptive_avg_pool2d_out(out::Tensor, self::Tensor, output_size_data::Array{Int64})
Wrapper of C++ function void atg\\_adaptive\\_avg\\_pool2d\\_out(tensor *out\\_\\_, tensor out, tensor self, int64\\_t *output\\_size\\_data, int output\\_size\\_len)
"""
function adaptive_avg_pool2d_out(out::Tensor, self::Tensor, output_size_data::Array{Int64})
outputs__ = Int[0]
output_size_len = length(output_size_data)
__cret = ccall((:atg_adaptive_avg_pool2d_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, out.pointer, self.pointer, output_size_data, output_size_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
adaptive_avg_pool3d(self::Tensor, output_size_data::Array{Int64})
Wrapper of C++ function void atg\\_adaptive\\_avg\\_pool3d(tensor *out\\_\\_, tensor self, int64\\_t *output\\_size\\_data, int output\\_size\\_len)
"""
function adaptive_avg_pool3d(self::Tensor, output_size_data::Array{Int64})
outputs__ = Int[0]
output_size_len = length(output_size_data)
__cret = ccall((:atg_adaptive_avg_pool3d, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, output_size_data, output_size_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
adaptive_avg_pool3d_backward(grad_output::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_adaptive\\_avg\\_pool3d\\_backward(tensor *out\\_\\_, tensor grad\\_output, tensor self)
"""
function adaptive_avg_pool3d_backward(grad_output::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_adaptive_avg_pool3d_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, grad_output.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
adaptive_avg_pool3d_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_adaptive\\_avg\\_pool3d\\_backward\\_out(tensor *out\\_\\_, tensor grad\\_input, tensor grad\\_output, tensor self)
"""
function adaptive_avg_pool3d_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_adaptive_avg_pool3d_backward_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, grad_input.pointer, grad_output.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
adaptive_avg_pool3d_out(out::Tensor, self::Tensor, output_size_data::Array{Int64})
Wrapper of C++ function void atg\\_adaptive\\_avg\\_pool3d\\_out(tensor *out\\_\\_, tensor out, tensor self, int64\\_t *output\\_size\\_data, int output\\_size\\_len)
"""
function adaptive_avg_pool3d_out(out::Tensor, self::Tensor, output_size_data::Array{Int64})
outputs__ = Int[0]
output_size_len = length(output_size_data)
__cret = ccall((:atg_adaptive_avg_pool3d_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, out.pointer, self.pointer, output_size_data, output_size_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
adaptive_max_pool1d(self::Tensor, output_size_data::Array{Int64})
Wrapper of C++ function void atg\\_adaptive\\_max\\_pool1d(tensor *out\\_\\_, tensor self, int64\\_t *output\\_size\\_data, int output\\_size\\_len)
"""
function adaptive_max_pool1d(self::Tensor, output_size_data::Array{Int64})
outputs__ = Int[0, 0]
output_size_len = length(output_size_data)
__cret = ccall((:atg_adaptive_max_pool1d, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, output_size_data, output_size_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
adaptive_max_pool2d(self::Tensor, output_size_data::Array{Int64})
Wrapper of C++ function void atg\\_adaptive\\_max\\_pool2d(tensor *out\\_\\_, tensor self, int64\\_t *output\\_size\\_data, int output\\_size\\_len)
"""
function adaptive_max_pool2d(self::Tensor, output_size_data::Array{Int64})
outputs__ = Int[0, 0]
output_size_len = length(output_size_data)
__cret = ccall((:atg_adaptive_max_pool2d, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, output_size_data, output_size_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
adaptive_max_pool2d_backward(grad_output::Tensor, self::Tensor, indices::Tensor)
Wrapper of C++ function void atg\\_adaptive\\_max\\_pool2d\\_backward(tensor *out\\_\\_, tensor grad\\_output, tensor self, tensor indices)
"""
function adaptive_max_pool2d_backward(grad_output::Tensor, self::Tensor, indices::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_adaptive_max_pool2d_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, grad_output.pointer, self.pointer, indices.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
adaptive_max_pool2d_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, indices::Tensor)
Wrapper of C++ function void atg\\_adaptive\\_max\\_pool2d\\_backward\\_out(tensor *out\\_\\_, tensor grad\\_input, tensor grad\\_output, tensor self, tensor indices)
"""
function adaptive_max_pool2d_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, indices::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_adaptive_max_pool2d_backward_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, grad_input.pointer, grad_output.pointer, self.pointer, indices.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
adaptive_max_pool2d_out(out::Tensor, indices::Tensor, self::Tensor, output_size_data::Array{Int64})
Wrapper of C++ function void atg\\_adaptive\\_max\\_pool2d\\_out(tensor *out\\_\\_, tensor out, tensor indices, tensor self, int64\\_t *output\\_size\\_data, int output\\_size\\_len)
"""
function adaptive_max_pool2d_out(out::Tensor, indices::Tensor, self::Tensor, output_size_data::Array{Int64})
outputs__ = Int[0, 0]
output_size_len = length(output_size_data)
__cret = ccall((:atg_adaptive_max_pool2d_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, out.pointer, indices.pointer, self.pointer, output_size_data, output_size_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
adaptive_max_pool3d(self::Tensor, output_size_data::Array{Int64})
Wrapper of C++ function void atg\\_adaptive\\_max\\_pool3d(tensor *out\\_\\_, tensor self, int64\\_t *output\\_size\\_data, int output\\_size\\_len)
"""
function adaptive_max_pool3d(self::Tensor, output_size_data::Array{Int64})
outputs__ = Int[0, 0]
output_size_len = length(output_size_data)
__cret = ccall((:atg_adaptive_max_pool3d, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, output_size_data, output_size_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
adaptive_max_pool3d_backward(grad_output::Tensor, self::Tensor, indices::Tensor)
Wrapper of C++ function void atg\\_adaptive\\_max\\_pool3d\\_backward(tensor *out\\_\\_, tensor grad\\_output, tensor self, tensor indices)
"""
function adaptive_max_pool3d_backward(grad_output::Tensor, self::Tensor, indices::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_adaptive_max_pool3d_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, grad_output.pointer, self.pointer, indices.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
adaptive_max_pool3d_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, indices::Tensor)
Wrapper of C++ function void atg\\_adaptive\\_max\\_pool3d\\_backward\\_out(tensor *out\\_\\_, tensor grad\\_input, tensor grad\\_output, tensor self, tensor indices)
"""
function adaptive_max_pool3d_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, indices::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_adaptive_max_pool3d_backward_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, grad_input.pointer, grad_output.pointer, self.pointer, indices.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
adaptive_max_pool3d_out(out::Tensor, indices::Tensor, self::Tensor, output_size_data::Array{Int64})
Wrapper of C++ function void atg\\_adaptive\\_max\\_pool3d\\_out(tensor *out\\_\\_, tensor out, tensor indices, tensor self, int64\\_t *output\\_size\\_data, int output\\_size\\_len)
"""
function adaptive_max_pool3d_out(out::Tensor, indices::Tensor, self::Tensor, output_size_data::Array{Int64})
outputs__ = Int[0, 0]
output_size_len = length(output_size_data)
__cret = ccall((:atg_adaptive_max_pool3d_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, out.pointer, indices.pointer, self.pointer, output_size_data, output_size_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
add(self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_add(tensor *out\\_\\_, tensor self, tensor other)
"""
function add(self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_add, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
add1(self::Tensor, other::TorchNumber)
Wrapper of C++ function void atg\\_add1(tensor *out\\_\\_, tensor self, scalar other)
"""
function add1(self::Tensor, other::TorchNumber)
outputs__ = Int[0]
other_s_ = Scalar(other)
__cret = ccall((:atg_add1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
add!(self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_add\\_(tensor *out\\_\\_, tensor self, tensor other)
"""
function add!(self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_add_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other.pointer)
return self
end
"""
add1!(self::Tensor, other::TorchNumber)
Wrapper of C++ function void atg\\_add\\_1(tensor *out\\_\\_, tensor self, scalar other)
"""
function add1!(self::Tensor, other::TorchNumber)
outputs__ = Int[0]
other_s_ = Scalar(other)
__cret = ccall((:atg_add_1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other_s_.pointer)
return self
end
"""
add_out(out::Tensor, self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_add\\_out(tensor *out\\_\\_, tensor out, tensor self, tensor other)
"""
function add_out(out::Tensor, self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_add_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, other.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
addbmm(self::Tensor, batch1::Tensor, batch2::Tensor)
Wrapper of C++ function void atg\\_addbmm(tensor *out\\_\\_, tensor self, tensor batch1, tensor batch2)
"""
function addbmm(self::Tensor, batch1::Tensor, batch2::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_addbmm, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, batch1.pointer, batch2.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
addbmm!(self::Tensor, batch1::Tensor, batch2::Tensor)
Wrapper of C++ function void atg\\_addbmm\\_(tensor *out\\_\\_, tensor self, tensor batch1, tensor batch2)
"""
function addbmm!(self::Tensor, batch1::Tensor, batch2::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_addbmm_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, batch1.pointer, batch2.pointer)
return self
end
"""
addbmm_out(out::Tensor, self::Tensor, batch1::Tensor, batch2::Tensor)
Wrapper of C++ function void atg\\_addbmm\\_out(tensor *out\\_\\_, tensor out, tensor self, tensor batch1, tensor batch2)
"""
function addbmm_out(out::Tensor, self::Tensor, batch1::Tensor, batch2::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_addbmm_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, batch1.pointer, batch2.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
addcdiv(self::Tensor, tensor1::Tensor, tensor2::Tensor)
Wrapper of C++ function void atg\\_addcdiv(tensor *out\\_\\_, tensor self, tensor tensor1, tensor tensor2)
"""
function addcdiv(self::Tensor, tensor1::Tensor, tensor2::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_addcdiv, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, tensor1.pointer, tensor2.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
addcdiv!(self::Tensor, tensor1::Tensor, tensor2::Tensor)
Wrapper of C++ function void atg\\_addcdiv\\_(tensor *out\\_\\_, tensor self, tensor tensor1, tensor tensor2)
"""
function addcdiv!(self::Tensor, tensor1::Tensor, tensor2::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_addcdiv_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, tensor1.pointer, tensor2.pointer)
return self
end
"""
addcdiv_out(out::Tensor, self::Tensor, tensor1::Tensor, tensor2::Tensor)
Wrapper of C++ function void atg\\_addcdiv\\_out(tensor *out\\_\\_, tensor out, tensor self, tensor tensor1, tensor tensor2)
"""
function addcdiv_out(out::Tensor, self::Tensor, tensor1::Tensor, tensor2::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_addcdiv_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, tensor1.pointer, tensor2.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
addcmul(self::Tensor, tensor1::Tensor, tensor2::Tensor)
Wrapper of C++ function void atg\\_addcmul(tensor *out\\_\\_, tensor self, tensor tensor1, tensor tensor2)
"""
function addcmul(self::Tensor, tensor1::Tensor, tensor2::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_addcmul, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, tensor1.pointer, tensor2.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
addcmul!(self::Tensor, tensor1::Tensor, tensor2::Tensor)
Wrapper of C++ function void atg\\_addcmul\\_(tensor *out\\_\\_, tensor self, tensor tensor1, tensor tensor2)
"""
function addcmul!(self::Tensor, tensor1::Tensor, tensor2::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_addcmul_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, tensor1.pointer, tensor2.pointer)
return self
end
"""
addcmul_out(out::Tensor, self::Tensor, tensor1::Tensor, tensor2::Tensor)
Wrapper of C++ function void atg\\_addcmul\\_out(tensor *out\\_\\_, tensor out, tensor self, tensor tensor1, tensor tensor2)
"""
function addcmul_out(out::Tensor, self::Tensor, tensor1::Tensor, tensor2::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_addcmul_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, tensor1.pointer, tensor2.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
addmm(self::Tensor, mat1::Tensor, mat2::Tensor)
Wrapper of C++ function void atg\\_addmm(tensor *out\\_\\_, tensor self, tensor mat1, tensor mat2)
"""
function addmm(self::Tensor, mat1::Tensor, mat2::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_addmm, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, mat1.pointer, mat2.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
addmm!(self::Tensor, mat1::Tensor, mat2::Tensor)
Wrapper of C++ function void atg\\_addmm\\_(tensor *out\\_\\_, tensor self, tensor mat1, tensor mat2)
"""
function addmm!(self::Tensor, mat1::Tensor, mat2::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_addmm_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, mat1.pointer, mat2.pointer)
return self
end
"""
addmm_out(out::Tensor, self::Tensor, mat1::Tensor, mat2::Tensor)
Wrapper of C++ function void atg\\_addmm\\_out(tensor *out\\_\\_, tensor out, tensor self, tensor mat1, tensor mat2)
"""
function addmm_out(out::Tensor, self::Tensor, mat1::Tensor, mat2::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_addmm_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, mat1.pointer, mat2.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
addmv(self::Tensor, mat::Tensor, vec::Tensor)
Wrapper of C++ function void atg\\_addmv(tensor *out\\_\\_, tensor self, tensor mat, tensor vec)
"""
function addmv(self::Tensor, mat::Tensor, vec::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_addmv, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, mat.pointer, vec.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
addmv!(self::Tensor, mat::Tensor, vec::Tensor)
Wrapper of C++ function void atg\\_addmv\\_(tensor *out\\_\\_, tensor self, tensor mat, tensor vec)
"""
function addmv!(self::Tensor, mat::Tensor, vec::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_addmv_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, mat.pointer, vec.pointer)
return self
end
"""
addmv_out(out::Tensor, self::Tensor, mat::Tensor, vec::Tensor)
Wrapper of C++ function void atg\\_addmv\\_out(tensor *out\\_\\_, tensor out, tensor self, tensor mat, tensor vec)
"""
function addmv_out(out::Tensor, self::Tensor, mat::Tensor, vec::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_addmv_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, mat.pointer, vec.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
addr(self::Tensor, vec1::Tensor, vec2::Tensor)
Wrapper of C++ function void atg\\_addr(tensor *out\\_\\_, tensor self, tensor vec1, tensor vec2)
"""
function addr(self::Tensor, vec1::Tensor, vec2::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_addr, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, vec1.pointer, vec2.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
addr!(self::Tensor, vec1::Tensor, vec2::Tensor)
Wrapper of C++ function void atg\\_addr\\_(tensor *out\\_\\_, tensor self, tensor vec1, tensor vec2)
"""
function addr!(self::Tensor, vec1::Tensor, vec2::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_addr_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, vec1.pointer, vec2.pointer)
return self
end
"""
addr_out(out::Tensor, self::Tensor, vec1::Tensor, vec2::Tensor)
Wrapper of C++ function void atg\\_addr\\_out(tensor *out\\_\\_, tensor out, tensor self, tensor vec1, tensor vec2)
"""
function addr_out(out::Tensor, self::Tensor, vec1::Tensor, vec2::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_addr_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, vec1.pointer, vec2.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
affine_grid_generator(theta::Tensor, size_data::Array{Int64}, align_corners::Int)
Wrapper of C++ function void atg\\_affine\\_grid\\_generator(tensor *out\\_\\_, tensor theta, int64\\_t *size\\_data, int size\\_len, int align\\_corners)
"""
function affine_grid_generator(theta::Tensor, size_data::Array{Int64}, align_corners::Int)
outputs__ = Int[0]
size_len = length(size_data)
__cret = ccall((:atg_affine_grid_generator, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint),
outputs__, theta.pointer, size_data, size_len, align_corners)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
affine_grid_generator_backward(grad::Tensor, size_data::Array{Int64}, align_corners::Int)
Wrapper of C++ function void atg\\_affine\\_grid\\_generator\\_backward(tensor *out\\_\\_, tensor grad, int64\\_t *size\\_data, int size\\_len, int align\\_corners)
"""
function affine_grid_generator_backward(grad::Tensor, size_data::Array{Int64}, align_corners::Int)
outputs__ = Int[0]
size_len = length(size_data)
__cret = ccall((:atg_affine_grid_generator_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint),
outputs__, grad.pointer, size_data, size_len, align_corners)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
alias(self::Tensor)
Wrapper of C++ function void atg\\_alias(tensor *out\\_\\_, tensor self)
"""
function alias(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_alias, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
align_as(self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_align\\_as(tensor *out\\_\\_, tensor self, tensor other)
"""
function align_as(self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_align_as, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
align_tensors(tensors_data::Array{Tensor{T,N}})
Wrapper of C++ function tensor *atg\\_align\\_tensors(tensor *tensors\\_data, int tensors\\_len)
"""
function align_tensors(tensors_data::Array{Tensor{T,N}}) where {T,N}
tensors_data_ta_ = map(x->x.pointer, tensors_data)
tensors_len = length(tensors_data)
__cret = ccall((:atg_align_tensors, :libtorch_capi),
Ptr{Int}, (Ptr{Cvoid}, Cint),
tensors_data_ta_, tensors_len)
ptrs__, i__ = Int[], 1
while true
ptr__ = unsafe_load(__cret, i__)
ptr__ == 0 && break
push!(ptrs__, ptr__)
i__ += 1
end
ccall(:free, Cvoid, (Ptr{Cvoid},), __cret)
return map(x -> tensor_from_ptr(Ptr{Nothing}(x)), ptrs__)
end
import Base.all
"""
all(self::Tensor)
Wrapper of C++ function void atg\\_all(tensor *out\\_\\_, tensor self)
"""
function all(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_all, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
all1(self::Tensor, dim::Int64, keepdim::Int)
Wrapper of C++ function void atg\\_all1(tensor *out\\_\\_, tensor self, int64\\_t dim, int keepdim)
"""
function all1(self::Tensor, dim::Int64, keepdim::Int)
outputs__ = Int[0]
__cret = ccall((:atg_all1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Cint),
outputs__, self.pointer, dim, keepdim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
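# Usage sketch (illustrative comment, not executed): `all` reduces over every
# element, while `all1` is the dimension-reducing overload. `dim` is forwarded
# unchanged to libtorch, so it uses ATen's 0-based numbering, and `keepdim` is an
# Int flag. A `Tensor`-from-array constructor is assumed to exist elsewhere in
# this package.
#
#     x = Tensor(rand(Float32, 2, 3))
#     all(x)           # 0-dim tensor: are all elements nonzero?
#     all1(x, 1, 0)    # reduce along ATen dim 1, dropping that dimension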
"""
all_out(out::Tensor, self::Tensor, dim::Int64, keepdim::Int)
Wrapper of C++ function void atg\\_all\\_out(tensor *out\\_\\_, tensor out, tensor self, int64\\_t dim, int keepdim)
"""
function all_out(out::Tensor, self::Tensor, dim::Int64, keepdim::Int)
outputs__ = Int[0]
__cret = ccall((:atg_all_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Cint),
outputs__, out.pointer, self.pointer, dim, keepdim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
alpha_dropout(input::Tensor, p::Float64, train::Int)
Wrapper of C++ function void atg\\_alpha\\_dropout(tensor *out\\_\\_, tensor input, double p, int train)
"""
function alpha_dropout(input::Tensor, p::Float64, train::Int)
outputs__ = Int[0]
__cret = ccall((:atg_alpha_dropout, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cdouble, Cint),
outputs__, input.pointer, p, train)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
alpha_dropout!(self::Tensor, p::Float64, train::Int)
Wrapper of C++ function void atg\\_alpha\\_dropout\\_(tensor *out\\_\\_, tensor self, double p, int train)
"""
function alpha_dropout!(self::Tensor, p::Float64, train::Int)
outputs__ = Int[0]
__cret = ccall((:atg_alpha_dropout_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cdouble, Cint),
outputs__, self.pointer, p, train)
return self
end
import Base.angle
"""
angle(self::Tensor)
Wrapper of C++ function void atg\\_angle(tensor *out\\_\\_, tensor self)
"""
function angle(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_angle, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
angle_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_angle\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function angle_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_angle_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
import Base.any
"""
any(self::Tensor)
Wrapper of C++ function void atg\\_any(tensor *out\\_\\_, tensor self)
"""
function any(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_any, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
any1(self::Tensor, dim::Int64, keepdim::Int)
Wrapper of C++ function void atg\\_any1(tensor *out\\_\\_, tensor self, int64\\_t dim, int keepdim)
"""
function any1(self::Tensor, dim::Int64, keepdim::Int)
outputs__ = Int[0]
__cret = ccall((:atg_any1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Cint),
outputs__, self.pointer, dim, keepdim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
any_out(out::Tensor, self::Tensor, dim::Int64, keepdim::Int)
Wrapper of C++ function void atg\\_any\\_out(tensor *out\\_\\_, tensor out, tensor self, int64\\_t dim, int keepdim)
"""
function any_out(out::Tensor, self::Tensor, dim::Int64, keepdim::Int)
outputs__ = Int[0]
__cret = ccall((:atg_any_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Cint),
outputs__, out.pointer, self.pointer, dim, keepdim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
arange(end_::TorchNumber, options_kind::Int, options_device::Int)
Wrapper of C++ function void atg\\_arange(tensor *out\\_\\_, scalar end, int options\\_kind, int options\\_device)
"""
function arange(end_::TorchNumber, options_kind::Int, options_device::Int)
outputs__ = Int[0]
end__s_ = Scalar(end_)
__cret = ccall((:atg_arange, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint),
outputs__, end__s_.pointer, options_kind, options_device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
arange1(start::TorchNumber, end_::TorchNumber, options_kind::Int, options_device::Int)
Wrapper of C++ function void atg\\_arange1(tensor *out\\_\\_, scalar start, scalar end, int options\\_kind, int options\\_device)
"""
function arange1(start::TorchNumber, end_::TorchNumber, options_kind::Int, options_device::Int)
outputs__ = Int[0]
start_s_ = Scalar(start)
end__s_ = Scalar(end_)
__cret = ccall((:atg_arange1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint),
outputs__, start_s_.pointer, end__s_.pointer, options_kind, options_device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
arange2(start::TorchNumber, end_::TorchNumber, step::TorchNumber, options_kind::Int, options_device::Int)
Wrapper of C++ function void atg\\_arange2(tensor *out\\_\\_, scalar start, scalar end, scalar step, int options\\_kind, int options\\_device)
"""
function arange2(start::TorchNumber, end_::TorchNumber, step::TorchNumber, options_kind::Int, options_device::Int)
outputs__ = Int[0]
start_s_ = Scalar(start)
end__s_ = Scalar(end_)
step_s_ = Scalar(step)
__cret = ccall((:atg_arange2, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint),
outputs__, start_s_.pointer, end__s_.pointer, step_s_.pointer, options_kind, options_device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
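# Usage sketch (illustrative comment, not executed): the three arange overloads
# mirror ATen's arange(end), arange(start, end) and arange(start, end, step).
# `options_kind` and `options_device` are raw ATen codes forwarded as-is; the
# concrete values used below (6 = Float, 7 = Double, -1 = CPU) are assumptions
# based on ATen's enums, not anything defined in this file.
#
#     arange(5, 6, -1)            # [0, 1, 2, 3, 4] as Float on CPU (assumed codes)
#     arange2(0, 10, 2, 7, -1)    # [0, 2, 4, 6, 8] as Double (assumed codes)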
"""
arange_out(out::Tensor, end_::TorchNumber)
Wrapper of C++ function void atg\\_arange\\_out(tensor *out\\_\\_, tensor out, scalar end)
"""
function arange_out(out::Tensor, end_::TorchNumber)
outputs__ = Int[0]
end__s_ = Scalar(end_)
__cret = ccall((:atg_arange_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, end__s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
arange_out1(out::Tensor, start::TorchNumber, end_::TorchNumber)
Wrapper of C++ function void atg\\_arange\\_out1(tensor *out\\_\\_, tensor out, scalar start, scalar end)
"""
function arange_out1(out::Tensor, start::TorchNumber, end_::TorchNumber)
outputs__ = Int[0]
start_s_ = Scalar(start)
end__s_ = Scalar(end_)
__cret = ccall((:atg_arange_out1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, start_s_.pointer, end__s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
import Base.argmax
"""
argmax(self::Tensor, dim::Int64, keepdim::Int)
Wrapper of C++ function void atg\\_argmax(tensor *out\\_\\_, tensor self, int64\\_t dim, int keepdim)
"""
function argmax(self::Tensor, dim::Int64, keepdim::Int)
outputs__ = Int[0]
__cret = ccall((:atg_argmax, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Cint),
outputs__, self.pointer, dim, keepdim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
import Base.argmin
"""
argmin(self::Tensor, dim::Int64, keepdim::Int)
Wrapper of C++ function void atg\\_argmin(tensor *out\\_\\_, tensor self, int64\\_t dim, int keepdim)
"""
function argmin(self::Tensor, dim::Int64, keepdim::Int)
outputs__ = Int[0]
__cret = ccall((:atg_argmin, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Cint),
outputs__, self.pointer, dim, keepdim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
argsort(self::Tensor, dim::Int64, descending::Int)
Wrapper of C++ function void atg\\_argsort(tensor *out\\_\\_, tensor self, int64\\_t dim, int descending)
"""
function argsort(self::Tensor, dim::Int64, descending::Int)
outputs__ = Int[0]
__cret = ccall((:atg_argsort, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Cint),
outputs__, self.pointer, dim, descending)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
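# Usage sketch (illustrative comment, not executed): argmax, argmin and argsort
# return index tensors. `dim` is forwarded unchanged (ATen's 0-based numbering);
# `keepdim` and `descending` are Int flags.
#
#     x = Tensor(rand(Float32, 4, 5))
#     argmax(x, 1, 0)     # index of the maximum along ATen dim 1
#     argsort(x, 0, 1)    # indices sorting along ATen dim 0, descending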
"""
as_strided(self::Tensor, size_data::Array{Int64}, stride_data::Array{Int64}, storage_offset::Int64)
Wrapper of C++ function void atg\\_as\\_strided(tensor *out\\_\\_, tensor self, int64\\_t *size\\_data, int size\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t storage\\_offset)
"""
function as_strided(self::Tensor, size_data::Array{Int64}, stride_data::Array{Int64}, storage_offset::Int64)
outputs__ = Int[0]
size_len = length(size_data)
stride_len = length(stride_data)
__cret = ccall((:atg_as_strided, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Clonglong),
outputs__, self.pointer, size_data, size_len, stride_data, stride_len, storage_offset)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
as_strided!(self::Tensor, size_data::Array{Int64}, stride_data::Array{Int64}, storage_offset::Int64)
Wrapper of C++ function void atg\\_as\\_strided\\_(tensor *out\\_\\_, tensor self, int64\\_t *size\\_data, int size\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t storage\\_offset)
"""
function as_strided!(self::Tensor, size_data::Array{Int64}, stride_data::Array{Int64}, storage_offset::Int64)
outputs__ = Int[0]
size_len = length(size_data)
stride_len = length(stride_data)
__cret = ccall((:atg_as_strided_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Clonglong),
outputs__, self.pointer, size_data, size_len, stride_data, stride_len, storage_offset)
return self
end
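# Usage sketch (illustrative comment, not executed): as_strided builds a view with
# explicit sizes and strides; the wrapper computes the two length arguments for
# the C call. The example assumes a contiguous 16-element tensor.
#
#     x = Tensor(collect(Float32, 1:16))
#     as_strided(x, [4, 4], [4, 1], 0)    # view the 16 elements as a 4x4 matrix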
import Base.asin
"""
asin(self::Tensor)
Wrapper of C++ function void atg\\_asin(tensor *out\\_\\_, tensor self)
"""
function asin(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_asin, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
asin!(self::Tensor)
Wrapper of C++ function void atg\\_asin\\_(tensor *out\\_\\_, tensor self)
"""
function asin!(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_asin_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
return self
end
"""
asin_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_asin\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function asin_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_asin_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
import Base.atan
"""
atan(self::Tensor)
Wrapper of C++ function void atg\\_atan(tensor *out\\_\\_, tensor self)
"""
function atan(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_atan, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
atan2(self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_atan2(tensor *out\\_\\_, tensor self, tensor other)
"""
function atan2(self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_atan2, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
atan2!(self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_atan2\\_(tensor *out\\_\\_, tensor self, tensor other)
"""
function atan2!(self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_atan2_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other.pointer)
return self
end
"""
atan2_out(out::Tensor, self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_atan2\\_out(tensor *out\\_\\_, tensor out, tensor self, tensor other)
"""
function atan2_out(out::Tensor, self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_atan2_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, other.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
atan!(self::Tensor)
Wrapper of C++ function void atg\\_atan\\_(tensor *out\\_\\_, tensor self)
"""
function atan!(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_atan_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
return self
end
"""
atan_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_atan\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function atan_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_atan_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
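# Usage sketch (illustrative comment, not executed): the element-wise wrappers in
# this file come in three flavours: `f(self)` allocates a new result, `f!(self)`
# mutates and returns `self`, and `f_out(out, self)` writes into a preallocated
# `out` tensor.
#
#     y = asin(x)         # new tensor
#     atan!(x)            # in place, returns x
#     atan_out(y, x)      # result written into y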
"""
avg_pool1d(self::Tensor, kernel_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64}, ceil_mode::Int, count_include_pad::Int)
Wrapper of C++ function void atg\\_avg\\_pool1d(tensor *out\\_\\_, tensor self, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *padding\\_data, int padding\\_len, int ceil\\_mode, int count\\_include\\_pad)
"""
function avg_pool1d(self::Tensor, kernel_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64}, ceil_mode::Int, count_include_pad::Int)
outputs__ = Int[0]
kernel_size_len = length(kernel_size_data)
stride_len = length(stride_data)
padding_len = length(padding_data)
__cret = ccall((:atg_avg_pool1d, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Cint, Cint),
outputs__, self.pointer, kernel_size_data, kernel_size_len, stride_data, stride_len, padding_data, padding_len, ceil_mode, count_include_pad)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
avg_pool2d(self::Tensor, kernel_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64}, ceil_mode::Int, count_include_pad::Int, divisor_override::Int64)
Wrapper of C++ function void atg\\_avg\\_pool2d(tensor *out\\_\\_, tensor self, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *padding\\_data, int padding\\_len, int ceil\\_mode, int count\\_include\\_pad, int64\\_t divisor\\_override)
"""
function avg_pool2d(self::Tensor, kernel_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64}, ceil_mode::Int, count_include_pad::Int, divisor_override::Int64)
outputs__ = Int[0]
kernel_size_len = length(kernel_size_data)
stride_len = length(stride_data)
padding_len = length(padding_data)
__cret = ccall((:atg_avg_pool2d, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Cint, Cint, Clonglong),
outputs__, self.pointer, kernel_size_data, kernel_size_len, stride_data, stride_len, padding_data, padding_len, ceil_mode, count_include_pad, divisor_override)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
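# Usage sketch (illustrative comment, not executed): avg_pool2d expects an NCHW
# input; kernel size, stride and padding are given per spatial dimension, and
# `ceil_mode` / `count_include_pad` are Int flags. Passing 0 for
# `divisor_override` is assumed here to mean "use the default divisor"; the value
# is forwarded to libtorch unchanged.
#
#     x = Tensor(rand(Float32, 1, 3, 8, 8))
#     avg_pool2d(x, [2, 2], [2, 2], [0, 0], 0, 1, 0)    # 2x2 pooling, stride 2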
"""
avg_pool2d_backward(grad_output::Tensor, self::Tensor, kernel_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64}, ceil_mode::Int, count_include_pad::Int, divisor_override::Int64)
Wrapper of C++ function void atg\\_avg\\_pool2d\\_backward(tensor *out\\_\\_, tensor grad\\_output, tensor self, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *padding\\_data, int padding\\_len, int ceil\\_mode, int count\\_include\\_pad, int64\\_t divisor\\_override)
"""
function avg_pool2d_backward(grad_output::Tensor, self::Tensor, kernel_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64}, ceil_mode::Int, count_include_pad::Int, divisor_override::Int64)
outputs__ = Int[0]
kernel_size_len = length(kernel_size_data)
stride_len = length(stride_data)
padding_len = length(padding_data)
__cret = ccall((:atg_avg_pool2d_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Cint, Cint, Clonglong),
outputs__, grad_output.pointer, self.pointer, kernel_size_data, kernel_size_len, stride_data, stride_len, padding_data, padding_len, ceil_mode, count_include_pad, divisor_override)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
avg_pool2d_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, kernel_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64}, ceil_mode::Int, count_include_pad::Int, divisor_override::Int64)
Wrapper of C++ function void atg\\_avg\\_pool2d\\_backward\\_out(tensor *out\\_\\_, tensor grad\\_input, tensor grad\\_output, tensor self, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *padding\\_data, int padding\\_len, int ceil\\_mode, int count\\_include\\_pad, int64\\_t divisor\\_override)
"""
function avg_pool2d_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, kernel_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64}, ceil_mode::Int, count_include_pad::Int, divisor_override::Int64)
outputs__ = Int[0]
kernel_size_len = length(kernel_size_data)
stride_len = length(stride_data)
padding_len = length(padding_data)
__cret = ccall((:atg_avg_pool2d_backward_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Cint, Cint, Clonglong),
outputs__, grad_input.pointer, grad_output.pointer, self.pointer, kernel_size_data, kernel_size_len, stride_data, stride_len, padding_data, padding_len, ceil_mode, count_include_pad, divisor_override)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
avg_pool2d_out(out::Tensor, self::Tensor, kernel_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64}, ceil_mode::Int, count_include_pad::Int, divisor_override::Int64)
Wrapper of C++ function void atg\\_avg\\_pool2d\\_out(tensor *out\\_\\_, tensor out, tensor self, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *padding\\_data, int padding\\_len, int ceil\\_mode, int count\\_include\\_pad, int64\\_t divisor\\_override)
"""
function avg_pool2d_out(out::Tensor, self::Tensor, kernel_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64}, ceil_mode::Int, count_include_pad::Int, divisor_override::Int64)
outputs__ = Int[0]
kernel_size_len = length(kernel_size_data)
stride_len = length(stride_data)
padding_len = length(padding_data)
__cret = ccall((:atg_avg_pool2d_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Cint, Cint, Clonglong),
outputs__, out.pointer, self.pointer, kernel_size_data, kernel_size_len, stride_data, stride_len, padding_data, padding_len, ceil_mode, count_include_pad, divisor_override)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
avg_pool3d(self::Tensor, kernel_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64}, ceil_mode::Int, count_include_pad::Int, divisor_override::Int64)
Wrapper of C++ function void atg\\_avg\\_pool3d(tensor *out\\_\\_, tensor self, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *padding\\_data, int padding\\_len, int ceil\\_mode, int count\\_include\\_pad, int64\\_t divisor\\_override)
"""
function avg_pool3d(self::Tensor, kernel_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64}, ceil_mode::Int, count_include_pad::Int, divisor_override::Int64)
outputs__ = Int[0]
kernel_size_len = length(kernel_size_data)
stride_len = length(stride_data)
padding_len = length(padding_data)
__cret = ccall((:atg_avg_pool3d, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Cint, Cint, Clonglong),
outputs__, self.pointer, kernel_size_data, kernel_size_len, stride_data, stride_len, padding_data, padding_len, ceil_mode, count_include_pad, divisor_override)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
avg_pool3d_backward(grad_output::Tensor, self::Tensor, kernel_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64}, ceil_mode::Int, count_include_pad::Int, divisor_override::Int64)
Wrapper of C++ function void atg\\_avg\\_pool3d\\_backward(tensor *out\\_\\_, tensor grad\\_output, tensor self, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *padding\\_data, int padding\\_len, int ceil\\_mode, int count\\_include\\_pad, int64\\_t divisor\\_override)
"""
function avg_pool3d_backward(grad_output::Tensor, self::Tensor, kernel_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64}, ceil_mode::Int, count_include_pad::Int, divisor_override::Int64)
outputs__ = Int[0]
kernel_size_len = length(kernel_size_data)
stride_len = length(stride_data)
padding_len = length(padding_data)
__cret = ccall((:atg_avg_pool3d_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Cint, Cint, Clonglong),
outputs__, grad_output.pointer, self.pointer, kernel_size_data, kernel_size_len, stride_data, stride_len, padding_data, padding_len, ceil_mode, count_include_pad, divisor_override)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
avg_pool3d_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, kernel_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64}, ceil_mode::Int, count_include_pad::Int, divisor_override::Int64)
Wrapper of C++ function void atg\\_avg\\_pool3d\\_backward\\_out(tensor *out\\_\\_, tensor grad\\_input, tensor grad\\_output, tensor self, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *padding\\_data, int padding\\_len, int ceil\\_mode, int count\\_include\\_pad, int64\\_t divisor\\_override)
"""
function avg_pool3d_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, kernel_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64}, ceil_mode::Int, count_include_pad::Int, divisor_override::Int64)
outputs__ = Int[0]
kernel_size_len = length(kernel_size_data)
stride_len = length(stride_data)
padding_len = length(padding_data)
__cret = ccall((:atg_avg_pool3d_backward_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Cint, Cint, Clonglong),
outputs__, grad_input.pointer, grad_output.pointer, self.pointer, kernel_size_data, kernel_size_len, stride_data, stride_len, padding_data, padding_len, ceil_mode, count_include_pad, divisor_override)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
avg_pool3d_out(out::Tensor, self::Tensor, kernel_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64}, ceil_mode::Int, count_include_pad::Int, divisor_override::Int64)
Wrapper of C++ function void atg\\_avg\\_pool3d\\_out(tensor *out\\_\\_, tensor out, tensor self, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *padding\\_data, int padding\\_len, int ceil\\_mode, int count\\_include\\_pad, int64\\_t divisor\\_override)
"""
function avg_pool3d_out(out::Tensor, self::Tensor, kernel_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64}, ceil_mode::Int, count_include_pad::Int, divisor_override::Int64)
outputs__ = Int[0]
kernel_size_len = length(kernel_size_data)
stride_len = length(stride_data)
padding_len = length(padding_data)
__cret = ccall((:atg_avg_pool3d_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Cint, Cint, Clonglong),
outputs__, out.pointer, self.pointer, kernel_size_data, kernel_size_len, stride_data, stride_len, padding_data, padding_len, ceil_mode, count_include_pad, divisor_override)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
baddbmm(self::Tensor, batch1::Tensor, batch2::Tensor)
Wrapper of C++ function void atg\\_baddbmm(tensor *out\\_\\_, tensor self, tensor batch1, tensor batch2)
"""
function baddbmm(self::Tensor, batch1::Tensor, batch2::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_baddbmm, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, batch1.pointer, batch2.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
baddbmm!(self::Tensor, batch1::Tensor, batch2::Tensor)
Wrapper of C++ function void atg\\_baddbmm\\_(tensor *out\\_\\_, tensor self, tensor batch1, tensor batch2)
"""
function baddbmm!(self::Tensor, batch1::Tensor, batch2::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_baddbmm_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, batch1.pointer, batch2.pointer)
return self
end
"""
baddbmm_out(out::Tensor, self::Tensor, batch1::Tensor, batch2::Tensor)
Wrapper of C++ function void atg\\_baddbmm\\_out(tensor *out\\_\\_, tensor out, tensor self, tensor batch1, tensor batch2)
"""
function baddbmm_out(out::Tensor, self::Tensor, batch1::Tensor, batch2::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_baddbmm_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, batch1.pointer, batch2.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
bartlett_window(window_length::Int64, options_kind::Int, options_device::Int)
Wrapper of C++ function void atg\\_bartlett\\_window(tensor *out\\_\\_, int64\\_t window\\_length, int options\\_kind, int options\\_device)
"""
function bartlett_window(window_length::Int64, options_kind::Int, options_device::Int)
outputs__ = Int[0]
__cret = ccall((:atg_bartlett_window, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Clonglong, Cint, Cint),
outputs__, window_length, options_kind, options_device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
bartlett_window1(window_length::Int64, periodic::Int, options_kind::Int, options_device::Int)
Wrapper of C++ function void atg\\_bartlett\\_window1(tensor *out\\_\\_, int64\\_t window\\_length, int periodic, int options\\_kind, int options\\_device)
"""
function bartlett_window1(window_length::Int64, periodic::Int, options_kind::Int, options_device::Int)
outputs__ = Int[0]
__cret = ccall((:atg_bartlett_window1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Clonglong, Cint, Cint, Cint),
outputs__, window_length, periodic, options_kind, options_device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
batch_norm(input::Tensor, weight::Tensor, bias::Tensor, running_mean::Tensor, running_var::Tensor, training::Int, momentum::Float64, eps::Float64, cudnn_enabled::Int)
Wrapper of C++ function void atg\\_batch\\_norm(tensor *out\\_\\_, tensor input, tensor weight, tensor bias, tensor running\\_mean, tensor running\\_var, int training, double momentum, double eps, int cudnn\\_enabled)
"""
function batch_norm(input::Tensor, weight::Tensor, bias::Tensor, running_mean::Tensor, running_var::Tensor, training::Int, momentum::Float64, eps::Float64, cudnn_enabled::Int)
outputs__ = Int[0]
__cret = ccall((:atg_batch_norm, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cdouble, Cdouble, Cint),
outputs__, input.pointer, weight.pointer, bias.pointer, running_mean.pointer, running_var.pointer, training, momentum, eps, cudnn_enabled)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
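# Usage sketch (illustrative comment, not executed): batch_norm forwards directly
# to at::batch_norm. `training` and `cudnn_enabled` are Int flags; when training
# is 1 the running statistics are updated in place using `momentum`, and `eps`
# stabilises the variance term.
#
#     y = batch_norm(x, weight, bias, running_mean, running_var, 1, 0.1, 1e-5, 0)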
"""
batch_norm_backward_elemt(grad_out::Tensor, input::Tensor, mean::Tensor, invstd::Tensor, weight::Tensor, mean_dy::Tensor, mean_dy_xmu::Tensor)
Wrapper of C++ function void atg\\_batch\\_norm\\_backward\\_elemt(tensor *out\\_\\_, tensor grad\\_out, tensor input, tensor mean, tensor invstd, tensor weight, tensor mean\\_dy, tensor mean\\_dy\\_xmu)
"""
function batch_norm_backward_elemt(grad_out::Tensor, input::Tensor, mean::Tensor, invstd::Tensor, weight::Tensor, mean_dy::Tensor, mean_dy_xmu::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_batch_norm_backward_elemt, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, grad_out.pointer, input.pointer, mean.pointer, invstd.pointer, weight.pointer, mean_dy.pointer, mean_dy_xmu.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
batch_norm_backward_reduce(grad_out::Tensor, input::Tensor, mean::Tensor, invstd::Tensor, weight::Tensor, input_g::Int, weight_g::Int, bias_g::Int)
Wrapper of C++ function void atg\\_batch\\_norm\\_backward\\_reduce(tensor *out\\_\\_, tensor grad\\_out, tensor input, tensor mean, tensor invstd, tensor weight, int input\\_g, int weight\\_g, int bias\\_g)
"""
function batch_norm_backward_reduce(grad_out::Tensor, input::Tensor, mean::Tensor, invstd::Tensor, weight::Tensor, input_g::Int, weight_g::Int, bias_g::Int)
outputs__ = Int[0, 0, 0, 0]
__cret = ccall((:atg_batch_norm_backward_reduce, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint, Cint),
outputs__, grad_out.pointer, input.pointer, mean.pointer, invstd.pointer, weight.pointer, input_g, weight_g, bias_g)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
__o_3 = tensor_from_ptr(Ptr{Cvoid}(outputs__[3]))
__o_4 = tensor_from_ptr(Ptr{Cvoid}(outputs__[4]))
return __o_1, __o_2, __o_3, __o_4
end
"""
batch_norm_elemt(input::Tensor, weight::Tensor, bias::Tensor, mean::Tensor, invstd::Tensor, eps::Float64)
Wrapper of C++ function void atg\\_batch\\_norm\\_elemt(tensor *out\\_\\_, tensor input, tensor weight, tensor bias, tensor mean, tensor invstd, double eps)
"""
function batch_norm_elemt(input::Tensor, weight::Tensor, bias::Tensor, mean::Tensor, invstd::Tensor, eps::Float64)
outputs__ = Int[0]
__cret = ccall((:atg_batch_norm_elemt, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cdouble),
outputs__, input.pointer, weight.pointer, bias.pointer, mean.pointer, invstd.pointer, eps)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
batch_norm_elemt_out(out::Tensor, input::Tensor, weight::Tensor, bias::Tensor, mean::Tensor, invstd::Tensor, eps::Float64)
Wrapper of C++ function void atg\\_batch\\_norm\\_elemt\\_out(tensor *out\\_\\_, tensor out, tensor input, tensor weight, tensor bias, tensor mean, tensor invstd, double eps)
"""
function batch_norm_elemt_out(out::Tensor, input::Tensor, weight::Tensor, bias::Tensor, mean::Tensor, invstd::Tensor, eps::Float64)
outputs__ = Int[0]
__cret = ccall((:atg_batch_norm_elemt_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cdouble),
outputs__, out.pointer, input.pointer, weight.pointer, bias.pointer, mean.pointer, invstd.pointer, eps)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
batch_norm_gather_stats(input::Tensor, mean::Tensor, invstd::Tensor, running_mean::Tensor, running_var::Tensor, momentum::Float64, eps::Float64, count::Int64)
Wrapper of C++ function void atg\\_batch\\_norm\\_gather\\_stats(tensor *out\\_\\_, tensor input, tensor mean, tensor invstd, tensor running\\_mean, tensor running\\_var, double momentum, double eps, int64\\_t count)
"""
function batch_norm_gather_stats(input::Tensor, mean::Tensor, invstd::Tensor, running_mean::Tensor, running_var::Tensor, momentum::Float64, eps::Float64, count::Int64)
outputs__ = Int[0, 0]
__cret = ccall((:atg_batch_norm_gather_stats, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cdouble, Cdouble, Clonglong),
outputs__, input.pointer, mean.pointer, invstd.pointer, running_mean.pointer, running_var.pointer, momentum, eps, count)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
batch_norm_gather_stats_with_counts(input::Tensor, mean::Tensor, invstd::Tensor, running_mean::Tensor, running_var::Tensor, momentum::Float64, eps::Float64, counts_data::Array{Int64})
Wrapper of C++ function void atg\\_batch\\_norm\\_gather\\_stats\\_with\\_counts(tensor *out\\_\\_, tensor input, tensor mean, tensor invstd, tensor running\\_mean, tensor running\\_var, double momentum, double eps, int64\\_t *counts\\_data, int counts\\_len)
"""
function batch_norm_gather_stats_with_counts(input::Tensor, mean::Tensor, invstd::Tensor, running_mean::Tensor, running_var::Tensor, momentum::Float64, eps::Float64, counts_data::Array{Int64})
outputs__ = Int[0, 0]
counts_len = length(counts_data)
__cret = ccall((:atg_batch_norm_gather_stats_with_counts, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cdouble, Cdouble, Ptr{Cvoid}, Cint),
outputs__, input.pointer, mean.pointer, invstd.pointer, running_mean.pointer, running_var.pointer, momentum, eps, counts_data, counts_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
batch_norm_stats(input::Tensor, eps::Float64)
Wrapper of C++ function void atg\\_batch\\_norm\\_stats(tensor *out\\_\\_, tensor input, double eps)
"""
function batch_norm_stats(input::Tensor, eps::Float64)
outputs__ = Int[0, 0]
__cret = ccall((:atg_batch_norm_stats, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cdouble),
outputs__, input.pointer, eps)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
batch_norm_update_stats(input::Tensor, running_mean::Tensor, running_var::Tensor, momentum::Float64)
Wrapper of C++ function void atg\\_batch\\_norm\\_update\\_stats(tensor *out\\_\\_, tensor input, tensor running\\_mean, tensor running\\_var, double momentum)
"""
function batch_norm_update_stats(input::Tensor, running_mean::Tensor, running_var::Tensor, momentum::Float64)
outputs__ = Int[0, 0]
__cret = ccall((:atg_batch_norm_update_stats, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cdouble),
outputs__, input.pointer, running_mean.pointer, running_var.pointer, momentum)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
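# Usage sketch (illustrative comment, not executed): the batch_norm_*stats helpers
# above are multi-output wrappers: the C function fills several slots of
# `outputs__`, and each slot is converted to a Tensor and returned as a tuple.
#
#     mean, invstd = batch_norm_stats(x, 1e-5)
#     new_mean, new_var = batch_norm_update_stats(x, running_mean, running_var, 0.1)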
"""
bernoulli(self::Tensor)
Wrapper of C++ function void atg\\_bernoulli(tensor *out\\_\\_, tensor self)
"""
function bernoulli(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_bernoulli, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
bernoulli1(self::Tensor, p::Float64)
Wrapper of C++ function void atg\\_bernoulli1(tensor *out\\_\\_, tensor self, double p)
"""
function bernoulli1(self::Tensor, p::Float64)
outputs__ = Int[0]
__cret = ccall((:atg_bernoulli1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cdouble),
outputs__, self.pointer, p)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
bernoulli!(self::Tensor, p::Tensor)
Wrapper of C++ function void atg\\_bernoulli\\_(tensor *out\\_\\_, tensor self, tensor p)
"""
function bernoulli!(self::Tensor, p::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_bernoulli_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, p.pointer)
return self
end
"""
bernoulli1!(self::Tensor, p::Float64)
Wrapper of C++ function void atg\\_bernoulli\\_1(tensor *out\\_\\_, tensor self, double p)
"""
function bernoulli1!(self::Tensor, p::Float64)
outputs__ = Int[0]
__cret = ccall((:atg_bernoulli_1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cdouble),
outputs__, self.pointer, p)
return self
end
"""
bernoulli_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_bernoulli\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function bernoulli_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_bernoulli_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
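# Usage sketch (illustrative comment, not executed): bernoulli draws 0/1 samples.
# `bernoulli(self)` treats the elements of `self` as per-element probabilities,
# `bernoulli1(self, p)` uses a single probability for every element, and the `!`
# forms overwrite `self` with the samples.
#
#     mask = bernoulli1(x, 0.5)    # 0/1 tensor with the same shape as x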
"""
bilinear(input1::Tensor, input2::Tensor, weight::Tensor, bias::Tensor)
Wrapper of C++ function void atg\\_bilinear(tensor *out\\_\\_, tensor input1, tensor input2, tensor weight, tensor bias)
"""
function bilinear(input1::Tensor, input2::Tensor, weight::Tensor, bias::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_bilinear, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, input1.pointer, input2.pointer, weight.pointer, bias.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
binary_cross_entropy(self::Tensor, target::Tensor, weight::Tensor, reduction::Int64)
Wrapper of C++ function void atg\\_binary\\_cross\\_entropy(tensor *out\\_\\_, tensor self, tensor target, tensor weight, int64\\_t reduction)
"""
function binary_cross_entropy(self::Tensor, target::Tensor, weight::Tensor, reduction::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_binary_cross_entropy, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, self.pointer, target.pointer, weight.pointer, reduction)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
binary_cross_entropy_backward(grad_output::Tensor, self::Tensor, target::Tensor, weight::Tensor, reduction::Int64)
Wrapper of C++ function void atg\\_binary\\_cross\\_entropy\\_backward(tensor *out\\_\\_, tensor grad\\_output, tensor self, tensor target, tensor weight, int64\\_t reduction)
"""
function binary_cross_entropy_backward(grad_output::Tensor, self::Tensor, target::Tensor, weight::Tensor, reduction::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_binary_cross_entropy_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, grad_output.pointer, self.pointer, target.pointer, weight.pointer, reduction)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
binary_cross_entropy_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, target::Tensor, weight::Tensor, reduction::Int64)
Wrapper of C++ function void atg\\_binary\\_cross\\_entropy\\_backward\\_out(tensor *out\\_\\_, tensor grad\\_input, tensor grad\\_output, tensor self, tensor target, tensor weight, int64\\_t reduction)
"""
function binary_cross_entropy_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, target::Tensor, weight::Tensor, reduction::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_binary_cross_entropy_backward_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, grad_input.pointer, grad_output.pointer, self.pointer, target.pointer, weight.pointer, reduction)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
binary_cross_entropy_out(out::Tensor, self::Tensor, target::Tensor, weight::Tensor, reduction::Int64)
Wrapper of C++ function void atg\\_binary\\_cross\\_entropy\\_out(tensor *out\\_\\_, tensor out, tensor self, tensor target, tensor weight, int64\\_t reduction)
"""
function binary_cross_entropy_out(out::Tensor, self::Tensor, target::Tensor, weight::Tensor, reduction::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_binary_cross_entropy_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, out.pointer, self.pointer, target.pointer, weight.pointer, reduction)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
binary_cross_entropy_with_logits(self::Tensor, target::Tensor, weight::Tensor, pos_weight::Tensor, reduction::Int64)
Wrapper of C++ function void atg\\_binary\\_cross\\_entropy\\_with\\_logits(tensor *out\\_\\_, tensor self, tensor target, tensor weight, tensor pos\\_weight, int64\\_t reduction)
"""
function binary_cross_entropy_with_logits(self::Tensor, target::Tensor, weight::Tensor, pos_weight::Tensor, reduction::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_binary_cross_entropy_with_logits, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, self.pointer, target.pointer, weight.pointer, pos_weight.pointer, reduction)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
binary_cross_entropy_with_logits_backward(grad_output::Tensor, self::Tensor, target::Tensor, weight::Tensor, pos_weight::Tensor, reduction::Int64)
Wrapper of C++ function void atg\\_binary\\_cross\\_entropy\\_with\\_logits\\_backward(tensor *out\\_\\_, tensor grad\\_output, tensor self, tensor target, tensor weight, tensor pos\\_weight, int64\\_t reduction)
"""
function binary_cross_entropy_with_logits_backward(grad_output::Tensor, self::Tensor, target::Tensor, weight::Tensor, pos_weight::Tensor, reduction::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_binary_cross_entropy_with_logits_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, grad_output.pointer, self.pointer, target.pointer, weight.pointer, pos_weight.pointer, reduction)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
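# Usage sketch (illustrative comment, not executed): the binary_cross_entropy*
# wrappers expose the loss, its _out variant and the explicit backward pass.
# `reduction` is forwarded unchanged; the mapping 0 = none, 1 = mean, 2 = sum is
# an assumption based on ATen's Reduction enum, not defined in this file.
#
#     loss = binary_cross_entropy_with_logits(logits, target, weight, pos_weight, 1)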
"""
bincount(self::Tensor, weights::Tensor, minlength::Int64)
Wrapper of C++ function void atg\\_bincount(tensor *out\\_\\_, tensor self, tensor weights, int64\\_t minlength)
"""
function bincount(self::Tensor, weights::Tensor, minlength::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_bincount, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, self.pointer, weights.pointer, minlength)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
bitwise_not(self::Tensor)
Wrapper of C++ function void atg\\_bitwise\\_not(tensor *out\\_\\_, tensor self)
"""
function bitwise_not(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_bitwise_not, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
bitwise_not!(self::Tensor)
Wrapper of C++ function void atg\\_bitwise\\_not\\_(tensor *out\\_\\_, tensor self)
"""
function bitwise_not!(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_bitwise_not_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
return self
end
"""
bitwise_not_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_bitwise\\_not\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function bitwise_not_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_bitwise_not_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
bitwise_xor(self::Tensor, other::TorchNumber)
Wrapper of C++ function void atg\\_bitwise\\_xor(tensor *out\\_\\_, tensor self, scalar other)
"""
function bitwise_xor(self::Tensor, other::TorchNumber)
outputs__ = Int[0]
other_s_ = Scalar(other)
__cret = ccall((:atg_bitwise_xor, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
bitwise_xor1(self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_bitwise\\_xor1(tensor *out\\_\\_, tensor self, tensor other)
"""
function bitwise_xor1(self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_bitwise_xor1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
bitwise_xor!(self::Tensor, other::TorchNumber)
Wrapper of C++ function void atg\\_bitwise\\_xor\\_(tensor *out\\_\\_, tensor self, scalar other)
"""
function bitwise_xor!(self::Tensor, other::TorchNumber)
outputs__ = Int[0]
other_s_ = Scalar(other)
__cret = ccall((:atg_bitwise_xor_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other_s_.pointer)
return self
end
"""
bitwise_xor1!(self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_bitwise\\_xor\\_1(tensor *out\\_\\_, tensor self, tensor other)
"""
function bitwise_xor1!(self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_bitwise_xor_1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other.pointer)
return self
end
"""
bitwise_xor_out(out::Tensor, self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_bitwise\\_xor\\_out(tensor *out\\_\\_, tensor out, tensor self, tensor other)
"""
function bitwise_xor_out(out::Tensor, self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_bitwise_xor_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, other.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
bitwise_xor_out1(out::Tensor, self::Tensor, other::TorchNumber)
Wrapper of C++ function void atg\\_bitwise\\_xor\\_out1(tensor *out\\_\\_, tensor out, tensor self, scalar other)
"""
function bitwise_xor_out1(out::Tensor, self::Tensor, other::TorchNumber)
outputs__ = Int[0]
other_s_ = Scalar(other)
__cret = ccall((:atg_bitwise_xor_out1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, other_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
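# Usage sketch (illustrative comment, not executed): the bitwise_xor family shows
# the generator's overload naming used throughout this file: the base name takes a
# scalar `other`, the `1` suffix takes a tensor `other`, `!` mutates `self`, and
# `_out` / `_out1` write into `out` for the tensor and scalar right-hand sides.
#
#     bitwise_xor(x, 1)      # x xor scalar
#     bitwise_xor1(x, y)     # x xor tensor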
"""
blackman_window(window_length::Int64, options_kind::Int, options_device::Int)
Wrapper of C++ function void atg\\_blackman\\_window(tensor *out\\_\\_, int64\\_t window\\_length, int options\\_kind, int options\\_device)
"""
function blackman_window(window_length::Int64, options_kind::Int, options_device::Int)
outputs__ = Int[0]
__cret = ccall((:atg_blackman_window, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Clonglong, Cint, Cint),
outputs__, window_length, options_kind, options_device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
blackman_window1(window_length::Int64, periodic::Int, options_kind::Int, options_device::Int)
Wrapper of C++ function void atg\\_blackman\\_window1(tensor *out\\_\\_, int64\\_t window\\_length, int periodic, int options\\_kind, int options\\_device)
"""
function blackman_window1(window_length::Int64, periodic::Int, options_kind::Int, options_device::Int)
outputs__ = Int[0]
__cret = ccall((:atg_blackman_window1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Clonglong, Cint, Cint, Cint),
outputs__, window_length, periodic, options_kind, options_device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
bmm(self::Tensor, mat2::Tensor)
Wrapper of C++ function void atg\\_bmm(tensor *out\\_\\_, tensor self, tensor mat2)
"""
function bmm(self::Tensor, mat2::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_bmm, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, mat2.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
bmm_out(out::Tensor, self::Tensor, mat2::Tensor)
Wrapper of C++ function void atg\\_bmm\\_out(tensor *out\\_\\_, tensor out, tensor self, tensor mat2)
"""
function bmm_out(out::Tensor, self::Tensor, mat2::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_bmm_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, mat2.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
broadcast_tensors(tensors_data::Array{Tensor{T,N}})
Wrapper of C++ function tensor *atg\\_broadcast\\_tensors(tensor *tensors\\_data, int tensors\\_len)
"""
function broadcast_tensors(tensors_data::Array{Tensor{T,N}}) where {T,N}
tensors_data_ta_ = map(x->x.pointer, tensors_data)
tensors_len = length(tensors_data)
__cret = ccall((:atg_broadcast_tensors, :libtorch_capi),
Ptr{Int}, (Ptr{Cvoid}, Cint),
tensors_data_ta_, tensors_len)
ptrs__, i__ = Int[], 1
while true
ptr__ = unsafe_load(__cret, i__)
ptr__ == 0 && break
push!(ptrs__, ptr__)
i__ += 1
end
ccall(:free, Cvoid, (Ptr{Cvoid},), __cret)
return map(x -> tensor_from_ptr(Ptr{Nothing}(x)), ptrs__)
end
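# Usage sketch (illustrative comment, not executed): list-returning wrappers such
# as align_tensors, broadcast_tensors and chunk read a NULL-terminated array of
# tensor handles from the C side, wrap each handle as a Tensor, free the C array,
# and return a Vector.
#
#     xs = broadcast_tensors([a, b])    # tensors expanded to a common shape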
"""
cartesian_prod(tensors_data::Array{Tensor{T,N}})
Wrapper of C++ function void atg\\_cartesian\\_prod(tensor *out\\_\\_, tensor *tensors\\_data, int tensors\\_len)
"""
function cartesian_prod(tensors_data::Array{Tensor{T,N}}) where {T,N}
outputs__ = Int[0]
tensors_data_ta_ = map(x->x.pointer, tensors_data)
tensors_len = length(tensors_data)
__cret = ccall((:atg_cartesian_prod, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, tensors_data_ta_, tensors_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
import Base.cat
"""
cat(tensors_data::Array{Tensor{T,N}}, dim::Int64)
Wrapper of C++ function void atg\\_cat(tensor *out\\_\\_, tensor *tensors\\_data, int tensors\\_len, int64\\_t dim)
"""
function cat(tensors_data::Array{Tensor{T,N}}, dim::Int64) where {T,N}
outputs__ = Int[0]
tensors_data_ta_ = map(x->x.pointer, tensors_data)
tensors_len = length(tensors_data)
__cret = ccall((:atg_cat, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint, Clonglong),
outputs__, tensors_data_ta_, tensors_len, dim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
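# Usage sketch (illustrative comment, not executed): cat concatenates along `dim`,
# which is forwarded unchanged and therefore uses ATen's 0-based numbering.
#
#     z = cat([x, y], 0)    # concatenate x and y along ATen dim 0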
"""
cat_out(out::Tensor, tensors_data::Array{Tensor{T,N}}, dim::Int64)
Wrapper of C++ function void atg\\_cat\\_out(tensor *out\\_\\_, tensor out, tensor *tensors\\_data, int tensors\\_len, int64\\_t dim)
"""
function cat_out(out::Tensor, tensors_data::Array{Tensor{T,N}}, dim::Int64) where {T,N}
outputs__ = Int[0]
tensors_data_ta_ = map(x->x.pointer, tensors_data)
tensors_len = length(tensors_data)
__cret = ccall((:atg_cat_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Clonglong),
outputs__, out.pointer, tensors_data_ta_, tensors_len, dim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
cauchy!(self::Tensor, median::Float64, sigma::Float64)
Wrapper of C++ function void atg\\_cauchy\\_(tensor *out\\_\\_, tensor self, double median, double sigma)
"""
function cauchy!(self::Tensor, median::Float64, sigma::Float64)
outputs__ = Int[0]
__cret = ccall((:atg_cauchy_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cdouble, Cdouble),
outputs__, self.pointer, median, sigma)
return self
end
"""
cdist(x1::Tensor, x2::Tensor, p::Float64, compute_mode::Int64)
Wrapper of C++ function void atg\\_cdist(tensor *out\\_\\_, tensor x1, tensor x2, double p, int64\\_t compute\\_mode)
"""
function cdist(x1::Tensor, x2::Tensor, p::Float64, compute_mode::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_cdist, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cdouble, Clonglong),
outputs__, x1.pointer, x2.pointer, p, compute_mode)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
import Base.ceil
"""
ceil(self::Tensor)
Wrapper of C++ function void atg\\_ceil(tensor *out\\_\\_, tensor self)
"""
function ceil(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_ceil, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
ceil!(self::Tensor)
Wrapper of C++ function void atg\\_ceil\\_(tensor *out\\_\\_, tensor self)
"""
function ceil!(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_ceil_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
return self
end
"""
ceil_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_ceil\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function ceil_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_ceil_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
celu(self::Tensor)
Wrapper of C++ function void atg\\_celu(tensor *out\\_\\_, tensor self)
"""
function celu(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_celu, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
celu!(self::Tensor)
Wrapper of C++ function void atg\\_celu\\_(tensor *out\\_\\_, tensor self)
"""
function celu!(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_celu_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
return self
end
"""
chain_matmul(matrices_data::Array{Tensor{T,N}})
Wrapper of C++ function void atg\\_chain\\_matmul(tensor *out\\_\\_, tensor *matrices\\_data, int matrices\\_len)
"""
function chain_matmul(matrices_data::Array{Tensor{T,N}}) where {T,N}
outputs__ = Int[0]
matrices_data_ta_ = map(x->x.pointer, matrices_data)
matrices_len = length(matrices_data)
__cret = ccall((:atg_chain_matmul, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, matrices_data_ta_, matrices_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
cholesky(self::Tensor, upper::Int)
Wrapper of C++ function void atg\\_cholesky(tensor *out\\_\\_, tensor self, int upper)
"""
function cholesky(self::Tensor, upper::Int)
outputs__ = Int[0]
__cret = ccall((:atg_cholesky, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, upper)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
cholesky_inverse(self::Tensor, upper::Int)
Wrapper of C++ function void atg\\_cholesky\\_inverse(tensor *out\\_\\_, tensor self, int upper)
"""
function cholesky_inverse(self::Tensor, upper::Int)
outputs__ = Int[0]
__cret = ccall((:atg_cholesky_inverse, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, upper)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
cholesky_inverse_out(out::Tensor, self::Tensor, upper::Int)
Wrapper of C++ function void atg\\_cholesky\\_inverse\\_out(tensor *out\\_\\_, tensor out, tensor self, int upper)
"""
function cholesky_inverse_out(out::Tensor, self::Tensor, upper::Int)
outputs__ = Int[0]
__cret = ccall((:atg_cholesky_inverse_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, out.pointer, self.pointer, upper)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
cholesky_out(out::Tensor, self::Tensor, upper::Int)
Wrapper of C++ function void atg\\_cholesky\\_out(tensor *out\\_\\_, tensor out, tensor self, int upper)
"""
function cholesky_out(out::Tensor, self::Tensor, upper::Int)
outputs__ = Int[0]
__cret = ccall((:atg_cholesky_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, out.pointer, self.pointer, upper)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
cholesky_solve(self::Tensor, input2::Tensor, upper::Int)
Wrapper of C++ function void atg\\_cholesky\\_solve(tensor *out\\_\\_, tensor self, tensor input2, int upper)
"""
function cholesky_solve(self::Tensor, input2::Tensor, upper::Int)
outputs__ = Int[0]
__cret = ccall((:atg_cholesky_solve, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, input2.pointer, upper)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
cholesky_solve_out(out::Tensor, self::Tensor, input2::Tensor, upper::Int)
Wrapper of C++ function void atg\\_cholesky\\_solve\\_out(tensor *out\\_\\_, tensor out, tensor self, tensor input2, int upper)
"""
function cholesky_solve_out(out::Tensor, self::Tensor, input2::Tensor, upper::Int)
outputs__ = Int[0]
__cret = ccall((:atg_cholesky_solve_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, out.pointer, self.pointer, input2.pointer, upper)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
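# Usage sketch (illustrative comment, not executed): the cholesky* wrappers cover
# the factorisation, its inverse and solves against an existing factor; `upper` is
# an Int flag selecting the upper- or lower-triangular convention.
#
#     L = cholesky(A, 0)              # lower-triangular factor of a PD matrix
#     x = cholesky_solve(b, L, 0)     # solve A x = b using that factor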
"""
chunk(self::Tensor, chunks::Int64, dim::Int64)
Wrapper of C++ function tensor *atg\\_chunk(tensor self, int64\\_t chunks, int64\\_t dim)
"""
function chunk(self::Tensor, chunks::Int64, dim::Int64)
__cret = ccall((:atg_chunk, :libtorch_capi),
Ptr{Int}, (Ptr{Cvoid}, Clonglong, Clonglong),
self.pointer, chunks, dim)
ptrs__, i__ = Int[], 1
while true
ptr__ = unsafe_load(__cret, i__)
ptr__ == 0 && break
push!(ptrs__, ptr__)
i__ += 1
end
ccall(:free, Cvoid, (Ptr{Cvoid},), __cret)
return map(x -> tensor_from_ptr(Ptr{Nothing}(x)), ptrs__)
end
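# Usage sketch (illustrative comment, not executed): chunk splits a tensor into up
# to `chunks` pieces along `dim` and is another of the list-returning wrappers
# described above.
#
#     parts = chunk(x, 4, 0)    # up to four tensors split along ATen dim 0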
import Base.clamp
"""
clamp(self::Tensor, min::TorchNumber, max::TorchNumber)
Wrapper of C++ function void atg\\_clamp(tensor *out\\_\\_, tensor self, scalar min, scalar max)
"""
function clamp(self::Tensor, min::TorchNumber, max::TorchNumber)
outputs__ = Int[0]
min_s_ = Scalar(min)
max_s_ = Scalar(max)
__cret = ccall((:atg_clamp, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, min_s_.pointer, max_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
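# Usage sketch (hand-written, not part of the generated bindings): `clamp`
# limits every element of a tensor to the closed interval [min, max] and
# returns a new tensor; the `clamp!` variant below mutates `self` instead.
# This assumes plain Julia floats fall under the package's `TorchNumber`
# union, as the `Scalar(...)` conversion in the wrapper suggests.
function _sketch_clamp_unit_interval(x::Tensor)
    clamp(x, 0.0, 1.0)   # element-wise clamp into [0, 1]
end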
import Base.clamp!
"""
clamp!(self::Tensor, min::TorchNumber, max::TorchNumber)
Wrapper of C++ function void atg\\_clamp\\_(tensor *out\\_\\_, tensor self, scalar min, scalar max)
"""
function clamp!(self::Tensor, min::TorchNumber, max::TorchNumber)
outputs__ = Int[0]
min_s_ = Scalar(min)
max_s_ = Scalar(max)
__cret = ccall((:atg_clamp_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, min_s_.pointer, max_s_.pointer)
return self
end
"""
clamp_max(self::Tensor, max::TorchNumber)
Wrapper of C++ function void atg\\_clamp\\_max(tensor *out\\_\\_, tensor self, scalar max)
"""
function clamp_max(self::Tensor, max::TorchNumber)
outputs__ = Int[0]
max_s_ = Scalar(max)
__cret = ccall((:atg_clamp_max, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, max_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
clamp_max!(self::Tensor, max::TorchNumber)
Wrapper of C++ function void atg\\_clamp\\_max\\_(tensor *out\\_\\_, tensor self, scalar max)
"""
function clamp_max!(self::Tensor, max::TorchNumber)
outputs__ = Int[0]
max_s_ = Scalar(max)
__cret = ccall((:atg_clamp_max_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, max_s_.pointer)
return self
end
"""
clamp_max_out(out::Tensor, self::Tensor, max::TorchNumber)
Wrapper of C++ function void atg\\_clamp\\_max\\_out(tensor *out\\_\\_, tensor out, tensor self, scalar max)
"""
function clamp_max_out(out::Tensor, self::Tensor, max::TorchNumber)
outputs__ = Int[0]
max_s_ = Scalar(max)
__cret = ccall((:atg_clamp_max_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, max_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
clamp_min(self::Tensor, min::TorchNumber)
Wrapper of C++ function void atg\\_clamp\\_min(tensor *out\\_\\_, tensor self, scalar min)
"""
function clamp_min(self::Tensor, min::TorchNumber)
outputs__ = Int[0]
min_s_ = Scalar(min)
__cret = ccall((:atg_clamp_min, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, min_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
clamp_min!(self::Tensor, min::TorchNumber)
Wrapper of C++ function void atg\\_clamp\\_min\\_(tensor *out\\_\\_, tensor self, scalar min)
"""
function clamp_min!(self::Tensor, min::TorchNumber)
outputs__ = Int[0]
min_s_ = Scalar(min)
__cret = ccall((:atg_clamp_min_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, min_s_.pointer)
return self
end
"""
clamp_min_out(out::Tensor, self::Tensor, min::TorchNumber)
Wrapper of C++ function void atg\\_clamp\\_min\\_out(tensor *out\\_\\_, tensor out, tensor self, scalar min)
"""
function clamp_min_out(out::Tensor, self::Tensor, min::TorchNumber)
outputs__ = Int[0]
min_s_ = Scalar(min)
__cret = ccall((:atg_clamp_min_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, min_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
clamp_out(out::Tensor, self::Tensor, min::TorchNumber, max::TorchNumber)
Wrapper of C++ function void atg\\_clamp\\_out(tensor *out\\_\\_, tensor out, tensor self, scalar min, scalar max)
"""
function clamp_out(out::Tensor, self::Tensor, min::TorchNumber, max::TorchNumber)
outputs__ = Int[0]
min_s_ = Scalar(min)
max_s_ = Scalar(max)
__cret = ccall((:atg_clamp_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, min_s_.pointer, max_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
clone(self::Tensor)
Wrapper of C++ function void atg\\_clone(tensor *out\\_\\_, tensor self)
"""
function clone(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_clone, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
import Base.coalesce
"""
coalesce(self::Tensor)
Wrapper of C++ function void atg\\_coalesce(tensor *out\\_\\_, tensor self)
"""
function coalesce(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_coalesce, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
col2im(self::Tensor, output_size_data::Array{Int64}, kernel_size_data::Array{Int64}, dilation_data::Array{Int64}, padding_data::Array{Int64}, stride_data::Array{Int64})
Wrapper of C++ function void atg\\_col2im(tensor *out\\_\\_, tensor self, int64\\_t *output\\_size\\_data, int output\\_size\\_len, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, int64\\_t *dilation\\_data, int dilation\\_len, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *stride\\_data, int stride\\_len)
"""
function col2im(self::Tensor, output_size_data::Array{Int64}, kernel_size_data::Array{Int64}, dilation_data::Array{Int64}, padding_data::Array{Int64}, stride_data::Array{Int64})
outputs__ = Int[0]
output_size_len = length(output_size_data)
kernel_size_len = length(kernel_size_data)
dilation_len = length(dilation_data)
padding_len = length(padding_data)
stride_len = length(stride_data)
__cret = ccall((:atg_col2im, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint),
outputs__, self.pointer, output_size_data, output_size_len, kernel_size_data, kernel_size_len, dilation_data, dilation_len, padding_data, padding_len, stride_data, stride_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
col2im_backward(grad_output::Tensor, kernel_size_data::Array{Int64}, dilation_data::Array{Int64}, padding_data::Array{Int64}, stride_data::Array{Int64})
Wrapper of C++ function void atg\\_col2im\\_backward(tensor *out\\_\\_, tensor grad\\_output, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, int64\\_t *dilation\\_data, int dilation\\_len, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *stride\\_data, int stride\\_len)
"""
function col2im_backward(grad_output::Tensor, kernel_size_data::Array{Int64}, dilation_data::Array{Int64}, padding_data::Array{Int64}, stride_data::Array{Int64})
outputs__ = Int[0]
kernel_size_len = length(kernel_size_data)
dilation_len = length(dilation_data)
padding_len = length(padding_data)
stride_len = length(stride_data)
__cret = ccall((:atg_col2im_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint),
outputs__, grad_output.pointer, kernel_size_data, kernel_size_len, dilation_data, dilation_len, padding_data, padding_len, stride_data, stride_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
col2im_backward_out(grad_input::Tensor, grad_output::Tensor, kernel_size_data::Array{Int64}, dilation_data::Array{Int64}, padding_data::Array{Int64}, stride_data::Array{Int64})
Wrapper of C++ function void atg\\_col2im\\_backward\\_out(tensor *out\\_\\_, tensor grad\\_input, tensor grad\\_output, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, int64\\_t *dilation\\_data, int dilation\\_len, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *stride\\_data, int stride\\_len)
"""
function col2im_backward_out(grad_input::Tensor, grad_output::Tensor, kernel_size_data::Array{Int64}, dilation_data::Array{Int64}, padding_data::Array{Int64}, stride_data::Array{Int64})
outputs__ = Int[0]
kernel_size_len = length(kernel_size_data)
dilation_len = length(dilation_data)
padding_len = length(padding_data)
stride_len = length(stride_data)
__cret = ccall((:atg_col2im_backward_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint),
outputs__, grad_input.pointer, grad_output.pointer, kernel_size_data, kernel_size_len, dilation_data, dilation_len, padding_data, padding_len, stride_data, stride_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
col2im_out(out::Tensor, self::Tensor, output_size_data::Array{Int64}, kernel_size_data::Array{Int64}, dilation_data::Array{Int64}, padding_data::Array{Int64}, stride_data::Array{Int64})
Wrapper of C++ function void atg\\_col2im\\_out(tensor *out\\_\\_, tensor out, tensor self, int64\\_t *output\\_size\\_data, int output\\_size\\_len, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, int64\\_t *dilation\\_data, int dilation\\_len, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *stride\\_data, int stride\\_len)
"""
function col2im_out(out::Tensor, self::Tensor, output_size_data::Array{Int64}, kernel_size_data::Array{Int64}, dilation_data::Array{Int64}, padding_data::Array{Int64}, stride_data::Array{Int64})
outputs__ = Int[0]
output_size_len = length(output_size_data)
kernel_size_len = length(kernel_size_data)
dilation_len = length(dilation_data)
padding_len = length(padding_data)
stride_len = length(stride_data)
__cret = ccall((:atg_col2im_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint),
outputs__, out.pointer, self.pointer, output_size_data, output_size_len, kernel_size_data, kernel_size_len, dilation_data, dilation_len, padding_data, padding_len, stride_data, stride_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
combinations(self::Tensor, r::Int64, with_replacement::Int)
Wrapper of C++ function void atg\\_combinations(tensor *out\\_\\_, tensor self, int64\\_t r, int with\\_replacement)
"""
function combinations(self::Tensor, r::Int64, with_replacement::Int)
outputs__ = Int[0]
__cret = ccall((:atg_combinations, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Cint),
outputs__, self.pointer, r, with_replacement)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
import Base.conj
"""
conj(self::Tensor)
Wrapper of C++ function void atg\\_conj(tensor *out\\_\\_, tensor self)
"""
function conj(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_conj, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
conj_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_conj\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function conj_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_conj_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
constant_pad_nd(self::Tensor, pad_data::Array{Int64})
Wrapper of C++ function void atg\\_constant\\_pad\\_nd(tensor *out\\_\\_, tensor self, int64\\_t *pad\\_data, int pad\\_len)
"""
function constant_pad_nd(self::Tensor, pad_data::Array{Int64})
outputs__ = Int[0]
pad_len = length(pad_data)
__cret = ccall((:atg_constant_pad_nd, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, pad_data, pad_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
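# Usage sketch (hand-written, not part of the generated bindings): `pad_data`
# follows ATen's convention of listing (before, after) padding sizes starting
# from the LAST dimension. The snippet pads the last dimension of `x` by one
# element on each side; the fill value is whatever the underlying
# atg_constant_pad_nd call uses by default, since no scalar value argument is
# exposed by this wrapper.
function _sketch_pad_last_dim(x::Tensor)
    constant_pad_nd(x, Int64[1, 1])
end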
"""
contiguous(self::Tensor)
Wrapper of C++ function void atg\\_contiguous(tensor *out\\_\\_, tensor self)
"""
function contiguous(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_contiguous, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
conv1d(input::Tensor, weight::Tensor, bias::Tensor, stride_data::Array{Int64}, padding_data::Array{Int64}, dilation_data::Array{Int64}, groups::Int64)
Wrapper of C++ function void atg\\_conv1d(tensor *out\\_\\_, tensor input, tensor weight, tensor bias, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *dilation\\_data, int dilation\\_len, int64\\_t groups)
"""
function conv1d(input::Tensor, weight::Tensor, bias::Tensor, stride_data::Array{Int64}, padding_data::Array{Int64}, dilation_data::Array{Int64}, groups::Int64)
outputs__ = Int[0]
stride_len = length(stride_data)
padding_len = length(padding_data)
dilation_len = length(dilation_data)
__cret = ccall((:atg_conv1d, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Clonglong),
outputs__, input.pointer, weight.pointer, bias.pointer, stride_data, stride_len, padding_data, padding_len, dilation_data, dilation_len, groups)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
conv2d(input::Tensor, weight::Tensor, bias::Tensor, stride_data::Array{Int64}, padding_data::Array{Int64}, dilation_data::Array{Int64}, groups::Int64)
Wrapper of C++ function void atg\\_conv2d(tensor *out\\_\\_, tensor input, tensor weight, tensor bias, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *dilation\\_data, int dilation\\_len, int64\\_t groups)
"""
function conv2d(input::Tensor, weight::Tensor, bias::Tensor, stride_data::Array{Int64}, padding_data::Array{Int64}, dilation_data::Array{Int64}, groups::Int64)
outputs__ = Int[0]
stride_len = length(stride_data)
padding_len = length(padding_data)
dilation_len = length(dilation_data)
__cret = ccall((:atg_conv2d, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Clonglong),
outputs__, input.pointer, weight.pointer, bias.pointer, stride_data, stride_len, padding_data, padding_len, dilation_data, dilation_len, groups)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
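# Usage sketch (hand-written, not part of the generated bindings): a plain
# 2-D convolution with unit stride, zero padding and no dilation. `input` is
# expected in NCHW layout and `weight` in (out_channels, in_channels/groups,
# kH, kW) layout, matching the ATen conv2d these wrappers bind; `bias` is
# required by this signature.
function _sketch_conv2d(input::Tensor, weight::Tensor, bias::Tensor)
    conv2d(input, weight, bias,
           Int64[1, 1],   # stride
           Int64[0, 0],   # padding
           Int64[1, 1],   # dilation
           1)             # groups
end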
"""
conv3d(input::Tensor, weight::Tensor, bias::Tensor, stride_data::Array{Int64}, padding_data::Array{Int64}, dilation_data::Array{Int64}, groups::Int64)
Wrapper of C++ function void atg\\_conv3d(tensor *out\\_\\_, tensor input, tensor weight, tensor bias, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *dilation\\_data, int dilation\\_len, int64\\_t groups)
"""
function conv3d(input::Tensor, weight::Tensor, bias::Tensor, stride_data::Array{Int64}, padding_data::Array{Int64}, dilation_data::Array{Int64}, groups::Int64)
outputs__ = Int[0]
stride_len = length(stride_data)
padding_len = length(padding_data)
dilation_len = length(dilation_data)
__cret = ccall((:atg_conv3d, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Clonglong),
outputs__, input.pointer, weight.pointer, bias.pointer, stride_data, stride_len, padding_data, padding_len, dilation_data, dilation_len, groups)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
conv_tbc(self::Tensor, weight::Tensor, bias::Tensor, pad::Int64)
Wrapper of C++ function void atg\\_conv\\_tbc(tensor *out\\_\\_, tensor self, tensor weight, tensor bias, int64\\_t pad)
"""
function conv_tbc(self::Tensor, weight::Tensor, bias::Tensor, pad::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_conv_tbc, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, self.pointer, weight.pointer, bias.pointer, pad)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
conv_tbc_backward(self::Tensor, input::Tensor, weight::Tensor, bias::Tensor, pad::Int64)
Wrapper of C++ function void atg\\_conv\\_tbc\\_backward(tensor *out\\_\\_, tensor self, tensor input, tensor weight, tensor bias, int64\\_t pad)
"""
function conv_tbc_backward(self::Tensor, input::Tensor, weight::Tensor, bias::Tensor, pad::Int64)
outputs__ = Int[0, 0, 0]
__cret = ccall((:atg_conv_tbc_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, self.pointer, input.pointer, weight.pointer, bias.pointer, pad)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
__o_3 = tensor_from_ptr(Ptr{Cvoid}(outputs__[3]))
return __o_1, __o_2, __o_3
end
"""
conv_transpose1d(input::Tensor, weight::Tensor, bias::Tensor, stride_data::Array{Int64}, padding_data::Array{Int64}, output_padding_data::Array{Int64}, groups::Int64, dilation_data::Array{Int64})
Wrapper of C++ function void atg\\_conv\\_transpose1d(tensor *out\\_\\_, tensor input, tensor weight, tensor bias, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *output\\_padding\\_data, int output\\_padding\\_len, int64\\_t groups, int64\\_t *dilation\\_data, int dilation\\_len)
"""
function conv_transpose1d(input::Tensor, weight::Tensor, bias::Tensor, stride_data::Array{Int64}, padding_data::Array{Int64}, output_padding_data::Array{Int64}, groups::Int64, dilation_data::Array{Int64})
outputs__ = Int[0]
stride_len = length(stride_data)
padding_len = length(padding_data)
output_padding_len = length(output_padding_data)
dilation_len = length(dilation_data)
__cret = ccall((:atg_conv_transpose1d, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Clonglong, Ptr{Cvoid}, Cint),
outputs__, input.pointer, weight.pointer, bias.pointer, stride_data, stride_len, padding_data, padding_len, output_padding_data, output_padding_len, groups, dilation_data, dilation_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
conv_transpose2d(input::Tensor, weight::Tensor, bias::Tensor, stride_data::Array{Int64}, padding_data::Array{Int64}, output_padding_data::Array{Int64}, groups::Int64, dilation_data::Array{Int64})
Wrapper of C++ function void atg\\_conv\\_transpose2d(tensor *out\\_\\_, tensor input, tensor weight, tensor bias, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *output\\_padding\\_data, int output\\_padding\\_len, int64\\_t groups, int64\\_t *dilation\\_data, int dilation\\_len)
"""
function conv_transpose2d(input::Tensor, weight::Tensor, bias::Tensor, stride_data::Array{Int64}, padding_data::Array{Int64}, output_padding_data::Array{Int64}, groups::Int64, dilation_data::Array{Int64})
outputs__ = Int[0]
stride_len = length(stride_data)
padding_len = length(padding_data)
output_padding_len = length(output_padding_data)
dilation_len = length(dilation_data)
__cret = ccall((:atg_conv_transpose2d, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Clonglong, Ptr{Cvoid}, Cint),
outputs__, input.pointer, weight.pointer, bias.pointer, stride_data, stride_len, padding_data, padding_len, output_padding_data, output_padding_len, groups, dilation_data, dilation_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
conv_transpose3d(input::Tensor, weight::Tensor, bias::Tensor, stride_data::Array{Int64}, padding_data::Array{Int64}, output_padding_data::Array{Int64}, groups::Int64, dilation_data::Array{Int64})
Wrapper of C++ function void atg\\_conv\\_transpose3d(tensor *out\\_\\_, tensor input, tensor weight, tensor bias, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *output\\_padding\\_data, int output\\_padding\\_len, int64\\_t groups, int64\\_t *dilation\\_data, int dilation\\_len)
"""
function conv_transpose3d(input::Tensor, weight::Tensor, bias::Tensor, stride_data::Array{Int64}, padding_data::Array{Int64}, output_padding_data::Array{Int64}, groups::Int64, dilation_data::Array{Int64})
outputs__ = Int[0]
stride_len = length(stride_data)
padding_len = length(padding_data)
output_padding_len = length(output_padding_data)
dilation_len = length(dilation_data)
__cret = ccall((:atg_conv_transpose3d, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Clonglong, Ptr{Cvoid}, Cint),
outputs__, input.pointer, weight.pointer, bias.pointer, stride_data, stride_len, padding_data, padding_len, output_padding_data, output_padding_len, groups, dilation_data, dilation_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
convolution(input::Tensor, weight::Tensor, bias::Tensor, stride_data::Array{Int64}, padding_data::Array{Int64}, dilation_data::Array{Int64}, transposed::Int, output_padding_data::Array{Int64}, groups::Int64)
Wrapper of C++ function void atg\\_convolution(tensor *out\\_\\_, tensor input, tensor weight, tensor bias, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *dilation\\_data, int dilation\\_len, int transposed, int64\\_t *output\\_padding\\_data, int output\\_padding\\_len, int64\\_t groups)
"""
function convolution(input::Tensor, weight::Tensor, bias::Tensor, stride_data::Array{Int64}, padding_data::Array{Int64}, dilation_data::Array{Int64}, transposed::Int, output_padding_data::Array{Int64}, groups::Int64)
outputs__ = Int[0]
stride_len = length(stride_data)
padding_len = length(padding_data)
dilation_len = length(dilation_data)
output_padding_len = length(output_padding_data)
__cret = ccall((:atg_convolution, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Cint, Ptr{Cvoid}, Cint, Clonglong),
outputs__, input.pointer, weight.pointer, bias.pointer, stride_data, stride_len, padding_data, padding_len, dilation_data, dilation_len, transposed, output_padding_data, output_padding_len, groups)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
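# Usage sketch (hand-written, not part of the generated bindings):
# `convolution` is the generic entry point; `transposed` (0/1) selects between
# a regular and a transposed convolution, and `output_padding_data` only
# matters in the transposed case. This sketch forwards to a regular N-D
# convolution and is illustrative only.
function _sketch_convolution(input::Tensor, weight::Tensor, bias::Tensor,
                             stride::Array{Int64}, padding::Array{Int64},
                             dilation::Array{Int64}; groups::Int64 = Int64(1))
    convolution(input, weight, bias, stride, padding, dilation,
                0,                              # transposed = false
                zeros(Int64, length(stride)),   # output_padding (unused here)
                groups)
end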
"""
convolution_overrideable(input::Tensor, weight::Tensor, bias::Tensor, stride_data::Array{Int64}, padding_data::Array{Int64}, dilation_data::Array{Int64}, transposed::Int, output_padding_data::Array{Int64}, groups::Int64)
Wrapper of C++ function void atg\\_convolution\\_overrideable(tensor *out\\_\\_, tensor input, tensor weight, tensor bias, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *dilation\\_data, int dilation\\_len, int transposed, int64\\_t *output\\_padding\\_data, int output\\_padding\\_len, int64\\_t groups)
"""
function convolution_overrideable(input::Tensor, weight::Tensor, bias::Tensor, stride_data::Array{Int64}, padding_data::Array{Int64}, dilation_data::Array{Int64}, transposed::Int, output_padding_data::Array{Int64}, groups::Int64)
outputs__ = Int[0]
stride_len = length(stride_data)
padding_len = length(padding_data)
dilation_len = length(dilation_data)
output_padding_len = length(output_padding_data)
__cret = ccall((:atg_convolution_overrideable, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Cint, Ptr{Cvoid}, Cint, Clonglong),
outputs__, input.pointer, weight.pointer, bias.pointer, stride_data, stride_len, padding_data, padding_len, dilation_data, dilation_len, transposed, output_padding_data, output_padding_len, groups)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
copy_sparse_to_sparse!(self::Tensor, src::Tensor, non_blocking::Int)
Wrapper of C++ function void atg\\_copy\\_sparse\\_to\\_sparse\\_(tensor *out\\_\\_, tensor self, tensor src, int non\\_blocking)
"""
function copy_sparse_to_sparse!(self::Tensor, src::Tensor, non_blocking::Int)
outputs__ = Int[0]
__cret = ccall((:atg_copy_sparse_to_sparse_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, src.pointer, non_blocking)
return self
end
import Base.cos
"""
cos(self::Tensor)
Wrapper of C++ function void atg\\_cos(tensor *out\\_\\_, tensor self)
"""
function cos(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_cos, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
cos!(self::Tensor)
Wrapper of C++ function void atg\\_cos\\_(tensor *out\\_\\_, tensor self)
"""
function cos!(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_cos_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
return self
end
"""
cos_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_cos\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function cos_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_cos_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
import Base.cosh
"""
cosh(self::Tensor)
Wrapper of C++ function void atg\\_cosh(tensor *out\\_\\_, tensor self)
"""
function cosh(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_cosh, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
cosh!(self::Tensor)
Wrapper of C++ function void atg\\_cosh\\_(tensor *out\\_\\_, tensor self)
"""
function cosh!(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_cosh_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
return self
end
"""
cosh_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_cosh\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function cosh_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_cosh_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
cosine_embedding_loss(input1::Tensor, input2::Tensor, target::Tensor, margin::Float64, reduction::Int64)
Wrapper of C++ function void atg\\_cosine\\_embedding\\_loss(tensor *out\\_\\_, tensor input1, tensor input2, tensor target, double margin, int64\\_t reduction)
"""
function cosine_embedding_loss(input1::Tensor, input2::Tensor, target::Tensor, margin::Float64, reduction::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_cosine_embedding_loss, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cdouble, Clonglong),
outputs__, input1.pointer, input2.pointer, target.pointer, margin, reduction)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
cosine_similarity(x1::Tensor, x2::Tensor, dim::Int64, eps::Float64)
Wrapper of C++ function void atg\\_cosine\\_similarity(tensor *out\\_\\_, tensor x1, tensor x2, int64\\_t dim, double eps)
"""
function cosine_similarity(x1::Tensor, x2::Tensor, dim::Int64, eps::Float64)
outputs__ = Int[0]
__cret = ccall((:atg_cosine_similarity, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Cdouble),
outputs__, x1.pointer, x2.pointer, dim, eps)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
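# Usage sketch (hand-written, not part of the generated bindings): cosine
# similarity between two batches of vectors, reduced along dimension index 1
# (0-based, as these raw bindings pass dims straight to ATen), with a small
# eps guarding against division by zero.
function _sketch_cosine_similarity(x1::Tensor, x2::Tensor)
    cosine_similarity(x1, x2, 1, 1e-8)
end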
"""
cross(self::Tensor, other::Tensor, dim::Int64)
Wrapper of C++ function void atg\\_cross(tensor *out\\_\\_, tensor self, tensor other, int64\\_t dim)
"""
function cross(self::Tensor, other::Tensor, dim::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_cross, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, self.pointer, other.pointer, dim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
cross_out(out::Tensor, self::Tensor, other::Tensor, dim::Int64)
Wrapper of C++ function void atg\\_cross\\_out(tensor *out\\_\\_, tensor out, tensor self, tensor other, int64\\_t dim)
"""
function cross_out(out::Tensor, self::Tensor, other::Tensor, dim::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_cross_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, out.pointer, self.pointer, other.pointer, dim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
ctc_loss(log_probs::Tensor, targets::Tensor, input_lengths_data::Array{Int64}, target_lengths_data::Array{Int64}, blank::Int64, reduction::Int64, zero_infinity::Int)
Wrapper of C++ function void atg\\_ctc\\_loss(tensor *out\\_\\_, tensor log\\_probs, tensor targets, int64\\_t *input\\_lengths\\_data, int input\\_lengths\\_len, int64\\_t *target\\_lengths\\_data, int target\\_lengths\\_len, int64\\_t blank, int64\\_t reduction, int zero\\_infinity)
"""
function ctc_loss(log_probs::Tensor, targets::Tensor, input_lengths_data::Array{Int64}, target_lengths_data::Array{Int64}, blank::Int64, reduction::Int64, zero_infinity::Int)
outputs__ = Int[0]
input_lengths_len = length(input_lengths_data)
target_lengths_len = length(target_lengths_data)
__cret = ccall((:atg_ctc_loss, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Clonglong, Clonglong, Cint),
outputs__, log_probs.pointer, targets.pointer, input_lengths_data, input_lengths_len, target_lengths_data, target_lengths_len, blank, reduction, zero_infinity)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
ctc_loss1(log_probs::Tensor, targets::Tensor, input_lengths::Tensor, target_lengths::Tensor, blank::Int64, reduction::Int64, zero_infinity::Int)
Wrapper of C++ function void atg\\_ctc\\_loss1(tensor *out\\_\\_, tensor log\\_probs, tensor targets, tensor input\\_lengths, tensor target\\_lengths, int64\\_t blank, int64\\_t reduction, int zero\\_infinity)
"""
function ctc_loss1(log_probs::Tensor, targets::Tensor, input_lengths::Tensor, target_lengths::Tensor, blank::Int64, reduction::Int64, zero_infinity::Int)
outputs__ = Int[0]
__cret = ccall((:atg_ctc_loss1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Clonglong, Cint),
outputs__, log_probs.pointer, targets.pointer, input_lengths.pointer, target_lengths.pointer, blank, reduction, zero_infinity)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
cudnn_affine_grid_generator(theta::Tensor, n::Int64, C::Int64, H::Int64, W::Int64)
Wrapper of C++ function void atg\\_cudnn\\_affine\\_grid\\_generator(tensor *out\\_\\_, tensor theta, int64\\_t n, int64\\_t C, int64\\_t H, int64\\_t W)
"""
function cudnn_affine_grid_generator(theta::Tensor, n::Int64, C::Int64, H::Int64, W::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_cudnn_affine_grid_generator, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Clonglong, Clonglong, Clonglong),
outputs__, theta.pointer, n, C, H, W)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
cudnn_affine_grid_generator_backward(grad::Tensor, n::Int64, C::Int64, H::Int64, W::Int64)
Wrapper of C++ function void atg\\_cudnn\\_affine\\_grid\\_generator\\_backward(tensor *out\\_\\_, tensor grad, int64\\_t n, int64\\_t C, int64\\_t H, int64\\_t W)
"""
function cudnn_affine_grid_generator_backward(grad::Tensor, n::Int64, C::Int64, H::Int64, W::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_cudnn_affine_grid_generator_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Clonglong, Clonglong, Clonglong),
outputs__, grad.pointer, n, C, H, W)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
cudnn_batch_norm(input::Tensor, weight::Tensor, bias::Tensor, running_mean::Tensor, running_var::Tensor, training::Int, exponential_average_factor::Float64, epsilon::Float64)
Wrapper of C++ function void atg\\_cudnn\\_batch\\_norm(tensor *out\\_\\_, tensor input, tensor weight, tensor bias, tensor running\\_mean, tensor running\\_var, int training, double exponential\\_average\\_factor, double epsilon)
"""
function cudnn_batch_norm(input::Tensor, weight::Tensor, bias::Tensor, running_mean::Tensor, running_var::Tensor, training::Int, exponential_average_factor::Float64, epsilon::Float64)
outputs__ = Int[0, 0, 0, 0]
__cret = ccall((:atg_cudnn_batch_norm, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cdouble, Cdouble),
outputs__, input.pointer, weight.pointer, bias.pointer, running_mean.pointer, running_var.pointer, training, exponential_average_factor, epsilon)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
__o_3 = tensor_from_ptr(Ptr{Cvoid}(outputs__[3]))
__o_4 = tensor_from_ptr(Ptr{Cvoid}(outputs__[4]))
return __o_1, __o_2, __o_3, __o_4
end
"""
cudnn_batch_norm_backward(input::Tensor, grad_output::Tensor, weight::Tensor, running_mean::Tensor, running_var::Tensor, save_mean::Tensor, save_var::Tensor, epsilon::Float64, reserveSpace::Tensor)
Wrapper of C++ function void atg\\_cudnn\\_batch\\_norm\\_backward(tensor *out\\_\\_, tensor input, tensor grad\\_output, tensor weight, tensor running\\_mean, tensor running\\_var, tensor save\\_mean, tensor save\\_var, double epsilon, tensor reserveSpace)
"""
function cudnn_batch_norm_backward(input::Tensor, grad_output::Tensor, weight::Tensor, running_mean::Tensor, running_var::Tensor, save_mean::Tensor, save_var::Tensor, epsilon::Float64, reserveSpace::Tensor)
outputs__ = Int[0, 0, 0]
__cret = ccall((:atg_cudnn_batch_norm_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cdouble, Ptr{Cvoid}),
outputs__, input.pointer, grad_output.pointer, weight.pointer, running_mean.pointer, running_var.pointer, save_mean.pointer, save_var.pointer, epsilon, reserveSpace.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
__o_3 = tensor_from_ptr(Ptr{Cvoid}(outputs__[3]))
return __o_1, __o_2, __o_3
end
"""
cudnn_convolution(self::Tensor, weight::Tensor, bias::Tensor, padding_data::Array{Int64}, stride_data::Array{Int64}, dilation_data::Array{Int64}, groups::Int64, benchmark::Int, deterministic::Int)
Wrapper of C++ function void atg\\_cudnn\\_convolution(tensor *out\\_\\_, tensor self, tensor weight, tensor bias, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *dilation\\_data, int dilation\\_len, int64\\_t groups, int benchmark, int deterministic)
"""
function cudnn_convolution(self::Tensor, weight::Tensor, bias::Tensor, padding_data::Array{Int64}, stride_data::Array{Int64}, dilation_data::Array{Int64}, groups::Int64, benchmark::Int, deterministic::Int)
outputs__ = Int[0]
padding_len = length(padding_data)
stride_len = length(stride_data)
dilation_len = length(dilation_data)
__cret = ccall((:atg_cudnn_convolution, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Clonglong, Cint, Cint),
outputs__, self.pointer, weight.pointer, bias.pointer, padding_data, padding_len, stride_data, stride_len, dilation_data, dilation_len, groups, benchmark, deterministic)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
cudnn_convolution_backward_bias(grad_output::Tensor)
Wrapper of C++ function void atg\\_cudnn\\_convolution\\_backward\\_bias(tensor *out\\_\\_, tensor grad\\_output)
"""
function cudnn_convolution_backward_bias(grad_output::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_cudnn_convolution_backward_bias, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, grad_output.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
cudnn_convolution_backward_input(self_size_data::Array{Int64}, grad_output::Tensor, weight::Tensor, padding_data::Array{Int64}, stride_data::Array{Int64}, dilation_data::Array{Int64}, groups::Int64, benchmark::Int, deterministic::Int)
Wrapper of C++ function void atg\\_cudnn\\_convolution\\_backward\\_input(tensor *out\\_\\_, int64\\_t *self\\_size\\_data, int self\\_size\\_len, tensor grad\\_output, tensor weight, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *dilation\\_data, int dilation\\_len, int64\\_t groups, int benchmark, int deterministic)
"""
function cudnn_convolution_backward_input(self_size_data::Array{Int64}, grad_output::Tensor, weight::Tensor, padding_data::Array{Int64}, stride_data::Array{Int64}, dilation_data::Array{Int64}, groups::Int64, benchmark::Int, deterministic::Int)
outputs__ = Int[0]
self_size_len = length(self_size_data)
padding_len = length(padding_data)
stride_len = length(stride_data)
dilation_len = length(dilation_data)
__cret = ccall((:atg_cudnn_convolution_backward_input, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Clonglong, Cint, Cint),
outputs__, self_size_data, self_size_len, grad_output.pointer, weight.pointer, padding_data, padding_len, stride_data, stride_len, dilation_data, dilation_len, groups, benchmark, deterministic)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
cudnn_convolution_backward_weight(weight_size_data::Array{Int64}, grad_output::Tensor, self::Tensor, padding_data::Array{Int64}, stride_data::Array{Int64}, dilation_data::Array{Int64}, groups::Int64, benchmark::Int, deterministic::Int)
Wrapper of C++ function void atg\\_cudnn\\_convolution\\_backward\\_weight(tensor *out\\_\\_, int64\\_t *weight\\_size\\_data, int weight\\_size\\_len, tensor grad\\_output, tensor self, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *dilation\\_data, int dilation\\_len, int64\\_t groups, int benchmark, int deterministic)
"""
function cudnn_convolution_backward_weight(weight_size_data::Array{Int64}, grad_output::Tensor, self::Tensor, padding_data::Array{Int64}, stride_data::Array{Int64}, dilation_data::Array{Int64}, groups::Int64, benchmark::Int, deterministic::Int)
outputs__ = Int[0]
weight_size_len = length(weight_size_data)
padding_len = length(padding_data)
stride_len = length(stride_data)
dilation_len = length(dilation_data)
__cret = ccall((:atg_cudnn_convolution_backward_weight, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Clonglong, Cint, Cint),
outputs__, weight_size_data, weight_size_len, grad_output.pointer, self.pointer, padding_data, padding_len, stride_data, stride_len, dilation_data, dilation_len, groups, benchmark, deterministic)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
cudnn_convolution_transpose(self::Tensor, weight::Tensor, bias::Tensor, padding_data::Array{Int64}, output_padding_data::Array{Int64}, stride_data::Array{Int64}, dilation_data::Array{Int64}, groups::Int64, benchmark::Int, deterministic::Int)
Wrapper of C++ function void atg\\_cudnn\\_convolution\\_transpose(tensor *out\\_\\_, tensor self, tensor weight, tensor bias, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *output\\_padding\\_data, int output\\_padding\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *dilation\\_data, int dilation\\_len, int64\\_t groups, int benchmark, int deterministic)
"""
function cudnn_convolution_transpose(self::Tensor, weight::Tensor, bias::Tensor, padding_data::Array{Int64}, output_padding_data::Array{Int64}, stride_data::Array{Int64}, dilation_data::Array{Int64}, groups::Int64, benchmark::Int, deterministic::Int)
outputs__ = Int[0]
padding_len = length(padding_data)
output_padding_len = length(output_padding_data)
stride_len = length(stride_data)
dilation_len = length(dilation_data)
__cret = ccall((:atg_cudnn_convolution_transpose, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Clonglong, Cint, Cint),
outputs__, self.pointer, weight.pointer, bias.pointer, padding_data, padding_len, output_padding_data, output_padding_len, stride_data, stride_len, dilation_data, dilation_len, groups, benchmark, deterministic)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
cudnn_convolution_transpose_backward_bias(grad_output::Tensor)
Wrapper of C++ function void atg\\_cudnn\\_convolution\\_transpose\\_backward\\_bias(tensor *out\\_\\_, tensor grad\\_output)
"""
function cudnn_convolution_transpose_backward_bias(grad_output::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_cudnn_convolution_transpose_backward_bias, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, grad_output.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
cudnn_convolution_transpose_backward_input(grad_output::Tensor, weight::Tensor, padding_data::Array{Int64}, stride_data::Array{Int64}, dilation_data::Array{Int64}, groups::Int64, benchmark::Int, deterministic::Int)
Wrapper of C++ function void atg\\_cudnn\\_convolution\\_transpose\\_backward\\_input(tensor *out\\_\\_, tensor grad\\_output, tensor weight, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *dilation\\_data, int dilation\\_len, int64\\_t groups, int benchmark, int deterministic)
"""
function cudnn_convolution_transpose_backward_input(grad_output::Tensor, weight::Tensor, padding_data::Array{Int64}, stride_data::Array{Int64}, dilation_data::Array{Int64}, groups::Int64, benchmark::Int, deterministic::Int)
outputs__ = Int[0]
padding_len = length(padding_data)
stride_len = length(stride_data)
dilation_len = length(dilation_data)
__cret = ccall((:atg_cudnn_convolution_transpose_backward_input, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Clonglong, Cint, Cint),
outputs__, grad_output.pointer, weight.pointer, padding_data, padding_len, stride_data, stride_len, dilation_data, dilation_len, groups, benchmark, deterministic)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
cudnn_convolution_transpose_backward_weight(weight_size_data::Array{Int64}, grad_output::Tensor, self::Tensor, padding_data::Array{Int64}, stride_data::Array{Int64}, dilation_data::Array{Int64}, groups::Int64, benchmark::Int, deterministic::Int)
Wrapper of C++ function void atg\\_cudnn\\_convolution\\_transpose\\_backward\\_weight(tensor *out\\_\\_, int64\\_t *weight\\_size\\_data, int weight\\_size\\_len, tensor grad\\_output, tensor self, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *dilation\\_data, int dilation\\_len, int64\\_t groups, int benchmark, int deterministic)
"""
function cudnn_convolution_transpose_backward_weight(weight_size_data::Array{Int64}, grad_output::Tensor, self::Tensor, padding_data::Array{Int64}, stride_data::Array{Int64}, dilation_data::Array{Int64}, groups::Int64, benchmark::Int, deterministic::Int)
outputs__ = Int[0]
weight_size_len = length(weight_size_data)
padding_len = length(padding_data)
stride_len = length(stride_data)
dilation_len = length(dilation_data)
__cret = ccall((:atg_cudnn_convolution_transpose_backward_weight, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Clonglong, Cint, Cint),
outputs__, weight_size_data, weight_size_len, grad_output.pointer, self.pointer, padding_data, padding_len, stride_data, stride_len, dilation_data, dilation_len, groups, benchmark, deterministic)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
cudnn_grid_sampler(self::Tensor, grid::Tensor)
Wrapper of C++ function void atg\\_cudnn\\_grid\\_sampler(tensor *out\\_\\_, tensor self, tensor grid)
"""
function cudnn_grid_sampler(self::Tensor, grid::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_cudnn_grid_sampler, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, grid.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
cudnn_grid_sampler_backward(self::Tensor, grid::Tensor, grad_output::Tensor)
Wrapper of C++ function void atg\\_cudnn\\_grid\\_sampler\\_backward(tensor *out\\_\\_, tensor self, tensor grid, tensor grad\\_output)
"""
function cudnn_grid_sampler_backward(self::Tensor, grid::Tensor, grad_output::Tensor)
outputs__ = Int[0, 0]
__cret = ccall((:atg_cudnn_grid_sampler_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, grid.pointer, grad_output.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
import Base.cumprod
"""
cumprod(self::Tensor, dim::Int64, dtype::Int)
Wrapper of C++ function void atg\\_cumprod(tensor *out\\_\\_, tensor self, int64\\_t dim, int dtype)
"""
function cumprod(self::Tensor, dim::Int64, dtype::Int)
outputs__ = Int[0]
__cret = ccall((:atg_cumprod, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Cint),
outputs__, self.pointer, dim, dtype)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
cumprod_out(out::Tensor, self::Tensor, dim::Int64, dtype::Int)
Wrapper of C++ function void atg\\_cumprod\\_out(tensor *out\\_\\_, tensor out, tensor self, int64\\_t dim, int dtype)
"""
function cumprod_out(out::Tensor, self::Tensor, dim::Int64, dtype::Int)
outputs__ = Int[0]
__cret = ccall((:atg_cumprod_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Cint),
outputs__, out.pointer, self.pointer, dim, dtype)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
import Base.cumsum
"""
cumsum(self::Tensor, dim::Int64, dtype::Int)
Wrapper of C++ function void atg\\_cumsum(tensor *out\\_\\_, tensor self, int64\\_t dim, int dtype)
"""
function cumsum(self::Tensor, dim::Int64, dtype::Int)
outputs__ = Int[0]
__cret = ccall((:atg_cumsum, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Cint),
outputs__, self.pointer, dim, dtype)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
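# Usage sketch (hand-written, not part of the generated bindings): cumulative
# sum along a chosen dimension. The trailing `dtype` argument is the raw
# libtorch scalar-type code expected by the C API; which integer selects which
# dtype (or the "keep the input dtype" default) is not documented here, so
# this sketch simply forwards whatever code the caller supplies.
function _sketch_cumsum(x::Tensor, dim::Int64, dtype_code::Int)
    cumsum(x, dim, dtype_code)
end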
"""
cumsum_out(out::Tensor, self::Tensor, dim::Int64, dtype::Int)
Wrapper of C++ function void atg\\_cumsum\\_out(tensor *out\\_\\_, tensor out, tensor self, int64\\_t dim, int dtype)
"""
function cumsum_out(out::Tensor, self::Tensor, dim::Int64, dtype::Int)
outputs__ = Int[0]
__cret = ccall((:atg_cumsum_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Cint),
outputs__, out.pointer, self.pointer, dim, dtype)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
data(self::Tensor)
Wrapper of C++ function void atg\\_data(tensor *out\\_\\_, tensor self)
"""
function data(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_data, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
dequantize(self::Tensor)
Wrapper of C++ function void atg\\_dequantize(tensor *out\\_\\_, tensor self)
"""
function dequantize(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_dequantize, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
det(self::Tensor)
Wrapper of C++ function void atg\\_det(tensor *out\\_\\_, tensor self)
"""
function det(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_det, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
import Base.detach
"""
detach(self::Tensor)
Wrapper of C++ function void atg\\_detach(tensor *out\\_\\_, tensor self)
"""
function detach(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_detach, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
detach!(self::Tensor)
Wrapper of C++ function void atg\\_detach\\_(tensor *out\\_\\_, tensor self)
"""
function detach!(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_detach_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
return self
end
"""
diag(self::Tensor, diagonal::Int64)
Wrapper of C++ function void atg\\_diag(tensor *out\\_\\_, tensor self, int64\\_t diagonal)
"""
function diag(self::Tensor, diagonal::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_diag, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, self.pointer, diagonal)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
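# Usage sketch (hand-written, not part of the generated bindings): as in ATen,
# `diag` builds a matrix from a 1-D input (placing it on the `diagonal`-th
# diagonal) and extracts that diagonal as a vector from a 2-D input.
function _sketch_main_diagonal(x::Tensor)
    diag(x, 0)   # main diagonal
end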
"""
diag_embed(self::Tensor, offset::Int64, dim1::Int64, dim2::Int64)
Wrapper of C++ function void atg\\_diag\\_embed(tensor *out\\_\\_, tensor self, int64\\_t offset, int64\\_t dim1, int64\\_t dim2)
"""
function diag_embed(self::Tensor, offset::Int64, dim1::Int64, dim2::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_diag_embed, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Clonglong, Clonglong),
outputs__, self.pointer, offset, dim1, dim2)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
diag_out(out::Tensor, self::Tensor, diagonal::Int64)
Wrapper of C++ function void atg\\_diag\\_out(tensor *out\\_\\_, tensor out, tensor self, int64\\_t diagonal)
"""
function diag_out(out::Tensor, self::Tensor, diagonal::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_diag_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, out.pointer, self.pointer, diagonal)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
diagflat(self::Tensor, offset::Int64)
Wrapper of C++ function void atg\\_diagflat(tensor *out\\_\\_, tensor self, int64\\_t offset)
"""
function diagflat(self::Tensor, offset::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_diagflat, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, self.pointer, offset)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
diagonal(self::Tensor, offset::Int64, dim1::Int64, dim2::Int64)
Wrapper of C++ function void atg\\_diagonal(tensor *out\\_\\_, tensor self, int64\\_t offset, int64\\_t dim1, int64\\_t dim2)
"""
function diagonal(self::Tensor, offset::Int64, dim1::Int64, dim2::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_diagonal, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Clonglong, Clonglong),
outputs__, self.pointer, offset, dim1, dim2)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
digamma(self::Tensor)
Wrapper of C++ function void atg\\_digamma(tensor *out\\_\\_, tensor self)
"""
function digamma(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_digamma, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
digamma!(self::Tensor)
Wrapper of C++ function void atg\\_digamma\\_(tensor *out\\_\\_, tensor self)
"""
function digamma!(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_digamma_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
return self
end
"""
digamma_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_digamma\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function digamma_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_digamma_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
dist(self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_dist(tensor *out\\_\\_, tensor self, tensor other)
"""
function dist(self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_dist, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
div(self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_div(tensor *out\\_\\_, tensor self, tensor other)
"""
function div(self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_div, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
div1(self::Tensor, other::TorchNumber)
Wrapper of C++ function void atg\\_div1(tensor *out\\_\\_, tensor self, scalar other)
"""
function div1(self::Tensor, other::TorchNumber)
outputs__ = Int[0]
other_s_ = Scalar(other)
__cret = ccall((:atg_div1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
div!(self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_div\\_(tensor *out\\_\\_, tensor self, tensor other)
"""
function div!(self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_div_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other.pointer)
return self
end
"""
div1!(self::Tensor, other::TorchNumber)
Wrapper of C++ function void atg\\_div\\_1(tensor *out\\_\\_, tensor self, scalar other)
"""
function div1!(self::Tensor, other::TorchNumber)
outputs__ = Int[0]
other_s_ = Scalar(other)
__cret = ccall((:atg_div_1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other_s_.pointer)
return self
end
"""
div_out(out::Tensor, self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_div\\_out(tensor *out\\_\\_, tensor out, tensor self, tensor other)
"""
function div_out(out::Tensor, self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_div_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, other.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
dot(self::Tensor, tensor::Tensor)
Wrapper of C++ function void atg\\_dot(tensor *out\\_\\_, tensor self, tensor tensor)
"""
function dot(self::Tensor, tensor::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_dot, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, tensor.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
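# Usage sketch (hand-written, not part of the generated bindings): dot product
# of two 1-D tensors; the result comes back as a zero-dimensional tensor, not
# a Julia number.
function _sketch_dot(a::Tensor, b::Tensor)
    dot(a, b)
end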
"""
dot_out(out::Tensor, self::Tensor, tensor::Tensor)
Wrapper of C++ function void atg\\_dot\\_out(tensor *out\\_\\_, tensor out, tensor self, tensor tensor)
"""
function dot_out(out::Tensor, self::Tensor, tensor::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_dot_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, tensor.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
dropout(input::Tensor, p::Float64, train::Int)
Wrapper of C++ function void atg\\_dropout(tensor *out\\_\\_, tensor input, double p, int train)
"""
function dropout(input::Tensor, p::Float64, train::Int)
outputs__ = Int[0]
__cret = ccall((:atg_dropout, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cdouble, Cint),
outputs__, input.pointer, p, train)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
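# Usage sketch (hand-written, not part of the generated bindings): `train` is
# an Int flag; with a non-zero value elements are zeroed with probability `p`
# and the survivors are rescaled by 1/(1 - p), while `train = 0` makes dropout
# the identity.
function _sketch_dropout(x::Tensor, p::Float64; training::Bool = true)
    dropout(x, p, training ? 1 : 0)
end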
"""
dropout!(self::Tensor, p::Float64, train::Int)
Wrapper of C++ function void atg\\_dropout\\_(tensor *out\\_\\_, tensor self, double p, int train)
"""
function dropout!(self::Tensor, p::Float64, train::Int)
outputs__ = Int[0]
__cret = ccall((:atg_dropout_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cdouble, Cint),
outputs__, self.pointer, p, train)
return self
end
"""
eig(self::Tensor, eigenvectors::Int)
Wrapper of C++ function void atg\\_eig(tensor *out\\_\\_, tensor self, int eigenvectors)
"""
function eig(self::Tensor, eigenvectors::Int)
outputs__ = Int[0, 0]
__cret = ccall((:atg_eig, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, eigenvectors)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
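# Usage sketch (hand-written, not part of the generated bindings): `eig`
# returns a tuple of two tensors, eigenvalues first (real and imaginary parts,
# following ATen's eig) and eigenvectors second; the eigenvector tensor is
# only meaningful when the `eigenvectors` flag is non-zero.
function _sketch_eigenvalues(a::Tensor)
    e, _v = eig(a, 0)   # eigenvalues only
    e
end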
"""
eig_out(e::Tensor, v::Tensor, self::Tensor, eigenvectors::Int)
Wrapper of C++ function void atg\\_eig\\_out(tensor *out\\_\\_, tensor e, tensor v, tensor self, int eigenvectors)
"""
function eig_out(e::Tensor, v::Tensor, self::Tensor, eigenvectors::Int)
outputs__ = Int[0, 0]
__cret = ccall((:atg_eig_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, e.pointer, v.pointer, self.pointer, eigenvectors)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
elu(self::Tensor)
Wrapper of C++ function void atg\\_elu(tensor *out\\_\\_, tensor self)
"""
function elu(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_elu, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
elu!(self::Tensor)
Wrapper of C++ function void atg\\_elu\\_(tensor *out\\_\\_, tensor self)
"""
function elu!(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_elu_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
return self
end
"""
elu_backward(grad_output::Tensor, alpha::TorchNumber, scale::TorchNumber, input_scale::TorchNumber, output::Tensor)
Wrapper of C++ function void atg\\_elu\\_backward(tensor *out\\_\\_, tensor grad\\_output, scalar alpha, scalar scale, scalar input\\_scale, tensor output)
"""
function elu_backward(grad_output::Tensor, alpha::TorchNumber, scale::TorchNumber, input_scale::TorchNumber, output::Tensor)
outputs__ = Int[0]
alpha_s_ = Scalar(alpha)
scale_s_ = Scalar(scale)
input_scale_s_ = Scalar(input_scale)
__cret = ccall((:atg_elu_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, grad_output.pointer, alpha_s_.pointer, scale_s_.pointer, input_scale_s_.pointer, output.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
elu_backward_out(grad_input::Tensor, grad_output::Tensor, alpha::TorchNumber, scale::TorchNumber, input_scale::TorchNumber, output::Tensor)
Wrapper of C++ function void atg\\_elu\\_backward\\_out(tensor *out\\_\\_, tensor grad\\_input, tensor grad\\_output, scalar alpha, scalar scale, scalar input\\_scale, tensor output)
"""
function elu_backward_out(grad_input::Tensor, grad_output::Tensor, alpha::TorchNumber, scale::TorchNumber, input_scale::TorchNumber, output::Tensor)
outputs__ = Int[0]
alpha_s_ = Scalar(alpha)
scale_s_ = Scalar(scale)
input_scale_s_ = Scalar(input_scale)
__cret = ccall((:atg_elu_backward_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, grad_input.pointer, grad_output.pointer, alpha_s_.pointer, scale_s_.pointer, input_scale_s_.pointer, output.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
elu_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_elu\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function elu_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_elu_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
embedding(weight::Tensor, indices::Tensor, padding_idx::Int64, scale_grad_by_freq::Int, sparse::Int)
Wrapper of C++ function void atg\\_embedding(tensor *out\\_\\_, tensor weight, tensor indices, int64\\_t padding\\_idx, int scale\\_grad\\_by\\_freq, int sparse)
"""
function embedding(weight::Tensor, indices::Tensor, padding_idx::Int64, scale_grad_by_freq::Int, sparse::Int)
outputs__ = Int[0]
__cret = ccall((:atg_embedding, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Cint, Cint),
outputs__, weight.pointer, indices.pointer, padding_idx, scale_grad_by_freq, sparse)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
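# Usage sketch (illustrative only; numeric conventions follow the underlying ATen call,
# where `padding_idx = -1` disables padding and indices are 0-based):
#
#     weight  = Tensor(rand(Float32, 10, 3))   # 10 embeddings of dimension 3
#     indices = Tensor(Int64[0, 2, 5])         # 0-based row indices
#     embedding(weight, indices, -1, 0, 0)     # lookup result, one embedding row per index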
"""
embedding_backward(grad::Tensor, indices::Tensor, num_weights::Int64, padding_idx::Int64, scale_grad_by_freq::Int, sparse::Int)
Wrapper of C++ function void atg\\_embedding\\_backward(tensor *out\\_\\_, tensor grad, tensor indices, int64\\_t num\\_weights, int64\\_t padding\\_idx, int scale\\_grad\\_by\\_freq, int sparse)
"""
function embedding_backward(grad::Tensor, indices::Tensor, num_weights::Int64, padding_idx::Int64, scale_grad_by_freq::Int, sparse::Int)
outputs__ = Int[0]
__cret = ccall((:atg_embedding_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Clonglong, Cint, Cint),
outputs__, grad.pointer, indices.pointer, num_weights, padding_idx, scale_grad_by_freq, sparse)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
embedding_bag(weight::Tensor, indices::Tensor, offsets::Tensor, scale_grad_by_freq::Int, mode::Int64, sparse::Int, per_sample_weights::Tensor)
Wrapper of C++ function void atg\\_embedding\\_bag(tensor *out\\_\\_, tensor weight, tensor indices, tensor offsets, int scale\\_grad\\_by\\_freq, int64\\_t mode, int sparse, tensor per\\_sample\\_weights)
"""
function embedding_bag(weight::Tensor, indices::Tensor, offsets::Tensor, scale_grad_by_freq::Int, mode::Int64, sparse::Int, per_sample_weights::Tensor)
outputs__ = Int[0, 0, 0, 0]
__cret = ccall((:atg_embedding_bag, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Clonglong, Cint, Ptr{Cvoid}),
outputs__, weight.pointer, indices.pointer, offsets.pointer, scale_grad_by_freq, mode, sparse, per_sample_weights.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
__o_3 = tensor_from_ptr(Ptr{Cvoid}(outputs__[3]))
__o_4 = tensor_from_ptr(Ptr{Cvoid}(outputs__[4]))
return __o_1, __o_2, __o_3, __o_4
end
"""
embedding_dense_backward(grad_output::Tensor, indices::Tensor, num_weights::Int64, padding_idx::Int64, scale_grad_by_freq::Int)
Wrapper of C++ function void atg\\_embedding\\_dense\\_backward(tensor *out\\_\\_, tensor grad\\_output, tensor indices, int64\\_t num\\_weights, int64\\_t padding\\_idx, int scale\\_grad\\_by\\_freq)
"""
function embedding_dense_backward(grad_output::Tensor, indices::Tensor, num_weights::Int64, padding_idx::Int64, scale_grad_by_freq::Int)
outputs__ = Int[0]
__cret = ccall((:atg_embedding_dense_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Clonglong, Cint),
outputs__, grad_output.pointer, indices.pointer, num_weights, padding_idx, scale_grad_by_freq)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
embedding_renorm!(self::Tensor, indices::Tensor, max_norm::Float64, norm_type::Float64)
Wrapper of C++ function void atg\\_embedding\\_renorm\\_(tensor *out\\_\\_, tensor self, tensor indices, double max\\_norm, double norm\\_type)
"""
function embedding_renorm!(self::Tensor, indices::Tensor, max_norm::Float64, norm_type::Float64)
outputs__ = Int[0]
__cret = ccall((:atg_embedding_renorm_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cdouble, Cdouble),
outputs__, self.pointer, indices.pointer, max_norm, norm_type)
return self
end
"""
embedding_sparse_backward(grad::Tensor, indices::Tensor, num_weights::Int64, padding_idx::Int64, scale_grad_by_freq::Int)
Wrapper of C++ function void atg\\_embedding\\_sparse\\_backward(tensor *out\\_\\_, tensor grad, tensor indices, int64\\_t num\\_weights, int64\\_t padding\\_idx, int scale\\_grad\\_by\\_freq)
"""
function embedding_sparse_backward(grad::Tensor, indices::Tensor, num_weights::Int64, padding_idx::Int64, scale_grad_by_freq::Int)
outputs__ = Int[0]
__cret = ccall((:atg_embedding_sparse_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Clonglong, Cint),
outputs__, grad.pointer, indices.pointer, num_weights, padding_idx, scale_grad_by_freq)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
import Base.empty
"""
empty(size_data::Array{Int64}, options_kind::Int, options_device::Int)
Wrapper of C++ function void atg\\_empty(tensor *out\\_\\_, int64\\_t *size\\_data, int size\\_len, int options\\_kind, int options\\_device)
"""
function empty(size_data::Array{Int64}, options_kind::Int, options_device::Int)
outputs__ = Int[0]
size_len = length(size_data)
__cret = ccall((:atg_empty, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint, Cint),
outputs__, size_data, size_len, options_kind, options_device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
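# Usage sketch (illustrative only). `options_kind` and `options_device` are the raw integer
# codes used by the C API; in stock libtorch the scalar-type code 6 corresponds to Float32
# and a negative device index selects the CPU, but check this package's own constants
# before relying on specific values:
#
#     t = empty([2, 3], 6, -1)   # uninitialized 2×3 Float32 tensor on the CPU (assumed codes)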
"""
empty_like(self::Tensor)
Wrapper of C++ function void atg\\_empty\\_like(tensor *out\\_\\_, tensor self)
"""
function empty_like(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_empty_like, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
empty_like1(self::Tensor, options_kind::Int, options_device::Int)
Wrapper of C++ function void atg\\_empty\\_like1(tensor *out\\_\\_, tensor self, int options\\_kind, int options\\_device)
"""
function empty_like1(self::Tensor, options_kind::Int, options_device::Int)
outputs__ = Int[0]
__cret = ccall((:atg_empty_like1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint),
outputs__, self.pointer, options_kind, options_device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
empty_out(out::Tensor, size_data::Array{Int64})
Wrapper of C++ function void atg\\_empty\\_out(tensor *out\\_\\_, tensor out, int64\\_t *size\\_data, int size\\_len)
"""
function empty_out(out::Tensor, size_data::Array{Int64})
outputs__ = Int[0]
size_len = length(size_data)
__cret = ccall((:atg_empty_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, out.pointer, size_data, size_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
empty_strided(size_data::Array{Int64}, stride_data::Array{Int64}, options_kind::Int, options_device::Int)
Wrapper of C++ function void atg\\_empty\\_strided(tensor *out\\_\\_, int64\\_t *size\\_data, int size\\_len, int64\\_t *stride\\_data, int stride\\_len, int options\\_kind, int options\\_device)
"""
function empty_strided(size_data::Array{Int64}, stride_data::Array{Int64}, options_kind::Int, options_device::Int)
outputs__ = Int[0]
size_len = length(size_data)
stride_len = length(stride_data)
__cret = ccall((:atg_empty_strided, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Cint, Cint),
outputs__, size_data, size_len, stride_data, stride_len, options_kind, options_device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
eq(self::Tensor, other::TorchNumber)
Wrapper of C++ function void atg\\_eq(tensor *out\\_\\_, tensor self, scalar other)
"""
function eq(self::Tensor, other::TorchNumber)
outputs__ = Int[0]
other_s_ = Scalar(other)
__cret = ccall((:atg_eq, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
eq1(self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_eq1(tensor *out\\_\\_, tensor self, tensor other)
"""
function eq1(self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_eq1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
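# Usage sketch (illustrative only): `eq` compares against a scalar, `eq1` elementwise against
# another tensor; both return a new mask tensor, while the `eq!`/`eq1!` variants below
# overwrite `self`:
#
#     a = Tensor(Float32[1, 2, 3])
#     eq(a, 2)                          # mask tensor: [0, 1, 0]
#     eq1(a, Tensor(Float32[1, 0, 3]))  # mask tensor: [1, 0, 1]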
"""
eq!(self::Tensor, other::TorchNumber)
Wrapper of C++ function void atg\\_eq\\_(tensor *out\\_\\_, tensor self, scalar other)
"""
function eq!(self::Tensor, other::TorchNumber)
outputs__ = Int[0]
other_s_ = Scalar(other)
__cret = ccall((:atg_eq_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other_s_.pointer)
return self
end
"""
eq1!(self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_eq\\_1(tensor *out\\_\\_, tensor self, tensor other)
"""
function eq1!(self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_eq_1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other.pointer)
return self
end
"""
eq_out(out::Tensor, self::Tensor, other::TorchNumber)
Wrapper of C++ function void atg\\_eq\\_out(tensor *out\\_\\_, tensor out, tensor self, scalar other)
"""
function eq_out(out::Tensor, self::Tensor, other::TorchNumber)
outputs__ = Int[0]
other_s_ = Scalar(other)
__cret = ccall((:atg_eq_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, other_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
eq_out1(out::Tensor, self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_eq\\_out1(tensor *out\\_\\_, tensor out, tensor self, tensor other)
"""
function eq_out1(out::Tensor, self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_eq_out1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, other.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
erf(self::Tensor)
Wrapper of C++ function void atg\\_erf(tensor *out\\_\\_, tensor self)
"""
function erf(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_erf, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
erf!(self::Tensor)
Wrapper of C++ function void atg\\_erf\\_(tensor *out\\_\\_, tensor self)
"""
function erf!(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_erf_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
return self
end
"""
erf_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_erf\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function erf_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_erf_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
erfc(self::Tensor)
Wrapper of C++ function void atg\\_erfc(tensor *out\\_\\_, tensor self)
"""
function erfc(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_erfc, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
erfc!(self::Tensor)
Wrapper of C++ function void atg\\_erfc\\_(tensor *out\\_\\_, tensor self)
"""
function erfc!(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_erfc_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
return self
end
"""
erfc_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_erfc\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function erfc_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_erfc_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
erfinv(self::Tensor)
Wrapper of C++ function void atg\\_erfinv(tensor *out\\_\\_, tensor self)
"""
function erfinv(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_erfinv, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
erfinv!(self::Tensor)
Wrapper of C++ function void atg\\_erfinv\\_(tensor *out\\_\\_, tensor self)
"""
function erfinv!(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_erfinv_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
return self
end
"""
erfinv_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_erfinv\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function erfinv_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_erfinv_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
import Base.exp
"""
exp(self::Tensor)
Wrapper of C++ function void atg\\_exp(tensor *out\\_\\_, tensor self)
"""
function exp(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_exp, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
exp!(self::Tensor)
Wrapper of C++ function void atg\\_exp\\_(tensor *out\\_\\_, tensor self)
"""
function exp!(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_exp_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
return self
end
"""
exp_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_exp\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function exp_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_exp_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
expand(self::Tensor, size_data::Array{Int64}, implicit::Int)
Wrapper of C++ function void atg\\_expand(tensor *out\\_\\_, tensor self, int64\\_t *size\\_data, int size\\_len, int implicit)
"""
function expand(self::Tensor, size_data::Array{Int64}, implicit::Int)
outputs__ = Int[0]
size_len = length(size_data)
__cret = ccall((:atg_expand, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint),
outputs__, self.pointer, size_data, size_len, implicit)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
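# Usage sketch (illustrative only): broadcasts `self` to the requested size without copying
# data (a size entry of -1 keeps the original extent, as in PyTorch); `implicit` is an Int
# flag for the C++ bool argument and is normally 0:
#
#     row = Tensor(rand(Float32, 1, 4))   # singleton leading dimension on the libtorch side
#     expand(row, [3, 4], 0)              # view with 3 identical rows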
"""
expand_as(self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_expand\\_as(tensor *out\\_\\_, tensor self, tensor other)
"""
function expand_as(self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_expand_as, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
import Base.expm1
"""
expm1(self::Tensor)
Wrapper of C++ function void atg\\_expm1(tensor *out\\_\\_, tensor self)
"""
function expm1(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_expm1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
expm1!(self::Tensor)
Wrapper of C++ function void atg\\_expm1\\_(tensor *out\\_\\_, tensor self)
"""
function expm1!(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_expm1_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
return self
end
"""
expm1_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_expm1\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function expm1_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_expm1_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
exponential!(self::Tensor, lambd::Float64)
Wrapper of C++ function void atg\\_exponential\\_(tensor *out\\_\\_, tensor self, double lambd)
"""
function exponential!(self::Tensor, lambd::Float64)
outputs__ = Int[0]
__cret = ccall((:atg_exponential_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cdouble),
outputs__, self.pointer, lambd)
return self
end
"""
eye(n::Int64, options_kind::Int, options_device::Int)
Wrapper of C++ function void atg\\_eye(tensor *out\\_\\_, int64\\_t n, int options\\_kind, int options\\_device)
"""
function eye(n::Int64, options_kind::Int, options_device::Int)
outputs__ = Int[0]
__cret = ccall((:atg_eye, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Clonglong, Cint, Cint),
outputs__, n, options_kind, options_device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
eye1(n::Int64, m::Int64, options_kind::Int, options_device::Int)
Wrapper of C++ function void atg\\_eye1(tensor *out\\_\\_, int64\\_t n, int64\\_t m, int options\\_kind, int options\\_device)
"""
function eye1(n::Int64, m::Int64, options_kind::Int, options_device::Int)
outputs__ = Int[0]
__cret = ccall((:atg_eye1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Clonglong, Clonglong, Cint, Cint),
outputs__, n, m, options_kind, options_device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
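# Usage sketch (illustrative only; same scalar-type / device integer codes as `empty` above,
# assumed values):
#
#     eye(3, 6, -1)       # 3×3 Float32 identity matrix
#     eye1(3, 5, 6, -1)   # 3×5 rectangular identity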
"""
eye_out(out::Tensor, n::Int64)
Wrapper of C++ function void atg\\_eye\\_out(tensor *out\\_\\_, tensor out, int64\\_t n)
"""
function eye_out(out::Tensor, n::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_eye_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, out.pointer, n)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
eye_out1(out::Tensor, n::Int64, m::Int64)
Wrapper of C++ function void atg\\_eye\\_out1(tensor *out\\_\\_, tensor out, int64\\_t n, int64\\_t m)
"""
function eye_out1(out::Tensor, n::Int64, m::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_eye_out1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Clonglong),
outputs__, out.pointer, n, m)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
fake_quantize_per_channel_affine(self::Tensor, scale::Tensor, zero_point::Tensor, axis::Int64, quant_min::Int64, quant_max::Int64)
Wrapper of C++ function void atg\\_fake\\_quantize\\_per\\_channel\\_affine(tensor *out\\_\\_, tensor self, tensor scale, tensor zero\\_point, int64\\_t axis, int64\\_t quant\\_min, int64\\_t quant\\_max)
"""
function fake_quantize_per_channel_affine(self::Tensor, scale::Tensor, zero_point::Tensor, axis::Int64, quant_min::Int64, quant_max::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_fake_quantize_per_channel_affine, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Clonglong, Clonglong),
outputs__, self.pointer, scale.pointer, zero_point.pointer, axis, quant_min, quant_max)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
fake_quantize_per_channel_affine_backward(grad::Tensor, self::Tensor, scale::Tensor, zero_point::Tensor, axis::Int64, quant_min::Int64, quant_max::Int64)
Wrapper of C++ function void atg\\_fake\\_quantize\\_per\\_channel\\_affine\\_backward(tensor *out\\_\\_, tensor grad, tensor self, tensor scale, tensor zero\\_point, int64\\_t axis, int64\\_t quant\\_min, int64\\_t quant\\_max)
"""
function fake_quantize_per_channel_affine_backward(grad::Tensor, self::Tensor, scale::Tensor, zero_point::Tensor, axis::Int64, quant_min::Int64, quant_max::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_fake_quantize_per_channel_affine_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Clonglong, Clonglong),
outputs__, grad.pointer, self.pointer, scale.pointer, zero_point.pointer, axis, quant_min, quant_max)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
fake_quantize_per_tensor_affine(self::Tensor, scale::Float64, zero_point::Int64, quant_min::Int64, quant_max::Int64)
Wrapper of C++ function void atg\\_fake\\_quantize\\_per\\_tensor\\_affine(tensor *out\\_\\_, tensor self, double scale, int64\\_t zero\\_point, int64\\_t quant\\_min, int64\\_t quant\\_max)
"""
function fake_quantize_per_tensor_affine(self::Tensor, scale::Float64, zero_point::Int64, quant_min::Int64, quant_max::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_fake_quantize_per_tensor_affine, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cdouble, Clonglong, Clonglong, Clonglong),
outputs__, self.pointer, scale, zero_point, quant_min, quant_max)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
fake_quantize_per_tensor_affine_backward(grad::Tensor, self::Tensor, scale::Float64, zero_point::Int64, quant_min::Int64, quant_max::Int64)
Wrapper of C++ function void atg\\_fake\\_quantize\\_per\\_tensor\\_affine\\_backward(tensor *out\\_\\_, tensor grad, tensor self, double scale, int64\\_t zero\\_point, int64\\_t quant\\_min, int64\\_t quant\\_max)
"""
function fake_quantize_per_tensor_affine_backward(grad::Tensor, self::Tensor, scale::Float64, zero_point::Int64, quant_min::Int64, quant_max::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_fake_quantize_per_tensor_affine_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cdouble, Clonglong, Clonglong, Clonglong),
outputs__, grad.pointer, self.pointer, scale, zero_point, quant_min, quant_max)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
fbgemm_linear_fp16_weight(input::Tensor, packed_weight::Tensor, bias::Tensor)
Wrapper of C++ function void atg\\_fbgemm\\_linear\\_fp16\\_weight(tensor *out\\_\\_, tensor input, tensor packed\\_weight, tensor bias)
"""
function fbgemm_linear_fp16_weight(input::Tensor, packed_weight::Tensor, bias::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_fbgemm_linear_fp16_weight, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, input.pointer, packed_weight.pointer, bias.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
fbgemm_linear_fp16_weight_fp32_activation(input::Tensor, packed_weight::Tensor, bias::Tensor)
Wrapper of C++ function void atg\\_fbgemm\\_linear\\_fp16\\_weight\\_fp32\\_activation(tensor *out\\_\\_, tensor input, tensor packed\\_weight, tensor bias)
"""
function fbgemm_linear_fp16_weight_fp32_activation(input::Tensor, packed_weight::Tensor, bias::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_fbgemm_linear_fp16_weight_fp32_activation, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, input.pointer, packed_weight.pointer, bias.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
fbgemm_linear_int8_weight(input::Tensor, weight::Tensor, packed::Tensor, col_offsets::Tensor, weight_scale::TorchNumber, weight_zero_point::TorchNumber, bias::Tensor)
Wrapper of C++ function void atg\\_fbgemm\\_linear\\_int8\\_weight(tensor *out\\_\\_, tensor input, tensor weight, tensor packed, tensor col\\_offsets, scalar weight\\_scale, scalar weight\\_zero\\_point, tensor bias)
"""
function fbgemm_linear_int8_weight(input::Tensor, weight::Tensor, packed::Tensor, col_offsets::Tensor, weight_scale::TorchNumber, weight_zero_point::TorchNumber, bias::Tensor)
outputs__ = Int[0]
weight_scale_s_ = Scalar(weight_scale)
weight_zero_point_s_ = Scalar(weight_zero_point)
__cret = ccall((:atg_fbgemm_linear_int8_weight, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, input.pointer, weight.pointer, packed.pointer, col_offsets.pointer, weight_scale_s_.pointer, weight_zero_point_s_.pointer, bias.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
fbgemm_linear_int8_weight_fp32_activation(input::Tensor, weight::Tensor, packed::Tensor, col_offsets::Tensor, weight_scale::TorchNumber, weight_zero_point::TorchNumber, bias::Tensor)
Wrapper of C++ function void atg\\_fbgemm\\_linear\\_int8\\_weight\\_fp32\\_activation(tensor *out\\_\\_, tensor input, tensor weight, tensor packed, tensor col\\_offsets, scalar weight\\_scale, scalar weight\\_zero\\_point, tensor bias)
"""
function fbgemm_linear_int8_weight_fp32_activation(input::Tensor, weight::Tensor, packed::Tensor, col_offsets::Tensor, weight_scale::TorchNumber, weight_zero_point::TorchNumber, bias::Tensor)
outputs__ = Int[0]
weight_scale_s_ = Scalar(weight_scale)
weight_zero_point_s_ = Scalar(weight_zero_point)
__cret = ccall((:atg_fbgemm_linear_int8_weight_fp32_activation, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, input.pointer, weight.pointer, packed.pointer, col_offsets.pointer, weight_scale_s_.pointer, weight_zero_point_s_.pointer, bias.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
fbgemm_pack_gemm_matrix_fp16(input::Tensor)
Wrapper of C++ function void atg\\_fbgemm\\_pack\\_gemm\\_matrix\\_fp16(tensor *out\\_\\_, tensor input)
"""
function fbgemm_pack_gemm_matrix_fp16(input::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_fbgemm_pack_gemm_matrix_fp16, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, input.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
fbgemm_pack_quantized_matrix(input::Tensor)
Wrapper of C++ function void atg\\_fbgemm\\_pack\\_quantized\\_matrix(tensor *out\\_\\_, tensor input)
"""
function fbgemm_pack_quantized_matrix(input::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_fbgemm_pack_quantized_matrix, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, input.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
fbgemm_pack_quantized_matrix1(input::Tensor, K::Int64, n::Int64)
Wrapper of C++ function void atg\\_fbgemm\\_pack\\_quantized\\_matrix1(tensor *out\\_\\_, tensor input, int64\\_t K, int64\\_t n)
"""
function fbgemm_pack_quantized_matrix1(input::Tensor, K::Int64, n::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_fbgemm_pack_quantized_matrix1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Clonglong),
outputs__, input.pointer, K, n)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
feature_alpha_dropout(input::Tensor, p::Float64, train::Int)
Wrapper of C++ function void atg\\_feature\\_alpha\\_dropout(tensor *out\\_\\_, tensor input, double p, int train)
"""
function feature_alpha_dropout(input::Tensor, p::Float64, train::Int)
outputs__ = Int[0]
__cret = ccall((:atg_feature_alpha_dropout, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cdouble, Cint),
outputs__, input.pointer, p, train)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
feature_alpha_dropout!(self::Tensor, p::Float64, train::Int)
Wrapper of C++ function void atg\\_feature\\_alpha\\_dropout\\_(tensor *out\\_\\_, tensor self, double p, int train)
"""
function feature_alpha_dropout!(self::Tensor, p::Float64, train::Int)
outputs__ = Int[0]
__cret = ccall((:atg_feature_alpha_dropout_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cdouble, Cint),
outputs__, self.pointer, p, train)
return self
end
"""
feature_dropout(input::Tensor, p::Float64, train::Int)
Wrapper of C++ function void atg\\_feature\\_dropout(tensor *out\\_\\_, tensor input, double p, int train)
"""
function feature_dropout(input::Tensor, p::Float64, train::Int)
outputs__ = Int[0]
__cret = ccall((:atg_feature_dropout, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cdouble, Cint),
outputs__, input.pointer, p, train)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
feature_dropout!(self::Tensor, p::Float64, train::Int)
Wrapper of C++ function void atg\\_feature\\_dropout\\_(tensor *out\\_\\_, tensor self, double p, int train)
"""
function feature_dropout!(self::Tensor, p::Float64, train::Int)
outputs__ = Int[0]
__cret = ccall((:atg_feature_dropout_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cdouble, Cint),
outputs__, self.pointer, p, train)
return self
end
"""
fft(self::Tensor, signal_ndim::Int64, normalized::Int)
Wrapper of C++ function void atg\\_fft(tensor *out\\_\\_, tensor self, int64\\_t signal\\_ndim, int normalized)
"""
function fft(self::Tensor, signal_ndim::Int64, normalized::Int)
outputs__ = Int[0]
__cret = ccall((:atg_fft, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Cint),
outputs__, self.pointer, signal_ndim, normalized)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
import Base.fill!
"""
fill!(self::Tensor, value::TorchNumber)
Wrapper of C++ function void atg\\_fill\\_(tensor *out\\_\\_, tensor self, scalar value)
"""
function fill!(self::Tensor, value::TorchNumber)
outputs__ = Int[0]
value_s_ = Scalar(value)
__cret = ccall((:atg_fill_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, value_s_.pointer)
return self
end
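# Usage sketch (illustrative only): fills every element of `self` in place and returns it;
# `fill1!` below takes the fill value from a tensor instead of a plain number:
#
#     x = Tensor(zeros(Float32, 2, 2))
#     fill!(x, 7)   # x now holds 7.0f0 everywhere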
"""
fill1!(self::Tensor, value::Tensor)
Wrapper of C++ function void atg\\_fill\\_1(tensor *out\\_\\_, tensor self, tensor value)
"""
function fill1!(self::Tensor, value::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_fill_1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, value.pointer)
return self
end
"""
fill_diagonal!(self::Tensor, fill_value::TorchNumber, wrap::Int)
Wrapper of C++ function void atg\\_fill\\_diagonal\\_(tensor *out\\_\\_, tensor self, scalar fill\\_value, int wrap)
"""
function fill_diagonal!(self::Tensor, fill_value::TorchNumber, wrap::Int)
outputs__ = Int[0]
fill_value_s_ = Scalar(fill_value)
__cret = ccall((:atg_fill_diagonal_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, fill_value_s_.pointer, wrap)
return self
end
"""
flatten(self::Tensor, start_dim::Int64, end_dim::Int64)
Wrapper of C++ function void atg\\_flatten(tensor *out\\_\\_, tensor self, int64\\_t start\\_dim, int64\\_t end\\_dim)
"""
function flatten(self::Tensor, start_dim::Int64, end_dim::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_flatten, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Clonglong),
outputs__, self.pointer, start_dim, end_dim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
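# Usage sketch (illustrative only): `start_dim` and `end_dim` are 0-based, following the
# underlying C++ signature, so flattening an entire N-d tensor uses 0 and N-1:
#
#     x = Tensor(rand(Float32, 2, 3, 4))
#     flatten(x, 0, 2)   # 1-d tensor with 24 elements
#     flatten(x, 1, 2)   # collapses only the trailing two libtorch dimensions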
"""
flip(self::Tensor, dims_data::Array{Int64})
Wrapper of C++ function void atg\\_flip(tensor *out\\_\\_, tensor self, int64\\_t *dims\\_data, int dims\\_len)
"""
function flip(self::Tensor, dims_data::Array{Int64})
outputs__ = Int[0]
dims_len = length(dims_data)
__cret = ccall((:atg_flip, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, dims_data, dims_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
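# Usage sketch (illustrative only; dimension indices in `dims_data` are 0-based, as in the
# C++ API):
#
#     x = Tensor(Float32[1, 2, 3])
#     flip(x, [0])   # reversed copy: [3, 2, 1]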
import Base.floor
"""
floor(self::Tensor)
Wrapper of C++ function void atg\\_floor(tensor *out\\_\\_, tensor self)
"""
function floor(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_floor, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
floor!(self::Tensor)
Wrapper of C++ function void atg\\_floor\\_(tensor *out\\_\\_, tensor self)
"""
function floor!(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_floor_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
return self
end
"""
floor_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_floor\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function floor_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_floor_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
fmod(self::Tensor, other::TorchNumber)
Wrapper of C++ function void atg\\_fmod(tensor *out\\_\\_, tensor self, scalar other)
"""
function fmod(self::Tensor, other::TorchNumber)
outputs__ = Int[0]
other_s_ = Scalar(other)
__cret = ccall((:atg_fmod, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
fmod1(self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_fmod1(tensor *out\\_\\_, tensor self, tensor other)
"""
function fmod1(self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_fmod1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
fmod!(self::Tensor, other::TorchNumber)
Wrapper of C++ function void atg\\_fmod\\_(tensor *out\\_\\_, tensor self, scalar other)
"""
function fmod!(self::Tensor, other::TorchNumber)
outputs__ = Int[0]
other_s_ = Scalar(other)
__cret = ccall((:atg_fmod_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other_s_.pointer)
return self
end
"""
fmod1!(self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_fmod\\_1(tensor *out\\_\\_, tensor self, tensor other)
"""
function fmod1!(self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_fmod_1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other.pointer)
return self
end
"""
fmod_out(out::Tensor, self::Tensor, other::TorchNumber)
Wrapper of C++ function void atg\\_fmod\\_out(tensor *out\\_\\_, tensor out, tensor self, scalar other)
"""
function fmod_out(out::Tensor, self::Tensor, other::TorchNumber)
outputs__ = Int[0]
other_s_ = Scalar(other)
__cret = ccall((:atg_fmod_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, other_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
fmod_out1(out::Tensor, self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_fmod\\_out1(tensor *out\\_\\_, tensor out, tensor self, tensor other)
"""
function fmod_out1(out::Tensor, self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_fmod_out1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, other.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
frac(self::Tensor)
Wrapper of C++ function void atg\\_frac(tensor *out\\_\\_, tensor self)
"""
function frac(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_frac, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
frac!(self::Tensor)
Wrapper of C++ function void atg\\_frac\\_(tensor *out\\_\\_, tensor self)
"""
function frac!(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_frac_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
return self
end
"""
frac_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_frac\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function frac_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_frac_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
fractional_max_pool2d(self::Tensor, kernel_size_data::Array{Int64}, output_size_data::Array{Int64}, random_samples::Tensor)
Wrapper of C++ function void atg\\_fractional\\_max\\_pool2d(tensor *out\\_\\_, tensor self, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, int64\\_t *output\\_size\\_data, int output\\_size\\_len, tensor random\\_samples)
"""
function fractional_max_pool2d(self::Tensor, kernel_size_data::Array{Int64}, output_size_data::Array{Int64}, random_samples::Tensor)
outputs__ = Int[0, 0]
kernel_size_len = length(kernel_size_data)
output_size_len = length(output_size_data)
__cret = ccall((:atg_fractional_max_pool2d, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}),
outputs__, self.pointer, kernel_size_data, kernel_size_len, output_size_data, output_size_len, random_samples.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
fractional_max_pool2d_backward(grad_output::Tensor, self::Tensor, kernel_size_data::Array{Int64}, output_size_data::Array{Int64}, indices::Tensor)
Wrapper of C++ function void atg\\_fractional\\_max\\_pool2d\\_backward(tensor *out\\_\\_, tensor grad\\_output, tensor self, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, int64\\_t *output\\_size\\_data, int output\\_size\\_len, tensor indices)
"""
function fractional_max_pool2d_backward(grad_output::Tensor, self::Tensor, kernel_size_data::Array{Int64}, output_size_data::Array{Int64}, indices::Tensor)
outputs__ = Int[0]
kernel_size_len = length(kernel_size_data)
output_size_len = length(output_size_data)
__cret = ccall((:atg_fractional_max_pool2d_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}),
outputs__, grad_output.pointer, self.pointer, kernel_size_data, kernel_size_len, output_size_data, output_size_len, indices.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
fractional_max_pool2d_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, kernel_size_data::Array{Int64}, output_size_data::Array{Int64}, indices::Tensor)
Wrapper of C++ function void atg\\_fractional\\_max\\_pool2d\\_backward\\_out(tensor *out\\_\\_, tensor grad\\_input, tensor grad\\_output, tensor self, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, int64\\_t *output\\_size\\_data, int output\\_size\\_len, tensor indices)
"""
function fractional_max_pool2d_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, kernel_size_data::Array{Int64}, output_size_data::Array{Int64}, indices::Tensor)
outputs__ = Int[0]
kernel_size_len = length(kernel_size_data)
output_size_len = length(output_size_data)
__cret = ccall((:atg_fractional_max_pool2d_backward_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}),
outputs__, grad_input.pointer, grad_output.pointer, self.pointer, kernel_size_data, kernel_size_len, output_size_data, output_size_len, indices.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
fractional_max_pool2d_out(output::Tensor, indices::Tensor, self::Tensor, kernel_size_data::Array{Int64}, output_size_data::Array{Int64}, random_samples::Tensor)
Wrapper of C++ function void atg\\_fractional\\_max\\_pool2d\\_out(tensor *out\\_\\_, tensor output, tensor indices, tensor self, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, int64\\_t *output\\_size\\_data, int output\\_size\\_len, tensor random\\_samples)
"""
function fractional_max_pool2d_out(output::Tensor, indices::Tensor, self::Tensor, kernel_size_data::Array{Int64}, output_size_data::Array{Int64}, random_samples::Tensor)
outputs__ = Int[0, 0]
kernel_size_len = length(kernel_size_data)
output_size_len = length(output_size_data)
__cret = ccall((:atg_fractional_max_pool2d_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}),
outputs__, output.pointer, indices.pointer, self.pointer, kernel_size_data, kernel_size_len, output_size_data, output_size_len, random_samples.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
fractional_max_pool3d(self::Tensor, kernel_size_data::Array{Int64}, output_size_data::Array{Int64}, random_samples::Tensor)
Wrapper of C++ function void atg\\_fractional\\_max\\_pool3d(tensor *out\\_\\_, tensor self, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, int64\\_t *output\\_size\\_data, int output\\_size\\_len, tensor random\\_samples)
"""
function fractional_max_pool3d(self::Tensor, kernel_size_data::Array{Int64}, output_size_data::Array{Int64}, random_samples::Tensor)
outputs__ = Int[0, 0]
kernel_size_len = length(kernel_size_data)
output_size_len = length(output_size_data)
__cret = ccall((:atg_fractional_max_pool3d, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}),
outputs__, self.pointer, kernel_size_data, kernel_size_len, output_size_data, output_size_len, random_samples.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
fractional_max_pool3d_backward(grad_output::Tensor, self::Tensor, kernel_size_data::Array{Int64}, output_size_data::Array{Int64}, indices::Tensor)
Wrapper of C++ function void atg\\_fractional\\_max\\_pool3d\\_backward(tensor *out\\_\\_, tensor grad\\_output, tensor self, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, int64\\_t *output\\_size\\_data, int output\\_size\\_len, tensor indices)
"""
function fractional_max_pool3d_backward(grad_output::Tensor, self::Tensor, kernel_size_data::Array{Int64}, output_size_data::Array{Int64}, indices::Tensor)
outputs__ = Int[0]
kernel_size_len = length(kernel_size_data)
output_size_len = length(output_size_data)
__cret = ccall((:atg_fractional_max_pool3d_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}),
outputs__, grad_output.pointer, self.pointer, kernel_size_data, kernel_size_len, output_size_data, output_size_len, indices.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
fractional_max_pool3d_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, kernel_size_data::Array{Int64}, output_size_data::Array{Int64}, indices::Tensor)
Wrapper of C++ function void atg\\_fractional\\_max\\_pool3d\\_backward\\_out(tensor *out\\_\\_, tensor grad\\_input, tensor grad\\_output, tensor self, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, int64\\_t *output\\_size\\_data, int output\\_size\\_len, tensor indices)
"""
function fractional_max_pool3d_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, kernel_size_data::Array{Int64}, output_size_data::Array{Int64}, indices::Tensor)
outputs__ = Int[0]
kernel_size_len = length(kernel_size_data)
output_size_len = length(output_size_data)
__cret = ccall((:atg_fractional_max_pool3d_backward_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}),
outputs__, grad_input.pointer, grad_output.pointer, self.pointer, kernel_size_data, kernel_size_len, output_size_data, output_size_len, indices.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
fractional_max_pool3d_out(output::Tensor, indices::Tensor, self::Tensor, kernel_size_data::Array{Int64}, output_size_data::Array{Int64}, random_samples::Tensor)
Wrapper of C++ function void atg\\_fractional\\_max\\_pool3d\\_out(tensor *out\\_\\_, tensor output, tensor indices, tensor self, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, int64\\_t *output\\_size\\_data, int output\\_size\\_len, tensor random\\_samples)
"""
function fractional_max_pool3d_out(output::Tensor, indices::Tensor, self::Tensor, kernel_size_data::Array{Int64}, output_size_data::Array{Int64}, random_samples::Tensor)
outputs__ = Int[0, 0]
kernel_size_len = length(kernel_size_data)
output_size_len = length(output_size_data)
__cret = ccall((:atg_fractional_max_pool3d_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}),
outputs__, output.pointer, indices.pointer, self.pointer, kernel_size_data, kernel_size_len, output_size_data, output_size_len, random_samples.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
frobenius_norm(self::Tensor)
Wrapper of C++ function void atg\\_frobenius\\_norm(tensor *out\\_\\_, tensor self)
"""
function frobenius_norm(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_frobenius_norm, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
frobenius_norm1(self::Tensor, dim_data::Array{Int64}, keepdim::Int)
Wrapper of C++ function void atg\\_frobenius\\_norm1(tensor *out\\_\\_, tensor self, int64\\_t *dim\\_data, int dim\\_len, int keepdim)
"""
function frobenius_norm1(self::Tensor, dim_data::Array{Int64}, keepdim::Int)
outputs__ = Int[0]
dim_len = length(dim_data)
__cret = ccall((:atg_frobenius_norm1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint),
outputs__, self.pointer, dim_data, dim_len, keepdim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
frobenius_norm_out(out::Tensor, self::Tensor, dim_data::Array{Int64}, keepdim::Int)
Wrapper of C++ function void atg\\_frobenius\\_norm\\_out(tensor *out\\_\\_, tensor out, tensor self, int64\\_t *dim\\_data, int dim\\_len, int keepdim)
"""
function frobenius_norm_out(out::Tensor, self::Tensor, dim_data::Array{Int64}, keepdim::Int)
outputs__ = Int[0]
dim_len = length(dim_data)
__cret = ccall((:atg_frobenius_norm_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint),
outputs__, out.pointer, self.pointer, dim_data, dim_len, keepdim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
full(size_data::Array{Int64}, fill_value::TorchNumber, options_kind::Int, options_device::Int)
Wrapper of C++ function void atg\\_full(tensor *out\\_\\_, int64\\_t *size\\_data, int size\\_len, scalar fill\\_value, int options\\_kind, int options\\_device)
"""
function full(size_data::Array{Int64}, fill_value::TorchNumber, options_kind::Int, options_device::Int)
outputs__ = Int[0]
size_len = length(size_data)
fill_value_s_ = Scalar(fill_value)
__cret = ccall((:atg_full, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Cint),
outputs__, size_data, size_len, fill_value_s_.pointer, options_kind, options_device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
full_like(self::Tensor, fill_value::TorchNumber)
Wrapper of C++ function void atg\\_full\\_like(tensor *out\\_\\_, tensor self, scalar fill\\_value)
"""
function full_like(self::Tensor, fill_value::TorchNumber)
outputs__ = Int[0]
fill_value_s_ = Scalar(fill_value)
__cret = ccall((:atg_full_like, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, fill_value_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
full_like1(self::Tensor, fill_value::TorchNumber, options_kind::Int, options_device::Int)
Wrapper of C++ function void atg\\_full\\_like1(tensor *out\\_\\_, tensor self, scalar fill\\_value, int options\\_kind, int options\\_device)
"""
function full_like1(self::Tensor, fill_value::TorchNumber, options_kind::Int, options_device::Int)
outputs__ = Int[0]
fill_value_s_ = Scalar(fill_value)
__cret = ccall((:atg_full_like1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint),
outputs__, self.pointer, fill_value_s_.pointer, options_kind, options_device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
full_out(out::Tensor, size_data::Array{Int64}, fill_value::TorchNumber)
Wrapper of C++ function void atg\\_full\\_out(tensor *out\\_\\_, tensor out, int64\\_t *size\\_data, int size\\_len, scalar fill\\_value)
"""
function full_out(out::Tensor, size_data::Array{Int64}, fill_value::TorchNumber)
outputs__ = Int[0]
size_len = length(size_data)
fill_value_s_ = Scalar(fill_value)
__cret = ccall((:atg_full_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}),
outputs__, out.pointer, size_data, size_len, fill_value_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
gather(self::Tensor, dim::Int64, index::Tensor, sparse_grad::Int)
Wrapper of C++ function void atg\\_gather(tensor *out\\_\\_, tensor self, int64\\_t dim, tensor index, int sparse\\_grad)
"""
function gather(self::Tensor, dim::Int64, index::Tensor, sparse_grad::Int)
outputs__ = Int[0]
__cret = ccall((:atg_gather, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Ptr{Cvoid}, Cint),
outputs__, self.pointer, dim, index.pointer, sparse_grad)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
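# Usage sketch (illustrative only): picks entries from `self` along the 0-based dimension
# `dim` at the positions given in `index`; `sparse_grad` is an Int flag for the C++ bool:
#
#     src = Tensor(Float32[10, 20, 30, 40])
#     idx = Tensor(Int64[3, 0, 0])
#     gather(src, 0, idx, 0)   # [40, 10, 10]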
"""
gather_out(out::Tensor, self::Tensor, dim::Int64, index::Tensor, sparse_grad::Int)
Wrapper of C++ function void atg\\_gather\\_out(tensor *out\\_\\_, tensor out, tensor self, int64\\_t dim, tensor index, int sparse\\_grad)
"""
function gather_out(out::Tensor, self::Tensor, dim::Int64, index::Tensor, sparse_grad::Int)
outputs__ = Int[0]
__cret = ccall((:atg_gather_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Ptr{Cvoid}, Cint),
outputs__, out.pointer, self.pointer, dim, index.pointer, sparse_grad)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
ge(self::Tensor, other::TorchNumber)
Wrapper of C++ function void atg\\_ge(tensor *out\\_\\_, tensor self, scalar other)
"""
function ge(self::Tensor, other::TorchNumber)
outputs__ = Int[0]
other_s_ = Scalar(other)
__cret = ccall((:atg_ge, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
ge1(self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_ge1(tensor *out\\_\\_, tensor self, tensor other)
"""
function ge1(self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_ge1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
ge!(self::Tensor, other::TorchNumber)
Wrapper of C++ function void atg\\_ge\\_(tensor *out\\_\\_, tensor self, scalar other)
"""
function ge!(self::Tensor, other::TorchNumber)
outputs__ = Int[0]
other_s_ = Scalar(other)
__cret = ccall((:atg_ge_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other_s_.pointer)
return self
end
"""
ge1!(self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_ge\\_1(tensor *out\\_\\_, tensor self, tensor other)
"""
function ge1!(self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_ge_1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other.pointer)
return self
end
"""
ge_out(out::Tensor, self::Tensor, other::TorchNumber)
Wrapper of C++ function void atg\\_ge\\_out(tensor *out\\_\\_, tensor out, tensor self, scalar other)
"""
function ge_out(out::Tensor, self::Tensor, other::TorchNumber)
outputs__ = Int[0]
other_s_ = Scalar(other)
__cret = ccall((:atg_ge_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, other_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
ge_out1(out::Tensor, self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_ge\\_out1(tensor *out\\_\\_, tensor out, tensor self, tensor other)
"""
function ge_out1(out::Tensor, self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_ge_out1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, other.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
gelu(self::Tensor)
Wrapper of C++ function void atg\\_gelu(tensor *out\\_\\_, tensor self)
"""
function gelu(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_gelu, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
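# Usage sketch (illustrative only): elementwise GELU activation, x * Φ(x) with Φ the
# standard normal CDF; `gelu_backward` below computes the gradient w.r.t. the input:
#
#     x = Tensor(Float32[-1, 0, 1])
#     gelu(x)   # ≈ [-0.1587, 0.0, 0.8413]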
"""
gelu_backward(grad::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_gelu\\_backward(tensor *out\\_\\_, tensor grad, tensor self)
"""
function gelu_backward(grad::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_gelu_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, grad.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
geometric!(self::Tensor, p::Float64)
Wrapper of C++ function void atg\\_geometric\\_(tensor *out\\_\\_, tensor self, double p)
"""
function geometric!(self::Tensor, p::Float64)
outputs__ = Int[0]
__cret = ccall((:atg_geometric_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cdouble),
outputs__, self.pointer, p)
return self
end
"""
geqrf(self::Tensor)
Wrapper of C++ function void atg\\_geqrf(tensor *out\\_\\_, tensor self)
"""
function geqrf(self::Tensor)
outputs__ = Int[0, 0]
__cret = ccall((:atg_geqrf, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
geqrf_out(a::Tensor, tau::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_geqrf\\_out(tensor *out\\_\\_, tensor a, tensor tau, tensor self)
"""
function geqrf_out(a::Tensor, tau::Tensor, self::Tensor)
outputs__ = Int[0, 0]
__cret = ccall((:atg_geqrf_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, a.pointer, tau.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
ger(self::Tensor, vec2::Tensor)
Wrapper of C++ function void atg\\_ger(tensor *out\\_\\_, tensor self, tensor vec2)
"""
function ger(self::Tensor, vec2::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_ger, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, vec2.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
ger_out(out::Tensor, self::Tensor, vec2::Tensor)
Wrapper of C++ function void atg\\_ger\\_out(tensor *out\\_\\_, tensor out, tensor self, tensor vec2)
"""
function ger_out(out::Tensor, self::Tensor, vec2::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_ger_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, vec2.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
glu(self::Tensor, dim::Int64)
Wrapper of C++ function void atg\\_glu(tensor *out\\_\\_, tensor self, int64\\_t dim)
"""
function glu(self::Tensor, dim::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_glu, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, self.pointer, dim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
glu_backward(grad_output::Tensor, self::Tensor, dim::Int64)
Wrapper of C++ function void atg\\_glu\\_backward(tensor *out\\_\\_, tensor grad\\_output, tensor self, int64\\_t dim)
"""
function glu_backward(grad_output::Tensor, self::Tensor, dim::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_glu_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, grad_output.pointer, self.pointer, dim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
glu_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, dim::Int64)
Wrapper of C++ function void atg\\_glu\\_backward\\_out(tensor *out\\_\\_, tensor grad\\_input, tensor grad\\_output, tensor self, int64\\_t dim)
"""
function glu_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, dim::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_glu_backward_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, grad_input.pointer, grad_output.pointer, self.pointer, dim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
glu_out(out::Tensor, self::Tensor, dim::Int64)
Wrapper of C++ function void atg\\_glu\\_out(tensor *out\\_\\_, tensor out, tensor self, int64\\_t dim)
"""
function glu_out(out::Tensor, self::Tensor, dim::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_glu_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, out.pointer, self.pointer, dim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
grad(self::Tensor)
Wrapper of C++ function void atg\\_grad(tensor *out\\_\\_, tensor self)
"""
function grad(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_grad, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
grid_sampler(input::Tensor, grid::Tensor, interpolation_mode::Int64, padding_mode::Int64, align_corners::Int)
Wrapper of C++ function void atg\\_grid\\_sampler(tensor *out\\_\\_, tensor input, tensor grid, int64\\_t interpolation\\_mode, int64\\_t padding\\_mode, int align\\_corners)
"""
function grid_sampler(input::Tensor, grid::Tensor, interpolation_mode::Int64, padding_mode::Int64, align_corners::Int)
outputs__ = Int[0]
__cret = ccall((:atg_grid_sampler, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Clonglong, Cint),
outputs__, input.pointer, grid.pointer, interpolation_mode, padding_mode, align_corners)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
grid_sampler_2d(input::Tensor, grid::Tensor, interpolation_mode::Int64, padding_mode::Int64, align_corners::Int)
Wrapper of C++ function void atg\\_grid\\_sampler\\_2d(tensor *out\\_\\_, tensor input, tensor grid, int64\\_t interpolation\\_mode, int64\\_t padding\\_mode, int align\\_corners)
"""
function grid_sampler_2d(input::Tensor, grid::Tensor, interpolation_mode::Int64, padding_mode::Int64, align_corners::Int)
outputs__ = Int[0]
__cret = ccall((:atg_grid_sampler_2d, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Clonglong, Cint),
outputs__, input.pointer, grid.pointer, interpolation_mode, padding_mode, align_corners)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
grid_sampler_2d_backward(grad_output::Tensor, input::Tensor, grid::Tensor, interpolation_mode::Int64, padding_mode::Int64, align_corners::Int)
Wrapper of C++ function void atg\\_grid\\_sampler\\_2d\\_backward(tensor *out\\_\\_, tensor grad\\_output, tensor input, tensor grid, int64\\_t interpolation\\_mode, int64\\_t padding\\_mode, int align\\_corners)
"""
function grid_sampler_2d_backward(grad_output::Tensor, input::Tensor, grid::Tensor, interpolation_mode::Int64, padding_mode::Int64, align_corners::Int)
outputs__ = Int[0, 0]
__cret = ccall((:atg_grid_sampler_2d_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Clonglong, Cint),
outputs__, grad_output.pointer, input.pointer, grid.pointer, interpolation_mode, padding_mode, align_corners)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
grid_sampler_3d(input::Tensor, grid::Tensor, interpolation_mode::Int64, padding_mode::Int64, align_corners::Int)
Wrapper of C++ function void atg\\_grid\\_sampler\\_3d(tensor *out\\_\\_, tensor input, tensor grid, int64\\_t interpolation\\_mode, int64\\_t padding\\_mode, int align\\_corners)
"""
function grid_sampler_3d(input::Tensor, grid::Tensor, interpolation_mode::Int64, padding_mode::Int64, align_corners::Int)
outputs__ = Int[0]
__cret = ccall((:atg_grid_sampler_3d, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Clonglong, Cint),
outputs__, input.pointer, grid.pointer, interpolation_mode, padding_mode, align_corners)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
grid_sampler_3d_backward(grad_output::Tensor, input::Tensor, grid::Tensor, interpolation_mode::Int64, padding_mode::Int64, align_corners::Int)
Wrapper of C++ function void atg\\_grid\\_sampler\\_3d\\_backward(tensor *out\\_\\_, tensor grad\\_output, tensor input, tensor grid, int64\\_t interpolation\\_mode, int64\\_t padding\\_mode, int align\\_corners)
"""
function grid_sampler_3d_backward(grad_output::Tensor, input::Tensor, grid::Tensor, interpolation_mode::Int64, padding_mode::Int64, align_corners::Int)
outputs__ = Int[0, 0]
__cret = ccall((:atg_grid_sampler_3d_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Clonglong, Cint),
outputs__, grad_output.pointer, input.pointer, grid.pointer, interpolation_mode, padding_mode, align_corners)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
group_norm(input::Tensor, num_groups::Int64, weight::Tensor, bias::Tensor, eps::Float64, cudnn_enabled::Int)
Wrapper of C++ function void atg\\_group\\_norm(tensor *out\\_\\_, tensor input, int64\\_t num\\_groups, tensor weight, tensor bias, double eps, int cudnn\\_enabled)
"""
function group_norm(input::Tensor, num_groups::Int64, weight::Tensor, bias::Tensor, eps::Float64, cudnn_enabled::Int)
outputs__ = Int[0]
__cret = ccall((:atg_group_norm, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Ptr{Cvoid}, Ptr{Cvoid}, Cdouble, Cint),
outputs__, input.pointer, num_groups, weight.pointer, bias.pointer, eps, cudnn_enabled)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
gru(input::Tensor, hx::Tensor, params_data::Array{Tensor{T,N}}, has_biases::Int, num_layers::Int64, dropout::Float64, train::Int, bidirectional::Int, batch_first::Int)
Wrapper of C++ function void atg\\_gru(tensor *out\\_\\_, tensor input, tensor hx, tensor *params\\_data, int params\\_len, int has\\_biases, int64\\_t num\\_layers, double dropout, int train, int bidirectional, int batch\\_first)
"""
function gru(input::Tensor, hx::Tensor, params_data::Array{Tensor{T,N}}, has_biases::Int, num_layers::Int64, dropout::Float64, train::Int, bidirectional::Int, batch_first::Int) where {T,N}
outputs__ = Int[0, 0]
params_data_ta_ = map(x->x.pointer, params_data)
params_len = length(params_data)
__cret = ccall((:atg_gru, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint, Clonglong, Cdouble, Cint, Cint, Cint),
outputs__, input.pointer, hx.pointer, params_data_ta_, params_len, has_biases, num_layers, dropout, train, bidirectional, batch_first)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
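# For the RNN wrappers (`gru`, `gru1`, `gru_cell`, ...) the flat parameter list is passed
# as a Julia array of tensors; the wrapper extracts the raw pointers with
# `map(x->x.pointer, params_data)` and passes the length separately, mirroring the
# `tensor *params_data, int params_len` pair in the C signature. Integer arguments such as
# `has_biases`, `train`, `bidirectional` and `batch_first` are C-style 0/1 booleans.
# Hypothetical call shape (tensor construction is assumed, not defined in this file):
#   output, h_n = gru(input, hx, [w_ih, w_hh, b_ih, b_hh], 1, 1, 0.0, 1, 0, 0)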
"""
gru1(data::Tensor, batch_sizes::Tensor, hx::Tensor, params_data::Array{Tensor{T,N}}, has_biases::Int, num_layers::Int64, dropout::Float64, train::Int, bidirectional::Int)
Wrapper of C++ function void atg\\_gru1(tensor *out\\_\\_, tensor data, tensor batch\\_sizes, tensor hx, tensor *params\\_data, int params\\_len, int has\\_biases, int64\\_t num\\_layers, double dropout, int train, int bidirectional)
"""
function gru1(data::Tensor, batch_sizes::Tensor, hx::Tensor, params_data::Array{Tensor{T,N}}, has_biases::Int, num_layers::Int64, dropout::Float64, train::Int, bidirectional::Int) where {T,N}
outputs__ = Int[0, 0]
params_data_ta_ = map(x->x.pointer, params_data)
params_len = length(params_data)
__cret = ccall((:atg_gru1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint, Clonglong, Cdouble, Cint, Cint),
outputs__, data.pointer, batch_sizes.pointer, hx.pointer, params_data_ta_, params_len, has_biases, num_layers, dropout, train, bidirectional)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
gru_cell(input::Tensor, hx::Tensor, w_ih::Tensor, w_hh::Tensor, b_ih::Tensor, b_hh::Tensor)
Wrapper of C++ function void atg\\_gru\\_cell(tensor *out\\_\\_, tensor input, tensor hx, tensor w\\_ih, tensor w\\_hh, tensor b\\_ih, tensor b\\_hh)
"""
function gru_cell(input::Tensor, hx::Tensor, w_ih::Tensor, w_hh::Tensor, b_ih::Tensor, b_hh::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_gru_cell, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, input.pointer, hx.pointer, w_ih.pointer, w_hh.pointer, b_ih.pointer, b_hh.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
gt(self::Tensor, other::TorchNumber)
Wrapper of C++ function void atg\\_gt(tensor *out\\_\\_, tensor self, scalar other)
"""
function gt(self::Tensor, other::TorchNumber)
outputs__ = Int[0]
other_s_ = Scalar(other)
__cret = ccall((:atg_gt, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
gt1(self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_gt1(tensor *out\\_\\_, tensor self, tensor other)
"""
function gt1(self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_gt1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
gt!(self::Tensor, other::TorchNumber)
Wrapper of C++ function void atg\\_gt\\_(tensor *out\\_\\_, tensor self, scalar other)
"""
function gt!(self::Tensor, other::TorchNumber)
outputs__ = Int[0]
other_s_ = Scalar(other)
__cret = ccall((:atg_gt_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other_s_.pointer)
return self
end
"""
gt1!(self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_gt\\_1(tensor *out\\_\\_, tensor self, tensor other)
"""
function gt1!(self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_gt_1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other.pointer)
return self
end
"""
gt_out(out::Tensor, self::Tensor, other::TorchNumber)
Wrapper of C++ function void atg\\_gt\\_out(tensor *out\\_\\_, tensor out, tensor self, scalar other)
"""
function gt_out(out::Tensor, self::Tensor, other::TorchNumber)
outputs__ = Int[0]
other_s_ = Scalar(other)
__cret = ccall((:atg_gt_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, other_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
gt_out1(out::Tensor, self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_gt\\_out1(tensor *out\\_\\_, tensor out, tensor self, tensor other)
"""
function gt_out1(out::Tensor, self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_gt_out1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, other.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
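# The comparison family illustrates the generated overload naming scheme:
#   gt(self, other::TorchNumber)    -- elementwise `>` against a scalar, new result tensor
#   gt1(self, other::Tensor)        -- elementwise `>` against another tensor
#   gt!(self, other) / gt1!(...)    -- in-place variants that overwrite and return `self`
#   gt_out(out, self, other) / gt_out1(...) -- write the result into a preallocated `out`
# Scalar arguments are first wrapped via `Scalar(other)` so the C side receives a `scalar`
# handle. The same {base, base1, base!, base1!, base_out, base_out1} pattern appears again
# for `le` later in this file.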
"""
hamming_window(window_length::Int64, options_kind::Int, options_device::Int)
Wrapper of C++ function void atg\\_hamming\\_window(tensor *out\\_\\_, int64\\_t window\\_length, int options\\_kind, int options\\_device)
"""
function hamming_window(window_length::Int64, options_kind::Int, options_device::Int)
outputs__ = Int[0]
__cret = ccall((:atg_hamming_window, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Clonglong, Cint, Cint),
outputs__, window_length, options_kind, options_device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
hamming_window1(window_length::Int64, periodic::Int, options_kind::Int, options_device::Int)
Wrapper of C++ function void atg\\_hamming\\_window1(tensor *out\\_\\_, int64\\_t window\\_length, int periodic, int options\\_kind, int options\\_device)
"""
function hamming_window1(window_length::Int64, periodic::Int, options_kind::Int, options_device::Int)
outputs__ = Int[0]
__cret = ccall((:atg_hamming_window1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Clonglong, Cint, Cint, Cint),
outputs__, window_length, periodic, options_kind, options_device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
hamming_window2(window_length::Int64, periodic::Int, alpha::Float64, options_kind::Int, options_device::Int)
Wrapper of C++ function void atg\\_hamming\\_window2(tensor *out\\_\\_, int64\\_t window\\_length, int periodic, double alpha, int options\\_kind, int options\\_device)
"""
function hamming_window2(window_length::Int64, periodic::Int, alpha::Float64, options_kind::Int, options_device::Int)
outputs__ = Int[0]
__cret = ccall((:atg_hamming_window2, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Clonglong, Cint, Cdouble, Cint, Cint),
outputs__, window_length, periodic, alpha, options_kind, options_device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
hamming_window3(window_length::Int64, periodic::Int, alpha::Float64, beta::Float64, options_kind::Int, options_device::Int)
Wrapper of C++ function void atg\\_hamming\\_window3(tensor *out\\_\\_, int64\\_t window\\_length, int periodic, double alpha, double beta, int options\\_kind, int options\\_device)
"""
function hamming_window3(window_length::Int64, periodic::Int, alpha::Float64, beta::Float64, options_kind::Int, options_device::Int)
outputs__ = Int[0]
__cret = ccall((:atg_hamming_window3, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Clonglong, Cint, Cdouble, Cdouble, Cint, Cint),
outputs__, window_length, periodic, alpha, beta, options_kind, options_device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
hann_window(window_length::Int64, options_kind::Int, options_device::Int)
Wrapper of C++ function void atg\\_hann\\_window(tensor *out\\_\\_, int64\\_t window\\_length, int options\\_kind, int options\\_device)
"""
function hann_window(window_length::Int64, options_kind::Int, options_device::Int)
outputs__ = Int[0]
__cret = ccall((:atg_hann_window, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Clonglong, Cint, Cint),
outputs__, window_length, options_kind, options_device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
hann_window1(window_length::Int64, periodic::Int, options_kind::Int, options_device::Int)
Wrapper of C++ function void atg\\_hann\\_window1(tensor *out\\_\\_, int64\\_t window\\_length, int periodic, int options\\_kind, int options\\_device)
"""
function hann_window1(window_length::Int64, periodic::Int, options_kind::Int, options_device::Int)
outputs__ = Int[0]
__cret = ccall((:atg_hann_window1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Clonglong, Cint, Cint, Cint),
outputs__, window_length, periodic, options_kind, options_device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
hardshrink(self::Tensor)
Wrapper of C++ function void atg\\_hardshrink(tensor *out\\_\\_, tensor self)
"""
function hardshrink(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_hardshrink, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
hardshrink_backward(grad_out::Tensor, self::Tensor, lambd::TorchNumber)
Wrapper of C++ function void atg\\_hardshrink\\_backward(tensor *out\\_\\_, tensor grad\\_out, tensor self, scalar lambd)
"""
function hardshrink_backward(grad_out::Tensor, self::Tensor, lambd::TorchNumber)
outputs__ = Int[0]
lambd_s_ = Scalar(lambd)
__cret = ccall((:atg_hardshrink_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, grad_out.pointer, self.pointer, lambd_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
hardtanh(self::Tensor)
Wrapper of C++ function void atg\\_hardtanh(tensor *out\\_\\_, tensor self)
"""
function hardtanh(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_hardtanh, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
hardtanh!(self::Tensor)
Wrapper of C++ function void atg\\_hardtanh\\_(tensor *out\\_\\_, tensor self)
"""
function hardtanh!(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_hardtanh_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
return self
end
"""
hardtanh_backward(grad_output::Tensor, self::Tensor, min_val::TorchNumber, max_val::TorchNumber)
Wrapper of C++ function void atg\\_hardtanh\\_backward(tensor *out\\_\\_, tensor grad\\_output, tensor self, scalar min\\_val, scalar max\\_val)
"""
function hardtanh_backward(grad_output::Tensor, self::Tensor, min_val::TorchNumber, max_val::TorchNumber)
outputs__ = Int[0]
min_val_s_ = Scalar(min_val)
max_val_s_ = Scalar(max_val)
__cret = ccall((:atg_hardtanh_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, grad_output.pointer, self.pointer, min_val_s_.pointer, max_val_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
hardtanh_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, min_val::TorchNumber, max_val::TorchNumber)
Wrapper of C++ function void atg\\_hardtanh\\_backward\\_out(tensor *out\\_\\_, tensor grad\\_input, tensor grad\\_output, tensor self, scalar min\\_val, scalar max\\_val)
"""
function hardtanh_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, min_val::TorchNumber, max_val::TorchNumber)
outputs__ = Int[0]
min_val_s_ = Scalar(min_val)
max_val_s_ = Scalar(max_val)
__cret = ccall((:atg_hardtanh_backward_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, grad_input.pointer, grad_output.pointer, self.pointer, min_val_s_.pointer, max_val_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
hardtanh_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_hardtanh\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function hardtanh_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_hardtanh_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
hinge_embedding_loss(self::Tensor, target::Tensor, margin::Float64, reduction::Int64)
Wrapper of C++ function void atg\\_hinge\\_embedding\\_loss(tensor *out\\_\\_, tensor self, tensor target, double margin, int64\\_t reduction)
"""
function hinge_embedding_loss(self::Tensor, target::Tensor, margin::Float64, reduction::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_hinge_embedding_loss, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cdouble, Clonglong),
outputs__, self.pointer, target.pointer, margin, reduction)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
histc(self::Tensor, bins::Int64)
Wrapper of C++ function void atg\\_histc(tensor *out\\_\\_, tensor self, int64\\_t bins)
"""
function histc(self::Tensor, bins::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_histc, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, self.pointer, bins)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
histc_out(out::Tensor, self::Tensor, bins::Int64)
Wrapper of C++ function void atg\\_histc\\_out(tensor *out\\_\\_, tensor out, tensor self, int64\\_t bins)
"""
function histc_out(out::Tensor, self::Tensor, bins::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_histc_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, out.pointer, self.pointer, bins)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
hspmm(mat1::Tensor, mat2::Tensor)
Wrapper of C++ function void atg\\_hspmm(tensor *out\\_\\_, tensor mat1, tensor mat2)
"""
function hspmm(mat1::Tensor, mat2::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_hspmm, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, mat1.pointer, mat2.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
hspmm_out(out::Tensor, mat1::Tensor, mat2::Tensor)
Wrapper of C++ function void atg\\_hspmm\\_out(tensor *out\\_\\_, tensor out, tensor mat1, tensor mat2)
"""
function hspmm_out(out::Tensor, mat1::Tensor, mat2::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_hspmm_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, mat1.pointer, mat2.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
ifft(self::Tensor, signal_ndim::Int64, normalized::Int)
Wrapper of C++ function void atg\\_ifft(tensor *out\\_\\_, tensor self, int64\\_t signal\\_ndim, int normalized)
"""
function ifft(self::Tensor, signal_ndim::Int64, normalized::Int)
outputs__ = Int[0]
__cret = ccall((:atg_ifft, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Cint),
outputs__, self.pointer, signal_ndim, normalized)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
im2col(self::Tensor, kernel_size_data::Array{Int64}, dilation_data::Array{Int64}, padding_data::Array{Int64}, stride_data::Array{Int64})
Wrapper of C++ function void atg\\_im2col(tensor *out\\_\\_, tensor self, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, int64\\_t *dilation\\_data, int dilation\\_len, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *stride\\_data, int stride\\_len)
"""
function im2col(self::Tensor, kernel_size_data::Array{Int64}, dilation_data::Array{Int64}, padding_data::Array{Int64}, stride_data::Array{Int64})
outputs__ = Int[0]
kernel_size_len = length(kernel_size_data)
dilation_len = length(dilation_data)
padding_len = length(padding_data)
stride_len = length(stride_data)
__cret = ccall((:atg_im2col, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint),
outputs__, self.pointer, kernel_size_data, kernel_size_len, dilation_data, dilation_len, padding_data, padding_len, stride_data, stride_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
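# `im2col` shows how integer-list arguments are handled: each `Array{Int64}` (kernel size,
# dilation, padding, stride) is passed as a raw pointer plus an explicit length, matching
# the `int64_t *..._data, int ..._len` pairs of the C function. A hedged sketch, assuming a
# 4-D input tensor `x` (N, C, H, W) constructed elsewhere:
#   cols = im2col(x, Int64[3, 3], Int64[1, 1], Int64[1, 1], Int64[1, 1])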
"""
im2col_backward(grad_output::Tensor, input_size_data::Array{Int64}, kernel_size_data::Array{Int64}, dilation_data::Array{Int64}, padding_data::Array{Int64}, stride_data::Array{Int64})
Wrapper of C++ function void atg\\_im2col\\_backward(tensor *out\\_\\_, tensor grad\\_output, int64\\_t *input\\_size\\_data, int input\\_size\\_len, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, int64\\_t *dilation\\_data, int dilation\\_len, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *stride\\_data, int stride\\_len)
"""
function im2col_backward(grad_output::Tensor, input_size_data::Array{Int64}, kernel_size_data::Array{Int64}, dilation_data::Array{Int64}, padding_data::Array{Int64}, stride_data::Array{Int64})
outputs__ = Int[0]
input_size_len = length(input_size_data)
kernel_size_len = length(kernel_size_data)
dilation_len = length(dilation_data)
padding_len = length(padding_data)
stride_len = length(stride_data)
__cret = ccall((:atg_im2col_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint),
outputs__, grad_output.pointer, input_size_data, input_size_len, kernel_size_data, kernel_size_len, dilation_data, dilation_len, padding_data, padding_len, stride_data, stride_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
im2col_backward_out(grad_input::Tensor, grad_output::Tensor, input_size_data::Array{Int64}, kernel_size_data::Array{Int64}, dilation_data::Array{Int64}, padding_data::Array{Int64}, stride_data::Array{Int64})
Wrapper of C++ function void atg\\_im2col\\_backward\\_out(tensor *out\\_\\_, tensor grad\\_input, tensor grad\\_output, int64\\_t *input\\_size\\_data, int input\\_size\\_len, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, int64\\_t *dilation\\_data, int dilation\\_len, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *stride\\_data, int stride\\_len)
"""
function im2col_backward_out(grad_input::Tensor, grad_output::Tensor, input_size_data::Array{Int64}, kernel_size_data::Array{Int64}, dilation_data::Array{Int64}, padding_data::Array{Int64}, stride_data::Array{Int64})
outputs__ = Int[0]
input_size_len = length(input_size_data)
kernel_size_len = length(kernel_size_data)
dilation_len = length(dilation_data)
padding_len = length(padding_data)
stride_len = length(stride_data)
__cret = ccall((:atg_im2col_backward_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint),
outputs__, grad_input.pointer, grad_output.pointer, input_size_data, input_size_len, kernel_size_data, kernel_size_len, dilation_data, dilation_len, padding_data, padding_len, stride_data, stride_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
im2col_out(out::Tensor, self::Tensor, kernel_size_data::Array{Int64}, dilation_data::Array{Int64}, padding_data::Array{Int64}, stride_data::Array{Int64})
Wrapper of C++ function void atg\\_im2col\\_out(tensor *out\\_\\_, tensor out, tensor self, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, int64\\_t *dilation\\_data, int dilation\\_len, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *stride\\_data, int stride\\_len)
"""
function im2col_out(out::Tensor, self::Tensor, kernel_size_data::Array{Int64}, dilation_data::Array{Int64}, padding_data::Array{Int64}, stride_data::Array{Int64})
outputs__ = Int[0]
kernel_size_len = length(kernel_size_data)
dilation_len = length(dilation_data)
padding_len = length(padding_data)
stride_len = length(stride_data)
__cret = ccall((:atg_im2col_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint),
outputs__, out.pointer, self.pointer, kernel_size_data, kernel_size_len, dilation_data, dilation_len, padding_data, padding_len, stride_data, stride_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
import Base.imag
"""
imag(self::Tensor)
Wrapper of C++ function void atg\\_imag(tensor *out\\_\\_, tensor self)
"""
function imag(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_imag, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
imag_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_imag\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function imag_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_imag_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
index(self::Tensor, indices_data::Array{Tensor{T,N}})
Wrapper of C++ function void atg\\_index(tensor *out\\_\\_, tensor self, tensor *indices\\_data, int indices\\_len)
"""
function index(self::Tensor, indices_data::Array{Tensor{T,N}}) where {T,N}
outputs__ = Int[0]
indices_data_ta_ = map(x->x.pointer, indices_data)
indices_len = length(indices_data)
__cret = ccall((:atg_index, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, indices_data_ta_, indices_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
index_add(self::Tensor, dim::Int64, index::Tensor, source::Tensor)
Wrapper of C++ function void atg\\_index\\_add(tensor *out\\_\\_, tensor self, int64\\_t dim, tensor index, tensor source)
"""
function index_add(self::Tensor, dim::Int64, index::Tensor, source::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_index_add, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, dim, index.pointer, source.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
index_add!(self::Tensor, dim::Int64, index::Tensor, source::Tensor)
Wrapper of C++ function void atg\\_index\\_add\\_(tensor *out\\_\\_, tensor self, int64\\_t dim, tensor index, tensor source)
"""
function index_add!(self::Tensor, dim::Int64, index::Tensor, source::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_index_add_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, dim, index.pointer, source.pointer)
return self
end
"""
index_copy(self::Tensor, dim::Int64, index::Tensor, source::Tensor)
Wrapper of C++ function void atg\\_index\\_copy(tensor *out\\_\\_, tensor self, int64\\_t dim, tensor index, tensor source)
"""
function index_copy(self::Tensor, dim::Int64, index::Tensor, source::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_index_copy, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, dim, index.pointer, source.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
index_copy!(self::Tensor, dim::Int64, index::Tensor, source::Tensor)
Wrapper of C++ function void atg\\_index\\_copy\\_(tensor *out\\_\\_, tensor self, int64\\_t dim, tensor index, tensor source)
"""
function index_copy!(self::Tensor, dim::Int64, index::Tensor, source::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_index_copy_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, dim, index.pointer, source.pointer)
return self
end
"""
index_fill(self::Tensor, dim::Int64, index::Tensor, value::TorchNumber)
Wrapper of C++ function void atg\\_index\\_fill(tensor *out\\_\\_, tensor self, int64\\_t dim, tensor index, scalar value)
"""
function index_fill(self::Tensor, dim::Int64, index::Tensor, value::TorchNumber)
outputs__ = Int[0]
value_s_ = Scalar(value)
__cret = ccall((:atg_index_fill, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, dim, index.pointer, value_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
index_fill1(self::Tensor, dim::Int64, index::Tensor, value::Tensor)
Wrapper of C++ function void atg\\_index\\_fill1(tensor *out\\_\\_, tensor self, int64\\_t dim, tensor index, tensor value)
"""
function index_fill1(self::Tensor, dim::Int64, index::Tensor, value::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_index_fill1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, dim, index.pointer, value.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
index_fill!(self::Tensor, dim::Int64, index::Tensor, value::TorchNumber)
Wrapper of C++ function void atg\\_index\\_fill\\_(tensor *out\\_\\_, tensor self, int64\\_t dim, tensor index, scalar value)
"""
function index_fill!(self::Tensor, dim::Int64, index::Tensor, value::TorchNumber)
outputs__ = Int[0]
value_s_ = Scalar(value)
__cret = ccall((:atg_index_fill_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, dim, index.pointer, value_s_.pointer)
return self
end
"""
index_fill1!(self::Tensor, dim::Int64, index::Tensor, value::Tensor)
Wrapper of C++ function void atg\\_index\\_fill\\_1(tensor *out\\_\\_, tensor self, int64\\_t dim, tensor index, tensor value)
"""
function index_fill1!(self::Tensor, dim::Int64, index::Tensor, value::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_index_fill_1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, dim, index.pointer, value.pointer)
return self
end
"""
index_put(self::Tensor, indices_data::Array{Tensor{T,N}}, values::Tensor, accumulate::Int)
Wrapper of C++ function void atg\\_index\\_put(tensor *out\\_\\_, tensor self, tensor *indices\\_data, int indices\\_len, tensor values, int accumulate)
"""
function index_put(self::Tensor, indices_data::Array{Tensor{T,N}}, values::Tensor, accumulate::Int) where {T,N}
outputs__ = Int[0]
indices_data_ta_ = map(x->x.pointer, indices_data)
indices_len = length(indices_data)
__cret = ccall((:atg_index_put, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint),
outputs__, self.pointer, indices_data_ta_, indices_len, values.pointer, accumulate)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
index_put!(self::Tensor, indices_data::Array{Tensor{T,N}}, values::Tensor, accumulate::Int)
Wrapper of C++ function void atg\\_index\\_put\\_(tensor *out\\_\\_, tensor self, tensor *indices\\_data, int indices\\_len, tensor values, int accumulate)
"""
function index_put!(self::Tensor, indices_data::Array{Tensor{T,N}}, values::Tensor, accumulate::Int) where {T,N}
outputs__ = Int[0]
indices_data_ta_ = map(x->x.pointer, indices_data)
indices_len = length(indices_data)
__cret = ccall((:atg_index_put_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint),
outputs__, self.pointer, indices_data_ta_, indices_len, values.pointer, accumulate)
return self
end
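# `index_put` / `index_put!` take the index tensors as a Julia array, converted to a
# pointer array just like the RNN parameter lists above, plus an `accumulate` flag as a
# C-style boolean (0 = overwrite, 1 = add). Hypothetical sketch, with `x`, `idx` and
# `vals` assumed to be Tensors constructed elsewhere:
#   index_put!(x, [idx], vals, 0)   # scatter `vals` into `x` at positions `idx`, in place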
"""
index_select(self::Tensor, dim::Int64, index::Tensor)
Wrapper of C++ function void atg\\_index\\_select(tensor *out\\_\\_, tensor self, int64\\_t dim, tensor index)
"""
function index_select(self::Tensor, dim::Int64, index::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_index_select, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Ptr{Cvoid}),
outputs__, self.pointer, dim, index.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
index_select_out(out::Tensor, self::Tensor, dim::Int64, index::Tensor)
Wrapper of C++ function void atg\\_index\\_select\\_out(tensor *out\\_\\_, tensor out, tensor self, int64\\_t dim, tensor index)
"""
function index_select_out(out::Tensor, self::Tensor, dim::Int64, index::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_index_select_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, dim, index.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
indices(self::Tensor)
Wrapper of C++ function void atg\\_indices(tensor *out\\_\\_, tensor self)
"""
function indices(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_indices, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
instance_norm(input::Tensor, weight::Tensor, bias::Tensor, running_mean::Tensor, running_var::Tensor, use_input_stats::Int, momentum::Float64, eps::Float64, cudnn_enabled::Int)
Wrapper of C++ function void atg\\_instance\\_norm(tensor *out\\_\\_, tensor input, tensor weight, tensor bias, tensor running\\_mean, tensor running\\_var, int use\\_input\\_stats, double momentum, double eps, int cudnn\\_enabled)
"""
function instance_norm(input::Tensor, weight::Tensor, bias::Tensor, running_mean::Tensor, running_var::Tensor, use_input_stats::Int, momentum::Float64, eps::Float64, cudnn_enabled::Int)
outputs__ = Int[0]
__cret = ccall((:atg_instance_norm, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cdouble, Cdouble, Cint),
outputs__, input.pointer, weight.pointer, bias.pointer, running_mean.pointer, running_var.pointer, use_input_stats, momentum, eps, cudnn_enabled)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
int_repr(self::Tensor)
Wrapper of C++ function void atg\\_int\\_repr(tensor *out\\_\\_, tensor self)
"""
function int_repr(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_int_repr, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
inverse(self::Tensor)
Wrapper of C++ function void atg\\_inverse(tensor *out\\_\\_, tensor self)
"""
function inverse(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_inverse, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
inverse_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_inverse\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function inverse_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_inverse_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
irfft(self::Tensor, signal_ndim::Int64, normalized::Int, onesided::Int, signal_sizes_data::Array{Int64})
Wrapper of C++ function void atg\\_irfft(tensor *out\\_\\_, tensor self, int64\\_t signal\\_ndim, int normalized, int onesided, int64\\_t *signal\\_sizes\\_data, int signal\\_sizes\\_len)
"""
function irfft(self::Tensor, signal_ndim::Int64, normalized::Int, onesided::Int, signal_sizes_data::Array{Int64})
outputs__ = Int[0]
signal_sizes_len = length(signal_sizes_data)
__cret = ccall((:atg_irfft, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Cint, Cint, Ptr{Cvoid}, Cint),
outputs__, self.pointer, signal_ndim, normalized, onesided, signal_sizes_data, signal_sizes_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
isclose(self::Tensor, other::Tensor, rtol::Float64, atol::Float64, equal_nan::Int)
Wrapper of C++ function void atg\\_isclose(tensor *out\\_\\_, tensor self, tensor other, double rtol, double atol, int equal\\_nan)
"""
function isclose(self::Tensor, other::Tensor, rtol::Float64, atol::Float64, equal_nan::Int)
outputs__ = Int[0]
__cret = ccall((:atg_isclose, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cdouble, Cdouble, Cint),
outputs__, self.pointer, other.pointer, rtol, atol, equal_nan)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
import Base.isfinite
"""
isfinite(self::Tensor)
Wrapper of C++ function void atg\\_isfinite(tensor *out\\_\\_, tensor self)
"""
function isfinite(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_isfinite, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
import Base.isnan
"""
isnan(self::Tensor)
Wrapper of C++ function void atg\\_isnan(tensor *out\\_\\_, tensor self)
"""
function isnan(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_isnan, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
kl_div(self::Tensor, target::Tensor, reduction::Int64)
Wrapper of C++ function void atg\\_kl\\_div(tensor *out\\_\\_, tensor self, tensor target, int64\\_t reduction)
"""
function kl_div(self::Tensor, target::Tensor, reduction::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_kl_div, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, self.pointer, target.pointer, reduction)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
kl_div_backward(grad_output::Tensor, self::Tensor, target::Tensor, reduction::Int64)
Wrapper of C++ function void atg\\_kl\\_div\\_backward(tensor *out\\_\\_, tensor grad\\_output, tensor self, tensor target, int64\\_t reduction)
"""
function kl_div_backward(grad_output::Tensor, self::Tensor, target::Tensor, reduction::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_kl_div_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, grad_output.pointer, self.pointer, target.pointer, reduction)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
kthvalue(self::Tensor, k::Int64, dim::Int64, keepdim::Int)
Wrapper of C++ function void atg\\_kthvalue(tensor *out\\_\\_, tensor self, int64\\_t k, int64\\_t dim, int keepdim)
"""
function kthvalue(self::Tensor, k::Int64, dim::Int64, keepdim::Int)
outputs__ = Int[0, 0]
__cret = ccall((:atg_kthvalue, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Clonglong, Cint),
outputs__, self.pointer, k, dim, keepdim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
kthvalue_out(values::Tensor, indices::Tensor, self::Tensor, k::Int64, dim::Int64, keepdim::Int)
Wrapper of C++ function void atg\\_kthvalue\\_out(tensor *out\\_\\_, tensor values, tensor indices, tensor self, int64\\_t k, int64\\_t dim, int keepdim)
"""
function kthvalue_out(values::Tensor, indices::Tensor, self::Tensor, k::Int64, dim::Int64, keepdim::Int)
outputs__ = Int[0, 0]
__cret = ccall((:atg_kthvalue_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Clonglong, Cint),
outputs__, values.pointer, indices.pointer, self.pointer, k, dim, keepdim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
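# `kthvalue` returns two tensors (the k-th smallest values along `dim` and their indices);
# `kthvalue_out` instead writes into the preallocated `values`/`indices` tensors, which is
# the general convention for every `_out` wrapper in this file. `keepdim` is again a 0/1
# flag. Sketch (tensor construction assumed):
#   values, indices = kthvalue(x, 2, 1, 0)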
"""
l1_loss(self::Tensor, target::Tensor, reduction::Int64)
Wrapper of C++ function void atg\\_l1\\_loss(tensor *out\\_\\_, tensor self, tensor target, int64\\_t reduction)
"""
function l1_loss(self::Tensor, target::Tensor, reduction::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_l1_loss, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, self.pointer, target.pointer, reduction)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
l1_loss_backward(grad_output::Tensor, self::Tensor, target::Tensor, reduction::Int64)
Wrapper of C++ function void atg\\_l1\\_loss\\_backward(tensor *out\\_\\_, tensor grad\\_output, tensor self, tensor target, int64\\_t reduction)
"""
function l1_loss_backward(grad_output::Tensor, self::Tensor, target::Tensor, reduction::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_l1_loss_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, grad_output.pointer, self.pointer, target.pointer, reduction)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
l1_loss_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, target::Tensor, reduction::Int64)
Wrapper of C++ function void atg\\_l1\\_loss\\_backward\\_out(tensor *out\\_\\_, tensor grad\\_input, tensor grad\\_output, tensor self, tensor target, int64\\_t reduction)
"""
function l1_loss_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, target::Tensor, reduction::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_l1_loss_backward_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, grad_input.pointer, grad_output.pointer, self.pointer, target.pointer, reduction)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
l1_loss_out(out::Tensor, self::Tensor, target::Tensor, reduction::Int64)
Wrapper of C++ function void atg\\_l1\\_loss\\_out(tensor *out\\_\\_, tensor out, tensor self, tensor target, int64\\_t reduction)
"""
function l1_loss_out(out::Tensor, self::Tensor, target::Tensor, reduction::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_l1_loss_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, out.pointer, self.pointer, target.pointer, reduction)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
layer_norm(input::Tensor, normalized_shape_data::Array{Int64}, weight::Tensor, bias::Tensor, eps::Float64, cudnn_enable::Int)
Wrapper of C++ function void atg\\_layer\\_norm(tensor *out\\_\\_, tensor input, int64\\_t *normalized\\_shape\\_data, int normalized\\_shape\\_len, tensor weight, tensor bias, double eps, int cudnn\\_enable)
"""
function layer_norm(input::Tensor, normalized_shape_data::Array{Int64}, weight::Tensor, bias::Tensor, eps::Float64, cudnn_enable::Int)
outputs__ = Int[0]
normalized_shape_len = length(normalized_shape_data)
__cret = ccall((:atg_layer_norm, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Ptr{Cvoid}, Cdouble, Cint),
outputs__, input.pointer, normalized_shape_data, normalized_shape_len, weight.pointer, bias.pointer, eps, cudnn_enable)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
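# `layer_norm` combines the argument conventions seen above: `normalized_shape_data` is an
# Array{Int64} passed as pointer + length, `eps` is a plain Cdouble, and `cudnn_enable` is
# a 0/1 flag. A hedged sketch for normalizing a last dimension of size 128 (assumes `x`,
# `w`, `b` are Tensors built elsewhere):
#   y = layer_norm(x, Int64[128], w, b, 1e-5, 1)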
"""
le(self::Tensor, other::TorchNumber)
Wrapper of C++ function void atg\\_le(tensor *out\\_\\_, tensor self, scalar other)
"""
function le(self::Tensor, other::TorchNumber)
outputs__ = Int[0]
other_s_ = Scalar(other)
__cret = ccall((:atg_le, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
le1(self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_le1(tensor *out\\_\\_, tensor self, tensor other)
"""
function le1(self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_le1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
le!(self::Tensor, other::TorchNumber)
Wrapper of C++ function void atg\\_le\\_(tensor *out\\_\\_, tensor self, scalar other)
"""
function le!(self::Tensor, other::TorchNumber)
outputs__ = Int[0]
other_s_ = Scalar(other)
__cret = ccall((:atg_le_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other_s_.pointer)
return self
end
"""
le1!(self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_le\\_1(tensor *out\\_\\_, tensor self, tensor other)
"""
function le1!(self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_le_1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other.pointer)
return self
end
"""
le_out(out::Tensor, self::Tensor, other::TorchNumber)
Wrapper of C++ function void atg\\_le\\_out(tensor *out\\_\\_, tensor out, tensor self, scalar other)
"""
function le_out(out::Tensor, self::Tensor, other::TorchNumber)
outputs__ = Int[0]
other_s_ = Scalar(other)
__cret = ccall((:atg_le_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, other_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
le_out1(out::Tensor, self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_le\\_out1(tensor *out\\_\\_, tensor out, tensor self, tensor other)
"""
function le_out1(out::Tensor, self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_le_out1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, other.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
leaky_relu(self::Tensor)
Wrapper of C++ function void atg\\_leaky\\_relu(tensor *out\\_\\_, tensor self)
"""
function leaky_relu(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_leaky_relu, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
leaky_relu!(self::Tensor)
Wrapper of C++ function void atg\\_leaky\\_relu\\_(tensor *out\\_\\_, tensor self)
"""
function leaky_relu!(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_leaky_relu_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
return self
end
"""
leaky_relu_backward(grad_output::Tensor, self::Tensor, negative_slope::TorchNumber)
Wrapper of C++ function void atg\\_leaky\\_relu\\_backward(tensor *out\\_\\_, tensor grad\\_output, tensor self, scalar negative\\_slope)
"""
function leaky_relu_backward(grad_output::Tensor, self::Tensor, negative_slope::TorchNumber)
outputs__ = Int[0]
negative_slope_s_ = Scalar(negative_slope)
__cret = ccall((:atg_leaky_relu_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, grad_output.pointer, self.pointer, negative_slope_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
leaky_relu_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, negative_slope::TorchNumber)
Wrapper of C++ function void atg\\_leaky\\_relu\\_backward\\_out(tensor *out\\_\\_, tensor grad\\_input, tensor grad\\_output, tensor self, scalar negative\\_slope)
"""
function leaky_relu_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, negative_slope::TorchNumber)
outputs__ = Int[0]
negative_slope_s_ = Scalar(negative_slope)
__cret = ccall((:atg_leaky_relu_backward_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, grad_input.pointer, grad_output.pointer, self.pointer, negative_slope_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
leaky_relu_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_leaky\\_relu\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function leaky_relu_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_leaky_relu_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
lerp(self::Tensor, end_::Tensor, weight::TorchNumber)
Wrapper of C++ function void atg\\_lerp(tensor *out\\_\\_, tensor self, tensor end, scalar weight)
"""
function lerp(self::Tensor, end_::Tensor, weight::TorchNumber)
outputs__ = Int[0]
weight_s_ = Scalar(weight)
__cret = ccall((:atg_lerp, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, end_.pointer, weight_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
lerp1(self::Tensor, end_::Tensor, weight::Tensor)
Wrapper of C++ function void atg\\_lerp1(tensor *out\\_\\_, tensor self, tensor end, tensor weight)
"""
function lerp1(self::Tensor, end_::Tensor, weight::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_lerp1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, end_.pointer, weight.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
lerp!(self::Tensor, end_::Tensor, weight::TorchNumber)
Wrapper of C++ function void atg\\_lerp\\_(tensor *out\\_\\_, tensor self, tensor end, scalar weight)
"""
function lerp!(self::Tensor, end_::Tensor, weight::TorchNumber)
outputs__ = Int[0]
weight_s_ = Scalar(weight)
__cret = ccall((:atg_lerp_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, end_.pointer, weight_s_.pointer)
return self
end
"""
lerp1!(self::Tensor, end_::Tensor, weight::Tensor)
Wrapper of C++ function void atg\\_lerp\\_1(tensor *out\\_\\_, tensor self, tensor end, tensor weight)
"""
function lerp1!(self::Tensor, end_::Tensor, weight::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_lerp_1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, end_.pointer, weight.pointer)
return self
end
"""
lerp_out(out::Tensor, self::Tensor, end_::Tensor, weight::TorchNumber)
Wrapper of C++ function void atg\\_lerp\\_out(tensor *out\\_\\_, tensor out, tensor self, tensor end, scalar weight)
"""
function lerp_out(out::Tensor, self::Tensor, end_::Tensor, weight::TorchNumber)
outputs__ = Int[0]
weight_s_ = Scalar(weight)
__cret = ccall((:atg_lerp_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, end_.pointer, weight_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
lerp_out1(out::Tensor, self::Tensor, end_::Tensor, weight::Tensor)
Wrapper of C++ function void atg\\_lerp\\_out1(tensor *out\\_\\_, tensor out, tensor self, tensor end, tensor weight)
"""
function lerp_out1(out::Tensor, self::Tensor, end_::Tensor, weight::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_lerp_out1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, end_.pointer, weight.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
lgamma(self::Tensor)
Wrapper of C++ function void atg\\_lgamma(tensor *out\\_\\_, tensor self)
"""
function lgamma(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_lgamma, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
lgamma!(self::Tensor)
Wrapper of C++ function void atg\\_lgamma\\_(tensor *out\\_\\_, tensor self)
"""
function lgamma!(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_lgamma_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
return self
end
"""
lgamma_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_lgamma\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function lgamma_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_lgamma_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
linear(input::Tensor, weight::Tensor, bias::Tensor)
Wrapper of C++ function void atg\\_linear(tensor *out\\_\\_, tensor input, tensor weight, tensor bias)
"""
function linear(input::Tensor, weight::Tensor, bias::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_linear, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, input.pointer, weight.pointer, bias.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
linspace(start::TorchNumber, end_::TorchNumber, steps::Int64, options_kind::Int, options_device::Int)
Wrapper of C++ function void atg\\_linspace(tensor *out\\_\\_, scalar start, scalar end, int64\\_t steps, int options\\_kind, int options\\_device)
"""
function linspace(start::TorchNumber, end_::TorchNumber, steps::Int64, options_kind::Int, options_device::Int)
outputs__ = Int[0]
start_s_ = Scalar(start)
end__s_ = Scalar(end_)
__cret = ccall((:atg_linspace, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Cint, Cint),
outputs__, start_s_.pointer, end__s_.pointer, steps, options_kind, options_device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
linspace_out(out::Tensor, start::TorchNumber, end_::TorchNumber, steps::Int64)
Wrapper of C++ function void atg\\_linspace\\_out(tensor *out\\_\\_, tensor out, scalar start, scalar end, int64\\_t steps)
"""
function linspace_out(out::Tensor, start::TorchNumber, end_::TorchNumber, steps::Int64)
outputs__ = Int[0]
start_s_ = Scalar(start)
end__s_ = Scalar(end_)
__cret = ccall((:atg_linspace_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, out.pointer, start_s_.pointer, end__s_.pointer, steps)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
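# Factory wrappers such as `linspace` (and `hamming_window`/`hann_window` above) take
# `options_kind`/`options_device` as raw integer codes forwarded unchanged to libtorch;
# the mapping from these codes to dtypes and devices is defined by the C API, not by this
# file, so no particular values are assumed here. Scalar endpoints are wrapped with
# `Scalar(...)` before the call, as in:
#   # t = linspace(0.0, 1.0, 5, options_kind, options_device)  # kind/device codes per the C API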
import Base.log
"""
log(self::Tensor)
Wrapper of C++ function void atg\\_log(tensor *out\\_\\_, tensor self)
"""
function log(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_log, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
import Base.log10
"""
log10(self::Tensor)
Wrapper of C++ function void atg\\_log10(tensor *out\\_\\_, tensor self)
"""
function log10(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_log10, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
log10!(self::Tensor)
Wrapper of C++ function void atg\\_log10\\_(tensor *out\\_\\_, tensor self)
"""
function log10!(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_log10_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
return self
end
"""
log10_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_log10\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function log10_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_log10_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
import Base.log1p
"""
log1p(self::Tensor)
Wrapper of C++ function void atg\\_log1p(tensor *out\\_\\_, tensor self)
"""
function log1p(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_log1p, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
log1p!(self::Tensor)
Wrapper of C++ function void atg\\_log1p\\_(tensor *out\\_\\_, tensor self)
"""
function log1p!(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_log1p_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
return self
end
"""
log1p_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_log1p\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function log1p_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_log1p_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
import Base.log2
"""
log2(self::Tensor)
Wrapper of C++ function void atg\\_log2(tensor *out\\_\\_, tensor self)
"""
function log2(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_log2, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
log2!(self::Tensor)
Wrapper of C++ function void atg\\_log2\\_(tensor *out\\_\\_, tensor self)
"""
function log2!(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_log2_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
return self
end
"""
log2_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_log2\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function log2_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_log2_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
log!(self::Tensor)
Wrapper of C++ function void atg\\_log\\_(tensor *out\\_\\_, tensor self)
"""
function log!(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_log_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
return self
end
"""
log_normal!(self::Tensor, mean::Float64, std::Float64)
Wrapper of C++ function void atg\\_log\\_normal\\_(tensor *out\\_\\_, tensor self, double mean, double std)
"""
function log_normal!(self::Tensor, mean::Float64, std::Float64)
outputs__ = Int[0]
__cret = ccall((:atg_log_normal_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cdouble, Cdouble),
outputs__, self.pointer, mean, std)
return self
end
"""
log_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_log\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function log_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_log_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
log_sigmoid(self::Tensor)
Wrapper of C++ function void atg\\_log\\_sigmoid(tensor *out\\_\\_, tensor self)
"""
function log_sigmoid(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_log_sigmoid, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
log_sigmoid_backward(grad_output::Tensor, self::Tensor, buffer::Tensor)
Wrapper of C++ function void atg\\_log\\_sigmoid\\_backward(tensor *out\\_\\_, tensor grad\\_output, tensor self, tensor buffer)
"""
function log_sigmoid_backward(grad_output::Tensor, self::Tensor, buffer::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_log_sigmoid_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, grad_output.pointer, self.pointer, buffer.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
log_sigmoid_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, buffer::Tensor)
Wrapper of C++ function void atg\\_log\\_sigmoid\\_backward\\_out(tensor *out\\_\\_, tensor grad\\_input, tensor grad\\_output, tensor self, tensor buffer)
"""
function log_sigmoid_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, buffer::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_log_sigmoid_backward_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, grad_input.pointer, grad_output.pointer, self.pointer, buffer.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
log_sigmoid_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_log\\_sigmoid\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function log_sigmoid_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_log_sigmoid_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
log_softmax(self::Tensor, dim::Int64, dtype::Int)
Wrapper of C++ function void atg\\_log\\_softmax(tensor *out\\_\\_, tensor self, int64\\_t dim, int dtype)
"""
function log_softmax(self::Tensor, dim::Int64, dtype::Int)
outputs__ = Int[0]
__cret = ccall((:atg_log_softmax, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Cint),
outputs__, self.pointer, dim, dtype)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
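# Usage sketch (assumption): computes log(softmax(x)) along the 0-based dimension `dim` in a
# numerically stable way. `dtype` is an integer scalar-type code; treating -1 as "keep the
# input dtype" is an assumption about this wrapper's convention:
#     logits = ...                       # existing Tensor of shape (batch, classes)
#     logp   = log_softmax(logits, 1, -1)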
"""
logdet(self::Tensor)
Wrapper of C++ function void atg\\_logdet(tensor *out\\_\\_, tensor self)
"""
function logdet(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_logdet, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
logical_not(self::Tensor)
Wrapper of C++ function void atg\\_logical\\_not(tensor *out\\_\\_, tensor self)
"""
function logical_not(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_logical_not, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
logical_not!(self::Tensor)
Wrapper of C++ function void atg\\_logical\\_not\\_(tensor *out\\_\\_, tensor self)
"""
function logical_not!(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_logical_not_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
return self
end
"""
logical_not_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_logical\\_not\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function logical_not_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_logical_not_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
logical_xor(self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_logical\\_xor(tensor *out\\_\\_, tensor self, tensor other)
"""
function logical_xor(self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_logical_xor, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
logical_xor!(self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_logical\\_xor\\_(tensor *out\\_\\_, tensor self, tensor other)
"""
function logical_xor!(self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_logical_xor_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other.pointer)
return self
end
"""
logical_xor_out(out::Tensor, self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_logical\\_xor\\_out(tensor *out\\_\\_, tensor out, tensor self, tensor other)
"""
function logical_xor_out(out::Tensor, self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_logical_xor_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, other.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
logspace(start::TorchNumber, end_::TorchNumber, steps::Int64, base::Float64, options_kind::Int, options_device::Int)
Wrapper of C++ function void atg\\_logspace(tensor *out\\_\\_, scalar start, scalar end, int64\\_t steps, double base, int options\\_kind, int options\\_device)
"""
function logspace(start::TorchNumber, end_::TorchNumber, steps::Int64, base::Float64, options_kind::Int, options_device::Int)
outputs__ = Int[0]
start_s_ = Scalar(start)
end__s_ = Scalar(end_)
__cret = ccall((:atg_logspace, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Cdouble, Cint, Cint),
outputs__, start_s_.pointer, end__s_.pointer, steps, base, options_kind, options_device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
logspace_out(out::Tensor, start::TorchNumber, end_::TorchNumber, steps::Int64, base::Float64)
Wrapper of C++ function void atg\\_logspace\\_out(tensor *out\\_\\_, tensor out, scalar start, scalar end, int64\\_t steps, double base)
"""
function logspace_out(out::Tensor, start::TorchNumber, end_::TorchNumber, steps::Int64, base::Float64)
outputs__ = Int[0]
start_s_ = Scalar(start)
end__s_ = Scalar(end_)
__cret = ccall((:atg_logspace_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Cdouble),
outputs__, out.pointer, start_s_.pointer, end__s_.pointer, steps, base)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
logsumexp(self::Tensor, dim_data::Array{Int64}, keepdim::Int)
Wrapper of C++ function void atg\\_logsumexp(tensor *out\\_\\_, tensor self, int64\\_t *dim\\_data, int dim\\_len, int keepdim)
"""
function logsumexp(self::Tensor, dim_data::Array{Int64}, keepdim::Int)
outputs__ = Int[0]
dim_len = length(dim_data)
__cret = ccall((:atg_logsumexp, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint),
outputs__, self.pointer, dim_data, dim_len, keepdim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
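# Usage sketch (assumption): reduces log(sum(exp(x))) over the 0-based dimensions listed in
# `dim_data`, keeping those dimensions with size 1 when `keepdim` is nonzero:
#     lse = logsumexp(x, Int64[0], 1)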
"""
logsumexp_out(out::Tensor, self::Tensor, dim_data::Array{Int64}, keepdim::Int)
Wrapper of C++ function void atg\\_logsumexp\\_out(tensor *out\\_\\_, tensor out, tensor self, int64\\_t *dim\\_data, int dim\\_len, int keepdim)
"""
function logsumexp_out(out::Tensor, self::Tensor, dim_data::Array{Int64}, keepdim::Int)
outputs__ = Int[0]
dim_len = length(dim_data)
__cret = ccall((:atg_logsumexp_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint),
outputs__, out.pointer, self.pointer, dim_data, dim_len, keepdim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
lstm(input::Tensor, hx_data::Array{Tensor{T,N}}, params_data::Array{Tensor{T,N}}, has_biases::Int, num_layers::Int64, dropout::Float64, train::Int, bidirectional::Int, batch_first::Int)
Wrapper of C++ function void atg\\_lstm(tensor *out\\_\\_, tensor input, tensor *hx\\_data, int hx\\_len, tensor *params\\_data, int params\\_len, int has\\_biases, int64\\_t num\\_layers, double dropout, int train, int bidirectional, int batch\\_first)
"""
function lstm(input::Tensor, hx_data::Array{Tensor{T,N}}, params_data::Array{Tensor{T,N}}, has_biases::Int, num_layers::Int64, dropout::Float64, train::Int, bidirectional::Int, batch_first::Int) where {T,N}
outputs__ = Int[0, 0, 0]
hx_data_ta_ = map(x->x.pointer, hx_data)
hx_len = length(hx_data)
params_data_ta_ = map(x->x.pointer, params_data)
params_len = length(params_data)
__cret = ccall((:atg_lstm, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Cint, Clonglong, Cdouble, Cint, Cint, Cint),
outputs__, input.pointer, hx_data_ta_, hx_len, params_data_ta_, params_len, has_biases, num_layers, dropout, train, bidirectional, batch_first)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
__o_3 = tensor_from_ptr(Ptr{Cvoid}(outputs__[3]))
return __o_1, __o_2, __o_3
end
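# Usage sketch (assumption, mirroring at::lstm): `hx_data` holds the initial hidden and cell
# states `[h0, c0]`, `params_data` the flat weight list, and the three returned tensors are
# (output, h_n, c_n). `has_biases`, `train`, `bidirectional` and `batch_first` are 0/1 flags:
#     output, h_n, c_n = lstm(x, [h0, c0], params, 1, num_layers, 0.0, 0, 0, 1)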
"""
lstm1(data::Tensor, batch_sizes::Tensor, hx_data::Array{Tensor{T,N}}, params_data::Array{Tensor{T,N}}, has_biases::Int, num_layers::Int64, dropout::Float64, train::Int, bidirectional::Int)
Wrapper of C++ function void atg\\_lstm1(tensor *out\\_\\_, tensor data, tensor batch\\_sizes, tensor *hx\\_data, int hx\\_len, tensor *params\\_data, int params\\_len, int has\\_biases, int64\\_t num\\_layers, double dropout, int train, int bidirectional)
"""
function lstm1(data::Tensor, batch_sizes::Tensor, hx_data::Array{Tensor{T,N}}, params_data::Array{Tensor{T,N}}, has_biases::Int, num_layers::Int64, dropout::Float64, train::Int, bidirectional::Int) where {T,N}
outputs__ = Int[0, 0, 0]
hx_data_ta_ = map(x->x.pointer, hx_data)
hx_len = length(hx_data)
params_data_ta_ = map(x->x.pointer, params_data)
params_len = length(params_data)
__cret = ccall((:atg_lstm1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Cint, Clonglong, Cdouble, Cint, Cint),
outputs__, data.pointer, batch_sizes.pointer, hx_data_ta_, hx_len, params_data_ta_, params_len, has_biases, num_layers, dropout, train, bidirectional)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
__o_3 = tensor_from_ptr(Ptr{Cvoid}(outputs__[3]))
return __o_1, __o_2, __o_3
end
"""
lstm_cell(input::Tensor, hx_data::Array{Tensor{T,N}}, w_ih::Tensor, w_hh::Tensor, b_ih::Tensor, b_hh::Tensor)
Wrapper of C++ function void atg\\_lstm\\_cell(tensor *out\\_\\_, tensor input, tensor *hx\\_data, int hx\\_len, tensor w\\_ih, tensor w\\_hh, tensor b\\_ih, tensor b\\_hh)
"""
function lstm_cell(input::Tensor, hx_data::Array{Tensor{T,N}}, w_ih::Tensor, w_hh::Tensor, b_ih::Tensor, b_hh::Tensor) where {T,N}
outputs__ = Int[0, 0]
hx_data_ta_ = map(x->x.pointer, hx_data)
hx_len = length(hx_data)
__cret = ccall((:atg_lstm_cell, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, input.pointer, hx_data_ta_, hx_len, w_ih.pointer, w_hh.pointer, b_ih.pointer, b_hh.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
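# Usage sketch (assumption): a single LSTM step. `hx_data` holds the current state `[h, c]`
# and the call returns the next `(h, c)` pair:
#     h_next, c_next = lstm_cell(x_t, [h, c], w_ih, w_hh, b_ih, b_hh)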
"""
lstsq(self::Tensor, A::Tensor)
Wrapper of C++ function void atg\\_lstsq(tensor *out\\_\\_, tensor self, tensor A)
"""
function lstsq(self::Tensor, A::Tensor)
outputs__ = Int[0, 0]
__cret = ccall((:atg_lstsq, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, A.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
lstsq_out(X::Tensor, qr::Tensor, self::Tensor, A::Tensor)
Wrapper of C++ function void atg\\_lstsq\\_out(tensor *out\\_\\_, tensor X, tensor qr, tensor self, tensor A)
"""
function lstsq_out(X::Tensor, qr::Tensor, self::Tensor, A::Tensor)
outputs__ = Int[0, 0]
__cret = ccall((:atg_lstsq_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, X.pointer, qr.pointer, self.pointer, A.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
lt(self::Tensor, other::TorchNumber)
Wrapper of C++ function void atg\\_lt(tensor *out\\_\\_, tensor self, scalar other)
"""
function lt(self::Tensor, other::TorchNumber)
outputs__ = Int[0]
other_s_ = Scalar(other)
__cret = ccall((:atg_lt, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
lt1(self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_lt1(tensor *out\\_\\_, tensor self, tensor other)
"""
function lt1(self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_lt1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
lt!(self::Tensor, other::TorchNumber)
Wrapper of C++ function void atg\\_lt\\_(tensor *out\\_\\_, tensor self, scalar other)
"""
function lt!(self::Tensor, other::TorchNumber)
outputs__ = Int[0]
other_s_ = Scalar(other)
__cret = ccall((:atg_lt_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other_s_.pointer)
return self
end
"""
lt1!(self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_lt\\_1(tensor *out\\_\\_, tensor self, tensor other)
"""
function lt1!(self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_lt_1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other.pointer)
return self
end
"""
lt_out(out::Tensor, self::Tensor, other::TorchNumber)
Wrapper of C++ function void atg\\_lt\\_out(tensor *out\\_\\_, tensor out, tensor self, scalar other)
"""
function lt_out(out::Tensor, self::Tensor, other::TorchNumber)
outputs__ = Int[0]
other_s_ = Scalar(other)
__cret = ccall((:atg_lt_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, other_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
lt_out1(out::Tensor, self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_lt\\_out1(tensor *out\\_\\_, tensor out, tensor self, tensor other)
"""
function lt_out1(out::Tensor, self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_lt_out1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, other.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
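# Usage sketch (assumption): the `lt*` family is element-wise "less than". `lt`/`lt1` return
# a new boolean-valued Tensor (scalar and tensor right-hand sides respectively), the `!`
# variants overwrite `self`, and the `*_out` variants write into a preallocated `out`:
#     mask  = lt(x, 0.5)     # compare against a scalar
#     mask2 = lt1(x, y)      # compare two tensors element-wise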
"""
lu_solve(self::Tensor, LU_data::Tensor, LU_pivots::Tensor)
Wrapper of C++ function void atg\\_lu\\_solve(tensor *out\\_\\_, tensor self, tensor LU\\_data, tensor LU\\_pivots)
"""
function lu_solve(self::Tensor, LU_data::Tensor, LU_pivots::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_lu_solve, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, LU_data.pointer, LU_pivots.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
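# Usage sketch (assumption): solves A·X = B where `LU_data` and `LU_pivots` come from a
# previously computed LU factorization of A, and `self` plays the role of B:
#     x = lu_solve(b, lu_data, lu_pivots)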
"""
lu_solve_out(out::Tensor, self::Tensor, LU_data::Tensor, LU_pivots::Tensor)
Wrapper of C++ function void atg\\_lu\\_solve\\_out(tensor *out\\_\\_, tensor out, tensor self, tensor LU\\_data, tensor LU\\_pivots)
"""
function lu_solve_out(out::Tensor, self::Tensor, LU_data::Tensor, LU_pivots::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_lu_solve_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, LU_data.pointer, LU_pivots.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
margin_ranking_loss(input1::Tensor, input2::Tensor, target::Tensor, margin::Float64, reduction::Int64)
Wrapper of C++ function void atg\\_margin\\_ranking\\_loss(tensor *out\\_\\_, tensor input1, tensor input2, tensor target, double margin, int64\\_t reduction)
"""
function margin_ranking_loss(input1::Tensor, input2::Tensor, target::Tensor, margin::Float64, reduction::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_margin_ranking_loss, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cdouble, Clonglong),
outputs__, input1.pointer, input2.pointer, target.pointer, margin, reduction)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
masked_fill(self::Tensor, mask::Tensor, value::TorchNumber)
Wrapper of C++ function void atg\\_masked\\_fill(tensor *out\\_\\_, tensor self, tensor mask, scalar value)
"""
function masked_fill(self::Tensor, mask::Tensor, value::TorchNumber)
outputs__ = Int[0]
value_s_ = Scalar(value)
__cret = ccall((:atg_masked_fill, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, mask.pointer, value_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
masked_fill1(self::Tensor, mask::Tensor, value::Tensor)
Wrapper of C++ function void atg\\_masked\\_fill1(tensor *out\\_\\_, tensor self, tensor mask, tensor value)
"""
function masked_fill1(self::Tensor, mask::Tensor, value::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_masked_fill1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, mask.pointer, value.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
masked_fill!(self::Tensor, mask::Tensor, value::TorchNumber)
Wrapper of C++ function void atg\\_masked\\_fill\\_(tensor *out\\_\\_, tensor self, tensor mask, scalar value)
"""
function masked_fill!(self::Tensor, mask::Tensor, value::TorchNumber)
outputs__ = Int[0]
value_s_ = Scalar(value)
__cret = ccall((:atg_masked_fill_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, mask.pointer, value_s_.pointer)
return self
end
"""
masked_fill1!(self::Tensor, mask::Tensor, value::Tensor)
Wrapper of C++ function void atg\\_masked\\_fill\\_1(tensor *out\\_\\_, tensor self, tensor mask, tensor value)
"""
function masked_fill1!(self::Tensor, mask::Tensor, value::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_masked_fill_1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, mask.pointer, value.pointer)
return self
end
"""
masked_scatter(self::Tensor, mask::Tensor, source::Tensor)
Wrapper of C++ function void atg\\_masked\\_scatter(tensor *out\\_\\_, tensor self, tensor mask, tensor source)
"""
function masked_scatter(self::Tensor, mask::Tensor, source::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_masked_scatter, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, mask.pointer, source.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
masked_scatter!(self::Tensor, mask::Tensor, source::Tensor)
Wrapper of C++ function void atg\\_masked\\_scatter\\_(tensor *out\\_\\_, tensor self, tensor mask, tensor source)
"""
function masked_scatter!(self::Tensor, mask::Tensor, source::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_masked_scatter_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, mask.pointer, source.pointer)
return self
end
"""
masked_select(self::Tensor, mask::Tensor)
Wrapper of C++ function void atg\\_masked\\_select(tensor *out\\_\\_, tensor self, tensor mask)
"""
function masked_select(self::Tensor, mask::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_masked_select, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, mask.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
masked_select_out(out::Tensor, self::Tensor, mask::Tensor)
Wrapper of C++ function void atg\\_masked\\_select\\_out(tensor *out\\_\\_, tensor out, tensor self, tensor mask)
"""
function masked_select_out(out::Tensor, self::Tensor, mask::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_masked_select_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, mask.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
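# Usage sketch (assumption, following libtorch's masking semantics): `masked_fill` writes
# `value` wherever `mask` is true (the `!` variant does so in place), `masked_scatter` copies
# elements of `source` into the masked positions, and `masked_select` gathers the masked
# elements into a 1-D tensor:
#     filled   = masked_fill(x, mask, 0.0)
#     selected = masked_select(x, mask)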
"""
matmul(self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_matmul(tensor *out\\_\\_, tensor self, tensor other)
"""
function matmul(self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_matmul, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
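# Usage sketch (assumption): `matmul` follows torch.matmul's broadcasting rules — a plain
# matrix product for 2-D inputs and a batched matrix product when extra leading dimensions
# are present:
#     c = matmul(a, b)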
"""
matmul_out(out::Tensor, self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_matmul\\_out(tensor *out\\_\\_, tensor out, tensor self, tensor other)
"""
function matmul_out(out::Tensor, self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_matmul_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, other.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
matrix_power(self::Tensor, n::Int64)
Wrapper of C++ function void atg\\_matrix\\_power(tensor *out\\_\\_, tensor self, int64\\_t n)
"""
function matrix_power(self::Tensor, n::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_matrix_power, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, self.pointer, n)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
matrix_rank(self::Tensor, symmetric::Int)
Wrapper of C++ function void atg\\_matrix\\_rank(tensor *out\\_\\_, tensor self, int symmetric)
"""
function matrix_rank(self::Tensor, symmetric::Int)
outputs__ = Int[0]
__cret = ccall((:atg_matrix_rank, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, symmetric)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
matrix_rank1(self::Tensor, tol::Float64, symmetric::Int)
Wrapper of C++ function void atg\\_matrix\\_rank1(tensor *out\\_\\_, tensor self, double tol, int symmetric)
"""
function matrix_rank1(self::Tensor, tol::Float64, symmetric::Int)
outputs__ = Int[0]
__cret = ccall((:atg_matrix_rank1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cdouble, Cint),
outputs__, self.pointer, tol, symmetric)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
import Base.max
"""
max(self::Tensor)
Wrapper of C++ function void atg\\_max(tensor *out\\_\\_, tensor self)
"""
function max(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_max, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
max1(self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_max1(tensor *out\\_\\_, tensor self, tensor other)
"""
function max1(self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_max1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
max2(self::Tensor, dim::Int64, keepdim::Int)
Wrapper of C++ function void atg\\_max2(tensor *out\\_\\_, tensor self, int64\\_t dim, int keepdim)
"""
function max2(self::Tensor, dim::Int64, keepdim::Int)
outputs__ = Int[0, 0]
__cret = ccall((:atg_max2, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Cint),
outputs__, self.pointer, dim, keepdim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
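# Usage sketch (assumption): `max` reduces the whole tensor to a scalar tensor, `max1` takes
# the element-wise maximum of two tensors, and `max2` reduces along the 0-based dimension
# `dim`, returning the maxima together with their indices:
#     vals, idxs = max2(x, 1, 0)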
"""
max_out(out::Tensor, self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_max\\_out(tensor *out\\_\\_, tensor out, tensor self, tensor other)
"""
function max_out(out::Tensor, self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_max_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, other.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
max_out1(max::Tensor, max_values::Tensor, self::Tensor, dim::Int64, keepdim::Int)
Wrapper of C++ function void atg\\_max\\_out1(tensor *out\\_\\_, tensor max, tensor max\\_values, tensor self, int64\\_t dim, int keepdim)
"""
function max_out1(max::Tensor, max_values::Tensor, self::Tensor, dim::Int64, keepdim::Int)
outputs__ = Int[0, 0]
__cret = ccall((:atg_max_out1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Cint),
outputs__, max.pointer, max_values.pointer, self.pointer, dim, keepdim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
max_pool1d(self::Tensor, kernel_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64}, dilation_data::Array{Int64}, ceil_mode::Int)
Wrapper of C++ function void atg\\_max\\_pool1d(tensor *out\\_\\_, tensor self, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *dilation\\_data, int dilation\\_len, int ceil\\_mode)
"""
function max_pool1d(self::Tensor, kernel_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64}, dilation_data::Array{Int64}, ceil_mode::Int)
outputs__ = Int[0]
kernel_size_len = length(kernel_size_data)
stride_len = length(stride_data)
padding_len = length(padding_data)
dilation_len = length(dilation_data)
__cret = ccall((:atg_max_pool1d, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Cint),
outputs__, self.pointer, kernel_size_data, kernel_size_len, stride_data, stride_len, padding_data, padding_len, dilation_data, dilation_len, ceil_mode)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
max_pool1d_with_indices(self::Tensor, kernel_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64}, dilation_data::Array{Int64}, ceil_mode::Int)
Wrapper of C++ function void atg\\_max\\_pool1d\\_with\\_indices(tensor *out\\_\\_, tensor self, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *dilation\\_data, int dilation\\_len, int ceil\\_mode)
"""
function max_pool1d_with_indices(self::Tensor, kernel_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64}, dilation_data::Array{Int64}, ceil_mode::Int)
outputs__ = Int[0, 0]
kernel_size_len = length(kernel_size_data)
stride_len = length(stride_data)
padding_len = length(padding_data)
dilation_len = length(dilation_data)
__cret = ccall((:atg_max_pool1d_with_indices, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Cint),
outputs__, self.pointer, kernel_size_data, kernel_size_len, stride_data, stride_len, padding_data, padding_len, dilation_data, dilation_len, ceil_mode)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
max_pool2d(self::Tensor, kernel_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64}, dilation_data::Array{Int64}, ceil_mode::Int)
Wrapper of C++ function void atg\\_max\\_pool2d(tensor *out\\_\\_, tensor self, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *dilation\\_data, int dilation\\_len, int ceil\\_mode)
"""
function max_pool2d(self::Tensor, kernel_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64}, dilation_data::Array{Int64}, ceil_mode::Int)
outputs__ = Int[0]
kernel_size_len = length(kernel_size_data)
stride_len = length(stride_data)
padding_len = length(padding_data)
dilation_len = length(dilation_data)
__cret = ccall((:atg_max_pool2d, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Cint),
outputs__, self.pointer, kernel_size_data, kernel_size_len, stride_data, stride_len, padding_data, padding_len, dilation_data, dilation_len, ceil_mode)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
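# Usage sketch (assumption): the pooling parameters are Int64 arrays with one entry per
# spatial dimension, and `ceil_mode` is a 0/1 flag. For a 2×2 pool with stride 2, no padding
# and dilation 1 over an NCHW tensor:
#     y = max_pool2d(x, Int64[2, 2], Int64[2, 2], Int64[0, 0], Int64[1, 1], 0)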
"""
max_pool2d_with_indices(self::Tensor, kernel_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64}, dilation_data::Array{Int64}, ceil_mode::Int)
Wrapper of C++ function void atg\\_max\\_pool2d\\_with\\_indices(tensor *out\\_\\_, tensor self, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *dilation\\_data, int dilation\\_len, int ceil\\_mode)
"""
function max_pool2d_with_indices(self::Tensor, kernel_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64}, dilation_data::Array{Int64}, ceil_mode::Int)
outputs__ = Int[0, 0]
kernel_size_len = length(kernel_size_data)
stride_len = length(stride_data)
padding_len = length(padding_data)
dilation_len = length(dilation_data)
__cret = ccall((:atg_max_pool2d_with_indices, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Cint),
outputs__, self.pointer, kernel_size_data, kernel_size_len, stride_data, stride_len, padding_data, padding_len, dilation_data, dilation_len, ceil_mode)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
max_pool2d_with_indices_backward(grad_output::Tensor, self::Tensor, kernel_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64}, dilation_data::Array{Int64}, ceil_mode::Int, indices::Tensor)
Wrapper of C++ function void atg\\_max\\_pool2d\\_with\\_indices\\_backward(tensor *out\\_\\_, tensor grad\\_output, tensor self, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *dilation\\_data, int dilation\\_len, int ceil\\_mode, tensor indices)
"""
function max_pool2d_with_indices_backward(grad_output::Tensor, self::Tensor, kernel_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64}, dilation_data::Array{Int64}, ceil_mode::Int, indices::Tensor)
outputs__ = Int[0]
kernel_size_len = length(kernel_size_data)
stride_len = length(stride_data)
padding_len = length(padding_data)
dilation_len = length(dilation_data)
__cret = ccall((:atg_max_pool2d_with_indices_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Cint, Ptr{Cvoid}),
outputs__, grad_output.pointer, self.pointer, kernel_size_data, kernel_size_len, stride_data, stride_len, padding_data, padding_len, dilation_data, dilation_len, ceil_mode, indices.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
max_pool2d_with_indices_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, kernel_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64}, dilation_data::Array{Int64}, ceil_mode::Int, indices::Tensor)
Wrapper of C++ function void atg\\_max\\_pool2d\\_with\\_indices\\_backward\\_out(tensor *out\\_\\_, tensor grad\\_input, tensor grad\\_output, tensor self, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *dilation\\_data, int dilation\\_len, int ceil\\_mode, tensor indices)
"""
function max_pool2d_with_indices_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, kernel_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64}, dilation_data::Array{Int64}, ceil_mode::Int, indices::Tensor)
outputs__ = Int[0]
kernel_size_len = length(kernel_size_data)
stride_len = length(stride_data)
padding_len = length(padding_data)
dilation_len = length(dilation_data)
__cret = ccall((:atg_max_pool2d_with_indices_backward_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Cint, Ptr{Cvoid}),
outputs__, grad_input.pointer, grad_output.pointer, self.pointer, kernel_size_data, kernel_size_len, stride_data, stride_len, padding_data, padding_len, dilation_data, dilation_len, ceil_mode, indices.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
max_pool2d_with_indices_out(out::Tensor, indices::Tensor, self::Tensor, kernel_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64}, dilation_data::Array{Int64}, ceil_mode::Int)
Wrapper of C++ function void atg\\_max\\_pool2d\\_with\\_indices\\_out(tensor *out\\_\\_, tensor out, tensor indices, tensor self, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *dilation\\_data, int dilation\\_len, int ceil\\_mode)
"""
function max_pool2d_with_indices_out(out::Tensor, indices::Tensor, self::Tensor, kernel_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64}, dilation_data::Array{Int64}, ceil_mode::Int)
outputs__ = Int[0, 0]
kernel_size_len = length(kernel_size_data)
stride_len = length(stride_data)
padding_len = length(padding_data)
dilation_len = length(dilation_data)
__cret = ccall((:atg_max_pool2d_with_indices_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Cint),
outputs__, out.pointer, indices.pointer, self.pointer, kernel_size_data, kernel_size_len, stride_data, stride_len, padding_data, padding_len, dilation_data, dilation_len, ceil_mode)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
max_pool3d(self::Tensor, kernel_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64}, dilation_data::Array{Int64}, ceil_mode::Int)
Wrapper of C++ function void atg\\_max\\_pool3d(tensor *out\\_\\_, tensor self, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *dilation\\_data, int dilation\\_len, int ceil\\_mode)
"""
function max_pool3d(self::Tensor, kernel_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64}, dilation_data::Array{Int64}, ceil_mode::Int)
outputs__ = Int[0]
kernel_size_len = length(kernel_size_data)
stride_len = length(stride_data)
padding_len = length(padding_data)
dilation_len = length(dilation_data)
__cret = ccall((:atg_max_pool3d, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Cint),
outputs__, self.pointer, kernel_size_data, kernel_size_len, stride_data, stride_len, padding_data, padding_len, dilation_data, dilation_len, ceil_mode)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
max_pool3d_with_indices(self::Tensor, kernel_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64}, dilation_data::Array{Int64}, ceil_mode::Int)
Wrapper of C++ function void atg\\_max\\_pool3d\\_with\\_indices(tensor *out\\_\\_, tensor self, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *dilation\\_data, int dilation\\_len, int ceil\\_mode)
"""
function max_pool3d_with_indices(self::Tensor, kernel_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64}, dilation_data::Array{Int64}, ceil_mode::Int)
outputs__ = Int[0, 0]
kernel_size_len = length(kernel_size_data)
stride_len = length(stride_data)
padding_len = length(padding_data)
dilation_len = length(dilation_data)
__cret = ccall((:atg_max_pool3d_with_indices, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Cint),
outputs__, self.pointer, kernel_size_data, kernel_size_len, stride_data, stride_len, padding_data, padding_len, dilation_data, dilation_len, ceil_mode)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
max_pool3d_with_indices_backward(grad_output::Tensor, self::Tensor, kernel_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64}, dilation_data::Array{Int64}, ceil_mode::Int, indices::Tensor)
Wrapper of C++ function void atg\\_max\\_pool3d\\_with\\_indices\\_backward(tensor *out\\_\\_, tensor grad\\_output, tensor self, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *dilation\\_data, int dilation\\_len, int ceil\\_mode, tensor indices)
"""
function max_pool3d_with_indices_backward(grad_output::Tensor, self::Tensor, kernel_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64}, dilation_data::Array{Int64}, ceil_mode::Int, indices::Tensor)
outputs__ = Int[0]
kernel_size_len = length(kernel_size_data)
stride_len = length(stride_data)
padding_len = length(padding_data)
dilation_len = length(dilation_data)
__cret = ccall((:atg_max_pool3d_with_indices_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Cint, Ptr{Cvoid}),
outputs__, grad_output.pointer, self.pointer, kernel_size_data, kernel_size_len, stride_data, stride_len, padding_data, padding_len, dilation_data, dilation_len, ceil_mode, indices.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
max_pool3d_with_indices_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, kernel_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64}, dilation_data::Array{Int64}, ceil_mode::Int, indices::Tensor)
Wrapper of C++ function void atg\\_max\\_pool3d\\_with\\_indices\\_backward\\_out(tensor *out\\_\\_, tensor grad\\_input, tensor grad\\_output, tensor self, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *dilation\\_data, int dilation\\_len, int ceil\\_mode, tensor indices)
"""
function max_pool3d_with_indices_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, kernel_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64}, dilation_data::Array{Int64}, ceil_mode::Int, indices::Tensor)
outputs__ = Int[0]
kernel_size_len = length(kernel_size_data)
stride_len = length(stride_data)
padding_len = length(padding_data)
dilation_len = length(dilation_data)
__cret = ccall((:atg_max_pool3d_with_indices_backward_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Cint, Ptr{Cvoid}),
outputs__, grad_input.pointer, grad_output.pointer, self.pointer, kernel_size_data, kernel_size_len, stride_data, stride_len, padding_data, padding_len, dilation_data, dilation_len, ceil_mode, indices.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
max_pool3d_with_indices_out(out::Tensor, indices::Tensor, self::Tensor, kernel_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64}, dilation_data::Array{Int64}, ceil_mode::Int)
Wrapper of C++ function void atg\\_max\\_pool3d\\_with\\_indices\\_out(tensor *out\\_\\_, tensor out, tensor indices, tensor self, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *dilation\\_data, int dilation\\_len, int ceil\\_mode)
"""
function max_pool3d_with_indices_out(out::Tensor, indices::Tensor, self::Tensor, kernel_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64}, dilation_data::Array{Int64}, ceil_mode::Int)
outputs__ = Int[0, 0]
kernel_size_len = length(kernel_size_data)
stride_len = length(stride_data)
padding_len = length(padding_data)
dilation_len = length(dilation_data)
__cret = ccall((:atg_max_pool3d_with_indices_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Cint),
outputs__, out.pointer, indices.pointer, self.pointer, kernel_size_data, kernel_size_len, stride_data, stride_len, padding_data, padding_len, dilation_data, dilation_len, ceil_mode)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
max_unpool2d(self::Tensor, indices::Tensor, output_size_data::Array{Int64})
Wrapper of C++ function void atg\\_max\\_unpool2d(tensor *out\\_\\_, tensor self, tensor indices, int64\\_t *output\\_size\\_data, int output\\_size\\_len)
"""
function max_unpool2d(self::Tensor, indices::Tensor, output_size_data::Array{Int64})
outputs__ = Int[0]
output_size_len = length(output_size_data)
__cret = ccall((:atg_max_unpool2d, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, indices.pointer, output_size_data, output_size_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
max_unpool2d_backward(grad_output::Tensor, self::Tensor, indices::Tensor, output_size_data::Array{Int64})
Wrapper of C++ function void atg\\_max\\_unpool2d\\_backward(tensor *out\\_\\_, tensor grad\\_output, tensor self, tensor indices, int64\\_t *output\\_size\\_data, int output\\_size\\_len)
"""
function max_unpool2d_backward(grad_output::Tensor, self::Tensor, indices::Tensor, output_size_data::Array{Int64})
outputs__ = Int[0]
output_size_len = length(output_size_data)
__cret = ccall((:atg_max_unpool2d_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, grad_output.pointer, self.pointer, indices.pointer, output_size_data, output_size_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
max_unpool2d_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, indices::Tensor, output_size_data::Array{Int64})
Wrapper of C++ function void atg\\_max\\_unpool2d\\_backward\\_out(tensor *out\\_\\_, tensor grad\\_input, tensor grad\\_output, tensor self, tensor indices, int64\\_t *output\\_size\\_data, int output\\_size\\_len)
"""
function max_unpool2d_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, indices::Tensor, output_size_data::Array{Int64})
outputs__ = Int[0]
output_size_len = length(output_size_data)
__cret = ccall((:atg_max_unpool2d_backward_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, grad_input.pointer, grad_output.pointer, self.pointer, indices.pointer, output_size_data, output_size_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
max_unpool2d_out(out::Tensor, self::Tensor, indices::Tensor, output_size_data::Array{Int64})
Wrapper of C++ function void atg\\_max\\_unpool2d\\_out(tensor *out\\_\\_, tensor out, tensor self, tensor indices, int64\\_t *output\\_size\\_data, int output\\_size\\_len)
"""
function max_unpool2d_out(out::Tensor, self::Tensor, indices::Tensor, output_size_data::Array{Int64})
outputs__ = Int[0]
output_size_len = length(output_size_data)
__cret = ccall((:atg_max_unpool2d_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, out.pointer, self.pointer, indices.pointer, output_size_data, output_size_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
max_unpool3d(self::Tensor, indices::Tensor, output_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64})
Wrapper of C++ function void atg\\_max\\_unpool3d(tensor *out\\_\\_, tensor self, tensor indices, int64\\_t *output\\_size\\_data, int output\\_size\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *padding\\_data, int padding\\_len)
"""
function max_unpool3d(self::Tensor, indices::Tensor, output_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64})
outputs__ = Int[0]
output_size_len = length(output_size_data)
stride_len = length(stride_data)
padding_len = length(padding_data)
__cret = ccall((:atg_max_unpool3d, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint),
outputs__, self.pointer, indices.pointer, output_size_data, output_size_len, stride_data, stride_len, padding_data, padding_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
max_unpool3d_backward(grad_output::Tensor, self::Tensor, indices::Tensor, output_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64})
Wrapper of C++ function void atg\\_max\\_unpool3d\\_backward(tensor *out\\_\\_, tensor grad\\_output, tensor self, tensor indices, int64\\_t *output\\_size\\_data, int output\\_size\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *padding\\_data, int padding\\_len)
"""
function max_unpool3d_backward(grad_output::Tensor, self::Tensor, indices::Tensor, output_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64})
outputs__ = Int[0]
output_size_len = length(output_size_data)
stride_len = length(stride_data)
padding_len = length(padding_data)
__cret = ccall((:atg_max_unpool3d_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint),
outputs__, grad_output.pointer, self.pointer, indices.pointer, output_size_data, output_size_len, stride_data, stride_len, padding_data, padding_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
max_unpool3d_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, indices::Tensor, output_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64})
Wrapper of C++ function void atg\\_max\\_unpool3d\\_backward\\_out(tensor *out\\_\\_, tensor grad\\_input, tensor grad\\_output, tensor self, tensor indices, int64\\_t *output\\_size\\_data, int output\\_size\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *padding\\_data, int padding\\_len)
"""
function max_unpool3d_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, indices::Tensor, output_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64})
outputs__ = Int[0]
output_size_len = length(output_size_data)
stride_len = length(stride_data)
padding_len = length(padding_data)
__cret = ccall((:atg_max_unpool3d_backward_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint),
outputs__, grad_input.pointer, grad_output.pointer, self.pointer, indices.pointer, output_size_data, output_size_len, stride_data, stride_len, padding_data, padding_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
max_unpool3d_out(out::Tensor, self::Tensor, indices::Tensor, output_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64})
Wrapper of C++ function void atg\\_max\\_unpool3d\\_out(tensor *out\\_\\_, tensor out, tensor self, tensor indices, int64\\_t *output\\_size\\_data, int output\\_size\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *padding\\_data, int padding\\_len)
"""
function max_unpool3d_out(out::Tensor, self::Tensor, indices::Tensor, output_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64})
outputs__ = Int[0]
output_size_len = length(output_size_data)
stride_len = length(stride_data)
padding_len = length(padding_data)
__cret = ccall((:atg_max_unpool3d_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint),
outputs__, out.pointer, self.pointer, indices.pointer, output_size_data, output_size_len, stride_data, stride_len, padding_data, padding_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
max_values(self::Tensor, dim_data::Array{Int64}, keepdim::Int)
Wrapper of C++ function void atg\\_max\\_values(tensor *out\\_\\_, tensor self, int64\\_t *dim\\_data, int dim\\_len, int keepdim)
"""
function max_values(self::Tensor, dim_data::Array{Int64}, keepdim::Int)
outputs__ = Int[0]
dim_len = length(dim_data)
__cret = ccall((:atg_max_values, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint),
outputs__, self.pointer, dim_data, dim_len, keepdim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
mean(self::Tensor, dtype::Int)
Wrapper of C++ function void atg\\_mean(tensor *out\\_\\_, tensor self, int dtype)
"""
function mean(self::Tensor, dtype::Int)
outputs__ = Int[0]
__cret = ccall((:atg_mean, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, dtype)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
mean1(self::Tensor, dim_data::Array{Int64}, keepdim::Int, dtype::Int)
Wrapper of C++ function void atg\\_mean1(tensor *out\\_\\_, tensor self, int64\\_t *dim\\_data, int dim\\_len, int keepdim, int dtype)
"""
function mean1(self::Tensor, dim_data::Array{Int64}, keepdim::Int, dtype::Int)
outputs__ = Int[0]
dim_len = length(dim_data)
__cret = ccall((:atg_mean1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint, Cint),
outputs__, self.pointer, dim_data, dim_len, keepdim, dtype)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
mean_out(out::Tensor, self::Tensor, dim_data::Array{Int64}, keepdim::Int, dtype::Int)
Wrapper of C++ function void atg\\_mean\\_out(tensor *out\\_\\_, tensor out, tensor self, int64\\_t *dim\\_data, int dim\\_len, int keepdim, int dtype)
"""
function mean_out(out::Tensor, self::Tensor, dim_data::Array{Int64}, keepdim::Int, dtype::Int)
outputs__ = Int[0]
dim_len = length(dim_data)
__cret = ccall((:atg_mean_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint, Cint),
outputs__, out.pointer, self.pointer, dim_data, dim_len, keepdim, dtype)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
median(self::Tensor)
Wrapper of C++ function void atg\\_median(tensor *out\\_\\_, tensor self)
"""
function median(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_median, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
median1(self::Tensor, dim::Int64, keepdim::Int)
Wrapper of C++ function void atg\\_median1(tensor *out\\_\\_, tensor self, int64\\_t dim, int keepdim)
"""
function median1(self::Tensor, dim::Int64, keepdim::Int)
outputs__ = Int[0, 0]
__cret = ccall((:atg_median1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Cint),
outputs__, self.pointer, dim, keepdim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
median_out(values::Tensor, indices::Tensor, self::Tensor, dim::Int64, keepdim::Int)
Wrapper of C++ function void atg\\_median\\_out(tensor *out\\_\\_, tensor values, tensor indices, tensor self, int64\\_t dim, int keepdim)
"""
function median_out(values::Tensor, indices::Tensor, self::Tensor, dim::Int64, keepdim::Int)
outputs__ = Int[0, 0]
__cret = ccall((:atg_median_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Cint),
outputs__, values.pointer, indices.pointer, self.pointer, dim, keepdim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
meshgrid(tensors_data::Array{Tensor{T,N}})
Wrapper of C++ function tensor *atg\\_meshgrid(tensor *tensors\\_data, int tensors\\_len)
"""
function meshgrid(tensors_data::Array{Tensor{T,N}}) where {T,N}
tensors_data_ta_ = map(x->x.pointer, tensors_data)
tensors_len = length(tensors_data)
__cret = ccall((:atg_meshgrid, :libtorch_capi),
Ptr{Int}, (Ptr{Cvoid}, Cint),
tensors_data_ta_, tensors_len)
ptrs__, i__ = Int[], 1
while true
ptr__ = unsafe_load(__cret, i__)
ptr__ == 0 && break
push!(ptrs__, ptr__)
i__ += 1
end
ccall(:free, Cvoid, (Ptr{Cvoid},), __cret)
return map(x -> tensor_from_ptr(Ptr{Nothing}(x)), ptrs__)
end
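# Usage sketch (assumption): `meshgrid` expands N 1-D tensors into N coordinate grids,
# returned here as a Vector of Tensors (one per input), so it can be destructured:
#     gx, gy = meshgrid([xs, ys])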
import Base.min
"""
min(self::Tensor)
Wrapper of C++ function void atg\\_min(tensor *out\\_\\_, tensor self)
"""
function min(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_min, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
min1(self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_min1(tensor *out\\_\\_, tensor self, tensor other)
"""
function min1(self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_min1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
min2(self::Tensor, dim::Int64, keepdim::Int)
Wrapper of C++ function void atg\\_min2(tensor *out\\_\\_, tensor self, int64\\_t dim, int keepdim)
"""
function min2(self::Tensor, dim::Int64, keepdim::Int)
outputs__ = Int[0, 0]
__cret = ccall((:atg_min2, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Cint),
outputs__, self.pointer, dim, keepdim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
min_out(out::Tensor, self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_min\\_out(tensor *out\\_\\_, tensor out, tensor self, tensor other)
"""
function min_out(out::Tensor, self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_min_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, other.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
min_out1(min::Tensor, min_indices::Tensor, self::Tensor, dim::Int64, keepdim::Int)
Wrapper of C++ function void atg\\_min\\_out1(tensor *out\\_\\_, tensor min, tensor min\\_indices, tensor self, int64\\_t dim, int keepdim)
"""
function min_out1(min::Tensor, min_indices::Tensor, self::Tensor, dim::Int64, keepdim::Int)
outputs__ = Int[0, 0]
__cret = ccall((:atg_min_out1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Cint),
outputs__, min.pointer, min_indices.pointer, self.pointer, dim, keepdim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
min_values(self::Tensor, dim_data::Array{Int64}, keepdim::Int)
Wrapper of C++ function void atg\\_min\\_values(tensor *out\\_\\_, tensor self, int64\\_t *dim\\_data, int dim\\_len, int keepdim)
"""
function min_values(self::Tensor, dim_data::Array{Int64}, keepdim::Int)
outputs__ = Int[0]
dim_len = length(dim_data)
__cret = ccall((:atg_min_values, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint),
outputs__, self.pointer, dim_data, dim_len, keepdim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
miopen_batch_norm(input::Tensor, weight::Tensor, bias::Tensor, running_mean::Tensor, running_var::Tensor, training::Int, exponential_average_factor::Float64, epsilon::Float64)
Wrapper of C++ function void atg\\_miopen\\_batch\\_norm(tensor *out\\_\\_, tensor input, tensor weight, tensor bias, tensor running\\_mean, tensor running\\_var, int training, double exponential\\_average\\_factor, double epsilon)
"""
function miopen_batch_norm(input::Tensor, weight::Tensor, bias::Tensor, running_mean::Tensor, running_var::Tensor, training::Int, exponential_average_factor::Float64, epsilon::Float64)
outputs__ = Int[0, 0, 0]
__cret = ccall((:atg_miopen_batch_norm, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cdouble, Cdouble),
outputs__, input.pointer, weight.pointer, bias.pointer, running_mean.pointer, running_var.pointer, training, exponential_average_factor, epsilon)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
__o_3 = tensor_from_ptr(Ptr{Cvoid}(outputs__[3]))
return __o_1, __o_2, __o_3
end
"""
miopen_batch_norm_backward(input::Tensor, grad_output::Tensor, weight::Tensor, running_mean::Tensor, running_var::Tensor, save_mean::Tensor, save_var::Tensor, epsilon::Float64)
Wrapper of C++ function void atg\\_miopen\\_batch\\_norm\\_backward(tensor *out\\_\\_, tensor input, tensor grad\\_output, tensor weight, tensor running\\_mean, tensor running\\_var, tensor save\\_mean, tensor save\\_var, double epsilon)
"""
function miopen_batch_norm_backward(input::Tensor, grad_output::Tensor, weight::Tensor, running_mean::Tensor, running_var::Tensor, save_mean::Tensor, save_var::Tensor, epsilon::Float64)
outputs__ = Int[0, 0, 0]
__cret = ccall((:atg_miopen_batch_norm_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cdouble),
outputs__, input.pointer, grad_output.pointer, weight.pointer, running_mean.pointer, running_var.pointer, save_mean.pointer, save_var.pointer, epsilon)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
__o_3 = tensor_from_ptr(Ptr{Cvoid}(outputs__[3]))
return __o_1, __o_2, __o_3
end
"""
miopen_convolution(self::Tensor, weight::Tensor, bias::Tensor, padding_data::Array{Int64}, stride_data::Array{Int64}, dilation_data::Array{Int64}, groups::Int64, benchmark::Int, deterministic::Int)
Wrapper of C++ function void atg\\_miopen\\_convolution(tensor *out\\_\\_, tensor self, tensor weight, tensor bias, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *dilation\\_data, int dilation\\_len, int64\\_t groups, int benchmark, int deterministic)
"""
function miopen_convolution(self::Tensor, weight::Tensor, bias::Tensor, padding_data::Array{Int64}, stride_data::Array{Int64}, dilation_data::Array{Int64}, groups::Int64, benchmark::Int, deterministic::Int)
outputs__ = Int[0]
padding_len = length(padding_data)
stride_len = length(stride_data)
dilation_len = length(dilation_data)
__cret = ccall((:atg_miopen_convolution, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Clonglong, Cint, Cint),
outputs__, self.pointer, weight.pointer, bias.pointer, padding_data, padding_len, stride_data, stride_len, dilation_data, dilation_len, groups, benchmark, deterministic)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
miopen_convolution_backward_bias(grad_output::Tensor)
Wrapper of C++ function void atg\\_miopen\\_convolution\\_backward\\_bias(tensor *out\\_\\_, tensor grad\\_output)
"""
function miopen_convolution_backward_bias(grad_output::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_miopen_convolution_backward_bias, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, grad_output.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
miopen_convolution_backward_input(self_size_data::Array{Int64}, grad_output::Tensor, weight::Tensor, padding_data::Array{Int64}, stride_data::Array{Int64}, dilation_data::Array{Int64}, groups::Int64, benchmark::Int, deterministic::Int)
Wrapper of C++ function void atg\\_miopen\\_convolution\\_backward\\_input(tensor *out\\_\\_, int64\\_t *self\\_size\\_data, int self\\_size\\_len, tensor grad\\_output, tensor weight, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *dilation\\_data, int dilation\\_len, int64\\_t groups, int benchmark, int deterministic)
"""
function miopen_convolution_backward_input(self_size_data::Array{Int64}, grad_output::Tensor, weight::Tensor, padding_data::Array{Int64}, stride_data::Array{Int64}, dilation_data::Array{Int64}, groups::Int64, benchmark::Int, deterministic::Int)
outputs__ = Int[0]
self_size_len = length(self_size_data)
padding_len = length(padding_data)
stride_len = length(stride_data)
dilation_len = length(dilation_data)
__cret = ccall((:atg_miopen_convolution_backward_input, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Clonglong, Cint, Cint),
outputs__, self_size_data, self_size_len, grad_output.pointer, weight.pointer, padding_data, padding_len, stride_data, stride_len, dilation_data, dilation_len, groups, benchmark, deterministic)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
miopen_convolution_backward_weight(weight_size_data::Array{Int64}, grad_output::Tensor, self::Tensor, padding_data::Array{Int64}, stride_data::Array{Int64}, dilation_data::Array{Int64}, groups::Int64, benchmark::Int, deterministic::Int)
Wrapper of C++ function void atg\\_miopen\\_convolution\\_backward\\_weight(tensor *out\\_\\_, int64\\_t *weight\\_size\\_data, int weight\\_size\\_len, tensor grad\\_output, tensor self, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *dilation\\_data, int dilation\\_len, int64\\_t groups, int benchmark, int deterministic)
"""
function miopen_convolution_backward_weight(weight_size_data::Array{Int64}, grad_output::Tensor, self::Tensor, padding_data::Array{Int64}, stride_data::Array{Int64}, dilation_data::Array{Int64}, groups::Int64, benchmark::Int, deterministic::Int)
outputs__ = Int[0]
weight_size_len = length(weight_size_data)
padding_len = length(padding_data)
stride_len = length(stride_data)
dilation_len = length(dilation_data)
__cret = ccall((:atg_miopen_convolution_backward_weight, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Clonglong, Cint, Cint),
outputs__, weight_size_data, weight_size_len, grad_output.pointer, self.pointer, padding_data, padding_len, stride_data, stride_len, dilation_data, dilation_len, groups, benchmark, deterministic)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
miopen_convolution_transpose(self::Tensor, weight::Tensor, bias::Tensor, padding_data::Array{Int64}, output_padding_data::Array{Int64}, stride_data::Array{Int64}, dilation_data::Array{Int64}, groups::Int64, benchmark::Int, deterministic::Int)
Wrapper of C++ function void atg\\_miopen\\_convolution\\_transpose(tensor *out\\_\\_, tensor self, tensor weight, tensor bias, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *output\\_padding\\_data, int output\\_padding\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *dilation\\_data, int dilation\\_len, int64\\_t groups, int benchmark, int deterministic)
"""
function miopen_convolution_transpose(self::Tensor, weight::Tensor, bias::Tensor, padding_data::Array{Int64}, output_padding_data::Array{Int64}, stride_data::Array{Int64}, dilation_data::Array{Int64}, groups::Int64, benchmark::Int, deterministic::Int)
outputs__ = Int[0]
padding_len = length(padding_data)
output_padding_len = length(output_padding_data)
stride_len = length(stride_data)
dilation_len = length(dilation_data)
__cret = ccall((:atg_miopen_convolution_transpose, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Clonglong, Cint, Cint),
outputs__, self.pointer, weight.pointer, bias.pointer, padding_data, padding_len, output_padding_data, output_padding_len, stride_data, stride_len, dilation_data, dilation_len, groups, benchmark, deterministic)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
miopen_convolution_transpose_backward_input(grad_output::Tensor, weight::Tensor, padding_data::Array{Int64}, stride_data::Array{Int64}, dilation_data::Array{Int64}, groups::Int64, benchmark::Int, deterministic::Int)
Wrapper of C++ function void atg\\_miopen\\_convolution\\_transpose\\_backward\\_input(tensor *out\\_\\_, tensor grad\\_output, tensor weight, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *dilation\\_data, int dilation\\_len, int64\\_t groups, int benchmark, int deterministic)
"""
function miopen_convolution_transpose_backward_input(grad_output::Tensor, weight::Tensor, padding_data::Array{Int64}, stride_data::Array{Int64}, dilation_data::Array{Int64}, groups::Int64, benchmark::Int, deterministic::Int)
outputs__ = Int[0]
padding_len = length(padding_data)
stride_len = length(stride_data)
dilation_len = length(dilation_data)
__cret = ccall((:atg_miopen_convolution_transpose_backward_input, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Clonglong, Cint, Cint),
outputs__, grad_output.pointer, weight.pointer, padding_data, padding_len, stride_data, stride_len, dilation_data, dilation_len, groups, benchmark, deterministic)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
miopen_convolution_transpose_backward_weight(weight_size_data::Array{Int64}, grad_output::Tensor, self::Tensor, padding_data::Array{Int64}, stride_data::Array{Int64}, dilation_data::Array{Int64}, groups::Int64, benchmark::Int, deterministic::Int)
Wrapper of C++ function void atg\\_miopen\\_convolution\\_transpose\\_backward\\_weight(tensor *out\\_\\_, int64\\_t *weight\\_size\\_data, int weight\\_size\\_len, tensor grad\\_output, tensor self, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *dilation\\_data, int dilation\\_len, int64\\_t groups, int benchmark, int deterministic)
"""
function miopen_convolution_transpose_backward_weight(weight_size_data::Array{Int64}, grad_output::Tensor, self::Tensor, padding_data::Array{Int64}, stride_data::Array{Int64}, dilation_data::Array{Int64}, groups::Int64, benchmark::Int, deterministic::Int)
outputs__ = Int[0]
weight_size_len = length(weight_size_data)
padding_len = length(padding_data)
stride_len = length(stride_data)
dilation_len = length(dilation_data)
__cret = ccall((:atg_miopen_convolution_transpose_backward_weight, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Clonglong, Cint, Cint),
outputs__, weight_size_data, weight_size_len, grad_output.pointer, self.pointer, padding_data, padding_len, stride_data, stride_len, dilation_data, dilation_len, groups, benchmark, deterministic)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
miopen_depthwise_convolution(self::Tensor, weight::Tensor, bias::Tensor, padding_data::Array{Int64}, stride_data::Array{Int64}, dilation_data::Array{Int64}, groups::Int64, benchmark::Int, deterministic::Int)
Wrapper of C++ function void atg\\_miopen\\_depthwise\\_convolution(tensor *out\\_\\_, tensor self, tensor weight, tensor bias, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *dilation\\_data, int dilation\\_len, int64\\_t groups, int benchmark, int deterministic)
"""
function miopen_depthwise_convolution(self::Tensor, weight::Tensor, bias::Tensor, padding_data::Array{Int64}, stride_data::Array{Int64}, dilation_data::Array{Int64}, groups::Int64, benchmark::Int, deterministic::Int)
outputs__ = Int[0]
padding_len = length(padding_data)
stride_len = length(stride_data)
dilation_len = length(dilation_data)
__cret = ccall((:atg_miopen_depthwise_convolution, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Clonglong, Cint, Cint),
outputs__, self.pointer, weight.pointer, bias.pointer, padding_data, padding_len, stride_data, stride_len, dilation_data, dilation_len, groups, benchmark, deterministic)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
miopen_depthwise_convolution_backward_input(self_size_data::Array{Int64}, grad_output::Tensor, weight::Tensor, padding_data::Array{Int64}, stride_data::Array{Int64}, dilation_data::Array{Int64}, groups::Int64, benchmark::Int, deterministic::Int)
Wrapper of C++ function void atg\\_miopen\\_depthwise\\_convolution\\_backward\\_input(tensor *out\\_\\_, int64\\_t *self\\_size\\_data, int self\\_size\\_len, tensor grad\\_output, tensor weight, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *dilation\\_data, int dilation\\_len, int64\\_t groups, int benchmark, int deterministic)
"""
function miopen_depthwise_convolution_backward_input(self_size_data::Array{Int64}, grad_output::Tensor, weight::Tensor, padding_data::Array{Int64}, stride_data::Array{Int64}, dilation_data::Array{Int64}, groups::Int64, benchmark::Int, deterministic::Int)
outputs__ = Int[0]
self_size_len = length(self_size_data)
padding_len = length(padding_data)
stride_len = length(stride_data)
dilation_len = length(dilation_data)
__cret = ccall((:atg_miopen_depthwise_convolution_backward_input, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Clonglong, Cint, Cint),
outputs__, self_size_data, self_size_len, grad_output.pointer, weight.pointer, padding_data, padding_len, stride_data, stride_len, dilation_data, dilation_len, groups, benchmark, deterministic)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
miopen_depthwise_convolution_backward_weight(weight_size_data::Array{Int64}, grad_output::Tensor, self::Tensor, padding_data::Array{Int64}, stride_data::Array{Int64}, dilation_data::Array{Int64}, groups::Int64, benchmark::Int, deterministic::Int)
Wrapper of C++ function void atg\\_miopen\\_depthwise\\_convolution\\_backward\\_weight(tensor *out\\_\\_, int64\\_t *weight\\_size\\_data, int weight\\_size\\_len, tensor grad\\_output, tensor self, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *dilation\\_data, int dilation\\_len, int64\\_t groups, int benchmark, int deterministic)
"""
function miopen_depthwise_convolution_backward_weight(weight_size_data::Array{Int64}, grad_output::Tensor, self::Tensor, padding_data::Array{Int64}, stride_data::Array{Int64}, dilation_data::Array{Int64}, groups::Int64, benchmark::Int, deterministic::Int)
outputs__ = Int[0]
weight_size_len = length(weight_size_data)
padding_len = length(padding_data)
stride_len = length(stride_data)
dilation_len = length(dilation_data)
__cret = ccall((:atg_miopen_depthwise_convolution_backward_weight, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Clonglong, Cint, Cint),
outputs__, weight_size_data, weight_size_len, grad_output.pointer, self.pointer, padding_data, padding_len, stride_data, stride_len, dilation_data, dilation_len, groups, benchmark, deterministic)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
miopen_rnn(input::Tensor, weight_data::Array{Tensor{T,N}}, weight_stride0::Int64, hx::Tensor, cx::Tensor, mode::Int64, hidden_size::Int64, num_layers::Int64, batch_first::Int, dropout::Float64, train::Int, bidirectional::Int, batch_sizes_data::Array{Int64}, dropout_state::Tensor)
Wrapper of C++ function void atg\\_miopen\\_rnn(tensor *out\\_\\_, tensor input, tensor *weight\\_data, int weight\\_len, int64\\_t weight\\_stride0, tensor hx, tensor cx, int64\\_t mode, int64\\_t hidden\\_size, int64\\_t num\\_layers, int batch\\_first, double dropout, int train, int bidirectional, int64\\_t *batch\\_sizes\\_data, int batch\\_sizes\\_len, tensor dropout\\_state)
"""
function miopen_rnn(input::Tensor, weight_data::Array{Tensor{T,N}}, weight_stride0::Int64, hx::Tensor, cx::Tensor, mode::Int64, hidden_size::Int64, num_layers::Int64, batch_first::Int, dropout::Float64, train::Int, bidirectional::Int, batch_sizes_data::Array{Int64}, dropout_state::Tensor) where {T,N}
outputs__ = Int[0, 0, 0, 0, 0]
weight_data_ta_ = map(x->x.pointer, weight_data)
weight_len = length(weight_data)
batch_sizes_len = length(batch_sizes_data)
__cret = ccall((:atg_miopen_rnn, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Clonglong, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Clonglong, Clonglong, Cint, Cdouble, Cint, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}),
outputs__, input.pointer, weight_data_ta_, weight_len, weight_stride0, hx.pointer, cx.pointer, mode, hidden_size, num_layers, batch_first, dropout, train, bidirectional, batch_sizes_data, batch_sizes_len, dropout_state.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
__o_3 = tensor_from_ptr(Ptr{Cvoid}(outputs__[3]))
__o_4 = tensor_from_ptr(Ptr{Cvoid}(outputs__[4]))
__o_5 = tensor_from_ptr(Ptr{Cvoid}(outputs__[5]))
return __o_1, __o_2, __o_3, __o_4, __o_5
end
"""
mkldnn_adaptive_avg_pool2d(self::Tensor, output_size_data::Array{Int64})
Wrapper of C++ function void atg\\_mkldnn\\_adaptive\\_avg\\_pool2d(tensor *out\\_\\_, tensor self, int64\\_t *output\\_size\\_data, int output\\_size\\_len)
"""
function mkldnn_adaptive_avg_pool2d(self::Tensor, output_size_data::Array{Int64})
outputs__ = Int[0]
output_size_len = length(output_size_data)
__cret = ccall((:atg_mkldnn_adaptive_avg_pool2d, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, output_size_data, output_size_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
mkldnn_convolution(self::Tensor, weight::Tensor, bias::Tensor, padding_data::Array{Int64}, stride_data::Array{Int64}, dilation_data::Array{Int64}, groups::Int64)
Wrapper of C++ function void atg\\_mkldnn\\_convolution(tensor *out\\_\\_, tensor self, tensor weight, tensor bias, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *dilation\\_data, int dilation\\_len, int64\\_t groups)
"""
function mkldnn_convolution(self::Tensor, weight::Tensor, bias::Tensor, padding_data::Array{Int64}, stride_data::Array{Int64}, dilation_data::Array{Int64}, groups::Int64)
outputs__ = Int[0]
padding_len = length(padding_data)
stride_len = length(stride_data)
dilation_len = length(dilation_data)
__cret = ccall((:atg_mkldnn_convolution, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Clonglong),
outputs__, self.pointer, weight.pointer, bias.pointer, padding_data, padding_len, stride_data, stride_len, dilation_data, dilation_len, groups)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
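# Usage sketch: the convolution wrappers in this file (miopen_*, mkldnn_*) all
# take per-dimension Int64 arrays for padding/stride/dilation plus a group
# count; the matching *_len arguments are derived with `length` inside the
# wrapper. A hypothetical 2-D call, assuming libtorch was built with MKL-DNN
# support and that the `Tensor(::Array)` constructor is available:
#
#     x = Tensor(rand(Float32, 1, 3, 8, 8))    # NCHW input
#     w = Tensor(rand(Float32, 4, 3, 3, 3))    # out_ch x in_ch x kH x kW kernel
#     b = Tensor(zeros(Float32, 4))
#     y = mkldnn_convolution(x, w, b, Int64[1, 1], Int64[1, 1], Int64[1, 1], 1)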
"""
mkldnn_convolution_backward_input(self_size_data::Array{Int64}, grad_output::Tensor, weight::Tensor, padding_data::Array{Int64}, stride_data::Array{Int64}, dilation_data::Array{Int64}, groups::Int64, bias_defined::Int)
Wrapper of C++ function void atg\\_mkldnn\\_convolution\\_backward\\_input(tensor *out\\_\\_, int64\\_t *self\\_size\\_data, int self\\_size\\_len, tensor grad\\_output, tensor weight, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *dilation\\_data, int dilation\\_len, int64\\_t groups, int bias\\_defined)
"""
function mkldnn_convolution_backward_input(self_size_data::Array{Int64}, grad_output::Tensor, weight::Tensor, padding_data::Array{Int64}, stride_data::Array{Int64}, dilation_data::Array{Int64}, groups::Int64, bias_defined::Int)
outputs__ = Int[0]
self_size_len = length(self_size_data)
padding_len = length(padding_data)
stride_len = length(stride_data)
dilation_len = length(dilation_data)
__cret = ccall((:atg_mkldnn_convolution_backward_input, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Clonglong, Cint),
outputs__, self_size_data, self_size_len, grad_output.pointer, weight.pointer, padding_data, padding_len, stride_data, stride_len, dilation_data, dilation_len, groups, bias_defined)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
mkldnn_convolution_backward_weights(weight_size_data::Array{Int64}, grad_output::Tensor, self::Tensor, padding_data::Array{Int64}, stride_data::Array{Int64}, dilation_data::Array{Int64}, groups::Int64, bias_defined::Int)
Wrapper of C++ function void atg\\_mkldnn\\_convolution\\_backward\\_weights(tensor *out\\_\\_, int64\\_t *weight\\_size\\_data, int weight\\_size\\_len, tensor grad\\_output, tensor self, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *dilation\\_data, int dilation\\_len, int64\\_t groups, int bias\\_defined)
"""
function mkldnn_convolution_backward_weights(weight_size_data::Array{Int64}, grad_output::Tensor, self::Tensor, padding_data::Array{Int64}, stride_data::Array{Int64}, dilation_data::Array{Int64}, groups::Int64, bias_defined::Int)
outputs__ = Int[0, 0]
weight_size_len = length(weight_size_data)
padding_len = length(padding_data)
stride_len = length(stride_data)
dilation_len = length(dilation_data)
__cret = ccall((:atg_mkldnn_convolution_backward_weights, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Clonglong, Cint),
outputs__, weight_size_data, weight_size_len, grad_output.pointer, self.pointer, padding_data, padding_len, stride_data, stride_len, dilation_data, dilation_len, groups, bias_defined)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
mkldnn_linear(input::Tensor, weight::Tensor, bias::Tensor)
Wrapper of C++ function void atg\\_mkldnn\\_linear(tensor *out\\_\\_, tensor input, tensor weight, tensor bias)
"""
function mkldnn_linear(input::Tensor, weight::Tensor, bias::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_mkldnn_linear, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, input.pointer, weight.pointer, bias.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
mkldnn_max_pool2d(self::Tensor, kernel_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64}, dilation_data::Array{Int64}, ceil_mode::Int)
Wrapper of C++ function void atg\\_mkldnn\\_max\\_pool2d(tensor *out\\_\\_, tensor self, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *dilation\\_data, int dilation\\_len, int ceil\\_mode)
"""
function mkldnn_max_pool2d(self::Tensor, kernel_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64}, dilation_data::Array{Int64}, ceil_mode::Int)
outputs__ = Int[0]
kernel_size_len = length(kernel_size_data)
stride_len = length(stride_data)
padding_len = length(padding_data)
dilation_len = length(dilation_data)
__cret = ccall((:atg_mkldnn_max_pool2d, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Cint),
outputs__, self.pointer, kernel_size_data, kernel_size_len, stride_data, stride_len, padding_data, padding_len, dilation_data, dilation_len, ceil_mode)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
mkldnn_reorder_conv2d_weight(self::Tensor, padding_data::Array{Int64}, stride_data::Array{Int64}, dilation_data::Array{Int64}, groups::Int64)
Wrapper of C++ function void atg\\_mkldnn\\_reorder\\_conv2d\\_weight(tensor *out\\_\\_, tensor self, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *dilation\\_data, int dilation\\_len, int64\\_t groups)
"""
function mkldnn_reorder_conv2d_weight(self::Tensor, padding_data::Array{Int64}, stride_data::Array{Int64}, dilation_data::Array{Int64}, groups::Int64)
outputs__ = Int[0]
padding_len = length(padding_data)
stride_len = length(stride_data)
dilation_len = length(dilation_data)
__cret = ccall((:atg_mkldnn_reorder_conv2d_weight, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Clonglong),
outputs__, self.pointer, padding_data, padding_len, stride_data, stride_len, dilation_data, dilation_len, groups)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
mm(self::Tensor, mat2::Tensor)
Wrapper of C++ function void atg\\_mm(tensor *out\\_\\_, tensor self, tensor mat2)
"""
function mm(self::Tensor, mat2::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_mm, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, mat2.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
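# Usage sketch: `mm` is the 2-D matrix product (torch.mm); `mm_out` (below)
# writes the result into a preallocated tensor. Assumes the `Tensor(::Array)`
# constructor is available.
#
#     a = Tensor(rand(Float32, 3, 3))
#     b = Tensor(rand(Float32, 3, 3))
#     c = mm(a, b)   # 3x3 matrix product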
"""
mm_out(out::Tensor, self::Tensor, mat2::Tensor)
Wrapper of C++ function void atg\\_mm\\_out(tensor *out\\_\\_, tensor out, tensor self, tensor mat2)
"""
function mm_out(out::Tensor, self::Tensor, mat2::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_mm_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, mat2.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
mode(self::Tensor, dim::Int64, keepdim::Int)
Wrapper of C++ function void atg\\_mode(tensor *out\\_\\_, tensor self, int64\\_t dim, int keepdim)
"""
function mode(self::Tensor, dim::Int64, keepdim::Int)
outputs__ = Int[0, 0]
__cret = ccall((:atg_mode, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Cint),
outputs__, self.pointer, dim, keepdim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
mode_out(values::Tensor, indices::Tensor, self::Tensor, dim::Int64, keepdim::Int)
Wrapper of C++ function void atg\\_mode\\_out(tensor *out\\_\\_, tensor values, tensor indices, tensor self, int64\\_t dim, int keepdim)
"""
function mode_out(values::Tensor, indices::Tensor, self::Tensor, dim::Int64, keepdim::Int)
outputs__ = Int[0, 0]
__cret = ccall((:atg_mode_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Cint),
outputs__, values.pointer, indices.pointer, self.pointer, dim, keepdim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
mse_loss(self::Tensor, target::Tensor, reduction::Int64)
Wrapper of C++ function void atg\\_mse\\_loss(tensor *out\\_\\_, tensor self, tensor target, int64\\_t reduction)
"""
function mse_loss(self::Tensor, target::Tensor, reduction::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_mse_loss, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, self.pointer, target.pointer, reduction)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
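# Usage sketch: `mse_loss` computes the squared error between `self` and
# `target`; `reduction` follows ATen's integer convention (0 = none, 1 = mean,
# 2 = sum); treat those codes as an assumption pinned to the current libtorch
# enum. Assumes the `Tensor(::Array)` constructor is available.
#
#     pred   = Tensor(rand(Float32, 8))
#     target = Tensor(rand(Float32, 8))
#     loss   = mse_loss(pred, target, 1)   # 0-dim tensor, mean reduction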
"""
mse_loss_backward(grad_output::Tensor, self::Tensor, target::Tensor, reduction::Int64)
Wrapper of C++ function void atg\\_mse\\_loss\\_backward(tensor *out\\_\\_, tensor grad\\_output, tensor self, tensor target, int64\\_t reduction)
"""
function mse_loss_backward(grad_output::Tensor, self::Tensor, target::Tensor, reduction::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_mse_loss_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, grad_output.pointer, self.pointer, target.pointer, reduction)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
mse_loss_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, target::Tensor, reduction::Int64)
Wrapper of C++ function void atg\\_mse\\_loss\\_backward\\_out(tensor *out\\_\\_, tensor grad\\_input, tensor grad\\_output, tensor self, tensor target, int64\\_t reduction)
"""
function mse_loss_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, target::Tensor, reduction::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_mse_loss_backward_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, grad_input.pointer, grad_output.pointer, self.pointer, target.pointer, reduction)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
mse_loss_out(out::Tensor, self::Tensor, target::Tensor, reduction::Int64)
Wrapper of C++ function void atg\\_mse\\_loss\\_out(tensor *out\\_\\_, tensor out, tensor self, tensor target, int64\\_t reduction)
"""
function mse_loss_out(out::Tensor, self::Tensor, target::Tensor, reduction::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_mse_loss_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, out.pointer, self.pointer, target.pointer, reduction)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
mul(self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_mul(tensor *out\\_\\_, tensor self, tensor other)
"""
function mul(self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_mul, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
mul1(self::Tensor, other::TorchNumber)
Wrapper of C++ function void atg\\_mul1(tensor *out\\_\\_, tensor self, scalar other)
"""
function mul1(self::Tensor, other::TorchNumber)
outputs__ = Int[0]
other_s_ = Scalar(other)
__cret = ccall((:atg_mul1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
mul!(self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_mul\\_(tensor *out\\_\\_, tensor self, tensor other)
"""
function mul!(self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_mul_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other.pointer)
return self
end
"""
mul1!(self::Tensor, other::TorchNumber)
Wrapper of C++ function void atg\\_mul\\_1(tensor *out\\_\\_, tensor self, scalar other)
"""
function mul1!(self::Tensor, other::TorchNumber)
outputs__ = Int[0]
other_s_ = Scalar(other)
__cret = ccall((:atg_mul_1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other_s_.pointer)
return self
end
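# Usage sketch: `mul` is element-wise multiplication, `mul1` scales by a
# scalar, and the `!` variants overwrite `self` and return it. Assumes the
# `Tensor(::Array)` constructor is available.
#
#     a = Tensor(rand(Float32, 4))
#     b = Tensor(rand(Float32, 4))
#     c = mul(a, b)        # fresh tensor holding a .* b
#     d = mul1(a, 2.0f0)   # fresh tensor holding 2a
#     mul!(a, b)           # a now holds a .* b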
"""
mul_out(out::Tensor, self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_mul\\_out(tensor *out\\_\\_, tensor out, tensor self, tensor other)
"""
function mul_out(out::Tensor, self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_mul_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, other.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
multi_margin_loss_backward(grad_output::Tensor, self::Tensor, target::Tensor, p::TorchNumber, margin::TorchNumber, weight::Tensor, reduction::Int64)
Wrapper of C++ function void atg\\_multi\\_margin\\_loss\\_backward(tensor *out\\_\\_, tensor grad\\_output, tensor self, tensor target, scalar p, scalar margin, tensor weight, int64\\_t reduction)
"""
function multi_margin_loss_backward(grad_output::Tensor, self::Tensor, target::Tensor, p::TorchNumber, margin::TorchNumber, weight::Tensor, reduction::Int64)
outputs__ = Int[0]
p_s_ = Scalar(p)
margin_s_ = Scalar(margin)
__cret = ccall((:atg_multi_margin_loss_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, grad_output.pointer, self.pointer, target.pointer, p_s_.pointer, margin_s_.pointer, weight.pointer, reduction)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
multi_margin_loss_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, target::Tensor, p::TorchNumber, margin::TorchNumber, weight::Tensor, reduction::Int64)
Wrapper of C++ function void atg\\_multi\\_margin\\_loss\\_backward\\_out(tensor *out\\_\\_, tensor grad\\_input, tensor grad\\_output, tensor self, tensor target, scalar p, scalar margin, tensor weight, int64\\_t reduction)
"""
function multi_margin_loss_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, target::Tensor, p::TorchNumber, margin::TorchNumber, weight::Tensor, reduction::Int64)
outputs__ = Int[0]
p_s_ = Scalar(p)
margin_s_ = Scalar(margin)
__cret = ccall((:atg_multi_margin_loss_backward_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, grad_input.pointer, grad_output.pointer, self.pointer, target.pointer, p_s_.pointer, margin_s_.pointer, weight.pointer, reduction)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
multilabel_margin_loss(self::Tensor, target::Tensor, reduction::Int64)
Wrapper of C++ function void atg\\_multilabel\\_margin\\_loss(tensor *out\\_\\_, tensor self, tensor target, int64\\_t reduction)
"""
function multilabel_margin_loss(self::Tensor, target::Tensor, reduction::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_multilabel_margin_loss, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, self.pointer, target.pointer, reduction)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
multilabel_margin_loss_backward(grad_output::Tensor, self::Tensor, target::Tensor, reduction::Int64, is_target::Tensor)
Wrapper of C++ function void atg\\_multilabel\\_margin\\_loss\\_backward(tensor *out\\_\\_, tensor grad\\_output, tensor self, tensor target, int64\\_t reduction, tensor is\\_target)
"""
function multilabel_margin_loss_backward(grad_output::Tensor, self::Tensor, target::Tensor, reduction::Int64, is_target::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_multilabel_margin_loss_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Ptr{Cvoid}),
outputs__, grad_output.pointer, self.pointer, target.pointer, reduction, is_target.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
multilabel_margin_loss_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, target::Tensor, reduction::Int64, is_target::Tensor)
Wrapper of C++ function void atg\\_multilabel\\_margin\\_loss\\_backward\\_out(tensor *out\\_\\_, tensor grad\\_input, tensor grad\\_output, tensor self, tensor target, int64\\_t reduction, tensor is\\_target)
"""
function multilabel_margin_loss_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, target::Tensor, reduction::Int64, is_target::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_multilabel_margin_loss_backward_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Ptr{Cvoid}),
outputs__, grad_input.pointer, grad_output.pointer, self.pointer, target.pointer, reduction, is_target.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
multilabel_margin_loss_out(out::Tensor, self::Tensor, target::Tensor, reduction::Int64)
Wrapper of C++ function void atg\\_multilabel\\_margin\\_loss\\_out(tensor *out\\_\\_, tensor out, tensor self, tensor target, int64\\_t reduction)
"""
function multilabel_margin_loss_out(out::Tensor, self::Tensor, target::Tensor, reduction::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_multilabel_margin_loss_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, out.pointer, self.pointer, target.pointer, reduction)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
multinomial(self::Tensor, num_samples::Int64, replacement::Int)
Wrapper of C++ function void atg\\_multinomial(tensor *out\\_\\_, tensor self, int64\\_t num\\_samples, int replacement)
"""
function multinomial(self::Tensor, num_samples::Int64, replacement::Int)
outputs__ = Int[0]
__cret = ccall((:atg_multinomial, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Cint),
outputs__, self.pointer, num_samples, replacement)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
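# Usage sketch: `multinomial` draws `num_samples` category indices from a
# non-negative weight tensor, like torch.multinomial; `replacement` is a C int
# flag (0 or 1). Assumes the `Tensor(::Array)` constructor is available.
#
#     weights = Tensor(Float32[0.1, 0.3, 0.6])
#     idx = multinomial(weights, 2, 0)   # two distinct indices, weight-biased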
"""
multinomial_out(out::Tensor, self::Tensor, num_samples::Int64, replacement::Int)
Wrapper of C++ function void atg\\_multinomial\\_out(tensor *out\\_\\_, tensor out, tensor self, int64\\_t num\\_samples, int replacement)
"""
function multinomial_out(out::Tensor, self::Tensor, num_samples::Int64, replacement::Int)
outputs__ = Int[0]
__cret = ccall((:atg_multinomial_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Cint),
outputs__, out.pointer, self.pointer, num_samples, replacement)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
import Base.mv
"""
mv(self::Tensor, vec::Tensor)
Wrapper of C++ function void atg\\_mv(tensor *out\\_\\_, tensor self, tensor vec)
"""
function mv(self::Tensor, vec::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_mv, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, vec.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
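# Usage sketch: `mv` is the matrix-vector product (torch.mv). Assumes the
# `Tensor(::Array)` constructor is available.
#
#     m = Tensor(rand(Float32, 3, 3))
#     v = Tensor(rand(Float32, 3))
#     y = mv(m, v)   # length-3 result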
"""
mv_out(out::Tensor, self::Tensor, vec::Tensor)
Wrapper of C++ function void atg\\_mv\\_out(tensor *out\\_\\_, tensor out, tensor self, tensor vec)
"""
function mv_out(out::Tensor, self::Tensor, vec::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_mv_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, vec.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
mvlgamma(self::Tensor, p::Int64)
Wrapper of C++ function void atg\\_mvlgamma(tensor *out\\_\\_, tensor self, int64\\_t p)
"""
function mvlgamma(self::Tensor, p::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_mvlgamma, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, self.pointer, p)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
mvlgamma!(self::Tensor, p::Int64)
Wrapper of C++ function void atg\\_mvlgamma\\_(tensor *out\\_\\_, tensor self, int64\\_t p)
"""
function mvlgamma!(self::Tensor, p::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_mvlgamma_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, self.pointer, p)
return self
end
"""
narrow(self::Tensor, dim::Int64, start::Int64, length::Int64)
Wrapper of C++ function void atg\\_narrow(tensor *out\\_\\_, tensor self, int64\\_t dim, int64\\_t start, int64\\_t length)
"""
function narrow(self::Tensor, dim::Int64, start::Int64, length::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_narrow, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Clonglong, Clonglong),
outputs__, self.pointer, dim, start, length)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
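# Usage sketch: `narrow` returns a view of `length` elements along `dim`
# starting at `start`; both `dim` and `start` are 0-based, as in libtorch.
# Assumes the `Tensor(::Array)` constructor is available.
#
#     t = Tensor(collect(1.0:10.0))
#     s = narrow(t, 0, 2, 4)   # elements 3..6 of the original data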
"""
narrow_copy(self::Tensor, dim::Int64, start::Int64, length::Int64)
Wrapper of C++ function void atg\\_narrow\\_copy(tensor *out\\_\\_, tensor self, int64\\_t dim, int64\\_t start, int64\\_t length)
"""
function narrow_copy(self::Tensor, dim::Int64, start::Int64, length::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_narrow_copy, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Clonglong, Clonglong),
outputs__, self.pointer, dim, start, length)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
native_batch_norm(input::Tensor, weight::Tensor, bias::Tensor, running_mean::Tensor, running_var::Tensor, training::Int, momentum::Float64, eps::Float64)
Wrapper of C++ function void atg\\_native\\_batch\\_norm(tensor *out\\_\\_, tensor input, tensor weight, tensor bias, tensor running\\_mean, tensor running\\_var, int training, double momentum, double eps)
"""
function native_batch_norm(input::Tensor, weight::Tensor, bias::Tensor, running_mean::Tensor, running_var::Tensor, training::Int, momentum::Float64, eps::Float64)
outputs__ = Int[0, 0, 0]
__cret = ccall((:atg_native_batch_norm, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cdouble, Cdouble),
outputs__, input.pointer, weight.pointer, bias.pointer, running_mean.pointer, running_var.pointer, training, momentum, eps)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
__o_3 = tensor_from_ptr(Ptr{Cvoid}(outputs__[3]))
return __o_1, __o_2, __o_3
end
"""
native_layer_norm(input::Tensor, weight::Tensor, bias::Tensor, M::Int64, n::Int64, eps::Float64)
Wrapper of C++ function void atg\\_native\\_layer\\_norm(tensor *out\\_\\_, tensor input, tensor weight, tensor bias, int64\\_t M, int64\\_t n, double eps)
"""
function native_layer_norm(input::Tensor, weight::Tensor, bias::Tensor, M::Int64, n::Int64, eps::Float64)
outputs__ = Int[0, 0, 0]
__cret = ccall((:atg_native_layer_norm, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Clonglong, Cdouble),
outputs__, input.pointer, weight.pointer, bias.pointer, M, n, eps)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
__o_3 = tensor_from_ptr(Ptr{Cvoid}(outputs__[3]))
return __o_1, __o_2, __o_3
end
"""
native_norm(self::Tensor)
Wrapper of C++ function void atg\\_native\\_norm(tensor *out\\_\\_, tensor self)
"""
function native_norm(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_native_norm, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
ne(self::Tensor, other::TorchNumber)
Wrapper of C++ function void atg\\_ne(tensor *out\\_\\_, tensor self, scalar other)
"""
function ne(self::Tensor, other::TorchNumber)
outputs__ = Int[0]
other_s_ = Scalar(other)
__cret = ccall((:atg_ne, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
ne1(self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_ne1(tensor *out\\_\\_, tensor self, tensor other)
"""
function ne1(self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_ne1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
ne!(self::Tensor, other::TorchNumber)
Wrapper of C++ function void atg\\_ne\\_(tensor *out\\_\\_, tensor self, scalar other)
"""
function ne!(self::Tensor, other::TorchNumber)
outputs__ = Int[0]
other_s_ = Scalar(other)
__cret = ccall((:atg_ne_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other_s_.pointer)
return self
end
"""
ne1!(self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_ne\\_1(tensor *out\\_\\_, tensor self, tensor other)
"""
function ne1!(self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_ne_1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other.pointer)
return self
end
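# Usage sketch: `ne`/`ne1` build an element-wise "not equal" mask against a
# scalar or another tensor; the `!` variants overwrite `self` with that mask.
# Assumes the `Tensor(::Array)` constructor is available.
#
#     t = Tensor(Float32[1, 2, 3])
#     mask = ne(t, 2.0f0)   # true where t != 2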
"""
ne_out(out::Tensor, self::Tensor, other::TorchNumber)
Wrapper of C++ function void atg\\_ne\\_out(tensor *out\\_\\_, tensor out, tensor self, scalar other)
"""
function ne_out(out::Tensor, self::Tensor, other::TorchNumber)
outputs__ = Int[0]
other_s_ = Scalar(other)
__cret = ccall((:atg_ne_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, other_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
ne_out1(out::Tensor, self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_ne\\_out1(tensor *out\\_\\_, tensor out, tensor self, tensor other)
"""
function ne_out1(out::Tensor, self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_ne_out1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, other.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
neg(self::Tensor)
Wrapper of C++ function void atg\\_neg(tensor *out\\_\\_, tensor self)
"""
function neg(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_neg, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
neg!(self::Tensor)
Wrapper of C++ function void atg\\_neg\\_(tensor *out\\_\\_, tensor self)
"""
function neg!(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_neg_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
return self
end
"""
neg_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_neg\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function neg_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_neg_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
new_empty(self::Tensor, size_data::Array{Int64}, options_kind::Int, options_device::Int)
Wrapper of C++ function void atg\\_new\\_empty(tensor *out\\_\\_, tensor self, int64\\_t *size\\_data, int size\\_len, int options\\_kind, int options\\_device)
"""
function new_empty(self::Tensor, size_data::Array{Int64}, options_kind::Int, options_device::Int)
outputs__ = Int[0]
size_len = length(size_data)
__cret = ccall((:atg_new_empty, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint, Cint),
outputs__, self.pointer, size_data, size_len, options_kind, options_device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
new_full(self::Tensor, size_data::Array{Int64}, fill_value::TorchNumber, options_kind::Int, options_device::Int)
Wrapper of C++ function void atg\\_new\\_full(tensor *out\\_\\_, tensor self, int64\\_t *size\\_data, int size\\_len, scalar fill\\_value, int options\\_kind, int options\\_device)
"""
function new_full(self::Tensor, size_data::Array{Int64}, fill_value::TorchNumber, options_kind::Int, options_device::Int)
outputs__ = Int[0]
size_len = length(size_data)
fill_value_s_ = Scalar(fill_value)
__cret = ccall((:atg_new_full, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Cint),
outputs__, self.pointer, size_data, size_len, fill_value_s_.pointer, options_kind, options_device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
new_zeros(self::Tensor, size_data::Array{Int64}, options_kind::Int, options_device::Int)
Wrapper of C++ function void atg\\_new\\_zeros(tensor *out\\_\\_, tensor self, int64\\_t *size\\_data, int size\\_len, int options\\_kind, int options\\_device)
"""
function new_zeros(self::Tensor, size_data::Array{Int64}, options_kind::Int, options_device::Int)
outputs__ = Int[0]
size_len = length(size_data)
__cret = ccall((:atg_new_zeros, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint, Cint),
outputs__, self.pointer, size_data, size_len, options_kind, options_device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
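# Usage sketch: `new_empty`/`new_full`/`new_zeros` allocate a tensor shaped by
# `size_data`, with dtype and device given by `options_kind` and
# `options_device`. These are ATen's integer codes (e.g. 6 is commonly Float32
# and -1 the CPU device in ocaml-torch-style bindings); treat the exact values
# as assumptions and prefer the package's dtype helpers if it provides any.
#
#     base = Tensor(rand(Float32, 2, 2))
#     z = new_zeros(base, Int64[3, 3], 6, -1)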
"""
nll_loss(self::Tensor, target::Tensor, weight::Tensor, reduction::Int64, ignore_index::Int64)
Wrapper of C++ function void atg\\_nll\\_loss(tensor *out\\_\\_, tensor self, tensor target, tensor weight, int64\\_t reduction, int64\\_t ignore\\_index)
"""
function nll_loss(self::Tensor, target::Tensor, weight::Tensor, reduction::Int64, ignore_index::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_nll_loss, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Clonglong),
outputs__, self.pointer, target.pointer, weight.pointer, reduction, ignore_index)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
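# Usage sketch: `nll_loss` expects log-probabilities in `self` and 0-based
# class indices in `target`; `weight` holds per-class weights, `reduction`
# uses ATen's 0/1/2 codes, and `ignore_index` marks targets to skip (PyTorch's
# default is -100). Assumes `Tensor` constructors for Float32 and Int64 arrays.
#
#     logp   = Tensor(rand(Float32, 4, 3))   # stand-in for log-probabilities
#     target = Tensor(Int64[0, 1, 2, 1])
#     w      = Tensor(ones(Float32, 3))
#     loss   = nll_loss(logp, target, w, 1, -100)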
"""
nll_loss2d(self::Tensor, target::Tensor, weight::Tensor, reduction::Int64, ignore_index::Int64)
Wrapper of C++ function void atg\\_nll\\_loss2d(tensor *out\\_\\_, tensor self, tensor target, tensor weight, int64\\_t reduction, int64\\_t ignore\\_index)
"""
function nll_loss2d(self::Tensor, target::Tensor, weight::Tensor, reduction::Int64, ignore_index::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_nll_loss2d, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Clonglong),
outputs__, self.pointer, target.pointer, weight.pointer, reduction, ignore_index)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
nll_loss2d_backward(grad_output::Tensor, self::Tensor, target::Tensor, weight::Tensor, reduction::Int64, ignore_index::Int64, total_weight::Tensor)
Wrapper of C++ function void atg\\_nll\\_loss2d\\_backward(tensor *out\\_\\_, tensor grad\\_output, tensor self, tensor target, tensor weight, int64\\_t reduction, int64\\_t ignore\\_index, tensor total\\_weight)
"""
function nll_loss2d_backward(grad_output::Tensor, self::Tensor, target::Tensor, weight::Tensor, reduction::Int64, ignore_index::Int64, total_weight::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_nll_loss2d_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Clonglong, Ptr{Cvoid}),
outputs__, grad_output.pointer, self.pointer, target.pointer, weight.pointer, reduction, ignore_index, total_weight.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
nll_loss2d_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, target::Tensor, weight::Tensor, reduction::Int64, ignore_index::Int64, total_weight::Tensor)
Wrapper of C++ function void atg\\_nll\\_loss2d\\_backward\\_out(tensor *out\\_\\_, tensor grad\\_input, tensor grad\\_output, tensor self, tensor target, tensor weight, int64\\_t reduction, int64\\_t ignore\\_index, tensor total\\_weight)
"""
function nll_loss2d_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, target::Tensor, weight::Tensor, reduction::Int64, ignore_index::Int64, total_weight::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_nll_loss2d_backward_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Clonglong, Ptr{Cvoid}),
outputs__, grad_input.pointer, grad_output.pointer, self.pointer, target.pointer, weight.pointer, reduction, ignore_index, total_weight.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
nll_loss2d_out(out::Tensor, self::Tensor, target::Tensor, weight::Tensor, reduction::Int64, ignore_index::Int64)
Wrapper of C++ function void atg\\_nll\\_loss2d\\_out(tensor *out\\_\\_, tensor out, tensor self, tensor target, tensor weight, int64\\_t reduction, int64\\_t ignore\\_index)
"""
function nll_loss2d_out(out::Tensor, self::Tensor, target::Tensor, weight::Tensor, reduction::Int64, ignore_index::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_nll_loss2d_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Clonglong),
outputs__, out.pointer, self.pointer, target.pointer, weight.pointer, reduction, ignore_index)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
nll_loss_backward(grad_output::Tensor, self::Tensor, target::Tensor, weight::Tensor, reduction::Int64, ignore_index::Int64, total_weight::Tensor)
Wrapper of C++ function void atg\\_nll\\_loss\\_backward(tensor *out\\_\\_, tensor grad\\_output, tensor self, tensor target, tensor weight, int64\\_t reduction, int64\\_t ignore\\_index, tensor total\\_weight)
"""
function nll_loss_backward(grad_output::Tensor, self::Tensor, target::Tensor, weight::Tensor, reduction::Int64, ignore_index::Int64, total_weight::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_nll_loss_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Clonglong, Ptr{Cvoid}),
outputs__, grad_output.pointer, self.pointer, target.pointer, weight.pointer, reduction, ignore_index, total_weight.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
nll_loss_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, target::Tensor, weight::Tensor, reduction::Int64, ignore_index::Int64, total_weight::Tensor)
Wrapper of C++ function void atg\\_nll\\_loss\\_backward\\_out(tensor *out\\_\\_, tensor grad\\_input, tensor grad\\_output, tensor self, tensor target, tensor weight, int64\\_t reduction, int64\\_t ignore\\_index, tensor total\\_weight)
"""
function nll_loss_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, target::Tensor, weight::Tensor, reduction::Int64, ignore_index::Int64, total_weight::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_nll_loss_backward_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Clonglong, Ptr{Cvoid}),
outputs__, grad_input.pointer, grad_output.pointer, self.pointer, target.pointer, weight.pointer, reduction, ignore_index, total_weight.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
nll_loss_out(out::Tensor, self::Tensor, target::Tensor, weight::Tensor, reduction::Int64, ignore_index::Int64)
Wrapper of C++ function void atg\\_nll\\_loss\\_out(tensor *out\\_\\_, tensor out, tensor self, tensor target, tensor weight, int64\\_t reduction, int64\\_t ignore\\_index)
"""
function nll_loss_out(out::Tensor, self::Tensor, target::Tensor, weight::Tensor, reduction::Int64, ignore_index::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_nll_loss_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Clonglong),
outputs__, out.pointer, self.pointer, target.pointer, weight.pointer, reduction, ignore_index)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
nonzero(self::Tensor)
Wrapper of C++ function void atg\\_nonzero(tensor *out\\_\\_, tensor self)
"""
function nonzero(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_nonzero, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
nonzero_numpy(self::Tensor)
Wrapper of C++ function tensor *atg\\_nonzero\\_numpy(tensor self)
"""
function nonzero_numpy(self::Tensor)
__cret = ccall((:atg_nonzero_numpy, :libtorch_capi),
Ptr{Int}, (Ptr{Cvoid},),
self.pointer)
ptrs__, i__ = Int[], 1
while true
ptr__ = unsafe_load(__cret, i__)
ptr__ == 0 && break
push!(ptrs__, ptr__)
i__ += 1
end
ccall(:free, Cvoid, (Ptr{Cvoid},), __cret)
return map(x -> tensor_from_ptr(Ptr{Nothing}(x)), ptrs__)
end
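# Usage sketch: `nonzero` returns an n-by-d tensor of 0-based coordinates of
# the non-zero entries, while `nonzero_numpy` returns one 1-D index tensor per
# dimension, mirroring numpy.nonzero. Assumes the `Tensor(::Array)`
# constructor is available.
#
#     t = Tensor(Float32[0, 3, 0, 7])
#     idx = nonzero(t)             # 2x1 tensor of indices
#     per_dim = nonzero_numpy(t)   # one-element vector of index tensors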
"""
nonzero_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_nonzero\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function nonzero_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_nonzero_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
norm(self::Tensor)
Wrapper of C++ function void atg\\_norm(tensor *out\\_\\_, tensor self)
"""
function norm(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_norm, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
norm1(self::Tensor, p::TorchNumber, dtype::Int)
Wrapper of C++ function void atg\\_norm1(tensor *out\\_\\_, tensor self, scalar p, int dtype)
"""
function norm1(self::Tensor, p::TorchNumber, dtype::Int)
outputs__ = Int[0]
p_s_ = Scalar(p)
__cret = ccall((:atg_norm1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, p_s_.pointer, dtype)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
norm2(self::Tensor, p::TorchNumber, dim_data::Array{Int64}, keepdim::Int)
Wrapper of C++ function void atg\\_norm2(tensor *out\\_\\_, tensor self, scalar p, int64\\_t *dim\\_data, int dim\\_len, int keepdim)
"""
function norm2(self::Tensor, p::TorchNumber, dim_data::Array{Int64}, keepdim::Int)
outputs__ = Int[0]
p_s_ = Scalar(p)
dim_len = length(dim_data)
__cret = ccall((:atg_norm2, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint),
outputs__, self.pointer, p_s_.pointer, dim_data, dim_len, keepdim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
norm3(self::Tensor, p::TorchNumber, dim_data::Array{Int64}, keepdim::Int, dtype::Int)
Wrapper of C++ function void atg\\_norm3(tensor *out\\_\\_, tensor self, scalar p, int64\\_t *dim\\_data, int dim\\_len, int keepdim, int dtype)
"""
function norm3(self::Tensor, p::TorchNumber, dim_data::Array{Int64}, keepdim::Int, dtype::Int)
outputs__ = Int[0]
p_s_ = Scalar(p)
dim_len = length(dim_data)
__cret = ccall((:atg_norm3, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint, Cint),
outputs__, self.pointer, p_s_.pointer, dim_data, dim_len, keepdim, dtype)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
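# Usage note: wrappers taking `dim_data::Array{Int64}` pass the Julia array directly as
# the C `int64_t*` argument and compute the length themselves, so callers only supply the
# dimension indices (0-based, as in libtorch). Sketch for the norm family, assuming `t`
# is a Tensor built elsewhere:
#
#     n  = norm(t)                      # overall 2-norm
#     n1 = norm2(t, 2, Int64[0], 1)     # 2-norm over dim 0, keepdim = true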
"""
norm_except_dim(v::Tensor, pow::Int64, dim::Int64)
Wrapper of C++ function void atg\\_norm\\_except\\_dim(tensor *out\\_\\_, tensor v, int64\\_t pow, int64\\_t dim)
"""
function norm_except_dim(v::Tensor, pow::Int64, dim::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_norm_except_dim, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Clonglong),
outputs__, v.pointer, pow, dim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
norm_out(out::Tensor, self::Tensor, p::TorchNumber, dim_data::Array{Int64}, keepdim::Int)
Wrapper of C++ function void atg\\_norm\\_out(tensor *out\\_\\_, tensor out, tensor self, scalar p, int64\\_t *dim\\_data, int dim\\_len, int keepdim)
"""
function norm_out(out::Tensor, self::Tensor, p::TorchNumber, dim_data::Array{Int64}, keepdim::Int)
outputs__ = Int[0]
p_s_ = Scalar(p)
dim_len = length(dim_data)
__cret = ccall((:atg_norm_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint),
outputs__, out.pointer, self.pointer, p_s_.pointer, dim_data, dim_len, keepdim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
norm_out1(out::Tensor, self::Tensor, p::TorchNumber, dim_data::Array{Int64}, keepdim::Int, dtype::Int)
Wrapper of C++ function void atg\\_norm\\_out1(tensor *out\\_\\_, tensor out, tensor self, scalar p, int64\\_t *dim\\_data, int dim\\_len, int keepdim, int dtype)
"""
function norm_out1(out::Tensor, self::Tensor, p::TorchNumber, dim_data::Array{Int64}, keepdim::Int, dtype::Int)
outputs__ = Int[0]
p_s_ = Scalar(p)
dim_len = length(dim_data)
__cret = ccall((:atg_norm_out1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint, Cint),
outputs__, out.pointer, self.pointer, p_s_.pointer, dim_data, dim_len, keepdim, dtype)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
normal!(self::Tensor, mean::Float64, std::Float64)
Wrapper of C++ function void atg\\_normal\\_(tensor *out\\_\\_, tensor self, double mean, double std)
"""
function normal!(self::Tensor, mean::Float64, std::Float64)
outputs__ = Int[0]
__cret = ccall((:atg_normal_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cdouble, Cdouble),
outputs__, self.pointer, mean, std)
return self
end
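# Usage note: functions whose names end in `!` (generated from the `_`-suffixed in-place
# C variants) mutate `self` and return it, matching Julia's bang convention. Sketch,
# assuming `t` is a floating-point Tensor:
#
#     normal!(t, 0.0, 1.0)   # fill `t` in place with N(0, 1) samples; returns `t`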
"""
normal_out(out::Tensor, mean::Tensor, std::Float64)
Wrapper of C++ function void atg\\_normal\\_out(tensor *out\\_\\_, tensor out, tensor mean, double std)
"""
function normal_out(out::Tensor, mean::Tensor, std::Float64)
outputs__ = Int[0]
__cret = ccall((:atg_normal_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cdouble),
outputs__, out.pointer, mean.pointer, std)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
normal_out1(out::Tensor, mean::Float64, std::Tensor)
Wrapper of C++ function void atg\\_normal\\_out1(tensor *out\\_\\_, tensor out, double mean, tensor std)
"""
function normal_out1(out::Tensor, mean::Float64, std::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_normal_out1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cdouble, Ptr{Cvoid}),
outputs__, out.pointer, mean, std.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
normal_out2(out::Tensor, mean::Tensor, std::Tensor)
Wrapper of C++ function void atg\\_normal\\_out2(tensor *out\\_\\_, tensor out, tensor mean, tensor std)
"""
function normal_out2(out::Tensor, mean::Tensor, std::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_normal_out2, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, mean.pointer, std.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
normal_out3(out::Tensor, mean::Float64, std::Float64, size_data::Array{Int64})
Wrapper of C++ function void atg\\_normal\\_out3(tensor *out\\_\\_, tensor out, double mean, double std, int64\\_t *size\\_data, int size\\_len)
"""
function normal_out3(out::Tensor, mean::Float64, std::Float64, size_data::Array{Int64})
outputs__ = Int[0]
size_len = length(size_data)
__cret = ccall((:atg_normal_out3, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cdouble, Cdouble, Ptr{Cvoid}, Cint),
outputs__, out.pointer, mean, std, size_data, size_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
nuclear_norm(self::Tensor, keepdim::Int)
Wrapper of C++ function void atg\\_nuclear\\_norm(tensor *out\\_\\_, tensor self, int keepdim)
"""
function nuclear_norm(self::Tensor, keepdim::Int)
outputs__ = Int[0]
__cret = ccall((:atg_nuclear_norm, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, keepdim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
nuclear_norm1(self::Tensor, dim_data::Array{Int64}, keepdim::Int)
Wrapper of C++ function void atg\\_nuclear\\_norm1(tensor *out\\_\\_, tensor self, int64\\_t *dim\\_data, int dim\\_len, int keepdim)
"""
function nuclear_norm1(self::Tensor, dim_data::Array{Int64}, keepdim::Int)
outputs__ = Int[0]
dim_len = length(dim_data)
__cret = ccall((:atg_nuclear_norm1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint),
outputs__, self.pointer, dim_data, dim_len, keepdim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
nuclear_norm_out(out::Tensor, self::Tensor, keepdim::Int)
Wrapper of C++ function void atg\\_nuclear\\_norm\\_out(tensor *out\\_\\_, tensor out, tensor self, int keepdim)
"""
function nuclear_norm_out(out::Tensor, self::Tensor, keepdim::Int)
outputs__ = Int[0]
__cret = ccall((:atg_nuclear_norm_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, out.pointer, self.pointer, keepdim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
nuclear_norm_out1(out::Tensor, self::Tensor, dim_data::Array{Int64}, keepdim::Int)
Wrapper of C++ function void atg\\_nuclear\\_norm\\_out1(tensor *out\\_\\_, tensor out, tensor self, int64\\_t *dim\\_data, int dim\\_len, int keepdim)
"""
function nuclear_norm_out1(out::Tensor, self::Tensor, dim_data::Array{Int64}, keepdim::Int)
outputs__ = Int[0]
dim_len = length(dim_data)
__cret = ccall((:atg_nuclear_norm_out1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint),
outputs__, out.pointer, self.pointer, dim_data, dim_len, keepdim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
numpy_t(self::Tensor)
Wrapper of C++ function void atg\\_numpy\\_t(tensor *out\\_\\_, tensor self)
"""
function numpy_t(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_numpy_t, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
one_hot(self::Tensor, num_classes::Int64)
Wrapper of C++ function void atg\\_one\\_hot(tensor *out\\_\\_, tensor self, int64\\_t num\\_classes)
"""
function one_hot(self::Tensor, num_classes::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_one_hot, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, self.pointer, num_classes)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
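# Usage sketch for `one_hot`, assuming `labels` is an integer (Long) Tensor of class ids
# constructed elsewhere:
#
#     oh = one_hot(labels, 10)   # appends a trailing dimension of size num_classes = 10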
import Base.ones
"""
ones(size_data::Array{Int64}, options_kind::Int, options_device::Int)
Wrapper of C++ function void atg\\_ones(tensor *out\\_\\_, int64\\_t *size\\_data, int size\\_len, int options\\_kind, int options\\_device)
"""
function ones(size_data::Array{Int64}, options_kind::Int, options_device::Int)
outputs__ = Int[0]
size_len = length(size_data)
__cret = ccall((:atg_ones, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint, Cint),
outputs__, size_data, size_len, options_kind, options_device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
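# Usage note: factory wrappers such as `ones`, `rand` and `randn` take the target shape as
# an `Array{Int64}` plus two integer codes, `options_kind` (the ATen scalar type) and
# `options_device` (the device). The concrete code values are defined by libtorch and this
# package, not by this file; the values below are assumptions for illustration only:
#
#     FLOAT_KIND = 6    # assumed ATen code for Float32
#     CPU_DEVICE = -1   # assumed code for CPU in this binding
#     x = ones(Int64[2, 3], FLOAT_KIND, CPU_DEVICE)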
"""
ones_like(self::Tensor)
Wrapper of C++ function void atg\\_ones\\_like(tensor *out\\_\\_, tensor self)
"""
function ones_like(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_ones_like, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
ones_like1(self::Tensor, options_kind::Int, options_device::Int)
Wrapper of C++ function void atg\\_ones\\_like1(tensor *out\\_\\_, tensor self, int options\\_kind, int options\\_device)
"""
function ones_like1(self::Tensor, options_kind::Int, options_device::Int)
outputs__ = Int[0]
__cret = ccall((:atg_ones_like1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint),
outputs__, self.pointer, options_kind, options_device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
ones_out(out::Tensor, size_data::Array{Int64})
Wrapper of C++ function void atg\\_ones\\_out(tensor *out\\_\\_, tensor out, int64\\_t *size\\_data, int size\\_len)
"""
function ones_out(out::Tensor, size_data::Array{Int64})
outputs__ = Int[0]
size_len = length(size_data)
__cret = ccall((:atg_ones_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, out.pointer, size_data, size_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
orgqr(self::Tensor, input2::Tensor)
Wrapper of C++ function void atg\\_orgqr(tensor *out\\_\\_, tensor self, tensor input2)
"""
function orgqr(self::Tensor, input2::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_orgqr, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, input2.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
orgqr_out(out::Tensor, self::Tensor, input2::Tensor)
Wrapper of C++ function void atg\\_orgqr\\_out(tensor *out\\_\\_, tensor out, tensor self, tensor input2)
"""
function orgqr_out(out::Tensor, self::Tensor, input2::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_orgqr_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, input2.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
ormqr(self::Tensor, input2::Tensor, input3::Tensor, left::Int, transpose::Int)
Wrapper of C++ function void atg\\_ormqr(tensor *out\\_\\_, tensor self, tensor input2, tensor input3, int left, int transpose)
"""
function ormqr(self::Tensor, input2::Tensor, input3::Tensor, left::Int, transpose::Int)
outputs__ = Int[0]
__cret = ccall((:atg_ormqr, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint),
outputs__, self.pointer, input2.pointer, input3.pointer, left, transpose)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
ormqr_out(out::Tensor, self::Tensor, input2::Tensor, input3::Tensor, left::Int, transpose::Int)
Wrapper of C++ function void atg\\_ormqr\\_out(tensor *out\\_\\_, tensor out, tensor self, tensor input2, tensor input3, int left, int transpose)
"""
function ormqr_out(out::Tensor, self::Tensor, input2::Tensor, input3::Tensor, left::Int, transpose::Int)
outputs__ = Int[0]
__cret = ccall((:atg_ormqr_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint),
outputs__, out.pointer, self.pointer, input2.pointer, input3.pointer, left, transpose)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
pairwise_distance(x1::Tensor, x2::Tensor, p::Float64, eps::Float64, keepdim::Int)
Wrapper of C++ function void atg\\_pairwise\\_distance(tensor *out\\_\\_, tensor x1, tensor x2, double p, double eps, int keepdim)
"""
function pairwise_distance(x1::Tensor, x2::Tensor, p::Float64, eps::Float64, keepdim::Int)
outputs__ = Int[0]
__cret = ccall((:atg_pairwise_distance, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cdouble, Cdouble, Cint),
outputs__, x1.pointer, x2.pointer, p, eps, keepdim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
pdist(self::Tensor, p::Float64)
Wrapper of C++ function void atg\\_pdist(tensor *out\\_\\_, tensor self, double p)
"""
function pdist(self::Tensor, p::Float64)
outputs__ = Int[0]
__cret = ccall((:atg_pdist, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cdouble),
outputs__, self.pointer, p)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
permute(self::Tensor, dims_data::Array{Int64})
Wrapper of C++ function void atg\\_permute(tensor *out\\_\\_, tensor self, int64\\_t *dims\\_data, int dims\\_len)
"""
function permute(self::Tensor, dims_data::Array{Int64})
outputs__ = Int[0]
dims_len = length(dims_data)
__cret = ccall((:atg_permute, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, dims_data, dims_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
pin_memory(self::Tensor)
Wrapper of C++ function void atg\\_pin\\_memory(tensor *out\\_\\_, tensor self)
"""
function pin_memory(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_pin_memory, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
pinverse(self::Tensor, rcond::Float64)
Wrapper of C++ function void atg\\_pinverse(tensor *out\\_\\_, tensor self, double rcond)
"""
function pinverse(self::Tensor, rcond::Float64)
outputs__ = Int[0]
__cret = ccall((:atg_pinverse, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cdouble),
outputs__, self.pointer, rcond)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
pixel_shuffle(self::Tensor, upscale_factor::Int64)
Wrapper of C++ function void atg\\_pixel\\_shuffle(tensor *out\\_\\_, tensor self, int64\\_t upscale\\_factor)
"""
function pixel_shuffle(self::Tensor, upscale_factor::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_pixel_shuffle, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, self.pointer, upscale_factor)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
poisson(self::Tensor)
Wrapper of C++ function void atg\\_poisson(tensor *out\\_\\_, tensor self)
"""
function poisson(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_poisson, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
poisson_nll_loss(input::Tensor, target::Tensor, log_input::Int, full::Int, eps::Float64, reduction::Int64)
Wrapper of C++ function void atg\\_poisson\\_nll\\_loss(tensor *out\\_\\_, tensor input, tensor target, int log\\_input, int full, double eps, int64\\_t reduction)
"""
function poisson_nll_loss(input::Tensor, target::Tensor, log_input::Int, full::Int, eps::Float64, reduction::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_poisson_nll_loss, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint, Cdouble, Clonglong),
outputs__, input.pointer, target.pointer, log_input, full, eps, reduction)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
polygamma(n::Int64, self::Tensor)
Wrapper of C++ function void atg\\_polygamma(tensor *out\\_\\_, int64\\_t n, tensor self)
"""
function polygamma(n::Int64, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_polygamma, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Clonglong, Ptr{Cvoid}),
outputs__, n, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
polygamma!(self::Tensor, n::Int64)
Wrapper of C++ function void atg\\_polygamma\\_(tensor *out\\_\\_, tensor self, int64\\_t n)
"""
function polygamma!(self::Tensor, n::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_polygamma_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, self.pointer, n)
return self
end
"""
polygamma_out(out::Tensor, n::Int64, self::Tensor)
Wrapper of C++ function void atg\\_polygamma\\_out(tensor *out\\_\\_, tensor out, int64\\_t n, tensor self)
"""
function polygamma_out(out::Tensor, n::Int64, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_polygamma_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Ptr{Cvoid}),
outputs__, out.pointer, n, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
pow(self::Tensor, exponent::TorchNumber)
Wrapper of C++ function void atg\\_pow(tensor *out\\_\\_, tensor self, scalar exponent)
"""
function pow(self::Tensor, exponent::TorchNumber)
outputs__ = Int[0]
exponent_s_ = Scalar(exponent)
__cret = ccall((:atg_pow, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, exponent_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
pow1(self::Tensor, exponent::Tensor)
Wrapper of C++ function void atg\\_pow1(tensor *out\\_\\_, tensor self, tensor exponent)
"""
function pow1(self::Tensor, exponent::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_pow1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, exponent.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
pow2(self::TorchNumber, exponent::Tensor)
Wrapper of C++ function void atg\\_pow2(tensor *out\\_\\_, scalar self, tensor exponent)
"""
function pow2(self::TorchNumber, exponent::Tensor)
outputs__ = Int[0]
self_s_ = Scalar(self)
__cret = ccall((:atg_pow2, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self_s_.pointer, exponent.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
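# Usage note: overloads that differ only in argument types are disambiguated with a
# numeric suffix rather than Julia dispatch — `pow` is tensor ^ scalar, `pow1` is
# tensor ^ tensor, `pow2` is scalar ^ tensor. Sketch, assuming `a` and `b` are Tensors:
#
#     pow(a, 2)     # element-wise a^2
#     pow1(a, b)    # element-wise a^b
#     pow2(2, b)    # element-wise 2^b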
"""
pow!(self::Tensor, exponent::TorchNumber)
Wrapper of C++ function void atg\\_pow\\_(tensor *out\\_\\_, tensor self, scalar exponent)
"""
function pow!(self::Tensor, exponent::TorchNumber)
outputs__ = Int[0]
exponent_s_ = Scalar(exponent)
__cret = ccall((:atg_pow_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, exponent_s_.pointer)
return self
end
"""
pow1!(self::Tensor, exponent::Tensor)
Wrapper of C++ function void atg\\_pow\\_1(tensor *out\\_\\_, tensor self, tensor exponent)
"""
function pow1!(self::Tensor, exponent::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_pow_1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, exponent.pointer)
return self
end
"""
pow_out(out::Tensor, self::Tensor, exponent::TorchNumber)
Wrapper of C++ function void atg\\_pow\\_out(tensor *out\\_\\_, tensor out, tensor self, scalar exponent)
"""
function pow_out(out::Tensor, self::Tensor, exponent::TorchNumber)
outputs__ = Int[0]
exponent_s_ = Scalar(exponent)
__cret = ccall((:atg_pow_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, exponent_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
pow_out1(out::Tensor, self::Tensor, exponent::Tensor)
Wrapper of C++ function void atg\\_pow\\_out1(tensor *out\\_\\_, tensor out, tensor self, tensor exponent)
"""
function pow_out1(out::Tensor, self::Tensor, exponent::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_pow_out1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, exponent.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
pow_out2(out::Tensor, self::TorchNumber, exponent::Tensor)
Wrapper of C++ function void atg\\_pow\\_out2(tensor *out\\_\\_, tensor out, scalar self, tensor exponent)
"""
function pow_out2(out::Tensor, self::TorchNumber, exponent::Tensor)
outputs__ = Int[0]
self_s_ = Scalar(self)
__cret = ccall((:atg_pow_out2, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self_s_.pointer, exponent.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
prelu(self::Tensor, weight::Tensor)
Wrapper of C++ function void atg\\_prelu(tensor *out\\_\\_, tensor self, tensor weight)
"""
function prelu(self::Tensor, weight::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_prelu, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, weight.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
prelu_backward(grad_output::Tensor, self::Tensor, weight::Tensor)
Wrapper of C++ function void atg\\_prelu\\_backward(tensor *out\\_\\_, tensor grad\\_output, tensor self, tensor weight)
"""
function prelu_backward(grad_output::Tensor, self::Tensor, weight::Tensor)
outputs__ = Int[0, 0]
__cret = ccall((:atg_prelu_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, grad_output.pointer, self.pointer, weight.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
import Base.prod
"""
prod(self::Tensor, dtype::Int)
Wrapper of C++ function void atg\\_prod(tensor *out\\_\\_, tensor self, int dtype)
"""
function prod(self::Tensor, dtype::Int)
outputs__ = Int[0]
__cret = ccall((:atg_prod, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, dtype)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
prod1(self::Tensor, dim::Int64, keepdim::Int, dtype::Int)
Wrapper of C++ function void atg\\_prod1(tensor *out\\_\\_, tensor self, int64\\_t dim, int keepdim, int dtype)
"""
function prod1(self::Tensor, dim::Int64, keepdim::Int, dtype::Int)
outputs__ = Int[0]
__cret = ccall((:atg_prod1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Cint, Cint),
outputs__, self.pointer, dim, keepdim, dtype)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
prod_out(out::Tensor, self::Tensor, dim::Int64, keepdim::Int, dtype::Int)
Wrapper of C++ function void atg\\_prod\\_out(tensor *out\\_\\_, tensor out, tensor self, int64\\_t dim, int keepdim, int dtype)
"""
function prod_out(out::Tensor, self::Tensor, dim::Int64, keepdim::Int, dtype::Int)
outputs__ = Int[0]
__cret = ccall((:atg_prod_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Cint, Cint),
outputs__, out.pointer, self.pointer, dim, keepdim, dtype)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
import Base.put!
"""
put!(self::Tensor, index::Tensor, source::Tensor, accumulate::Int)
Wrapper of C++ function void atg\\_put\\_(tensor *out\\_\\_, tensor self, tensor index, tensor source, int accumulate)
"""
function put!(self::Tensor, index::Tensor, source::Tensor, accumulate::Int)
outputs__ = Int[0]
__cret = ccall((:atg_put_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, index.pointer, source.pointer, accumulate)
return self
end
"""
q_per_channel_scales(self::Tensor)
Wrapper of C++ function void atg\\_q\\_per\\_channel\\_scales(tensor *out\\_\\_, tensor self)
"""
function q_per_channel_scales(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_q_per_channel_scales, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
q_per_channel_zero_points(self::Tensor)
Wrapper of C++ function void atg\\_q\\_per\\_channel\\_zero\\_points(tensor *out\\_\\_, tensor self)
"""
function q_per_channel_zero_points(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_q_per_channel_zero_points, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
qr(self::Tensor, some::Int)
Wrapper of C++ function void atg\\_qr(tensor *out\\_\\_, tensor self, int some)
"""
function qr(self::Tensor, some::Int)
outputs__ = Int[0, 0]
__cret = ccall((:atg_qr, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, some)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
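# Usage note: wrappers with several outputs size `outputs__` accordingly and return a
# tuple of Tensors. Sketch for QR, assuming `a` is a 2-D Tensor built elsewhere:
#
#     q, r = qr(a, 1)   # `some = 1` requests the reduced factorization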
"""
qr_out(Q::Tensor, R::Tensor, self::Tensor, some::Int)
Wrapper of C++ function void atg\\_qr\\_out(tensor *out\\_\\_, tensor Q, tensor R, tensor self, int some)
"""
function qr_out(Q::Tensor, R::Tensor, self::Tensor, some::Int)
outputs__ = Int[0, 0]
__cret = ccall((:atg_qr_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, Q.pointer, R.pointer, self.pointer, some)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
quantize_per_channel(self::Tensor, scales::Tensor, zero_points::Tensor, axis::Int64, dtype::Int)
Wrapper of C++ function void atg\\_quantize\\_per\\_channel(tensor *out\\_\\_, tensor self, tensor scales, tensor zero\\_points, int64\\_t axis, int dtype)
"""
function quantize_per_channel(self::Tensor, scales::Tensor, zero_points::Tensor, axis::Int64, dtype::Int)
outputs__ = Int[0]
__cret = ccall((:atg_quantize_per_channel, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Cint),
outputs__, self.pointer, scales.pointer, zero_points.pointer, axis, dtype)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
quantize_per_tensor(self::Tensor, scale::Float64, zero_point::Int64, dtype::Int)
Wrapper of C++ function void atg\\_quantize\\_per\\_tensor(tensor *out\\_\\_, tensor self, double scale, int64\\_t zero\\_point, int dtype)
"""
function quantize_per_tensor(self::Tensor, scale::Float64, zero_point::Int64, dtype::Int)
outputs__ = Int[0]
__cret = ccall((:atg_quantize_per_tensor, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cdouble, Clonglong, Cint),
outputs__, self.pointer, scale, zero_point, dtype)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
quantized_gru(input::Tensor, hx::Tensor, params_data::Array{Tensor{T,N}}, has_biases::Int, num_layers::Int64, dropout::Float64, train::Int, bidirectional::Int, batch_first::Int)
Wrapper of C++ function void atg\\_quantized\\_gru(tensor *out\\_\\_, tensor input, tensor hx, tensor *params\\_data, int params\\_len, int has\\_biases, int64\\_t num\\_layers, double dropout, int train, int bidirectional, int batch\\_first)
"""
function quantized_gru(input::Tensor, hx::Tensor, params_data::Array{Tensor{T,N}}, has_biases::Int, num_layers::Int64, dropout::Float64, train::Int, bidirectional::Int, batch_first::Int) where {T,N}
outputs__ = Int[0, 0]
params_data_ta_ = map(x->x.pointer, params_data)
params_len = length(params_data)
__cret = ccall((:atg_quantized_gru, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint, Clonglong, Cdouble, Cint, Cint, Cint),
outputs__, input.pointer, hx.pointer, params_data_ta_, params_len, has_biases, num_layers, dropout, train, bidirectional, batch_first)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
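# Usage note: parameters declared as `Array{Tensor{T,N}}` (e.g. the flat weight list of
# the quantized RNN wrappers) are converted to a C pointer array with
# `map(x -> x.pointer, ...)` plus an explicit length, so callers pass a plain Julia
# vector of Tensors whose order must match what libtorch expects for the RNN weights.
# Note that the `where {T,N}` constraint requires every tensor in the list to share the
# same element type and rank, which may need care when mixing 2-D weights with 1-D
# biases; this is a property of the generated signature, not of libtorch itself.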
"""
quantized_gru1(data::Tensor, batch_sizes::Tensor, hx::Tensor, params_data::Array{Tensor{T,N}}, has_biases::Int, num_layers::Int64, dropout::Float64, train::Int, bidirectional::Int)
Wrapper of C++ function void atg\\_quantized\\_gru1(tensor *out\\_\\_, tensor data, tensor batch\\_sizes, tensor hx, tensor *params\\_data, int params\\_len, int has\\_biases, int64\\_t num\\_layers, double dropout, int train, int bidirectional)
"""
function quantized_gru1(data::Tensor, batch_sizes::Tensor, hx::Tensor, params_data::Array{Tensor{T,N}}, has_biases::Int, num_layers::Int64, dropout::Float64, train::Int, bidirectional::Int) where {T,N}
outputs__ = Int[0, 0]
params_data_ta_ = map(x->x.pointer, params_data)
params_len = length(params_data)
__cret = ccall((:atg_quantized_gru1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint, Clonglong, Cdouble, Cint, Cint),
outputs__, data.pointer, batch_sizes.pointer, hx.pointer, params_data_ta_, params_len, has_biases, num_layers, dropout, train, bidirectional)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
quantized_gru_cell(input::Tensor, hx::Tensor, w_ih::Tensor, w_hh::Tensor, b_ih::Tensor, b_hh::Tensor, packed_ih::Tensor, packed_hh::Tensor, col_offsets_ih::Tensor, col_offsets_hh::Tensor, scale_ih::TorchNumber, scale_hh::TorchNumber, zero_point_ih::TorchNumber, zero_point_hh::TorchNumber)
Wrapper of C++ function void atg\\_quantized\\_gru\\_cell(tensor *out\\_\\_, tensor input, tensor hx, tensor w\\_ih, tensor w\\_hh, tensor b\\_ih, tensor b\\_hh, tensor packed\\_ih, tensor packed\\_hh, tensor col\\_offsets\\_ih, tensor col\\_offsets\\_hh, scalar scale\\_ih, scalar scale\\_hh, scalar zero\\_point\\_ih, scalar zero\\_point\\_hh)
"""
function quantized_gru_cell(input::Tensor, hx::Tensor, w_ih::Tensor, w_hh::Tensor, b_ih::Tensor, b_hh::Tensor, packed_ih::Tensor, packed_hh::Tensor, col_offsets_ih::Tensor, col_offsets_hh::Tensor, scale_ih::TorchNumber, scale_hh::TorchNumber, zero_point_ih::TorchNumber, zero_point_hh::TorchNumber)
outputs__ = Int[0]
scale_ih_s_ = Scalar(scale_ih)
scale_hh_s_ = Scalar(scale_hh)
zero_point_ih_s_ = Scalar(zero_point_ih)
zero_point_hh_s_ = Scalar(zero_point_hh)
__cret = ccall((:atg_quantized_gru_cell, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, input.pointer, hx.pointer, w_ih.pointer, w_hh.pointer, b_ih.pointer, b_hh.pointer, packed_ih.pointer, packed_hh.pointer, col_offsets_ih.pointer, col_offsets_hh.pointer, scale_ih_s_.pointer, scale_hh_s_.pointer, zero_point_ih_s_.pointer, zero_point_hh_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
quantized_lstm(input::Tensor, hx_data::Array{Tensor{T,N}}, params_data::Array{Tensor{T,N}}, has_biases::Int, num_layers::Int64, dropout::Float64, train::Int, bidirectional::Int, batch_first::Int, dtype::Int, use_dynamic::Int)
Wrapper of C++ function void atg\\_quantized\\_lstm(tensor *out\\_\\_, tensor input, tensor *hx\\_data, int hx\\_len, tensor *params\\_data, int params\\_len, int has\\_biases, int64\\_t num\\_layers, double dropout, int train, int bidirectional, int batch\\_first, int dtype, int use\\_dynamic)
"""
function quantized_lstm(input::Tensor, hx_data::Array{Tensor{T,N}}, params_data::Array{Tensor{T,N}}, has_biases::Int, num_layers::Int64, dropout::Float64, train::Int, bidirectional::Int, batch_first::Int, dtype::Int, use_dynamic::Int) where {T,N}
outputs__ = Int[0, 0, 0]
hx_data_ta_ = map(x->x.pointer, hx_data)
hx_len = length(hx_data)
params_data_ta_ = map(x->x.pointer, params_data)
params_len = length(params_data)
__cret = ccall((:atg_quantized_lstm, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Cint, Clonglong, Cdouble, Cint, Cint, Cint, Cint, Cint),
outputs__, input.pointer, hx_data_ta_, hx_len, params_data_ta_, params_len, has_biases, num_layers, dropout, train, bidirectional, batch_first, dtype, use_dynamic)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
__o_3 = tensor_from_ptr(Ptr{Cvoid}(outputs__[3]))
return __o_1, __o_2, __o_3
end
"""
quantized_lstm1(data::Tensor, batch_sizes::Tensor, hx_data::Array{Tensor{T,N}}, params_data::Array{Tensor{T,N}}, has_biases::Int, num_layers::Int64, dropout::Float64, train::Int, bidirectional::Int, dtype::Int, use_dynamic::Int)
Wrapper of C++ function void atg\\_quantized\\_lstm1(tensor *out\\_\\_, tensor data, tensor batch\\_sizes, tensor *hx\\_data, int hx\\_len, tensor *params\\_data, int params\\_len, int has\\_biases, int64\\_t num\\_layers, double dropout, int train, int bidirectional, int dtype, int use\\_dynamic)
"""
function quantized_lstm1(data::Tensor, batch_sizes::Tensor, hx_data::Array{Tensor{T,N}}, params_data::Array{Tensor{T,N}}, has_biases::Int, num_layers::Int64, dropout::Float64, train::Int, bidirectional::Int, dtype::Int, use_dynamic::Int) where {T,N}
outputs__ = Int[0, 0, 0]
hx_data_ta_ = map(x->x.pointer, hx_data)
hx_len = length(hx_data)
params_data_ta_ = map(x->x.pointer, params_data)
params_len = length(params_data)
__cret = ccall((:atg_quantized_lstm1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Cint, Clonglong, Cdouble, Cint, Cint, Cint, Cint),
outputs__, data.pointer, batch_sizes.pointer, hx_data_ta_, hx_len, params_data_ta_, params_len, has_biases, num_layers, dropout, train, bidirectional, dtype, use_dynamic)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
__o_3 = tensor_from_ptr(Ptr{Cvoid}(outputs__[3]))
return __o_1, __o_2, __o_3
end
"""
quantized_lstm_cell(input::Tensor, hx_data::Array{Tensor{T,N}}, w_ih::Tensor, w_hh::Tensor, b_ih::Tensor, b_hh::Tensor, packed_ih::Tensor, packed_hh::Tensor, col_offsets_ih::Tensor, col_offsets_hh::Tensor, scale_ih::TorchNumber, scale_hh::TorchNumber, zero_point_ih::TorchNumber, zero_point_hh::TorchNumber)
Wrapper of C++ function void atg\\_quantized\\_lstm\\_cell(tensor *out\\_\\_, tensor input, tensor *hx\\_data, int hx\\_len, tensor w\\_ih, tensor w\\_hh, tensor b\\_ih, tensor b\\_hh, tensor packed\\_ih, tensor packed\\_hh, tensor col\\_offsets\\_ih, tensor col\\_offsets\\_hh, scalar scale\\_ih, scalar scale\\_hh, scalar zero\\_point\\_ih, scalar zero\\_point\\_hh)
"""
function quantized_lstm_cell(input::Tensor, hx_data::Array{Tensor{T,N}}, w_ih::Tensor, w_hh::Tensor, b_ih::Tensor, b_hh::Tensor, packed_ih::Tensor, packed_hh::Tensor, col_offsets_ih::Tensor, col_offsets_hh::Tensor, scale_ih::TorchNumber, scale_hh::TorchNumber, zero_point_ih::TorchNumber, zero_point_hh::TorchNumber) where {T,N}
outputs__ = Int[0, 0]
hx_data_ta_ = map(x->x.pointer, hx_data)
hx_len = length(hx_data)
scale_ih_s_ = Scalar(scale_ih)
scale_hh_s_ = Scalar(scale_hh)
zero_point_ih_s_ = Scalar(zero_point_ih)
zero_point_hh_s_ = Scalar(zero_point_hh)
__cret = ccall((:atg_quantized_lstm_cell, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, input.pointer, hx_data_ta_, hx_len, w_ih.pointer, w_hh.pointer, b_ih.pointer, b_hh.pointer, packed_ih.pointer, packed_hh.pointer, col_offsets_ih.pointer, col_offsets_hh.pointer, scale_ih_s_.pointer, scale_hh_s_.pointer, zero_point_ih_s_.pointer, zero_point_hh_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
quantized_max_pool2d(self::Tensor, kernel_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64}, dilation_data::Array{Int64}, ceil_mode::Int)
Wrapper of C++ function void atg\\_quantized\\_max\\_pool2d(tensor *out\\_\\_, tensor self, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *dilation\\_data, int dilation\\_len, int ceil\\_mode)
"""
function quantized_max_pool2d(self::Tensor, kernel_size_data::Array{Int64}, stride_data::Array{Int64}, padding_data::Array{Int64}, dilation_data::Array{Int64}, ceil_mode::Int)
outputs__ = Int[0]
kernel_size_len = length(kernel_size_data)
stride_len = length(stride_data)
padding_len = length(padding_data)
dilation_len = length(dilation_data)
__cret = ccall((:atg_quantized_max_pool2d, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Cint),
outputs__, self.pointer, kernel_size_data, kernel_size_len, stride_data, stride_len, padding_data, padding_len, dilation_data, dilation_len, ceil_mode)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
quantized_rnn_relu_cell(input::Tensor, hx::Tensor, w_ih::Tensor, w_hh::Tensor, b_ih::Tensor, b_hh::Tensor, packed_ih::Tensor, packed_hh::Tensor, col_offsets_ih::Tensor, col_offsets_hh::Tensor, scale_ih::TorchNumber, scale_hh::TorchNumber, zero_point_ih::TorchNumber, zero_point_hh::TorchNumber)
Wrapper of C++ function void atg\\_quantized\\_rnn\\_relu\\_cell(tensor *out\\_\\_, tensor input, tensor hx, tensor w\\_ih, tensor w\\_hh, tensor b\\_ih, tensor b\\_hh, tensor packed\\_ih, tensor packed\\_hh, tensor col\\_offsets\\_ih, tensor col\\_offsets\\_hh, scalar scale\\_ih, scalar scale\\_hh, scalar zero\\_point\\_ih, scalar zero\\_point\\_hh)
"""
function quantized_rnn_relu_cell(input::Tensor, hx::Tensor, w_ih::Tensor, w_hh::Tensor, b_ih::Tensor, b_hh::Tensor, packed_ih::Tensor, packed_hh::Tensor, col_offsets_ih::Tensor, col_offsets_hh::Tensor, scale_ih::TorchNumber, scale_hh::TorchNumber, zero_point_ih::TorchNumber, zero_point_hh::TorchNumber)
outputs__ = Int[0]
scale_ih_s_ = Scalar(scale_ih)
scale_hh_s_ = Scalar(scale_hh)
zero_point_ih_s_ = Scalar(zero_point_ih)
zero_point_hh_s_ = Scalar(zero_point_hh)
__cret = ccall((:atg_quantized_rnn_relu_cell, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, input.pointer, hx.pointer, w_ih.pointer, w_hh.pointer, b_ih.pointer, b_hh.pointer, packed_ih.pointer, packed_hh.pointer, col_offsets_ih.pointer, col_offsets_hh.pointer, scale_ih_s_.pointer, scale_hh_s_.pointer, zero_point_ih_s_.pointer, zero_point_hh_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
quantized_rnn_tanh_cell(input::Tensor, hx::Tensor, w_ih::Tensor, w_hh::Tensor, b_ih::Tensor, b_hh::Tensor, packed_ih::Tensor, packed_hh::Tensor, col_offsets_ih::Tensor, col_offsets_hh::Tensor, scale_ih::TorchNumber, scale_hh::TorchNumber, zero_point_ih::TorchNumber, zero_point_hh::TorchNumber)
Wrapper of C++ function void atg\\_quantized\\_rnn\\_tanh\\_cell(tensor *out\\_\\_, tensor input, tensor hx, tensor w\\_ih, tensor w\\_hh, tensor b\\_ih, tensor b\\_hh, tensor packed\\_ih, tensor packed\\_hh, tensor col\\_offsets\\_ih, tensor col\\_offsets\\_hh, scalar scale\\_ih, scalar scale\\_hh, scalar zero\\_point\\_ih, scalar zero\\_point\\_hh)
"""
function quantized_rnn_tanh_cell(input::Tensor, hx::Tensor, w_ih::Tensor, w_hh::Tensor, b_ih::Tensor, b_hh::Tensor, packed_ih::Tensor, packed_hh::Tensor, col_offsets_ih::Tensor, col_offsets_hh::Tensor, scale_ih::TorchNumber, scale_hh::TorchNumber, zero_point_ih::TorchNumber, zero_point_hh::TorchNumber)
outputs__ = Int[0]
scale_ih_s_ = Scalar(scale_ih)
scale_hh_s_ = Scalar(scale_hh)
zero_point_ih_s_ = Scalar(zero_point_ih)
zero_point_hh_s_ = Scalar(zero_point_hh)
__cret = ccall((:atg_quantized_rnn_tanh_cell, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, input.pointer, hx.pointer, w_ih.pointer, w_hh.pointer, b_ih.pointer, b_hh.pointer, packed_ih.pointer, packed_hh.pointer, col_offsets_ih.pointer, col_offsets_hh.pointer, scale_ih_s_.pointer, scale_hh_s_.pointer, zero_point_ih_s_.pointer, zero_point_hh_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
import Base.rand
"""
rand(size_data::Array{Int64}, options_kind::Int, options_device::Int)
Wrapper of C++ function void atg\\_rand(tensor *out\\_\\_, int64\\_t *size\\_data, int size\\_len, int options\\_kind, int options\\_device)
"""
function rand(size_data::Array{Int64}, options_kind::Int, options_device::Int)
outputs__ = Int[0]
size_len = length(size_data)
__cret = ccall((:atg_rand, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint, Cint),
outputs__, size_data, size_len, options_kind, options_device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
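# Usage sketch for the random factories, reusing the assumed dtype/device codes from the
# note after `ones` above (assumptions, not definitions from this file; `4` is the
# assumed ATen code for Long):
#
#     u = rand(Int64[4, 4], FLOAT_KIND, CPU_DEVICE)    # uniform samples in [0, 1)
#     g = randn(Int64[4, 4], FLOAT_KIND, CPU_DEVICE)   # standard normal samples
#     k = randint(10, Int64[4], 4, CPU_DEVICE)         # integers in [0, 10)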
"""
rand_like(self::Tensor)
Wrapper of C++ function void atg\\_rand\\_like(tensor *out\\_\\_, tensor self)
"""
function rand_like(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_rand_like, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
rand_like1(self::Tensor, options_kind::Int, options_device::Int)
Wrapper of C++ function void atg\\_rand\\_like1(tensor *out\\_\\_, tensor self, int options\\_kind, int options\\_device)
"""
function rand_like1(self::Tensor, options_kind::Int, options_device::Int)
outputs__ = Int[0]
__cret = ccall((:atg_rand_like1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint),
outputs__, self.pointer, options_kind, options_device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
rand_out(out::Tensor, size_data::Array{Int64})
Wrapper of C++ function void atg\\_rand\\_out(tensor *out\\_\\_, tensor out, int64\\_t *size\\_data, int size\\_len)
"""
function rand_out(out::Tensor, size_data::Array{Int64})
outputs__ = Int[0]
size_len = length(size_data)
__cret = ccall((:atg_rand_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, out.pointer, size_data, size_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
randint(high::Int64, size_data::Array{Int64}, options_kind::Int, options_device::Int)
Wrapper of C++ function void atg\\_randint(tensor *out\\_\\_, int64\\_t high, int64\\_t *size\\_data, int size\\_len, int options\\_kind, int options\\_device)
"""
function randint(high::Int64, size_data::Array{Int64}, options_kind::Int, options_device::Int)
outputs__ = Int[0]
size_len = length(size_data)
__cret = ccall((:atg_randint, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Clonglong, Ptr{Cvoid}, Cint, Cint, Cint),
outputs__, high, size_data, size_len, options_kind, options_device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
randint1(low::Int64, high::Int64, size_data::Array{Int64}, options_kind::Int, options_device::Int)
Wrapper of C++ function void atg\\_randint1(tensor *out\\_\\_, int64\\_t low, int64\\_t high, int64\\_t *size\\_data, int size\\_len, int options\\_kind, int options\\_device)
"""
function randint1(low::Int64, high::Int64, size_data::Array{Int64}, options_kind::Int, options_device::Int)
outputs__ = Int[0]
size_len = length(size_data)
__cret = ccall((:atg_randint1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Clonglong, Clonglong, Ptr{Cvoid}, Cint, Cint, Cint),
outputs__, low, high, size_data, size_len, options_kind, options_device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
randint_like(self::Tensor, high::Int64)
Wrapper of C++ function void atg\\_randint\\_like(tensor *out\\_\\_, tensor self, int64\\_t high)
"""
function randint_like(self::Tensor, high::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_randint_like, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, self.pointer, high)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
randint_like1(self::Tensor, low::Int64, high::Int64)
Wrapper of C++ function void atg\\_randint\\_like1(tensor *out\\_\\_, tensor self, int64\\_t low, int64\\_t high)
"""
function randint_like1(self::Tensor, low::Int64, high::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_randint_like1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Clonglong),
outputs__, self.pointer, low, high)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
randint_like2(self::Tensor, high::Int64, options_kind::Int, options_device::Int)
Wrapper of C++ function void atg\\_randint\\_like2(tensor *out\\_\\_, tensor self, int64\\_t high, int options\\_kind, int options\\_device)
"""
function randint_like2(self::Tensor, high::Int64, options_kind::Int, options_device::Int)
outputs__ = Int[0]
__cret = ccall((:atg_randint_like2, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Cint, Cint),
outputs__, self.pointer, high, options_kind, options_device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
randint_like3(self::Tensor, low::Int64, high::Int64, options_kind::Int, options_device::Int)
Wrapper of C++ function void atg\\_randint\\_like3(tensor *out\\_\\_, tensor self, int64\\_t low, int64\\_t high, int options\\_kind, int options\\_device)
"""
function randint_like3(self::Tensor, low::Int64, high::Int64, options_kind::Int, options_device::Int)
outputs__ = Int[0]
__cret = ccall((:atg_randint_like3, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Clonglong, Cint, Cint),
outputs__, self.pointer, low, high, options_kind, options_device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
randint_out(out::Tensor, high::Int64, size_data::Array{Int64})
Wrapper of C++ function void atg\\_randint\\_out(tensor *out\\_\\_, tensor out, int64\\_t high, int64\\_t *size\\_data, int size\\_len)
"""
function randint_out(out::Tensor, high::Int64, size_data::Array{Int64})
outputs__ = Int[0]
size_len = length(size_data)
__cret = ccall((:atg_randint_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Ptr{Cvoid}, Cint),
outputs__, out.pointer, high, size_data, size_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
randint_out1(out::Tensor, low::Int64, high::Int64, size_data::Array{Int64})
Wrapper of C++ function void atg\\_randint\\_out1(tensor *out\\_\\_, tensor out, int64\\_t low, int64\\_t high, int64\\_t *size\\_data, int size\\_len)
"""
function randint_out1(out::Tensor, low::Int64, high::Int64, size_data::Array{Int64})
outputs__ = Int[0]
size_len = length(size_data)
__cret = ccall((:atg_randint_out1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Clonglong, Ptr{Cvoid}, Cint),
outputs__, out.pointer, low, high, size_data, size_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
import Base.randn
"""
randn(size_data::Array{Int64}, options_kind::Int, options_device::Int)
Wrapper of C++ function void atg\\_randn(tensor *out\\_\\_, int64\\_t *size\\_data, int size\\_len, int options\\_kind, int options\\_device)
"""
function randn(size_data::Array{Int64}, options_kind::Int, options_device::Int)
outputs__ = Int[0]
size_len = length(size_data)
__cret = ccall((:atg_randn, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint, Cint),
outputs__, size_data, size_len, options_kind, options_device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
randn_like(self::Tensor)
Wrapper of C++ function void atg\\_randn\\_like(tensor *out\\_\\_, tensor self)
"""
function randn_like(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_randn_like, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
randn_like1(self::Tensor, options_kind::Int, options_device::Int)
Wrapper of C++ function void atg\\_randn\\_like1(tensor *out\\_\\_, tensor self, int options\\_kind, int options\\_device)
"""
function randn_like1(self::Tensor, options_kind::Int, options_device::Int)
outputs__ = Int[0]
__cret = ccall((:atg_randn_like1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint),
outputs__, self.pointer, options_kind, options_device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
randn_out(out::Tensor, size_data::Array{Int64})
Wrapper of C++ function void atg\\_randn\\_out(tensor *out\\_\\_, tensor out, int64\\_t *size\\_data, int size\\_len)
"""
function randn_out(out::Tensor, size_data::Array{Int64})
outputs__ = Int[0]
size_len = length(size_data)
__cret = ccall((:atg_randn_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, out.pointer, size_data, size_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
random!(self::Tensor)
Wrapper of C++ function void atg\\_random\\_(tensor *out\\_\\_, tensor self)
"""
function random!(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_random_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
return self
end
"""
random1!(self::Tensor, to::Int64)
Wrapper of C++ function void atg\\_random\\_1(tensor *out\\_\\_, tensor self, int64\\_t to)
"""
function random1!(self::Tensor, to::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_random_1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, self.pointer, to)
return self
end
"""
random2!(self::Tensor, from::Int64, to::Int64)
Wrapper of C++ function void atg\\_random\\_2(tensor *out\\_\\_, tensor self, int64\\_t from, int64\\_t to)
"""
function random2!(self::Tensor, from::Int64, to::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_random_2, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Clonglong),
outputs__, self.pointer, from, to)
return self
end
"""
randperm(n::Int64, options_kind::Int, options_device::Int)
Wrapper of C++ function void atg\\_randperm(tensor *out\\_\\_, int64\\_t n, int options\\_kind, int options\\_device)
"""
function randperm(n::Int64, options_kind::Int, options_device::Int)
outputs__ = Int[0]
__cret = ccall((:atg_randperm, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Clonglong, Cint, Cint),
outputs__, n, options_kind, options_device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
randperm_out(out::Tensor, n::Int64)
Wrapper of C++ function void atg\\_randperm\\_out(tensor *out\\_\\_, tensor out, int64\\_t n)
"""
function randperm_out(out::Tensor, n::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_randperm_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, out.pointer, n)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
import Base.range
"""
range(start::TorchNumber, end_::TorchNumber, options_kind::Int, options_device::Int)
Wrapper of C++ function void atg\\_range(tensor *out\\_\\_, scalar start, scalar end, int options\\_kind, int options\\_device)
"""
function range(start::TorchNumber, end_::TorchNumber, options_kind::Int, options_device::Int)
outputs__ = Int[0]
start_s_ = Scalar(start)
end__s_ = Scalar(end_)
__cret = ccall((:atg_range, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint),
outputs__, start_s_.pointer, end__s_.pointer, options_kind, options_device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
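# Usage note: `end` is a Julia keyword, so the C parameter `end` becomes `end_` in these
# signatures. Sketch, reusing the assumed option codes from the note after `ones`:
#
#     r = range(0, 10, FLOAT_KIND, CPU_DEVICE)   # like torch.range: includes the end point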
"""
range1(start::TorchNumber, end_::TorchNumber, options_kind::Int, options_device::Int)
Wrapper of C++ function void atg\\_range1(tensor *out\\_\\_, scalar start, scalar end, int options\\_kind, int options\\_device)
"""
function range1(start::TorchNumber, end_::TorchNumber, options_kind::Int, options_device::Int)
outputs__ = Int[0]
start_s_ = Scalar(start)
end__s_ = Scalar(end_)
__cret = ccall((:atg_range1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint),
outputs__, start_s_.pointer, end__s_.pointer, options_kind, options_device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
range_out(out::Tensor, start::TorchNumber, end_::TorchNumber)
Wrapper of C++ function void atg\\_range\\_out(tensor *out\\_\\_, tensor out, scalar start, scalar end)
"""
function range_out(out::Tensor, start::TorchNumber, end_::TorchNumber)
outputs__ = Int[0]
start_s_ = Scalar(start)
end__s_ = Scalar(end_)
__cret = ccall((:atg_range_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, start_s_.pointer, end__s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
import Base.real
"""
real(self::Tensor)
Wrapper of C++ function void atg\\_real(tensor *out\\_\\_, tensor self)
"""
function real(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_real, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
real_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_real\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function real_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_real_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
reciprocal(self::Tensor)
Wrapper of C++ function void atg\\_reciprocal(tensor *out\\_\\_, tensor self)
"""
function reciprocal(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_reciprocal, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
reciprocal!(self::Tensor)
Wrapper of C++ function void atg\\_reciprocal\\_(tensor *out\\_\\_, tensor self)
"""
function reciprocal!(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_reciprocal_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
return self
end
"""
reciprocal_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_reciprocal\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function reciprocal_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_reciprocal_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
reflection_pad1d(self::Tensor, padding_data::Array{Int64})
Wrapper of C++ function void atg\\_reflection\\_pad1d(tensor *out\\_\\_, tensor self, int64\\_t *padding\\_data, int padding\\_len)
"""
function reflection_pad1d(self::Tensor, padding_data::Array{Int64})
outputs__ = Int[0]
padding_len = length(padding_data)
__cret = ccall((:atg_reflection_pad1d, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, padding_data, padding_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
reflection_pad1d_backward(grad_output::Tensor, self::Tensor, padding_data::Array{Int64})
Wrapper of C++ function void atg\\_reflection\\_pad1d\\_backward(tensor *out\\_\\_, tensor grad\\_output, tensor self, int64\\_t *padding\\_data, int padding\\_len)
"""
function reflection_pad1d_backward(grad_output::Tensor, self::Tensor, padding_data::Array{Int64})
outputs__ = Int[0]
padding_len = length(padding_data)
__cret = ccall((:atg_reflection_pad1d_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, grad_output.pointer, self.pointer, padding_data, padding_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
reflection_pad1d_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, padding_data::Array{Int64})
Wrapper of C++ function void atg\\_reflection\\_pad1d\\_backward\\_out(tensor *out\\_\\_, tensor grad\\_input, tensor grad\\_output, tensor self, int64\\_t *padding\\_data, int padding\\_len)
"""
function reflection_pad1d_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, padding_data::Array{Int64})
outputs__ = Int[0]
padding_len = length(padding_data)
__cret = ccall((:atg_reflection_pad1d_backward_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, grad_input.pointer, grad_output.pointer, self.pointer, padding_data, padding_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
reflection_pad1d_out(out::Tensor, self::Tensor, padding_data::Array{Int64})
Wrapper of C++ function void atg\\_reflection\\_pad1d\\_out(tensor *out\\_\\_, tensor out, tensor self, int64\\_t *padding\\_data, int padding\\_len)
"""
function reflection_pad1d_out(out::Tensor, self::Tensor, padding_data::Array{Int64})
outputs__ = Int[0]
padding_len = length(padding_data)
__cret = ccall((:atg_reflection_pad1d_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, out.pointer, self.pointer, padding_data, padding_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
reflection_pad2d(self::Tensor, padding_data::Array{Int64})
Wrapper of C++ function void atg\\_reflection\\_pad2d(tensor *out\\_\\_, tensor self, int64\\_t *padding\\_data, int padding\\_len)
"""
function reflection_pad2d(self::Tensor, padding_data::Array{Int64})
outputs__ = Int[0]
padding_len = length(padding_data)
__cret = ccall((:atg_reflection_pad2d, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, padding_data, padding_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
reflection_pad2d_backward(grad_output::Tensor, self::Tensor, padding_data::Array{Int64})
Wrapper of C++ function void atg\\_reflection\\_pad2d\\_backward(tensor *out\\_\\_, tensor grad\\_output, tensor self, int64\\_t *padding\\_data, int padding\\_len)
"""
function reflection_pad2d_backward(grad_output::Tensor, self::Tensor, padding_data::Array{Int64})
outputs__ = Int[0]
padding_len = length(padding_data)
__cret = ccall((:atg_reflection_pad2d_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, grad_output.pointer, self.pointer, padding_data, padding_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
reflection_pad2d_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, padding_data::Array{Int64})
Wrapper of C++ function void atg\\_reflection\\_pad2d\\_backward\\_out(tensor *out\\_\\_, tensor grad\\_input, tensor grad\\_output, tensor self, int64\\_t *padding\\_data, int padding\\_len)
"""
function reflection_pad2d_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, padding_data::Array{Int64})
outputs__ = Int[0]
padding_len = length(padding_data)
__cret = ccall((:atg_reflection_pad2d_backward_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, grad_input.pointer, grad_output.pointer, self.pointer, padding_data, padding_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
reflection_pad2d_out(out::Tensor, self::Tensor, padding_data::Array{Int64})
Wrapper of C++ function void atg\\_reflection\\_pad2d\\_out(tensor *out\\_\\_, tensor out, tensor self, int64\\_t *padding\\_data, int padding\\_len)
"""
function reflection_pad2d_out(out::Tensor, self::Tensor, padding_data::Array{Int64})
outputs__ = Int[0]
padding_len = length(padding_data)
__cret = ccall((:atg_reflection_pad2d_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, out.pointer, self.pointer, padding_data, padding_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
relu(self::Tensor)
Wrapper of C++ function void atg\\_relu(tensor *out\\_\\_, tensor self)
"""
function relu(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_relu, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
relu!(self::Tensor)
Wrapper of C++ function void atg\\_relu\\_(tensor *out\\_\\_, tensor self)
"""
function relu!(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_relu_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
return self
end
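# Usage sketch (hypothetical `Tensor` constructor; `relu` clamps negative entries to zero):
#   x = Tensor(randn(3, 3))
#   y = relu(x)    # out-of-place
#   relu!(x)       # in-place, returns `x`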
"""
remainder(self::Tensor, other::TorchNumber)
Wrapper of C++ function void atg\\_remainder(tensor *out\\_\\_, tensor self, scalar other)
"""
function remainder(self::Tensor, other::TorchNumber)
outputs__ = Int[0]
other_s_ = Scalar(other)
__cret = ccall((:atg_remainder, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
remainder1(self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_remainder1(tensor *out\\_\\_, tensor self, tensor other)
"""
function remainder1(self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_remainder1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
remainder!(self::Tensor, other::TorchNumber)
Wrapper of C++ function void atg\\_remainder\\_(tensor *out\\_\\_, tensor self, scalar other)
"""
function remainder!(self::Tensor, other::TorchNumber)
outputs__ = Int[0]
other_s_ = Scalar(other)
__cret = ccall((:atg_remainder_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other_s_.pointer)
return self
end
"""
remainder1!(self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_remainder\\_1(tensor *out\\_\\_, tensor self, tensor other)
"""
function remainder1!(self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_remainder_1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other.pointer)
return self
end
"""
remainder_out(out::Tensor, self::Tensor, other::TorchNumber)
Wrapper of C++ function void atg\\_remainder\\_out(tensor *out\\_\\_, tensor out, tensor self, scalar other)
"""
function remainder_out(out::Tensor, self::Tensor, other::TorchNumber)
outputs__ = Int[0]
other_s_ = Scalar(other)
__cret = ccall((:atg_remainder_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, other_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
remainder_out1(out::Tensor, self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_remainder\\_out1(tensor *out\\_\\_, tensor out, tensor self, tensor other)
"""
function remainder_out1(out::Tensor, self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_remainder_out1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, other.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
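# Usage sketch for the remainder family (hypothetical `Tensor` constructor; the numbered
# variants distinguish scalar vs. tensor divisors in the underlying C API):
#   x = Tensor([5.0, -5.0, 7.5])
#   remainder(x, 3)                            # elementwise remainder against a scalar
#   remainder1(x, Tensor([2.0, 3.0, 4.0]))     # elementwise remainder against a tensor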
"""
renorm(self::Tensor, p::TorchNumber, dim::Int64, maxnorm::TorchNumber)
Wrapper of C++ function void atg\\_renorm(tensor *out\\_\\_, tensor self, scalar p, int64\\_t dim, scalar maxnorm)
"""
function renorm(self::Tensor, p::TorchNumber, dim::Int64, maxnorm::TorchNumber)
outputs__ = Int[0]
p_s_ = Scalar(p)
maxnorm_s_ = Scalar(maxnorm)
__cret = ccall((:atg_renorm, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Ptr{Cvoid}),
outputs__, self.pointer, p_s_.pointer, dim, maxnorm_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
renorm!(self::Tensor, p::TorchNumber, dim::Int64, maxnorm::TorchNumber)
Wrapper of C++ function void atg\\_renorm\\_(tensor *out\\_\\_, tensor self, scalar p, int64\\_t dim, scalar maxnorm)
"""
function renorm!(self::Tensor, p::TorchNumber, dim::Int64, maxnorm::TorchNumber)
outputs__ = Int[0]
p_s_ = Scalar(p)
maxnorm_s_ = Scalar(maxnorm)
__cret = ccall((:atg_renorm_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Ptr{Cvoid}),
outputs__, self.pointer, p_s_.pointer, dim, maxnorm_s_.pointer)
return self
end
"""
renorm_out(out::Tensor, self::Tensor, p::TorchNumber, dim::Int64, maxnorm::TorchNumber)
Wrapper of C++ function void atg\\_renorm\\_out(tensor *out\\_\\_, tensor out, tensor self, scalar p, int64\\_t dim, scalar maxnorm)
"""
function renorm_out(out::Tensor, self::Tensor, p::TorchNumber, dim::Int64, maxnorm::TorchNumber)
outputs__ = Int[0]
p_s_ = Scalar(p)
maxnorm_s_ = Scalar(maxnorm)
__cret = ccall((:atg_renorm_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, p_s_.pointer, dim, maxnorm_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
import Base.repeat
"""
repeat(self::Tensor, repeats_data::Array{Int64})
Wrapper of C++ function void atg\\_repeat(tensor *out\\_\\_, tensor self, int64\\_t *repeats\\_data, int repeats\\_len)
"""
function repeat(self::Tensor, repeats_data::Array{Int64})
outputs__ = Int[0]
repeats_len = length(repeats_data)
__cret = ccall((:atg_repeat, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, repeats_data, repeats_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
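# Usage sketch (hypothetical `Tensor` constructor): `repeat` tiles the tensor, with
# `repeats_data` giving the tile count per tensor dimension (LibTorch semantics; how
# Julia array dims map onto tensor dims is an assumption here):
#   x = Tensor(rand(2, 3))
#   y = repeat(x, Int64[2, 1])   # two copies stacked along the leading tensor dimension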
"""
repeat_interleave(repeats::Tensor)
Wrapper of C++ function void atg\\_repeat\\_interleave(tensor *out\\_\\_, tensor repeats)
"""
function repeat_interleave(repeats::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_repeat_interleave, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, repeats.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
repeat_interleave1(self::Tensor, repeats::Tensor, dim::Int64)
Wrapper of C++ function void atg\\_repeat\\_interleave1(tensor *out\\_\\_, tensor self, tensor repeats, int64\\_t dim)
"""
function repeat_interleave1(self::Tensor, repeats::Tensor, dim::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_repeat_interleave1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, self.pointer, repeats.pointer, dim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
repeat_interleave2(self::Tensor, repeats::Int64, dim::Int64)
Wrapper of C++ function void atg\\_repeat\\_interleave2(tensor *out\\_\\_, tensor self, int64\\_t repeats, int64\\_t dim)
"""
function repeat_interleave2(self::Tensor, repeats::Int64, dim::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_repeat_interleave2, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Clonglong),
outputs__, self.pointer, repeats, dim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
replication_pad1d(self::Tensor, padding_data::Array{Int64})
Wrapper of C++ function void atg\\_replication\\_pad1d(tensor *out\\_\\_, tensor self, int64\\_t *padding\\_data, int padding\\_len)
"""
function replication_pad1d(self::Tensor, padding_data::Array{Int64})
outputs__ = Int[0]
padding_len = length(padding_data)
__cret = ccall((:atg_replication_pad1d, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, padding_data, padding_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
replication_pad1d_backward(grad_output::Tensor, self::Tensor, padding_data::Array{Int64})
Wrapper of C++ function void atg\\_replication\\_pad1d\\_backward(tensor *out\\_\\_, tensor grad\\_output, tensor self, int64\\_t *padding\\_data, int padding\\_len)
"""
function replication_pad1d_backward(grad_output::Tensor, self::Tensor, padding_data::Array{Int64})
outputs__ = Int[0]
padding_len = length(padding_data)
__cret = ccall((:atg_replication_pad1d_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, grad_output.pointer, self.pointer, padding_data, padding_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
replication_pad1d_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, padding_data::Array{Int64})
Wrapper of C++ function void atg\\_replication\\_pad1d\\_backward\\_out(tensor *out\\_\\_, tensor grad\\_input, tensor grad\\_output, tensor self, int64\\_t *padding\\_data, int padding\\_len)
"""
function replication_pad1d_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, padding_data::Array{Int64})
outputs__ = Int[0]
padding_len = length(padding_data)
__cret = ccall((:atg_replication_pad1d_backward_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, grad_input.pointer, grad_output.pointer, self.pointer, padding_data, padding_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
replication_pad1d_out(out::Tensor, self::Tensor, padding_data::Array{Int64})
Wrapper of C++ function void atg\\_replication\\_pad1d\\_out(tensor *out\\_\\_, tensor out, tensor self, int64\\_t *padding\\_data, int padding\\_len)
"""
function replication_pad1d_out(out::Tensor, self::Tensor, padding_data::Array{Int64})
outputs__ = Int[0]
padding_len = length(padding_data)
__cret = ccall((:atg_replication_pad1d_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, out.pointer, self.pointer, padding_data, padding_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
replication_pad2d(self::Tensor, padding_data::Array{Int64})
Wrapper of C++ function void atg\\_replication\\_pad2d(tensor *out\\_\\_, tensor self, int64\\_t *padding\\_data, int padding\\_len)
"""
function replication_pad2d(self::Tensor, padding_data::Array{Int64})
outputs__ = Int[0]
padding_len = length(padding_data)
__cret = ccall((:atg_replication_pad2d, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, padding_data, padding_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
replication_pad2d_backward(grad_output::Tensor, self::Tensor, padding_data::Array{Int64})
Wrapper of C++ function void atg\\_replication\\_pad2d\\_backward(tensor *out\\_\\_, tensor grad\\_output, tensor self, int64\\_t *padding\\_data, int padding\\_len)
"""
function replication_pad2d_backward(grad_output::Tensor, self::Tensor, padding_data::Array{Int64})
outputs__ = Int[0]
padding_len = length(padding_data)
__cret = ccall((:atg_replication_pad2d_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, grad_output.pointer, self.pointer, padding_data, padding_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
replication_pad2d_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, padding_data::Array{Int64})
Wrapper of C++ function void atg\\_replication\\_pad2d\\_backward\\_out(tensor *out\\_\\_, tensor grad\\_input, tensor grad\\_output, tensor self, int64\\_t *padding\\_data, int padding\\_len)
"""
function replication_pad2d_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, padding_data::Array{Int64})
outputs__ = Int[0]
padding_len = length(padding_data)
__cret = ccall((:atg_replication_pad2d_backward_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, grad_input.pointer, grad_output.pointer, self.pointer, padding_data, padding_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
replication_pad2d_out(out::Tensor, self::Tensor, padding_data::Array{Int64})
Wrapper of C++ function void atg\\_replication\\_pad2d\\_out(tensor *out\\_\\_, tensor out, tensor self, int64\\_t *padding\\_data, int padding\\_len)
"""
function replication_pad2d_out(out::Tensor, self::Tensor, padding_data::Array{Int64})
outputs__ = Int[0]
padding_len = length(padding_data)
__cret = ccall((:atg_replication_pad2d_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, out.pointer, self.pointer, padding_data, padding_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
replication_pad3d(self::Tensor, padding_data::Array{Int64})
Wrapper of C++ function void atg\\_replication\\_pad3d(tensor *out\\_\\_, tensor self, int64\\_t *padding\\_data, int padding\\_len)
"""
function replication_pad3d(self::Tensor, padding_data::Array{Int64})
outputs__ = Int[0]
padding_len = length(padding_data)
__cret = ccall((:atg_replication_pad3d, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, padding_data, padding_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
replication_pad3d_backward(grad_output::Tensor, self::Tensor, padding_data::Array{Int64})
Wrapper of C++ function void atg\\_replication\\_pad3d\\_backward(tensor *out\\_\\_, tensor grad\\_output, tensor self, int64\\_t *padding\\_data, int padding\\_len)
"""
function replication_pad3d_backward(grad_output::Tensor, self::Tensor, padding_data::Array{Int64})
outputs__ = Int[0]
padding_len = length(padding_data)
__cret = ccall((:atg_replication_pad3d_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, grad_output.pointer, self.pointer, padding_data, padding_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
replication_pad3d_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, padding_data::Array{Int64})
Wrapper of C++ function void atg\\_replication\\_pad3d\\_backward\\_out(tensor *out\\_\\_, tensor grad\\_input, tensor grad\\_output, tensor self, int64\\_t *padding\\_data, int padding\\_len)
"""
function replication_pad3d_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, padding_data::Array{Int64})
outputs__ = Int[0]
padding_len = length(padding_data)
__cret = ccall((:atg_replication_pad3d_backward_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, grad_input.pointer, grad_output.pointer, self.pointer, padding_data, padding_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
replication_pad3d_out(out::Tensor, self::Tensor, padding_data::Array{Int64})
Wrapper of C++ function void atg\\_replication\\_pad3d\\_out(tensor *out\\_\\_, tensor out, tensor self, int64\\_t *padding\\_data, int padding\\_len)
"""
function replication_pad3d_out(out::Tensor, self::Tensor, padding_data::Array{Int64})
outputs__ = Int[0]
padding_len = length(padding_data)
__cret = ccall((:atg_replication_pad3d_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, out.pointer, self.pointer, padding_data, padding_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
requires_grad!(self::Tensor, _requires_grad::Int)
Wrapper of C++ function void atg\\_requires\\_grad\\_(tensor *out\\_\\_, tensor self, int \\_requires\\_grad)
"""
function requires_grad!(self::Tensor, _requires_grad::Int)
outputs__ = Int[0]
__cret = ccall((:atg_requires_grad_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, _requires_grad)
return self
end
import Base.reshape
"""
reshape(self::Tensor, shape_data::Array{Int64})
Wrapper of C++ function void atg\\_reshape(tensor *out\\_\\_, tensor self, int64\\_t *shape\\_data, int shape\\_len)
"""
function reshape(self::Tensor, shape_data::Array{Int64})
outputs__ = Int[0]
shape_len = length(shape_data)
__cret = ccall((:atg_reshape, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, shape_data, shape_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
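# Usage sketch (hypothetical `Tensor` constructor): `reshape` returns a tensor with the
# requested shape; per the LibTorch convention a -1 entry lets that dimension be inferred:
#   x = Tensor(rand(2, 6))
#   y = reshape(x, Int64[3, 4])
#   z = reshape(x, Int64[3, -1])   # second dimension inferred from the element count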
"""
reshape_as(self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_reshape\\_as(tensor *out\\_\\_, tensor self, tensor other)
"""
function reshape_as(self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_reshape_as, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
import Base.resize!
"""
resize!(self::Tensor, size_data::Array{Int64})
Wrapper of C++ function void atg\\_resize\\_(tensor *out\\_\\_, tensor self, int64\\_t *size\\_data, int size\\_len)
"""
function resize!(self::Tensor, size_data::Array{Int64})
outputs__ = Int[0]
size_len = length(size_data)
__cret = ccall((:atg_resize_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, size_data, size_len)
return self
end
"""
resize_as!(self::Tensor, the_template::Tensor)
Wrapper of C++ function void atg\\_resize\\_as\\_(tensor *out\\_\\_, tensor self, tensor the\\_template)
"""
function resize_as!(self::Tensor, the_template::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_resize_as_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, the_template.pointer)
return self
end
"""
rfft(self::Tensor, signal_ndim::Int64, normalized::Int, onesided::Int)
Wrapper of C++ function void atg\\_rfft(tensor *out\\_\\_, tensor self, int64\\_t signal\\_ndim, int normalized, int onesided)
"""
function rfft(self::Tensor, signal_ndim::Int64, normalized::Int, onesided::Int)
outputs__ = Int[0]
__cret = ccall((:atg_rfft, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Cint, Cint),
outputs__, self.pointer, signal_ndim, normalized, onesided)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
rnn_relu(input::Tensor, hx::Tensor, params_data::Array{Tensor{T,N}}, has_biases::Int, num_layers::Int64, dropout::Float64, train::Int, bidirectional::Int, batch_first::Int)
Wrapper of C++ function void atg\\_rnn\\_relu(tensor *out\\_\\_, tensor input, tensor hx, tensor *params\\_data, int params\\_len, int has\\_biases, int64\\_t num\\_layers, double dropout, int train, int bidirectional, int batch\\_first)
"""
function rnn_relu(input::Tensor, hx::Tensor, params_data::Array{Tensor{T,N}}, has_biases::Int, num_layers::Int64, dropout::Float64, train::Int, bidirectional::Int, batch_first::Int) where {T,N}
outputs__ = Int[0, 0]
params_data_ta_ = map(x->x.pointer, params_data)
params_len = length(params_data)
__cret = ccall((:atg_rnn_relu, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint, Clonglong, Cdouble, Cint, Cint, Cint),
outputs__, input.pointer, hx.pointer, params_data_ta_, params_len, has_biases, num_layers, dropout, train, bidirectional, batch_first)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
rnn_relu1(data::Tensor, batch_sizes::Tensor, hx::Tensor, params_data::Array{Tensor{T,N}}, has_biases::Int, num_layers::Int64, dropout::Float64, train::Int, bidirectional::Int)
Wrapper of C++ function void atg\\_rnn\\_relu1(tensor *out\\_\\_, tensor data, tensor batch\\_sizes, tensor hx, tensor *params\\_data, int params\\_len, int has\\_biases, int64\\_t num\\_layers, double dropout, int train, int bidirectional)
"""
function rnn_relu1(data::Tensor, batch_sizes::Tensor, hx::Tensor, params_data::Array{Tensor{T,N}}, has_biases::Int, num_layers::Int64, dropout::Float64, train::Int, bidirectional::Int) where {T,N}
outputs__ = Int[0, 0]
params_data_ta_ = map(x->x.pointer, params_data)
params_len = length(params_data)
__cret = ccall((:atg_rnn_relu1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint, Clonglong, Cdouble, Cint, Cint),
outputs__, data.pointer, batch_sizes.pointer, hx.pointer, params_data_ta_, params_len, has_biases, num_layers, dropout, train, bidirectional)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
rnn_relu_cell(input::Tensor, hx::Tensor, w_ih::Tensor, w_hh::Tensor, b_ih::Tensor, b_hh::Tensor)
Wrapper of C++ function void atg\\_rnn\\_relu\\_cell(tensor *out\\_\\_, tensor input, tensor hx, tensor w\\_ih, tensor w\\_hh, tensor b\\_ih, tensor b\\_hh)
"""
function rnn_relu_cell(input::Tensor, hx::Tensor, w_ih::Tensor, w_hh::Tensor, b_ih::Tensor, b_hh::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_rnn_relu_cell, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, input.pointer, hx.pointer, w_ih.pointer, w_hh.pointer, b_ih.pointer, b_hh.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
rnn_tanh(input::Tensor, hx::Tensor, params_data::Array{Tensor{T,N}}, has_biases::Int, num_layers::Int64, dropout::Float64, train::Int, bidirectional::Int, batch_first::Int)
Wrapper of C++ function void atg\\_rnn\\_tanh(tensor *out\\_\\_, tensor input, tensor hx, tensor *params\\_data, int params\\_len, int has\\_biases, int64\\_t num\\_layers, double dropout, int train, int bidirectional, int batch\\_first)
"""
function rnn_tanh(input::Tensor, hx::Tensor, params_data::Array{Tensor{T,N}}, has_biases::Int, num_layers::Int64, dropout::Float64, train::Int, bidirectional::Int, batch_first::Int) where {T,N}
outputs__ = Int[0, 0]
params_data_ta_ = map(x->x.pointer, params_data)
params_len = length(params_data)
__cret = ccall((:atg_rnn_tanh, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint, Clonglong, Cdouble, Cint, Cint, Cint),
outputs__, input.pointer, hx.pointer, params_data_ta_, params_len, has_biases, num_layers, dropout, train, bidirectional, batch_first)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
rnn_tanh1(data::Tensor, batch_sizes::Tensor, hx::Tensor, params_data::Array{Tensor{T,N}}, has_biases::Int, num_layers::Int64, dropout::Float64, train::Int, bidirectional::Int)
Wrapper of C++ function void atg\\_rnn\\_tanh1(tensor *out\\_\\_, tensor data, tensor batch\\_sizes, tensor hx, tensor *params\\_data, int params\\_len, int has\\_biases, int64\\_t num\\_layers, double dropout, int train, int bidirectional)
"""
function rnn_tanh1(data::Tensor, batch_sizes::Tensor, hx::Tensor, params_data::Array{Tensor{T,N}}, has_biases::Int, num_layers::Int64, dropout::Float64, train::Int, bidirectional::Int) where {T,N}
outputs__ = Int[0, 0]
params_data_ta_ = map(x->x.pointer, params_data)
params_len = length(params_data)
__cret = ccall((:atg_rnn_tanh1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint, Clonglong, Cdouble, Cint, Cint),
outputs__, data.pointer, batch_sizes.pointer, hx.pointer, params_data_ta_, params_len, has_biases, num_layers, dropout, train, bidirectional)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
rnn_tanh_cell(input::Tensor, hx::Tensor, w_ih::Tensor, w_hh::Tensor, b_ih::Tensor, b_hh::Tensor)
Wrapper of C++ function void atg\\_rnn\\_tanh\\_cell(tensor *out\\_\\_, tensor input, tensor hx, tensor w\\_ih, tensor w\\_hh, tensor b\\_ih, tensor b\\_hh)
"""
function rnn_tanh_cell(input::Tensor, hx::Tensor, w_ih::Tensor, w_hh::Tensor, b_ih::Tensor, b_hh::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_rnn_tanh_cell, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, input.pointer, hx.pointer, w_ih.pointer, w_hh.pointer, b_ih.pointer, b_hh.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
roll(self::Tensor, shifts_data::Array{Int64}, dims_data::Array{Int64})
Wrapper of C++ function void atg\\_roll(tensor *out\\_\\_, tensor self, int64\\_t *shifts\\_data, int shifts\\_len, int64\\_t *dims\\_data, int dims\\_len)
"""
function roll(self::Tensor, shifts_data::Array{Int64}, dims_data::Array{Int64})
outputs__ = Int[0]
shifts_len = length(shifts_data)
dims_len = length(dims_data)
__cret = ccall((:atg_roll, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint),
outputs__, self.pointer, shifts_data, shifts_len, dims_data, dims_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
rot90(self::Tensor, k::Int64, dims_data::Array{Int64})
Wrapper of C++ function void atg\\_rot90(tensor *out\\_\\_, tensor self, int64\\_t k, int64\\_t *dims\\_data, int dims\\_len)
"""
function rot90(self::Tensor, k::Int64, dims_data::Array{Int64})
outputs__ = Int[0]
dims_len = length(dims_data)
__cret = ccall((:atg_rot90, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Ptr{Cvoid}, Cint),
outputs__, self.pointer, k, dims_data, dims_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
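# Usage sketch (hypothetical `Tensor` constructor; dims in the C API are 0-based):
#   x = Tensor(rand(4, 4))
#   roll(x, Int64[1], Int64[0])    # shift by 1 along the first tensor dimension
#   rot90(x, 1, Int64[0, 1])       # rotate 90 degrees in the plane of dims 0 and 1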
import Base.round
"""
round(self::Tensor)
Wrapper of C++ function void atg\\_round(tensor *out\\_\\_, tensor self)
"""
function round(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_round, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
round!(self::Tensor)
Wrapper of C++ function void atg\\_round\\_(tensor *out\\_\\_, tensor self)
"""
function round!(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_round_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
return self
end
"""
round_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_round\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function round_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_round_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
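# Usage sketch for the `_out` convention (hypothetical `Tensor` constructor): `_out`
# wrappers write their result into a caller-supplied tensor instead of allocating one:
#   x   = Tensor(rand(3))
#   dst = Tensor(zeros(3))
#   round_out(dst, x)   # `dst` now holds the rounded values and is also returned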
"""
rrelu(self::Tensor, training::Int)
Wrapper of C++ function void atg\\_rrelu(tensor *out\\_\\_, tensor self, int training)
"""
function rrelu(self::Tensor, training::Int)
outputs__ = Int[0]
__cret = ccall((:atg_rrelu, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, training)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
rrelu!(self::Tensor, training::Int)
Wrapper of C++ function void atg\\_rrelu\\_(tensor *out\\_\\_, tensor self, int training)
"""
function rrelu!(self::Tensor, training::Int)
outputs__ = Int[0]
__cret = ccall((:atg_rrelu_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, training)
return self
end
"""
rrelu_with_noise(self::Tensor, noise::Tensor, training::Int)
Wrapper of C++ function void atg\\_rrelu\\_with\\_noise(tensor *out\\_\\_, tensor self, tensor noise, int training)
"""
function rrelu_with_noise(self::Tensor, noise::Tensor, training::Int)
outputs__ = Int[0]
__cret = ccall((:atg_rrelu_with_noise, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, noise.pointer, training)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
rrelu_with_noise!(self::Tensor, noise::Tensor, training::Int)
Wrapper of C++ function void atg\\_rrelu\\_with\\_noise\\_(tensor *out\\_\\_, tensor self, tensor noise, int training)
"""
function rrelu_with_noise!(self::Tensor, noise::Tensor, training::Int)
outputs__ = Int[0]
__cret = ccall((:atg_rrelu_with_noise_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, noise.pointer, training)
return self
end
"""
rrelu_with_noise_backward(grad_output::Tensor, self::Tensor, noise::Tensor, lower::TorchNumber, upper::TorchNumber, training::Int)
Wrapper of C++ function void atg\\_rrelu\\_with\\_noise\\_backward(tensor *out\\_\\_, tensor grad\\_output, tensor self, tensor noise, scalar lower, scalar upper, int training)
"""
function rrelu_with_noise_backward(grad_output::Tensor, self::Tensor, noise::Tensor, lower::TorchNumber, upper::TorchNumber, training::Int)
outputs__ = Int[0]
lower_s_ = Scalar(lower)
upper_s_ = Scalar(upper)
__cret = ccall((:atg_rrelu_with_noise_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, grad_output.pointer, self.pointer, noise.pointer, lower_s_.pointer, upper_s_.pointer, training)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
rrelu_with_noise_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, noise::Tensor, lower::TorchNumber, upper::TorchNumber, training::Int)
Wrapper of C++ function void atg\\_rrelu\\_with\\_noise\\_backward\\_out(tensor *out\\_\\_, tensor grad\\_input, tensor grad\\_output, tensor self, tensor noise, scalar lower, scalar upper, int training)
"""
function rrelu_with_noise_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, noise::Tensor, lower::TorchNumber, upper::TorchNumber, training::Int)
outputs__ = Int[0]
lower_s_ = Scalar(lower)
upper_s_ = Scalar(upper)
__cret = ccall((:atg_rrelu_with_noise_backward_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, grad_input.pointer, grad_output.pointer, self.pointer, noise.pointer, lower_s_.pointer, upper_s_.pointer, training)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
rrelu_with_noise_out(out::Tensor, self::Tensor, noise::Tensor, training::Int)
Wrapper of C++ function void atg\\_rrelu\\_with\\_noise\\_out(tensor *out\\_\\_, tensor out, tensor self, tensor noise, int training)
"""
function rrelu_with_noise_out(out::Tensor, self::Tensor, noise::Tensor, training::Int)
outputs__ = Int[0]
__cret = ccall((:atg_rrelu_with_noise_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, out.pointer, self.pointer, noise.pointer, training)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
rsqrt(self::Tensor)
Wrapper of C++ function void atg\\_rsqrt(tensor *out\\_\\_, tensor self)
"""
function rsqrt(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_rsqrt, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
rsqrt!(self::Tensor)
Wrapper of C++ function void atg\\_rsqrt\\_(tensor *out\\_\\_, tensor self)
"""
function rsqrt!(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_rsqrt_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
return self
end
"""
rsqrt_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_rsqrt\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function rsqrt_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_rsqrt_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
rsub(self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_rsub(tensor *out\\_\\_, tensor self, tensor other)
"""
function rsub(self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_rsub, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
rsub1(self::Tensor, other::TorchNumber)
Wrapper of C++ function void atg\\_rsub1(tensor *out\\_\\_, tensor self, scalar other)
"""
function rsub1(self::Tensor, other::TorchNumber)
outputs__ = Int[0]
other_s_ = Scalar(other)
__cret = ccall((:atg_rsub1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
scalar_tensor(s::TorchNumber, options_kind::Int, options_device::Int)
Wrapper of C++ function void atg\\_scalar\\_tensor(tensor *out\\_\\_, scalar s, int options\\_kind, int options\\_device)
"""
function scalar_tensor(s::TorchNumber, options_kind::Int, options_device::Int)
outputs__ = Int[0]
s_s_ = Scalar(s)
__cret = ccall((:atg_scalar_tensor, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint),
outputs__, s_s_.pointer, options_kind, options_device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
scatter(self::Tensor, dim::Int64, index::Tensor, src::Tensor)
Wrapper of C++ function void atg\\_scatter(tensor *out\\_\\_, tensor self, int64\\_t dim, tensor index, tensor src)
"""
function scatter(self::Tensor, dim::Int64, index::Tensor, src::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_scatter, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, dim, index.pointer, src.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
scatter1(self::Tensor, dim::Int64, index::Tensor, value::TorchNumber)
Wrapper of C++ function void atg\\_scatter1(tensor *out\\_\\_, tensor self, int64\\_t dim, tensor index, scalar value)
"""
function scatter1(self::Tensor, dim::Int64, index::Tensor, value::TorchNumber)
outputs__ = Int[0]
value_s_ = Scalar(value)
__cret = ccall((:atg_scatter1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, dim, index.pointer, value_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
scatter!(self::Tensor, dim::Int64, index::Tensor, src::Tensor)
Wrapper of C++ function void atg\\_scatter\\_(tensor *out\\_\\_, tensor self, int64\\_t dim, tensor index, tensor src)
"""
function scatter!(self::Tensor, dim::Int64, index::Tensor, src::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_scatter_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, dim, index.pointer, src.pointer)
return self
end
"""
scatter1!(self::Tensor, dim::Int64, index::Tensor, value::TorchNumber)
Wrapper of C++ function void atg\\_scatter\\_1(tensor *out\\_\\_, tensor self, int64\\_t dim, tensor index, scalar value)
"""
function scatter1!(self::Tensor, dim::Int64, index::Tensor, value::TorchNumber)
outputs__ = Int[0]
value_s_ = Scalar(value)
__cret = ccall((:atg_scatter_1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, dim, index.pointer, value_s_.pointer)
return self
end
"""
scatter_add(self::Tensor, dim::Int64, index::Tensor, src::Tensor)
Wrapper of C++ function void atg\\_scatter\\_add(tensor *out\\_\\_, tensor self, int64\\_t dim, tensor index, tensor src)
"""
function scatter_add(self::Tensor, dim::Int64, index::Tensor, src::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_scatter_add, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, dim, index.pointer, src.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
scatter_add!(self::Tensor, dim::Int64, index::Tensor, src::Tensor)
Wrapper of C++ function void atg\\_scatter\\_add\\_(tensor *out\\_\\_, tensor self, int64\\_t dim, tensor index, tensor src)
"""
function scatter_add!(self::Tensor, dim::Int64, index::Tensor, src::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_scatter_add_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, dim, index.pointer, src.pointer)
return self
end
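# Usage sketch (hypothetical `Tensor` constructors, including one over an Int64 vector;
# dims and indices are 0-based in the C API): `scatter_add` accumulates `src` into a copy
# of `self` at the positions given by `index` along `dim`; `scatter_add!` works in place:
#   dst = Tensor(zeros(5))
#   idx = Tensor(Int64[0, 1, 1, 3])           # 0-based target positions along dim 0
#   src = Tensor([1.0, 2.0, 3.0, 4.0])
#   scatter_add!(dst, 0, idx, src)            # dst would become [1, 5, 0, 4, 0]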
"""
select(self::Tensor, dim::Int64, index::Int64)
Wrapper of C++ function void atg\\_select(tensor *out\\_\\_, tensor self, int64\\_t dim, int64\\_t index)
"""
function select(self::Tensor, dim::Int64, index::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_select, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Clonglong),
outputs__, self.pointer, dim, index)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
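# Usage sketch (hypothetical `Tensor` constructor; `dim` and `index` are 0-based):
#   x = Tensor(rand(3, 4))
#   row = select(x, 0, 1)   # the slice at index 1 along dim 0, with that dim removed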
"""
selu(self::Tensor)
Wrapper of C++ function void atg\\_selu(tensor *out\\_\\_, tensor self)
"""
function selu(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_selu, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
selu!(self::Tensor)
Wrapper of C++ function void atg\\_selu\\_(tensor *out\\_\\_, tensor self)
"""
function selu!(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_selu_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
return self
end
"""
set!(self::Tensor)
Wrapper of C++ function void atg\\_set\\_(tensor *out\\_\\_, tensor self)
"""
function set!(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_set_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
return self
end
"""
set1!(self::Tensor, source::Tensor)
Wrapper of C++ function void atg\\_set\\_1(tensor *out\\_\\_, tensor self, tensor source)
"""
function set1!(self::Tensor, source::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_set_1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, source.pointer)
return self
end
"""
set_requires_grad(self::Tensor, r::Int)
Wrapper of C++ function void atg\\_set\\_requires\\_grad(tensor *out\\_\\_, tensor self, int r)
"""
function set_requires_grad(self::Tensor, r::Int)
outputs__ = Int[0]
__cret = ccall((:atg_set_requires_grad, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, r)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
sigmoid(self::Tensor)
Wrapper of C++ function void atg\\_sigmoid(tensor *out\\_\\_, tensor self)
"""
function sigmoid(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_sigmoid, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
sigmoid!(self::Tensor)
Wrapper of C++ function void atg\\_sigmoid\\_(tensor *out\\_\\_, tensor self)
"""
function sigmoid!(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_sigmoid_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
return self
end
"""
sigmoid_backward(grad_output::Tensor, output::Tensor)
Wrapper of C++ function void atg\\_sigmoid\\_backward(tensor *out\\_\\_, tensor grad\\_output, tensor output)
"""
function sigmoid_backward(grad_output::Tensor, output::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_sigmoid_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, grad_output.pointer, output.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
sigmoid_backward_out(grad_input::Tensor, grad_output::Tensor, output::Tensor)
Wrapper of C++ function void atg\\_sigmoid\\_backward\\_out(tensor *out\\_\\_, tensor grad\\_input, tensor grad\\_output, tensor output)
"""
function sigmoid_backward_out(grad_input::Tensor, grad_output::Tensor, output::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_sigmoid_backward_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, grad_input.pointer, grad_output.pointer, output.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
sigmoid_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_sigmoid\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function sigmoid_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_sigmoid_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
import Base.sign
"""
sign(self::Tensor)
Wrapper of C++ function void atg\\_sign(tensor *out\\_\\_, tensor self)
"""
function sign(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_sign, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
sign!(self::Tensor)
Wrapper of C++ function void atg\\_sign\\_(tensor *out\\_\\_, tensor self)
"""
function sign!(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_sign_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
return self
end
"""
sign_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_sign\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function sign_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_sign_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
import Base.sin
"""
sin(self::Tensor)
Wrapper of C++ function void atg\\_sin(tensor *out\\_\\_, tensor self)
"""
function sin(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_sin, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
sin!(self::Tensor)
Wrapper of C++ function void atg\\_sin\\_(tensor *out\\_\\_, tensor self)
"""
function sin!(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_sin_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
return self
end
"""
sin_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_sin\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function sin_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_sin_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
import Base.sinh
"""
sinh(self::Tensor)
Wrapper of C++ function void atg\\_sinh(tensor *out\\_\\_, tensor self)
"""
function sinh(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_sinh, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
sinh!(self::Tensor)
Wrapper of C++ function void atg\\_sinh\\_(tensor *out\\_\\_, tensor self)
"""
function sinh!(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_sinh_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
return self
end
"""
sinh_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_sinh\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function sinh_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_sinh_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
slice(self::Tensor, dim::Int64, start::Int64, end_::Int64, step::Int64)
Wrapper of C++ function void atg\\_slice(tensor *out\\_\\_, tensor self, int64\\_t dim, int64\\_t start, int64\\_t end, int64\\_t step)
"""
function slice(self::Tensor, dim::Int64, start::Int64, end_::Int64, step::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_slice, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Clonglong, Clonglong, Clonglong),
outputs__, self.pointer, dim, start, end_, step)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
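# Usage sketch (hypothetical `Tensor` constructor): `slice` mirrors Python-style
# x[start:end:step] indexing along a single 0-based dimension (end is exclusive):
#   x = Tensor(rand(10))
#   y = slice(x, 0, 2, 8, 2)   # elements at 0-based positions 2, 4, 6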
"""
slogdet(self::Tensor)
Wrapper of C++ function void atg\\_slogdet(tensor *out\\_\\_, tensor self)
"""
function slogdet(self::Tensor)
outputs__ = Int[0, 0]
__cret = ccall((:atg_slogdet, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
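# Usage sketch (hypothetical `Tensor` constructor): `slogdet` returns a tuple of two
# tensors, the sign and the log of the absolute determinant:
#   a = Tensor(rand(3, 3))
#   s, logabsdet = slogdet(a)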
"""
slow_conv3d(self::Tensor, weight::Tensor, kernel_size_data::Array{Int64}, bias::Tensor, stride_data::Array{Int64}, padding_data::Array{Int64})
Wrapper of C++ function void atg\\_slow\\_conv3d(tensor *out\\_\\_, tensor self, tensor weight, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, tensor bias, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *padding\\_data, int padding\\_len)
"""
function slow_conv3d(self::Tensor, weight::Tensor, kernel_size_data::Array{Int64}, bias::Tensor, stride_data::Array{Int64}, padding_data::Array{Int64})
outputs__ = Int[0]
kernel_size_len = length(kernel_size_data)
stride_len = length(stride_data)
padding_len = length(padding_data)
__cret = ccall((:atg_slow_conv3d, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint),
outputs__, self.pointer, weight.pointer, kernel_size_data, kernel_size_len, bias.pointer, stride_data, stride_len, padding_data, padding_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
slow_conv3d_out(out::Tensor, self::Tensor, weight::Tensor, kernel_size_data::Array{Int64}, bias::Tensor, stride_data::Array{Int64}, padding_data::Array{Int64})
Wrapper of C++ function void atg\\_slow\\_conv3d\\_out(tensor *out\\_\\_, tensor out, tensor self, tensor weight, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, tensor bias, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *padding\\_data, int padding\\_len)
"""
function slow_conv3d_out(out::Tensor, self::Tensor, weight::Tensor, kernel_size_data::Array{Int64}, bias::Tensor, stride_data::Array{Int64}, padding_data::Array{Int64})
outputs__ = Int[0]
kernel_size_len = length(kernel_size_data)
stride_len = length(stride_data)
padding_len = length(padding_data)
__cret = ccall((:atg_slow_conv3d_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint),
outputs__, out.pointer, self.pointer, weight.pointer, kernel_size_data, kernel_size_len, bias.pointer, stride_data, stride_len, padding_data, padding_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
slow_conv_dilated2d(self::Tensor, weight::Tensor, kernel_size_data::Array{Int64}, bias::Tensor, stride_data::Array{Int64}, padding_data::Array{Int64}, dilation_data::Array{Int64})
Wrapper of C++ function void atg\\_slow\\_conv\\_dilated2d(tensor *out\\_\\_, tensor self, tensor weight, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, tensor bias, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *dilation\\_data, int dilation\\_len)
"""
function slow_conv_dilated2d(self::Tensor, weight::Tensor, kernel_size_data::Array{Int64}, bias::Tensor, stride_data::Array{Int64}, padding_data::Array{Int64}, dilation_data::Array{Int64})
outputs__ = Int[0]
kernel_size_len = length(kernel_size_data)
stride_len = length(stride_data)
padding_len = length(padding_data)
dilation_len = length(dilation_data)
__cret = ccall((:atg_slow_conv_dilated2d, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint),
outputs__, self.pointer, weight.pointer, kernel_size_data, kernel_size_len, bias.pointer, stride_data, stride_len, padding_data, padding_len, dilation_data, dilation_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
slow_conv_dilated3d(self::Tensor, weight::Tensor, kernel_size_data::Array{Int64}, bias::Tensor, stride_data::Array{Int64}, padding_data::Array{Int64}, dilation_data::Array{Int64})
Wrapper of C++ function void atg\\_slow\\_conv\\_dilated3d(tensor *out\\_\\_, tensor self, tensor weight, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, tensor bias, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *dilation\\_data, int dilation\\_len)
"""
function slow_conv_dilated3d(self::Tensor, weight::Tensor, kernel_size_data::Array{Int64}, bias::Tensor, stride_data::Array{Int64}, padding_data::Array{Int64}, dilation_data::Array{Int64})
outputs__ = Int[0]
kernel_size_len = length(kernel_size_data)
stride_len = length(stride_data)
padding_len = length(padding_data)
dilation_len = length(dilation_data)
__cret = ccall((:atg_slow_conv_dilated3d, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint),
outputs__, self.pointer, weight.pointer, kernel_size_data, kernel_size_len, bias.pointer, stride_data, stride_len, padding_data, padding_len, dilation_data, dilation_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
slow_conv_transpose2d(self::Tensor, weight::Tensor, kernel_size_data::Array{Int64}, bias::Tensor, stride_data::Array{Int64}, padding_data::Array{Int64}, output_padding_data::Array{Int64}, dilation_data::Array{Int64})
Wrapper of C++ function void atg\\_slow\\_conv\\_transpose2d(tensor *out\\_\\_, tensor self, tensor weight, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, tensor bias, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *output\\_padding\\_data, int output\\_padding\\_len, int64\\_t *dilation\\_data, int dilation\\_len)
"""
function slow_conv_transpose2d(self::Tensor, weight::Tensor, kernel_size_data::Array{Int64}, bias::Tensor, stride_data::Array{Int64}, padding_data::Array{Int64}, output_padding_data::Array{Int64}, dilation_data::Array{Int64})
outputs__ = Int[0]
kernel_size_len = length(kernel_size_data)
stride_len = length(stride_data)
padding_len = length(padding_data)
output_padding_len = length(output_padding_data)
dilation_len = length(dilation_data)
__cret = ccall((:atg_slow_conv_transpose2d, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint),
outputs__, self.pointer, weight.pointer, kernel_size_data, kernel_size_len, bias.pointer, stride_data, stride_len, padding_data, padding_len, output_padding_data, output_padding_len, dilation_data, dilation_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
slow_conv_transpose2d_out(out::Tensor, self::Tensor, weight::Tensor, kernel_size_data::Array{Int64}, bias::Tensor, stride_data::Array{Int64}, padding_data::Array{Int64}, output_padding_data::Array{Int64}, dilation_data::Array{Int64})
Wrapper of C++ function void atg\\_slow\\_conv\\_transpose2d\\_out(tensor *out\\_\\_, tensor out, tensor self, tensor weight, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, tensor bias, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *output\\_padding\\_data, int output\\_padding\\_len, int64\\_t *dilation\\_data, int dilation\\_len)
"""
function slow_conv_transpose2d_out(out::Tensor, self::Tensor, weight::Tensor, kernel_size_data::Array{Int64}, bias::Tensor, stride_data::Array{Int64}, padding_data::Array{Int64}, output_padding_data::Array{Int64}, dilation_data::Array{Int64})
outputs__ = Int[0]
kernel_size_len = length(kernel_size_data)
stride_len = length(stride_data)
padding_len = length(padding_data)
output_padding_len = length(output_padding_data)
dilation_len = length(dilation_data)
__cret = ccall((:atg_slow_conv_transpose2d_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint),
outputs__, out.pointer, self.pointer, weight.pointer, kernel_size_data, kernel_size_len, bias.pointer, stride_data, stride_len, padding_data, padding_len, output_padding_data, output_padding_len, dilation_data, dilation_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
slow_conv_transpose3d(self::Tensor, weight::Tensor, kernel_size_data::Array{Int64}, bias::Tensor, stride_data::Array{Int64}, padding_data::Array{Int64}, output_padding_data::Array{Int64}, dilation_data::Array{Int64})
Wrapper of C++ function void atg\\_slow\\_conv\\_transpose3d(tensor *out\\_\\_, tensor self, tensor weight, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, tensor bias, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *output\\_padding\\_data, int output\\_padding\\_len, int64\\_t *dilation\\_data, int dilation\\_len)
"""
function slow_conv_transpose3d(self::Tensor, weight::Tensor, kernel_size_data::Array{Int64}, bias::Tensor, stride_data::Array{Int64}, padding_data::Array{Int64}, output_padding_data::Array{Int64}, dilation_data::Array{Int64})
outputs__ = Int[0]
kernel_size_len = length(kernel_size_data)
stride_len = length(stride_data)
padding_len = length(padding_data)
output_padding_len = length(output_padding_data)
dilation_len = length(dilation_data)
__cret = ccall((:atg_slow_conv_transpose3d, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint),
outputs__, self.pointer, weight.pointer, kernel_size_data, kernel_size_len, bias.pointer, stride_data, stride_len, padding_data, padding_len, output_padding_data, output_padding_len, dilation_data, dilation_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
slow_conv_transpose3d_out(out::Tensor, self::Tensor, weight::Tensor, kernel_size_data::Array{Int64}, bias::Tensor, stride_data::Array{Int64}, padding_data::Array{Int64}, output_padding_data::Array{Int64}, dilation_data::Array{Int64})
Wrapper of C++ function void atg\\_slow\\_conv\\_transpose3d\\_out(tensor *out\\_\\_, tensor out, tensor self, tensor weight, int64\\_t *kernel\\_size\\_data, int kernel\\_size\\_len, tensor bias, int64\\_t *stride\\_data, int stride\\_len, int64\\_t *padding\\_data, int padding\\_len, int64\\_t *output\\_padding\\_data, int output\\_padding\\_len, int64\\_t *dilation\\_data, int dilation\\_len)
"""
function slow_conv_transpose3d_out(out::Tensor, self::Tensor, weight::Tensor, kernel_size_data::Array{Int64}, bias::Tensor, stride_data::Array{Int64}, padding_data::Array{Int64}, output_padding_data::Array{Int64}, dilation_data::Array{Int64})
outputs__ = Int[0]
kernel_size_len = length(kernel_size_data)
stride_len = length(stride_data)
padding_len = length(padding_data)
output_padding_len = length(output_padding_data)
dilation_len = length(dilation_data)
__cret = ccall((:atg_slow_conv_transpose3d_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint),
outputs__, out.pointer, self.pointer, weight.pointer, kernel_size_data, kernel_size_len, bias.pointer, stride_data, stride_len, padding_data, padding_len, output_padding_data, output_padding_len, dilation_data, dilation_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
smm(self::Tensor, mat2::Tensor)
Wrapper of C++ function void atg\\_smm(tensor *out\\_\\_, tensor self, tensor mat2)
"""
function smm(self::Tensor, mat2::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_smm, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, mat2.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
smooth_l1_loss(self::Tensor, target::Tensor, reduction::Int64)
Wrapper of C++ function void atg\\_smooth\\_l1\\_loss(tensor *out\\_\\_, tensor self, tensor target, int64\\_t reduction)
"""
function smooth_l1_loss(self::Tensor, target::Tensor, reduction::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_smooth_l1_loss, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, self.pointer, target.pointer, reduction)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
smooth_l1_loss_backward(grad_output::Tensor, self::Tensor, target::Tensor, reduction::Int64)
Wrapper of C++ function void atg\\_smooth\\_l1\\_loss\\_backward(tensor *out\\_\\_, tensor grad\\_output, tensor self, tensor target, int64\\_t reduction)
"""
function smooth_l1_loss_backward(grad_output::Tensor, self::Tensor, target::Tensor, reduction::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_smooth_l1_loss_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, grad_output.pointer, self.pointer, target.pointer, reduction)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
smooth_l1_loss_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, target::Tensor, reduction::Int64)
Wrapper of C++ function void atg\\_smooth\\_l1\\_loss\\_backward\\_out(tensor *out\\_\\_, tensor grad\\_input, tensor grad\\_output, tensor self, tensor target, int64\\_t reduction)
"""
function smooth_l1_loss_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, target::Tensor, reduction::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_smooth_l1_loss_backward_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, grad_input.pointer, grad_output.pointer, self.pointer, target.pointer, reduction)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
smooth_l1_loss_out(out::Tensor, self::Tensor, target::Tensor, reduction::Int64)
Wrapper of C++ function void atg\\_smooth\\_l1\\_loss\\_out(tensor *out\\_\\_, tensor out, tensor self, tensor target, int64\\_t reduction)
"""
function smooth_l1_loss_out(out::Tensor, self::Tensor, target::Tensor, reduction::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_smooth_l1_loss_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, out.pointer, self.pointer, target.pointer, reduction)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
soft_margin_loss(self::Tensor, target::Tensor, reduction::Int64)
Wrapper of C++ function void atg\\_soft\\_margin\\_loss(tensor *out\\_\\_, tensor self, tensor target, int64\\_t reduction)
"""
function soft_margin_loss(self::Tensor, target::Tensor, reduction::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_soft_margin_loss, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, self.pointer, target.pointer, reduction)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
soft_margin_loss_backward(grad_output::Tensor, self::Tensor, target::Tensor, reduction::Int64)
Wrapper of C++ function void atg\\_soft\\_margin\\_loss\\_backward(tensor *out\\_\\_, tensor grad\\_output, tensor self, tensor target, int64\\_t reduction)
"""
function soft_margin_loss_backward(grad_output::Tensor, self::Tensor, target::Tensor, reduction::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_soft_margin_loss_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, grad_output.pointer, self.pointer, target.pointer, reduction)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
soft_margin_loss_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, target::Tensor, reduction::Int64)
Wrapper of C++ function void atg\\_soft\\_margin\\_loss\\_backward\\_out(tensor *out\\_\\_, tensor grad\\_input, tensor grad\\_output, tensor self, tensor target, int64\\_t reduction)
"""
function soft_margin_loss_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, target::Tensor, reduction::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_soft_margin_loss_backward_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, grad_input.pointer, grad_output.pointer, self.pointer, target.pointer, reduction)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
soft_margin_loss_out(out::Tensor, self::Tensor, target::Tensor, reduction::Int64)
Wrapper of C++ function void atg\\_soft\\_margin\\_loss\\_out(tensor *out\\_\\_, tensor out, tensor self, tensor target, int64\\_t reduction)
"""
function soft_margin_loss_out(out::Tensor, self::Tensor, target::Tensor, reduction::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_soft_margin_loss_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, out.pointer, self.pointer, target.pointer, reduction)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
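# Minimal usage sketch for the loss wrappers above. `reduction` is the raw ATen
# reduction code passed straight through to libtorch; 0 = none, 1 = mean,
# 2 = sum is assumed here. The `Tensor` constructor shown is an assumption
# about the rest of this package and is not defined in this file.
#
#   pred   = Tensor(rand(Float32, 4))        # hypothetical constructor
#   target = Tensor(rand(Float32, 4))
#   l1 = smooth_l1_loss(pred, target, 1)     # mean-reduced Smooth L1 loss
#   lm = soft_margin_loss(pred, target, 2)   # sum-reduced soft margin loss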
"""
softmax(self::Tensor, dim::Int64, dtype::Int)
Wrapper of C++ function void atg\\_softmax(tensor *out\\_\\_, tensor self, int64\\_t dim, int dtype)
"""
function softmax(self::Tensor, dim::Int64, dtype::Int)
outputs__ = Int[0]
__cret = ccall((:atg_softmax, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Cint),
outputs__, self.pointer, dim, dtype)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
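# Minimal usage sketch for `softmax`. Because `dim` is forwarded unchanged to
# the C API, it is assumed to follow libtorch's 0-based convention. `dtype` is
# the ATen ScalarType code; the literal value below (6 for Float32) is an
# assumption and should be checked against the package's dtype mapping.
#
#   x = Tensor(rand(Float32, 2, 3))   # hypothetical constructor
#   p = softmax(x, 0, 6)              # softmax along the first dimension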
"""
softplus(self::Tensor)
Wrapper of C++ function void atg\\_softplus(tensor *out\\_\\_, tensor self)
"""
function softplus(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_softplus, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
softplus_backward(grad_output::Tensor, self::Tensor, beta::TorchNumber, threshold::TorchNumber, output::Tensor)
Wrapper of C++ function void atg\\_softplus\\_backward(tensor *out\\_\\_, tensor grad\\_output, tensor self, scalar beta, scalar threshold, tensor output)
"""
function softplus_backward(grad_output::Tensor, self::Tensor, beta::TorchNumber, threshold::TorchNumber, output::Tensor)
outputs__ = Int[0]
beta_s_ = Scalar(beta)
threshold_s_ = Scalar(threshold)
__cret = ccall((:atg_softplus_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, grad_output.pointer, self.pointer, beta_s_.pointer, threshold_s_.pointer, output.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
softplus_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, beta::TorchNumber, threshold::TorchNumber, output::Tensor)
Wrapper of C++ function void atg\\_softplus\\_backward\\_out(tensor *out\\_\\_, tensor grad\\_input, tensor grad\\_output, tensor self, scalar beta, scalar threshold, tensor output)
"""
function softplus_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, beta::TorchNumber, threshold::TorchNumber, output::Tensor)
outputs__ = Int[0]
beta_s_ = Scalar(beta)
threshold_s_ = Scalar(threshold)
__cret = ccall((:atg_softplus_backward_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, grad_input.pointer, grad_output.pointer, self.pointer, beta_s_.pointer, threshold_s_.pointer, output.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
softplus_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_softplus\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function softplus_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_softplus_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
softshrink(self::Tensor)
Wrapper of C++ function void atg\\_softshrink(tensor *out\\_\\_, tensor self)
"""
function softshrink(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_softshrink, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
softshrink_backward(grad_output::Tensor, self::Tensor, lambd::TorchNumber)
Wrapper of C++ function void atg\\_softshrink\\_backward(tensor *out\\_\\_, tensor grad\\_output, tensor self, scalar lambd)
"""
function softshrink_backward(grad_output::Tensor, self::Tensor, lambd::TorchNumber)
outputs__ = Int[0]
lambd_s_ = Scalar(lambd)
__cret = ccall((:atg_softshrink_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, grad_output.pointer, self.pointer, lambd_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
softshrink_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, lambd::TorchNumber)
Wrapper of C++ function void atg\\_softshrink\\_backward\\_out(tensor *out\\_\\_, tensor grad\\_input, tensor grad\\_output, tensor self, scalar lambd)
"""
function softshrink_backward_out(grad_input::Tensor, grad_output::Tensor, self::Tensor, lambd::TorchNumber)
outputs__ = Int[0]
lambd_s_ = Scalar(lambd)
__cret = ccall((:atg_softshrink_backward_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, grad_input.pointer, grad_output.pointer, self.pointer, lambd_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
softshrink_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_softshrink\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function softshrink_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_softshrink_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
solve(self::Tensor, A::Tensor)
Wrapper of C++ function void atg\\_solve(tensor *out\\_\\_, tensor self, tensor A)
"""
function solve(self::Tensor, A::Tensor)
outputs__ = Int[0, 0]
__cret = ccall((:atg_solve, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, A.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
solve_out(solution::Tensor, lu::Tensor, self::Tensor, A::Tensor)
Wrapper of C++ function void atg\\_solve\\_out(tensor *out\\_\\_, tensor solution, tensor lu, tensor self, tensor A)
"""
function solve_out(solution::Tensor, lu::Tensor, self::Tensor, A::Tensor)
outputs__ = Int[0, 0]
__cret = ccall((:atg_solve_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, solution.pointer, lu.pointer, self.pointer, A.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
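# Minimal usage sketch for `solve`: it wraps the libtorch solve op and returns
# two tensors, the solution X of A*X = B and the LU factorization used to
# compute it. Note the argument order (self = B first, then A). The `Tensor`
# constructor is an assumption about this package.
#
#   A = Tensor(rand(Float64, 3, 3))   # hypothetical constructor
#   B = Tensor(rand(Float64, 3, 2))
#   X, LU = solve(B, A)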
import Base.sort
"""
sort(self::Tensor, dim::Int64, descending::Int)
Wrapper of C++ function void atg\\_sort(tensor *out\\_\\_, tensor self, int64\\_t dim, int descending)
"""
function sort(self::Tensor, dim::Int64, descending::Int)
outputs__ = Int[0, 0]
__cret = ccall((:atg_sort, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Cint),
outputs__, self.pointer, dim, descending)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
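# Minimal usage sketch for `sort`: it returns a (values, indices) pair of
# tensors, and `descending` is an Int flag (0 or 1). The constructor shown is
# an assumption.
#
#   x = Tensor(rand(Float32, 5))   # hypothetical constructor
#   vals, idx = sort(x, 0, 0)      # ascending sort along dim 0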
"""
sort_out(values::Tensor, indices::Tensor, self::Tensor, dim::Int64, descending::Int)
Wrapper of C++ function void atg\\_sort\\_out(tensor *out\\_\\_, tensor values, tensor indices, tensor self, int64\\_t dim, int descending)
"""
function sort_out(values::Tensor, indices::Tensor, self::Tensor, dim::Int64, descending::Int)
outputs__ = Int[0, 0]
__cret = ccall((:atg_sort_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Cint),
outputs__, values.pointer, indices.pointer, self.pointer, dim, descending)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
sparse_coo_tensor(size_data::Array{Int64}, options_kind::Int, options_device::Int)
Wrapper of C++ function void atg\\_sparse\\_coo\\_tensor(tensor *out\\_\\_, int64\\_t *size\\_data, int size\\_len, int options\\_kind, int options\\_device)
"""
function sparse_coo_tensor(size_data::Array{Int64}, options_kind::Int, options_device::Int)
outputs__ = Int[0]
size_len = length(size_data)
__cret = ccall((:atg_sparse_coo_tensor, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint, Cint),
outputs__, size_data, size_len, options_kind, options_device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
sparse_coo_tensor1(indices::Tensor, values::Tensor, options_kind::Int, options_device::Int)
Wrapper of C++ function void atg\\_sparse\\_coo\\_tensor1(tensor *out\\_\\_, tensor indices, tensor values, int options\\_kind, int options\\_device)
"""
function sparse_coo_tensor1(indices::Tensor, values::Tensor, options_kind::Int, options_device::Int)
outputs__ = Int[0]
__cret = ccall((:atg_sparse_coo_tensor1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint),
outputs__, indices.pointer, values.pointer, options_kind, options_device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
sparse_coo_tensor2(indices::Tensor, values::Tensor, size_data::Array{Int64}, options_kind::Int, options_device::Int)
Wrapper of C++ function void atg\\_sparse\\_coo\\_tensor2(tensor *out\\_\\_, tensor indices, tensor values, int64\\_t *size\\_data, int size\\_len, int options\\_kind, int options\\_device)
"""
function sparse_coo_tensor2(indices::Tensor, values::Tensor, size_data::Array{Int64}, options_kind::Int, options_device::Int)
outputs__ = Int[0]
size_len = length(size_data)
__cret = ccall((:atg_sparse_coo_tensor2, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint, Cint),
outputs__, indices.pointer, values.pointer, size_data, size_len, options_kind, options_device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
sparse_mask(self::Tensor, mask::Tensor)
Wrapper of C++ function void atg\\_sparse\\_mask(tensor *out\\_\\_, tensor self, tensor mask)
"""
function sparse_mask(self::Tensor, mask::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_sparse_mask, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, mask.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
sparse_resize!(self::Tensor, size_data::Array{Int64}, sparse_dim::Int64, dense_dim::Int64)
Wrapper of C++ function void atg\\_sparse\\_resize\\_(tensor *out\\_\\_, tensor self, int64\\_t *size\\_data, int size\\_len, int64\\_t sparse\\_dim, int64\\_t dense\\_dim)
"""
function sparse_resize!(self::Tensor, size_data::Array{Int64}, sparse_dim::Int64, dense_dim::Int64)
outputs__ = Int[0]
size_len = length(size_data)
__cret = ccall((:atg_sparse_resize_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Clonglong, Clonglong),
outputs__, self.pointer, size_data, size_len, sparse_dim, dense_dim)
return self
end
"""
sparse_resize_and_clear!(self::Tensor, size_data::Array{Int64}, sparse_dim::Int64, dense_dim::Int64)
Wrapper of C++ function void atg\\_sparse\\_resize\\_and\\_clear\\_(tensor *out\\_\\_, tensor self, int64\\_t *size\\_data, int size\\_len, int64\\_t sparse\\_dim, int64\\_t dense\\_dim)
"""
function sparse_resize_and_clear!(self::Tensor, size_data::Array{Int64}, sparse_dim::Int64, dense_dim::Int64)
outputs__ = Int[0]
size_len = length(size_data)
__cret = ccall((:atg_sparse_resize_and_clear_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Clonglong, Clonglong),
outputs__, self.pointer, size_data, size_len, sparse_dim, dense_dim)
return self
end
import Base.split
"""
split(self::Tensor, split_size::Int64, dim::Int64)
Wrapper of C++ function tensor *atg\\_split(tensor self, int64\\_t split\\_size, int64\\_t dim)
"""
function split(self::Tensor, split_size::Int64, dim::Int64)
__cret = ccall((:atg_split, :libtorch_capi),
Ptr{Int}, (Ptr{Cvoid}, Clonglong, Clonglong),
self.pointer, split_size, dim)
ptrs__, i__ = Int[], 1
while true
ptr__ = unsafe_load(__cret, i__)
ptr__ == 0 && break
push!(ptrs__, ptr__)
i__ += 1
end
ccall(:free, Cvoid, (Ptr{Cvoid},), __cret)
return map(x -> tensor_from_ptr(Ptr{Nothing}(x)), ptrs__)
end
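# Minimal usage sketch for `split`: unlike most wrappers in this file it
# returns a Julia Vector of Tensors (one per chunk), built by walking and
# freeing the zero-terminated pointer array handed back by the C API.
#
#   x = Tensor(rand(Float32, 6, 4))   # hypothetical constructor
#   chunks = split(x, 2, 0)           # three 2x4 chunks along dim 0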
"""
split_with_sizes(self::Tensor, split_sizes_data::Array{Int64}, dim::Int64)
Wrapper of C++ function tensor *atg\\_split\\_with\\_sizes(tensor self, int64\\_t *split\\_sizes\\_data, int split\\_sizes\\_len, int64\\_t dim)
"""
function split_with_sizes(self::Tensor, split_sizes_data::Array{Int64}, dim::Int64)
split_sizes_len = length(split_sizes_data)
__cret = ccall((:atg_split_with_sizes, :libtorch_capi),
Ptr{Int}, (Ptr{Cvoid}, Ptr{Cvoid}, Cint, Clonglong),
self.pointer, split_sizes_data, split_sizes_len, dim)
ptrs__, i__ = Int[], 1
while true
ptr__ = unsafe_load(__cret, i__)
ptr__ == 0 && break
push!(ptrs__, ptr__)
i__ += 1
end
ccall(:free, Cvoid, (Ptr{Cvoid},), __cret)
return map(x -> tensor_from_ptr(Ptr{Nothing}(x)), ptrs__)
end
import Base.sqrt
"""
sqrt(self::Tensor)
Wrapper of C++ function void atg\\_sqrt(tensor *out\\_\\_, tensor self)
"""
function sqrt(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_sqrt, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
sqrt!(self::Tensor)
Wrapper of C++ function void atg\\_sqrt\\_(tensor *out\\_\\_, tensor self)
"""
function sqrt!(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_sqrt_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
return self
end
"""
sqrt_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_sqrt\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function sqrt_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_sqrt_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
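# Naming-convention sketch: `sqrt`, `sqrt!`, and `sqrt_out` show the three
# shapes most wrappers in this file follow. Plain functions allocate and return
# a new Tensor, `!` functions mutate and return `self`, and `_out` functions
# write into a caller-supplied `out` tensor. The constructor is an assumption.
#
#   x = Tensor(rand(Float32, 3))   # hypothetical constructor
#   y = sqrt(x)                    # new tensor holding sqrt.(x)
#   sqrt!(x)                       # x itself now holds the square roots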
"""
squeeze(self::Tensor)
Wrapper of C++ function void atg\\_squeeze(tensor *out\\_\\_, tensor self)
"""
function squeeze(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_squeeze, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
squeeze1(self::Tensor, dim::Int64)
Wrapper of C++ function void atg\\_squeeze1(tensor *out\\_\\_, tensor self, int64\\_t dim)
"""
function squeeze1(self::Tensor, dim::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_squeeze1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, self.pointer, dim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
squeeze!(self::Tensor)
Wrapper of C++ function void atg\\_squeeze\\_(tensor *out\\_\\_, tensor self)
"""
function squeeze!(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_squeeze_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
return self
end
"""
squeeze1!(self::Tensor, dim::Int64)
Wrapper of C++ function void atg\\_squeeze\\_1(tensor *out\\_\\_, tensor self, int64\\_t dim)
"""
function squeeze1!(self::Tensor, dim::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_squeeze_1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, self.pointer, dim)
return self
end
"""
sspaddmm(self::Tensor, mat1::Tensor, mat2::Tensor)
Wrapper of C++ function void atg\\_sspaddmm(tensor *out\\_\\_, tensor self, tensor mat1, tensor mat2)
"""
function sspaddmm(self::Tensor, mat1::Tensor, mat2::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_sspaddmm, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, mat1.pointer, mat2.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
sspaddmm_out(out::Tensor, self::Tensor, mat1::Tensor, mat2::Tensor)
Wrapper of C++ function void atg\\_sspaddmm\\_out(tensor *out\\_\\_, tensor out, tensor self, tensor mat1, tensor mat2)
"""
function sspaddmm_out(out::Tensor, self::Tensor, mat1::Tensor, mat2::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_sspaddmm_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, mat1.pointer, mat2.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
stack(tensors_data::Array{Tensor{T,N}}, dim::Int64)
Wrapper of C++ function void atg\\_stack(tensor *out\\_\\_, tensor *tensors\\_data, int tensors\\_len, int64\\_t dim)
"""
function stack(tensors_data::Array{Tensor{T,N}}, dim::Int64) where {T,N}
outputs__ = Int[0]
tensors_data_ta_ = map(x->x.pointer, tensors_data)
tensors_len = length(tensors_data)
__cret = ccall((:atg_stack, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint, Clonglong),
outputs__, tensors_data_ta_, tensors_len, dim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
stack_out(out::Tensor, tensors_data::Array{Tensor{T,N}}, dim::Int64)
Wrapper of C++ function void atg\\_stack\\_out(tensor *out\\_\\_, tensor out, tensor *tensors\\_data, int tensors\\_len, int64\\_t dim)
"""
function stack_out(out::Tensor, tensors_data::Array{Tensor{T,N}}, dim::Int64) where {T,N}
outputs__ = Int[0]
tensors_data_ta_ = map(x->x.pointer, tensors_data)
tensors_len = length(tensors_data)
__cret = ccall((:atg_stack_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Clonglong),
outputs__, out.pointer, tensors_data_ta_, tensors_len, dim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
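# Minimal usage sketch for `stack`: it takes a Julia array of Tensors plus the
# stacking dimension; the wrapper gathers the raw pointers into a temporary
# array before the ccall. The constructor is an assumption.
#
#   a = Tensor(rand(Float32, 3))   # hypothetical constructor
#   b = Tensor(rand(Float32, 3))
#   s = stack([a, b], 0)           # 2x3 result, stacked along a new dim 0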
"""
std(self::Tensor, unbiased::Int)
Wrapper of C++ function void atg\\_std(tensor *out\\_\\_, tensor self, int unbiased)
"""
function std(self::Tensor, unbiased::Int)
outputs__ = Int[0]
__cret = ccall((:atg_std, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, unbiased)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
std1(self::Tensor, dim_data::Array{Int64}, unbiased::Int, keepdim::Int)
Wrapper of C++ function void atg\\_std1(tensor *out\\_\\_, tensor self, int64\\_t *dim\\_data, int dim\\_len, int unbiased, int keepdim)
"""
function std1(self::Tensor, dim_data::Array{Int64}, unbiased::Int, keepdim::Int)
outputs__ = Int[0]
dim_len = length(dim_data)
__cret = ccall((:atg_std1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint, Cint),
outputs__, self.pointer, dim_data, dim_len, unbiased, keepdim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
std_mean(self::Tensor, unbiased::Int)
Wrapper of C++ function void atg\\_std\\_mean(tensor *out\\_\\_, tensor self, int unbiased)
"""
function std_mean(self::Tensor, unbiased::Int)
outputs__ = Int[0, 0]
__cret = ccall((:atg_std_mean, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, unbiased)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
std_mean1(self::Tensor, dim_data::Array{Int64}, unbiased::Int, keepdim::Int)
Wrapper of C++ function void atg\\_std\\_mean1(tensor *out\\_\\_, tensor self, int64\\_t *dim\\_data, int dim\\_len, int unbiased, int keepdim)
"""
function std_mean1(self::Tensor, dim_data::Array{Int64}, unbiased::Int, keepdim::Int)
outputs__ = Int[0, 0]
dim_len = length(dim_data)
__cret = ccall((:atg_std_mean1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint, Cint),
outputs__, self.pointer, dim_data, dim_len, unbiased, keepdim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
std_out(out::Tensor, self::Tensor, dim_data::Array{Int64}, unbiased::Int, keepdim::Int)
Wrapper of C++ function void atg\\_std\\_out(tensor *out\\_\\_, tensor out, tensor self, int64\\_t *dim\\_data, int dim\\_len, int unbiased, int keepdim)
"""
function std_out(out::Tensor, self::Tensor, dim_data::Array{Int64}, unbiased::Int, keepdim::Int)
outputs__ = Int[0]
dim_len = length(dim_data)
__cret = ccall((:atg_std_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint, Cint),
outputs__, out.pointer, self.pointer, dim_data, dim_len, unbiased, keepdim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
stft(self::Tensor, n_fft::Int64, hop_length::Int64, win_length::Int64, window::Tensor, normalized::Int, onesided::Int)
Wrapper of C++ function void atg\\_stft(tensor *out\\_\\_, tensor self, int64\\_t n\\_fft, int64\\_t hop\\_length, int64\\_t win\\_length, tensor window, int normalized, int onesided)
"""
function stft(self::Tensor, n_fft::Int64, hop_length::Int64, win_length::Int64, window::Tensor, normalized::Int, onesided::Int)
outputs__ = Int[0]
__cret = ccall((:atg_stft, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Clonglong, Clonglong, Ptr{Cvoid}, Cint, Cint),
outputs__, self.pointer, n_fft, hop_length, win_length, window.pointer, normalized, onesided)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
sub(self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_sub(tensor *out\\_\\_, tensor self, tensor other)
"""
function sub(self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_sub, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
sub1(self::Tensor, other::TorchNumber)
Wrapper of C++ function void atg\\_sub1(tensor *out\\_\\_, tensor self, scalar other)
"""
function sub1(self::Tensor, other::TorchNumber)
outputs__ = Int[0]
other_s_ = Scalar(other)
__cret = ccall((:atg_sub1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
sub!(self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_sub\\_(tensor *out\\_\\_, tensor self, tensor other)
"""
function sub!(self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_sub_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other.pointer)
return self
end
"""
sub1!(self::Tensor, other::TorchNumber)
Wrapper of C++ function void atg\\_sub\\_1(tensor *out\\_\\_, tensor self, scalar other)
"""
function sub1!(self::Tensor, other::TorchNumber)
outputs__ = Int[0]
other_s_ = Scalar(other)
__cret = ccall((:atg_sub_1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other_s_.pointer)
return self
end
"""
sub_out(out::Tensor, self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_sub\\_out(tensor *out\\_\\_, tensor out, tensor self, tensor other)
"""
function sub_out(out::Tensor, self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_sub_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, other.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
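# Minimal usage sketch for the subtraction family: `sub` takes a Tensor on the
# right-hand side, while `sub1` takes a plain number (any `TorchNumber`) that
# is boxed into a `Scalar` before the ccall; `sub!`/`sub1!` are the in-place
# variants. The constructor is an assumption.
#
#   x = Tensor(rand(Float32, 3))   # hypothetical constructor
#   y = sub(x, x)                  # elementwise x - x
#   z = sub1(x, 1.0)               # elementwise x - 1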
import Base.sum
"""
sum(self::Tensor, dtype::Int)
Wrapper of C++ function void atg\\_sum(tensor *out\\_\\_, tensor self, int dtype)
"""
function sum(self::Tensor, dtype::Int)
outputs__ = Int[0]
__cret = ccall((:atg_sum, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, dtype)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
sum1(self::Tensor, dim_data::Array{Int64}, keepdim::Int, dtype::Int)
Wrapper of C++ function void atg\\_sum1(tensor *out\\_\\_, tensor self, int64\\_t *dim\\_data, int dim\\_len, int keepdim, int dtype)
"""
function sum1(self::Tensor, dim_data::Array{Int64}, keepdim::Int, dtype::Int)
outputs__ = Int[0]
dim_len = length(dim_data)
__cret = ccall((:atg_sum1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint, Cint),
outputs__, self.pointer, dim_data, dim_len, keepdim, dtype)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
sum_out(out::Tensor, self::Tensor, dim_data::Array{Int64}, keepdim::Int, dtype::Int)
Wrapper of C++ function void atg\\_sum\\_out(tensor *out\\_\\_, tensor out, tensor self, int64\\_t *dim\\_data, int dim\\_len, int keepdim, int dtype)
"""
function sum_out(out::Tensor, self::Tensor, dim_data::Array{Int64}, keepdim::Int, dtype::Int)
outputs__ = Int[0]
dim_len = length(dim_data)
__cret = ccall((:atg_sum_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint, Cint),
outputs__, out.pointer, self.pointer, dim_data, dim_len, keepdim, dtype)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
sum_to_size(self::Tensor, size_data::Array{Int64})
Wrapper of C++ function void atg\\_sum\\_to\\_size(tensor *out\\_\\_, tensor self, int64\\_t *size\\_data, int size\\_len)
"""
function sum_to_size(self::Tensor, size_data::Array{Int64})
outputs__ = Int[0]
size_len = length(size_data)
__cret = ccall((:atg_sum_to_size, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, size_data, size_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
svd(self::Tensor, some::Int, compute_uv::Int)
Wrapper of C++ function void atg\\_svd(tensor *out\\_\\_, tensor self, int some, int compute\\_uv)
"""
function svd(self::Tensor, some::Int, compute_uv::Int)
outputs__ = Int[0, 0, 0]
__cret = ccall((:atg_svd, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint),
outputs__, self.pointer, some, compute_uv)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
__o_3 = tensor_from_ptr(Ptr{Cvoid}(outputs__[3]))
return __o_1, __o_2, __o_3
end
"""
svd_out(U::Tensor, S::Tensor, V::Tensor, self::Tensor, some::Int, compute_uv::Int)
Wrapper of C++ function void atg\\_svd\\_out(tensor *out\\_\\_, tensor U, tensor S, tensor V, tensor self, int some, int compute\\_uv)
"""
function svd_out(U::Tensor, S::Tensor, V::Tensor, self::Tensor, some::Int, compute_uv::Int)
outputs__ = Int[0, 0, 0]
__cret = ccall((:atg_svd_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint),
outputs__, U.pointer, S.pointer, V.pointer, self.pointer, some, compute_uv)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
__o_3 = tensor_from_ptr(Ptr{Cvoid}(outputs__[3]))
return __o_1, __o_2, __o_3
end
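# Minimal usage sketch for `svd`: it returns the (U, S, V) triple, and `some`
# and `compute_uv` are Int flags (0 or 1) forwarded to libtorch. The
# constructor is an assumption.
#
#   x = Tensor(rand(Float64, 4, 3))   # hypothetical constructor
#   U, S, V = svd(x, 1, 1)            # thin SVD with U and V computed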
"""
symeig(self::Tensor, eigenvectors::Int, upper::Int)
Wrapper of C++ function void atg\\_symeig(tensor *out\\_\\_, tensor self, int eigenvectors, int upper)
"""
function symeig(self::Tensor, eigenvectors::Int, upper::Int)
outputs__ = Int[0, 0]
__cret = ccall((:atg_symeig, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint),
outputs__, self.pointer, eigenvectors, upper)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
symeig_out(e::Tensor, V::Tensor, self::Tensor, eigenvectors::Int, upper::Int)
Wrapper of C++ function void atg\\_symeig\\_out(tensor *out\\_\\_, tensor e, tensor V, tensor self, int eigenvectors, int upper)
"""
function symeig_out(e::Tensor, V::Tensor, self::Tensor, eigenvectors::Int, upper::Int)
outputs__ = Int[0, 0]
__cret = ccall((:atg_symeig_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint),
outputs__, e.pointer, V.pointer, self.pointer, eigenvectors, upper)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
t(self::Tensor)
Wrapper of C++ function void atg\\_t(tensor *out\\_\\_, tensor self)
"""
function t(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_t, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
t!(self::Tensor)
Wrapper of C++ function void atg\\_t\\_(tensor *out\\_\\_, tensor self)
"""
function t!(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_t_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
return self
end
"""
take(self::Tensor, index::Tensor)
Wrapper of C++ function void atg\\_take(tensor *out\\_\\_, tensor self, tensor index)
"""
function take(self::Tensor, index::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_take, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, index.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
take_out(out::Tensor, self::Tensor, index::Tensor)
Wrapper of C++ function void atg\\_take\\_out(tensor *out\\_\\_, tensor out, tensor self, tensor index)
"""
function take_out(out::Tensor, self::Tensor, index::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_take_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, index.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
import Base.tan
"""
tan(self::Tensor)
Wrapper of C++ function void atg\\_tan(tensor *out\\_\\_, tensor self)
"""
function tan(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_tan, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
tan!(self::Tensor)
Wrapper of C++ function void atg\\_tan\\_(tensor *out\\_\\_, tensor self)
"""
function tan!(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_tan_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
return self
end
"""
tan_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_tan\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function tan_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_tan_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
import Base.tanh
"""
tanh(self::Tensor)
Wrapper of C++ function void atg\\_tanh(tensor *out\\_\\_, tensor self)
"""
function tanh(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_tanh, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
tanh!(self::Tensor)
Wrapper of C++ function void atg\\_tanh\\_(tensor *out\\_\\_, tensor self)
"""
function tanh!(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_tanh_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
return self
end
"""
tanh_backward(grad_output::Tensor, output::Tensor)
Wrapper of C++ function void atg\\_tanh\\_backward(tensor *out\\_\\_, tensor grad\\_output, tensor output)
"""
function tanh_backward(grad_output::Tensor, output::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_tanh_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, grad_output.pointer, output.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
tanh_backward_out(grad_input::Tensor, grad_output::Tensor, output::Tensor)
Wrapper of C++ function void atg\\_tanh\\_backward\\_out(tensor *out\\_\\_, tensor grad\\_input, tensor grad\\_output, tensor output)
"""
function tanh_backward_out(grad_input::Tensor, grad_output::Tensor, output::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_tanh_backward_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, grad_input.pointer, grad_output.pointer, output.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
tanh_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_tanh\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function tanh_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_tanh_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
tensordot(self::Tensor, other::Tensor, dims_self_data::Array{Int64}, dims_other_data::Array{Int64})
Wrapper of C++ function void atg\\_tensordot(tensor *out\\_\\_, tensor self, tensor other, int64\\_t *dims\\_self\\_data, int dims\\_self\\_len, int64\\_t *dims\\_other\\_data, int dims\\_other\\_len)
"""
function tensordot(self::Tensor, other::Tensor, dims_self_data::Array{Int64}, dims_other_data::Array{Int64})
outputs__ = Int[0]
dims_self_len = length(dims_self_data)
dims_other_len = length(dims_other_data)
__cret = ccall((:atg_tensordot, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint),
outputs__, self.pointer, other.pointer, dims_self_data, dims_self_len, dims_other_data, dims_other_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
threshold(self::Tensor, threshold::TorchNumber, value::TorchNumber)
Wrapper of C++ function void atg\\_threshold(tensor *out\\_\\_, tensor self, scalar threshold, scalar value)
"""
function threshold(self::Tensor, threshold::TorchNumber, value::TorchNumber)
outputs__ = Int[0]
threshold_s_ = Scalar(threshold)
value_s_ = Scalar(value)
__cret = ccall((:atg_threshold, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, threshold_s_.pointer, value_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
threshold!(self::Tensor, threshold::TorchNumber, value::TorchNumber)
Wrapper of C++ function void atg\\_threshold\\_(tensor *out\\_\\_, tensor self, scalar threshold, scalar value)
"""
function threshold!(self::Tensor, threshold::TorchNumber, value::TorchNumber)
outputs__ = Int[0]
threshold_s_ = Scalar(threshold)
value_s_ = Scalar(value)
__cret = ccall((:atg_threshold_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, threshold_s_.pointer, value_s_.pointer)
return self
end
"""
threshold_backward(grad_output::Tensor, self::Tensor, threshold::TorchNumber)
Wrapper of C++ function void atg\\_threshold\\_backward(tensor *out\\_\\_, tensor grad\\_output, tensor self, scalar threshold)
"""
function threshold_backward(grad_output::Tensor, self::Tensor, threshold::TorchNumber)
outputs__ = Int[0]
threshold_s_ = Scalar(threshold)
__cret = ccall((:atg_threshold_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, grad_output.pointer, self.pointer, threshold_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
threshold_out(out::Tensor, self::Tensor, threshold::TorchNumber, value::TorchNumber)
Wrapper of C++ function void atg\\_threshold\\_out(tensor *out\\_\\_, tensor out, tensor self, scalar threshold, scalar value)
"""
function threshold_out(out::Tensor, self::Tensor, threshold::TorchNumber, value::TorchNumber)
outputs__ = Int[0]
threshold_s_ = Scalar(threshold)
value_s_ = Scalar(value)
__cret = ccall((:atg_threshold_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer, threshold_s_.pointer, value_s_.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
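# Minimal usage sketch for `threshold`: the reading assumed here, based on
# libtorch's semantics, is that threshold(x, t, v) replaces entries at or below
# t with v, so a ReLU can be written as threshold(x, 0, 0). The constructor is
# an assumption.
#
#   x = Tensor(randn(Float32, 5))   # hypothetical constructor
#   r = threshold(x, 0, 0)          # ReLU under the assumed semantics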
"""
to(self::Tensor, device::Int)
Wrapper of C++ function void atg\\_to(tensor *out\\_\\_, tensor self, int device)
"""
function to(self::Tensor, device::Int)
outputs__ = Int[0]
__cret = ccall((:atg_to, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
to1(self::Tensor, options_kind::Int, options_device::Int, non_blocking::Int, copy::Int)
Wrapper of C++ function void atg\\_to1(tensor *out\\_\\_, tensor self, int options\\_kind, int options\\_device, int non\\_blocking, int copy)
"""
function to1(self::Tensor, options_kind::Int, options_device::Int, non_blocking::Int, copy::Int)
outputs__ = Int[0]
__cret = ccall((:atg_to1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint, Cint, Cint),
outputs__, self.pointer, options_kind, options_device, non_blocking, copy)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
to2(self::Tensor, dtype::Int, non_blocking::Int, copy::Int)
Wrapper of C++ function void atg\\_to2(tensor *out\\_\\_, tensor self, int dtype, int non\\_blocking, int copy)
"""
function to2(self::Tensor, dtype::Int, non_blocking::Int, copy::Int)
outputs__ = Int[0]
__cret = ccall((:atg_to2, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint, Cint),
outputs__, self.pointer, dtype, non_blocking, copy)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
to3(self::Tensor, other::Tensor, non_blocking::Int, copy::Int)
Wrapper of C++ function void atg\\_to3(tensor *out\\_\\_, tensor self, tensor other, int non\\_blocking, int copy)
"""
function to3(self::Tensor, other::Tensor, non_blocking::Int, copy::Int)
outputs__ = Int[0]
__cret = ccall((:atg_to3, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint),
outputs__, self.pointer, other.pointer, non_blocking, copy)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
to4(self::Tensor, device::Int, dtype::Int, non_blocking::Int, copy::Int)
Wrapper of C++ function void atg\\_to4(tensor *out\\_\\_, tensor self, int device, int dtype, int non\\_blocking, int copy)
"""
function to4(self::Tensor, device::Int, dtype::Int, non_blocking::Int, copy::Int)
outputs__ = Int[0]
__cret = ccall((:atg_to4, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint, Cint, Cint),
outputs__, self.pointer, device, dtype, non_blocking, copy)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
to_dense(self::Tensor)
Wrapper of C++ function void atg\\_to\\_dense(tensor *out\\_\\_, tensor self)
"""
function to_dense(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_to_dense, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
to_dense_backward(grad::Tensor, input::Tensor)
Wrapper of C++ function void atg\\_to\\_dense\\_backward(tensor *out\\_\\_, tensor grad, tensor input)
"""
function to_dense_backward(grad::Tensor, input::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_to_dense_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, grad.pointer, input.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
to_mkldnn(self::Tensor)
Wrapper of C++ function void atg\\_to\\_mkldnn(tensor *out\\_\\_, tensor self)
"""
function to_mkldnn(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_to_mkldnn, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
to_mkldnn_backward(grad::Tensor, input::Tensor)
Wrapper of C++ function void atg\\_to\\_mkldnn\\_backward(tensor *out\\_\\_, tensor grad, tensor input)
"""
function to_mkldnn_backward(grad::Tensor, input::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_to_mkldnn_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, grad.pointer, input.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
to_sparse(self::Tensor)
Wrapper of C++ function void atg\\_to\\_sparse(tensor *out\\_\\_, tensor self)
"""
function to_sparse(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_to_sparse, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
to_sparse1(self::Tensor, sparse_dim::Int64)
Wrapper of C++ function void atg\\_to\\_sparse1(tensor *out\\_\\_, tensor self, int64\\_t sparse\\_dim)
"""
function to_sparse1(self::Tensor, sparse_dim::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_to_sparse1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, self.pointer, sparse_dim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
topk(self::Tensor, k::Int64, dim::Int64, largest::Int, sorted::Int)
Wrapper of C++ function void atg\\_topk(tensor *out\\_\\_, tensor self, int64\\_t k, int64\\_t dim, int largest, int sorted)
"""
function topk(self::Tensor, k::Int64, dim::Int64, largest::Int, sorted::Int)
outputs__ = Int[0, 0]
__cret = ccall((:atg_topk, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Clonglong, Cint, Cint),
outputs__, self.pointer, k, dim, largest, sorted)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
topk_out(values::Tensor, indices::Tensor, self::Tensor, k::Int64, dim::Int64, largest::Int, sorted::Int)
Wrapper of C++ function void atg\\_topk\\_out(tensor *out\\_\\_, tensor values, tensor indices, tensor self, int64\\_t k, int64\\_t dim, int largest, int sorted)
"""
function topk_out(values::Tensor, indices::Tensor, self::Tensor, k::Int64, dim::Int64, largest::Int, sorted::Int)
outputs__ = Int[0, 0]
__cret = ccall((:atg_topk_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Clonglong, Cint, Cint),
outputs__, values.pointer, indices.pointer, self.pointer, k, dim, largest, sorted)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
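# Minimal usage sketch for `topk`: it returns (values, indices); `largest` and
# `sorted` are Int flags (0 or 1), while `k` and `dim` are forwarded as int64.
# The constructor is an assumption.
#
#   x = Tensor(rand(Float32, 10))     # hypothetical constructor
#   vals, idx = topk(x, 3, 0, 1, 1)   # three largest values along dim 0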
"""
totype(self::Tensor, scalar_type::Int)
Wrapper of C++ function void atg\\_totype(tensor *out\\_\\_, tensor self, int scalar\\_type)
"""
function totype(self::Tensor, scalar_type::Int)
outputs__ = Int[0]
__cret = ccall((:atg_totype, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, scalar_type)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
trace(self::Tensor)
Wrapper of C++ function void atg\\_trace(tensor *out\\_\\_, tensor self)
"""
function trace(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_trace, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
import Base.transpose
"""
transpose(self::Tensor, dim0::Int64, dim1::Int64)
Wrapper of C++ function void atg\\_transpose(tensor *out\\_\\_, tensor self, int64\\_t dim0, int64\\_t dim1)
"""
function transpose(self::Tensor, dim0::Int64, dim1::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_transpose, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Clonglong),
outputs__, self.pointer, dim0, dim1)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
transpose!(self::Tensor, dim0::Int64, dim1::Int64)
Wrapper of C++ function void atg\\_transpose\\_(tensor *out\\_\\_, tensor self, int64\\_t dim0, int64\\_t dim1)
"""
function transpose!(self::Tensor, dim0::Int64, dim1::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_transpose_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Clonglong),
outputs__, self.pointer, dim0, dim1)
return self
end
"""
trapz(y::Tensor, x::Tensor, dim::Int64)
Wrapper of C++ function void atg\\_trapz(tensor *out\\_\\_, tensor y, tensor x, int64\\_t dim)
"""
function trapz(y::Tensor, x::Tensor, dim::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_trapz, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, y.pointer, x.pointer, dim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
trapz1(y::Tensor, dx::Float64, dim::Int64)
Wrapper of C++ function void atg\\_trapz1(tensor *out\\_\\_, tensor y, double dx, int64\\_t dim)
"""
function trapz1(y::Tensor, dx::Float64, dim::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_trapz1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cdouble, Clonglong),
outputs__, y.pointer, dx, dim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
triangular_solve(self::Tensor, A::Tensor, upper::Int, transpose::Int, unitriangular::Int)
Wrapper of C++ function void atg\\_triangular\\_solve(tensor *out\\_\\_, tensor self, tensor A, int upper, int transpose, int unitriangular)
"""
function triangular_solve(self::Tensor, A::Tensor, upper::Int, transpose::Int, unitriangular::Int)
outputs__ = Int[0, 0]
__cret = ccall((:atg_triangular_solve, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint, Cint),
outputs__, self.pointer, A.pointer, upper, transpose, unitriangular)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
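# Minimal usage sketch for `triangular_solve(b, A, upper, transpose,
# unitriangular)`: it returns the solution X of A*X = b together with a second
# tensor that, per libtorch, is a copy of A; the three trailing arguments are
# Int flags (0 or 1). The constructor is an assumption.
#
#   A = Tensor(rand(Float64, 3, 3))          # hypothetical constructor
#   b = Tensor(rand(Float64, 3, 1))
#   X, M = triangular_solve(b, A, 1, 0, 0)   # treat A as upper triangular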
"""
triangular_solve_out(X::Tensor, M::Tensor, self::Tensor, A::Tensor, upper::Int, transpose::Int, unitriangular::Int)
Wrapper of C++ function void atg\\_triangular\\_solve\\_out(tensor *out\\_\\_, tensor X, tensor M, tensor self, tensor A, int upper, int transpose, int unitriangular)
"""
function triangular_solve_out(X::Tensor, M::Tensor, self::Tensor, A::Tensor, upper::Int, transpose::Int, unitriangular::Int)
outputs__ = Int[0, 0]
__cret = ccall((:atg_triangular_solve_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint, Cint),
outputs__, X.pointer, M.pointer, self.pointer, A.pointer, upper, transpose, unitriangular)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
tril(self::Tensor, diagonal::Int64)
Wrapper of C++ function void atg\\_tril(tensor *out\\_\\_, tensor self, int64\\_t diagonal)
"""
function tril(self::Tensor, diagonal::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_tril, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, self.pointer, diagonal)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
tril!(self::Tensor, diagonal::Int64)
Wrapper of C++ function void atg\\_tril\\_(tensor *out\\_\\_, tensor self, int64\\_t diagonal)
"""
function tril!(self::Tensor, diagonal::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_tril_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, self.pointer, diagonal)
return self
end
"""
tril_indices(row::Int64, col::Int64, offset::Int64, options_kind::Int, options_device::Int)
Wrapper of C++ function void atg\\_tril\\_indices(tensor *out\\_\\_, int64\\_t row, int64\\_t col, int64\\_t offset, int options\\_kind, int options\\_device)
"""
function tril_indices(row::Int64, col::Int64, offset::Int64, options_kind::Int, options_device::Int)
outputs__ = Int[0]
__cret = ccall((:atg_tril_indices, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Clonglong, Clonglong, Clonglong, Cint, Cint),
outputs__, row, col, offset, options_kind, options_device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
tril_out(out::Tensor, self::Tensor, diagonal::Int64)
Wrapper of C++ function void atg\\_tril\\_out(tensor *out\\_\\_, tensor out, tensor self, int64\\_t diagonal)
"""
function tril_out(out::Tensor, self::Tensor, diagonal::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_tril_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, out.pointer, self.pointer, diagonal)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
triplet_margin_loss(anchor::Tensor, positive::Tensor, negative::Tensor, margin::Float64, p::Float64, eps::Float64, swap::Int, reduction::Int64)
Wrapper of C++ function void atg\\_triplet\\_margin\\_loss(tensor *out\\_\\_, tensor anchor, tensor positive, tensor negative, double margin, double p, double eps, int swap, int64\\_t reduction)
"""
function triplet_margin_loss(anchor::Tensor, positive::Tensor, negative::Tensor, margin::Float64, p::Float64, eps::Float64, swap::Int, reduction::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_triplet_margin_loss, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cdouble, Cdouble, Cdouble, Cint, Clonglong),
outputs__, anchor.pointer, positive.pointer, negative.pointer, margin, p, eps, swap, reduction)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
triu(self::Tensor, diagonal::Int64)
Wrapper of C++ function void atg\\_triu(tensor *out\\_\\_, tensor self, int64\\_t diagonal)
"""
function triu(self::Tensor, diagonal::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_triu, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, self.pointer, diagonal)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
triu!(self::Tensor, diagonal::Int64)
Wrapper of C++ function void atg\\_triu\\_(tensor *out\\_\\_, tensor self, int64\\_t diagonal)
"""
function triu!(self::Tensor, diagonal::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_triu_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, self.pointer, diagonal)
return self
end
"""
triu_indices(row::Int64, col::Int64, offset::Int64, options_kind::Int, options_device::Int)
Wrapper of C++ function void atg\\_triu\\_indices(tensor *out\\_\\_, int64\\_t row, int64\\_t col, int64\\_t offset, int options\\_kind, int options\\_device)
"""
function triu_indices(row::Int64, col::Int64, offset::Int64, options_kind::Int, options_device::Int)
outputs__ = Int[0]
__cret = ccall((:atg_triu_indices, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Clonglong, Clonglong, Clonglong, Cint, Cint),
outputs__, row, col, offset, options_kind, options_device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
triu_out(out::Tensor, self::Tensor, diagonal::Int64)
Wrapper of C++ function void atg\\_triu\\_out(tensor *out\\_\\_, tensor out, tensor self, int64\\_t diagonal)
"""
function triu_out(out::Tensor, self::Tensor, diagonal::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_triu_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, out.pointer, self.pointer, diagonal)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
import Base.trunc
"""
trunc(self::Tensor)
Wrapper of C++ function void atg\\_trunc(tensor *out\\_\\_, tensor self)
"""
function trunc(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_trunc, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
trunc!(self::Tensor)
Wrapper of C++ function void atg\\_trunc\\_(tensor *out\\_\\_, tensor self)
"""
function trunc!(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_trunc_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
return self
end
"""
trunc_out(out::Tensor, self::Tensor)
Wrapper of C++ function void atg\\_trunc\\_out(tensor *out\\_\\_, tensor out, tensor self)
"""
function trunc_out(out::Tensor, self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_trunc_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, out.pointer, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
type_as(self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_type\\_as(tensor *out\\_\\_, tensor self, tensor other)
"""
function type_as(self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_type_as, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
unbind(self::Tensor, dim::Int64)
Wrapper of C++ function tensor *atg\\_unbind(tensor self, int64\\_t dim)
"""
function unbind(self::Tensor, dim::Int64)
__cret = ccall((:atg_unbind, :libtorch_capi),
Ptr{Int}, (Ptr{Cvoid}, Clonglong),
self.pointer, dim)
ptrs__, i__ = Int[], 1
while true
ptr__ = unsafe_load(__cret, i__)
ptr__ == 0 && break
push!(ptrs__, ptr__)
i__ += 1
end
ccall(:free, Cvoid, (Ptr{Cvoid},), __cret)
return map(x -> tensor_from_ptr(Ptr{Nothing}(x)), ptrs__)
end
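# Minimal usage sketch for `unbind`: like `split`, it returns a Julia Vector of
# Tensors (one slice per index along `dim`), built from the zero-terminated
# pointer array returned by the C API. The constructor is an assumption.
#
#   x = Tensor(rand(Float32, 2, 3))   # hypothetical constructor
#   rows = unbind(x, 0)               # Vector of two length-3 tensors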
"""
unfold(self::Tensor, dimension::Int64, size::Int64, step::Int64)
Wrapper of C++ function void atg\\_unfold(tensor *out\\_\\_, tensor self, int64\\_t dimension, int64\\_t size, int64\\_t step)
"""
function unfold(self::Tensor, dimension::Int64, size::Int64, step::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_unfold, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Clonglong, Clonglong),
outputs__, self.pointer, dimension, size, step)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
uniform!(self::Tensor, from::Float64, to::Float64)
Wrapper of C++ function void atg\\_uniform\\_(tensor *out\\_\\_, tensor self, double from, double to)
"""
function uniform!(self::Tensor, from::Float64, to::Float64)
outputs__ = Int[0]
__cret = ccall((:atg_uniform_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cdouble, Cdouble),
outputs__, self.pointer, from, to)
return self
end
"""
unique_consecutive(self::Tensor, return_inverse::Int, return_counts::Int, dim::Int64)
Wrapper of C++ function void atg\\_unique\\_consecutive(tensor *out\\_\\_, tensor self, int return\\_inverse, int return\\_counts, int64\\_t dim)
"""
function unique_consecutive(self::Tensor, return_inverse::Int, return_counts::Int, dim::Int64)
outputs__ = Int[0, 0, 0]
__cret = ccall((:atg_unique_consecutive, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint, Clonglong),
outputs__, self.pointer, return_inverse, return_counts, dim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
__o_3 = tensor_from_ptr(Ptr{Cvoid}(outputs__[3]))
return __o_1, __o_2, __o_3
end
"""
unique_dim(self::Tensor, dim::Int64, sorted::Int, return_inverse::Int, return_counts::Int)
Wrapper of C++ function void atg\\_unique\\_dim(tensor *out\\_\\_, tensor self, int64\\_t dim, int sorted, int return\\_inverse, int return\\_counts)
"""
function unique_dim(self::Tensor, dim::Int64, sorted::Int, return_inverse::Int, return_counts::Int)
outputs__ = Int[0, 0, 0]
__cret = ccall((:atg_unique_dim, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Cint, Cint, Cint),
outputs__, self.pointer, dim, sorted, return_inverse, return_counts)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
__o_3 = tensor_from_ptr(Ptr{Cvoid}(outputs__[3]))
return __o_1, __o_2, __o_3
end
"""
unique_dim_consecutive(self::Tensor, dim::Int64, return_inverse::Int, return_counts::Int)
Wrapper of C++ function void atg\\_unique\\_dim\\_consecutive(tensor *out\\_\\_, tensor self, int64\\_t dim, int return\\_inverse, int return\\_counts)
"""
function unique_dim_consecutive(self::Tensor, dim::Int64, return_inverse::Int, return_counts::Int)
outputs__ = Int[0, 0, 0]
__cret = ccall((:atg_unique_dim_consecutive, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong, Cint, Cint),
outputs__, self.pointer, dim, return_inverse, return_counts)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
__o_3 = tensor_from_ptr(Ptr{Cvoid}(outputs__[3]))
return __o_1, __o_2, __o_3
end
"""
unsqueeze(self::Tensor, dim::Int64)
Wrapper of C++ function void atg\\_unsqueeze(tensor *out\\_\\_, tensor self, int64\\_t dim)
"""
function unsqueeze(self::Tensor, dim::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_unsqueeze, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, self.pointer, dim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
unsqueeze!(self::Tensor, dim::Int64)
Wrapper of C++ function void atg\\_unsqueeze\\_(tensor *out\\_\\_, tensor self, int64\\_t dim)
"""
function unsqueeze!(self::Tensor, dim::Int64)
outputs__ = Int[0]
__cret = ccall((:atg_unsqueeze_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Clonglong),
outputs__, self.pointer, dim)
return self
end
"""
upsample_bicubic2d(self::Tensor, output_size_data::Array{Int64}, align_corners::Int)
Wrapper of C++ function void atg\\_upsample\\_bicubic2d(tensor *out\\_\\_, tensor self, int64\\_t *output\\_size\\_data, int output\\_size\\_len, int align\\_corners)
"""
function upsample_bicubic2d(self::Tensor, output_size_data::Array{Int64}, align_corners::Int)
outputs__ = Int[0]
output_size_len = length(output_size_data)
__cret = ccall((:atg_upsample_bicubic2d, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint),
outputs__, self.pointer, output_size_data, output_size_len, align_corners)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
upsample_bicubic2d_backward(grad_output::Tensor, output_size_data::Array{Int64}, input_size_data::Array{Int64}, align_corners::Int)
Wrapper of C++ function void atg\\_upsample\\_bicubic2d\\_backward(tensor *out\\_\\_, tensor grad\\_output, int64\\_t *output\\_size\\_data, int output\\_size\\_len, int64\\_t *input\\_size\\_data, int input\\_size\\_len, int align\\_corners)
"""
function upsample_bicubic2d_backward(grad_output::Tensor, output_size_data::Array{Int64}, input_size_data::Array{Int64}, align_corners::Int)
outputs__ = Int[0]
output_size_len = length(output_size_data)
input_size_len = length(input_size_data)
__cret = ccall((:atg_upsample_bicubic2d_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Cint),
outputs__, grad_output.pointer, output_size_data, output_size_len, input_size_data, input_size_len, align_corners)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
upsample_bicubic2d_backward_out(grad_input::Tensor, grad_output::Tensor, output_size_data::Array{Int64}, input_size_data::Array{Int64}, align_corners::Int)
Wrapper of C++ function void atg\\_upsample\\_bicubic2d\\_backward\\_out(tensor *out\\_\\_, tensor grad\\_input, tensor grad\\_output, int64\\_t *output\\_size\\_data, int output\\_size\\_len, int64\\_t *input\\_size\\_data, int input\\_size\\_len, int align\\_corners)
"""
function upsample_bicubic2d_backward_out(grad_input::Tensor, grad_output::Tensor, output_size_data::Array{Int64}, input_size_data::Array{Int64}, align_corners::Int)
outputs__ = Int[0]
output_size_len = length(output_size_data)
input_size_len = length(input_size_data)
__cret = ccall((:atg_upsample_bicubic2d_backward_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Cint),
outputs__, grad_input.pointer, grad_output.pointer, output_size_data, output_size_len, input_size_data, input_size_len, align_corners)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
upsample_bicubic2d_out(out::Tensor, self::Tensor, output_size_data::Array{Int64}, align_corners::Int)
Wrapper of C++ function void atg\\_upsample\\_bicubic2d\\_out(tensor *out\\_\\_, tensor out, tensor self, int64\\_t *output\\_size\\_data, int output\\_size\\_len, int align\\_corners)
"""
function upsample_bicubic2d_out(out::Tensor, self::Tensor, output_size_data::Array{Int64}, align_corners::Int)
outputs__ = Int[0]
output_size_len = length(output_size_data)
__cret = ccall((:atg_upsample_bicubic2d_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint),
outputs__, out.pointer, self.pointer, output_size_data, output_size_len, align_corners)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
upsample_bilinear2d(self::Tensor, output_size_data::Array{Int64}, align_corners::Int)
Wrapper of C++ function void atg\\_upsample\\_bilinear2d(tensor *out\\_\\_, tensor self, int64\\_t *output\\_size\\_data, int output\\_size\\_len, int align\\_corners)
"""
function upsample_bilinear2d(self::Tensor, output_size_data::Array{Int64}, align_corners::Int)
outputs__ = Int[0]
output_size_len = length(output_size_data)
__cret = ccall((:atg_upsample_bilinear2d, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint),
outputs__, self.pointer, output_size_data, output_size_len, align_corners)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
upsample_bilinear2d_backward(grad_output::Tensor, output_size_data::Array{Int64}, input_size_data::Array{Int64}, align_corners::Int)
Wrapper of C++ function void atg\\_upsample\\_bilinear2d\\_backward(tensor *out\\_\\_, tensor grad\\_output, int64\\_t *output\\_size\\_data, int output\\_size\\_len, int64\\_t *input\\_size\\_data, int input\\_size\\_len, int align\\_corners)
"""
function upsample_bilinear2d_backward(grad_output::Tensor, output_size_data::Array{Int64}, input_size_data::Array{Int64}, align_corners::Int)
outputs__ = Int[0]
output_size_len = length(output_size_data)
input_size_len = length(input_size_data)
__cret = ccall((:atg_upsample_bilinear2d_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Cint),
outputs__, grad_output.pointer, output_size_data, output_size_len, input_size_data, input_size_len, align_corners)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
upsample_bilinear2d_backward_out(grad_input::Tensor, grad_output::Tensor, output_size_data::Array{Int64}, input_size_data::Array{Int64}, align_corners::Int)
Wrapper of C++ function void atg\\_upsample\\_bilinear2d\\_backward\\_out(tensor *out\\_\\_, tensor grad\\_input, tensor grad\\_output, int64\\_t *output\\_size\\_data, int output\\_size\\_len, int64\\_t *input\\_size\\_data, int input\\_size\\_len, int align\\_corners)
"""
function upsample_bilinear2d_backward_out(grad_input::Tensor, grad_output::Tensor, output_size_data::Array{Int64}, input_size_data::Array{Int64}, align_corners::Int)
outputs__ = Int[0]
output_size_len = length(output_size_data)
input_size_len = length(input_size_data)
__cret = ccall((:atg_upsample_bilinear2d_backward_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Cint),
outputs__, grad_input.pointer, grad_output.pointer, output_size_data, output_size_len, input_size_data, input_size_len, align_corners)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
upsample_bilinear2d_out(out::Tensor, self::Tensor, output_size_data::Array{Int64}, align_corners::Int)
Wrapper of C++ function void atg\\_upsample\\_bilinear2d\\_out(tensor *out\\_\\_, tensor out, tensor self, int64\\_t *output\\_size\\_data, int output\\_size\\_len, int align\\_corners)
"""
function upsample_bilinear2d_out(out::Tensor, self::Tensor, output_size_data::Array{Int64}, align_corners::Int)
outputs__ = Int[0]
output_size_len = length(output_size_data)
__cret = ccall((:atg_upsample_bilinear2d_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint),
outputs__, out.pointer, self.pointer, output_size_data, output_size_len, align_corners)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
upsample_linear1d(self::Tensor, output_size_data::Array{Int64}, align_corners::Int)
Wrapper of C++ function void atg\\_upsample\\_linear1d(tensor *out\\_\\_, tensor self, int64\\_t *output\\_size\\_data, int output\\_size\\_len, int align\\_corners)
"""
function upsample_linear1d(self::Tensor, output_size_data::Array{Int64}, align_corners::Int)
outputs__ = Int[0]
output_size_len = length(output_size_data)
__cret = ccall((:atg_upsample_linear1d, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint),
outputs__, self.pointer, output_size_data, output_size_len, align_corners)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
upsample_linear1d_backward(grad_output::Tensor, output_size_data::Array{Int64}, input_size_data::Array{Int64}, align_corners::Int)
Wrapper of C++ function void atg\\_upsample\\_linear1d\\_backward(tensor *out\\_\\_, tensor grad\\_output, int64\\_t *output\\_size\\_data, int output\\_size\\_len, int64\\_t *input\\_size\\_data, int input\\_size\\_len, int align\\_corners)
"""
function upsample_linear1d_backward(grad_output::Tensor, output_size_data::Array{Int64}, input_size_data::Array{Int64}, align_corners::Int)
outputs__ = Int[0]
output_size_len = length(output_size_data)
input_size_len = length(input_size_data)
__cret = ccall((:atg_upsample_linear1d_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Cint),
outputs__, grad_output.pointer, output_size_data, output_size_len, input_size_data, input_size_len, align_corners)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
upsample_linear1d_backward_out(grad_input::Tensor, grad_output::Tensor, output_size_data::Array{Int64}, input_size_data::Array{Int64}, align_corners::Int)
Wrapper of C++ function void atg\\_upsample\\_linear1d\\_backward\\_out(tensor *out\\_\\_, tensor grad\\_input, tensor grad\\_output, int64\\_t *output\\_size\\_data, int output\\_size\\_len, int64\\_t *input\\_size\\_data, int input\\_size\\_len, int align\\_corners)
"""
function upsample_linear1d_backward_out(grad_input::Tensor, grad_output::Tensor, output_size_data::Array{Int64}, input_size_data::Array{Int64}, align_corners::Int)
outputs__ = Int[0]
output_size_len = length(output_size_data)
input_size_len = length(input_size_data)
__cret = ccall((:atg_upsample_linear1d_backward_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Cint),
outputs__, grad_input.pointer, grad_output.pointer, output_size_data, output_size_len, input_size_data, input_size_len, align_corners)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
upsample_linear1d_out(out::Tensor, self::Tensor, output_size_data::Array{Int64}, align_corners::Int)
Wrapper of C++ function void atg\\_upsample\\_linear1d\\_out(tensor *out\\_\\_, tensor out, tensor self, int64\\_t *output\\_size\\_data, int output\\_size\\_len, int align\\_corners)
"""
function upsample_linear1d_out(out::Tensor, self::Tensor, output_size_data::Array{Int64}, align_corners::Int)
outputs__ = Int[0]
output_size_len = length(output_size_data)
__cret = ccall((:atg_upsample_linear1d_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint),
outputs__, out.pointer, self.pointer, output_size_data, output_size_len, align_corners)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
upsample_nearest1d(self::Tensor, output_size_data::Array{Int64})
Wrapper of C++ function void atg\\_upsample\\_nearest1d(tensor *out\\_\\_, tensor self, int64\\_t *output\\_size\\_data, int output\\_size\\_len)
"""
function upsample_nearest1d(self::Tensor, output_size_data::Array{Int64})
outputs__ = Int[0]
output_size_len = length(output_size_data)
__cret = ccall((:atg_upsample_nearest1d, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, output_size_data, output_size_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
upsample_nearest1d_backward(grad_output::Tensor, output_size_data::Array{Int64}, input_size_data::Array{Int64})
Wrapper of C++ function void atg\\_upsample\\_nearest1d\\_backward(tensor *out\\_\\_, tensor grad\\_output, int64\\_t *output\\_size\\_data, int output\\_size\\_len, int64\\_t *input\\_size\\_data, int input\\_size\\_len)
"""
function upsample_nearest1d_backward(grad_output::Tensor, output_size_data::Array{Int64}, input_size_data::Array{Int64})
outputs__ = Int[0]
output_size_len = length(output_size_data)
input_size_len = length(input_size_data)
__cret = ccall((:atg_upsample_nearest1d_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint),
outputs__, grad_output.pointer, output_size_data, output_size_len, input_size_data, input_size_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
upsample_nearest1d_backward_out(grad_input::Tensor, grad_output::Tensor, output_size_data::Array{Int64}, input_size_data::Array{Int64})
Wrapper of C++ function void atg\\_upsample\\_nearest1d\\_backward\\_out(tensor *out\\_\\_, tensor grad\\_input, tensor grad\\_output, int64\\_t *output\\_size\\_data, int output\\_size\\_len, int64\\_t *input\\_size\\_data, int input\\_size\\_len)
"""
function upsample_nearest1d_backward_out(grad_input::Tensor, grad_output::Tensor, output_size_data::Array{Int64}, input_size_data::Array{Int64})
outputs__ = Int[0]
output_size_len = length(output_size_data)
input_size_len = length(input_size_data)
__cret = ccall((:atg_upsample_nearest1d_backward_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint),
outputs__, grad_input.pointer, grad_output.pointer, output_size_data, output_size_len, input_size_data, input_size_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
upsample_nearest1d_out(out::Tensor, self::Tensor, output_size_data::Array{Int64})
Wrapper of C++ function void atg\\_upsample\\_nearest1d\\_out(tensor *out\\_\\_, tensor out, tensor self, int64\\_t *output\\_size\\_data, int output\\_size\\_len)
"""
function upsample_nearest1d_out(out::Tensor, self::Tensor, output_size_data::Array{Int64})
outputs__ = Int[0]
output_size_len = length(output_size_data)
__cret = ccall((:atg_upsample_nearest1d_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, out.pointer, self.pointer, output_size_data, output_size_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
upsample_nearest2d(self::Tensor, output_size_data::Array{Int64})
Wrapper of C++ function void atg\\_upsample\\_nearest2d(tensor *out\\_\\_, tensor self, int64\\_t *output\\_size\\_data, int output\\_size\\_len)
"""
function upsample_nearest2d(self::Tensor, output_size_data::Array{Int64})
outputs__ = Int[0]
output_size_len = length(output_size_data)
__cret = ccall((:atg_upsample_nearest2d, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, output_size_data, output_size_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
upsample_nearest2d_backward(grad_output::Tensor, output_size_data::Array{Int64}, input_size_data::Array{Int64})
Wrapper of C++ function void atg\\_upsample\\_nearest2d\\_backward(tensor *out\\_\\_, tensor grad\\_output, int64\\_t *output\\_size\\_data, int output\\_size\\_len, int64\\_t *input\\_size\\_data, int input\\_size\\_len)
"""
function upsample_nearest2d_backward(grad_output::Tensor, output_size_data::Array{Int64}, input_size_data::Array{Int64})
outputs__ = Int[0]
output_size_len = length(output_size_data)
input_size_len = length(input_size_data)
__cret = ccall((:atg_upsample_nearest2d_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint),
outputs__, grad_output.pointer, output_size_data, output_size_len, input_size_data, input_size_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
upsample_nearest2d_backward_out(grad_input::Tensor, grad_output::Tensor, output_size_data::Array{Int64}, input_size_data::Array{Int64})
Wrapper of C++ function void atg\\_upsample\\_nearest2d\\_backward\\_out(tensor *out\\_\\_, tensor grad\\_input, tensor grad\\_output, int64\\_t *output\\_size\\_data, int output\\_size\\_len, int64\\_t *input\\_size\\_data, int input\\_size\\_len)
"""
function upsample_nearest2d_backward_out(grad_input::Tensor, grad_output::Tensor, output_size_data::Array{Int64}, input_size_data::Array{Int64})
outputs__ = Int[0]
output_size_len = length(output_size_data)
input_size_len = length(input_size_data)
__cret = ccall((:atg_upsample_nearest2d_backward_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint),
outputs__, grad_input.pointer, grad_output.pointer, output_size_data, output_size_len, input_size_data, input_size_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
upsample_nearest2d_out(out::Tensor, self::Tensor, output_size_data::Array{Int64})
Wrapper of C++ function void atg\\_upsample\\_nearest2d\\_out(tensor *out\\_\\_, tensor out, tensor self, int64\\_t *output\\_size\\_data, int output\\_size\\_len)
"""
function upsample_nearest2d_out(out::Tensor, self::Tensor, output_size_data::Array{Int64})
outputs__ = Int[0]
output_size_len = length(output_size_data)
__cret = ccall((:atg_upsample_nearest2d_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, out.pointer, self.pointer, output_size_data, output_size_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
upsample_nearest3d(self::Tensor, output_size_data::Array{Int64})
Wrapper of C++ function void atg\\_upsample\\_nearest3d(tensor *out\\_\\_, tensor self, int64\\_t *output\\_size\\_data, int output\\_size\\_len)
"""
function upsample_nearest3d(self::Tensor, output_size_data::Array{Int64})
outputs__ = Int[0]
output_size_len = length(output_size_data)
__cret = ccall((:atg_upsample_nearest3d, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, output_size_data, output_size_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
upsample_nearest3d_backward(grad_output::Tensor, output_size_data::Array{Int64}, input_size_data::Array{Int64})
Wrapper of C++ function void atg\\_upsample\\_nearest3d\\_backward(tensor *out\\_\\_, tensor grad\\_output, int64\\_t *output\\_size\\_data, int output\\_size\\_len, int64\\_t *input\\_size\\_data, int input\\_size\\_len)
"""
function upsample_nearest3d_backward(grad_output::Tensor, output_size_data::Array{Int64}, input_size_data::Array{Int64})
outputs__ = Int[0]
output_size_len = length(output_size_data)
input_size_len = length(input_size_data)
__cret = ccall((:atg_upsample_nearest3d_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint),
outputs__, grad_output.pointer, output_size_data, output_size_len, input_size_data, input_size_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
upsample_nearest3d_backward_out(grad_input::Tensor, grad_output::Tensor, output_size_data::Array{Int64}, input_size_data::Array{Int64})
Wrapper of C++ function void atg\\_upsample\\_nearest3d\\_backward\\_out(tensor *out\\_\\_, tensor grad\\_input, tensor grad\\_output, int64\\_t *output\\_size\\_data, int output\\_size\\_len, int64\\_t *input\\_size\\_data, int input\\_size\\_len)
"""
function upsample_nearest3d_backward_out(grad_input::Tensor, grad_output::Tensor, output_size_data::Array{Int64}, input_size_data::Array{Int64})
outputs__ = Int[0]
output_size_len = length(output_size_data)
input_size_len = length(input_size_data)
__cret = ccall((:atg_upsample_nearest3d_backward_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint),
outputs__, grad_input.pointer, grad_output.pointer, output_size_data, output_size_len, input_size_data, input_size_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
upsample_nearest3d_out(out::Tensor, self::Tensor, output_size_data::Array{Int64})
Wrapper of C++ function void atg\\_upsample\\_nearest3d\\_out(tensor *out\\_\\_, tensor out, tensor self, int64\\_t *output\\_size\\_data, int output\\_size\\_len)
"""
function upsample_nearest3d_out(out::Tensor, self::Tensor, output_size_data::Array{Int64})
outputs__ = Int[0]
output_size_len = length(output_size_data)
__cret = ccall((:atg_upsample_nearest3d_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, out.pointer, self.pointer, output_size_data, output_size_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
upsample_trilinear3d(self::Tensor, output_size_data::Array{Int64}, align_corners::Int)
Wrapper of C++ function void atg\\_upsample\\_trilinear3d(tensor *out\\_\\_, tensor self, int64\\_t *output\\_size\\_data, int output\\_size\\_len, int align\\_corners)
"""
function upsample_trilinear3d(self::Tensor, output_size_data::Array{Int64}, align_corners::Int)
outputs__ = Int[0]
output_size_len = length(output_size_data)
__cret = ccall((:atg_upsample_trilinear3d, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint),
outputs__, self.pointer, output_size_data, output_size_len, align_corners)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
upsample_trilinear3d_backward(grad_output::Tensor, output_size_data::Array{Int64}, input_size_data::Array{Int64}, align_corners::Int)
Wrapper of C++ function void atg\\_upsample\\_trilinear3d\\_backward(tensor *out\\_\\_, tensor grad\\_output, int64\\_t *output\\_size\\_data, int output\\_size\\_len, int64\\_t *input\\_size\\_data, int input\\_size\\_len, int align\\_corners)
"""
function upsample_trilinear3d_backward(grad_output::Tensor, output_size_data::Array{Int64}, input_size_data::Array{Int64}, align_corners::Int)
outputs__ = Int[0]
output_size_len = length(output_size_data)
input_size_len = length(input_size_data)
__cret = ccall((:atg_upsample_trilinear3d_backward, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Cint),
outputs__, grad_output.pointer, output_size_data, output_size_len, input_size_data, input_size_len, align_corners)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
upsample_trilinear3d_backward_out(grad_input::Tensor, grad_output::Tensor, output_size_data::Array{Int64}, input_size_data::Array{Int64}, align_corners::Int)
Wrapper of C++ function void atg\\_upsample\\_trilinear3d\\_backward\\_out(tensor *out\\_\\_, tensor grad\\_input, tensor grad\\_output, int64\\_t *output\\_size\\_data, int output\\_size\\_len, int64\\_t *input\\_size\\_data, int input\\_size\\_len, int align\\_corners)
"""
function upsample_trilinear3d_backward_out(grad_input::Tensor, grad_output::Tensor, output_size_data::Array{Int64}, input_size_data::Array{Int64}, align_corners::Int)
outputs__ = Int[0]
output_size_len = length(output_size_data)
input_size_len = length(input_size_data)
__cret = ccall((:atg_upsample_trilinear3d_backward_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Ptr{Cvoid}, Cint, Cint),
outputs__, grad_input.pointer, grad_output.pointer, output_size_data, output_size_len, input_size_data, input_size_len, align_corners)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
upsample_trilinear3d_out(out::Tensor, self::Tensor, output_size_data::Array{Int64}, align_corners::Int)
Wrapper of C++ function void atg\\_upsample\\_trilinear3d\\_out(tensor *out\\_\\_, tensor out, tensor self, int64\\_t *output\\_size\\_data, int output\\_size\\_len, int align\\_corners)
"""
function upsample_trilinear3d_out(out::Tensor, self::Tensor, output_size_data::Array{Int64}, align_corners::Int)
outputs__ = Int[0]
output_size_len = length(output_size_data)
__cret = ccall((:atg_upsample_trilinear3d_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint),
outputs__, out.pointer, self.pointer, output_size_data, output_size_len, align_corners)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
import Base.values
"""
values(self::Tensor)
Wrapper of C++ function void atg\\_values(tensor *out\\_\\_, tensor self)
"""
function values(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_values, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
var(self::Tensor, unbiased::Int)
Wrapper of C++ function void atg\\_var(tensor *out\\_\\_, tensor self, int unbiased)
"""
function var(self::Tensor, unbiased::Int)
outputs__ = Int[0]
__cret = ccall((:atg_var, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, unbiased)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
var1(self::Tensor, dim_data::Array{Int64}, unbiased::Int, keepdim::Int)
Wrapper of C++ function void atg\\_var1(tensor *out\\_\\_, tensor self, int64\\_t *dim\\_data, int dim\\_len, int unbiased, int keepdim)
"""
function var1(self::Tensor, dim_data::Array{Int64}, unbiased::Int, keepdim::Int)
outputs__ = Int[0]
dim_len = length(dim_data)
__cret = ccall((:atg_var1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint, Cint),
outputs__, self.pointer, dim_data, dim_len, unbiased, keepdim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
var_mean(self::Tensor, unbiased::Int)
Wrapper of C++ function void atg\\_var\\_mean(tensor *out\\_\\_, tensor self, int unbiased)
"""
function var_mean(self::Tensor, unbiased::Int)
outputs__ = Int[0, 0]
__cret = ccall((:atg_var_mean, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, unbiased)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
var_mean1(self::Tensor, dim_data::Array{Int64}, unbiased::Int, keepdim::Int)
Wrapper of C++ function void atg\\_var\\_mean1(tensor *out\\_\\_, tensor self, int64\\_t *dim\\_data, int dim\\_len, int unbiased, int keepdim)
"""
function var_mean1(self::Tensor, dim_data::Array{Int64}, unbiased::Int, keepdim::Int)
outputs__ = Int[0, 0]
dim_len = length(dim_data)
__cret = ccall((:atg_var_mean1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint, Cint),
outputs__, self.pointer, dim_data, dim_len, unbiased, keepdim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
__o_2 = tensor_from_ptr(Ptr{Cvoid}(outputs__[2]))
return __o_1, __o_2
end
"""
var_out(out::Tensor, self::Tensor, dim_data::Array{Int64}, unbiased::Int, keepdim::Int)
Wrapper of C++ function void atg\\_var\\_out(tensor *out\\_\\_, tensor out, tensor self, int64\\_t *dim\\_data, int dim\\_len, int unbiased, int keepdim)
"""
function var_out(out::Tensor, self::Tensor, dim_data::Array{Int64}, unbiased::Int, keepdim::Int)
outputs__ = Int[0]
dim_len = length(dim_data)
__cret = ccall((:atg_var_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint, Cint),
outputs__, out.pointer, self.pointer, dim_data, dim_len, unbiased, keepdim)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
import Base.view
"""
view(self::Tensor, size_data::Array{Int64})
Wrapper of C++ function void atg\\_view(tensor *out\\_\\_, tensor self, int64\\_t *size\\_data, int size\\_len)
"""
function view(self::Tensor, size_data::Array{Int64})
outputs__ = Int[0]
size_len = length(size_data)
__cret = ccall((:atg_view, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, self.pointer, size_data, size_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
view_as(self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_view\\_as(tensor *out\\_\\_, tensor self, tensor other)
"""
function view_as(self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_view_as, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer, other.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
where(condition::Tensor)
Wrapper of C++ function tensor *atg\\_where(tensor condition)
"""
function where(condition::Tensor)
__cret = ccall((:atg_where, :libtorch_capi),
Ptr{Int}, (Ptr{Cvoid},),
condition.pointer)
ptrs__, i__ = Int[], 1
while true
ptr__ = unsafe_load(__cret, i__)
ptr__ == 0 && break
push!(ptrs__, ptr__)
i__ += 1
end
ccall(:free, Cvoid, (Ptr{Cvoid},), __cret)
return map(x -> tensor_from_ptr(Ptr{Nothing}(x)), ptrs__)
end
"""
where1(condition::Tensor, self::Tensor, other::Tensor)
Wrapper of C++ function void atg\\_where1(tensor *out\\_\\_, tensor condition, tensor self, tensor other)
"""
function where1(condition::Tensor, self::Tensor, other::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_where1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, condition.pointer, self.pointer, other.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
zero!(self::Tensor)
Wrapper of C++ function void atg\\_zero\\_(tensor *out\\_\\_, tensor self)
"""
function zero!(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_zero_, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
return self
end
import Base.zeros
"""
zeros(size_data::Array{Int64}, options_kind::Int, options_device::Int)
Wrapper of C++ function void atg\\_zeros(tensor *out\\_\\_, int64\\_t *size\\_data, int size\\_len, int options\\_kind, int options\\_device)
"""
function zeros(size_data::Array{Int64}, options_kind::Int, options_device::Int)
outputs__ = Int[0]
size_len = length(size_data)
__cret = ccall((:atg_zeros, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint, Cint),
outputs__, size_data, size_len, options_kind, options_device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
zeros_like(self::Tensor)
Wrapper of C++ function void atg\\_zeros\\_like(tensor *out\\_\\_, tensor self)
"""
function zeros_like(self::Tensor)
outputs__ = Int[0]
__cret = ccall((:atg_zeros_like, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
outputs__, self.pointer)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
zeros_like1(self::Tensor, options_kind::Int, options_device::Int)
Wrapper of C++ function void atg\\_zeros\\_like1(tensor *out\\_\\_, tensor self, int options\\_kind, int options\\_device)
"""
function zeros_like1(self::Tensor, options_kind::Int, options_device::Int)
outputs__ = Int[0]
__cret = ccall((:atg_zeros_like1, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Cint, Cint),
outputs__, self.pointer, options_kind, options_device)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
"""
zeros_out(out::Tensor, size_data::Array{Int64})
Wrapper of C++ function void atg\\_zeros\\_out(tensor *out\\_\\_, tensor out, int64\\_t *size\\_data, int size\\_len)
"""
function zeros_out(out::Tensor, size_data::Array{Int64})
outputs__ = Int[0]
size_len = length(size_data)
__cret = ccall((:atg_zeros_out, :libtorch_capi),
Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}, Ptr{Cvoid}, Cint),
outputs__, out.pointer, size_data, size_len)
__o_1 = tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
return __o_1
end
include("thc-opt.jl")
end
| ThArrays | https://github.com/TuringLang/ThArrays.jl.git |
|
[
"MIT"
] | 0.2.0 | 86d52c13ec76988df8b02ff8c247eac6cedd0021 | code | 1359 | using ThArrays
using Random
using Test
@testset "Tensor Gradient" begin
Random.seed!(0);
a = rand(3, 2)
b = rand(3, 2)
ad_result = [4.647295015954825 3.354657692932529;
4.820713075852873 3.557760218662402;
3.3291315962673704 3.4069531160838453]
@testset "Simple AD" begin
ta = Tensor(a, requires_grad=true)
tb = Tensor(b)
tc = ta^2 + 3ta + sin(tb) - tb
tg = Tensor(ones(3, 2))
ThAD.backward(tc, tg)
@test ThC.grad(ta) == Tensor(ad_result)
end
@testset "ThAD.gradient" begin
f(x, y) = x^2 + 3x + sin(y) - y
grads = ThAD.gradient(f, a, b; d=Tensor(ones(3,2)))
@test grads[1] == Tensor(ad_result)
end
@testset "Reset gradient" begin
t = Tensor(a, requires_grad=true)
grads = ThAD.gradient(x -> sum(2x), t)
@test grads[1] == Tensor(ones(3, 2)) * 2
grads = ThAD.gradient(x -> sum(2x), t)
@test grads[1] == Tensor(ones(3, 2)) * 4
ThAD.reset_grad!(t)
grads = ThAD.gradient(x -> sum(2x), t)
@test grads[1] == Tensor(ones(3, 2)) * 2
end
@testset "ThAD.forward" begin
f(x, y) = sum(2x + 2y)
y, back = ThAD.forward(f, a, b)
grads = back(1)
@test grads[1] == grads[2] == Tensor(ones(3, 2)) * 2
end
end
| ThArrays | https://github.com/TuringLang/ThArrays.jl.git |
|
[
"MIT"
] | 0.2.0 | 86d52c13ec76988df8b02ff8c247eac6cedd0021 | code | 529 | using ThArrays
using Test
@testset "Issues Regression" begin
@testset "Issue 7" begin
Test.@test_throws ErrorException Tensor(3, requires_grad=true)
end
@testset "Issue 8" begin
x = Tensor(rand(1, 10), requires_grad=true);
f(x) = begin
y = Tensor(0.0, requires_grad=true)
for i = 1:length(x)
y += x[i]
end
y
end
y = f(x)
ThAD.backward(y)
@test ThAD.grad(x) == Tensor(ones(1, 10))
end
end
| ThArrays | https://github.com/TuringLang/ThArrays.jl.git |
|
[
"MIT"
] | 0.2.0 | 86d52c13ec76988df8b02ff8c247eac6cedd0021 | code | 190 | include("scalar-creation.jl")
include("tensor-creation.jl")
include("tensor-arrayif.jl")
include("tensor-indexing.jl")
include("grad.jl")
include("simple-script.jl")
include("issues.jl")
| ThArrays | https://github.com/TuringLang/ThArrays.jl.git |
|
[
"MIT"
] | 0.2.0 | 86d52c13ec76988df8b02ff8c247eac6cedd0021 | code | 401 | using ThArrays
using Test
@testset "Scalar Creation" begin
@testset "Scalar creation" begin
s = Scalar(1.0)
@test s[] == 1.0
end
@testset "Scalar data type" begin
for typ in [Float16, Float32, Float64,
Bool,
Int8, Int16, Int32, Int64]
s = Scalar(typ(1))
@test s.type == typ
end
end
end
| ThArrays | https://github.com/TuringLang/ThArrays.jl.git |
|
[
"MIT"
] | 0.2.0 | 86d52c13ec76988df8b02ff8c247eac6cedd0021 | code | 374 | using ThArrays
using Test
@testset "Simple TorchScript" begin
@testset "Run simple method" begin
script = """
def main(a, b):
return a + b
"""
cu = ThJIT.compile(script)
ta = Tensor(rand(3, 2))
tb = Tensor(rand(3, 2))
# cu["main"], cu[:main], cu.main
res = cu.main(ta, tb)
@test res == ta + tb
end
end
| ThArrays | https://github.com/TuringLang/ThArrays.jl.git |
|
[
"MIT"
] | 0.2.0 | 86d52c13ec76988df8b02ff8c247eac6cedd0021 | code | 881 | using ThArrays
using Test
@testset "Tensor Array interface (except indexing)" begin
@testset "Tensor array interface" begin
ary = rand(2, 3)
ten = Tensor(ary)
@test eltype(ten) == eltype(ary)
@test ndims(ten) == ndims(ary)
@test size(ten) == size(ary)
end
@testset "Tensor iteration" begin
ary = rand(2, 3)
ten = Tensor(ary)
for (idx, val) in enumerate(ten)
@test ary[idx] == val[]
end
end
@testset "Tensor concatenation" begin
a1 = rand(2, 3)
a2 = rand(2, 3)
a3 = rand(2, 3)
a4 = rand(2, 3)
t1 = Tensor(a1)
t2 = Tensor(a2)
t3 = Tensor(a3)
t4 = Tensor(a4)
@test [t1 t2] == Tensor([a1 a2])
@test [t1; t2] == Tensor([a1; a2])
@test [t1 t2; t3 t4] == Tensor([a1 a2; a3 a4])
end
end
| ThArrays | https://github.com/TuringLang/ThArrays.jl.git |
|
[
"MIT"
] | 0.2.0 | 86d52c13ec76988df8b02ff8c247eac6cedd0021 | code | 794 | using ThArrays
using Test
@testset "Tensor Creation" begin
@testset "Creation with Array" begin
ary = rand(2, 3)
@test Tensor(ary) == Tensor(ary)
@test convert(Array, Tensor(ary)) == ary
end
@testset "Creation with Array (sharing data)" begin
ary = rand(2, 3)
t = Tensor(ary)
ThArrays.ThC.sin!(t)
@test t == Tensor(ary)
end
@testset "Creation with Array (copying data)" begin
ary = rand(2, 3)
t = Tensor(ary, detach=true)
ThArrays.ThC.sin!(t)
@test isapprox(convert(Array, t), sin.(ary), atol=0.001)
end
@testset "Create with Number (0-dim Tensor)" begin
t = Tensor(1.0)
@test size(t) == ()
@test length(t) == 1
@test t[] == 1.0
end
end
| ThArrays | https://github.com/TuringLang/ThArrays.jl.git |
|
[
"MIT"
] | 0.2.0 | 86d52c13ec76988df8b02ff8c247eac6cedd0021 | code | 1478 | using ThArrays
using Test
@testset "Tensor Indexing" begin
j_vect = rand(10)
t_vect = Tensor(j_vect)
j_data = rand(2, 3, 4)
t_data = Tensor(j_data)
@testset "Indexing with Int" begin
for i in 1:length(j_vect)
@test t_vect[i][] == j_vect[i]
end
for i in 1:length(j_data)
@test t_data[i][] == j_data[i]
end
end
@testset "Indexing with Range" begin
@test t_data[1:6] == Tensor(j_data[1:6])
@test t_data[8:12] == Tensor(j_data[8:12])
@test t_data[18:24] == Tensor(j_data[18:24])
end
@testset "Indexing with CartesianIndex" begin
@test t_data[1, 2, :] == Tensor(j_data[1, 2, :])
@test t_data[[1], 2, :] == Tensor(j_data[[1], 2, :])
@test t_data[:, 2, :] == Tensor(j_data[:, 2, :])
@test t_data[:, [2], :] == Tensor(j_data[:, [2], :])
@test t_data[1:2, 2:3, 2:4] == Tensor(j_data[1:2, 2:3, 2:4])
end
@testset "Set Index with Int" begin
for i in 1:length(j_data)
t_data[i] = i
end
for i in 1:length(j_data)
@test t_data[i][] == i
end
end
@testset "Set Index with Range" begin
t_data[1:4] = collect(1:4)
for i in 1:4
@test t_data[i][] == i
end
end
@testset "Set Index with CartesianIndex" begin
t_data[1, 1:2, 1:2] = zeros(2, 2)
@test t_data[1, 1:2, 1:2] == Tensor(zeros(2, 2))
end
end
| ThArrays | https://github.com/TuringLang/ThArrays.jl.git |
|
[
"MIT"
] | 0.2.0 | 86d52c13ec76988df8b02ff8c247eac6cedd0021 | code | 1163 | using ThArrays
using Tracker: forward, data
using Random
using Test
using ThArrays.TrackerAD: _th, _tr
@testset "Torch and Tracker Mixed Gradient" begin
Random.seed!(0);
a = rand(3, 2)
b = rand(3, 2)
@testset "Simple Mixed AD" begin
# all with th(PyTorch Backend)
f1(x, y) = sum(sin.(_th(x)) + sin.(_th(y)))
# sin with tr, (+, sum) with th
f2(x, y) = sum(_th(sin.(x)) + _th(sin.(y)))
# all with tr(Tracker Backend)
f3(x, y) = sum((sin.(x)) + (sin.(y)))
# (sin, +) with th, sum with tr
f4(x, y) = sum(_tr(sin.(_th(x)) + sin.(_th(y))))
# (sin, +) with tr, sum with th
f5(x, y) = sum(_th(sin.(x) + sin.(y)))
y1, back1 = forward(f1, a, b)
y2, back2 = forward(f2, a, b)
y3, back3 = forward(f3, a, b)
y4, back4 = forward(f4, a, b)
y5, back5 = forward(f5, a, b)
b1 = data(back1(2))
b2 = data(back2(2))
b3 = data(back3(2))
b4 = data(back4(2))
b5 = data(back5(2))
# @show y1, y2, y3, y4, y5
# @show b1, b2, b3, b4, b5
@test b1 == b2 == b3 == b4 == b5
end
end
| ThArrays | https://github.com/TuringLang/ThArrays.jl.git |
|
[
"MIT"
] | 0.2.0 | 86d52c13ec76988df8b02ff8c247eac6cedd0021 | docs | 1371 | <!-- # ( -*- mode: markdown; mode: auto-fill -*- )
-->
# ThArrays
A Julia interface for PyTorch's C++ backend.

## Features
- `ThArrays.Tensor`: PyTorch Tensor as an Array-like data type in
Julia
- `ThArrays.ThAD`: AD using PyTorch C++ backend
- `ThArrays.TrackerAD`: AD that mixes Tracker.jl and the PyTorch C++
backend, letting you choose which backend handles each operation
- `ThArrays.ThJIT`: using TorchScript in Julia
## Getting Started
1. Install the package: `] add ThArrays`
2. Read the docs [here](https://turinglang.github.io/ThArrays.jl), or
3. Experiment in the Julia REPL directly:
```julia
julia> using ThArrays
julia> t = Tensor( -rand(3, 3) )
PyTorch.Tensor{Float64, 2}:
-0.1428 -0.7099 -0.1446
-0.3447 -0.0686 -0.8287
-0.2692 -0.0501 -0.2092
[ CPUDoubleType{3,3} ]
julia> sin(t)^2 + cos(t)^2
PyTorch.Tensor{Float64, 2}:
1.0000 1.0000 1.0000
1.0000 1.0000 1.0000
1.0000 1.0000 1.0000
[ CPUDoubleType{3,3} ]
julia> ThAD.gradient(x->sum(sin(x)+x^2), rand(3,3))
(PyTorch.Tensor{Float64, 2}:
2.3776 1.5465 2.0206
1.2542 1.2081 2.1156
2.1034 1.1568 2.2599
[ CPUDoubleType{3,3} ]
,)
julia>
```
You can find more examples under the `test` directory.
| ThArrays | https://github.com/TuringLang/ThArrays.jl.git |
|
[
"MIT"
] | 0.2.0 | 86d52c13ec76988df8b02ff8c247eac6cedd0021 | docs | 122 | <!-- # ( -*- mode: markdown; mode: auto-fill -*- )
-->
# Auto Differentiation
## `ThArrays.ThAD`
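A minimal usage sketch, adapted from the package README and `test/grad.jl`:

```julia
using ThArrays

# Gradient of a scalar-valued function, computed by the PyTorch backend.
# Plain Julia arrays are accepted as inputs; the result is a Tensor.
grads = ThAD.gradient(x -> sum(sin(x) + x^2), rand(3, 3))
grads[1]                          # the gradient, as a 3x3 Tensor

# Lower-level interface: a forward pass plus a pullback, as in test/grad.jl.
f(x, y) = sum(2x + 2y)
y, back = ThAD.forward(f, rand(3, 2), rand(3, 2))
grads = back(1)                   # gradients w.r.t. both arguments
```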
## `ThArrays.TrackerAD`
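A sketch of mixing the two AD backends, adapted from `test/tracker.jl`
(Tracker.jl is required; judging from the tests, `_th(x)` routes the
following operations through the PyTorch backend and `_tr(x)` hands the
value back to Tracker):

```julia
using ThArrays
using ThArrays.TrackerAD: _th, _tr
using Tracker: forward, data

a, b = rand(3, 2), rand(3, 2)

# `sin` and `+` run on the PyTorch backend, the final `sum` on Tracker:
f(x, y) = sum(_tr(sin.(_th(x)) + sin.(_th(y))))

y, back = forward(f, a, b)
grads = data(back(1))             # same result as an all-Tracker version
```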
| ThArrays | https://github.com/TuringLang/ThArrays.jl.git |
|
[
"MIT"
] | 0.2.0 | 86d52c13ec76988df8b02ff8c247eac6cedd0021 | docs | 4848 | <!-- # ( -*- mode: markdown; mode: auto-fill -*- )
-->
# ThArrays
ThArrays is a Julia interface for PyTorch's C++ backend. It aims to
bring the fundamental facilities, e.g., `Tensor`, `AutoGrad`,
`TorchScript`, etc., to the Julia ecosystem.
## Getting Started
1. Install the package by `] add ThArrays`, or if you cloned the code
repository and intend to build it from source, set the environment
variable `export THARRAYS_DEV=1` and run `] build ThArrays`. The
build script will download the libtorch zip file, compile the
shared library, and generate many Julia methods in module
`ThArrays.ThC`. Without setting `THARRAYS_DEV`, the build script
will download the pre-built binary library instead of building it
locally.
2. Run a simple example:
```julia
julia> using ThArrays
julia> t = Tensor( -rand(3, 3) )
PyTorch.Tensor{Float64, 2}:
-0.1428 -0.7099 -0.1446
-0.3447 -0.0686 -0.8287
-0.2692 -0.0501 -0.2092
[ CPUDoubleType{3,3} ]
julia> abs(t)
PyTorch.Tensor{Float64, 2}:
0.1428 0.7099 0.1446
0.3447 0.0686 0.8287
0.2692 0.0501 0.2092
[ CPUDoubleType{3,3} ]
julia> sin(t)^2 + cos(t)^2
PyTorch.Tensor{Float64, 2}:
1.0000 1.0000 1.0000
1.0000 1.0000 1.0000
1.0000 1.0000 1.0000
[ CPUDoubleType{3,3} ]
julia> t
PyTorch.Tensor{Float64, 2}:
-0.1428 -0.7099 -0.1446
-0.3447 -0.0686 -0.8287
-0.2692 -0.0501 -0.2092
[ CPUDoubleType{3,3} ]
julia> ThC.abs!(t)
PyTorch.Tensor{Float64, 2}:
0.1428 0.7099 0.1446
0.3447 0.0686 0.8287
0.2692 0.0501 0.2092
[ CPUDoubleType{3,3} ]
julia> t
PyTorch.Tensor{Float64, 2}:
0.1428 0.7099 0.1446
0.3447 0.0686 0.8287
0.2692 0.0501 0.2092
[ CPUDoubleType{3,3} ]
julia> ThAD.gradient(x->sum(sin(x)+x^2), rand(3,3))
(PyTorch.Tensor{Float64, 2}:
2.3776 1.5465 2.0206
1.2542 1.2081 2.1156
2.1034 1.1568 2.2599
[ CPUDoubleType{3,3} ]
,)
julia>
```
Read on through the documentation to learn more about ThArrays.
## Features
ThArrays provides:
- `ThArrays.Tensor`: PyTorch Tensor as an Array-like data type in
Julia
- `ThArrays.ThAD`: AD using PyTorch C++ backend
- `ThArrays.TrackerAD`: AD that mixes Tracker.jl and the PyTorch C++
backend, letting you choose which backend handles each operation
- `ThArrays.ThJIT`: using TorchScript in Julia
## The shared library
We wrap libtorch in a shared library (`libtorch_capi`) to expose
symbols that can be called by Julia's `ccall` directly. That shared
library depends on nothing but the libtorch C++ library (that is, it
does NOT depend on Julia either), so every language or platform that
has an FFI facility like Julia's `ccall` can use it to wrap a PyTorch
library.
The files `csrc/torch_capi*` are maintained by this project; they
provide the constructors and several crucial functions of the
`Tensor` and `Scalar` types.
The files `csrc/torch_api*` are copied from project
[ocaml-torch](https://github.com/LaurentMazare/ocaml-torch) (the
`src/wrapper` directory) with a few minor modifications (remove ocaml
dependency, add a generic error handling approach, etc.).
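As a quick sanity check that the library exposes plain C symbols, one
can look an entry point up directly from Julia (a sketch; it assumes
the compiled `libtorch_capi` can be found by the system's library
loader):

```julia
using Libdl

# `atg_trunc` is one of the C entry points wrapped by ThArrays.ThC:
lib = Libdl.dlopen("libtorch_capi")
@assert Libdl.dlsym(lib, :atg_trunc) != C_NULL
Libdl.dlclose(lib)
```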
## The auto-generated `ThArrays.ThC` module
As we said in the last section, we borrowed some C++ sources from the
ocaml-torch project, and these files are auto-generated (by a program
in the ocaml-torch project and based on the YAML declaration files,
for example the file
[native_functions.yaml](https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/native_functions.yaml),
in the PyTorch project).
In this project, we use a Julia program, `src/thc/thc-generator.jl`,
to generate Julia functions that call the auto-generated C/C++
functions via `ccall`, and put them into module `ThArrays.ThC`
(`src/thc/thc.jl`).
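For example, the generated wrapper for `trunc` looks essentially like
this (condensed from the generated `src/thc/thc.jl`):

```julia
function trunc(self::Tensor)
    outputs__ = Int[0]                  # the C side writes a tensor pointer here
    ccall((:atg_trunc, :libtorch_capi),
          Cvoid, (Ptr{Cvoid}, Ptr{Cvoid}),
          outputs__, self.pointer)
    return tensor_from_ptr(Ptr{Cvoid}(outputs__[1]))
end
```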
Besides the functions in the `ThArrays.ThC` module, we can find the Python
API of type `Tensor`
[here](https://pytorch.org/docs/stable/tensors.html), and extract a
list by running:
```
cat tensors.html | perl -n -e 'print "$1\n" if (m{<code class="sig-name descname">(.+)</code>.*x2192; Tensor}i);' | uniq
```
The result of this command is saved as `python-api-tensor.txt` under
this directory. If you find a convenient API in it that is missing
from this package, tell us and we can add it.
Another place to find functions on `Tensor` is [the C++ API
document](https://pytorch.org/cppdocs/api/namespace_at.html#functions).
## Build with CUDA support
By default, if you install this package using Julia's package
manager (`Pkg`), it only supports Tensors on the CPU. It also supports
Tensors on a CUDA GPU if you:
1. have CUDA installed on your machine
2. download libtorch with CUDA support and unzip it to the
`csrc/libtorch` directory of this package
3. `export THARRAYS_DEV=1`
4. start Julia, run `] build ThArrays`
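For reference, steps 3 and 4 can also be done from within a Julia
session (setting the variable in `ENV` before building should be
equivalent to exporting it in the shell):

```julia
ENV["THARRAYS_DEV"] = "1"
using Pkg
Pkg.build("ThArrays")
```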
| ThArrays | https://github.com/TuringLang/ThArrays.jl.git |
|
[
"MIT"
] | 0.2.0 | 86d52c13ec76988df8b02ff8c247eac6cedd0021 | docs | 187 | <!-- # ( -*- mode: markdown; mode: auto-fill -*- )
-->
# API Reference
This page provides a comprehensive reference for ThArrays functionality.
## Tensor
```@docs
ThArrays.Tensor
```
| ThArrays | https://github.com/TuringLang/ThArrays.jl.git |
|
[
"MIT"
] | 0.2.0 | 86d52c13ec76988df8b02ff8c247eac6cedd0021 | docs | 93 | <!-- # ( -*- mode: markdown; mode: auto-fill -*- )
-->
# Tensor
## Tensor
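A few basic operations, adapted from the package's tests
(`test/tensor-creation.jl`, `test/tensor-arrayif.jl`,
`test/tensor-indexing.jl`):

```julia
using ThArrays

a = rand(2, 3)
t = Tensor(a)                 # wraps `a` and shares its data; Tensor(a, detach=true) copies
eltype(t), ndims(t), size(t)  # match the wrapped array
t[2][]                        # indexing yields a 0-dim Tensor; `[]` extracts the value
t[1:4] == Tensor(a[1:4])      # range indexing
t[3] = 0.5                    # element assignment
convert(Array, t)             # convert back into a plain Julia Array
```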
## Tensor on GPU
| ThArrays | https://github.com/TuringLang/ThArrays.jl.git |
|
[
"MIT"
] | 0.2.0 | 86d52c13ec76988df8b02ff8c247eac6cedd0021 | docs | 96 | <!-- # ( -*- mode: markdown; mode: auto-fill -*- )
-->
# TorchScript Support: `ThArrays.ThJIT`
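A minimal example, adapted from `test/simple-script.jl`:

```julia
using ThArrays

script = """
def main(a, b):
    return a + b
"""
cu = ThJIT.compile(script)            # compile the TorchScript source
a, b = Tensor(rand(3, 2)), Tensor(rand(3, 2))
cu.main(a, b) == a + b                # methods are reachable as cu.main, cu[:main], or cu["main"]
```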
| ThArrays | https://github.com/TuringLang/ThArrays.jl.git |
|
[
"MIT"
] | 0.3.2 | 139b29d9ca2af86113a901a4fff40f8c4ba00d71 | code | 812 | #Test code
#-------------------------------------------------------------------------------
using SpiceData
include("importCppSimData.jl")
#No real test code yet... just demonstrate use:
stdout_ct = IOContext(stdout, :compact=>true)
testpath(testfile::String) = joinpath(CppSimData.rootpath, "core/data", testfile)
testfile = "test.tr0"
filepath = testpath(testfile)
println("\nLoad $filepath:")
reader = SpiceData._open(filepath)
@show(reader)
println("\nCompact output:")
show(stdout_ct, reader)
println("\n")
println("\nRead in list of signal names:")
@show names(reader)
signame = reader.sweepname
println("\nRead in sweep vector \"$signame\":")
t = reader.sweep
@show t[1], t[end]
signame = "vin"
println("\nRead in \"$signame\" vector:")
v = read(reader, signame)
@show v[1], v[end]
:Test_Complete
| SpiceData | https://github.com/ma-laforge/SpiceData.jl.git |
|
[
"MIT"
] | 0.3.2 | 139b29d9ca2af86113a901a4fff40f8c4ba00d71 | code | 346 | #Import CppSimData or explain how to install.
try
eval(:(import CppSimData))
catch e
msg = "This sample requires data installed in the CppSimData module."
msg *= "\nTo continue demo, install with the following:\n\n"
msg *= " Pkg.clone(\"git://github.com/ma-laforge/CppSimData.jl\")"
@info(msg)
println();println()
rethrow(e)
end
#Last line
| SpiceData | https://github.com/ma-laforge/SpiceData.jl.git |
|
[
"MIT"
] | 0.3.2 | 139b29d9ca2af86113a901a4fff40f8c4ba00d71 | code | 975 | #SpiceData: A pure Julia SPICE data reader
#-------------------------------------------------------------------------------
__precompile__(true)
#=
TAGS:
#WANTCONST, HIDEWARN_0.7
=#
module SpiceData
include("base.jl")
include("show.jl")
#==DataReader object: public members
================================================================================
.sweepname
.signalnames
.sweep #Sweep values
==#
#==Exported symbols
===============================================================================#
#==Un-"exported" symbols
================================================================================
_open(filepath::String)::DataReader
==#
#==Other interface tools (symbols not exported to avoid collisions):
================================================================================
#Already in base:
Base.names(reader::DataReader)
Base.read(reader::DataReader, signum::Int)
Base.read(reader::DataReader, signame::String)
==#
end #module
| SpiceData | https://github.com/ma-laforge/SpiceData.jl.git |
|
[
"MIT"
] | 0.3.2 | 139b29d9ca2af86113a901a4fff40f8c4ba00d71 | code | 2042 | #Tools to help get a feel for Tr0
module Tr0Tools
#==Aliases
===============================================================================#
const DataWord = UInt32
#==Constants
===============================================================================#
const WRITEBLOCK_SYNCWORD = DataWord(0x4)
#==Main Types
===============================================================================#
struct BlockHeader
_type::DataWord
_size::DataWord
end
#==Helper Functions
===============================================================================#
#Base.hex() was removed in Julia 1.0; define the same helper used in base.jl:
hex(x::Integer) = string(x, base=16)
function corruptword_exception(io::IO, w::DataWord, expected::DataWord)
pos = position(io) - sizeof(DataWord)
pos = hex(pos)
w = hex(w)
expected = hex(expected)
return "Corrupt word @ 0x$pos: 0x$w, 0x$expected"
end
function readsyncword(io::IO)
w = read(io, DataWord)
if w != WRITEBLOCK_SYNCWORD
throw(corruptword_exception(io, w, WRITEBLOCK_SYNCWORD))
end
end
#Data read:
function _dread(io::IO, ::Type{BlockHeader})
readsyncword(io)
_type = read(io, DataWord)
readsyncword(io)
_size = read(io, DataWord)
return BlockHeader(_type, _size)
end
function _show(io::IO, hdr::BlockHeader, pos::Int)
print(io, "Block: 0x", hex(WRITEBLOCK_SYNCWORD))
print(io, " 0x", hex(hdr._type))
print(io, " 0x", hex(WRITEBLOCK_SYNCWORD))
print(io, " 0x", hex(hdr._size))
print(io, " (start 0x", hex(pos), ")")
println(io)
end
#==Main functions
===============================================================================#
function dumpsegments(io::IO, filepath::String)
r = open(filepath)
blockcount = 0
totalsize = 0
while !eof(r)
pos = position(r)
hdr = _dread(r, BlockHeader)
_show(io, hdr, pos)
blockcount += 1
totalsize += hdr._size
seek(r, position(r)+hdr._size)
blksize = read(r, DataWord)
if blksize != hdr._size
throw(corruptword_exception(r, blksize, hdr._size))
end
end
close(r)
println(io, "Blocks read: $blockcount, total size: $totalsize.")
end
dumpsegments(filepath::String) = dumpsegments(stdout, filepath)
end #module
| SpiceData | https://github.com/ma-laforge/SpiceData.jl.git |
|
[
"MIT"
] | 0.3.2 | 139b29d9ca2af86113a901a4fff40f8c4ba00d71 | code | 11734 | #SpiceData: Base types & core functions
#-------------------------------------------------------------------------------
#==Aliases
===============================================================================#
const DataWord = UInt32
#==Constants
===============================================================================#
const SIGNAME_BUFSIZE = 256 #Maximum supported length
const WRITEBLOCK_SYNCWORD = DataWord(0x4)
const WRITEBLOCK_HEADERSIZE = 4*sizeof(DataWord)
#TODO: Not convinced this code is really "block type"... or if ids match their meaning.
const BLOCKTYPEID_HEADER = DataWord(0x70)
const BLOCKTYPEID_DATA = DataWord(0x80)
const DATATYPEID_VOLTAGE = 1
const DATATYPEID_CURRENT = 8
const SWEEPDATA_LASTPOINT = 1e30 #Can detect end of data with this.
#==Main Types
===============================================================================#
abstract type Endianness end #Of data file
struct BigEndian <: Endianness; end
struct LittleEndian <: Endianness; end
const NetworkEndianness = BigEndian #Informative only
#=Comment:
Apparently SPICE files used to use network endianness (big-endian), but are now
little-endian.
=#
abstract type SpiceFormat end
struct Format_Unknown <: SpiceFormat; end
struct Format_9601 <: SpiceFormat; end #x: 32-bit floats, y: 32-bit floats
struct Format_9602 <: SpiceFormat; end #x: 64-bit floats, y: 64-bit floats
struct Format_2001 <: SpiceFormat; end #x: 64-bit floats, y: 64-bit floats
struct Format_2013 <: SpiceFormat; end #x: 64-bit floats, y: 32-bit floats
#=Comment:
I believe 9602 is a non-standard format used by CppSim.
=#
struct BlockHeader
typeid::DataWord #TODO: is this really type id?
_size::DataWord #Number of bits in current block
end
mutable struct BlockReader{T<:Endianness}
io::IO
header::BlockHeader
endpos::Int
end
#NOTE: Parameterized so we can specialize (dispatch) on endianness.
#Convenience:
Endianness(::BlockReader{E}) where E<:Endianness = E
#TODO: Not sure if the SpiceTags are named correctly:
mutable struct SpiceTags
id::String
date::String
time::String
comments::String
end
SpiceTags() = SpiceTags("", "", "", "")
#SPICE file reader: Main object
#-------------------------------------------------------------------------------
mutable struct DataReader
io::IOStream
filepath::String #Informative only
format::SpiceFormat
sweepname::String
signalnames::Vector{String}
sweep::Vector
tags::SpiceTags
datastart::Int
rowsize::Int
endianness::Endianness
end
#==Helper Functions
===============================================================================#
hex(x::Integer) = string(x, base=16)
printable(v::String) = all(isprint, v) ? v : "" #isprint() only accepts Char in Julia 1.x
_reorder(v::T, ::BigEndian) where T = ntoh(v)
_reorder(v::T, ::LittleEndian) where T = ltoh(v)
#Debug: show block header info
function _show(io::IO, hdr::BlockHeader, pos::Int)
print(io, "Block: 0x", hex(WRITEBLOCK_SYNCWORD))
print(io, " 0x", hex(hdr.typeid))
print(io, " 0x", hex(WRITEBLOCK_SYNCWORD))
print(io, " 0x", hex(hdr._size))
print(io, " (start 0x", hex(pos), ")")
println(io)
end
#==Exceptions
===============================================================================#
function corruptword_exception(io::IO, w::DataWord, expected::DataWord)
pos = position(io) - sizeof(DataWord)
pos = hex(pos)
w = hex(w)
expected = hex(expected)
return "Corrupt word 0x$w @ 0x$pos (expected 0x$expected)"
end
function stringboundary_exception(io::IO, )
hpos = hex(position(io))
return "Reading string across block boundary: 0x$hpos"
end
#==
===============================================================================#
#-------------------------------------------------------------------------------
xtype(::Format_9601) = Float32
ytype(::Format_9601) = Float32
xtype(::Format_9602) = Float64
ytype(::Format_9602) = Float64
xtype(::Format_2001) = Float64
ytype(::Format_2001) = Float64
xtype(::Format_2013) = Float64
ytype(::Format_2013) = Float32
#IO reads
#-------------------------------------------------------------------------------
_read(io::IO, ::Type{T}, endianness::Endianness) where T<:Real =
_reorder(read(io, T), endianness)
#Read in a WRITEBLOCK_SYNCWORD & validate:
function readsyncword(io::IO, endianness::Endianness)
w = _read(io, DataWord, endianness)
if w != WRITEBLOCK_SYNCWORD
throw(corruptword_exception(io, w, WRITEBLOCK_SYNCWORD))
end
end
#Read in a block header:
function _read(io::IO, ::Type{BlockHeader}, endianness::Endianness)
readsyncword(io, endianness)
typeid = _read(io, DataWord, endianness)
readsyncword(io, endianness)
_size = _read(io, DataWord, endianness)
return BlockHeader(typeid, _size)
end
#Block reader
#-------------------------------------------------------------------------------
bytesleft(r::BlockReader) = (r.endpos - position(r.io))
canread(r::BlockReader, nbytes::Int) = bytesleft(r) >= nbytes
function nextblock(r::BlockReader{E}) where E
seek(r.io, r.endpos)
sz = _read(r.io, DataWord, E())
if sz != r.header._size
hpos = hex(position(r.io) - 1)
throw("Inconsistent block size @ 0x$hpos.")
end
r.header = _read(r.io, BlockHeader, E())
r.endpos = position(r.io) + r.header._size
return r
end
function _skip(r::BlockReader, offset::Int)
while offset > 0
rmg = offset - bytesleft(r)
if rmg > 0
nextblock(r)
offset = rmg
else
return skip(r.io, offset)
end
end
end
function _read(r::BlockReader{E}, ::Type{T}) where {E, T<:Number}
#NOTE: don't check if bytesleft<0... checked by reading BlockHeader
if bytesleft(r) < 1
nextblock(r)
end
if !canread(r, sizeof(T))
hpos = hex(position(r.io))
throw("Cannot read $T @ 0x$hpos")
end
return _read(r.io, T, E())
end
#Read in fixed-length string:
function _read(r::BlockReader, ::Type{String}, nchars::Int)
if !canread(r, nchars)
throw(stringboundary_exception(r.io))
end
buf = Array{UInt8}(undef, nchars)
readbytes!(r.io, buf)
return String(buf)
end
#Read in space-delimited string:
function readsigname(r::BlockReader)
DELIM = UInt8(' ') #WANTCONST
buf = Array{UInt8}(undef, SIGNAME_BUFSIZE)
#Predicate: true for bytes that belong to a name (i.e. anything other than the delimiter).
isnamechar(v::UInt8) = (v != DELIM)
lastchar = DELIM
while DELIM == lastchar
lastchar = _read(r, UInt8)
end
if !isnamechar(lastchar)
hpos = hex(position(r.io)-1)
throw("Invalid string @ 0x$hpos")
end
i = 1
while isnamechar(lastchar)
buf[i] = lastchar
lastchar = _read(r, UInt8)
i+=1
if i > SIGNAME_BUFSIZE
throw("Insufficient buffer size: 'SIGNAME_BUFSIZE'")
end
end
buf[i] = 0
return unsafe_string(pointer(buf))
end
#==Constructors
===============================================================================#
#"Construct" a BlockReader, by reading a header from IO:
function BlockReader(io::IO, endianness::Endianness; start::Int=0)
seek(io, start)
hdr = _read(io, BlockHeader, endianness)
endpos = position(io) + hdr._size
return BlockReader{typeof(endianness)}(io, hdr, endpos)
end
#==Main functions
===============================================================================#
#Detect endianness from first word:
function _read(io::IO, ::Type{Endianness})
w = read(io, DataWord)
for endianness in [LittleEndian(), BigEndian()]
if WRITEBLOCK_SYNCWORD == _reorder(w, endianness)
return endianness
end
end
throw(corruptword_exception(io, w, WRITEBLOCK_SYNCWORD))
end
#Read in SPICE data file format:
function _read(r::BlockReader, ::Type{SpiceFormat})
versiontxt = strip(_read(r, String, 8))
try
version = parse(Int, versiontxt)
if 9601 == version
return Format_9601()
elseif 9602 == version
return Format_9602()
elseif 2001 == version
return Format_2001()
elseif 2013 == version
return Format_2013()
end
catch
end
versiontxt = printable(versiontxt)
throw("SPICE data format not recognized: '$versiontxt'")
end
#Read in signal names:
function readnames(r::BlockReader, datacolumns::Int)
for i in 1:datacolumns
sigtype = _read(r, String, 8)
try
parse(Int, sigtype)
catch
hpos = hex(position(r.io) - 8)
sigtype = printable(sigtype)
throw("Non-numerical signal type '$sigtype' @ 0x$hpos")
end
end
sweepname = readsigname(r)
nsigs = datacolumns - 1
signalnames = Array{String}(undef, nsigs)
for i in 1:length(signalnames)
signalnames[i] = readsigname(r)
end
return (sweepname, signalnames)
end
#Read in signal data to vector d.
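#Rows are stored interleaved as [x, y1, y2, ..., yN] for each sweep point. `offset` selects
#which column to start at, and after each value the reader skips `rowsize - sizeof(T)` bytes
#to land on the same column of the next row. The loop normally terminates when the reader
#runs out of data (the read throws and the exception is swallowed below).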
function readsignal!(r::BlockReader, d::Vector{T}, offset::Int, rowsize::Int) where T
rowskip = rowsize - sizeof(T)
_skip(r, offset)
lastrowcomplete = false
npoints = 0
lastpos = 0
try
while npoints < length(d) #Stop before overrunning the destination vector.
val = _read(r, T)
npoints += 1
d[npoints] = val
lastrowcomplete = false
#lastpos=position(r.io)
_skip(r, rowskip)
lastrowcomplete = true
end
catch
end
#When reading main sweep (offset == 0):
#Get rid of last value if last row is not completely written:
if 0 == offset && !lastrowcomplete
#hpos = hex(lastpos); chpos = hex(position(r.io))
#throw("INCOMPLETE DATASET: lastpos @0x$hpos (curpos @0x$chpos)")
npoints = max(0, npoints-1)
end
return resize!(d, npoints)
end
#Read in main sweep vector:
function readsweep(r::BlockReader, fmt::SpiceFormat, rowsize::Int)
#Compute estimated signal length:
curpos = position(r.io)
sz = filesize(r.io)
estimatedlen = div(sz - curpos, rowsize)
data = Array{xtype(fmt)}(undef, estimatedlen)
return readsignal!(r, data, 0, rowsize)
end
#Read in signal by number:
function readsignal(r::DataReader, signum::Int)
nsigs = length(r.signalnames)
if signum < 1 || signum > nsigs
throw("Invalid signal number: $signum ∉ [1, $nsigs].")
end
blkreader = BlockReader(r.io, r.endianness, start=r.datastart)
_xtype = xtype(r.format); _ytype = ytype(r.format)
offset = sizeof(_xtype) + (signum-1) * sizeof(_ytype)
data = Array{_ytype}(undef, length(r.sweep))
return readsignal!(blkreader, data, offset, r.rowsize)
end
#Read in a SPICE file from path:
function _open(filepath::String)
io = open(filepath, "r")
endianness = _read(io, Endianness)
blkreader = BlockReader(io, endianness, start=0)
#Read in signal counts:
count1 = _read(blkreader, String, 4)
#What are other counts for? Are they counts?
count2 = _read(blkreader, String, 4)
count3 = _read(blkreader, String, 4)
count4 = _read(blkreader, String, 4)
try
count1 = parse(Int, count1)
count2 = parse(Int, count2)
catch
throw("Invalid signal count.")
end
datacolumns = Int(count1)+Int(count2)
#Read in file format:
format = _read(blkreader, SpiceFormat)
#Read in "tags":
header = SpiceTags(
strip(_read(blkreader, String, 4*16)), #id
strip(_read(blkreader, String, 16)), #date
strip(_read(blkreader, String, 8)), #time
strip(_read(blkreader, String, 4*16+8)) #comments
)
#Read in signal names:
_skip(blkreader, 5*16) #Why? What is here?
sweepname, signalnames = readnames(blkreader, datacolumns)
#Compute row size:
nsigs = length(signalnames)
_xtype = xtype(format); _ytype = ytype(format)
rowsize = sizeof(_xtype) + nsigs*sizeof(_ytype)
#Move to start of first data block:
nextblock(blkreader)
datastart = position(blkreader.io) - WRITEBLOCK_HEADERSIZE
sweep = readsweep(blkreader, format, rowsize)
return DataReader(io, filepath, format,
sweepname, signalnames, sweep,
header, datastart, rowsize, endianness
)
end
#==Higher-level interface
===============================================================================#
#_open(filepath::String)
Base.names(r::DataReader) = r.signalnames
Base.read(r::DataReader, signum::Int) = readsignal(r, signum)
function Base.read(r::DataReader, signame::String)
signum = findfirst(isequal(signame), r.signalnames)
if nothing == signum
throw("Signal not found: $signame.")
end
return readsignal(r, signum)
end
Base.close(r::DataReader) = close(r.io)
#Last line
| SpiceData | https://github.com/ma-laforge/SpiceData.jl.git |
|
[
"MIT"
] | 0.3.2 | 139b29d9ca2af86113a901a4fff40f8c4ba00d71 | code | 1458 | #SpiceData: Show functions
#-------------------------------------------------------------------------------
Base.show(io::IO, ::BigEndian) = print(io, "BigEndian")
Base.show(io::IO, ::LittleEndian) = print(io, "LittleEndian")
_showcompact(io::IO, ::SpiceFormat) = print(io, "Format:Unknown")
_showcompact(io::IO, ::Format_9601) = print(io, "SPICE:9601")
_showcompact(io::IO, ::Format_9602) = print(io, "CppSim:9602")
_showcompact(io::IO, ::Format_2001) = print(io, "SPICE:2001")
_showcompact(io::IO, ::Format_2013) = print(io, "SPICE:2013")
Base.show(io::IO, fmt::Format_Unknown) = _showcompact(io, fmt)
function Base.show(io::IO, fmt::SpiceFormat)
_showcompact(io, fmt)
print(io, " (x: $(xtype(fmt))[], y: $(ytype(fmt))[])")
end
function _show(io::IO, r::DataReader, compact::Bool = false)
#Base (compact) information:
print(io, DataReader, "(")
print(io, basename(r.filepath))
print(io, ", nsig=", length(r.signalnames))
print(io, ", npts=", length(r.sweep))
print(io, ", ")
print(io, r.format)
print(io, ")")
if compact; return; end
#Extended information:
println(io)
print(io, ">> (", r.endianness, ")")
print(io, " sweep = '", r.sweepname, "'")
println(io)
tags = r.tags
println(io, ">> ", tags.date, " (", tags.time, ")")
println(io, ">> ", tags.id)
println(io, ">> ", tags.comments)
end
Base.show(io::IO, r::DataReader) = _show(io, r)
Base.show(io::IOContext, r::DataReader) = _show(io, r, haskey(io.dict, :compact))
#End
| SpiceData | https://github.com/ma-laforge/SpiceData.jl.git |
|
[
"MIT"
] | 0.3.2 | 139b29d9ca2af86113a901a4fff40f8c4ba00d71 | code | 244 | #Test code
#-------------------------------------------------------------------------------
using SpiceData
@warn("No real test code yet... just ensuring that \"using\" works.")
@info("See sample/demo*.jl for sample usage.")
:Test_Complete
| SpiceData | https://github.com/ma-laforge/SpiceData.jl.git |
|
[
"MIT"
] | 0.3.2 | 139b29d9ca2af86113a901a4fff40f8c4ba00d71 | docs | 1254 | # SpiceData.jl
[Build Status](https://travis-ci.org/ma-laforge/SpiceData.jl)
## Description
The SpiceData.jl module provides a pure-Julia SPICE data file reader inspired by Michael H. Perrott's CppSim reader.
## Sample Usage
Examples on how to use the SpiceData.jl capabilities can be found under the [sample directory](sample/).
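Below is a minimal sketch of the reader interface (the file path is a placeholder, and the reader is created here via the internal `SpiceData._open`; prefer the entry points shown in the sample scripts):

```julia
using SpiceData

reader = SpiceData._open("ringosc.tr0") # DataReader for a SPICE output file (placeholder path)
signals = names(reader)                 # names of all signals stored in the file
x = reader.sweep                        # main sweep (x-axis) values
y = read(reader, signals[1])            # y-values of the first signal
close(reader)
```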
<a name="Installation"></a>
## Installation
julia> Pkg.add("SpiceData")
## Resources/Acknowledgments
### CppSim and NGspice Data Modules for Python
The following are links to Michael H. Perrott's original tools:
- **CppSim**: <http://www.cppsim.com/index.html>.
- **Hspice Toolbox**: <http://www.cppsim.com/download_hspice_tools.html>.
## Known Limitations
### Supported file formats
SpiceData currently supports the following SPICE file formats:
- 9601 (32-bit x-values & 32-bit y-values)
- 9602 (CppSim-specific format? 64-bit x-values & 64-bit y-values?)
- 2001 (64-bit x-values & 64-bit y-values)
- 2013 (64-bit x-values & 32-bit y-values)
### Compatibility
Extensive compatibility testing of SpiceData.jl has not been performed. The module has been tested using the following environment(s):
- Linux / Julia-1.1.1 (64-bit)
| SpiceData | https://github.com/ma-laforge/SpiceData.jl.git |
|
[
"MIT"
] | 0.1.0 | 5810e8b9d35071a43c67cf357730e4467252696c | code | 2790 | module ChunkedJSONL
export parse_file, DebugContext
export consume!, setup_tasks!, task_done!
using JSON3
using SnoopPrecompile
using ChunkedBase
using SentinelArrays.BufferedVectors
struct ParsingContext <: AbstractParsingContext
ignoreemptyrows::Bool
end
_nonspace(b::UInt8) = !isspace(Char(b))
include("result_buffer.jl")
include("consume_context.jl")
include("row_parsing.jl")
function parse_file(
input,
consume_ctx::AbstractConsumeContext=DebugContext();
# In bytes. This absolutely has to be larger than any single row.
# Much safer if any two consecutive rows are smaller than this threshold.
buffersize::Integer=Threads.nthreads() * 1024 * 1024,
nworkers::Integer=Threads.nthreads(),
limit::Int=0,
skipto::Int=0,
comment::Union{Nothing,String,Char,UInt8,Vector{UInt8}}=nothing,
ignoreemptyrows::Bool=true,
newlinechar::Union{UInt8,Char,Nothing}=UInt8('\n'),
use_mmap::Bool=false,
_force::Symbol=:default,
)
_force in (:default, :serial, :parallel) || throw(ArgumentError("`_force` argument must be one of (:default, :serial, :parallel)."))
if !isnothing(newlinechar)
newlinechar = UInt8(newlinechar)
sizeof(newlinechar) > 1 && throw(ArgumentError("`newlinechar` must be a single-byte character."))
end
should_close, io = ChunkedBase._input_to_io(input, use_mmap)
parsing_ctx = ParsingContext(ignoreemptyrows)
chunking_ctx = ChunkingContext(buffersize, nworkers, limit, comment)
# chunking_ctx.bytes is now filled with `bytes_read_in` bytes, we've skipped over BOM
# and since the third argument is true, we also skipped over any leading whitespace.
bytes_read_in = ChunkedBase.initial_read!(io, chunking_ctx, true)
newline = isnothing(newlinechar) ?
ChunkedBase._detect_newline(chunking_ctx.bytes, 1, bytes_read_in) :
UInt8(newlinechar)
lexer = Lexer(io, nothing, newline)
ChunkedBase.initial_lex!(lexer, chunking_ctx, bytes_read_in)
ChunkedBase.skip_rows_init!(lexer, chunking_ctx, skipto, ignoreemptyrows)
nrows = length(chunking_ctx.newline_positions) - 1
try
if ChunkedBase.should_use_parallel(chunking_ctx, _force)
ntasks = tasks_per_chunk(chunking_ctx)
nbuffers = total_result_buffers_count(chunking_ctx)
result_buffers = TaskResultBuffer[TaskResultBuffer(id, cld(nrows, ntasks)) for id in 1:nbuffers]
parse_file_parallel(lexer, parsing_ctx, consume_ctx, chunking_ctx, result_buffers, Tuple{})
else
result_buf = TaskResultBuffer(0, nrows)
parse_file_serial(lexer, parsing_ctx, consume_ctx, chunking_ctx, result_buf, Tuple{})
end
finally
should_close && close(io)
end
return nothing
end
end # module
| ChunkedJSONL | https://github.com/RelationalAI/ChunkedJSONL.jl.git |
|
[
"MIT"
] | 0.1.0 | 5810e8b9d35071a43c67cf357730e4467252696c | code | 1578 | struct DebugContext <: AbstractConsumeContext; end
function ChunkedBase.consume!(consume_ctx::DebugContext, payload::ParsedPayload)
task_buf = payload.results
io = IOBuffer()
write(io, string("Start row: ", payload.row_num, ", nrows: ", length(task_buf.tapeidxs), ", $(Base.current_task()) "))
printstyled(IOContext(io, :color => true), "❚", color=Int(hash(Base.current_task()) % UInt8))
println(io)
@info String(take!(io))
return nothing
end
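# Collects every parsed JSON value together with its row number into plain Julia containers.
# The lock guards the two vectors because `consume!` may run concurrently on several parser
# tasks; `sort!` below restores the original row order afterwards.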
struct ValueExtractionContext <: AbstractConsumeContext
elements::Vector{Union{Dict{Symbol},Vector,String,Nothing,Bool,Float64,Int}}
indices::Vector{Int}
lock::ReentrantLock
end
ValueExtractionContext() = ValueExtractionContext([], Int[], ReentrantLock())
function Base.sort!(ctx::ValueExtractionContext)
ctx.elements .= @view ctx.elements[sortperm(ctx.indices)]
ctx.indices .= 1:length(ctx.indices)
return ctx
end
function ChunkedBase.consume!(consume_ctx::ValueExtractionContext, payload::ParsedPayload)
tape = payload.results.tape
buf = payload.chunking_ctx.bytes
row = Int(payload.row_num)
@inbounds for tapeidx in payload.results.tapeidxs
t = tape[tapeidx]
val = JSON3.getvalue(Any, buf, tape, tapeidx, t)
if isa(val, Union{JSON3.Object,JSON3.Array})
val = copy(val)
end
lock(consume_ctx.lock)
try
push!(consume_ctx.elements, val)
push!(consume_ctx.indices, row)
finally
unlock(consume_ctx.lock)
end
row += 1
end
return nothing
end
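# Example (sketch) of pulling all values out of a newline-delimited JSON stream:
#
#   ctx = ValueExtractionContext()
#   parse_file(IOBuffer("{\"a\": 1}\n[1, 2, 3]\n"), ctx)
#   sort!(ctx)      # chunks may be consumed out of order; restore row order
#   ctx.elements    # => [Dict(:a => 1), [1, 2, 3]]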
| ChunkedJSONL | https://github.com/RelationalAI/ChunkedJSONL.jl.git |
|
[
"MIT"
] | 0.1.0 | 5810e8b9d35071a43c67cf357730e4467252696c | code | 534 | struct TaskResultBuffer <: AbstractResultBuffer
id::Int                       # identifier of this buffer (0 in the serial path)
tape::Vector{UInt64}          # JSON3 tape shared by all rows parsed into this buffer
tapeidxs::BufferedVector{Int} # index into `tape` where each row's value begins
end
TaskResultBuffer(id::Int) = TaskResultBuffer(id, UInt64[], BufferedVector{Int}())
TaskResultBuffer(id::Int, n) = TaskResultBuffer(id, UInt64[], BufferedVector{Int}(Vector{Int}(undef, n), 0))
function Base.empty!(buf::TaskResultBuffer)
empty!(buf.tapeidxs)
empty!(buf.tape)
return buf
end
function Base.ensureroom(buf::TaskResultBuffer, n)
Base.ensureroom(buf.tapeidxs, n)
return buf
end
| ChunkedJSONL | https://github.com/RelationalAI/ChunkedJSONL.jl.git |
|
[
"MIT"
] | 0.1.0 | 5810e8b9d35071a43c67cf357730e4467252696c | code | 1036 | function ChunkedBase.populate_result_buffer!(
result_buf::TaskResultBuffer,
newlines_segment::AbstractVector{Int32},
parsing_ctx::ParsingContext,
buf::Vector{UInt8},
comment::Union{Nothing,Vector{UInt8}}=nothing,
::Type{CT}=Tuple{}
) where {CT}
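# Parses one JSON value per newline-delimited row of `buf`. Each value is appended to the
# shared JSON3 tape in `result_buf.tape`, and the tape index at which a row's value starts is
# pushed onto `result_buf.tapeidxs`. Rows that are empty (when `ignoreemptyrows` is set) or
# that start with the `comment` prefix are skipped.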
tape = result_buf.tape
empty!(result_buf)
Base.ensureroom(result_buf, ceil(Int, length(newlines_segment) * 1.01))
ignoreemptyrows = parsing_ctx.ignoreemptyrows
tapeidx = 1
@inbounds for i in 1:length(newlines_segment) - 1
pos = Int(newlines_segment[i]) + 1
len = Int(newlines_segment[i+1]) - 1
# skip over leading spaces
ChunkedBase._startswith(buf, pos - 1, comment) && continue
_nonspace(buf[pos]) || (pos = something(findnext(_nonspace, buf, pos), -1))
ignoreemptyrows && (pos == -1 || pos > len) && continue
JSON3.@check
unsafe_push!(result_buf.tapeidxs, tapeidx)
_, tapeidx = JSON3.read!(buf, pos, len, buf[pos], tape, tapeidx, Any)
end
return nothing
end
| ChunkedJSONL | https://github.com/RelationalAI/ChunkedJSONL.jl.git |
|
[
"MIT"
] | 0.1.0 | 5810e8b9d35071a43c67cf357730e4467252696c | code | 36313 | using Test
using ChunkedJSONL
using ChunkedJSONL: ValueExtractionContext
alg=:serial
@testset "Single elements" begin
for alg in [:serial, :parallel]
@testset "Int $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("1"), ctx, _force=alg)
@test ctx.elements[1] == 1
end
@testset "Float64 $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("1.0"), ctx, _force=alg)
@test ctx.elements[1] == 1.0
end
@testset "String $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\"1.0\""), ctx, _force=alg)
@test ctx.elements[1] == "1.0"
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\"aaa\\\"aaa\""), ctx, _force=alg)
@test ctx.elements[1] == "aaa\"aaa"
end
@testset "Bool $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("false"), ctx, _force=alg)
ChunkedJSONL.parse_file(IOBuffer("true"), ctx, _force=alg)
@test ctx.elements[1] == false
@test ctx.elements[2] == true
end
@testset "Null $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("null"), ctx, _force=alg)
@test ctx.elements[1] === nothing
end
@testset "Array $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("[]"), ctx, _force=alg)
@test ctx.elements[1] == []
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("[1, 2]"), ctx, _force=alg)
@test ctx.elements[1] == [1, 2]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("[1.0, 2.0]"), ctx, _force=alg)
@test ctx.elements[1] == [1.0, 2.0]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("[\"1\", \"2\"]"), ctx, _force=alg)
@test ctx.elements[1] == ["1", "2"]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("[\"aaa\\\"aaa\", \"bbb\\\"bbb\"]"), ctx, _force=alg)
@test ctx.elements[1] == ["aaa\"aaa", "bbb\"bbb"]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("[true, false]"), ctx, _force=alg)
@test ctx.elements[1] == [true, false]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("[null, null]"), ctx, _force=alg)
@test ctx.elements[1] == [nothing, nothing]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("[{}]"), ctx, _force=alg)
@test ctx.elements[1] == [Dict{Symbol,Any}()]
end
@testset "Object $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("{}"), ctx, _force=alg)
@test ctx.elements[1] == Dict{Symbol,Any}()
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("{\"a\": 1}"), ctx, _force=alg)
@test ctx.elements[1] == Dict{Symbol,Any}(:a => 1)
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("{\"a\\\"a\": 1}"), ctx, _force=alg)
@test ctx.elements[1] == Dict{Symbol,Any}(Symbol("a\"a") => 1)
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("{\"a\": 1.0}"), ctx, _force=alg)
@test ctx.elements[1] == Dict{Symbol,Any}(:a => 1.0)
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("{\"a\": \"1\"}"), ctx, _force=alg)
@test ctx.elements[1] == Dict{Symbol,Any}(:a => "1")
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("{\"a\": true}"), ctx, _force=alg)
@test ctx.elements[1] == Dict{Symbol,Any}(:a => true)
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("{\"a\": null}"), ctx, _force=alg)
@test ctx.elements[1] == Dict{Symbol,Any}(:a => nothing)
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("{\"a\": []}"), ctx, _force=alg)
@test ctx.elements[1] == Dict{Symbol,Any}(:a => [])
end
end
end
@testset "Multiple lines small buffer" begin
for alg in [:serial, :parallel]
@testset "Int $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("1\n1"), ctx, _force=alg, buffersize=4)
@test ctx.elements == [1,1]
end
@testset "Float64 $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("1.0\n1.0"), ctx, _force=alg, buffersize=4)
@test ctx.elements == [1.0, 1.0]
end
@testset "String $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\"1.0\"\n\"1.0\""), ctx, _force=alg, buffersize=6)
@test ctx.elements == ["1.0","1.0"]
end
@testset "Bool $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("false\ntrue"), ctx, _force=alg, buffersize=6)
@test ctx.elements == [false, true]
end
@testset "Null $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("null\nnull"), ctx, _force=alg, buffersize=5)
@test ctx.elements == [nothing,nothing]
end
@testset "Array $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("[]\n[]"), ctx, _force=alg, buffersize=4)
@test ctx.elements == [[],[]]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("[1, 2]\n[1, 2]"), ctx, _force=alg, buffersize=7)
@test ctx.elements == [[1, 2],[1, 2]]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("[1.0, 2.0]\n[1.0, 2.0]"), ctx, _force=alg, buffersize=11)
@test ctx.elements == [[1.0, 2.0],[1.0, 2.0]]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("[\"1\", \"2\"]\n[\"1\", \"2\"]"), ctx, _force=alg, buffersize=11)
@test ctx.elements == [["1", "2"],["1", "2"]]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("[\"1\\\"\", \"2\\\"\"]\n[\"1\\\"\", \"2\\\"\"]"), ctx, _force=alg, buffersize=15)
@test ctx.elements == [["1\"", "2\""],["1\"", "2\""]]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("[true, false]\n[true, false]"), ctx, _force=alg, buffersize=14)
@test ctx.elements == [[true, false],[true, false]]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("[null, null]\n[null, null]"), ctx, _force=alg, buffersize=13)
@test ctx.elements == [[nothing, nothing],[nothing, nothing]]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("[{}]\n[{}]"), ctx, _force=alg, buffersize=5)
@test ctx.elements == [[Dict{Symbol,Any}()],[Dict{Symbol,Any}()]]
end
@testset "Object $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("{}\n{}"), ctx, _force=alg, buffersize=4)
@test ctx.elements == [Dict{Symbol,Any}(),Dict{Symbol,Any}()]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("{\"a\": 1}\n{\"a\": 1}"), ctx, _force=alg, buffersize=9)
@test ctx.elements == [Dict{Symbol,Any}(:a => 1),Dict{Symbol,Any}(:a => 1)]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("{\"a\": 1.0}\n{\"a\": 1.0}"), ctx, _force=alg, buffersize=11)
@test ctx.elements == [Dict{Symbol,Any}(:a => 1.0),Dict{Symbol,Any}(:a => 1.0)]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("{\"a\": \"1\"}\n{\"a\": \"1\"}"), ctx, _force=alg, buffersize=11)
@test ctx.elements == [Dict{Symbol,Any}(:a => "1"),Dict{Symbol,Any}(:a => "1")]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("{\"a\\\"\": \"1\\\"\"}\n{\"a\\\"\": \"1\\\"\"}"), ctx, _force=alg, buffersize=15)
@test ctx.elements == [Dict{Symbol,Any}(Symbol("a\"") => "1\""),Dict{Symbol,Any}(Symbol("a\"") => "1\"")]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("{\"a\": true}\n{\"a\": true}"), ctx, _force=alg, buffersize=12)
@test ctx.elements == [Dict{Symbol,Any}(:a => true),Dict{Symbol,Any}(:a => true)]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("{\"a\": null}\n{\"a\": null}"), ctx, _force=alg, buffersize=12)
@test ctx.elements == [Dict{Symbol,Any}(:a => nothing),Dict{Symbol,Any}(:a => nothing)]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("{\"a\": []}\n{\"a\": []}"), ctx, _force=alg, buffersize=10)
@test ctx.elements == [Dict{Symbol,Any}(:a => []),Dict{Symbol,Any}(:a => [])]
end
end
end
@testset "Multiple lines" begin
for alg in [:serial, :parallel]
@testset "Int $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("1\n1"), ctx, _force=alg)
@test ctx.elements == [1,1]
end
@testset "Float64 $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("1.0\n1.0"), ctx, _force=alg)
@test ctx.elements == [1.0, 1.0]
end
@testset "String $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\"1.0\"\n\"1.0\""), ctx, _force=alg)
@test ctx.elements == ["1.0","1.0"]
end
@testset "Bool $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("false\ntrue"), ctx, _force=alg)
@test ctx.elements == [false, true]
end
@testset "Null $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("null\nnull"), ctx, _force=alg)
@test ctx.elements == [nothing,nothing]
end
@testset "Array $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("[]\n[]"), ctx, _force=alg)
@test ctx.elements == [[],[]]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("[1, 2]\n[1, 2]"), ctx, _force=alg)
@test ctx.elements == [[1, 2],[1, 2]]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("[1.0, 2.0]\n[1.0, 2.0]"), ctx, _force=alg)
@test ctx.elements == [[1.0, 2.0],[1.0, 2.0]]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("[\"1\", \"2\"]\n[\"1\", \"2\"]"), ctx, _force=alg)
@test ctx.elements == [["1", "2"],["1", "2"]]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("[true, false]\n[true, false]"), ctx, _force=alg)
@test ctx.elements == [[true, false],[true, false]]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("[null, null]\n[null, null]"), ctx, _force=alg)
@test ctx.elements == [[nothing, nothing],[nothing, nothing]]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("[{}]\n[{}]"), ctx, _force=alg)
@test ctx.elements == [[Dict{Symbol,Any}()],[Dict{Symbol,Any}()]]
end
@testset "Object $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("{}\n{}"), ctx, _force=alg)
@test ctx.elements == [Dict{Symbol,Any}(),Dict{Symbol,Any}()]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("{\"a\": 1}\n{\"a\": 1}"), ctx, _force=alg)
@test ctx.elements == [Dict{Symbol,Any}(:a => 1),Dict{Symbol,Any}(:a => 1)]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("{\"a\": 1.0}\n{\"a\": 1.0}"), ctx, _force=alg)
@test ctx.elements == [Dict{Symbol,Any}(:a => 1.0),Dict{Symbol,Any}(:a => 1.0)]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("{\"a\": \"1\"}\n{\"a\": \"1\"}"), ctx, _force=alg)
@test ctx.elements == [Dict{Symbol,Any}(:a => "1"),Dict{Symbol,Any}(:a => "1")]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("{\"a\": true}\n{\"a\": true}"), ctx, _force=alg)
@test ctx.elements == [Dict{Symbol,Any}(:a => true),Dict{Symbol,Any}(:a => true)]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("{\"a\": null}\n{\"a\": null}"), ctx, _force=alg)
@test ctx.elements == [Dict{Symbol,Any}(:a => nothing),Dict{Symbol,Any}(:a => nothing)]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("{\"a\": []}\n{\"a\": []}"), ctx, _force=alg)
@test ctx.elements == [Dict{Symbol,Any}(:a => []),Dict{Symbol,Any}(:a => [])]
end
end
end
@testset "Multiple lines leading and trailing whitespace" begin
for alg in [:serial, :parallel]
@testset "Int $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer(" 1 \n 1 "), ctx, _force=alg)
@test ctx.elements == [1,1]
end
@testset "Float64 $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer(" 1.0 \n 1.0 "), ctx, _force=alg)
@test ctx.elements == [1.0, 1.0]
end
@testset "String $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer(" \"1.0\" \n \"1.0\" "), ctx, _force=alg)
@test ctx.elements == ["1.0","1.0"]
end
@testset "Bool $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer(" false \n true "), ctx, _force=alg)
@test ctx.elements == [false, true]
end
@testset "Null $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer(" null \n null "), ctx, _force=alg)
@test ctx.elements == [nothing,nothing]
end
@testset "Array $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer(" [] \n [] "), ctx, _force=alg)
@test ctx.elements == [[],[]]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer(" [1, 2] \n [1, 2] "), ctx, _force=alg)
@test ctx.elements == [[1, 2],[1, 2]]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer(" [1.0, 2.0] \n [1.0, 2.0] "), ctx, _force=alg)
@test ctx.elements == [[1.0, 2.0],[1.0, 2.0]]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer(" [\"1\", \"2\"] \n [\"1\", \"2\"] "), ctx, _force=alg)
@test ctx.elements == [["1", "2"],["1", "2"]]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer(" [true, false] \n [true, false] "), ctx, _force=alg)
@test ctx.elements == [[true, false],[true, false]]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer(" [null, null] \n [null, null] "), ctx, _force=alg)
@test ctx.elements == [[nothing, nothing],[nothing, nothing]]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer(" [{}] \n [{}] "), ctx, _force=alg)
@test ctx.elements == [[Dict{Symbol,Any}()],[Dict{Symbol,Any}()]]
end
@testset "Object $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer(" {} \n {} "), ctx, _force=alg)
@test ctx.elements == [Dict{Symbol,Any}(),Dict{Symbol,Any}()]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer(" {\"a\": 1} \n {\"a\": 1} "), ctx, _force=alg)
@test ctx.elements == [Dict{Symbol,Any}(:a => 1),Dict{Symbol,Any}(:a => 1)]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer(" {\"a\": 1.0} \n {\"a\": 1.0} "), ctx, _force=alg)
@test ctx.elements == [Dict{Symbol,Any}(:a => 1.0),Dict{Symbol,Any}(:a => 1.0)]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer(" {\"a\": \"1\"} \n {\"a\": \"1\"} "), ctx, _force=alg)
@test ctx.elements == [Dict{Symbol,Any}(:a => "1"),Dict{Symbol,Any}(:a => "1")]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer(" {\"a\": true} \n {\"a\": true} "), ctx, _force=alg)
@test ctx.elements == [Dict{Symbol,Any}(:a => true),Dict{Symbol,Any}(:a => true)]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer(" {\"a\": null} \n {\"a\": null} "), ctx, _force=alg)
@test ctx.elements == [Dict{Symbol,Any}(:a => nothing),Dict{Symbol,Any}(:a => nothing)]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer(" {\"a\": []} \n {\"a\": []} "), ctx, _force=alg)
@test ctx.elements == [Dict{Symbol,Any}(:a => []),Dict{Symbol,Any}(:a => [])]
end
end
end
@testset "Multiple lines leading and a lot of surrounding whitespace" begin
for alg in [:serial, :parallel]
@testset "Int $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer(" 1 \n 1 "), ctx, _force=alg)
@test ctx.elements == [1,1]
end
@testset "Float64 $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer(" 1.0 \n 1.0 "), ctx, _force=alg)
@test ctx.elements == [1.0, 1.0]
end
@testset "String $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer(" \"1.0\" \n \"1.0\" "), ctx, _force=alg)
@test ctx.elements == ["1.0","1.0"]
end
@testset "Bool $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer(" false \n true "), ctx, _force=alg)
@test ctx.elements == [false, true]
end
@testset "Null $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer(" null \n null "), ctx, _force=alg)
@test ctx.elements == [nothing,nothing]
end
@testset "Array $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer(" [] \n [] "), ctx, _force=alg)
@test ctx.elements == [[],[]]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer(" [1, 2] \n [1, 2] "), ctx, _force=alg)
@test ctx.elements == [[1, 2],[1, 2]]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer(" [1.0, 2.0] \n [1.0, 2.0] "), ctx, _force=alg)
@test ctx.elements == [[1.0, 2.0],[1.0, 2.0]]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer(" [\"1\", \"2\"] \n [\"1\", \"2\"] "), ctx, _force=alg)
@test ctx.elements == [["1", "2"],["1", "2"]]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer(" [true, false] \n [true, false] "), ctx, _force=alg)
@test ctx.elements == [[true, false],[true, false]]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer(" [null, null] \n [null, null] "), ctx, _force=alg)
@test ctx.elements == [[nothing, nothing],[nothing, nothing]]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer(" [{}] \n [{}] "), ctx, _force=alg)
@test ctx.elements == [[Dict{Symbol,Any}()],[Dict{Symbol,Any}()]]
end
@testset "Object $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer(" {} \n {} "), ctx, _force=alg)
@test ctx.elements == [Dict{Symbol,Any}(),Dict{Symbol,Any}()]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer(" {\"a\": 1} \n {\"a\": 1} "), ctx, _force=alg)
@test ctx.elements == [Dict{Symbol,Any}(:a => 1),Dict{Symbol,Any}(:a => 1)]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer(" {\"a\": 1.0} \n {\"a\": 1.0} "), ctx, _force=alg)
@test ctx.elements == [Dict{Symbol,Any}(:a => 1.0),Dict{Symbol,Any}(:a => 1.0)]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer(" {\"a\": \"1\"} \n {\"a\": \"1\"} "), ctx, _force=alg)
@test ctx.elements == [Dict{Symbol,Any}(:a => "1"),Dict{Symbol,Any}(:a => "1")]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer(" {\"a\": true} \n {\"a\": true} "), ctx, _force=alg)
@test ctx.elements == [Dict{Symbol,Any}(:a => true),Dict{Symbol,Any}(:a => true)]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer(" {\"a\": null} \n {\"a\": null} "), ctx, _force=alg)
@test ctx.elements == [Dict{Symbol,Any}(:a => nothing),Dict{Symbol,Any}(:a => nothing)]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer(" {\"a\": []} \n {\"a\": []} "), ctx, _force=alg)
@test ctx.elements == [Dict{Symbol,Any}(:a => []),Dict{Symbol,Any}(:a => [])]
end
end
end
@testset "Multiple lines leading and trailing whitespace with BOM" begin
for alg in [:serial, :parallel]
@testset "Int $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\xef\xbb\xbf 1 \n 1 "), ctx, _force=alg)
@test ctx.elements == [1,1]
end
@testset "Float64 $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\xef\xbb\xbf 1.0 \n 1.0 "), ctx, _force=alg)
@test ctx.elements == [1.0, 1.0]
end
@testset "String $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\xef\xbb\xbf \"1.0\" \n \"1.0\" "), ctx, _force=alg)
@test ctx.elements == ["1.0","1.0"]
end
@testset "Bool $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\xef\xbb\xbf false \n true "), ctx, _force=alg)
@test ctx.elements == [false, true]
end
@testset "Null $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\xef\xbb\xbf null \n null "), ctx, _force=alg)
@test ctx.elements == [nothing,nothing]
end
@testset "Array $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\xef\xbb\xbf [] \n [] "), ctx, _force=alg)
@test ctx.elements == [[],[]]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\xef\xbb\xbf [1, 2] \n [1, 2] "), ctx, _force=alg)
@test ctx.elements == [[1, 2],[1, 2]]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\xef\xbb\xbf [1.0, 2.0] \n [1.0, 2.0] "), ctx, _force=alg)
@test ctx.elements == [[1.0, 2.0],[1.0, 2.0]]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\xef\xbb\xbf [\"1\", \"2\"] \n [\"1\", \"2\"] "), ctx, _force=alg)
@test ctx.elements == [["1", "2"],["1", "2"]]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\xef\xbb\xbf [\"1\\\"\", \"2\\\"\"] \n [\"1\\\"\", \"2\\\"\"] "), ctx, _force=alg)
@test ctx.elements == [["1\"", "2\""],["1\"", "2\""]]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\xef\xbb\xbf [true, false] \n [true, false] "), ctx, _force=alg)
@test ctx.elements == [[true, false],[true, false]]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\xef\xbb\xbf [null, null] \n [null, null] "), ctx, _force=alg)
@test ctx.elements == [[nothing, nothing],[nothing, nothing]]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\xef\xbb\xbf [{}] \n [{}] "), ctx, _force=alg)
@test ctx.elements == [[Dict{Symbol,Any}()],[Dict{Symbol,Any}()]]
end
@testset "Object $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\xef\xbb\xbf {} \n {} "), ctx, _force=alg)
@test ctx.elements == [Dict{Symbol,Any}(),Dict{Symbol,Any}()]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\xef\xbb\xbf {\"a\": 1} \n {\"a\": 1} "), ctx, _force=alg)
@test ctx.elements == [Dict{Symbol,Any}(:a => 1),Dict{Symbol,Any}(:a => 1)]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\xef\xbb\xbf {\"a\": 1.0} \n {\"a\": 1.0} "), ctx, _force=alg)
@test ctx.elements == [Dict{Symbol,Any}(:a => 1.0),Dict{Symbol,Any}(:a => 1.0)]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\xef\xbb\xbf {\"a\": \"1\"} \n {\"a\": \"1\"} "), ctx, _force=alg)
@test ctx.elements == [Dict{Symbol,Any}(:a => "1"),Dict{Symbol,Any}(:a => "1")]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\xef\xbb\xbf {\"a\\\"\": \"1\\\"\"} \n {\"a\\\"\": \"1\\\"\"} "), ctx, _force=alg)
@test ctx.elements == [Dict{Symbol,Any}(Symbol("a\"") => "1\""),Dict{Symbol,Any}(Symbol("a\"") => "1\"")]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\xef\xbb\xbf {\"a\": true} \n {\"a\": true} "), ctx, _force=alg)
@test ctx.elements == [Dict{Symbol,Any}(:a => true),Dict{Symbol,Any}(:a => true)]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\xef\xbb\xbf {\"a\": null} \n {\"a\": null} "), ctx, _force=alg)
@test ctx.elements == [Dict{Symbol,Any}(:a => nothing),Dict{Symbol,Any}(:a => nothing)]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\xef\xbb\xbf {\"a\": []} \n {\"a\": []} "), ctx, _force=alg)
@test ctx.elements == [Dict{Symbol,Any}(:a => []),Dict{Symbol,Any}(:a => [])]
end
end
end
@testset "Multiple lines leading and trailing whitespace with BOM, 2 chunks" begin
for alg in [:serial, :parallel]
@testset "Int $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\xef\xbb\xbf 1 \n 1 "), ctx, _force=alg, buffersize=4)
@test ctx.elements == [1,1]
end
@testset "Float64 $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\xef\xbb\xbf 1.0 \n 1.0 "), ctx, _force=alg, buffersize=5)
@test ctx.elements == [1.0, 1.0]
end
@testset "String $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\xef\xbb\xbf \"1.0\" \n \"1.0\" "), ctx, _force=alg, buffersize=7)
@test ctx.elements == ["1.0","1.0"]
end
@testset "Bool $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\xef\xbb\xbf false \n false "), ctx, _force=alg, buffersize=7)
@test ctx.elements == [false, false]
end
@testset "Null $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\xef\xbb\xbf null \n null "), ctx, _force=alg, buffersize=6)
@test ctx.elements == [nothing,nothing]
end
@testset "Array $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\xef\xbb\xbf [] \n [] "), ctx, _force=alg, buffersize=5)
@test ctx.elements == [[],[]]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\xef\xbb\xbf [1, 2] \n [1, 2] "), ctx, _force=alg, buffersize=8)
@test ctx.elements == [[1, 2],[1, 2]]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\xef\xbb\xbf [1.0, 2.0] \n [1.0, 2.0] "), ctx, _force=alg, buffersize=13)
@test ctx.elements == [[1.0, 2.0],[1.0, 2.0]]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\xef\xbb\xbf [\"1\", \"2\"] \n [\"1\", \"2\"] "), ctx, _force=alg, buffersize=13)
@test ctx.elements == [["1", "2"],["1", "2"]]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\xef\xbb\xbf [\"1\\\"\", \"2\\\"\"] \n [\"1\\\"\", \"2\\\"\"] "), ctx, _force=alg, buffersize=17)
@test ctx.elements == [["1\"", "2\""],["1\"", "2\""]]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\xef\xbb\xbf [true, false] \n [true, false] "), ctx, _force=alg, buffersize=15)
@test ctx.elements == [[true, false],[true, false]]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\xef\xbb\xbf [null, null] \n [null, null] "), ctx, _force=alg, buffersize=14)
@test ctx.elements == [[nothing, nothing],[nothing, nothing]]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\xef\xbb\xbf [{}] \n [{}] "), ctx, _force=alg, buffersize=14)
@test ctx.elements == [[Dict{Symbol,Any}()],[Dict{Symbol,Any}()]]
end
@testset "Object $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\xef\xbb\xbf {} \n {} "), ctx, _force=alg, buffersize=5)
@test ctx.elements == [Dict{Symbol,Any}(),Dict{Symbol,Any}()]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\xef\xbb\xbf {\"a\": 1} \n {\"a\": 1} "), ctx, _force=alg, buffersize=10)
@test ctx.elements == [Dict{Symbol,Any}(:a => 1),Dict{Symbol,Any}(:a => 1)]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\xef\xbb\xbf {\"a\": 1.0} \n {\"a\": 1.0} "), ctx, _force=alg, buffersize=14)
@test ctx.elements == [Dict{Symbol,Any}(:a => 1.0),Dict{Symbol,Any}(:a => 1.0)]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\xef\xbb\xbf {\"a\": \"1\"} \n {\"a\": \"1\"} "), ctx, _force=alg, buffersize=14)
@test ctx.elements == [Dict{Symbol,Any}(:a => "1"),Dict{Symbol,Any}(:a => "1")]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\xef\xbb\xbf {\"a\\\"\": \"1\\\"\"} \n {\"a\\\"\": \"1\\\"\"} "), ctx, _force=alg, buffersize=18)
@test ctx.elements == [Dict{Symbol,Any}(Symbol("a\"") => "1\""),Dict{Symbol,Any}(Symbol("a\"") => "1\"")]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\xef\xbb\xbf {\"a\": true} \n {\"a\": true} "), ctx, _force=alg, buffersize=16)
@test ctx.elements == [Dict{Symbol,Any}(:a => true),Dict{Symbol,Any}(:a => true)]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\xef\xbb\xbf {\"a\": null} \n {\"a\": null} "), ctx, _force=alg, buffersize=16)
@test ctx.elements == [Dict{Symbol,Any}(:a => nothing),Dict{Symbol,Any}(:a => nothing)]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\xef\xbb\xbf {\"a\": []} \n {\"a\": []} "), ctx, _force=alg, buffersize=11)
@test ctx.elements == [Dict{Symbol,Any}(:a => []),Dict{Symbol,Any}(:a => [])]
end
@testset "Empty input $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer(""), ctx, _force=alg, buffersize=4)
@test isempty(ctx.elements)
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer(" "), ctx, _force=alg, buffersize=4)
@test isempty(ctx.elements)
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer(" "), ctx, _force=alg, buffersize=4)
@test isempty(ctx.elements)
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer(" "), ctx, _force=alg, buffersize=4)
@test isempty(ctx.elements)
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\xef\xbb\xbf"), ctx, _force=alg, buffersize=4)
@test isempty(ctx.elements)
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\xef\xbb\xbf "), ctx, _force=alg, buffersize=4)
@test isempty(ctx.elements)
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\xef\xbb\xbf "), ctx, _force=alg, buffersize=4)
@test isempty(ctx.elements)
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("\xef\xbb\xbf "), ctx, _force=alg, buffersize=4)
@test isempty(ctx.elements)
end
end
end
@testset "Skipping comments and whitespace" begin
for alg in [:serial, :parallel]
for comment in ('#', "#", UInt8('#'), [UInt8('#')])
@testset "comment: $(repr(comment)) $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer(" \n\n\n\n #\n#x\n#xx\n1\n\n\n#x\n1\n\n"), ctx, _force=alg, buffersize=4, comment=comment)
@test ctx.elements == [1, 1]
end
end
end
end
@testset "Limit and skipto" begin
for alg in [:serial, :parallel]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("1\n2\n3\n4\n5"), ctx, _force=alg, buffersize=4, skipto=2, limit=1)
@test ctx.elements == [3]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("1\n2\n\n3\n4\n5"), ctx, _force=alg, buffersize=4, skipto=2, limit=2)
@test sort(ctx.elements) == [3, 4]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("0\n1\n2\n\n3\n4\n5"), ctx, _force=alg, buffersize=4, skipto=2, limit=2)
@test ctx.elements == [2]
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("1\n#\n#\n#\n2\n#\n3\n4\n5"), ctx, _force=alg, buffersize=4, skipto=2, limit=3, comment='#')
@test sort(ctx.elements) == [2, 3]
end
end
@testset "Newlinechar" begin
for alg in [:serial, :parallel]
for (arg, nl) in (('\n', "\n"), ('\n', "\r\n"), ('\r', "\r"), (UInt8('\n'), "\n"), (UInt8('\n'), "\r\n"),
(UInt8('\r'), "\r"), (nothing, "\r"), (nothing, "\n"), (nothing, "\r\n"))
@testset "arg: $(repr(arg)), nl: $(repr(nl)) $alg" begin
ctx = ValueExtractionContext()
ChunkedJSONL.parse_file(IOBuffer("1 $nl 1"), ctx, _force=alg, buffersize=4, newlinechar=arg)
@test ctx.elements == [1, 1]
end
end
end
end
| ChunkedJSONL | https://github.com/RelationalAI/ChunkedJSONL.jl.git |
|
[
"MIT"
] | 0.1.0 | 5810e8b9d35071a43c67cf357730e4467252696c | code | 4237 | using ChunkedJSONL
using ChunkedBase
using ChunkedJSONL: ParsingContext, TaskResultBuffer, AbstractConsumeContext
using Test
# Throws when a specified row is greater than the first row of a task buffer
struct TestThrowingContext <: AbstractConsumeContext
tasks::Vector{Task}
conds::Vector{ChunkedJSONL.TaskCounter}
throw_row::Int
end
TestThrowingContext(throw_row) = TestThrowingContext(Task[], ChunkedJSONL.TaskCounter[], throw_row)
# Throws in the last quarter of the buffer
struct ThrowingIO <: IO
io::IOBuffer
throw_byte::Int
end
ThrowingIO(s::String) = ThrowingIO(IOBuffer(s), length(s) - cld(length(s), 4))
Base.read(io::ThrowingIO, ::Type{UInt8}) = io.io.ptr > io.throw_byte ? error("That should be enough data for everyone") : read(io.io, UInt8)
ChunkedBase.readbytesall!(io::ThrowingIO, buf, n::Int) = io.io.ptr > io.throw_byte ? error("That should be enough data for everyone") : ChunkedBase.readbytesall!(io.io, buf, n)
Base.eof(io::ThrowingIO) = Base.eof(io.io)
function ChunkedJSONL.consume!(ctx::TestThrowingContext, payload::ParsedPayload)
t = current_task()
c = payload.chunking_ctx.counter
c in ctx.conds || push!(ctx.conds, c)
t in ctx.tasks || push!(ctx.tasks, t)
payload.row_num >= ctx.throw_row && error("These contexts are for throwing, and that's all what they do")
sleep(0.01) # trying to get the task off a fast path to claim everything from the parsing queue
return nothing
end
@testset "Exception handling" begin
@testset "consume!" begin
@testset "serial" begin
throw_ctx = TestThrowingContext(2)
@test_throws ErrorException("These contexts are for throwing, and that's all what they do") parse_file(IOBuffer("""
[1,2]
[3,4]
"""),
throw_ctx,
_force=:serial,
buffersize=6
)
@assert !isempty(throw_ctx.tasks)
@test throw_ctx.tasks[1] === current_task()
@test throw_ctx.conds[1].exception isa ErrorException
end
@testset "parallel" begin
# 1500 rows should be enough to get each of the 3 task at least one consume!
throw_ctx = TestThrowingContext(1500)
@test_throws TaskFailedException parse_file(
IOBuffer(("[1,2]\n[3,4]\n" ^ 800)), # 1600 rows total
throw_ctx,
nworkers=min(3, Threads.nthreads()),
_force=:parallel,
buffersize=12,
)
sleep(0.2)
@test length(throw_ctx.tasks) == min(3, Threads.nthreads())
@test all(istaskdone, throw_ctx.tasks)
@test throw_ctx.conds[1].exception isa CapturedException
@test throw_ctx.conds[1].exception.ex.msg == "These contexts are for throwing, and that's all what they do"
end
end
@testset "io" begin
@testset "serial" begin
throw_ctx = TestThrowingContext(typemax(Int)) # Only capture tasks, let IO do the throwing
@test_throws ErrorException("That should be enough data for everyone") parse_file(
ThrowingIO(("[1,2]\n[3,4]\n" ^ 10)), # 20 rows total
throw_ctx,
_force=:serial,
buffersize=6,
)
@assert !isempty(throw_ctx.tasks)
@test throw_ctx.tasks[1] === current_task()
@test throw_ctx.conds[1].exception isa ErrorException
end
@testset "parallel" begin
throw_ctx = TestThrowingContext(typemax(Int)) # Only capture tasks, let IO do the throwing
@test_throws TaskFailedException parse_file(
ThrowingIO(("[1,2]\n[3,4]\n" ^ 800)), # 1600 rows total
throw_ctx,
nworkers=min(3, Threads.nthreads()),
_force=:parallel,
buffersize=12,
)
sleep(0.2)
@test length(throw_ctx.tasks) == min(3, Threads.nthreads())
@test all(istaskdone, throw_ctx.tasks)
@test throw_ctx.conds[1].exception isa CapturedException
@test throw_ctx.conds[1].exception.ex.task.result.msg == "That should be enough data for everyone"
@test throw_ctx.conds[2].exception isa CapturedException
@test throw_ctx.conds[2].exception.ex.task.result.msg == "That should be enough data for everyone"
end
end
end # @testset "Exception handling"
| ChunkedJSONL | https://github.com/RelationalAI/ChunkedJSONL.jl.git |
|
[
"MIT"
] | 0.1.0 | 5810e8b9d35071a43c67cf357730e4467252696c | code | 705 | using Test
using ChunkedJSONL
using Aqua
Threads.nthreads() == 1 && @warn "Running tests with a single thread -- won't be able to spot concurrency issues"
@testset "ChunkedJSONL.jl" begin
Aqua.test_all(ChunkedJSONL, ambiguities=false)
include("basic_tests.jl")
include("exception_handling.jl")
end
#=
using Coverage
using ChunkedJSONL
pkg_path = pkgdir(ChunkedJSONL);
coverage = process_folder(joinpath(pkg_path, "src"));
open(joinpath(pkg_path, "lcov.info"), "w") do io
LCOV.write(io, coverage)
end;
covered_lines, total_lines = get_summary(coverage);
println("Coverage: $(round(100 * covered_lines / total_lines, digits=2))%");
run(`find $pkg_path -name "*.cov" -type f -delete`);
=#
| ChunkedJSONL | https://github.com/RelationalAI/ChunkedJSONL.jl.git |
|
[
"MIT"
] | 0.2.3 | de077e80f48cffe05bf6b450f495616753f0e8b1 | code | 473 | module SurrealdbWS
export Surreal,
connect,
signin,
signup,
use,
select,
create,
insert,
update,
merge,
query,
patch,
delete,
close,
ping,
authenticate,
invalidate,
set,
unset,
set_format,
info
import Base.Threads: @spawn
import Base64: base64encode
import HTTP.Sockets: send
import HTTP.WebSockets: WebSocket, close, receive, isclosed, CloseFrameBody
import HTTP.openraw
include("surreal.jl")
include("connection.jl")
include("send_receive.jl")
include("query.jl")
end | SurrealdbWS | https://github.com/YuriMiyamori/SurrealdbWS.jl.git |
|
[
"MIT"
] | 0.2.3 | de077e80f48cffe05bf6b450f495616753f0e8b1 | code | 1666 |
struct TimeoutError <: Exception
msg::String
end
Base.showerror(io::IO, e::TimeoutError) = print(io, e.msg)
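#HTTP headers for the WebSocket upgrade handshake (random 16-byte key, protocol version 13).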
function generate_header()
[ "Upgrade" => "websocket",
"Connection" => "Upgrade",
"Sec-WebSocket-Key" => base64encode(rand(UInt8, 16)),
"Sec-WebSocket-Version" => "13"]
end
"""
connect(db::Surreal; timeout::Real=10.0)
Connect to a local or remote database endpoint. The endpoint is taken from `db.url` (set when
the `Surreal` instance is constructed); `timeout` is the maximum number of seconds to wait
for the WebSocket handshake.
# Examples
```jldoctest
julia> db = Surreal("ws://127.0.0.1:8000/rpc")
julia> connect(db)
julia> signin(db, user="root", pass="root")
# Connect to a remote endpoint
julia> db = Surreal("http://cloud.surrealdb.com/rpc")
julia> connect(db)
julia> signin(db, user="root", pass="root")
```
"""
function connect(db::Surreal; timeout::Real=10.0)
db.url = correct_url(db.url)
db.ws_ch = Channel{WebSocket}(db.npool)
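#`ws_ch` acts as a simple pool of `npool` open WebSockets: callers take a socket from the
#channel, use it for one request, and put it back when done.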
for _ in 1:db.npool
task = @spawn openraw("GET", db.url, generate_header())
res = timedwait(()->istaskdone(task),timeout, pollint=0.01) #:ok or :timeout
if res == :timed_out
throw(TimeoutError("Connection timed out. Check your url($(db.url)). Or set timeout($(timeout) sec) to larger value and try again."))
else #res == :ok
socket, _ = fetch(task)
put!(db.ws_ch, WebSocket(socket))
end
end
db.client_state = CONNECTED
nothing
end
"""
close(db::Surreal)
Closes the persistent connection to the database.
"""
function close(db::Surreal)::Nothing
if db.client_state == CONNECTED
for _ in 1:db.npool
ws = take!(db.ws_ch)
@spawn close(ws, CloseFrameBody(1000, ""))
put!(db.ws_ch, ws)
end
db.ws_ch = nothing
end
db.client_state = DISCONNECTED
nothing
end | SurrealdbWS | https://github.com/YuriMiyamori/SurrealdbWS.jl.git |
|
[
"MIT"
] | 0.2.3 | de077e80f48cffe05bf6b450f495616753f0e8b1 | code | 9913 | """
signin(db::Surreal; user::String, pass::String)::Nothing
Signs this connection in to a specific authentication scope.
# Arguments
- `user`: username in signin query
- `pass`: password in signin query
# Examples:
```jldoctest
julia> signin(db, user="root", pass="root")
```
"""
function signin(db::Surreal; user::String, pass::String)::Nothing
params = Dict("user"=>user, "pass"=>pass)
tasks = [@spawn send_receive(db, method="signin", params=(params,)) for _ in 1:db.npool]
db.token = fetch.(tasks) |> first
nothing
end
"""
signup(db::Surreal; vars::Dict)::Nothing
Signs this connection up to a specific authentication scope.
# Arguments
- `vars`: a Dict of signup variables (for example the target scope's `user` and `pass` fields)
# Examples
```jldoctest
julia> signup(db, vars=Dict("user"=>"bob", "pass"=>"123456"))
```
"""
function signup(db::Surreal; vars::Dict)::Nothing
if db.npool > 1
throw(ArgumentError("signup is not supported in multipool mode"))
end
task = @spawn send_receive(db, method="signup", params=(vars,))
db.token = fetch(task)
nothing
end
"""
authenticate(db::Surreal; token::Union{String, Nothing}=nothing)::Nothing
Authenticates the current connection with a JWT token.
# Arguments
- `token`: The token to use for the connection.
# Examples
```jldoctest
julia> authenticate(db, token="JWT token here")
```
"""
function authenticate(db::Surreal; token::Union{String, Nothing}=nothing)::Nothing
if !isnothing(token)
db.token = token
end
@sync begin
for _ in 1:db.npool
@spawn send_receive(db, method="authenticate", params=(db.token,))
end
end
nothing
end
"""
invalidate(db::Surreal)::Nothing
Invalidates the user's session for the current connection.
# Examples
```jldoctest
julia> invalidate(db)
```
"""
function invalidate(db::Surreal)::Nothing
@sync begin
for _ in 1:db.npool
@spawn send_receive(db, method="invalidate")
end
end
nothing
end
"""
set(db::Surreal; params::Tuple)::Nothing
Assigns a value as a parameter ("variable") for this connection, via SurrealDB's `let` RPC method.
# Examples
```jldoctest
julia> set(db, params=("website", "https://surrealdb.com/"))
```
"""
function set(db::Surreal; params::Tuple)::Nothing
@sync begin
for _ in 1:db.npool
@spawn send_receive(db, method="let", params=params,)
end
end
nothing
end
"""
unset(db::Surreal; name::String)::Nothing
Removes a previously assigned parameter ("variable") from this connection.
# Examples
```jldoctest
julia> unset(db, name="website")
```
"""
function unset(db::Surreal; name::String)::Nothing
@sync begin
for _ in 1:db.npool
@spawn send_receive(db, method="unset", params=(name,))
end
end
nothing
end
"""
use(db::Surreal; namespace::String, database::String)
Switch to a specific namespace and database.
# Arguments
- `namespace`: Switches to a specific namespace.
- `database`: Switches to a specific database.
# Examples
```jldoctest
julia> use(db, namespace="test", database="test")
```
"""
function use(db::Surreal; namespace::String, database::String)::Nothing
@sync begin
for _ in 1:db.npool
@spawn send_receive(db, method="use", params=(namespace, database))
end
end
nothing
end
"""
create(db::Surreal; thing::String, data::Union{Dict, Nothing}=nothing)
Create a record in the database.
This function will run the following query in the database:
create `thing` content `data`
# Arguments
- `thing`: The table or record ID.
- `data`: The document / record data to insert.
# Examples
```jldoctest
# Create a record with a random ID
julia> person = create(db, thing="person")
# Create a record with a specific ID
julia> record = create(db, thing="person:tobie", data=Dict(
"name"=> "Tobie",
"settings"=> Dict(
"active"=> true,
"marketing"=> true,
),
))
```
"""
function create(db::Surreal; thing::String, data::Union{AbstractDict, Nothing}=nothing)
task = @spawn send_receive(db, method="create", params=(thing, data))
return fetch(task)
end
"""
insert(db::Surreal; thing::String, data::Union{Dict, Nothing}=nothing)
insert a record in the database.
This function will run the following query in the database:
insert `thing` content `data`
# Arguments
- `thing`: The table or record ID.
- `data`: The document / record data to insert.
# Examples
```jldoctest
# insert a record with a random ID
julia> person = insert(db, thing="person")
# insert a record with a specific ID
julia> record = insert(db, thing="person:tobie", data=Dict(
"name"=> "Tobie",
"settings"=> Dict(
"active"=> true,
"marketing"=> true,
),
))
```
"""
function insert(db::Surreal; thing::String, data::Union{AbstractDict, Nothing}=nothing)
task = @spawn send_receive(db, method="insert", params=(thing, data))
return fetch(task)
end
"""
merge(db::Surreal; thing::String, data::Union{AbstractDict, Nothing}=nothing)
merge a record in the database.
This function will run the following query in the database:
merge `thing` content `data`
# Arguments
- `thing`: The table or record ID.
- `data`: The document / record data to merge.
# Examples
```jldoctest
# Merge data into a record with a random ID
julia> person = merge(db, thing="person")
# Merge data into a record with a specific ID
julia> record = merge(db, thing="person:tobie", data=Dict(
           "name"=> "Tobie",
           "settings"=> Dict(
               "active"=> true,
               "marketing"=> true,
           ),
       ))
```
"""
function Base.merge(db::Surreal; thing::String, data::Union{AbstractDict, Nothing}=nothing)
task = @spawn send_receive(db, method="merge", params=(thing, data))
return fetch(task)
end
"""
select(db::Surreal; thing::String)
Selects all records in a table (or other entity),
or a specific record, in the database.
This function will run the following query in the database:
select * from `thing`
# Arguments
`thing`: The table or record ID to select.
# Returns:
The records.
# Examples
```jldoctest
# Select all records from a table (or other entity)
julia> people = select(db, thing="person")
# Select a specific record from a table (or other entity)
julia> person = select(db, thing="person:h5wxrf2ewk8xjxosxtyc")
```
"""
function select(db::Surreal; thing::String)
task = @spawn send_receive(db, method="select", params=(thing, ))
return fetch(task)
end
"""
update(db::Surreal; thing::String, data::Union{Dict, Nothing}=nothing)
Updates all records in a table, or a specific record, in the database.
This function replaces the current document / record data with the
specified data.
This function will run the following query in the database:
update `thing` content `data`
# Arguments
- `thing`: The table or record ID.
- `data`: The document / record data that replaces the existing data.
# Examples:
```jldoctest
julia> # Update all records in a table
julia> person = update(db, thing="person")
julia> # Update a record with a specific ID
julia> record = update(db, thing="person:tobie", data=Dict(
"name"=> "Tobie",
"settings"=> Dict(
"active"=> true,
"marketing"=> true,
),
))
```
"""
function update(db::Surreal; thing::String, data::Union{AbstractDict, Nothing}=nothing)
task = @spawn send_receive(db, method="update", params=(thing, data))
return fetch(task)
end
"""
query(db::Surreal; sql::String, vars::Union{Dict, Nothing}=nothing)
Runs a set of SurrealQL statements against the database.
# Arguments
`sql`: Specifies the SurrealQL statements.
`vars`: Assigns variables which can be used in the query.
# Returns
The records.
# Examples
```jldoctest
julia> # Assign the variable on the connection
julia> result = query(db, sql=r"create person; select * from type::table(\$tb)",vars=Dict("tb"=> "person"))
julia> # Get the first result from the first query
julia> result[1]["result"][1]
julia> # Get all of the results from the second query
julia> result[2]["result"]
```
"""
function query(db::Surreal; sql::String, vars::Union{AbstractDict, Nothing}=nothing)
task = @spawn send_receive(db, method="query", params=(sql, vars))
return fetch(task)
end
"""
patch(db::Surreal; thing::String, data::Union{Dict, Nothing}=nothing)
Applies JSON Patch changes to all records, or a specific record, in the database.
This function patches the current document / record data with
the specified JSON Patch data.
This function will run the following query in the database:
update `thing` patch `data`
# Arguments
`thing`: The table or record ID.
`data`: The data to modify the record with.
# Examples
```jldoctest
julia> # Update all records in a table
julia> people = patch(db, thing="person", data=[
           Dict("op"=> "replace", "path"=> "/created_at", "value"=> string(Dates.now(Dates.UTC))),
       ])
julia> # Update a record with a specific ID
julia> person = patch(db, thing="person:tobie", data=[
           Dict("op"=> "replace", "path"=> "/settings/active", "value"=> false),
           Dict("op"=> "add", "path"=> "/tags", "value"=> ["developer", "engineer"]),
           Dict("op"=> "remove", "path"=> "/temp"),
       ])
```
"""
# function patch(db::Surreal; thing::String, data::Union{AbstractDict, Nothing}=nothing)
# task = @spawn send_receive(db, method="patch", params=(thing, data))
# return fetch(task)
# end
"""
delete(db::Surreal; thing::String)
Deletes all records in a table, or a specific record, from the database.
This function will run the following query in the database:
delete * from `thing`
# Arguments
`thing`: The table name or a record ID to delete.
# Examples
julia> # Delete all records from a table
julia> delete(db, thing="person")
julia> # Delete a specific record from a table
julia> delete(db, thing="person:h5wxrf2ewk8xjxosxtyc")
"""
function delete(db::Surreal; thing::String)
task = @spawn send_receive(db, method="delete", params=(thing, ))
return fetch(task)
end
"""
info(db::Surreal)
Retrieve info about the current Surreal instance.
# Returns
The information of the Surreal server.
"""
function info(db::Surreal)
task = @spawn send_receive(db, method="info")
return fetch(task)
end
"""
ping(db::Surreal)
Ping the Surreal server.
"""
function ping(db::Surreal)
task = @spawn send_receive(db, method="ping")
return fetch(task)
end | SurrealdbWS | https://github.com/YuriMiyamori/SurrealdbWS.jl.git |
|
[
"MIT"
] | 0.2.3 | de077e80f48cffe05bf6b450f495616753f0e8b1 | code | 2518 | import JSON: json, parse
import UUIDs: uuid4
import Dates: DateTime
"""
generate_uuid()::String
Generate a UUID.
# Returns
A UUID as a string.
"""
generate_uuid()::String = string(uuid4())
"""
send_receive(db::Surreal; method::String, params::Union{Nothing, Tuple, AbstractVector}=nothing)
Send a request to the Surreal server and receive a response.
# Arguments
`method`: The RPC method name to call.
`params`: The parameters of the request, if any.
# Returns
The response from the Surreal server.
# Raises
Exception: If the client is not connected to the Surreal server.
Exception: If the response contains an error.
"""
function send_receive(db::Surreal; method::String, params::Union{Nothing, Tuple, AbstractVector}=nothing)
# Check Connection State
if db.client_state != CONNECTED
throw(ErrorException("Not connected to Surreal server."))
end
# TODO
# typed_params = type_annotate(params)
# set sending data to server as json
data_send = isnothing(params) ? Dict("id"=>generate_uuid(), "method"=>method) : Dict("id"=>generate_uuid(), "method"=>method, "params"=>params)
# take available websocket from channel, if not available, wait for it
ws = take!(db.ws_ch)
send(ws, json(data_send))
data_receive = receive(ws)
# put websocket back to channel
put!(db.ws_ch, ws)
# Parse response
response = parse(data_receive)
# Check response has Error
haskey(response, "error") && throw(ErrorException("SurrealDB Error:" * response["error"]["message"]))
data_send["id"] != response["id"] && throw(ErrorException(
"Response ID does not match request ID. sent id is $(data_send["id"]) but response id is $(response["id"]))"))
return response["result"] |> parse_chain
end
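# Design note (added comment): every request is tagged with a fresh UUID and the
# response "id" is checked against it, so a websocket taken from db.ws_ch can be
# shared by concurrent tasks without mixing up replies; the socket is returned to
# the channel before the response is parsed.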
"""
parse_chain(v::Vector{AbstractDict})
Recursively parse each dictionary in `v`; a single-element vector is unwrapped and its parsed element returned.
"""
function parse_chain(v::Vector{AbstractDict})
if length(v) == 1
return parse_chain(v[1])
else
return parse_chain.(v)
end
end
"""
parse_chain(d::AbstractDict)
Recursively parse every value of `d` in place and return `d`.
"""
function parse_chain(d::AbstractDict)
for (k, v) in d
d[k] = parse_chain(v)
end
return d
end
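# Illustration (added comment, not executable documentation): given a decoded JSON
# fragment such as
#   Dict{String,Any}("price" => "12.5", "created" => "2023-01-01T00:00:00")
# parse_chain returns
#   Dict{String,Any}("price" => 12.5, "created" => DateTime("2023-01-01T00:00:00"))
# leaving values that are neither numbers nor date-times untouched.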
"""
parse_chain(s)
Try to parse a scalar (typically a `String` from a JSON response) first as a `Float64` and then as a `DateTime`; return it unchanged if neither parse succeeds.
"""
function parse_chain(s)
tryparse_raw(Float64, s) |> s ->
tryparse_raw(DateTime, s) |> s ->
identity(s)
end
"""
parse_chain(s::AbstractVector)
Recursively parse each element of `s`.
"""
function parse_chain(s::AbstractVector)
parse_chain.(s)
end
"""
tryparse_raw(dist_type::DataType, s::String)::Union{dist_type, String}
Try to parse `s` as `dist_type`; return the parsed value, or `s` itself if parsing fails.
"""
function tryparse_raw(dist_type::DataType, s::String)::Union{dist_type, String}
res = tryparse(dist_type, s)
return res === nothing ? s : res
end
# match s except for string
"""
tryparse_raw(dist_type::DataType, s)
Fallback for non-`String` inputs: return `s` unchanged.
"""
tryparse_raw(dist_type::DataType, s) = s | SurrealdbWS | https://github.com/YuriMiyamori/SurrealdbWS.jl.git |
|
[
"MIT"
] | 0.2.3 | de077e80f48cffe05bf6b450f495616753f0e8b1 | code | 2085 | @enum ConnectionState CONNECTING=0 CONNECTED=1 DISCONNECTED=2
"""
Surreal(url::String, token::Union{Nothing, String}, client_state::ConnectionState, ws_ch::Union{Nothing, Channel{WebSocket}}, npool::Int)
A struct represents a Surreal server.
# Constructors
```julia
Surreal()
Surreal(url::String)
```
# Keyword arguments
- url: The URL of the Surreal server.
# Examples
```jldoctest
db = Surreal("ws://127.0.0.1:8000/rpc")
db = Surreal("http://cloud.surrealdb.com/rpc")
```
"""
mutable struct Surreal
url::String
token::Union{Nothing, String}
client_state::ConnectionState
ws_ch ::Union{Nothing, Channel{WebSocket}}
npool::Int
end
"""
Surreal(url::String; npool=1)::Surreal
A struct represents a Surreal server.
# Constructors
```julia
Surreal(url::String)
```
# Keyword arguments
- url: The URL of the Surreal server.
- npool: The number of connection pool. Default is 1.
# Examples
```jldoctest
db = Surreal("ws://localhost:8000/rpc", npool=20)
db = Surreal("http://cloud.surrealdb.com/rpc")
```
"""
function Surreal(url::String; npool=1)::Surreal
return Surreal(
url,
nothing,
CONNECTING,
nothing,
npool
)
end
"""
Surreal(f::Function, url::String; npool=1)
Apply the function `f` to the result of `Surreal(url, npool)` and close the db
descriptor upon completion.
# Examples
```jldoctest
julia> Surreal("ws://localhost:8000/rpc") do db
connect(db)
signin(db,user="root", pass="root")
use(db, namespace="test", database="test")
create(db, thing="person",
data = Dict("user"=> "me","pass"=> "safe","marketing"=> true,
"tags"=> ["python", "documentation"]))
end
```
"""
function Surreal(f::Function, url::String; npool=1)
db = Surreal(url, npool=npool)
try
f(db)
finally
close(db)
end
end
"""
correct_url(url::String)::String
Correct the URL to the websocket RPC format, e.g. "http://localhost:8000" becomes "ws://localhost:8000/rpc" and "https://..." becomes "wss://.../rpc".
"""
function correct_url(url::String)::String
if occursin("https", url)
url = replace(url, "https://" => "wss://")
elseif occursin("http", url)
url = replace(url, "http://" => "ws://")
end
if !occursin("/rpc", url)
url *= "/rpc"
end
url
end | SurrealdbWS | https://github.com/YuriMiyamori/SurrealdbWS.jl.git |
|
[
"MIT"
] | 0.2.3 | de077e80f48cffe05bf6b450f495616753f0e8b1 | code | 446 |
@testset "notebook" begin
# Surreal
db = Surreal(URL)
@test db.client_state == SurrealdbWS.ConnectionState(0)
#connect
connect(db)
@test db.client_state == SurrealdbWS.ConnectionState(1)
# signin
res = signin(db, user="root", pass="root")
@test res===nothing
@show("sign in ", res)
#info
# @test info(db)===nothing
#ping
# @test ping(db)===nothing
#close
close(db)
@test db.client_state == SurrealdbWS.ConnectionState(2)
end | SurrealdbWS | https://github.com/YuriMiyamori/SurrealdbWS.jl.git |
|
[
"MIT"
] | 0.2.3 | de077e80f48cffe05bf6b450f495616753f0e8b1 | code | 173 | include("../src/SurrealdbWS.jl")
using .SurrealdbWS
using Test
const URL = "ws://localhost:8001"
# include("notebook.jl")
include("script.jl")
# import Pkg; Pkg.add("HTTP") | SurrealdbWS | https://github.com/YuriMiyamori/SurrealdbWS.jl.git |
|
[
"MIT"
] | 0.2.3 | de077e80f48cffe05bf6b450f495616753f0e8b1 | code | 4547 | import Base.Threads: @spawn
import RDatasets: dataset
import Random: rand,seed!
@testset "open close manually" begin
db = Surreal(URL)
@test db.client_state == SurrealdbWS.ConnectionState(0)
#conncet
connect(db)
@test db.client_state == SurrealdbWS.ConnectionState(1)
# close
close(db)
@test db.client_state == SurrealdbWS.ConnectionState(2)
end
Surreal(URL, npool=5) do db
@testset "connect" begin
connect(db, timeout=30)
end
@testset "sign in" begin
res = signin(db, user="root", pass="root")
@test res === nothing
end
@testset "use" begin
res = use(db, namespace="test", database="test")
@test res === nothing
end
end
#DEFINE TABLE user for authenticate
Surreal(URL) do db
connect(db, timeout=0.1)
res = signin(db, user="root", pass="root")
res = use(db, namespace="test", database="test")
# config for signup...
user_set = query(db, sql=
"""
--sql
DEFINE TABLE user SCHEMAFULL
PERMISSIONS
FOR select, update WHERE id = \$auth.id,
FOR create, delete NONE;
DEFINE FIELD user ON user TYPE string;
DEFINE FIELD pass ON user TYPE string;
DEFINE INDEX idx_user ON user COLUMNS user UNIQUE;
DEFINE SCOPE allusers
SESSION 10m
SIGNUP ( CREATE user SET user = \$user, pass = crypto::argon2::generate(\$pass))
SIGNIN ( SELECT * FROM user WHERE user = \$user AND crypto::argon2::compare(pass, \$pass) )
;
"""
)
end
@testset "sign up" begin
global token = Surreal(URL) do db
connect(db, timeout=30)
res = signup(db, vars=Dict("ns" =>"test", "db"=>"test", "sc" => "allusers",
"user"=>"test_user" * string(rand(UInt16)), "pass"=>"test_user_pass"))
@test res === nothing
db.token
end
end
@testset "authenticate" begin
Surreal(URL) do db
connect(db, timeout=30)
res = authenticate(db, token=token)
@test res === nothing
end
end
seed!(42)
#sync create
df_boston = dataset("MASS", "Boston")
Surreal(URL, npool=1) do db
connect(db, timeout=30)
signin(db, user="root", pass="root")
use(db, namespace="test", database="test")
@testset "delete" begin
res = delete(db, thing="price")
@test res !== nothing
end
@testset "set" begin
res = set(db, params=("lang","Julia"))
@test res === nothing
end
@testset "unset" begin
res = unset(db, name="lang")
@test res === nothing
end
@testset "sync create" begin
for (i, d) in enumerate(eachrow(df_boston[1:2,:]))
data = Dict((names(d) .=> values(d)))
thing = "price:$(i)"
res = create(db, thing=thing, data=data)
@test res !== nothing
end
end
@testset "insert" begin
res = insert(db, thing="price", data=Dict("price2"=>100.0))
println(res)
@test res !== nothing
end
@testset "update" begin
res = update(db, thing="price", data=Dict("price"=>1000.0))
@test res !== nothing
end
# @testset "patch" begin
# res = patch(db, thing="price",
# data=Dict(
# "city"=> "Boston",
# "tags"=> ["Harrison, D. and Rubinfeld, D.L. (1978)", "house"]
# )
# )
# @test res !== nothing
# end
@testset "merge" begin
res = merge(db, thing="price", data=Dict("in sale"=>true))
@test res !== nothing
end
@testset "select" begin
res = select(db, thing="price:1")
println(res)
@test res !== nothing
end
@testset "info" begin
res = info(db)
@test res === nothing
end
@testset "ping" begin
res = ping(db)
@test res === nothing
end
@testset "invalidate" begin
res = invalidate(db)
@test res === nothing
end
end
# async create
seed!(43)
Surreal(URL, npool=5) do db
connect(db, timeout=30)
signin(db, user="root", pass="root")
use(db, namespace="test", database="test")
@testset "async create" begin
res = []
@sync begin
for (i, d) in enumerate(eachrow(df_boston))
data = Dict((names(d) .=> values(d)))
thing = "price:$(i)_$(string(rand(UInt16)))"
push!(res, @spawn create(db, thing=thing, data=data))
end
end
res = fetch.(res)
for val in res
            @test val !== nothing
end
end
end
@testset "errors" begin
db = Surreal("ws://localhost:8099")
@test_throws SurrealdbWS.TimeoutError connect(db, timeout=1)
db = Surreal("https://localhost:8099")
@test_throws SurrealdbWS.TimeoutError connect(db, timeout=1)
db = Surreal("http://localhost:8099")
@test_throws SurrealdbWS.TimeoutError connect(db, timeout=1)
@test_throws TaskFailedException info(db)
end
| SurrealdbWS | https://github.com/YuriMiyamori/SurrealdbWS.jl.git |
|
[
"MIT"
] | 0.2.3 | de077e80f48cffe05bf6b450f495616753f0e8b1 | docs | 1833 | # SurrealdbWS
[](https://travis-ci.com/YuriMiyamori/SurrealdbWS.jl)
[](https://codecov.io/gh/YuriMiyamori/SurrealdbWS.jl)
The [SurrealDB](https://surrealdb.com) driver for Julia via WebSocket (unofficial)
# Getting Started
First [install SurrealDB](https://surrealdb.com/install) if you haven't already.
## Installation
```julia
using Pkg
Pkg.add("SurrealdbWS")
```
## Usage
### Do-Block Syntax
```julia
using SurrealdbWS
Surreal("ws://localhost:8000/rpc") do db
connect(db)
signin(db, user="root", pass="root")
use(db, namespace="test", database="test")
create(db, thing="person",
data = Dict("user"=> "Myra Eggleston",
"email"=> "eggleston@domain.com",
"marketing"=> true,
"tags"=> ["Julialang", "documentation", "CFD"]
)
)
create(db, thing="person",
data = Dict("user"=> "Domenico Risi",
"email"=> "domenico.risi@domain.com",
"marketing"=> false,
"tags"=> ["julialang", "bioinformatics"],
)
)
change(db, thing="person",data = Dict("computer science"=> true,))
    select(db, thing="person")
end
```
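
### Raw queries
Arbitrary SurrealQL statements can be sent with `query`. A minimal sketch, assuming the same local server and credentials as above:
```julia
using SurrealdbWS

Surreal("ws://localhost:8000/rpc") do db
    connect(db)
    signin(db, user="root", pass="root")
    use(db, namespace="test", database="test")
    result = query(db, sql="SELECT * FROM person WHERE marketing = true")
end
```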
### Close manually (e.g. for notebooks)
```julia
using SurrealdbWS
db = Surreal("ws://localhost:8000/rpc")
connect(db)
signin(db,user="root", pass="root")
use(db, namespace="test", database="test")
create(db, thing="person",
data = Dict("user"=> "me","pass"=> "safe","marketing"=> true,
"tags"=> ["python", "documentation"]))
delete(db, thing="person")
close(db)
``` | SurrealdbWS | https://github.com/YuriMiyamori/SurrealdbWS.jl.git |
|
[
"MIT"
] | 0.2.1 | 40b9edad2e5287e05bd413a38f61a8ff55b9557b | code | 279 | using Documenter, RoundingEmulator
makedocs(
sitename = "RoundingEmulator.jl",
pages = [
"Home" => "index.md",
"Functions" => "functions.md",
"References" => "references.md"
]
)
deploydocs(repo = "github.com/matsueushi/RoundingEmulator.jl")
| RoundingEmulator | https://github.com/matsueushi/RoundingEmulator.jl.git |
|
[
"MIT"
] | 0.2.1 | 40b9edad2e5287e05bd413a38f61a8ff55b9557b | code | 161 | module RoundingEmulator
export add_up, add_down, sub_up, sub_down, mul_up, mul_down, div_up, div_down, sqrt_up, sqrt_down
include("rounding.jl")
end # module
| RoundingEmulator | https://github.com/matsueushi/RoundingEmulator.jl.git |
|
[
"MIT"
] | 0.2.1 | 40b9edad2e5287e05bd413a38f61a8ff55b9557b | code | 9232 | using Base: add12, mul12, significand_bits
using Base.Math: ldexp
const SysFloat = Union{Float32, Float64}
# N_min^s : The smallest positive subnormal number (=nextfloat(zero(T)))
# N_max^s : The largest positive subnormal number (=prevfloat(floatmin(T)))
# N_min^n : The smallest positive normal number (=floatmin(T))
# N_max^n : The largest positive normal number (=floatmax(T))
# constants
for T in (Float32, Float64)
# log_2(N_min^s)
# N_min^s = 2 * 2^{-precision(T)} * N_min^n
@eval exponent_smallest_subnormal(::Type{$T}) = $(Int(log2(nextfloat(zero(T)))))
@eval exponent_product_errorfree_threshold(::Type{$T}) = $(exponent_smallest_subnormal(T) + 2 * significand_bits(T))
@eval product_errorfree_threshold(::Type{$T}) = $(ldexp(one(T), exponent_product_errorfree_threshold(T)))
@eval exponent_product_underflow_mult(::Type{$T}) = $(ceil(Int, -exponent_smallest_subnormal(T)//2))
@eval product_underflow_mult(::Type{$T}) = $(ldexp(one(T), exponent_product_underflow_mult(T)))
@eval exponent_quotient_errorfree_threshold(::Type{$T}) = $(-exponent_smallest_subnormal(T) - 3 * significand_bits(T))
@eval quotient_errorfree_threshold(::Type{$T}) = $(ldexp(one(T), exponent_quotient_errorfree_threshold(T)))
@eval exponent_quotient_underflow_mult(::Type{$T}) = $(2 * significand_bits(T) + 1)
@eval quotient_underflow_mult(::Type{$T}) = $(ldexp(one(T), exponent_quotient_underflow_mult(T)))
@eval inverse_smallest_normal(::Type{$T}) = $(ldexp(one(T), precision(T)))
end
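# For reference, the values generated above for Float64 are:
#   exponent_smallest_subnormal(Float64)  == -1074
#   product_errorfree_threshold(Float64)  == 2.0^-970
#   product_underflow_mult(Float64)       == 2.0^537
#   quotient_errorfree_threshold(Float64) == 2.0^918
#   quotient_underflow_mult(Float64)      == 2.0^105
#   inverse_smallest_normal(Float64)      == 2.0^53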
"""
add_up(a, b)
Computes `a + b` with the rounding mode
[`Base.Rounding.RoundUp`](https://docs.julialang.org/en/v1/base/math/#Base.Rounding.RoundUp).
```jldoctest
julia> add_up(0.1, 0.2)
0.30000000000000004
julia> add_up(10.0^308, 10.0^308)
Inf
julia> add_up(-10.0^308, -10.0^308)
-1.7976931348623157e308
julia> add_up(-0.1, 0.1)
0.0
julia> add_up(0.0, 0.0)
0.0
julia> add_up(0.0, -0.0)
0.0
julia> add_up(-0.0, -0.0)
-0.0
```
"""
function add_up(a::T, b::T) where {T<:SysFloat}
x, y = add12(a, b) # twosum
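    # add12 gives the rounded-to-nearest sum x and the exact residual y = (a + b) - x.
    # A positive residual means the true sum lies above x, so rounding up takes
    # nextfloat(x); the isinf branch maps an overflow to -Inf of a finite-argument
    # sum back to -floatmax, since rounding up never reaches -Inf for finite inputs.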
if isinf(x)
ifelse(x == typemin(x) && isfinite(a) && isfinite(b), -floatmax(x), x)
else
y > zero(y) ? nextfloat(x) : x
end
end
"""
add_down(a, b)
Computes `a + b` with the rounding mode
[`Base.Rounding.RoundDown`](https://docs.julialang.org/en/v1/base/math/#Base.Rounding.RoundDown).
```jldoctest
julia> add_down(0.1, 0.2)
0.3
julia> add_down(10.0^308, 10.0^308)
1.7976931348623157e308
julia> add_down(-10.0^308, -10.0^308)
-Inf
julia> add_down(-0.1, 0.1)
-0.0
julia> add_down(0.0, 0.0)
0.0
julia> add_down(0.0, -0.0)
-0.0
julia> add_down(-0.0, -0.0)
-0.0
```
"""
function add_down(a::T, b::T) where {T<:SysFloat}
x, y = add12(a, b) # twosum
if isinf(x)
ifelse(x == typemax(x) && isfinite(a) && isfinite(b), floatmax(x), x)
elseif y < zero(y)
prevfloat(x)
else
ifelse(iszero(x) && (signbit(a) || signbit(b)), -zero(x), x)
end
end
"""
sub_up(a, b)
Computes `a - b` with the rounding mode
[`Base.Rounding.RoundUp`](https://docs.julialang.org/en/v1/base/math/#Base.Rounding.RoundUp).
```jldoctest
julia> sub_up(-0.1, 0.2)
-0.3
julia> sub_up(-10.0^308, 10.0^308)
-1.7976931348623157e308
julia> sub_up(10.0^308, -10.0^308)
Inf
julia> sub_up(0.1, 0.1)
0.0
julia> sub_up(0.0, 0.0)
0.0
julia> sub_up(0.0, -0.0)
0.0
julia> sub_up(-0.0, 0.0)
-0.0
julia> sub_up(-0.0, -0.0)
0.0
```
"""
sub_up(a::T, b::T) where {T<:SysFloat} = add_up(a, -b)
"""
sub_down(a, b)
Computes `a - b` with the rounding mode
[`Base.Rounding.RoundDown`](https://docs.julialang.org/en/v1/base/math/#Base.Rounding.RoundDown).
```jldoctest
julia> sub_down(-0.1, 0.2)
-0.30000000000000004
julia> sub_down(-10.0^308, 10.0^308)
-Inf
julia> sub_down(10.0^308, -10.0^308)
1.7976931348623157e308
julia> sub_down(0.1, 0.1)
-0.0
julia> sub_down(0.0, 0.0)
-0.0
julia> sub_down(0.0, -0.0)
0.0
julia> sub_down(-0.0, 0.0)
-0.0
julia> sub_down(-0.0, -0.0)
-0.0
```
"""
sub_down(a::T, b::T) where {T<:SysFloat} = add_down(a, -b)
"""
mul_up(a, b)
Computes `a * b` with the rounding mode
[`Base.Rounding.RoundUp`](https://docs.julialang.org/en/v1/base/math/#Base.Rounding.RoundUp).
```jldoctest
julia> mul_up(0.1, 0.2)
0.020000000000000004
julia> mul_up(10.0^308, 10.0^308)
Inf
julia> mul_up(10.0^308, -10.0^308)
-1.7976931348623157e308
julia> mul_up(5.0e-324, 5.0e-324)
5.0e-324
julia> mul_up(-0.1, 0.1)
-0.01
julia> mul_up(0.0, 0.0)
0.0
julia> mul_up(0.0, -0.0)
-0.0
julia> mul_up(-0.0, -0.0)
0.0
```
"""
function mul_up(a::T, b::T) where {T<:SysFloat}
x, y = mul12(a, b)
if isinf(x)
ifelse(x == typemin(x) && isfinite(a) && isfinite(b), -floatmax(x), x)
elseif abs(x) > product_errorfree_threshold(T) # not zero(x): (a, b) = (-2.1634867667116802e-200, 1.6930929484402486e-119) fails
y > zero(y) ? nextfloat(x) : x
else
mult = product_underflow_mult(T)
s, s2 = mul12(a * mult, b * mult)
t = (x * mult) * mult
t < s || (t == s && s2 > zero(s2)) ? nextfloat(x) : x
end
end
"""
mul_down(a, b)
Computes `a * b` with the rounding mode
[`Base.Rounding.RoundDown`](https://docs.julialang.org/en/v1/base/math/#Base.Rounding.RoundDown).
```jldoctest
julia> mul_down(0.1, 0.2)
0.02
julia> mul_down(10.0^308, 10.0^308)
1.7976931348623157e308
julia> mul_down(10.0^308, -10.0^308)
-Inf
julia> mul_down(5.0e-324, 5.0e-324)
0.0
julia> mul_down(-0.1, 0.1)
-0.010000000000000002
julia> mul_down(0.0, 0.0)
0.0
julia> mul_down(0.0, -0.0)
-0.0
julia> mul_down(-0.0, -0.0)
0.0
```
"""
function mul_down(a::T, b::T) where {T<:SysFloat}
x, y = mul12(a, b)
if isinf(x)
ifelse(x == typemax(x) && isfinite(a) && isfinite(b), floatmax(x), x)
elseif abs(x) > product_errorfree_threshold(T) # not zero(x): (a, b) = (6.640350825165134e-116, -1.1053488936824272e-202) fails
y < zero(y) ? prevfloat(x) : x
else
mult = product_underflow_mult(T)
s, s2 = mul12(a * mult, b * mult)
t = (x * mult) * mult
t > s || (t == s && s2 < zero(s2)) ? prevfloat(x) : x
end
end
"""
div_up(a, b)
Computes `a / b` with the rounding mode
[`Base.Rounding.RoundUp`](https://docs.julialang.org/en/v1/base/math/#Base.Rounding.RoundUp).
```jldoctest
julia> div_up(0.1, 0.3)
0.33333333333333337
julia> div_up(2.0^-100, 2.0^1000)
5.0e-324
julia> div_up(-0.0, 1.0)
-0.0
```
"""
function div_up(a::T, b::T) where {T<:SysFloat}
if iszero(a) || iszero(b) || isinf(a) || isinf(b) || isnan(a) || isnan(b)
a / b
else
# if b < 0, flip sign of a and b
a = flipsign(a, b)
b = abs(b)
if abs(a) < product_errorfree_threshold(T) && abs(b) < quotient_errorfree_threshold(T)
mult = quotient_underflow_mult(T)
a *= mult
b *= mult
end
d = a / b
x, y = mul12(d, b)
x < a || (x == a && y < zero(y)) ? nextfloat(d) : d
end
end
"""
div_down(a, b)
Computes `a / b` with the rounding mode
[`Base.Rounding.RoundDown`](https://docs.julialang.org/en/v1/base/math/#Base.Rounding.RoundDown).
```jldoctest
julia> div_down(0.1, 0.3)
0.3333333333333333
julia> div_down(2.0^-100, 2.0^1000)
0.0
julia> div_down(-0.0, 1.0)
-0.0
```
"""
function div_down(a::T, b::T) where {T<:SysFloat}
if iszero(a) || iszero(b) || isinf(a) || isinf(b) || isnan(a) || isnan(b)
a / b
else
# if b < 0, flip sign of a and b
a = flipsign(a, b)
b = abs(b)
if abs(a) < product_errorfree_threshold(T) && abs(b) < quotient_errorfree_threshold(T)
mult = quotient_underflow_mult(T)
a *= mult
b *= mult
end
d = a / b
x, y = mul12(d, b)
x > a || (x == a && y > zero(y)) ? prevfloat(d) : d
end
end
"""
sqrt_up(a)
Computes `sqrt(a)` with the rounding mode
[`Base.Rounding.RoundUp`](https://docs.julialang.org/en/v1/base/math/#Base.Rounding.RoundUp).
```jldoctest
julia> sqrt_up(2.0)
1.4142135623730951
julia> sqrt_up(0.0)
0.0
julia> sqrt_up(-0.0)
-0.0
```
"""
function sqrt_up(a::SysFloat)
d = sqrt(a)
if isinf(d)
typemax(d)
elseif a < product_errorfree_threshold(typeof(a))
invn = inverse_smallest_normal(typeof(a))
a2 = a * invn^2
d2 = d * invn
x, y = mul12(d2, d2)
x < a2 || (x == a2 && y < zero(y)) ? nextfloat(d) : d
else
x, y = mul12(d, d)
x < a || (x == a && y < zero(y)) ? nextfloat(d) : d
end
end
"""
sqrt_down(a)
Computes `sqrt(a)` with the rounding mode
[`Base.Rounding.RoundDown`](https://docs.julialang.org/en/v1/base/math/#Base.Rounding.RoundDown).
```jldoctest
julia> sqrt_down(2.0)
1.414213562373095
julia> sqrt_down(0.0)
0.0
julia> sqrt_down(-0.0)
-0.0
```
"""
function sqrt_down(a::SysFloat)
d = sqrt(a)
if isinf(d)
typemax(d)
elseif a < product_errorfree_threshold(typeof(a))
invn = inverse_smallest_normal(typeof(a))
a2 = a * invn^2
d2 = d * invn
x, y = mul12(d2, d2)
x > a2 || (x == a2 && y > zero(y)) ? prevfloat(d) : d
else
x, y = mul12(d, d)
x > a || (x == a && y > zero(y)) ? prevfloat(d) : d
end
end
| RoundingEmulator | https://github.com/matsueushi/RoundingEmulator.jl.git |
|
[
"MIT"
] | 0.2.1 | 40b9edad2e5287e05bd413a38f61a8ff55b9557b | code | 94 | using RoundingEmulator
using Test
include("special_values.jl")
include("setrounding_raw.jl")
| RoundingEmulator | https://github.com/matsueushi/RoundingEmulator.jl.git |
|
[
"MIT"
] | 0.2.1 | 40b9edad2e5287e05bd413a38f61a8ff55b9557b | code | 3215 | using Base.Rounding: setrounding_raw, to_fenv
using Printf
function compare_calc_raw(op, updown, calc, raw, args...)
if isequal(calc, raw)
true
else
@info("Erorr", op, updown)
for (i, v) in enumerate(args...)
@info(@sprintf("a%d = %0.18e, bit rep : %s", i, v, bitstring(v)))
end
@info(@sprintf("calc = %0.18e, bit rep : %s", calc, bitstring(calc)))
@info(@sprintf("raw = %0.18e, bit rep : %s", raw, bitstring(raw)))
false
end
end
function rounding_check(op, base_op, arrays...)
elt = eltype(first(arrays))
setrounding_raw(elt, to_fenv(RoundNearest))
@eval begin
up_calc = broadcast($(Symbol(op, "_up")), $(arrays...))
down_calc = broadcast($(Symbol(op, "_down")), $(arrays...))
setrounding_raw($elt, to_fenv(RoundUp))
up_raw = broadcast($base_op, $(arrays...))
setrounding_raw($elt, to_fenv(RoundDown))
down_raw = broadcast($base_op, $(arrays...))
end
# Compare
for (calc, raw, args) in zip(up_calc, up_raw, zip(arrays...))
@test compare_calc_raw(op, "up", calc, raw, args)
end
for (calc, raw, args) in zip(down_calc, down_raw, zip(arrays...))
@test compare_calc_raw(op, "down", calc, raw, args)
end
setrounding_raw(elt, to_fenv(RoundNearest))
end
rounding_check_unary(a::AbstractVector) = rounding_check(:sqrt, :sqrt, a)
rounding_check_unary(a) = rounding_check_unary([a])
function rounding_check_binary(a::T, b::T) where {T<:AbstractVector}
for (op, base_op) in zip((:add, :sub, :mul, :div), (:+, :-, :*, :/))
rounding_check(op, base_op, a, b)
end
end
rounding_check_binary(a, b) = rounding_check_binary([a], [b])
special_value_list(T::Type) = [
zero(T), -zero(T), # 0.0, -0.0
one(T), -one(T), # 1.0, -1.0
nextfloat(zero(T)), prevfloat(zero(T)), # N_min^s, -N_min^s
prevfloat(floatmin(T)), nextfloat(-floatmin(T)), # N_max^s, -N_max^s
floatmin(T), -floatmin(T), # N_min^n, -N_min^n
floatmax(T), -floatmax(T), # N_max^n, -N_max^n
eps(T), -eps(T), # machine epsilon
typemax(T), typemin(T), # Inf, -Inf
T(NaN) # NaN
]
for T in (Float64, Float32)
@testset "$(T), Special Cases" begin
special_values = special_value_list(T)
len = Base.length(special_values)
a = repeat(special_values, len)
b = sort(a)
rounding_check_unary(filter(x->x ≥ zero(x), special_values)) # sqrt
rounding_check_binary(a, b)
end
end
for n in 3:6
N = 10^n
for T in (Float64, Float32)
@testset "$(T), Random Sampling, 10^$(n)" begin
rand_a = reinterpret.(T, rand(Base.uinttype(T), N))
rand_b = reinterpret.(T, rand(Base.uinttype(T), N))
rounding_check_unary(abs.(rand_a))
rounding_check_unary(abs.(rand_b))
rounding_check_binary(rand_a, rand_b)
rounding_check_binary(rand_b, rand_a)
end
end
end
| RoundingEmulator | https://github.com/matsueushi/RoundingEmulator.jl.git |
|
[
"MIT"
] | 0.2.1 | 40b9edad2e5287e05bd413a38f61a8ff55b9557b | code | 3127 | for T in (Float32, Float64)
@testset "$(T): Signed zero" begin
@test isequal(add_up(zero(T), zero(T)), zero(T))
@test isequal(add_up(zero(T), -zero(T)), zero(T))
@test isequal(add_up(-zero(T), zero(T)), zero(T))
@test isequal(add_up(-zero(T), -zero(T)), -zero(T))
@test isequal(add_down(zero(T), zero(T)), zero(T))
@test isequal(add_down(zero(T), -zero(T)), -zero(T))
@test isequal(add_down(-zero(T), zero(T)), -zero(T))
@test isequal(add_down(-zero(T), -zero(T)), -zero(T))
@test isequal(sub_up(zero(T), zero(T)), zero(T))
@test isequal(sub_up(zero(T), -zero(T)), zero(T))
@test isequal(sub_up(-zero(T), zero(T)), -zero(T))
@test isequal(sub_up(-zero(T), -zero(T)), zero(T))
@test isequal(sub_down(zero(T), zero(T)), -zero(T))
@test isequal(sub_down(zero(T), -zero(T)), zero(T))
@test isequal(sub_down(-zero(T), zero(T)), -zero(T))
@test isequal(sub_down(-zero(T), -zero(T)), -zero(T))
@test isequal(mul_up(zero(T), zero(T)), zero(T))
@test isequal(mul_up(zero(T), -zero(T)), -zero(T))
@test isequal(mul_up(-zero(T), zero(T)), -zero(T))
@test isequal(mul_up(-zero(T), -zero(T)), zero(T))
@test isequal(mul_down(zero(T), zero(T)), zero(T))
@test isequal(mul_down(zero(T), -zero(T)), -zero(T))
@test isequal(mul_down(-zero(T), zero(T)), -zero(T))
@test isequal(mul_down(-zero(T), -zero(T)), zero(T))
@test isequal(div_up(zero(T), zero(T)), T(NaN))
@test isequal(div_up(zero(T), -zero(T)), T(NaN))
@test isequal(div_up(-zero(T), zero(T)), T(NaN))
@test isequal(div_up(-zero(T), -zero(T)), T(NaN))
@test isequal(div_down(zero(T), zero(T)), T(NaN))
@test isequal(div_down(zero(T), -zero(T)), T(NaN))
@test isequal(div_down(-zero(T), zero(T)), T(NaN))
@test isequal(div_down(-zero(T), -zero(T)), T(NaN))
@test isequal(sqrt_up(zero(T)), zero(T))
@test isequal(sqrt_down(-zero(T)), -zero(T))
end
end
@testset "Corner cases" begin
# TODO
# Add tests for Float32
@testset "twosum intermediate overflow" begin
# http://verifiedby.me/adiary/09
a = 3.5630624444874539e+307
b = -floatmax(Float64)
x = a + b
@test isfinite(x)
tmp = x - a
@test isinf(tmp)
@test isequal(add_up(a, b), -1.4413868904135702e308)
@test isequal(add_down(a, b), -1.4413868904135704e308)
end
@testset "twoprod intermediate overflow" begin
# http://verifiedby.me/adiary/09
function split(a)
tmp = a * (2.0^27 + 1.0)
x = tmp - (tmp - a)
y = a - x
x, y
end
a = 6.929001713869936e+236
b = 2.5944475251952003e+71
x = a * b
@test isfinite(x)
a1, _ = split(a)
b1, _ = split(a)
tmp = a1 * b1
@test isinf(tmp)
@test isequal(mul_up(a, b), floatmax(Float64))
@test isequal(mul_down(a, b), prevfloat(floatmax(Float64)))
end
end
| RoundingEmulator | https://github.com/matsueushi/RoundingEmulator.jl.git |
|
[
"MIT"
] | 0.2.1 | 40b9edad2e5287e05bd413a38f61a8ff55b9557b | docs | 744 | # RoundingEmulator.jl
Emulate directed rounding using only the default rounding mode.
[](https://travis-ci.com/matsueushi/RoundingEmulator.jl) [](https://matsueushi.github.io/RoundingEmulator.jl/dev/)
This package is meant to produce exactly the same results as `Rounding.setrounding` ([deprecated](https://github.com/JuliaLang/julia/pull/27166)), without switching rounding modes.
## Requirements
- Julia 1.3 or higher
- `Base.Rounding.get_zero_subnormals() == true`. (See [Base.Rounding.get_zero_subnormals](https://docs.julialang.org/en/v1/base/numbers/#Base.Rounding.get_zero_subnormals))
| RoundingEmulator | https://github.com/matsueushi/RoundingEmulator.jl.git |
|
[
"MIT"
] | 0.2.1 | 40b9edad2e5287e05bd413a38f61a8ff55b9557b | docs | 109 | # Functions
```@docs
add_up
add_down
sub_up
sub_down
mul_up
mul_down
div_up
div_down
sqrt_up
sqrt_down
```
| RoundingEmulator | https://github.com/matsueushi/RoundingEmulator.jl.git |
|
[
"MIT"
] | 0.2.1 | 40b9edad2e5287e05bd413a38f61a8ff55b9557b | docs | 3062 | # RoundingEmulator.jl
Emulate directed rounding using only the default rounding mode.
This package is meant to produce exactly the same results as `Rounding.setrounding` ([deprecated](https://github.com/JuliaLang/julia/pull/27166)), without switching rounding modes.
## Requirements
- Julia 1.3 or higher
- `Base.Rounding.get_zero_subnormals() == true`. (See [Base.Rounding.get_zero_subnormals](https://docs.julialang.org/en/v1/base/numbers/#Base.Rounding.get_zero_subnormals))
## Use
This package provides
* [`add_up`](@ref), [`add_down`](@ref) - Addition
* [`sub_up`](@ref), [`sub_down`](@ref) - Subtraction
* [`mul_up`](@ref), [`mul_down`](@ref) - Multiplication
* [`div_up`](@ref), [`div_down`](@ref) - Division
* [`sqrt_up`](@ref), [`sqrt_down`](@ref) - Square root
`up`: Round up,
`down`: Round down
```julia
julia> using RoundingEmulator
julia> add_up(0.1, 0.2)
0.30000000000000004
julia> bitstring(add_up(0.1, 0.2))
"0011111111010011001100110011001100110011001100110011001100110100"
julia> add_down(0.1, 0.2)
0.3
julia> bitstring(add_down(0.1, 0.2))
"0011111111010011001100110011001100110011001100110011001100110011"
julia> sub_up(-0.1, 0.2)
-0.3
julia> bitstring(sub_up(-0.1, 0.2))
"1011111111010011001100110011001100110011001100110011001100110011"
julia> sub_down(-0.1, 0.2)
-0.30000000000000004
julia> bitstring(sub_down(-0.1, 0.2))
"1011111111010011001100110011001100110011001100110011001100110100"
julia> mul_up(0.1, 0.2)
0.020000000000000004
julia> bitstring(mul_up(0.1, 0.2))
"0011111110010100011110101110000101000111101011100001010001111100"
julia> mul_down(0.1, 0.2)
0.02
julia> bitstring(mul_down(0.1, 0.2))
"0011111110010100011110101110000101000111101011100001010001111011"
julia> div_up(1.0, 3.0)
0.33333333333333337
julia> bitstring(div_up(1.0, 3.0))
"0011111111010101010101010101010101010101010101010101010101010110"
julia> div_down(1.0, 3.0)
0.3333333333333333
julia> bitstring(div_down(1.0, 3.0))
"0011111111010101010101010101010101010101010101010101010101010101"
julia> sqrt_up(2.0)
1.4142135623730951
julia> bitstring(sqrt_up(2.0))
"0011111111110110101000001001111001100110011111110011101111001101"
julia> sqrt_down(2.0)
1.414213562373095
julia> bitstring(sqrt_down(2.0))
"0011111111110110101000001001111001100110011111110011101111001100"
```
## Corner cases
```julia
julia> u = nextfloat(zero(Float64))
5.0e-324
julia> v = floatmax(Float64)
1.7976931348623157e308
julia> v + v
Inf
julia> add_up(v, v)
Inf
julia> add_down(v, v)
1.7976931348623157e308
julia> u * u
0.0
julia> mul_up(u, u)
5.0e-324
julia> mul_down(u, u)
0.0
julia> 1.0 / u
Inf
julia> div_up(1.0, u)
Inf
julia> div_down(1.0, u)
1.7976931348623157e308
```
## Signed zero
`RoundingEmulator` follows the special rules for signed zero specified in the chapter 6.3 of IEEE 754-2019.
```julia
julia> add_up(-1.0, 1.0)
0.0
julia> add_down(-1.0, 1.0)
-0.0
julia> add_up(-0.0, 0.0)
0.0
julia> add_down(-0.0, 0.0)
-0.0
julia> add_up(0.0, 0.0)
0.0
julia> add_down(0.0, 0.0)
0.0
julia> sqrt_up(-0.0)
-0.0
julia> sqrt_down(-0.0)
-0.0
```
| RoundingEmulator | https://github.com/matsueushi/RoundingEmulator.jl.git |
|
[
"MIT"
] | 0.2.1 | 40b9edad2e5287e05bd413a38f61a8ff55b9557b | docs | 948 | # References
* IEEE Computer Society, IEEE Standard for Floating-Point Arithmetic," in IEEE Std 754-2019 (Revision of IEEE 754-2008), pp.1-84, [https://doi.org/10.1109/IEEESTD.2019.8766229](https://doi.org/10.1109/IEEESTD.2019.8766229), 22 July 2019
* Masahide Kashiwagi, *Saikinten marume nomi ni yoru houkou tsuki marume no emulate* [Emulation of Rounded Arithmeticin Rounding to Nearest], [http://verifiedby.me/kv/rounding/emu.pdf](http://verifiedby.me/kv/rounding/emu.pdf), [http://verifiedby.me/adiary/pub/kashi/image/201406/nas2014-slide](http://verifiedby.me/adiary/pub/kashi/image/201406/nas2014-slide).pdf, 2014
* Masahide Kashiwagi, Error Free Transformation (EFT) is NOT error-free, [http://verifiedby.me/adiary/09](http://verifiedby.me/adiary/09), 2014
* [kv - a C++ Library for Verified Numerical Computation](https://github.com/mskashi/kv)
* [JeffreySarnoff/FastRounding.jl](https://github.com/JeffreySarnoff/FastRounding.jl)
| RoundingEmulator | https://github.com/matsueushi/RoundingEmulator.jl.git |
|
[
"BSD-3-Clause"
] | 0.7.8 | cef0143fee828ea605960108f18eed15efb232ef | code | 3239 | """
## Visualization tool for the clawpack output
VisClaw.jl is a Julia package for plotting simulation results of the clawpack.\n
https://github.com/hydrocoast/VisClaw.jl
### Examples
- pathof(VisClaw)/Examples_using_Plots.ipynb
- pathof(VisClaw)/Examples_using_GMT.ipynb
### Author
Takuya Miyashita (miyashita@hydrocoast.jp)\n
Doctoral student, Kyoto University, 2018\n
"""
module VisClaw
using Statistics: mean
using DelimitedFiles: readdlm
using Interpolations: Interpolations
using Printf
using Dates
using NetCDF: NetCDF
using GeometricalPredicates: GeometricalPredicates
using Plots: Plots
## define CLAW path from shell
include("clawpath.jl")
export CLAW
## define structs and basic functions
const KWARG = Dict{Symbol,Any}
const emptyF = Array{Float64}(undef, 0, 0)
const timedict = Dict(:second => 1.0, :minute => 60.0, :hour => 3600.0, :day => 24*3600.0)
const varnameset(D,k,v) = haskey(D,k) ? k : v
include("structclaw.jl")
include("amrutils.jl")
include("replaceunit.jl")
include("converttodatetime.jl")
include("getvarname_nctopo.jl")
## load
include("loaddata.jl")
include("loadtrack.jl")
include("loadtopo.jl")
include("loadfgmaxdata.jl")
include("loadfgmax.jl")
include("loadfort.jl")
include("loadgauge.jl")
## print
include("printtopo.jl")
## convert data
include("gaugemax.jl")
include("gaugeinterp.jl")
include("coarsegridmask.jl")
## setup
include("plotsargs.jl")
include("plotstools.jl")
## plot (using Plots)
include("plots2d.jl")
include("plotscheck.jl")
include("plotstopo.jl")
include("plotsgaugewaveform.jl")
include("plotsgaugevelocity.jl")
include("plotsgaugelocation.jl")
include("plotsfgmax.jl")
include("plotstrack.jl")
## plot (using GMT)
using GMT:GMT
include("gmttools.jl")
include("gmttopo.jl")
include("gmtgauge.jl")
include("gmtsurface.jl")
include("gmtarrows.jl")
include("gmttrack.jl")
## general functions
export geodata, amrdata, surgedata, gaugedata, fgmaxdata, regiondata
export topodata, dtopodata
export loadfgmax
export loadtopo, loaddeform, loaddtopo
export loadgauge
export loadtrack
export loadsurface, loadcurrent, loadstorm
export printtopoESRI, printtopo, printdtopo
export coarsegridmask!
export axesratio
export replaceunit!, converttodatetime!
export gaugemax, gaugeinterp
## functions with Plots.jl
export plotsamr
export plotscheck
export gridnumber!, tilebound!
export plotscoastline, plotscoastline!
export plotsfgmax, plotsfgmax!
export plotstopo, plotstopo!
export plotsdtopo, plotsdtopo!
export plotstoporange, plotstoporange!
export plotsgaugelocation, plotsgaugelocation!
export plotsgaugewaveform, plotsgaugewaveform!
export plotsgaugevelocity, plotsgaugevelocity!
export plotstrack, plotstrack!
export plotsgif, plotssavefig
## functions with GMT.jl
export getR, getR_tile, getJ, geogrd
export landmask_asc, landmask_grd
export tilegrd_xyz, tilegrd, tilegrd_mask
export arrowgrd, arrowscalegrd
export gmttopo
export gmtgaugewaveform, gmtgaugewaveform!
export gmtgaugevelocity, gmtgaugevelocity!
export gmtgaugelocation, gmtgaugelocation!
export gmtgaugeannotation!
export gmttoporange!
export gmtcoastline, gmtcoastline!
export gmttrack, gmttrack!
## uniform-grid interpolation
using PyCall: PyCall
include("scipyinterp.jl")
export interpsurface
end
| VisClaw | https://github.com/hydrocoast/VisClaw.jl.git |
|
[
"BSD-3-Clause"
] | 0.7.8 | cef0143fee828ea605960108f18eed15efb232ef | code | 3830 | ######################################
"""
    x1, x2, y1, y2 = getlims(tile::VisClaw.AMRGrid)
get min/max range of a tile
"""
function getlims(tile::VisClaw.AMRGrid)
return tile.xlow, tile.xlow+(tile.mx-1)*tile.dx, tile.ylow, tile.ylow+(tile.my-1)*tile.dy
end
######################################
"""
x1, x2, y1, y2 = getlims(tiles::Vector{VisClaw.AMRGrid})
get min/max range of tiles
"""
function getlims(tiles::Vector{VisClaw.AMRGrid})
x1 = minimum(getfield.(tiles, :xlow))
y1 = minimum(getfield.(tiles, :ylow))
x2 = maximum(round.(getfield.(tiles, :xlow) .+ getfield.(tiles, :mx).*getfield.(tiles, :dx), digits=4))
y2 = maximum(round.(getfield.(tiles, :ylow) .+ getfield.(tiles, :my).*getfield.(tiles, :dy), digits=4))
return x1, x2, y1, y2
end
######################################
"""
xmesh, ymesh = meshtile(tile::VisClaw.AMRGrid)
generate meshgrids of tile
"""
function meshtile(tile::VisClaw.AMRGrid)
## set the boundary
x = [tile.xlow, tile.xlow+tile.dx*tile.mx]
y = [tile.ylow, tile.ylow+tile.dy*tile.my]
## grid info
xline = collect(Float64, x[1]+0.5tile.dx:tile.dx:x[2]-0.5tile.dx+1e-4)
yline = collect(Float64, y[1]+0.5tile.dy:tile.dy:y[2]-0.5tile.dy+1e-4)
xmesh = repeat(xline', outer=(tile.my,1))
ymesh = repeat(yline, outer=(1,tile.mx))
## return values
return xmesh, ymesh
end
######################################
"""
var = keytile(tile::VisClaw.AMRGrid)
Get the main property name from VisClaw.AMRGrid
"""
function keytile(tile::VisClaw.AMRGrid)
# check
!isa(tile, VisClaw.AMRGrid) && error("Invalid input argument. It must be a type of VisClaw.AMRGrid")
# assign
varset = [:eta, :vel, :slp]
ind = map(T -> isa(tile, T), [VisClaw.SurfaceHeight, VisClaw.Velocity, VisClaw.Storm])
# return value
return varset[ind][1]
end
##########################################################
##########################################################
"""
xvec, yvec, val = tilezmargin(tile::VisClaw.AMRGrid, var::Symbol; digits=4)
Get Z-values of cells including their margins
"""
function tilezmargin(tile::VisClaw.AMRGrid, var::Symbol; digits=4)
## set the boundary
x = [tile.xlow, round(tile.xlow+tile.dx*tile.mx, digits=digits)]
y = [tile.ylow, round(tile.ylow+tile.dy*tile.my, digits=digits)]
## grid info
xvec = collect(LinRange(x[1]-0.5tile.dx, x[2]+0.5tile.dx, tile.mx+2));
yvec = collect(LinRange(y[1]-0.5tile.dy, y[2]+0.5tile.dy, tile.my+2));
xvec = round.(xvec, digits=digits)
yvec = round.(yvec, digits=digits)
## adjust data
val = zeros(tile.my+2,tile.mx+2)
val[2:end-1,2:end-1] = getfield(tile, var)
val[2:end-1,1] = val[2:end-1,2]
val[2:end-1,end] = val[2:end-1,end-1]
val[1,:] = val[2,:]
val[end,:] = val[end-1,:]
# return val
return xvec, yvec, val
end
##########################################################
##########################################################
"""
xvec, yvec, val = tilez(tile::VisClaw.AMRGrid, var::Symbol; digits=4)
Get Z-values of cells at the grid lines
"""
function tilez(tile::VisClaw.AMRGrid, var::Symbol; digits=4)
xvec, yvec, val = VisClaw.tilezmargin(tile, var, digits=digits)
itp = Interpolations.interpolate((yvec, xvec), val, Interpolations.Gridded(Interpolations.Linear()))
## set the boundary
x = [tile.xlow, round(tile.xlow+tile.dx*tile.mx, digits=digits)]
y = [tile.ylow, round(tile.ylow+tile.dy*tile.my, digits=digits)]
xvec = collect(LinRange(x[1], x[2], tile.mx+1));
yvec = collect(LinRange(y[1], y[2], tile.my+1));
xvec = round.(xvec, digits=digits)
yvec = round.(yvec, digits=digits)
val = itp(yvec,xvec);
# return val
return xvec, yvec, val
end
############################################################
| VisClaw | https://github.com/hydrocoast/VisClaw.jl.git |
|
[
"BSD-3-Clause"
] | 0.7.8 | cef0143fee828ea605960108f18eed15efb232ef | code | 394 | ### Define your own clawpack path ###
if haskey(ENV, "CLAW")
const CLAW = ENV["CLAW"]
else
## CLAW="/path/to/top/level/clawpack"
println("ENV[\"CLAW\"] is not defined.")
println("Set the env like the following in your default shell:")
println("export CLAW = \"/path/to/top/level/clawpack\" ")
CLAW = ""
#if !isdir(CLAW); error("CLAW=$CLAW is not correct."); end
end
| VisClaw | https://github.com/hydrocoast/VisClaw.jl.git |
|
[
"BSD-3-Clause"
] | 0.7.8 | cef0143fee828ea605960108f18eed15efb232ef | code | 2748 | ###############################################
"""
poly = tilepolygon(tile::VisClaw.AMRGrid)
generate a polygon data of a tile
"""
function tilepolygon(tile::VisClaw.AMRGrid)
x1, x2, y1, y2 = VisClaw.getlims(tile)
ll = GeometricalPredicates.Point(x1, y1)
lr = GeometricalPredicates.Point(x2, y1)
ur = GeometricalPredicates.Point(x2, y2)
ul = GeometricalPredicates.Point(x1, y2)
poly = GeometricalPredicates.Polygon(ll, lr, ur, ul)
return poly
end
###############################################
"""
coarsegridmask!(tiles::Vector{VisClaw.AMRGrid})
coarsegridmask!(amrs::VisClaw.AMR)
replace values at coarser grids (lower levels) into NaN
"""
function coarsegridmask!(tiles::Vector{VisClaw.AMRGrid})
## return if single tile
length(tiles)==1 && (return tiles)
## levels
level_tiles = getfield.(tiles, :AMRlevel)
maxlevel = maximum(level_tiles)
for tl in tiles
tl.AMRlevel == maxlevel && (continue)
## generate point data for all points in the target tile
x_target, y_target = VisClaw.meshtile(tl)
cellp = GeometricalPredicates.Point.(x_target, y_target)
## get the corners of the target tile
xl, xr, yb, yt = VisClaw.getlims(tl)
## find tiles: one level finer
ind_fine = tl.AMRlevel+1 .== level_tiles
tile_fine = tiles[ind_fine]
nfine = length(tile_fine)
isempty(tile_fine) && (continue)
## fine tiles: inside of the target tiles
ind_inside = trues(nfine)
for j = 1:nfine
x1, x2, y1, y2 = VisClaw.getlims(tile_fine[j])
if x2 < xl || xr < x1 || y2 < yb || yt < y1
ind_inside[j] = false
end
end
tile_fine = tile_fine[ind_inside]
nfine = length(tile_fine)
isempty(tile_fine) && (continue)
## find grid where finer grids are assigned
for j = 1:nfine
poly = VisClaw.tilepolygon(tile_fine[j])
inside = [GeometricalPredicates.inpolygon(poly, cellp[irow, jcol]) for irow=1:tl.my, jcol=1:tl.mx]
if isa(tl, VisClaw.Velocity)
tl.u[inside] .= NaN
tl.v[inside] .= NaN
tl.vel[inside] .= NaN
elseif isa(tl, VisClaw.Storm)
tl.u[inside] .= NaN
tl.v[inside] .= NaN
tl.slp[inside] .= NaN
elseif isa(tl, VisClaw.SurfaceHeight)
tl.eta[inside] .= NaN
end
end
end
return tiles
end
###############################################
function coarsegridmask!(amrs::VisClaw.AMR)
amrs.amr = map(coarsegridmask!, amrs.amr)
return amrs
end
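# Typical use (illustrative): mask cells of coarser AMR levels that are overlapped
# by finer patches so that only the finest available data remain, e.g. before
# gridding or plotting a whole snapshot:
#   coarsegridmask!(amrs)    # amrs::VisClaw.AMR, modified in place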
###############################################
| VisClaw | https://github.com/hydrocoast/VisClaw.jl.git |
|
[
"BSD-3-Clause"
] | 0.7.8 | cef0143fee828ea605960108f18eed15efb232ef | code | 2729 | #################################
"""
converttodatetime!(fgmax::VisClaw.FGmax, t0::Dates.DateTime)
    converttodatetime!(gauge::VisClaw.Gauge, t0::Dates.DateTime)
    converttodatetime!(amrs::VisClaw.AMR, t0::Dates.DateTime)
    converttodatetime!(track::VisClaw.Track, t0::Dates.DateTime)
Time unit converter to Dates.DateTime
"""
function converttodatetime!(fgmax::VisClaw.FGmax, t0::Dates.DateTime)
## return if already converted
isa(fgmax.unittime, Dates.DateTime) && (return fgmax)
## factor
ratio = timedict[fgmax.unittime]
## (temporal) to avoid NaN convert error
fgmax.tD[isnan.(fgmax.tD)] .= 0.0
fgmax.tarrival[isnan.(fgmax.tarrival)] .= 0.0
fgmax.tv[isnan.(fgmax.tv)] .= 0.0
fgmax.tM[isnan.(fgmax.tM)] .= 0.0
fgmax.tMflux[isnan.(fgmax.tMflux)] .= 0.0
fgmax.tDmin[isnan.(fgmax.tDmin)] .= 0.0
## convert
fgmax.unittime = :DateTime
fgmax.tD = @. t0 + Dates.Second(round(ratio*fgmax.tD))
fgmax.tarrival = @. t0 + Dates.Second(round(ratio*fgmax.tarrival))
if !isempty(fgmax.tv)
fgmax.tv = @. t0 + Dates.Second(round(ratio*fgmax.tv))
end
if !isempty(fgmax.tM)
fgmax.tM = @. t0 + Dates.Second(round(ratio*fgmax.tM))
fgmax.tMflux = @. t0 + Dates.Second(round(ratio*fgmax.tMflux))
fgmax.tDmin = @. t0 + Dates.Second(round(ratio*fgmax.tDmin))
end
return fgmax
end
#################################
#################################
function converttodatetime!(gauge::VisClaw.Gauge, t0::Dates.DateTime)
## return if already converted
isa(gauge.unittime, Dates.DateTime) && (return gauge)
## factor
ratio = timedict[gauge.unittime]
## convert
gauge.unittime = :DateTime
gauge.time = @. t0 + Dates.Second(round(ratio*gauge.time))
# return value
return gauge
end
#################################
#################################
function converttodatetime!(amrs::VisClaw.AMR, t0::Dates.DateTime)
## return if already converted
isa(amrs.unittime, Dates.DateTime) && (return amrs)
## factor
ratio = timedict[amrs.unittime]
## convert
amrs.unittime = :DateTime
amrs.timelap = @. t0 + Dates.Second(round(ratio*amrs.timelap))
# return value
return amrs
end
#################################
#################################
function converttodatetime!(track::VisClaw.Track, t0::Dates.DateTime)
## return if already converted
isa(track.unittime, Dates.DateTime) && (return track)
## factor
ratio = timedict[track.unittime]
## convert
track.unittime = :DateTime
track.timelap = @. t0 + Dates.Second(round(ratio*track.timelap))
# return value
return track
end
#################################
| VisClaw | https://github.com/hydrocoast/VisClaw.jl.git |
|
[
"BSD-3-Clause"
] | 0.7.8 | cef0143fee828ea605960108f18eed15efb232ef | code | 416 | """
val_interp = gaugeinterp(gauge::VisClaw.Gauge, time_interp; varname=:eta::Symbol)
interp gauge values on a non-uniform time
"""
function gaugeinterp(gauge::VisClaw.Gauge, time_interp; varname=:eta::Symbol)
val = getfield(gauge,varname)
itp = Interpolations.LinearInterpolation(gauge.time, val, extrapolation_bc=Interpolations.Flat())
val_interp = itp(time_interp)
return val_interp
end
| VisClaw | https://github.com/hydrocoast/VisClaw.jl.git |
|
[
"BSD-3-Clause"
] | 0.7.8 | cef0143fee828ea605960108f18eed15efb232ef | code | 787 | """
gmax = gaugemax(gauge::VisClaw.Gauge)
Maximal values and their occurrence times in a gauge
"""
function gaugemax(gauge::VisClaw.Gauge)
max_AMRlevel = findmax(gauge.AMRlevel)[1]
max_eta = NaN
t_eta = NaN
if !isempty(gauge.eta)
max_eta, tind = findmax(gauge.eta)
t_eta = gauge.time[tind]
end
max_vel = NaN
t_vel = NaN
if !isempty(gauge.u)
vel = sqrt.( (gauge.u).^2 + (gauge.v).^2 )
vel[isnan.(vel)] .= 0.0
max_vel, tind = findmax(vel)
t_vel = gauge.time[tind]
end
gmax = VisClaw.Gaugemax(gauge.label, gauge.id, gauge.loc,
max_AMRlevel, max_eta, max_vel,
t_eta, t_vel,
gauge.unittime)
return gmax
end
| VisClaw | https://github.com/hydrocoast/VisClaw.jl.git |
|
[
"BSD-3-Clause"
] | 0.7.8 | cef0143fee828ea605960108f18eed15efb232ef | code | 2121 | #########################################################
"""
varname_x, varname_y, varname_z = getvarname_nctopo(ncfilename::String)
"""
function getvarname_nctopo(ncfilename::AbstractString)
nc = NetCDF.open(ncfilename)
vardict = nc.vars
## var X
varname_x = nothing
varname_x = varnameset(vardict, "lon", varname_x)
varname_x = varnameset(vardict, "Lon", varname_x)
varname_x = varnameset(vardict, "LON", varname_x)
varname_x = varnameset(vardict, "longitude", varname_x)
varname_x = varnameset(vardict, "Longitude", varname_x)
varname_x = varnameset(vardict, "LONGITUDE", varname_x)
varname_x = varnameset(vardict, "x", varname_x)
varname_x = varnameset(vardict, "X", varname_x)
## check
varname_x == nothing && error("Variable X/LON was not found in $(ncfilename)")
## var Y
varname_y = nothing
varname_y = varnameset(vardict, "lat", varname_y)
varname_y = varnameset(vardict, "Lat", varname_y)
varname_y = varnameset(vardict, "LAT", varname_y)
varname_y = varnameset(vardict, "latitude", varname_y)
varname_y = varnameset(vardict, "Latitude", varname_y)
varname_y = varnameset(vardict, "LATITUDE", varname_y)
varname_y = varnameset(vardict, "y", varname_y)
varname_y = varnameset(vardict, "Y", varname_y)
## check
varname_y == nothing && error("Variable Y/LAT was not found in $(ncfilename)")
## var Z
varname_z = nothing
varname_z = varnameset(vardict, "elevation", varname_z)
varname_z = varnameset(vardict, "Elevation", varname_z)
varname_z = varnameset(vardict, "ELEVATION", varname_z)
varname_z = varnameset(vardict, "z", varname_z)
varname_z = varnameset(vardict, "Z", varname_z)
varname_z = varnameset(vardict, "band1", varname_z)
varname_z = varnameset(vardict, "Band1", varname_z)
varname_z = varnameset(vardict, "BAND1", varname_z)
## check
varname_z == nothing && error("Variable Z/ELEVATION was not found in $(ncfilename)")
## return
return varname_x, varname_y, varname_z
end
#########################################################
| VisClaw | https://github.com/hydrocoast/VisClaw.jl.git |
|
[
"BSD-3-Clause"
] | 0.7.8 | cef0143fee828ea605960108f18eed15efb232ef | code | 1919 | """
Gu, Gv = arrowscalegrd(xloc, yloc, uscale, vscale)
create u and v grd files for an arrow scale
"""
function arrowscalegrd(xloc, yloc, uscale, vscale)
X = collect(-2:2) .+ xloc
Y = collect(-2:2) .+ yloc
U = zeros(Float64, 5,5)
V = zeros(Float64, 5,5)
U[3,3] = uscale
V[3,3] = vscale
Gu = GMT.surface([repeat(X,inner=(5,1)) repeat(Y,outer=(5,1)) vec(U)], I=1, R=[X[1],X[end],Y[1],Y[end]])
Gv = GMT.surface([repeat(X,inner=(5,1)) repeat(Y,outer=(5,1)) vec(V)], I=1, R=[X[1],X[end],Y[1],Y[end]])
return Gu, Gv
end
"""
Gu, Gv = arrowgrd(tiles::Vector{VisClaw.AMRGrid}; cutoff=0.1, kwargs...)
create u and v grd files from AMR grid tiles
"""
function arrowgrd(tiles::Vector{VisClaw.AMRGrid}; cutoff=0.1, kwargs...)
## region
region = VisClaw.getlims(tiles)
## number of tile
ntile = length(tiles)
## preallocate
xg = Vector{AbstractVector{Float64}}(undef,ntile)
yg = Vector{AbstractVector{Float64}}(undef,ntile)
ug = Vector{AbstractVector{Float64}}(undef,ntile)
vg = Vector{AbstractVector{Float64}}(undef,ntile)
## make vectors of X, Y, U, V for all tiles
for k = 1:ntile
X, Y = VisClaw.meshtile(tiles[k])
xg[k] = vec(X)
yg[k] = vec(Y)
ug[k] = vec(tiles[k].u)
vg[k] = vec(tiles[k].v)
end
## cat all tiles
xg = vcat(xg...)
yg = vcat(yg...)
ug = vcat(ug...)
vg = vcat(vg...)
## replace NaNs into 0.0
V = @. sqrt(ug^2 + vg^2)
ind = @. isnan(V) | (V<cutoff)
xg[ind] .= 0.0
yg[ind] .= 0.0
ug[ind] .= 0.0
vg[ind] .= 0.0
## makegrd
Gu = GMT.xyz2grd([xg yg ug], I=tiles[1].dx, R=region, kwargs...)
Gv = GMT.xyz2grd([xg yg vg], I=tiles[1].dy, R=region, kwargs...)
## return
return Gu, Gv
end
arrowgrd(amrs::VisClaw.AMR, istep::Integer; cutoff=0.1, kwargs...) =
arrowgrd(amrs.amr[istep]; cutoff=cutoff, kwargs...)
| VisClaw | https://github.com/hydrocoast/VisClaw.jl.git |
|
[
"BSD-3-Clause"
] | 0.7.8 | cef0143fee828ea605960108f18eed15efb232ef | code | 2391 |
####################################################
gaugexy2mat(gauges::Vector{VisClaw.Gauge}) = permutedims(hcat(getfield.(gauges, :loc)...), [2,1])
####################################################
"""
gmtgaugelocation(gauge::VisClaw.Gauge; kwargs...)
gmtgaugelocation!(gauge::VisClaw.Gauge; kwargs...)
gmtgaugelocation(gauges::Vector{VisClaw.Gauge}; kwargs...)
gmtgaugelocation!(gauges::Vector{VisClaw.Gauge}; kwargs...)
"""
gmtgaugelocation(gauge::VisClaw.Gauge; kwargs...) = GMT.scatter([gauge.loc[1] gauge.loc[2]]; kwargs...)
####################################################
"""
$(@doc gmtgaugelocation)
"""
gmtgaugelocation!(gauge::VisClaw.Gauge; kwargs...) = GMT.scatter!([gauge.loc[1] gauge.loc[2]]; kwargs...)
####################################################
gmtgaugelocation(gauges::Vector{VisClaw.Gauge}; kwargs...) = GMT.scatter(gaugexy2mat(gauges); kwargs...)
gmtgaugelocation!(gauges::Vector{VisClaw.Gauge}; kwargs...) = GMT.scatter!(gaugexy2mat(gauges); kwargs...)
####################################################
"""
gmtgaugeannotation!(gauge::VisClaw.Gauge; kwargs...)
"""
gmtgaugeannotation!(gauge::VisClaw.Gauge, annot::AbstractString=gauge.label; R="", offset=(0.0,0.0), kwargs...) =
GMT.text!(GMT.text_record([gauge.loc[1]+offset[1] gauge.loc[2]+offset[2]], annot); R=R, kwargs...)
####################################################
####################################################
"""
gmtgaugewaveform(gauge::VisClaw.Gauge; kwargs...)
gmtgaugewaveform!(gauge::VisClaw.Gauge; kwargs...)
"""
gmtgaugewaveform(gauge::VisClaw.Gauge; kwargs...) = GMT.plot(gauge.time, gauge.eta; kwargs...)
####################################################
"""
$(@doc gmtgaugewaveform)
"""
gmtgaugewaveform!(gauge::VisClaw.Gauge; kwargs...) = GMT.plot!(gauge.time, gauge.eta; kwargs...)
####################################################
"""
gmtgaugevelocity(gauge::VisClaw.Gauge; kwargs...)
gmtgaugevelocity!(gauge::VisClaw.Gauge; kwargs...)
"""
gmtgaugevelocity(gauge::VisClaw.Gauge; kwargs...) = GMT.plot(gauge.time, sqrt.(gauge.u.^2 + gauge.v.^2); kwargs...)
####################################################
"""
$(@doc gmtgaugevelocity)
"""
gmtgaugevelocity!(gauge::VisClaw.Gauge; kwargs...) = GMT.plot!(gauge.time, sqrt.(gauge.u.^2 + gauge.v.^2); kwargs...)
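## Illustrative usage (a sketch; the "_output" path and the GMT keyword options are assumptions):
#
#   gauges = loadgauge("_output"; loadvel=true)     # read gauge*.txt files with velocities
#   gmtgaugewaveform(gauges[1], B="af", show=true)  # surface elevation time series at gauge 1
#   # on a map view: gmtgaugelocation!(gauges, marker=:circle, fill=:red)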
####################################################
| VisClaw | https://github.com/hydrocoast/VisClaw.jl.git |
|
[
"BSD-3-Clause"
] | 0.7.8 | cef0143fee828ea605960108f18eed15efb232ef | code | 6558 | default_masktxt = "topo4mask.txt"
default_maskgrd = "topomask.grd"
tmp4grd = "tmp4grd.txt"
###################################################
"""
filename = landmask_asc(topo::VisClaw.Topo, filename::AbstractString=default_masktxt)
write land points (elevation >= 0) as [x y z] text data used for masking
"""
function landmask_asc(topo::VisClaw.Topo, filename::AbstractString=default_masktxt)
xv = vec(repeat(topo.x, inner=(topo.nrows,1)))
yv = vec(repeat(topo.y, outer=(topo.ncols,1)))
topov = vec(topo.elevation);
inds = topov .< 0.0 # ocean
deleteat!(xv, inds)
deleteat!(yv, inds)
deleteat!(topov, inds)
open(filename, "w") do file
Base.print_array(file, [xv yv topov])
end
return filename
end
###################################################
###################################################
"""
G = landmask_grd(txtfile::AbstractString=default_masktxt; grdfile::AbstractString="", kwargs...)
create a land-mask grid with grdmask from the text file written by landmask_asc
"""
function landmask_grd(txtfile::AbstractString=default_masktxt;
grdfile::AbstractString="", kwargs...)
# check
if !isfile(txtfile); error("Not found: $txtfile"); end
# keyword args
d = KWARG(kwargs)
# (part of GMT.jl surface.jl)
cmd = GMT.parse_common_opts(d, "", [:R :V_params :a :bi :di :e :f :h :i :r :yx])
cmd = *(cmd...)
#println(cmd)
# (part of GMT.jl psmask.jl)
cmd = GMT.parse_common_opts(d, cmd, [:I :UVXY :JZ :c :e :p :r :t :yx :params], true)
cmd = *(cmd...)
cmd = GMT.parse_these_opts(cmd, d, [[:C :end_clip_path], [:D :dump], [:F :oriented_polygons],
[:L :node_grid], [:N :invert], [:Q :cut_number], [:S :search_radius], [:T :tiles]])
cmd = *(cmd...)
#println(cmd)
if isempty(grdfile)
grdfile = default_maskgrd
Gout = true
else
Gout = false
end
# grid
GMT.gmt("grdmask \"$txtfile\" $cmd -G\"$grdfile\" ")
# return
if Gout
G = GMT.gmt("read -Tg $grdfile")
rm(grdfile, force=true)
return G
else
return nothing
end
end
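## Illustrative usage (a sketch; the topo file name and grid spacing are assumptions):
#
#   topo = loadtopo("topo.asc", 3)   # bathymetry/topography (topotype 3)
#   maskfile = landmask_asc(topo)    # write land points to topo4mask.txt
#   Gmask = landmask_grd(maskfile; R=getR(topo), I=topo.dx, S=topo.dx, N="0/0/NaN")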
###################################################
###################################################
"""
G = tilegrd(tile::VisClaw.AMRGrid; length_unit::AbstractString="", kwargs...)
make a GMT grid from a VisClaw.AMRGrid tile
"""
function tilegrd(tile::VisClaw.AMRGrid; length_unit::AbstractString="", kwargs...)
# var
var = VisClaw.keytile(tile)
# parameters & options
R = VisClaw.getR_tile(tile)
Δ = tile.dx
xvec, yvec, zdata = VisClaw.tilez(tile, var)
xmat = repeat(xvec, inner=(length(yvec),1))
ymat = repeat(yvec, outer=(length(xvec),1))
if !isempty(length_unit)
Δ = "$(Δ)"*length_unit
# +ue?
R = length_unit*R
end
# if all values are NaN, build an all-NaN grid via grdmask
if !any(.!isnan.(zdata[:]))
tmp_eta = "eta_tile.grd"
faint="tmp.txt"
open(faint, "w") do file
Base.print_array(file, [xmat[:] ymat[:]])
end
GMT.gmt("grdmask $faint -R$R -I$Δ -S$Δ -NNaN/NaN/NaN -G$tmp_eta ")
G = GMT.gmt("read -Tg $tmp_eta")
rm(faint, force=true)
else
# eta grid
G = GMT.surface([xmat[:] ymat[:] zdata[:]]; R=R, I=Δ, kwargs...)
end
# return value (GMT.GMTgrid)
return G
end
###################################################
tilegrd(amrs::VisClaw.AMR, istep::Integer; length_unit::AbstractString="", kwargs...) =
tilegrd.(amrs.amr[istep]; length_unit=length_unit, kwargs...)
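## Illustrative usage (a sketch; the "_output" path and step index are assumptions):
#
#   amrs = loadsurface("_output")   # water-surface AMR data
#   Gs = tilegrd(amrs, 10)          # one GMT grid per tile at output step 10
#   # each grid in Gs can be overlaid with GMT.grdimage! on a common region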
###################################################
###################################################
"""
G = tilegrd_xyz(tile::VisClaw.AMRGrid; kwargs...)
make a GMT grid from a VisClaw.AMRGrid tile using xyz2grd
"""
function tilegrd_xyz(tile::VisClaw.AMRGrid; kwargs...)
# var
var = VisClaw.keytile(tile)
# parameters & options
R = VisClaw.getR_tile(tile)
Δ = tile.dx
xvec, yvec, zdata = VisClaw.tilez(tile, var)
nx = length(xvec)
ny = length(yvec)
xvec = repeat(xvec, inner=(ny,1)) |> vec
yvec = repeat(yvec, outer=(nx,1)) |> vec
zvec = vec(zdata)
inds = isnan.(zvec)
deleteat!(xvec, inds)
deleteat!(yvec, inds)
deleteat!(zvec, inds)
tmp4grd = "tmp4grd.txt"
f = open(tmp4grd, "w"); Base.print_array(f, [xvec yvec zvec]); close(f)
G = GMT.gmt("xyz2grd $(tmp4grd) -R$(R) -I$(Δ)")
rm(tmp4grd, force=true)
# return value (GMT.GMTgrid)
return G
end
###################################################
tilegrd_xyz(amrs::VisClaw.AMR, istep::Integer; kwargs...) = tilegrd_xyz.(amrs.amr[istep]; kwargs...)
###################################################
###################################################
"""
G = tilegrd_mask(tile::VisClaw.AMRGrid, maskfile::AbstractString=""; length_unit::AbstractString="", kwargs...)
make a GMT grid from a VisClaw.AMRGrid tile with a land mask applied
"""
function tilegrd_mask(tile::VisClaw.AMRGrid, maskfile::AbstractString=""; length_unit::AbstractString="", kwargs...)
# var
var = VisClaw.keytile(tile)
# parameters & options
R = VisClaw.getR_tile(tile)
Δ = tile.dx
r = sqrt(2.0)Δ
xvec, yvec, zdata = VisClaw.tilez(tile, var)
xmat = repeat(xvec', inner=(length(yvec),1))
ymat = repeat(yvec, outer=(length(xvec),1))
tmp_mask = "mask_tile.grd"
tmp_eta = "eta_tile.grd"
eta_masked = "eta_masked.grd"
if !isempty(length_unit)
Δ = "$(Δ)"*length_unit
r = "$(r)"*length_unit
# +ue?
R = length_unit*R
end
# makegrd
# land mask grid
VisClaw.landmask_grd(maskfile; grdfile=tmp_mask, R=R, I=Δ, S=r, N="0/0/NaN", kwargs...)
## if all values are NaN, build an all-NaN grid via grdmask
if !any(.!isnan.(zdata[:]))
faint="tmp.txt"
open(faint, "w") do file
Base.print_array(file, [xmat[:] ymat[:]])
#Base.print_array(file, [xmat[:] reverse(ymat, dims=1)[:]])
end
GMT.gmt("grdmask $faint -R$R -I$Δ -S$Δ -NNaN/NaN/NaN -G$tmp_eta ")
rm(faint, force=true)
else
# eta grid
GMT.surface([xmat[:] ymat[:] zdata[:]]; R=R, I=Δ, G=tmp_eta)
end
# masking
GMT.gmt("grdmath $tmp_eta $tmp_mask OR = $eta_masked ")
# read
G = GMT.gmt("read -Tg $eta_masked ")
rm(tmp_mask)
rm(tmp_eta)
rm(eta_masked)
# return value (GMT.GMTgrid)
return G
end
###################################################
tilegrd_mask(amrs::VisClaw.AMR, istep::Integer, maskfile::AbstractString=""; length_unit::AbstractString="", kwargs...) =
tilegrd_mask.(amrs.amr[istep], maskfile; length_unit=length_unit, kwargs...)
| VisClaw | https://github.com/hydrocoast/VisClaw.jl.git |
|
[
"BSD-3-Clause"
] | 0.7.8 | cef0143fee828ea605960108f18eed15efb232ef | code | 5910 | ###################################################
"""
xyrange = getR_tile(tile::VisClaw.AMRGrid; length_unit="")
Get x and y ranges of a tile in AbstractString for -R option in GMT
"""
function getR_tile(tile::VisClaw.AMRGrid; length_unit="")
xs = tile.xlow
ys = tile.ylow
xe = round(tile.xlow + tile.mx*tile.dx, digits=4)
ye = round(tile.ylow + tile.my*tile.dy, digits=4)
xyrange="$xs/$xe/$ys/$ye"
isempty(length_unit) || (xyrange = xyrange*"+u"*length_unit)
# return value
return xyrange
end
###################################################
"""
xyrange = getR(tiles::Vector{VisClaw.AMRGrid}; length_unit="")
xyrange = getR(topo::VisClaw.AbstractTopo; length_unit="")
xyrange = getR(region::VisClaw.Region; length_unit="")
xyrange = getR(G::GMT.GMTgrid; length_unit="")
Get x and y ranges in AbstractString for -R option in GMT
"""
function getR(tiles::Vector{VisClaw.AMRGrid}; length_unit="")
xs, xe, ys, ye = VisClaw.getlims(tiles)
xyrange="$xs/$xe/$ys/$ye"
isempty(length_unit) || (xyrange = xyrange*"+u"*length_unit)
# return value
return xyrange
end
###################################################
function getR(topo::VisClaw.AbstractTopo; length_unit="")
xs=topo.x[1]
xe=topo.x[end]
ys=topo.y[1]
ye=topo.y[end]
xyrange="$xs/$xe/$ys/$ye"
isempty(length_unit) || (xyrange = xyrange*"+u"*length_unit)
return xyrange
end
###################################################
function getR(G::GMT.GMTgrid; length_unit="")
x = extrema(G.x)
y = extrema(G.y)
xyrange="$(x[1])/$(x[2])/$(y[1])/$(y[2])"
isempty(length_unit) || (xyrange = xyrange*"+u"*length_unit)
return xyrange
end
###################################################
function getR(region::VisClaw.Region; length_unit="")
xyrange="$(region.xlims[1])/$(region.xlims[2])/$(region.ylims[1])/$(region.ylims[2])"
isempty(length_unit) || (xyrange = xyrange*"+u"*length_unit)
return xyrange
end
###################################################
"""
hwratio = axesratio(tiles::Vector{VisClaw.AMRGrid})
hwratio = axesratio(topo::VisClaw.AbstractTopo)
hwratio = axesratio(region::VisClaw.Region)
hwratio = axesratio(G::GMT.GMTgrid)
Get height/width ratio
"""
function axesratio(tiles::Vector{VisClaw.AMRGrid})
xs, xe, ys, ye = VisClaw.getlims(tiles)
hwratio = (ye-ys)/(xe-xs)
# return value
return hwratio
end
###################################################
function axesratio(topo::VisClaw.AbstractTopo)
xs=topo.x[1]
xe=topo.x[end]
ys=topo.y[1]
ye=topo.y[end]
hwratio = (ye-ys)/(xe-xs)
# return value
return hwratio
end
###################################################
function axesratio(G::GMT.GMTgrid)
x = extrema(G.x)
y = extrema(G.y)
hwratio = (y[2]-y[1])/(x[2]-x[1])
# return value
return hwratio
end
###################################################
axesratio(region::VisClaw.Region) = (region.ylims[2]-region.ylims[1])/(region.xlims[2]-region.xlims[1])
###################################################
###################################################
"""
G = geogrd(geo::VisClaw.Topo; kwargs...)
G = geogrd(geo::VisClaw.DTopo, itime::Integer=0; kwargs...)
Generate grd (GMT) data
"""
function geogrd(geo::VisClaw.Topo; kwargs...)
Δ = geo.dx
R = VisClaw.getR(geo)
xvec = repeat(geo.x, inner=(geo.nrows,1))
yvec = repeat(geo.y, outer=(geo.ncols,1))
G = GMT.surface([xvec[:] yvec[:] geo.elevation[:]]; R=R, I=Δ, kwargs...)
return G
end
###################################################
function geogrd(geo::VisClaw.DTopo, itime::Integer=0; kwargs...)
Δ = geo.dx
R = VisClaw.getR(geo)
xvec = repeat(geo.x, inner=(geo.my,1))
yvec = repeat(geo.y, outer=(geo.mx,1))
( (itime < 0) || (geo.mt < itime) ) && error("Invalid time")
if geo.mt == 1
G = GMT.surface([xvec[:] yvec[:] geo.deform[:]]; R=R, I=Δ, kwargs...)
elseif itime == 0
G = GMT.surface([xvec[:] yvec[:] vec(geo.deform[:,:,end])]; R=R, I=Δ, kwargs...)
else
G = GMT.surface([xvec[:] yvec[:] vec(geo.deform[:,:,itime])]; R=R, I=Δ, kwargs...)
end
return G
end
###################################################
###################################################
"""
proj = getJ(proj_base::AbstractString, hwratio::Real)
complete the -J (projection) option string, filling in the figure height from the height/width ratio when it is not given
"""
function getJ(proj_base::AbstractString, hwratio)
# find projection specifier
J1 = match(r"^([a-zA-Z]+)", proj_base)
J2 = match(r"([a-zA-Z]+).+?([a-zA-Z]+)", proj_base)
J1 === nothing && error("Invalid argument proj_base: $proj_base")
# assign figure width
# check whether variable proj_base contains any number
regex = r"([+-]?(?:\d+\.?\d*|\.\d+)(?:[eE][+-]?\d+)?)"
chkwidth = match(regex, proj_base)
fwidth = chkwidth === nothing ? 10 : parse(Float64, chkwidth.captures[1])
# assign figure height
# check whether variable proj_base contains the height
regex = r"([+-]?(?:\d+\.?\d*|\.\d+)(?:[eE][+-]?\d+)?).+?([+-]?(?:\d+\.?\d*|\.\d+)(?:[eE][+-]?\d+)?)"
chkheight = match(regex, proj_base)
fheight = chkheight === nothing ? hwratio*fwidth : parse(Float64, chkheight.captures[2])
# generate J option
if occursin("/",proj_base) && chkheight !== nothing
proj = proj_base
else
proj = J2 === nothing ? J1.captures[1]*"$fwidth"*"/$fheight" : J1.captures[1]*"$fwidth"*J2.captures[2]*"/$fheight"*J2.captures[2]
end
# return value
return proj
end
###################################################
getJ(proj_base::AbstractString, topo::VisClaw.Topo) = getJ(proj_base, VisClaw.axesratio(topo))
###################################################
getJ(proj_base::AbstractString, amr::Vector{VisClaw.AMRGrid}) = getJ(proj_base, VisClaw.axesratio(amr))
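## Illustrative usage (a sketch; the base projection string is an assumption):
#
#   topo = loadtopo("topo.asc", 3)
#   J = getJ("X10", topo)   # width 10; height filled in from the topo aspect ratio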
###################################################
| VisClaw | https://github.com/hydrocoast/VisClaw.jl.git |
|
[
"BSD-3-Clause"
] | 0.7.8 | cef0143fee828ea605960108f18eed15efb232ef | code | 2590 | ####################################################
function makecptfromgrd(G::GMT.GMTgrid; factor_lims=0.8, sigdigits_lims=2,
T=[], kwargs...)
isempty(T) && ( T=round.(factor_lims.*extrema(G.z), sigdigits=sigdigits_lims) )
return GMT.makecpt(; T=T, kwargs...)
end
####################################################
####################################################
function gmttopo(G::GMT.GMTgrid; factor_lims=0.8, sigdigits_lims=2,
C=:geo, T=[], D::Bool=true, J="", R="", kwargs...)
## cpt
cptout = false
if !isa(C, GMT.GMTcpt)
cptout = true
C = makecptfromgrd(G; factor_lims=factor_lims, sigdigits_lims=sigdigits_lims, C=C, T=T, D=D)
end
# options
isempty(J) && ( J=getJ("X10", axesratio(G)) )
isempty(R) && ( R=getR(G) )
## plot
GMT.grdimage(G; C=C, J=J, R=R, Q=true, kwargs...)
## return
if cptout; return C; else return nothing; end
end
####################################################
gmttopo(topo::VisClaw.Topo; kwargs...) = gmttopo(geogrd(topo); kwargs...)
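## Illustrative usage (a sketch; the file name, colormap, and frame option are assumptions):
#
#   topo = loadtopo("topo.asc", 3)
#   cpt = gmttopo(topo; C=:geo, B="af")  # grdimage plot; returns the generated CPT
#   gmtcoastline!(topo)                  # overlay the zero-elevation contour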
####################################################
####################################################
"""
gmttoporange!(geo::VisClaw.AbstractTopo; kwargs...)
plot the rectangular boundary of the topo/bathymetry extent using GMT
"""
function gmttoporange!(geo::VisClaw.AbstractTopo; kwargs...)
# set square
xp = [geo.x[1], geo.x[1] , geo.x[end], geo.x[end], geo.x[1]]
yp = [geo.y[1], geo.y[end], geo.y[end], geo.y[1] , geo.y[1]]
# plot
GMT.plot!(xp, yp; marker=:none, kwargs...)
end
####################################################
####################################################
"""
gmtcoastline(topo::VisClaw.Topo; kwargs...)
gmtcoastline(G::GMT.GMTgrid; kwargs...)
gmtcoastline!(topo::VisClaw.Topo; kwargs...)
gmtcoastline!(G::GMT.GMTgrid; kwargs...)
plot coastlines from topography and bathymetry data using GMT
"""
gmtcoastline!(topo::VisClaw.Topo; kwargs...) = GMT.grdcontour!(geogrd(topo); C="-1e10,0,1e10", kwargs...)
####################################################
gmtcoastline!(G::GMT.GMTgrid; kwargs...) = GMT.grdcontour!(G; C="-1e10,0,1e10", kwargs...)
####################################################
####################################################
"""
$(@doc gmtcoastline!)
"""
gmtcoastline(topo::VisClaw.Topo; kwargs...) = GMT.grdcontour(geogrd(topo); C="-1e10,0,1e10", kwargs...)
####################################################
gmtcoastline(G::GMT.GMTgrid; kwargs...) = GMT.grdcontour(G; C="-1e10,0,1e10", kwargs...)
####################################################
| VisClaw | https://github.com/hydrocoast/VisClaw.jl.git |
|
[
"BSD-3-Clause"
] | 0.7.8 | cef0143fee828ea605960108f18eed15efb232ef | code | 541 | ####################################################
"""
gmttrack(track::VisClaw.Track; kwargs...)
gmttrack!(track::VisClaw.Track; kwargs...)
"""
gmttrack(track::VisClaw.Track, index=1:length(track.lon); kwargs...) =
GMT.plot(track.lon[index], track.lat[index]; kwargs...)
####################################################
"""
$(@doc gmttrack)
"""
gmttrack!(track::VisClaw.Track, index=1:length(track.lon); kwargs...) =
GMT.plot!(track.lon[index], track.lat[index]; kwargs...)
####################################################
| VisClaw | https://github.com/hydrocoast/VisClaw.jl.git |
|
[
"BSD-3-Clause"
] | 0.7.8 | cef0143fee828ea605960108f18eed15efb232ef | code | 5910 | # Read configuration files
###################################
"""
params = geodata("simulation/path/_output"::AbstractString)
params = geodata("simulation/path/_output/geoclaw.data"::AbstractString)
Function: geoclaw.data reader
"""
function geodata(outdir::AbstractString)
## set filename
fname = occursin("geoclaw.data", basename(outdir)) ? outdir : joinpath(outdir, "geoclaw.data")
## check whether it exists
if !isfile(fname); error("File $fname is not found."); end
## read all lines
open(fname,"r") do f
global txt = readlines(f)
end
## parse parameters
# parameters (mandatory?)
cs = parse(Float64,split(txt[occursin.("coordinate",txt)][1],r"\s+")[1])
p0 = parse(Float64,split(txt[occursin.("ambient_pressure",txt)][1],r"\s+")[1])
R = parse(Float64,split(txt[occursin.("earth_radius",txt)][1],r"\s+")[1])
eta0 = parse(Float64,split(txt[occursin.("sea_level",txt)][1],r"\s+")[1])
dmin = parse(Float64,split(txt[occursin.("dry_tolerance",txt)][1],r"\s+")[1])
# parameters (optional?)
if any(occursin.("manning_coefficient",txt))
n = parse(Float64,split(txt[occursin.("manning_coefficient",txt)][1],r"\s+")[1])
else
n = 0.0
end
## instance
params = VisClaw.GeoParam(cs,p0,R,eta0,n,dmin)
## return values
return params
end
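## Illustrative usage (a sketch; the "_output" path is an assumption):
#
#   params = geodata("_output")   # read geoclaw.data
#   params.eta0                   # initial sea level
#   params.dmin                   # dry tolerance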
###################################
###################################
"""
surgeparams = surgedata("simulation/path/_output"::AbstractString)
surgeparams = surgedata("simulation/path/_output/surge.data"::AbstractString)
Function: surge.data reader
"""
function surgedata(outdir::AbstractString)
## set filename
fname = occursin("surge.data", basename(outdir)) ? outdir : joinpath(outdir, "surge.data")
## check whether it exists
if !isfile(fname); error("File $fname is not found."); end
## read all lines
open(fname,"r") do f
global txt = readlines(f)
end
## parse parameters
windindex = parse(Int64,split(txt[occursin.("wind_index",txt)][1],r"\s+")[1])
slpindex = parse(Int64,split(txt[occursin.("pressure_index",txt)][1],r"\s+")[1])
stormtype = parse(Int64,split(txt[occursin.("storm_specification_type",txt)][1],r"\s+")[1])
## instance
surgeparams = VisClaw.SurgeParam(windindex,slpindex,stormtype)
## return values
return surgeparams
end
###################################
###################################
"""
gaugeinfo = gaugedata("simulation/path/_output"::AbstractString)
gaugeinfo = gaugedata("simulation/path/_output/gauges.data"::AbstractString)
Function: gauges.data reader
"""
function gaugedata(outdir::AbstractString)
## set filename
fname = occursin("gauges.data", basename(outdir)) ? outdir : joinpath(outdir,"gauges.data")
## check whether it exists
if !isfile(fname); error("File $fname is not found."); end
## read all lines
open(fname,"r") do f
global txt = readlines(f)
end
## parse parameters
ngauges = parse(Int64, split(txt[occursin.("ngauges",txt)][1],r"\s+")[1])
## preallocate
gaugeinfo = Vector{VisClaw.Gauge}(undef,ngauges)
## read gauge info
baseline = findfirst(x->occursin("ngauges", x), txt)
for i = 1:ngauges
txtline = split(strip(txt[baseline+i]), r"\s+", keepempty=false)
label = txtline[1]
id = parse(Int64,txtline[1])
loc = [parse(Float64,txtline[2]), parse(Float64,txtline[3])]
time = [parse(Float64,txtline[4]), parse(Float64,txtline[5])]
# instance
gaugeinfo[i] = VisClaw.Gauge(label,id,0,loc,[],time,[])
end
## return values
return gaugeinfo
end
###################################
###################################
"""
amrparam = amrdata("simulation/path/_output"::AbstractString)
amrparam = amrdata("simulation/path/_output/amr.data"::AbstractString)
Function: amr.data reader
"""
function amrdata(outdir::AbstractString)
## set filename
fname = occursin("amr.data", basename(outdir)) ? outdir : joinpath(outdir, "amr.data")
## check whether it exists
if !isfile(fname); error("File $fname is not found."); end
## read all lines
open(fname,"r") do f
global txt = readlines(f)
end
## parse parameters
maxlevel = parse(Int64,split(txt[occursin.("amr_levels_max",txt)][1],r"\s+")[1])
## instance
amrparam = VisClaw.AMRParam(maxlevel)
## return values
return amrparam
end
###################################
###################################
"""
regions = regiondata("simulation/path/_output"::AbstractString)
regions = regiondata("simulation/path/_output/regions.data"::AbstractString)
Function: regions.data reader
"""
function regiondata(outdir::AbstractString)
## set filename
fname = occursin("regions.data", basename(outdir)) ? outdir : joinpath(outdir, "regions.data")
## check whether it exists
if !isfile(fname); error("File $fname is not found."); end
## read all lines
open(fname,"r") do f
global txt = readlines(f)
end
## parse parameters
nregion = parse(Int64, split(txt[occursin.("num_regions",txt)][1],r"\s+")[1])
## preallocate
regions = Vector{VisClaw.Region}(undef,nregion)
## read region info
baseline = findfirst(x->occursin("num_regions", x), txt)
for i = 1:nregion
txtline = split(strip(txt[baseline+i]), r"\s+", keepempty=false)
minlevel = parse(Int64,txtline[1])
maxlevel = parse(Int64,txtline[2])
tl = (parse(Float64,txtline[3]), parse(Float64,txtline[4]))
xl = (parse(Float64,txtline[5]), parse(Float64,txtline[6]))
yl = (parse(Float64,txtline[7]), parse(Float64,txtline[8]))
## instance
regions[i] = VisClaw.Region(minlevel:maxlevel, tl, xl, yl)
end
return regions
end
###################################
| VisClaw | https://github.com/hydrocoast/VisClaw.jl.git |
|
[
"BSD-3-Clause"
] | 0.7.8 | cef0143fee828ea605960108f18eed15efb232ef | code | 2562 | #################################
"""
fgmax = loadfgmax(outputdir::AbstractString, fg::VisClaw.FixedGrid; nval_save::Integer=fg.nval)
Function: fgmaxXXXX.txt reader
"""
function loadfgmax(outputdir::AbstractString, fg::VisClaw.FixedGrid; nval_save::Integer=fg.nval)
## check nval_save
nval_save > fg.nval && (nval_save=fg.nval)
## fgmaxXXXX.txt
filename = "fgmax"*@sprintf("%04d",fg.id)*".txt"
## load
dat = readdlm(joinpath(outputdir, filename), Float64)
dat[dat.<-1e10] .= NaN
ncol = 4 + 2(fg.nval) + 1
## assign
if fg.style == 0 || fg.style == 1
topo = dat[:,4]
D = dat[:,5]
tD = dat[:,5+fg.nval]
tarrival = dat[:,end]
if fg.nval >= 2
v = dat[:,6]
tv = dat[:,6+fg.nval]
end
if fg.nval >= 5
M = dat[:,7]
tM = dat[:,7+fg.nval]
Mflux = dat[:,8]
tMflux = dat[:,8+fg.nval]
hmin = dat[:,9]
thmin = dat[:,9+fg.nval]
end
elseif fg.style == 2 || fg.style == 3
valall = permutedims(reshape(dat, (fg.nx, fg.ny, ncol)), [2 1 3])
topo = valall[:,:,4]
D = valall[:,:,5]
tD = valall[:,:,5+fg.nval]
tarrival = valall[:,:,end]
if fg.nval >= 2
v = valall[:,:,6]
tv = valall[:,:,6+fg.nval]
end
if fg.nval >= 5
M = valall[:,:,7]
tM = valall[:,:,7+fg.nval]
Mflux = valall[:,:,8]
tMflux = valall[:,:,8+fg.nval]
hmin = valall[:,:,9]
thmin = valall[:,:,9+fg.nval]
end
elseif fg.style == 4
indc = [[fg.flag[i][1] fg.flag[i][2]] for i=1:fg.npts]
indc = vcat(indc...)
ind = sortperm(indc[:,1])
topo = dat[ind,4]
D = dat[ind,5]
tD = dat[ind,5+fg.nval]
tarrival = dat[ind,end]
if fg.nval >= 2
v = dat[ind,6]
tv = dat[ind,6+fg.nval]
end
if fg.nval >= 5
M = dat[ind,7]
tM = dat[ind,7+fg.nval]
Mflux = dat[ind,8]
tMflux = dat[ind,8+fg.nval]
hmin = dat[ind,9]
thmin = dat[ind,9+fg.nval]
end
end
if nval_save == 1; fgmax = VisClaw.FGmax(topo,D,tD,tarrival)
elseif nval_save == 2; fgmax = VisClaw.FGmax(topo,D,v,tD,tv,tarrival)
elseif nval_save == 5; fgmax = VisClaw.FGmax(topo,D,v,M,Mflux,hmin,tD,tv,tM,tMflux,thmin,tarrival)
end
# return
return fgmax
end
#################################
| VisClaw | https://github.com/hydrocoast/VisClaw.jl.git |
|
[
"BSD-3-Clause"
] | 0.7.8 | cef0143fee828ea605960108f18eed15efb232ef | code | 4317 | ###################################
"""
fgmaxgrids = fgmaxdata("simulation/path/_output"::AbstractString)
fgmaxgrids = fgmaxdata("simulation/path/_output/fgmax_grids.data"::AbstractString)
Function: fgmax_grids.data reader
"""
function fgmaxdata(outdir::AbstractString)
## definition of filename
fname = occursin("fgmax_grids.data", basename(outdir)) ? outdir : joinpath(outdir, "fgmax_grids.data")
## check
isfile(fname) || error("File $fname is not found.")
## read all lines
open(fname,"r") do f
global txt = readlines(f)
end
## parse parameters
num_fgmax_val = parse(Int64, split(txt[occursin.("num_fgmax_val",txt)][1],r"\s+")[1])
num_fgmax_grids = parse(Int64, split(txt[occursin.("num_fgmax_grids",txt)][1],r"\s+")[1])
## preallocate
fg = Vector{VisClaw.FixedGrid}(undef, num_fgmax_grids)
## load
global baseline = 10
for i = 1:num_fgmax_grids
## basic parameters
FGid = parse(Int64, split(strip(txt[baseline+1]), r"\s+")[1])
point_style = parse(Int64, split(strip(txt[baseline+8]), r"\s+")[1])
## check point style
(point_style > 4 || point_style < 0) && error("point_style $point_style is not supported yet.")
## 0
if point_style == 0
## npts
npts = parse(Int64, split(strip(txt[baseline+9]), r"\s+")[1])
## x, y
if npts == 0
fgmax_deffile = String(strip(txt[baseline+10])[2:end-1])
open(fgmax_deffile,"r") do ff; global fgtxt = readlines(ff); end
npts = parse(Int64, split(strip(fgtxt[1]), r"\s+")[1])
x = zeros(Float64,npts)
y = zeros(Float64,npts)
for ip = 1:npts
x[ip], y[ip] = parse.(Float64, split(strip(fgtxt[1+ip]), r"\s+")[1:2])
end
baseline += 11
else
x = zeros(Float64,npts)
y = zeros(Float64,npts)
for ip = 1:npts
x[ip], y[ip] = parse.(Float64, split(strip(txt[baseline+9+ip]), r"\s+")[1:2])
end
baseline += 10 + npts
end
## instance
fg[i] = VisClaw.FixedGrid(FGid, point_style, num_fgmax_val, npts, x, y)
## 1
elseif point_style == 1
npts = parse(Int64, split(strip(txt[baseline+9]), r"\s+")[1])
x1, y1 = parse.(Float64, split(strip(txt[baseline+10]), r"\s+")[1:2])
x2, y2 = parse.(Float64, split(strip(txt[baseline+11]), r"\s+")[1:2])
x = LinRange(x1, x2, npts)
y = LinRange(y1, y2, npts)
# instance
fg[i] = VisClaw.FixedGrid(FGid, point_style, num_fgmax_val, npts, x, y)
baseline += 12
## 2
elseif point_style == 2
nx, ny = parse.(Int64, split(strip(txt[baseline+9]), r"\s+")[1:2])
x1, y1 = parse.(Float64, split(strip(txt[baseline+10]), r"\s+")[1:2])
x2, y2 = parse.(Float64, split(strip(txt[baseline+11]), r"\s+")[1:2])
# instance
fg[i] = VisClaw.FixedGrid(FGid, point_style, num_fgmax_val, nx, ny, (x1,x2), (y1,y2))
baseline += 12
## 3
elseif point_style == 3
error("point_style $point_style is not supported yet.")
## 4
elseif point_style == 4
fgmax_deffile = String(strip(txt[baseline+9])[2:end-1])
topoflag = loadtopo(fgmax_deffile, 3)
flag = findall(convert(BitArray, topoflag.elevation))
X = repeat(topoflag.x', outer=(topoflag.nrows,1))
Y = repeat(topoflag.y, outer=(1,topoflag.ncols))
x = vec(X[flag])
y = vec(Y[flag])
npts = length(x)
ind = sortperm([flag[i][1] for i=1:npts], rev=true)
flag = flag[ind]
x = x[ind]
y = y[ind]
## instance
fg[i] = VisClaw.FixedGrid(FGid, point_style, num_fgmax_val,
topoflag.ncols, topoflag.nrows,
extrema(topoflag.x), extrema(topoflag.y),
npts, x, y, flag)
baseline += 10
end
end
## return
return fg
end
| VisClaw | https://github.com/hydrocoast/VisClaw.jl.git |
|
[
"BSD-3-Clause"
] | 0.7.8 | cef0143fee828ea605960108f18eed15efb232ef | code | 9694 | """
amr = loadfortq(filename::AbstractString, ncol::Integer; vartype::Symbol=:surface,
params::VisClaw.GeoParam=VisClaw.GeoParam(), runup::Bool=true,
xlims=(-Inf,Inf), ylims=(-Inf,Inf), region="", AMRlevel=[])
Function: fort.qxxxx reader.
"""
function loadfortq(filename::AbstractString, ncol::Integer; vartype::Symbol=:surface,
params::VisClaw.GeoParam=VisClaw.GeoParam(), runup::Bool=true,
xlims=(-Inf,Inf), ylims=(-Inf,Inf), region="", AMRlevel=[])
## check
!any(map(sym -> vartype == sym, [:surface, :current, :storm])) && error("kwarg 'vartype' is invalid")
## set range
if isa(region, VisClaw.AbstractTopo); xlims=extrema(region.x); ylims=extrema(region.y); end
if isa(region, VisClaw.Region); xlims=region.xlims; ylims=region.ylims; end
## file open
f = open(filename,"r")
txtorg = readlines(f)
close(f) #close
## count the number of lines and grids
nlineall = length(txtorg)
idx = occursin.("grid_number",txtorg)
ngrid = length(txtorg[idx])
if vartype==:surface
amr = Array{VisClaw.SurfaceHeight}(undef,ngrid) ## preallocate
elseif vartype==:current
amr = Array{VisClaw.Velocity}(undef,ngrid)
elseif vartype==:storm
amr = Array{VisClaw.Storm}(undef,ngrid)
end
l = 1
i = 1
while l < nlineall
## read header
#header = txtorg[1:8]
header = txtorg[l:l+7]
header = map(strip,header)
gridnumber = parse(Int64, split(header[1],r"\s+")[1])
AMRlevel_load = parse(Int64, split(header[2],r"\s+")[1])
mx = parse(Int64, split(header[3],r"\s+")[1])
my = parse(Int64, split(header[4],r"\s+")[1])
xlow = parse(Float64, split(header[5],r"\s+")[1])
ylow = parse(Float64, split(header[6],r"\s+")[1])
dx = parse(Float64, split(header[7],r"\s+")[1])
dy = parse(Float64, split(header[8],r"\s+")[1])
## read variables
body = txtorg[l+9:l+9+(mx+1)*my-1]
# the next tile
l = l+9+(mx+1)*my
## check AMRlevel
if !isempty(AMRlevel); if isempty(findall(AMRlevel .== AMRlevel_load)); i += 1; continue; end; end
## check whether the tile is on the domain
if (xlow+dx*mx < xlims[1]) | (xlims[2] < xlow); i += 1; continue; end
if (ylow+dy*my < ylims[1]) | (ylims[2] < ylow); i += 1; continue; end
if vartype==:surface
elev = [parse(Float64, body[(i-1)*(mx+1)+j][26*(ncol-1)+1:26*ncol]) for i=1:my, j=1:mx]
depth = [parse(Float64, body[(i-1)*(mx+1)+j][1:26]) for i=1:my, j=1:mx]
# wet condition
land = (elev-depth) .>= params.dmin
# sea surface anomaly
(params.eta0 != 0.0) && (elev[.!land] = elev[.!land].-params.eta0)
# inundation depth if wet
runup && (elev[land] = depth[land])
# NaN if dry
elev[depth.<=0.0] .= NaN
## array
amr[i] = VisClaw.SurfaceHeight(gridnumber,AMRlevel_load,mx,my,xlow,ylow,dx,dy,elev)
elseif vartype==:current
ucol = ncol
vcol = ncol+1
# read
depth = [parse(Float64, body[(i-1)*(mx+1)+j][1:26]) for i=1:my, j=1:mx]
u = [parse(Float64, body[(i-1)*(mx+1)+j][26*(ucol-1)+1:26*ucol]) for i=1:my, j=1:mx]
v = [parse(Float64, body[(i-1)*(mx+1)+j][26*(vcol-1)+1:26*vcol]) for i=1:my, j=1:mx]
# replace to NaN
mask = depth.<=0.0
depth[mask] .= NaN
u[mask] .= NaN
v[mask] .= NaN
# calc
u = u./depth
v = v./depth
vel = sqrt.(u.^2 .+ v.^2)
## array
amr[i] = VisClaw.Velocity(gridnumber,AMRlevel_load,mx,my,xlow,ylow,dx,dy,u,v,vel)
elseif vartype==:storm
ucol = ncol
vcol = ncol+1
pcol = ncol+2
u = [parse(Float64, body[(i-1)*(mx+1)+j][26*(ucol-1)+1:26*ucol]) for i=1:my, j=1:mx]
v = [parse(Float64, body[(i-1)*(mx+1)+j][26*(vcol-1)+1:26*vcol]) for i=1:my, j=1:mx]
p = [parse(Float64, body[(i-1)*(mx+1)+j][26*(pcol-1)+1:26*pcol]) for i=1:my, j=1:mx]
p = p./1e+2
# u[(abs.(u).<=1e-2) .& (abs.(v).<=1e-2)] .= NaN
# v[(abs.(u).<=1e-2) .& (abs.(v).<=1e-2)] .= NaN
## array
amr[i] = VisClaw.Storm(gridnumber,AMRlevel_load,mx,my,xlow,ylow,dx,dy,u,v,p)
end
## print
#@printf("%d, ",gridnumber)
## counter; go to the next grid
i += 1
end
amr = amr[filter(i -> isassigned(amr, i), 1:length(amr))]
## return
return amr
end
#################################
"""
amr = loadforta(filename::AbstractString, ncol::Integer; kwargs...)
Function: fort.axxxx reader. See also [`loadfortq`](@ref).
"""
loadforta(filename::AbstractString, ncol::Integer; kwargs...) = loadfortq(filename, ncol; vartype=:storm, kwargs...)
#################################
#################################
"""
timelaps = loadfortt(filename::AbstractString)
Function: fort.txxxx reader.
"""
function loadfortt(filename::AbstractString)
## file open
f = open(filename,"r")
txtorg = readlines(f)
close(f) #close
## parse timelaps from the 1st line
timelaps = parse(Float64, txtorg[1][1:18])
## return
return timelaps
end
#################################
#######################################
"""
amrs = loadsurface(outputdir::AbstractString, filesequence::AbstractVector; kwargs...)
amrs = loadsurface(outputdir::AbstractString, fileid::Integer; kwargs...)
Function: load time-series of water surface.
The keyword arguments follow [`loadfortq`](@ref).
See also: [`loadfortt`](@ref).
"""
function loadsurface(outputdir::AbstractString, filesequence::AbstractVector=0:0; vartype::Symbol=:surface, kwargs...)
# check
!any(map(sym -> vartype == sym, [:surface, :current, :storm])) && error("kwarg 'vartype' is invalid")
## define the filepath & filename
if vartype==:surface
fnamekw = r"^fort\.q\d+$"
col=4
elseif vartype==:current
fnamekw = r"^fort\.q\d+$"
col=2
elseif vartype==:storm
fnamekw = r"^fort\.a\d+$"
col=5
end
## make a list
isdir(outputdir) || error("Directory $outputdir doesn't exist")
flist = readdir(outputdir)
filter!(x->occursin(fnamekw, x), flist)
isempty(flist) && error("File named $fnamekw was not found")
# load geoclaw.data
params = VisClaw.geodata(outputdir)
## the number of files
nfile = length(flist)
# file sequence to be loaded
if filesequence==0:0; filesequence = 1:nfile; end
(any(filesequence .< 1) || any(filesequence .> nfile)) && error("Incorrect file sequence was specified. (This must be from 1 to $nfile)")
## the number of files (to be loaded)
nfile = length(filesequence)
## preallocate
if vartype==:surface
amr = Vector{AbstractVector{VisClaw.SurfaceHeight}}(undef,nfile)
elseif vartype==:current
amr = Vector{AbstractVector{VisClaw.Velocity}}(undef,nfile)
elseif vartype==:storm
amr = Vector{AbstractVector{VisClaw.Storm}}(undef,nfile)
end
## load all files
tlap = vec(zeros(nfile,1))
cnt = 0
for it = filesequence
cnt += 1
if vartype==:surface
amr[cnt] = VisClaw.loadfortq(joinpath(outputdir,flist[it]), col; vartype=vartype, kwargs...)
elseif vartype==:current
amr[cnt] = VisClaw.loadfortq(joinpath(outputdir,flist[it]), col; vartype=vartype, kwargs...)
elseif vartype==:storm
amr[cnt] = VisClaw.loadforta(joinpath(outputdir,flist[it]), col; kwargs...)
end
tlap[cnt] = VisClaw.loadfortt(joinpath(outputdir,replace(flist[it],r"\.." => ".t")))
end
## AMR Array
amrs = VisClaw.AMR(nfile,tlap,amr)
## return value
return amrs
end
#######################################
loadsurface(outputdir::AbstractString, fileid::Integer; kwargs...) =
loadsurface(outputdir, fileid:fileid; kwargs...)
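## Illustrative usage (a sketch; the "_output" path, file range, and limits are assumptions):
#
#   amrs = loadsurface("_output")                 # every fort.q file in the directory
#   amrs = loadsurface("_output", 1:10; xlims=(139.0,141.0), ylims=(34.0,36.0))
#   tiles = amrs.amr[1]                           # AMR tiles of the first loaded step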
#######################################
######################################
"""
amrs = loadstorm(outputdir::AbstractString, filesequence::AbstractVector=0:0; kwargs...)
amrs = loadstorm(outputdir::AbstractString, fileid::Integer; kwargs...)
Function: load time-series of storm data.
The keyword arguments follow [`loadfortq`](@ref).
See also: [`loadfortt`](@ref).
"""
loadstorm(outputdir::AbstractString, filesequence::AbstractVector=0:0; kwargs...) =
loadsurface(outputdir, filesequence; vartype=:storm, kwargs...)
#######################################
loadstorm(outputdir::AbstractString, fileid::Integer; kwargs...) =
loadsurface(outputdir, fileid:fileid; vartype=:storm, kwargs...)
#######################################
#######################################
"""
amrs = loadcurrent(outputdir::AbstractString, filesequence::AbstractVector=0:0; kwargs...)
amrs = loadcurrent(outputdir::AbstractString, fileid::Integer; kwargs...)
Function: load time-series of ocean current data.
The keyword arguments follow [`loadfortq`](@ref).
See also: [`loadfortt`](@ref).
"""
loadcurrent(outputdir::AbstractString, filesequence::AbstractVector=0:0; kwargs...) =
loadsurface(outputdir, filesequence; vartype=:current, kwargs...)
#######################################
loadcurrent(outputdir::AbstractString, fileid::Integer; kwargs...) =
loadsurface(outputdir, fileid:fileid; vartype=:current, kwargs...)
#######################################
| VisClaw | https://github.com/hydrocoast/VisClaw.jl.git |
|
[
"BSD-3-Clause"
] | 0.7.8 | cef0143fee828ea605960108f18eed15efb232ef | code | 2487 | #################################
"""
gauges = loadgauge(outputdir::AbstractString, gaugeid=0:0; eta0=0.0, labelhead::AbstractString="Gauge ", loadeta::Bool=true, loadvel::Bool=false)
gauge*.txt reader
"""
function loadgauge(outputdir::AbstractString, gaugeid::AbstractVector{Int64}=0:0;
eta0=0.0, labelhead::AbstractString="Gauge ", loadeta::Bool=true, loadvel::Bool=false)
# check args
if !isdir(outputdir); error("$outputdir is not found or directory"); end
files = readdir(outputdir)
filter!(x->occursin(r"^gauge\d+\.txt$", x), files)
if isempty(files); println("No gauge file"); return empty([], VisClaw.Gauge) end;
nf = length(files)
gaugeid == 0:0 && (gaugeid = 1:nf)
nfload = length(gaugeid)
# preallocate
gauges = Vector{VisClaw.Gauge}(undef,nf)
for k in gaugeid
filename=joinpath(outputdir,files[k])
# read header
f = open(filename,"r")
header1 = readline(f)
close(f)
id = parse(Int64,header1[13:17])
loc = [parse(Float64,header1[30:46]), parse(Float64,header1[48:64])]
# label
label = labelhead*@sprintf("%d",id)
if length(readlines(filename))<5
nt = 0
gauges[k] = VisClaw.Gauge(label,id,nt,loc)
@warn @sprintf("No gauge data was found in Gauge %d. This may cause some errors when plotting.", id)
continue
end
# read time-series of vars in the certain colmns
dataorg = readdlm(filename, comments=true, comment_char='#')
AMRlevel = convert.(Int64,dataorg[:,1])
time = convert.(Float64,dataorg[:,2])
D = convert.(Float64,dataorg[:,3])
nt = length(time)
if loadvel
u = convert.(Float64,dataorg[:,4])./D
v = convert.(Float64,dataorg[:,5])./D
else
u = v = empty([0.0])
end
if loadeta
eta = convert.(Float64,dataorg[:,6])
eta[D.<=1e-3] .= 0.0
eta = eta.-eta0
else
eta = empty([0.0])
end
# instance
gauges[k] = VisClaw.Gauge(label,id,nt,loc,AMRlevel,time,eta,u,v)
end
return gauges
end
#################################
loadgauge(outputdir::AbstractString, gaugeid::Integer; eta0=0.0, labelhead::AbstractString="Gauge ", loadeta::Bool=true, loadvel::Bool=false) =
loadgauge(outputdir, gaugeid:gaugeid; eta0=eta0, labelhead=labelhead, loadeta=loadeta, loadvel=loadvel)
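## Illustrative usage (a sketch; the "_output" path and file index are assumptions):
#
#   gauges = loadgauge("_output"; loadvel=true)   # all gauge*.txt, with velocities
#   g3 = loadgauge("_output", 3)                  # only the third gauge file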
| VisClaw | https://github.com/hydrocoast/VisClaw.jl.git |
|
[
"BSD-3-Clause"
] | 0.7.8 | cef0143fee828ea605960108f18eed15efb232ef | code | 8944 | """
topofile, topotype, ntopo = topodata("simulation/path/_output"::AbstractString)
topofile, topotype, ntopo = topodata("simulation/path/_output/topo.data"::AbstractString)
read topo.data
"""
function topodata(outdir::AbstractString)
# filename
filename = occursin("topo.data", basename(outdir)) ? outdir : joinpath(outdir,"topo.data")
## check args
if !isfile(filename); error("$filename is not found."); end;
## read ascdata
f = open(filename,"r")
ascdata = readlines(f)
close(f)
baseline = findfirst(x->occursin("ntopofiles", x), ascdata)
ntopo = parse(Int64, split(ascdata[baseline], r"\s+", keepempty=false)[1])
if ntopo == 1
topofile = replace(ascdata[baseline+2], r"[\'\s]" => "")
topotype = parse(Int64, split(ascdata[baseline+3], r"\s+", keepempty=false)[1])
else
# preallocate
topofile = Vector{String}(undef, ntopo)
topotype = Vector{Int64}(undef, ntopo)
# filename
for i = 1:ntopo
topofile[i] = replace(ascdata[baseline-1+3i], r"[\'\s]" => "")
topotype[i] = parse(Int64, split(ascdata[baseline+3i], r"\s+", keepempty=false)[1])
end
end
# return
return topofile, topotype, ntopo
end
#################################
#################################
"""
dtopofile, dtopotype, ndtopo = dtopodata("simulation/path/_output"::AbstractString)
dtopofile, dtopotype, ndtopo = dtopodata("simulation/path/_output/dtopo.data"::AbstractString)
read dtopo.data
"""
function dtopodata(outdir::AbstractString)
# filename
filename = occursin("dtopo.data", basename(outdir)) ? outdir : joinpath(outdir,"dtopo.data")
## check args
if !isfile(filename); error("$filename is not found."); end;
# read
f = open(filename,"r")
ascdata = readlines(f)
close(f)
baseline = findfirst(x->occursin("mdtopofiles", x), ascdata)
ndtopo = parse(Int64, split(ascdata[baseline], r"\s+", keepempty=false)[1])
if ndtopo == 0; println("No mdtopofile"); return nothing, nothing, ndtopo
elseif ndtopo == 1
dtopofile = replace(ascdata[baseline+3], r"[\'\s]" => "")
dtopotype = parse(Int64, split(ascdata[baseline+4], r"\s+", keepempty=false)[1])
else
# preallocate
dtopofile = Vector{String}(undef, ndtopo)
dtopotype = Vector{Int64}(undef, ndtopo)
# filename
for i = 1:ndtopo
dtopofile[i] = replace(ascdata[baseline+3i], r"[\'\s]" => "")
dtopotype[i] = parse(Int64, split(ascdata[baseline+3i+1], r"\s+", keepempty=false)[1])
end
end
# return
return dtopofile, dtopotype, ndtopo
end
#################################
#################################
"""
bathtopo = loadtopo(outdir::AbstractString)
bathtopo = loadtopo(filename::AbstractString, topotype=3::Integer)
load topography data
"""
function loadtopo(filename::AbstractString, topotype=3::Integer)
## from _output directory
if isdir(filename)
topofile, topotype, ntopo = VisClaw.topodata(filename)
return VisClaw.loadtopo.(topofile, topotype)
end
## check args
isfile(filename) || error("file $filename is not found.")
any(topotype .== [2,3,4]) || error("unsupported topotype")
## NetCDF
( filename[end-2:end] == ".nc" && topotype != 4 ) && (topotype=4)
if topotype == 4
var_x, var_y, var_z = VisClaw.getvarname_nctopo(filename)
x = NetCDF.ncread(filename, var_x)
y = NetCDF.ncread(filename, var_y)
topo = permutedims(NetCDF.ncread(filename, var_z), [2,1])
bathtopo = VisClaw.Topo(length(x), length(y), x, y, mean(diff(x)), mean(diff(y)), topo)
return bathtopo
end
## separator in regular expression
regex = r"([+-]?(?:\d+\.?\d*|\.\d+)(?:[eE][+-]?\d+)?)"
## open topofile
f = open(filename,"r")
## read header
ncols = parse(Int64, match(regex, readline(f)).captures[1])
nrows = parse(Int64, match(regex, readline(f)).captures[1])
xll = parse(Float64, match(regex, readline(f)).captures[1])
yll = parse(Float64, match(regex, readline(f)).captures[1])
cellsize = parse(Float64, match(regex, readline(f)).captures[1])
nodata = match(regex, readline(f)).captures[1]
## read topography in asc
dataorg = readlines(f)
## close topofile
close(f)
## meshgrid
x = collect(Float64, LinRange(xll, round(xll+(ncols-1)*cellsize, digits=3), ncols))
y = collect(Float64, LinRange(yll, round(yll+(nrows-1)*cellsize, digits=3), nrows))
# check topotype
tmp = replace(dataorg[1], r"^\s+|,?\s+$" => "")
tmp = replace(tmp, "," => " ") # for csv data
tmp = split(tmp, r"\s+",keepempty=false)
tmp = parse.(Float64, tmp)
# topotype 2?
if length(tmp) == 1
if topotype == 3; println("topotype 2?"); end
topotype = 2
end
# topotype 3?
if length(tmp) > 1
if topotype == 2; println("topotype 3?"); end
topotype = 3
end
## assign topography
if topotype == 2
topo = parse.(Float64, dataorg)
topo = reshape(topo, (ncols, nrows))
topo = permutedims(topo,[2 1])
elseif topotype == 3
topo = zeros(nrows, ncols)
for k = 1:nrows
line = replace(dataorg[k], r"^\s+|,?\s+$" => "")
line = replace(line, "," => " ") # for csv data
line = split(line, r"\s+",keepempty=false)
topo[k,:] = parse.(Float64, line)
end
end
topo[topo.==nodata] .= NaN ## replace nodate to NaN
topo = reverse(topo, dims=1) ## flip
bathtopo = VisClaw.Topo(ncols, nrows, x, y, cellsize, cellsize, topo)
return bathtopo
end
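## Illustrative usage (a sketch; file names and topotype are assumptions):
#
#   topo = loadtopo("topo.asc", 3)   # a single topotype-3 ASCII file
#   topos = loadtopo("_output")      # every topo file listed in _output/topo.data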
#################################
#########################################
"""
dtopo = loaddtopo(outdir::AbstractString)
dtopo = loaddtopo(filename::AbstractString, topotype=3::Integer)
dtopo = loaddeform(outdir::AbstractString)
dtopo = loaddeform(filename::AbstractString, topotype=3::Integer)
load spatial distribution of seafloor deformation (dtopo)
"""
function loaddeform(filename::AbstractString, topotype=3::Integer)
## from _output directory
if isdir(filename)
dtopofile, topotype, ntopo = VisClaw.dtopodata(filename)
return VisClaw.loaddeform.(dtopofile, topotype)
end
## check args
if !isfile(filename); error("file $filename is not found."); end
if (topotype!=2) & (topotype!=3); error("Invalid topotype"); end
## separator in regular expression
regex = r"([+-]?(?:\d+\.?\d*|\.\d+)(?:[eE][+-]?\d+)?)"
## open topofile
f = open(filename,"r")
## read header
mx = parse(Int64, match(regex, readline(f)).captures[1])
my = parse(Int64, match(regex, readline(f)).captures[1])
mt = parse(Int64, match(regex, readline(f)).captures[1])
xlow = parse(Float64, match(regex, readline(f)).captures[1])
ylow = parse(Float64, match(regex, readline(f)).captures[1])
t0 = parse(Float64, match(regex, readline(f)).captures[1])
dx = parse(Float64, match(regex, readline(f)).captures[1])
dy = parse(Float64, match(regex, readline(f)).captures[1])
dt = parse(Float64, match(regex, readline(f)).captures[1])
## read topography in asc
dataorg = readlines(f)
## close topofile
close(f)
## meshgrid
x = collect(Float64, LinRange(xlow, round(xlow+(mx-1)*dx, digits=3), mx))
y = collect(Float64, LinRange(ylow, round(ylow+(my-1)*dy, digits=3), my))
# check topotype
tmp = replace(dataorg[1], r"^\s+|,?\s+$" => "") # equivalent to strip?
tmp = replace(tmp, "," => " ") # for csv data
tmp = split(tmp, r"\s+", keepempty=false)
tmp = parse.(Float64, tmp)
# topotype 2?
if length(tmp) == 1
if topotype == 3; println("topotype 2?"); end
topotype = 2
end
# topotype 3?
if length(tmp) > 1
if topotype == 2; println("topotype 3?"); end
topotype = 3
end
## assign topography
if topotype == 2
deform = parse.(Float64, dataorg)
deform = reshape(deform, (mx, my, mt))
deform = permutedims(deform, [2 1 3])
elseif topotype == 3
deform = zeros(my, mx, mt)
for k = 1:mt
for i = 1:my
line = replace(dataorg[i+(k-1)my], r"^\s+|,?\s+$" => "")
line = replace(line, "," => " ") # for csv data
line = split(line, r"\s+",keepempty=false)
deform[i,:,k] = parse.(Float64, line)
end
end
end
if mt==1; deform = dropdims(deform; dims=3); end
deform = reverse(deform, dims=1) ## flip
#deform[abs.(deform).<1e-2] .= NaN
dtopo = VisClaw.DTopo(mx,my,x,y,dx,dy,mt,t0,dt,deform)
return dtopo
end
#########################################
const loaddtopo = loaddeform
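## Illustrative usage (a sketch; the dtopo file name and topotype are assumptions):
#
#   dtopo = loaddtopo("dtopo.tt3", 3)   # seafloor deformation (topotype 3)
#   G = geogrd(dtopo)                   # GMT grid of the final deformation field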
#########################################
| VisClaw | https://github.com/hydrocoast/VisClaw.jl.git |