licenses sequencelengths 1-3 | version stringclasses 677 values | tree_hash stringlengths 40-40 | path stringclasses 1 value | type stringclasses 2 values | size stringlengths 2-8 | text stringlengths 25-67.1M | package_name stringlengths 2-41 | repo stringlengths 33-86 |
---|---|---|---|---|---|---|---|---|
[
"MIT"
] | 0.7.6 | 14faed094d3728f4e549b6fbc4b38b9b8c6a4a99 | docs | 906 | # [Standard Errors](@id standard_errors)
### Set up some data
```@example main
using Unfold
using UnfoldMakie, CairoMakie
using UnfoldSim
dat, evts = UnfoldSim.predef_eeg(; noiselevel = 10, return_epoched = true)
f = @formula 0 ~ 1 + condition + continuous
designDict = Dict(Any => (f, range(0, 1, length = size(dat, 1))))
```
It is possible to specify a solver that calculates the standard errors of the estimates for a single subject, as is possible for [custom solvers](@ref custom_solvers).
```@example main
se_solver = (x, y) -> Unfold.solver_default(x, y, stderror = true)
m = Unfold.fit(UnfoldModel, designDict, evts, dat, solver = se_solver)
results = coeftable(m)
plot_erp(results; stderror = true)
```
!!! warning
**In case of overlap-correction:** Use single-subject standard errors at your own risk. EEG data is autocorrelated, which means that standard errors are typically too small.
| Unfold | https://github.com/unfoldtoolbox/Unfold.jl.git |
|
[
"MIT"
] | 0.7.6 | 14faed094d3728f4e549b6fbc4b38b9b8c6a4a99 | docs | 3205 | # Basis Functions
```@setup main
using CairoMakie
```
This document will give you an explanation of basis functions. We start with basis functions for fMRI because they are very popular.
#### HRF / BOLD
We want to define a basis function. There are currently only a few basis functions implemented in Unfold.jl, but your imagination knows no bounds!
We first have a look at the BOLD-HRF basisfunction aka [Blood Oxygenation Level Dependent Hemodynamic Response Function](https://en.wikipedia.org/wiki/Blood-oxygen-level-dependent_imaging):
```@example main
using Unfold, DSP
TR = 1.5 # the repetition time (sampling interval in seconds)
bold = hrfbasis(TR) # using default SPM parameters
eventonset = 1.3
bold_kernel = e -> Unfold.kernel(bold, e)
lines(bold_kernel(eventonset)[:,1]) # returns a matrix, thus [:, 1]
```
This is the shape that is assumed to reflect the activity for an event. Generally, we would like to know how much to scale this response shape per condition, e.g. in `condA` we might scale it by 0.7, in `condB` by 1.2.
But let's start at the beginning and first simulate an fMRI signal. Then you will also appreciate why we need to deconvolve it later.
### Convolving a response shape to get a "recorded" fMRI signal
We start by convolving this HRF function with an impulse vector at event onsets.
```@example main
y = zeros(100) # signal length = 100
y[[10, 30, 45]] .= 0.7 # 3 events for condition A
y[[37]] .= 1.2 # 1 event for condition B
y_conv = conv(y, bold_kernel(0)) # convolve!
lines(y_conv[:,1])
```
Next, we would add some noise:
```@example main
using Random
y_conv += randn(size(y_conv))
lines(y_conv[:,1])
```
🎉 - we did it, we simulated fMRI data.
Now you can see that the conditions overlap in time. To get back to the original amplitude values, we need to specify a basis function and use Unfold to deconvolve the signals.
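As a hedged sketch (the event table and formula below are illustrative, not part of this tutorial's code), deconvolving this simulated signal with Unfold could look like:
```julia
using DataFrames
# Hypothetical event table matching the impulses above; :latency is in samples.
evts_sim = DataFrame(latency = [10, 30, 37, 45],
                     condition = ["condA", "condA", "condB", "condA"])
f_sim = @formula 0 ~ 0 + condition # one scaling coefficient per condition
# Reuse the HRF basis `bold` from above to deconvolve the continuous signal:
m_sim = fit(UnfoldModel, [Any => (f_sim, bold)], evts_sim, y_conv)
coeftable(m_sim)
```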
!!! note
Events can fall between TRs (sampling intervals). Some packages subsample the time signal, but in `Unfold` we can evaluate the kernel directly at a given event time via `Unfold.kernel`, which allows us to use event onsets that are not multiples of the TR.
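For instance, a hypothetical event onset at 0.75 s (between two samples) simply yields a shifted kernel (reusing `bold_kernel` from above):
```@example main
lines(bold_kernel(0)[:, 1])
lines!(bold_kernel(0.75)[:, 1]) # hypothetical onset 0.75 s between two TRs
current_figure()
```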
### FIR Basis Function
Okay, let's have a look at a different basis function: The FIR basisfunction. FIR stands for [Finite-Impulse-Response](https://en.wikipedia.org/wiki/Finite_impulse_response) and is a term taken from the filtering literature.
```@example main
using Unfold #hide
basisfunction = firbasis(τ=(-0.4,.8), sfreq=50, name="myFIRbasis")
fir_kernel = e -> Unfold.kernel(basisfunction, e)
m = fir_kernel(0)
f = Figure()
f[1,1] = Axis(f)
for col = 1:size(m, 2)
lines!(m[:,col])
end
current_figure()
```
The first thing to notice is that it is not a single basisfunction, but a set of basisfunctions. So every condition is explained by several basis functions!
To make this clearer, let's show it in 2D:
```@example main
fir_kernel(0)[1:10,1:10]
```
(all `.` are `0`'s)
The FIR basis set consists of multiple basis functions. That is, each event is now *time-expanded* to multiple predictors, each with a certain time delay to the event onset.
This allows us to model any linear overlap shape, and doesn't force us to make assumptions about the convolution kernel, as we had to do in the BOLD case.
| Unfold | https://github.com/unfoldtoolbox/Unfold.jl.git |
|
[
"MIT"
] | 0.7.6 | 14faed094d3728f4e549b6fbc4b38b9b8c6a4a99 | docs | 600 |
## Install a dev-version of Unfold
In order to see and change the tutorials, you have to install a local dev-version of Unfold via:
`]dev --local Unfold`
This clones `git#main` into `./dev/Unfold`.
### Instantiating the documentation environment
To generate documentation, we recommend installing LiveServer.jl - then you can do:
```julia
using LiveServer
servedocs(skip_dirs=joinpath("docs","src","generated"),literate_dir=joinpath("docs","literate"))
```
If you prefer a one-off build (see the sketch after this list):
- activate the `./docs` folder (be sure to `]instantiate` the first time!)
- run `include("docs/make.jl")`
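In code, the one-off build sketched in this list looks roughly like this (run from the repository root):
```julia
using Pkg
Pkg.activate("./docs")
Pkg.instantiate()        # only needed the first time
include("docs/make.jl")
```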
| Unfold | https://github.com/unfoldtoolbox/Unfold.jl.git |
|
[
"MIT"
] | 0.7.6 | 14faed094d3728f4e549b6fbc4b38b9b8c6a4a99 | docs | 1167 | # Package-extensions
Package extensions were introduced in Julia 1.9, and Unfold.jl makes use of them in four ways.
Prior to using some functionality, you have to add and load the specific package(s) for that functionality to become available. The reason for this is that if you don't need e.g. GPU support, you also don't need to install it.
## MixedModels
To use formulas like `@formula(0~1+condition+(1+condition|subject))` you have to load MixedModels, e.g.:
```julia
using MixedModels
using Unfold
```
## GPU: Krylov, CUDA
To use GPU support as described in [custom solvers](@ref custom_solvers) you have to:
```julia
using Krylov,CUDA
using Unfold
```
## RobustSolvers.jl
To use robust (outlier-"safe") solvers as described in [custom solvers](@ref custom_solvers) you have to:
```julia
using RobustSolvers
using Unfold
```
## Non-linear effects: BSplineKit.jl
Finally, to use non-linear effects/splines like in `@formula 0~1+spl(continuous,5)` you have to load:
```julia
using BSplineKit
using Unfold
```
!!! note
In principle you should be able to load the package after loading Unfold. But sometimes this doesn't work; a `Base.retry_load_extensions()` call might help in these situations. | Unfold | https://github.com/unfoldtoolbox/Unfold.jl.git |
|
[
"MIT"
] | 0.7.6 | 14faed094d3728f4e549b6fbc4b38b9b8c6a4a99 | docs | 57 | ```@autodocs
Modules = [Unfold]
Order = [:function]
``` | Unfold | https://github.com/unfoldtoolbox/Unfold.jl.git |
|
[
"MIT"
] | 0.7.6 | 14faed094d3728f4e549b6fbc4b38b9b8c6a4a99 | docs | 53 | ```@autodocs
Modules = [Unfold]
Order = [:type]
``` | Unfold | https://github.com/unfoldtoolbox/Unfold.jl.git |
|
[
"MIT"
] | 0.7.6 | 14faed094d3728f4e549b6fbc4b38b9b8c6a4a99 | docs | 4618 | # [Mass Univariate Linear Models (no overlap correction)](@id lm_massunivariate)
In this notebook we will fit regression models to simulated EEG data. We will see that we need some type of overlap correction, as the events are close in time to each other, so that the respective brain responses overlap.
If you want a more detailed introduction to this topic, check out [our paper](https://peerj.com/articles/7838/).
## Setting up & loading the data
```@example Main
using DataFrames
using Unfold
using UnfoldMakie, CairoMakie # for plotting
using UnfoldSim
using DisplayAs # hide
nothing # hide
```
## Load Data
We'll start with some predefined simulated continuous EEG data. We have 2000 events, 1 channel and one condition with two levels.
```@example Main
data, evts = UnfoldSim.predef_eeg()
nothing # hide
```
## Inspection
The data contains only a little noise. The underlying signal pattern is a positive-negative-positive spike.
```@example Main
times_cont = range(0,length=200,step=1/100) # we simulated at 100 Hz; plot the first 2 seconds
f,ax,h = plot(times_cont,data[1:200])
vlines!(evts[evts.latency .<= 200, :latency] ./ 100;color=:black) # show events, latency in samples!
ax.xlabel = "time [s]"
ax.ylabel = "voltage [µV]"
f
```
To inspect the event dataframe we use
```@example Main
show(first(evts, 6), allcols = true)
```
Every row is an experimental event. Note that `:latency` refers to time in samples (in the BIDS specification, `:onset` would typically refer to seconds).
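For example, a quick sanity-check conversion (the column name is illustrative; 100 Hz is this simulation's sampling rate):
```@example Main
evts.latency_seconds = evts.latency ./ 100 # samples to seconds
first(evts.latency_seconds, 3)
```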
## Traditional Mass Univariate Analysis
To perform a mass univariate analysis, you must complete the following steps:
1. Split data into epochs
2. Specify a formula
3. Fit a linear model to each time point & channel
4. Visualize the results.
#### 1. Split data into epochs
Initially, you have data with a duration that represents the whole experimental trial. You need to cut the data into small regular epochs related to some event, e.g. the start of fixation.
```@example Main
# Unfold supports multi-channel data, so we provide a channels × time matrix, which we can create like this from a vector:
data_r = reshape(data, (1,:))
# cut the data into epochs
data_epochs, times = Unfold.epoch(data = data_r, tbl = evts, τ = (-0.4, 0.8), sfreq = 100); # channel x timesteps x trials
size(data_epochs)
```
- `τ` specifies the epoch size.
- `sfreq` - sampling rate, converts `τ` to samples.
```@example Main
typeof(data_epochs)
```
!!! note
In Julia, `missing` is supported throughout the ecosystem. Thus, we can have partial trials and they will be incorporated / ignored by the respective functions. Helpful functions are `disallowmissing` and the internal `Unfold.drop_missing_epochs`.
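A minimal sketch of such a cleanup step, assuming the `(events, data)` argument order used in the Unfold tutorials:
```julia
# Drop epochs containing missing samples (e.g. partial trials at the recording
# edges), keeping the event table and the epoched data aligned:
evts_clean, data_clean = Unfold.drop_missing_epochs(evts, data_epochs)
size(data_clean)
```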
#### 2. Specify a formula
Define a formula to be applied to each time point (and each channel) relative to the event. `condition` and `continuous` are the names of the event-describing columns in `evts` that we want to use for modelling.
```@example Main
f = @formula 0 ~ 1 + condition + continuous # note the formula's left side is `0 ~` for technical reasons
nothing # hide
```
#### 3. Fit a linear model to each time point & channel
Fit the "`UnfoldModel`" (the `fit` syntax is used throughout the Julia ecosystem, with the first element indicating what kind of model to fit)
```@example Main
m = fit(UnfoldModel, f, evts, data_epochs, times);
nothing #hide
```
An alternative way to call this model is shown below. This syntax allows you to fit multiple events at once. For example, replacing `Any` with `:fixation => ...` will fit this model specifically to the fixation event type (see the sketch after the code block).
```@example Main
m = fit(UnfoldModel, [Any=>(f, times)], evts, data_epochs);
nothing #hide
```
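For instance, a hypothetical event-specific fit, assuming `evts` had a `type` column with `"fixation"` entries:
```julia
m_fix = fit(UnfoldModel, ["fixation" => (f, times)], evts, data_epochs;
            eventcolumn = "type")
```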
Inspect the fitted model:
```@example Main
m
m|> DisplayAs.withcontext(:is_pluto=>true) # hide
```
Note these functions to discover the model: `design`, `designmatrix`, `modelfit` and most importantly, `coeftable`.
!!! info
There are of course further methods, e.g. `coef`, `ranef`, `Unfold.formula`, `modelmatrix` which might be helpful at some point, but not important now.
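For instance, to peek at the design matrix behind the fit:
```@example Main
designmatrix(m)
```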
Using `coeftable`, we can get a *tidy* DataFrame, very useful for your further analysis.
```@example Main
first(coeftable(m), 6)
```
#### 4. Visualize the results
Tidy DataFrames are easy to visualize using e.g. AlgebraOfGraphics.jl. The function `plot_erp` from `UnfoldMakie` makes it even easier.
```@example Main
results = coeftable(m)
plot_erp(results)
```
As you can see, there is a lot going on, even in the baseline period! This is because the signal was simulated with overlapping events. In the next tutorial you will learn how to fix this.
| Unfold | https://github.com/unfoldtoolbox/Unfold.jl.git |
|
[
"MIT"
] | 0.7.6 | 14faed094d3728f4e549b6fbc4b38b9b8c6a4a99 | docs | 2687 | # [Linear Model with Overlap Correction](@id lm_overlap)
!!! note
We recommend you briefly go over the [mass univariate linear modelling tutorial](@ref lm_massunivariate).
In this notebook we will fit regression models to (simulated) EEG data. We will see that we need some type of overlap correction, as the events are close in time to each other, so that the respective brain responses overlap.
If you want a more detailed introduction to this topic, check out [our paper](https://peerj.com/articles/7838/).
## Setting up & loading the data
```@example Main
using Unfold
using UnfoldSim
using UnfoldMakie,CairoMakie
using DataFrames
using DisplayAs # hide
data, evts = UnfoldSim.predef_eeg()
nothing # hide
```
## Overlap Correction
For an overlap correction analysis we will do one additional step: define a temporal basisfunction. The steps are as follows:
1. specify a temporal basisfunction
2. specify a formula
3. fit a linear model for each channel (one for all timepoints!)
4. visualize the results.
## Time-expanded / Deconvolved Model Fit
#### 1. specify a temporal basisfunction
By default, we would want to use a FIR basisfunction. See [Basis Functions](@ref) for more details.
```@example Main
basisfunction = firbasis(τ=(-0.4,.8),sfreq=100)
nothing #hide
```
#### 2. specify a formula
We specify the same formula as before
```@example Main
f = @formula 0~1+condition+continuous
nothing #hide
```
#### 3. fit the linear model
The formula and basisfunction are not enough on their own. We also need to specify which formula matches which event - this is important in cases where there are multiple events with different formulas.
```@example Main
bf_vec = [Any=>(f,basisfunction)]
bf_vec|> DisplayAs.withcontext(:is_pluto=>true) # hide
```
!!! note
The `Any` means to use all rows in `evts`. In case you have multiple events, you'd want to specify multiple basisfunctions e.g.
```
bfDict = ["stimulus"=>(f1,basisfunction1),
"response"=>(f2,basisfunction2)]
```
You likely have to specify a further argument to `fit`: `eventcolumn="type"`, with `type` being the column in `evts` that codes for the event (stimulus / response in this case); see the sketch below.
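The corresponding multi-event call would then look roughly like this (assuming `evts.type` holds the "stimulus"/"response" labels):
```julia
m = fit(UnfoldModel, bfDict, evts, data; eventcolumn = "type")
```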
Now we are ready to fit a `UnfoldLinearModel`. Note that instead of `times` as in the mass-univariate case, we now have to provide the `BasisFunction` type.
```@example Main
m = fit(UnfoldModel,bf_vec,evts,data);
nothing #hide
```
#### 4. Visualize the model
Similarly to the previous tutorial, we can visualize the model
```@example Main
results = coeftable(m)
plot_erp(results)
```
Cool! All overlapping activity has been removed and we recovered the simulated underlying signal.
| Unfold | https://github.com/unfoldtoolbox/Unfold.jl.git |
|
[
"MIT"
] | 0.7.6 | 14faed094d3728f4e549b6fbc4b38b9b8c6a4a99 | docs | 2335 | # [Mass Univariate Linear Mixed Models](@id lmm_massunivariate)
```@example Main
using Unfold
using UnfoldSim
using MixedModels # important to load to activate the UnfoldMixedModelsExtension
using UnfoldMakie, CairoMakie # plotting
using DataFrames
using CategoricalArrays
nothing;#hide
```
!!! important
You have to run `using MixedModels` before or after loading Unfold to activate the MixedModels abilities!
This notebook is similar to the [Mass Univariate Linear Models (no overlap correction) tutorial](@ref lm_massunivariate), but fits mass-univariate *mixed* models - that is, one model over all subjects, instead of one model per subject. This allows us to include item effects, for example.
## Mass Univariate **Mixed** Models
Again we have 4 steps:
1. Split data into epochs
2. Specify a formula
3. Fit a linear model to each time point & channel
4. Visualize the results.
#### 1. Epoching
```@example Main
data, evts = UnfoldSim.predef_eeg(10; return_epoched = true) # simulate 10 subjects
data = reshape(data, 1, size(data, 1), :) # concatenate the data into a long EEG dataset
times = range(0, length = size(data, 2), step = 1 / 100)
transform!(evts, :subject => categorical => :subject); # :subject must be categorical, otherwise MixedModels.jl complains
nothing #hide
```
The `evts` DataFrame has an additional column (besides being much taller): `subject`.
```@example Main
first(evts, 6)
```
#### 2. Formula specification
We define the formula. Importantly, we need to specify a random effect. We use `zerocorr` to speed up the calculation.
```@example Main
f = @formula 0 ~ 1 + condition * continuous + zerocorr(1 + condition * continuous | subject);
nothing #hide
```
#### 3. Model fitting
We can now run the LinearMixedModel at each time point.
```@example Main
m = fit(UnfoldModel, f, evts, data, times)
nothing #hide
```
#### 4. Visualization of results
Let's start with the **fixed** effects.
We see the condition effects and some residual overlap activity in the fixed effects.
```@example Main
results = coeftable(m)
res_fixef = results[isnothing.(results.group), :]
plot_erp(res_fixef)
```
And now comes the **random** effect:
```@example Main
res_ranef = results[results.group .== :subject, :]
plot_erp(res_ranef)
```
### Statistics
Check out the [LMM p-value tutorial](@ref lmm_pvalues)
| Unfold | https://github.com/unfoldtoolbox/Unfold.jl.git |
|
[
"MIT"
] | 0.7.6 | 14faed094d3728f4e549b6fbc4b38b9b8c6a4a99 | docs | 2161 | # [Overlap Correction with Linear Mixed Models](@id lmm_overlap)
```@example Main
using Unfold
using UnfoldSim
using CategoricalArrays
using MixedModels
using UnfoldMakie, CairoMakie
using DataFrames
nothing;#hide
```
This notebook is similar to the [Linear Model with Overlap Correction](@ref lm_overlap) tutorial, but fits **mixed** models with overlap correction.
!!! warning
**Limitation**: This functionality is not ready for general use. There are still a lot of things to find out and tinker with. Don't use this if you haven't looked under the hood of the toolbox! Be aware of crashes / timeouts for non-trivial problems.
## Get some data
```@example Main
dat, evts = UnfoldSim.predef_2x2(; signalsize=20, n_items=16, n_subjects=16)
# We also need to fix the latencies; they are currently relative to 1:size(dat, 1), but we want one continuous long EEG.
subj_idx = [parse(Int, split(string(s), 'S')[2]) for s in evts.subject]
evts.latency .+= size(dat, 1) .* (subj_idx .- 1)
dat = dat[:] # we need all data concatenated over subjects
evts.subject = categorical(Array(evts.subject))
nothing #hide
```
## Linear **Mixed** Model Continuous Time
Again we have 4 steps:
1. Specify a temporal basisfunction
2. Specify a formula
3. Fit a linear model for each channel (one model for all timepoints!)
4. Visualize the results.
#### 1. Specify a temporal basisfunction
By default, we would want to use a FIR basis function. See [Basis Functions](@ref) for more details.
```@example Main
basisfunction = firbasis(ฯ=(-0.4, .8), sfreq=20, name="stimulus")
nothing #hide
```
#### 2. Specify the formula
Define the formula and specify a random effect.
!!! note
We use `zerocorr` to prevent the model from computing all correlations between all timepoints and factors.
```@example Main
f = @formula 0 ~ 1 + A * B + zerocorr(1 + A * B | subject);
```
#### 3. Fit the model
```@example Main
bfDict = Dict(Any=>(f, basisfunction))
# Skipping this tutorial for now due to a significant error.
m = fit(UnfoldModel, bfDict, evts, dat)
results = coeftable(m)
first(results, 6)
```
#### 4. Visualize results
```@example Main
plot_erp(results; mapping=(; col = :group))
```
| Unfold | https://github.com/unfoldtoolbox/Unfold.jl.git |
|
[
"MIT"
] | 0.7.6 | 14faed094d3728f4e549b6fbc4b38b9b8c6a4a99 | docs | 46 | - Case 1a: eventA: y~1 (without overlap)
-
| Unfold | https://github.com/unfoldtoolbox/Unfold.jl.git |
|
[
"MIT"
] | 0.1.1 | e69f401fe78acaeed205e459c4b513335283fb1c | code | 642 | execute = isempty(ARGS) || ARGS[1] == "run"
using Literate
lit = joinpath(@__DIR__, "lit")
src = joinpath(@__DIR__, "src")
gen = joinpath(@__DIR__, "src/generated")
for (root, _, files) in walkdir(lit), file in files
splitext(file)[2] == ".jl" || continue # process .jl files only
ipath = joinpath(root, file)
opath = splitdir(replace(ipath, lit => gen))[1]
Literate.markdown(ipath, opath, documenter = execute)
Literate.notebook(ipath, opath; execute = false)
end
# functions
ismd(f) = splitext(f)[2] == ".md"
pages(folder) =
[joinpath("generated/", folder, f) for f in readdir(joinpath(gen, folder)) if ismd(f)]
| EPGsim | https://github.com/aTrotier/EPGsim.jl.git |
|
[
"MIT"
] | 0.1.1 | e69f401fe78acaeed205e459c4b513335283fb1c | code | 947 | using EPGsim
using Documenter, Literate
# Generates examples with literate (removed)
#include("generate_lit.jl")
DocMeta.setdocmeta!(EPGsim, :DocTestSetup, :(using EPGsim); recursive=true)
makedocs(;
modules=[EPGsim],
authors="aTrotier <a.trotier@gmail.com> and contributors",
repo="https://github.com/aTrotier/EPGsim.jl/blob/{commit}{path}#{line}",
sitename="EPGsim.jl",
doctest = true,
format=Documenter.HTML(;
prettyurls=get(ENV, "CI", "false") == "true",
canonical="https://aTrotier.github.io/EPGsim.jl",
edit_link="main",
assets=String[],
),
pages=[
"Home" => "index.md",
"Regular EPG " => "regular.md",
#"Test literate" => "generated/01-autoDiff.md", # generated from literate
"Automatic Differentiation" => "AD.md",
"API" => "API.md",
],
)
deploydocs(;
repo="github.com/aTrotier/EPGsim.jl",
devbranch="main",
)
| EPGsim | https://github.com/aTrotier/EPGsim.jl.git |
|
[
"MIT"
] | 0.1.1 | e69f401fe78acaeed205e459c4b513335283fb1c | code | 2438 | #---------------------------------------------------------
# # [Automatic Differentiation](@id 01-autoDiff)
#---------------------------------------------------------
#=
## Description
This example describes how to use Automatic Differentiation with the package
**ForwardDiff.jl** on a Multi-Echo Spin-Echo (MESE) sequence.
=#
# ## Setup
using CairoMakie
using EPGsim
using ForwardDiff
# ## MESE function
# First we need a function that returns the echo amplitudes at n*TE.
# We need to make sure that the EPGStates object will have a type Complex{T} where T can
# be a Float or a Dual number used by `ForwardDiff`
function MESE_EPG(T2,T1,TE,ETL,delta)
T = complex(eltype(T2))
E = EPGStates([T(0.0)],[T(0.0)],[T(1.0)])
echo_vec = Vector{Complex{eltype(T2)}}()
epgRotation!(E,pi/2*delta, pi/2)
## loop over refocusing-pulses
for i = 1:ETL
epgDephasing!(E,1)
epgRelaxation!(E,TE,T1,T2)
epgRotation!(E,pi*delta,0.0)
epgDephasing!(E,1)
push!(echo_vec,E.Fp[1])
end
return abs.(echo_vec)
end
# Let's see if we can see a T₂-decaying exponential curve with B₁ = 1.0
T2 = 60.0
T1 = 1000.0
TE = 7
ETL = 50
deltaB1 = 1
TE_vec = range(7,50*7,50)
amp = MESE_EPG(T2,T1,TE,ETL,deltaB1)
lines(TE_vec,amp)
#=
The derivative of the function
$$f(TE) = \exp\left(-\frac{TE}{T_2}\right)$$
with respect to the variable T₂ gives:
=#
df = TE_vec .* exp.(-TE_vec./T2)./(T2^2)
lines(TE_vec,df,axis =(;title = "dS/dT2", xlabel="TE [ms]"))
# ## perform AD
j = ForwardDiff.jacobian(x -> MESE_EPG(x,T1,TE,ETL,deltaB1),[T2])
# ## Derivation along multiple parameters
function MESE_EPG2(T2T1,TE,ETL,delta)
T2,T1 = T2T1
T = complex(eltype(T2))
E = EPGStates([T(0.0)],[T(0.0)],[T(1.0)])
echo_vec = Vector{Complex{eltype(T2)}}()
epgRotation!(E,pi/2*delta, pi/2)
## loop over refocusing-pulses
for i = 1:ETL
epgDephasing!(E,1)
epgRelaxation!(E,TE,T1,T2)
epgRotation!(E,pi*delta,0.0)
epgDephasing!(E,1)
push!(echo_vec,E.Fp[1])
end
return abs.(echo_vec)
end
j2 = ForwardDiff.jacobian(x -> MESE_EPG2(x,TE,ETL,deltaB1),[T2,T1])
# ## Reproducibility
# This page was generated with the following version of Julia:
using InteractiveUtils
io = IOBuffer();
versioninfo(io);
split(String(take!(io)), '\n')
# And with the following package versions
import Pkg; Pkg.status()
| EPGsim | https://github.com/aTrotier/EPGsim.jl.git |
|
[
"MIT"
] | 0.1.1 | e69f401fe78acaeed205e459c4b513335283fb1c | code | 44 | module EPGsim
include("RegularEPG.jl")
end
| EPGsim | https://github.com/aTrotier/EPGsim.jl.git |
|
[
"MIT"
] | 0.1.1 | e69f401fe78acaeed205e459c4b513335283fb1c | code | 4642 | export epgDephasing!,epgRelaxation!,epgRotation!, epgThreshold!
export rfRotation
export EPGStates, getStates
"""
EPGStates{T <: Real}
Stores the EPG states in 3 vectors Fp,Fn and Z.
# Constructors :
EPGStates(Fp::Vector{Complex{S}},Fn::Vector{Complex{S}},Z::Vector{Complex{S}}) where {S <: Real}
EPGStates(Fp::T=0,Fn::T=0,Z::T=1) where T <: Number
# Fields
- `Fp::Vector{Complex{T}}`
- `Fn::Vector{Complex{T}}`
- `Z::Vector{Complex{T}}`
# Related functions
- `getStates(E::EPGStates)` : extract EPG states as matrix 3xN
"""
mutable struct EPGStates{T <: Real}
Fp::Vector{Complex{T}}
Fn::Vector{Complex{T}}
Z::Vector{Complex{T}}
function EPGStates(Fp::Vector{Complex{S}},Fn::Vector{Complex{S}},Z::Vector{Complex{S}}) where {S <: Real}
if Fp[1] != conj(Fn[1])
error("Fp[1] should be complex conjugate to Fn[1]")
end
if imag(Z[1]) != 0
error("imaginary part of Z[1] should be equal to 0")
end
return new{S}(Fp,Fn,Z)
end
end
"""
getStates(E::EPGStates)
Extract EPG states as matrix 3xN
"""
function EPGStates(Fp::T=0,Fn::T=0,Z::T=1) where T <: Number
T2 = ComplexF64
return EPGStates(T2.([Fp]),T2.([Fn]),T2.([Z]))
end
function getStates(E::EPGStates)
return stack([E.Fp,E.Fn,E.Z],dims=1)
end
function Base.show(io::IO, E::EPGStates{T} ) where {T}
println(io, "EPGStates struct with fields : Fp, Fn, Z")
display(getStates(E))
end
"""
epgDephasing!(E::EPGStates, n::Int=1, threshold::Real=10e-6)
shifts the transverse dephasing states `F` corresponding to n dephasing-cycles;
n can be any integer. Trailing states whose magnitude falls below `threshold` are pruned.
"""
function epgDephasing!(E::EPGStates, n::Int=1,threshold::Real=10e-6)
if(abs(n)>1)
for i in 1:abs(n)
E = epgDephasing!(E, (n > 0 ? +1 : -1))
end
elseif(n == 1 || n == -1)
push!(E.Fp,0)
push!(E.Fn,0)
push!(E.Z,0)
if n == 1
E.Fp[:] = circshift(E.Fp,+1)# Shift positive F states right
E.Fn[:] = circshift(E.Fn,-1) # Shift negative F* states left
# Update extremal states: F_{+0} using F*_{-0}, F*_{-max+1}=0
E.Fp[1] = conj(E.Fn[1]);
E.Fn[end] = 0;
else #
E.Fp[:] = circshift(E.Fp,-1) # Shift positive F states left
E.Fn[:] = circshift(E.Fn,+1) # Shift negative F* states right
# Update extremal states: F*_{-0} using F_{+0}, F_{+max}=0
E.Fn[1] = conj(E.Fp[1]);
E.Fp[end] = 0;
end
end
E = epgThreshold!(E,threshold)
return E
end
#=
function epgDephasing(E::EPGStates, n::Int,threshold::Real)
E = epgDephasing(E, n)
E = epgThreshold(E,threshold)
end
=#
function epgThreshold!(E::EPGStates,threshold::Real)
threshold² = threshold^2
for i in length(E.Fp):-1:2
if abs(E.Fp[i]^2 + E.Fn[i]^2 + E.Z[i]^2) < threshold²
pop!(E.Fp)
pop!(E.Fn)
pop!(E.Z)
else
return E
end
end
return E
end
"""
epgRelaxation!(E::EPGStates,t,T1, T2)
applies relaxation matrices to a set of EPG states.
# Arguments
* `E::EPGStates`
* `t::AbstractFloat` - length of time interval
* `T1::AbstractFloat` - T1
* `T2::AbstractFloat` - T2
"""
function epgRelaxation!(E::EPGStates,t,T1, T2)
@. E.Fp = exp(-t/T2) * E.Fp
@. E.Fn = exp(-t/T2) * E.Fn
@. E.Z[2:end] = exp(-t/T1) * E.Z[2:end]
E.Z[1] = exp(-t/T1) * (E.Z[1]-1.0) + 1.0
return E
end
"""
rfRotation(alpha, phi=0.)
returns the rotation matrix for a pulse with flip angle `alpha` and phase `phi`.
# Arguments
* `alpha` - flip angle (radian)
* `phi=0.` - phase of the flip angle (radian)
"""
function rfRotation(alpha, phi=0.)
R = [ cos(alpha/2.)^2 exp(2*im*phi)*sin(alpha/2.)^2 -im*exp(im*phi)*sin(alpha);
exp(-2*im*phi)*sin(alpha/2.)^2 cos(alpha/2.)^2 im*exp(-im*phi)*sin(alpha);
-im/2 .*exp(-im*phi)*sin(alpha) im/2 .*exp(im*phi)*sin(alpha) cos(alpha) ]
end
"""
epgRotation!(E::EPGStates, alpha::Real, phi::Real=0.0)
applies Bloch-rotation (<=> RF pulse) to a set of EPG states.
# Arguments
* `E::EPGStates`
* `alpha::Real` - flip angle of the RF pulse (rad)
* `phi::Real=0.0` - phase of the RF pulse (rad)
"""
function epgRotation!(E::EPGStates, alpha::Real, phi::Real=0.0)
R = rfRotation(alpha, phi)
epgRotation!(E, R)
return E
end
"""
epgRotation!(E::EPGStates, R::Matrix)
applies rotation matrix from `rfRotation` function to the EPGStates
# Arguments
* `E::EPGStates`
* `R::Matrix` - rotation matrix (rad)
"""
function epgRotation!(E::EPGStates, R::Matrix)
# apply rotation to all states per default
n = length(E.Z) # numStates
for i = 1:n
E.Fp[i],E.Fn[i],E.Z[i] = R*[E.Fp[i]; E.Fn[i]; E.Z[i]]
end
return E
end
| EPGsim | https://github.com/aTrotier/EPGsim.jl.git |
|
[
"MIT"
] | 0.1.1 | e69f401fe78acaeed205e459c4b513335283fb1c | code | 156 | using EPGsim
using Test
using ForwardDiff
using BenchmarkTools
@testset "EPGsim.jl" begin
include("epg/test_regular.jl")
include("test_AD.jl")
end
| EPGsim | https://github.com/aTrotier/EPGsim.jl.git |
|
[
"MIT"
] | 0.1.1 | e69f401fe78acaeed205e459c4b513335283fb1c | code | 787 | using ForwardDiff
function MESE_EPG(T2,T1,TE,ETL,delta)
T = eltype(complex(T2))
E = EPGStates([T(0.0)],[T(0.0)],[T(1.0)])
echo_vec = Vector{Complex{eltype(T2)}}()
epgRotation!(E,pi/2*delta, pi/2)
# loop over refocusing-pulses
for i = 1:ETL
epgDephasing!(E,1)
epgRelaxation!(E,TE,T1,T2)
epgRotation!(E,pi*delta,0.0)
epgDephasing!(E,1)
push!(echo_vec,E.Fp[1])
end
return abs.(echo_vec)
end
@testset "EPG-AD" begin
#amp = MESE_EPG(60.0,1000.0,7,50,1) # Not used
T2 = 60.0
T1 = 1000.0
TE = 7.0
ETL = 50
deltaB1 = 1.0
# analytic gradient
TE_vec = TE:TE:TE*50
df = TE_vec .* exp.(-TE_vec./60.0)./(60^2)
# Automatic differentiation
j = ForwardDiff.jacobian(x -> MESE_EPG(x,T1,TE,ETL,deltaB1),[60.0])
@test vec(abs.(j)) ≈ df
end | EPGsim | https://github.com/aTrotier/EPGsim.jl.git |
|
[
"MIT"
] | 0.1.1 | e69f401fe78acaeed205e459c4b513335283fb1c | code | 2753 | function MESE_EPG_thresh(T2,T1,TE,ETL,delta,thresh)
T = eltype(complex(T2))
E = EPGStates([T(0.0)],[T(0.0)],[T(1.0)])
echo_vec = Vector{Complex{eltype(T2)}}()
epgRotation!(E,pi/2*delta, pi/2)
# loop over refocusing-pulses
R = rfRotation(pi*delta,0.0)
for i = 1:ETL
epgDephasing!(E,1,thresh)
epgRelaxation!(E,TE,T1,T2)
epgRotation!(E,R)
epgDephasing!(E,1,thresh)
push!(echo_vec,E.Fp[1])
end
return abs.(echo_vec)
end
@testset "EPG" begin
# test empty
E=EPGStates()
@test E.Fp == [0] && E.Fn == [0] && E.Z == [1]
# test initialization
@test_throws "Fp[1] should be complex conjugate to Fn[1]" E=EPGStates(1+2im,1+0im,1+0im)
@test_throws "imaginary part of Z[1] should be equal to 0" E=EPGStates(1+2im,1-2im,1+2im)
# test pulse
E=EPGStates()
epgRotation!(E,deg2rad(47),deg2rad(23))
@test getStates(E) ≈ [
0.2857626571584661 - im*0.6732146319308543,
0.2857626571584661 + im*0.6732146319308543,
0.6819983600624985]
#test positive gradient
epgDephasing!(E,1)
@test getStates(E) ≈ [[0, 0, 0.6819983600624985];;
[0.2857626571584661 - im * 0.6732146319308543, 0, 0]]
# test negative gradient
E = EPGStates()
epgRotation!(E,deg2rad(47),deg2rad(23))
epgDephasing!(E,-1)
@test getStates(E) ≈ [[0, 0, 0.6819983600624985];;
[0, 0.2857626571584661 + im * 0.6732146319308543, 0]]
# test multiple gradient
E = EPGStates()
epgRotation!(E,deg2rad(47),deg2rad(23))
epgDephasing!(E,-2)
epgRotation!(E,deg2rad(47),deg2rad(23))
epgDephasing!(E,1)
@test getStates(E) ≈ [[0, 0, 0.4651217631279373];;
[0.19488966354917586-im*0.45913127494692113, 0.240326160353821+im*0.5661729534388877,0];;
[0, 0, -0.26743911843603135];;
[-0.045436496804645087+im*0.10704167849196657, 0, 0]]
# test relaxation
E = EPGStates()
epgRotation!(E,deg2rad(47),deg2rad(23))
epgDephasing!(E,1)
epgRelaxation!(E,10,1000,100)
@test getStates(E) ≈ [[0, 0, 0.6851625292479138];;
[0.2585687448743616 - im*0.6091497893403431, 0, 0]]
# test threshold
E = EPGStates([0+0*im,0+0.5im,0+0.01im],[0+0*im,0+0.5im,0+0.01im],[1+0*im,0+0im,0+0.0im])
epgDephasing!(E,1,10e-2)
@test getStates(E) ≈ [[0-0.5im, 0+0.5im, 1];;
[0, 0+0.01im, 0];;
[0.5im, 0,0]]
# benchmark
b = @benchmark MESE_EPG_thresh(60.0,1000.0,7.0,50,0.9,0.0)
@info "without threshold (0.0) :\n
time = $(median(b).time/1000) us\n
memory = $(median(b).memory)\n
allocs = $(median(b).allocs)\n
gctimes = $(median(b).gctime) ns\n"
b = @benchmark MESE_EPG_thresh(60.0,1000.0,7.0,50,0.9,10e-6)
@info "With threshold 10e-6 :\n
time = $(median(b).time/1000) us\n
memory = $(median(b).memory)\n
allocs = $(median(b).allocs)\n
gctimes = $(median(b).gctime) ns\n"
end | EPGsim | https://github.com/aTrotier/EPGsim.jl.git |
|
[
"MIT"
] | 0.1.1 | e69f401fe78acaeed205e459c4b513335283fb1c | docs | 1062 | # EPGsim
[](https://aTrotier.github.io/EPGsim.jl/stable/)
[](https://aTrotier.github.io/EPGsim.jl/dev/)
[](https://github.com/aTrotier/EPGsim.jl/actions/workflows/CI.yml?query=branch%3Amain)
[](https://codecov.io/gh/aTrotier/EPGsim.jl)
[](https://github.com/invenia/BlueStyle)
This package is inspired by [SYCOMORE](https://sycomore.readthedocs.io/) with the intention of supporting automatic differentiation (with ForwardDiff.jl).
Take a look at the [documentation](https://atrotier.github.io/EPGsim.jl/dev/).
## TO DO list :
- Delete states with amplitude < defined threshold or number of states (implemented in epgDephasing ??)
- discrete / discrete 3D
- Diffusion
- Magnetization Transfer | EPGsim | https://github.com/aTrotier/EPGsim.jl.git |
|
[
"MIT"
] | 0.1.1 | e69f401fe78acaeed205e459c4b513335283fb1c | docs | 4276 | # Automatic differentiation
This page shows how to use Automatic Differentiation in combination with an EPG
simulation.
The AD package tested is
[ForwardDiff.jl](https://github.com/JuliaDiff/ForwardDiff.jl); it may work with others
with some minor modifications to the following code.
## Load package
```@example AD
using EPGsim, ForwardDiff, CairoMakie
```
## Building signal function
```@example AD
function MESE_EPG(T2,T1,TE,ETL,delta)
T = eltype(complex(T2))
E = EPGStates([T(0.0)],[T(0.0)],[T(1.0)])
echo_vec = Vector{Complex{eltype(T2)}}()
epgRotation!(E,pi/2*delta, pi/2)
# loop over refocusing-pulses
for i = 1:ETL
epgDephasing!(E,1)
epgRelaxation!(E,TE,T1,T2)
epgRotation!(E,pi*delta,0.0)
epgDephasing!(E,1)
push!(echo_vec,E.Fp[1])
end
return abs.(echo_vec)
end;
```
!!! warning "Specific types with AD"
ForwardDiff uses a specific type: `Dual <: Real`. The target function must be written
generically enough to accept numbers of type T<:Real as input (or arrays of these
numbers).
We also need to create an `EPGStates` that is of that type. We need to force it to be complex:
```julia
T = eltype(complex(T2))
E = EPGStates([T(0.0)],[T(0.0)],[T(1.0)])
```
## Define parameters for simulation and run it
```@example AD
T2 = 60.0
T1 = 1000.0
TE = 7
ETL = 50
deltaB1 = 1
TE_vec = range(TE,TE*ETL,ETL)
amp = MESE_EPG(T2,T1,TE,ETL,deltaB1)
lines(TE_vec,abs.(amp),axis =(;title = "MESE Signal", xlabel="TE [ms]"))
```
As expected, we get a standard $T_2$-decaying exponential curve:
$$S(TE) = \exp(-TE/T_2)$$
We can analytically differentiate the equation with respect to $T_2$:
$$\frac{\partial S}{\partial T_2} = \frac{TE}{T_2^2} \exp(-TE/T_2)$$
which give the following curves:
```@example AD
df = TE_vec .* exp.(-TE_vec./T2)./(T2^2)
lines(TE_vec,abs.(df),axis =(;title = "dS/dT2", xlabel="TE [ms]"))
```
## Find the derivative with Automatic Differentiation
Because we want to obtain the derivative at multiple time points (TE), we will use `ForwardDiff.jacobian`:
```@example AD
j = ForwardDiff.jacobian(x -> MESE_EPG(x,T1,TE,ETL,deltaB1),[T2])
```
Let's compare it to the analytical equation :
```@example AD
f=Figure()
ax = Axis(f[1,1],title ="Analytic vs Automatic Differentiation")
lines!(ax,TE_vec,abs.(df),label = "Analytic Differentiation",linewidth=3)
lines!(ax,TE_vec,abs.(vec(j)),label = "Automatic Differentiation",linestyle=:dash,linewidth=3)
axislegend(ax)
f
```
Of course, in this case we don't really need AD. But if we reduce the B1+ value, the signal equation becomes complicated enough that manual differentiation becomes error-prone, which is where AD shines.
```@example AD
deltaB1 = 0.8
amp = MESE_EPG(T2,T1,TE,ETL,deltaB1)
j = ForwardDiff.jacobian(x -> MESE_EPG(x,T1,TE,ETL,deltaB1),[T2])
f = Figure()
ax = Axis(f[1,1], title = "MESE signal with B1 = $(deltaB1)",xlabel="TE [ms]")
lines!(ax,TE_vec,abs.(amp))
ax = Axis(f[1,2], title = "AD of MESE signal with B1 = $(deltaB1)",xlabel="TE [ms]")
lines!(ax,TE_vec,abs.(vec(j)))
f
```
## Differentiation along multiple variables
If we want to obtain the derivatives along both T1 and T2, we need to change the MESE_EPG function. The function should take as input a vector containing T2 and T1 (here noted `T2T1`):
```@example AD
function MESE_EPG2(T2T1,TE,ETL,delta)
T2,T1 = T2T1
T = complex(eltype(T2))
E = EPGStates([T(0.0)],[T(0.0)],[T(1.0)])
echo_vec = Vector{Complex{eltype(T2)}}()
epgRotation!(E,pi/2*delta, pi/2)
# loop over refocusing-pulses
for i = 1:ETL
epgDephasing!(E,1)
epgRelaxation!(E,TE,T1,T2)
epgRotation!(E,pi*delta,0.0)
epgDephasing!(E,1)
push!(echo_vec,E.Fp[1])
end
return abs.(echo_vec)
end
j2 = ForwardDiff.jacobian(x -> MESE_EPG2(x,TE,ETL,deltaB1),[T2,T1])
```
Here we can see that the second column, corresponding to T1, is equal to 0, which is expected for a MESE sequence, and the derivative along T2 gives the same result:
```@example AD
j2[:,1] ≈ vec(j)
```
## Reproducibility
This page was generated with the following version of Julia:
```@example AD
using InteractiveUtils
io = IOBuffer();
versioninfo(io);
split(String(take!(io)), '\n')
```
And with the following package versions
```@example AD
import Pkg; Pkg.status()
```
| EPGsim | https://github.com/aTrotier/EPGsim.jl.git |
|
[
"MIT"
] | 0.1.1 | e69f401fe78acaeed205e459c4b513335283fb1c | docs | 51 | ```@index
```
```@autodocs
Modules = [EPGsim]
```
| EPGsim | https://github.com/aTrotier/EPGsim.jl.git |
|
[
"MIT"
] | 0.1.1 | e69f401fe78acaeed205e459c4b513335283fb1c | docs | 1749 | ```@meta
CurrentModule = EPGsim
```
# EPGsim
*Extended Phase Graph simulation*
## Introduction
EPGsim is a Julia package for magnetic resonance imaging signal simulation based on the Extended Phase Graph (EPG) concept.
A principal goal of this package is compatibility with automatic differentiation using `ForwardDiff.jl`, in order to compute [Cramér-Rao lower bound](https://en.wikipedia.org/wiki/Cram%C3%A9r%E2%80%93Rao_bound) metrics, which are used to optimize sequence protocols; a minimal sketch follows below.
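As a hedged sketch (not part of the package API; it assumes a signal model `MESE_EPG(T2, T1, TE, ETL, deltaB1)` like the one in the tutorial, and unit-variance white noise), a T2 CRLB can be obtained from a `ForwardDiff` Jacobian:
```julia
using ForwardDiff, LinearAlgebra

J = ForwardDiff.jacobian(x -> MESE_EPG(x[1], 1000.0, 7.0, 50, 1.0), [60.0])
F = J' * J           # Fisher information for the single parameter T2
crlb_T2 = inv(F)[1]  # lower bound on the variance of a T2 estimate
```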
!!! note
EPGsim.jl is work in progress and in some parts not entirely optimized. The interface is susceptible to change between versions.
## EPG concept
An introduction to the physics concepts behind EPG, as well as their usage, can be found on the rad229 YouTube channel by Brian Hargreaves and Daniel Ennis:
- [Lecture-04A: Definition of the Extended Phase Graph Basis](https://www.youtube.com/watch?v=bskhnaoJVNY)
- [Lecture-04B: Sequence Operations in the Extended Phase Graph Domain](https://www.youtube.com/watch?v=kToL-9ZTzCs)
- [Lecture-04C: Examples using Extended Phase Graphs](https://www.youtube.com/watch?v=O9JH2f6c3cs)
## Installation
This package is currently not registered.
Start julia and open the package mode by entering `]`. Then enter
```julia
add https://github.com/aTrotier/EPGsim.jl
```
This will install `EPGsim` and all its dependencies. If you want to develop
`EPGsim` itself, you can check it out by calling
```julia
dev https://github.com/aTrotier/EPGsim.jl
```
More information on how to develop a package can be found in the Julia documentation.
## Tutorial
You can find an example of simulating a Multi-Echo Spin-Echo sequence and differentiating it [here](https://atrotier.github.io/EPGsim.jl/dev/AD/). | EPGsim | https://github.com/aTrotier/EPGsim.jl.git |
|
[
"MIT"
] | 0.1.1 | e69f401fe78acaeed205e459c4b513335283fb1c | docs | 1962 | EPG implementation that mimics the regular implementation from Julien Lamy in
[Sycomore](https://github.com/lamyj/sycomore/blob/master/src/sycomore/epg/Regular.cpp#L342)
# Short description
Regular implementation use a constant positive or negative gradient dephasing.
We use a vector Fp, Fn and Z to store the states.
# Initialization
EPG states are stored as a structure :
```
mutable struct EPGStates{T <: Real}
Fp::Vector{Complex{T}}
Fn::Vector{Complex{T}}
Z::Vector{Complex{T}}
end
```
which can be initialized with the default states Fp = 0, Fn = 0 and Z = 1 using:
```@example Regular
using EPGsim
E = EPGStates()
```
or by :
```@example Regular
E = EPGStates(0,0,1)
```
which converts numbers of the same type into `Vector{ComplexF64}`,
or directly by passing `Vector{Complex{T}} where {T <: Real}`, which means it can accept a `Complex{Dual}` type:
```@example Regular
T = ComplexF32
E = EPGStates(T.([0.5+0.5im,1]),T.([0.5-0.5im,0]),T.([1,0]))
```
!!! note
Julia is a one-based indexing language.
Fp[1]/Fn[1]/Z[1] store the echo and correspond to the states commonly named $F_0^+$ / $F_0^-$ / $Z_0$
!!! warning
The F+[1] and F-[1] states should be complex conjugates, and imag(Z[1]) = 0
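For example, the inner constructor enforces these constraints (a sketch of the error you get):
```@example Regular
try
    EPGStates(1 + 2im, 1 + 0im, 1 + 0im) # Fn[1] is not conj(Fp[1])
catch err
    err
end
```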
# EPG simulation
3 functions are used to simulate a sequence:
- epgDephasing!
- epgRelaxation!
- epgRotation!
They take an `EPGStates` struct as first parameter.
```@example Regular
E = EPGStates()
epgRotation!(E,deg2rad(60),0)
epgDephasing!(E,1)
epgRotation!(E,deg2rad(60),deg2rad(117))
```
!!! note
Currently, all the EPG states are stored and used for calculation.
Trailing states whose magnitude falls below a threshold (optional third argument of `epgDephasing!`, default `10e-6`) are pruned; see the sketch below.
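A sketch of that pruning threshold (the value 0.2 here is deliberately large to make the effect visible):
```@example Regular
E = EPGStates()
epgRotation!(E, deg2rad(5), 0) # small flip angle -> tiny transverse states
epgDephasing!(E, 1, 0.2)       # trailing states below the threshold are pruned
getStates(E)
```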
# Accessing states
States can be seen directly as a vector:
```@example Regular
E.Fp
```
or by elements :
```@example Regular
E.Fp[2]
```
`getStates` is also available to create a 3xN matrix where 3 corresponds to Fp,Fn,Z and N is the number of states.
```@example Regular
getStates(E)
``` | EPGsim | https://github.com/aTrotier/EPGsim.jl.git |
|
[
"MIT"
] | 1.0.0 | d7c29668b00abd0ee5f28180a10b7fe8b8cd8155 | code | 2069 | module SuffixArrays
export suffixsort, lcp
const CodeUnits = Union{UInt8,UInt16}
const IndexTypes = Union{Int8,Int16,Int32,Int64}
const IndexVector = AbstractVector{<:IndexTypes}
include("sais.jl")
function suffixsort(V::AbstractVector{U}, base::Integer=1) where {U<:CodeUnits}
0 ≤ base || throw(ArgumentError("unsupported negative indexing base: $base"))
n = length(V)
# unsigned index type to return
T = n+base-1 ≤ typemax(UInt8) ? UInt8 :
n+base-1 ≤ typemax(UInt16) ? UInt16 :
n+base-1 ≤ typemax(UInt32) ? UInt32 : UInt64
n ≤ 1 && return fill(T(base), n)
# signed index type for algorithm
S = n ≤ typemax(Int8) ? Int8 :
n ≤ typemax(Int16) ? Int16 :
n ≤ typemax(Int32) ? Int32 : Int64
if sizeof(T) == sizeof(S)
I = zeros(T, n)
sais(V, reinterpret(S, I), 0, n, Int(typemax(U))+1, false)
base โ 0 && (I .+= base)
return I
else
I = zeros(S, n)
sais(V, I, 0, n, Int(typemax(U))+1, false)
Iโฒ = Vector{T}(undef, n)
@inbounds for (i, x) in enumerate(I)
Iโฒ[i] = (x + base) % T
end
return Iโฒ
end
end
function suffixsort(s::AbstractString, base::Integer=1)
return suffixsort(codeunits(s), base)
end
"""
lcp(sa, s[, base])
Compute the longest common prefix (LCP) array from the suffix array `sa`
associated with sequence `s`.
reference:
Linear-Time Longest-Common-Prefix Computation in Suffix Arrays and Its Applications
Kasai et al.
http://web.cs.iastate.edu/~cs548/references/linear_lcp.pdf
"""
function lcp(sa, V::AbstractVector{U}, base::Integer=1) where {U<:CodeUnits}
pos = sa .+ (1-base)
n = length(pos)
lcparr = similar(pos)
rank = invperm(pos)
h = 0
for i in 1:n
if rank[i] == 1
continue
end
j = pos[rank[i]-1]
maxh = n - max(i, j)
while h <= maxh && V[i+h] == V[j+h]
h += 1
end
lcparr[rank[i]] = h
h = max(h-1, 0)
end
lcparr[1] = 0
lcparr
end
end # module
| SuffixArrays | https://github.com/JuliaCollections/SuffixArrays.jl.git |
|
[
"MIT"
] | 1.0.0 | d7c29668b00abd0ee5f28180a10b7fe8b8cd8155 | code | 10310 | #=
* sais
* Copyright (c) 2008-2010 Yuta Mori All Rights Reserved.
*
* Permission is hereby granted, free of charge, to any person
* obtaining a copy of this software and associated documentation
* files (the "Software"), to deal in the Software without
* restriction, including without limitation the rights to use,
* copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following
* conditions:
*
* The above copyright notice and this permission notice shall be
* included in all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
* OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
* HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
* WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
=#
struct IntVector <: AbstractVector{Int}
vec::Array{Int,1}
off::Int
end
Base.size(v::IntVector) = (length(v.vec)-v.off,)
Base.getindex(v::IntVector, key) = v.vec[v.off+Int(key)]
Base.setindex!(v::IntVector, value, key) = v.vec[v.off+Int(key)] = value
# TODO:
# - refactor code to simplify
# - build user interface for string operations
function getcounts(T::AbstractVector{<:Integer}, C::IntVector, n::Int, k::Int)
for i = 1:k
C[i] = 0
end
for i = 1:n
C[T[i]+1] += 1
end
end
function getbuckets(C::IntVector, B::IntVector, k::Int, isend::Bool)
s = 0
if isend != false
for i = 1:k
s += C[i]
B[i] = s
end
else
for i = 1:k
s += C[i]
B[i] = s - C[i]
end
end
end
function sais(
T::AbstractVector{<:Integer},
SA::IndexVector,
fs::Int,
n::Int,
k::Int,
isbwt::Bool,
)
pidx = 0
flags = 0
if k <= 256
C = IntVector(zeros(Int, k), 0)
if k <= fs
B = IntVector(SA, n + fs - k)
flags = 1
else
B = IntVector(zeros(Int, k), 0)
flags = 3
end
elseif k <= fs
C = IntVector(SA, n + fs - k)
if k <= fs - k
B = IntVector(SA, n + fs - 2k)
flags = 0
elseif k <= 1024
B = IntVector(zeros(Int, k), 0)
flags = 2
else
B = C
flags = 8
end
else
C = B = IntVector(zeros(Int, k), 0)
flags = 4 | 8
end
# stage 1
getcounts(T, C, n, k)
getbuckets(C, B, k, true)
for i = 1:n
SA[i] = 0
end
b = -1
i = j = n
m = 0
c0 = c1 = T[n]
i -= 1
while 1 <= i && ((c0 = T[i]) >= c1)
c1 = c0
i -= 1
end
while 1 <= i
c1 = c0
i -= 1
while 1 <= i && ((c0 = T[i]) <= c1)
c1 = c0
i -= 1
end
if 1 <= i
0 <= b && (SA[b+1] = j)
b = (B[c1+1] -= 1)
j = i - 1
m += 1
c1 = c0
i -= 1
while 1 <= i && ((c0 = T[i]) >= c1)
c1 = c0
i -= 1
end
end
end
if 1 < m
LMSsort(T, SA, C, B, n, k)
name = LMSpostproc(T, SA, n, m)
elseif m == 1
SA[b+1] = j + 1
name = 1
else
name = 0
end
# stage 2
if name < m
newfs = n + fs - 2m
if flags & (1 | 4 | 8) == 0
if (k + name) <= newfs
newfs -= k
else
flags |= 8
end
end
j = 2m + newfs
for i = (m+(n>>1)):-1:(m+1)
if SA[i] != 0
SA[j] = SA[i] - 1
j -= 1
end
end
RA = IntVector(SA, m + newfs)
sais(RA, SA, newfs, m, name, false)
i = n
j = 2m
c0 = c1 = T[n]
while 1 <= (i -= 1) && ((c0 = T[i]) >= c1)
c1 = c0
end
while 1 <= i
c1 = c0
while 1 <= (i -= 1) && ((c0 = T[i]) <= c1)
c1 = c0
end
if 1 <= i
SA[j] = i
j -= 1
c1 = c0
while 1 <= (i -= 1) && ((c0 = T[i]) >= c1)
c1 = c0
end
end
end
for i = 1:m
SA[i] = SA[m + SA[i] + 1]
end
if flags & 4 != 0
C = B = IntVector(zeros(Int, k), 0)
end
if flags & 2 != 0
B = IntVector(zeros(Int, k), 0)
end
end
# stage 3
flags & 8 != 0 && getcounts(T, C, n, k)
if 1 < m
getbuckets(C, B, k, true)
i = m - 1
j = n
p = SA[m]
c1 = T[p+1]
while true
c0 = c1
q = B[c0+1]
while q < j
j -= 1
SA[j+1] = 0
end
while true
j -= 1
SA[j+1] = p
i -= 1
i < 0 && break
p = SA[i+1]
c1 = T[p+1]
c1 != c0 && break
end
i < 0 && break
end
while 0 < j
j -= 1
SA[j+1] = 0
end
end
if isbwt == false
induceSA(T, SA, C, B, n, k)
else
computeBWT(T, SA, C, B, n, k)
end
return SA
end
function LMSsort(
T::AbstractVector{<:Integer},
SA::IndexVector,
C::IntVector,
B::IntVector,
n::Int,
k::Int,
)
C == B && getcounts(T, C, n, k)
getbuckets(C, B, k, false)
j = n - 1
c1 = T[j+1]
b = B[c1+1]
j -= 1
SA[b+1] = T[j+1] < c1 ? ~j : j
b += 1
for i = 1:n
if 0 < (j = SA[i])
if (c0 = T[j+1]) != c1
B[c1+1] = b
c1 = c0
b = B[c1+1]
end
j -= 1
SA[b+1] = T[j+1] < c1 ? ~j : j
b += 1
SA[i] = 0
elseif j < 0
SA[i] = ~j
end
end
C == B && getcounts(T, C, n, k)
getbuckets(C, B, k, true)
c1 = 0
b = B[c1+1]
for i = n:-1:1
if 0 < (j = SA[i])
c0 = T[j+1]
if c0 != c1
B[c1+1] = b
c1 = c0
b = B[c1+1]
end
j -= 1
b -= 1
SA[b+1] = T[j+1] > c1 ? ~(j + 1) : j
SA[i] = 0
end
end
end
function LMSpostproc(T::AbstractVector{<:Integer}, SA::IndexVector, n::Int, m::Int)
i = 1
while (p = SA[i]) < 0
SA[i] = ~p
i += 1
end
if i - 1 < m
j = i
i += 1
while true
if (p = SA[i]) < 0
SA[j] = ~p
j += 1
SA[i] = 0
j - 1 == m && break
end
i += 1
end
end
i = j = n
c0 = c1 = T[n]
while 1 <= (i -= 1) && ((c0 = T[i]) >= c1)
c1 = c0
end
while 1 <= i
c1 = c0
while 1 <= (i -= 1) && ((c0 = T[i]) <= c1)
c1 = c0
end
if 1 <= i
SA[m + (i >> 1) + 1] = j - i
j = i + 1
c1 = c0
while 1 <= (i -= 1) && ((c0 = T[i]) >= c1)
c1 = c0
end
end
end
name = 0
q = n
qlen = 0
for i = 1:m
p = SA[i]
plen = SA[m + (p >> 1) + 1]
diff = true
if plen == qlen && (q + plen < n)
j = 0
while j < plen && T[p+j+1] == T[q+j+1]
j += 1
end
j == plen && (diff = false)
end
if diff != false
name += 1
q = p
qlen = plen
end
SA[m + (p >> 1) + 1] = name
end
return name
end
function induceSA(
T::AbstractVector{<:Integer},
SA::IndexVector,
C::IntVector,
B::IntVector,
n::Int,
k::Int,
)
C == B && getcounts(T, C, n, k)
getbuckets(C, B, k, false)
j = n - 1
c1 = T[j+1]
b = B[c1+1]
SA[b+1] = 0 < j && T[j] < c1 ? ~j : j
b += 1
for i = 1:n
j = SA[i]
SA[i] = ~j
if 0 < j
j -= 1
if (c0 = T[j+1]) != c1
B[c1+1] = b
c1 = c0
b = B[c1+1]
end
SA[b+1] = 0 < j && T[j] < c1 ? ~j : j
b += 1
end
end
C == B && getcounts(T, C, n, k)
getbuckets(C, B, k, true)
c1 = 0
b = B[c1+1]
for i = n:-1:1
if 0 < (j = SA[i])
j -= 1
c0 = T[j+1]
if c0 != c1
B[c1+1] = b
c1 = c0
b = B[c1+1]
end
b -= 1
SA[b+1] = j == 0 || T[j] > c1 ? ~j : j
else
SA[i] = ~j
end
end
end
function computeBWT(
T::AbstractVector{<:Integer},
SA::IndexVector,
C::IntVector,
B::IntVector,
n::Int,
k::Int,
)
pidx = -1
C == B && getcounts(T, C, n, k)
getbuckets(C, B, k, false)
j = n - 1
c1 = T[j+1]
b = B[c1+1]
SA[b+1] = 0 < j && T[j] < c1 ? ~j : j
b += 1
for i = 1:n
if 0 < (j = SA[i])
j -= 1
c0 = T[j+1]
SA[i] = ~c0
if c0 != c1
B[c1+1] = b
c1 = c0
b = B[c1+1]
end
SA[b+1] = 0 < j && T[j] < c1 ? ~j : j
b += 1
elseif j != 0
SA[i] = ~j
end
end
C == B && getcounts(T, C, n, k)
getbuckets(C, B, k, true)
c1 = 0
b = B[c1+1]
for i = n:-1:1
if 0 < (j = SA[i])
j -= 1
c0 = T[j+1]
SA[i] = c0
if c0 != c1
B[c1+1] = b
c1 = c0
b = B[c1+1]
end
b -= 1
SA[b+1] = 0 < j && T[j] > c1 ? ~(T[j]) : j
elseif j != 0
SA[i] = ~j
else
pidx = i - 1
end
end
return pidx
end
| SuffixArrays | https://github.com/JuliaCollections/SuffixArrays.jl.git |
|
[
"MIT"
] | 1.0.0 | d7c29668b00abd0ee5f28180a10b7fe8b8cd8155 | code | 4605 | using Test
using SuffixArrays
function test_suffix(args)
for file in args
data = codeunits(read(file, String))
t = @elapsed suffixes = suffixsort(data, 0)
println("Sorting '$file' took: $t")
@test sufcheck(data, suffixes) == 0
end
end
function sufcheck(T, SA)
n = length(T)
n == 0 && (println("Done."); return 0)
n < 0 && (println("Invalid length $n"); return -1)
C = zeros(Int, 256)
for i = 1:n
if SA[i] < 0 || n <= SA[i]
println("Out of range $n")
println("SA[$i] = $(SA[i])")
return -2
end
end
for i = 2:n
if T[SA[i-1]+1] > T[SA[i]+1]
println("Suffixes in wrong order")
println("T[SA[$(i-1)]+1] = $(T[SA[(i-1)]+1])")
println("T[SA[$i]+1] = $(T[SA[i]+1])")
return -3
end
end
for i = 1:n
C[Int(T[i])+1] += 1
end
p = 0
for i = 1:256
t = C[i]
C[i] = p
p += t
end
q = C[Int(T[n])+1]
C[Int(T[n])+1] += 1
for i = 1:n
p = SA[i]
if 0 < p
p -= 1
c = T[p+1]
t = C[Int(c)+1]
else
p = n - 1
c = T[p+1]
t = q
end
if t < 0 || p != SA[t+1]
println("Suffixes in wrong position")
return -4
end
if t != q
C[Int(c)+1] += 1
if n <= C[Int(c)+1] || T[SA[C[Int(c)+1]+1]+1] != c
C[Int(c)+1] = -1
end
end
end
println("Done.")
return 0
end
function initwalk(dir, files)
files = walkdir("$dir/src", files)
files = walkdir("$dir/test", files)
files
end
function walkdir(dir, files)
t = readdir(dir)
for f in t
f == ".git" && continue
j = joinpath(dir, f)
if isdir(j)
append!(files, walkdir(j, files))
else
push!(files, j)
end
end
return unique(files)
end
@testset "source files" begin
files = initwalk(dirname(dirname(@__FILE__)), [])
test_suffix(files)
end
@testset "UTF-8 strings" begin
s = "ยกHello, ๐ world!"
sa = suffixsort(s)
suffixes = [String(codeunits(s)[i:end]) for i in sa]
@test issorted(suffixes)
end
## define a simple UTF-16 string type ##
struct UTF16 <: AbstractString
codeunits::Vector{UInt16}
end
UTF16(s::String) = UTF16(Base.transcode(UInt16, s))
Base.codeunits(s::UTF16) = s.codeunits
Base.ncodeunits(s::UTF16) = length(s.codeunits)
Base.isvalid(s::UTF16, i::Int) = isvalid(iterate(s, i)[1])
Base.isless(s::UTF16, t::UTF16) = s.codeunits < t.codeunits
function Base.iterate(s::UTF16, i::Int=1)
i ≤ length(s.codeunits) || return
u = s.codeunits[i]
0xD800 ≤ u ≤ 0xDBFF || return Char(u), i+1
# otherwise is a high surrogate
v = s.codeunits[i+1]
# not followed by low surrogate
0xDC00 ≤ v ≤ 0xDFFF || return Char(u), i+1
# u, v are high/low surrogate pair
Char(0x10000 + (UInt32(u & 0x03ff) << 10) | (v & 0x03ff)), i+2
end
@testset "UTF-16 strings" begin
s = UTF16("ยกHello, ๐ world!")
sa = suffixsort(s)
suffixes = [UTF16(codeunits(s)[i:end]) for i in sa]
@test issorted(suffixes)
end
function commonprefixlen(s1, s2)
h = 0
maxh = min(length(s1), length(s2))
for i in 1:maxh
if s1[i] != s2[i]
break
end
h += 1
end
h
end
@testset "Longest common prefix" begin
s = rand(0x00:0xff, 100)
sa = suffixsort(s)
suff = [s[i:end] for i in sa]
lcparr = lcp(sa, s)
# LCP the hard way
lcpref = [commonprefixlen(suff[i], suff[i+1]) for i in 1:length(sa)-1]
@test lcparr[1] == 0
@test lcparr[2:end] == lcpref
# retest with base != 1
base = 0
sa = suffixsort(s, base)
suff = [s[1-base+i:end] for i in sa]
lcparr = lcp(sa, s, base)
@test lcparr[1] == 0
@test lcparr[2:end] == lcpref
# sequence where common prefix reaches the end
s = [0x01, 0x02, 0x03, 0x04, 0x01, 0x02, 0x03]
sa = suffixsort(s)
suff = [s[i:end] for i in sa]
lcparr = lcp(sa, s)
# LCP the hard way
lcpref = [commonprefixlen(suff[i], suff[i+1]) for i in 1:length(sa)-1]
@test lcparr[1] == 0
@test lcparr[2:end] == lcpref
end
@testset "Issue #15 fix" begin
s = join(rand('a':'z', 10000)) * '\$'
sa = suffixsort(s)
suffixes = [String(codeunits(s)[i:end]) for i in sa]
@test issorted(suffixes)
utext = rand(0x0001:0x0020, 4000)
sa = suffixsort(utext)
suffixes = [utext[i:end] for i in sa]
@test issorted(suffixes)
end
| SuffixArrays | https://github.com/JuliaCollections/SuffixArrays.jl.git |
|
[
"MIT"
] | 1.0.0 | d7c29668b00abd0ee5f28180a10b7fe8b8cd8155 | docs | 1582 | # SuffixArrays
[](https://travis-ci.org/JuliaCollections/SuffixArrays.jl)
A Julia package for computing [Suffix Arrays](http://en.wikipedia.org/wiki/Suffix_array).
The underlying suffix array sorting implementation is a pure Julia port of [sais](https://sites.google.com/site/yuta256/sais), by Yuta Mori.
You can use the package by running:
```julia
julia> using SuffixArrays
julia> s = "banana"
"banana"
julia> sa = suffixsort(s)
6-element Array{UInt8,1}:
0x06
0x04
0x02
0x01
0x05
0x03
julia> [s[i:end] for i in sa]
6-element Array{String,1}:
"a"
"ana"
"anana"
"banana"
"na"
"nana"
julia> issorted(ans)
true
```
The `suffixsort` function can compute a suffix array for vectors of `UInt8` or `UInt16` values, or for strings with code units that are one of these two types.
When generating a suffix array for a string, the suffix indices are in terms of code units, not characters, which means that some indices will be into the middle of characters that span multiple code units.
For UTF-8 and UTF-16 this doesn't affect using the suffix array as a search index, since a valid substring cannot start in the middle of a character anyway.
In other words, invalid substrings occurring in the suffix array will simply not match.
By default, `suffixsort(v)` produces an array of 1-based indices, but it can be called as `suffixsort(v, 0)` in order to produce an array of 0-based indices, which may be desirable to interface with 0-based libraries (or to save a tiny bit of space).
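For example, the 0-based variant simply shifts every index of the `"banana"` example above down by one:
```julia
julia> suffixsort("banana", 0)
6-element Array{UInt8,1}:
 0x05
 0x03
 0x01
 0x00
 0x04
 0x02
```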
| SuffixArrays | https://github.com/JuliaCollections/SuffixArrays.jl.git |
|
[
"MIT"
] | 1.2.0 | 6d04b1804e66962763b920130f14ba1105313bf2 | code | 318 | using Documenter, CALCEPH
makedocs(
format = :html,
sitename = "CALCEPH.jl",
authors = "Bernard Godard and the CALCEPH.jl contributors",
pages = [
"Home" => "index.md",
"Tutorial" => "tutorial.md",
"API" => "api.md"
],
)
deploydocs(
repo = "github.com/JuliaAstro/CALCEPH.jl.git",
target = "build",
)
| CALCEPH | https://github.com/JuliaAstro/CALCEPH.jl.git |
|
[
"MIT"
] | 1.2.0 | 6d04b1804e66962763b920130f14ba1105313bf2 | code | 875 | """
CALCEPH
This module is a wrapper of CALCEPH, IMCCE planetary ephemeris access
library. It supports INPOPxx, JPL DExxx and SPICE ephemeris.
https://www.imcce.fr/inpop/calceph
"""
module CALCEPH
using CALCEPH_jll: libcalceph
struct CALCEPHException <: Exception
msg::String
end
include("ephem.jl")
export Ephem, prefetch, CALCEPHException
include("compute.jl")
export compute
include("timespan.jl")
export timespan
include("bodyId.jl")
export naifId
include("units.jl")
export unitAU, unitKM, unitDay, unitSec, unitRad, useNaifId, outputEulerAngles, outputNutationAngles
include("orient.jl")
export orient
include("rotAngMom.jl")
export rotAngMom
include("constants.jl")
export constants
include("introspection.jl")
export timeScale, positionRecords, orientationRecords
include("fivePointStencil.jl")
include("errorHandling.jl")
end # module
| CALCEPH | https://github.com/JuliaAstro/CALCEPH.jl.git |
|
[
"MIT"
] | 1.2.0 | 6d04b1804e66962763b920130f14ba1105313bf2 | code | 2714 | """
BodyId
Body identifiers.
"""
mutable struct BodyId
"names from ID"
names :: Dict{Int,Set{Symbol}}
"ID from names"
id :: Dict{Symbol,Int}
function BodyId()
new(Dict{Int,Set{Symbol}}(),Dict{Symbol,Int}())
end
end
"""
add!(bid,name,id)
Add a new mapping name->id into BodyId instance bid.
Example:
```jldoctest
using CALCEPH
bid=CALCEPH.BodyId()
CALCEPH.add!(bid,:tatooine,1000001)
CALCEPH.add!(bid,:dagobah,1000002)
CALCEPH.add!(bid,:endor,1000003)
CALCEPH.add!(bid,:deathstar,1000004)
CALCEPH.add!(bid,:endor_deathstar_system_barycenter,1000005)
CALCEPH.add!(bid,:edsb,1000005)
# output
```
"""
function add!(bid::BodyId,name::Symbol,id::Int)
if (name ∈ keys(bid.id))
if bid.id[name] != id
throw(CALCEPHException("Cannot map already defined identifier [$name] to a different ID [$id]"))
else
return
end
end
if id ∉ keys(bid.names)
bid.names[id] = Set{Symbol}()
end
push!(bid.names[id],name)
bid.id[name]=id
nothing
end
"""
loadData!(bid,filename)
Load mapping (body name,body ID) from file into BodyId instance bid.
Names from the file are converted to lower case and have spaces replaced by
underscores before being converted to symbols/interned strings.
Example file [https://github.com/bgodard/CALCEPH.jl/blob/master/data/NaifIds.txt](https://github.com/bgodard/CALCEPH.jl/blob/master/data/NaifIds.txt)
"""
function loadData!(bid::BodyId,filename::AbstractString)
pattern1 = r"^\s*([-+]{0,1}\d+)\s+\'(.*)\'.*$"
pattern2 = r"[\s-]"
f = open(filename);
cnt=0
for ln0 in eachline(f)
cnt += 1
ln1=strip(ln0)
if length(ln1)>0
if ln1[1] != '#'
m = match(pattern1,ln1)
if m === nothing
throw(CALCEPHException("parsing line $cnt in data input file: $filename:\n$ln0"))
end
id = parse(Int,m.captures[1])
name = Symbol(lowercase(replace(strip(m.captures[2]), pattern2 => "_")))
add!(bid,name,id)
end
end
end
close(f)
nothing
end
"""
naifId
NAIF identification numbers
Examples:
```jldoctest
julia> using CALCEPH
julia> naifId.id[:sun]
10
julia> naifId.id[:mars]
499
julia> naifId.names[0]
Set(Symbol[:ssb, :solar_system_barycenter])
```
"""
const naifId = BodyId()
import CALCEPH
loadData!(naifId,joinpath(dirname(pathof(CALCEPH)), "..", "data", "NaifIds.txt"))
# NAIF IDs for Hyperbolic Asteroid 'Oumuamua (1I/2017 U1)
add!(naifId,:oumuamua,3788040)
# NAIF IDs for CALCEPH time ephemeris
add!(naifId,:timecenter,1000000000)
add!(naifId,:ttmtdb ,1000000001)
add!(naifId,:tcgmtcb ,1000000002)
| CALCEPH | https://github.com/JuliaAstro/CALCEPH.jl.git |
|
[
"MIT"
] | 1.2.0 | 6d04b1804e66962763b920130f14ba1105313bf2 | code | 11798 | """
compute(eph,jd0,time,target,center)
Compute position and velocity of target with respect to center at epoch
jd0+time.
To get the best precision for the interpolation, the time is split in two
floating-point numbers. The argument jd0 should be an integer and time should
be a fraction of the day. But you may call this function with time=0 and jd0,
the desired time, if you don't care about precision.
This method does not support the NAIF body identification scheme.
Output units are:
* AU and AU/day for position and velocity
* rad and rad/day for librations
* second and unitless for time ephemeris and time ephemeris rate
# Arguments
- `eph`: ephemeris
- `jd0::Float64`: jd0+time must be equal to the Julian Day for the time coordinate corresponding to the ephemeris (usually TDB or TCB)
- `time::Float64`: jd0+time must be equal to the Julian Day for the time coordinate corresponding to the ephemeris (usually TDB or TCB)
- `target::Integer`: The body or reference point whose coordinates are required.
- `center::Integer`: The origin of the coordinate system.
The possible values for target and center are :
* 1 : Mercury Barycenter
* 2 : Venus Barycenter
* 3 : Earth
* 4 : Mars Barycenter
* 5 : Jupiter Barycenter
* 6 : Saturn Barycenter
* 7 : Uranus Barycenter
* 8 : Neptune Barycenter
* 9 : Pluto Barycenter
* 10 : Moon
* 11 : Sun
* 12 : Solar System barycenter
* 13 : Earth-moon barycenter
* 14 : Nutation angles
* 15 : Librations
* 16 : TT-TDB
* 17 : TCG-TCB
* asteroid number + 2000000 : asteroid
"""
function compute(eph::Ephem,jd0::Float64,time::Float64,
target::Integer,center::Integer)
@_checkPointer eph.data "Ephemeris is not properly initialized!"
result = Array{Float64,1}(undef,6)
stat = unsafe_compute!(result,eph,jd0,time,target,center)
@_checkStatus stat "Unable to compute ephemeris"
return result
end
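# Illustrative usage (sketch; "planets.dat" is a placeholder for any supported
# ephemeris file, e.g. an INPOP or JPL DE file):
#   eph = Ephem("planets.dat")
#   pv = compute(eph, 2451545.0, 0.0, 3, 12)  # Earth w.r.t. solar system barycenter
#   # pv[1:3]: position in AU, pv[4:6]: velocity in AU/day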
"""
unsafe_compute!(eph,jd0,time,target,center)
In place version of the compute function. Does not perform any checks!
Compute position and velocity of target with respect to center at epoch
jd0+time.
To get the best precision for the interpolation, the time is split in two
floating-point numbers. The argument jd0 should be an integer and time should
be a fraction of the day. But you may call this function with time=0 and jd0,
the desired time, if you don't care about precision.
This method does not support the NAIF body identification scheme.
Output units are:
* AU and AU/day for position and velocity
* rad and rad/day for librations
* second and unitless for time ephemeris and time ephemeris rate
# Arguments
- `result`: container for the result. It is not checked whether it is sufficiently large!
- `eph`: ephemeris
- `jd0::Float64`: jd0+time must be equal to the Julian Day for the time coordinate corresponding to the ephemeris (usually TDB or TCB)
- `time::Float64`: jd0+time must be equal to the Julian Day for the time coordinate corresponding to the ephemeris (usually TDB or TCB)
- `target::Integer`: The body or reference point whose coordinates are required.
- `center::Integer`: The origin of the coordinate system.
# Return:
- status integer from CALCEPH: 0 if an error occurred
The possible values for target and center are :
* 1 : Mercury Barycenter
* 2 : Venus Barycenter
* 3 : Earth
* 4 : Mars Barycenter
* 5 : Jupiter Barycenter
* 6 : Saturn Barycenter
* 7 : Uranus Barycenter
* 8 : Neptune Barycenter
* 9 : Pluto Barycenter
* 10 : Moon
* 11 : Sun
* 12 : Solar System barycenter
* 13 : Earth-moon barycenter
* 14 : Nutation angles
* 15 : Librations
* 16 : TT-TDB
* 17 : TCG-TCB
* asteroid number + 2000000 : asteroid
"""
function unsafe_compute!(result,eph::Ephem,jd0::Float64,time::Float64,
target::Integer,center::Integer)
stat = ccall((:calceph_compute, libcalceph), Cint,
(Ptr{Cvoid},Cdouble,Cdouble,Cint,Cint,Ref{Cdouble}),
eph.data,jd0,time,target,center,result)
return stat
end
"""
compute(eph,jd0,time,target,center,unit)
Compute position and velocity of target with respect to center
at epoch jd0+time.
To get the best precision for the interpolation, the time is split in two
floating-point numbers. The argument jd0 should be an integer and time should
be a fraction of the day. But you may call this function with time=0 and jd0,
the desired time, if you don't care about precision.
# Arguments
- `eph`: ephemeris
- `jd0::Float64`: jd0+time must be equal to the Julian Day for the time coordinate corresponding to the ephemeris (usually TDB or TCB)
- `time::Float64`: jd0+time must be equal to the Julian Day for the time coordinate corresponding to the ephemeris (usually TDB or TCB)
- `target::Integer`: The body or reference point whose coordinates are required. The numbering system depends on the parameter unit.
- `center::Integer`: The origin of the coordinate system. The numbering system depends on the parameter unit.
- `unit::Integer` : The units of the result. This integer is a sum of some unit constants (unit*) and/or the constant useNaifId. If the unit contains useNaifId, the NAIF identification numbering system is used for the target and the center. If the unit does not contain useNaifId, the old number system is used for the target and the center.
"""
function compute(eph::Ephem,jd0::Float64,time::Float64,
target::Integer,center::Integer,unit::Integer)
@_checkPointer eph.data "Ephemeris is not properly initialized!"
result = Array{Float64,1}(undef,6)
stat = unsafe_compute!(result,eph,jd0,time,target,center,unit)
@_checkStatus stat "Unable to compute ephemeris"
return result
end
"""
unsafe_compute!(result,eph,jd0,time,target,center,unit)
In place version of the compute function. Does not perform any checks!
Compute position and velocity of target with respect to center
at epoch jd0+time.
To get the best precision for the interpolation, the time is split in two
floating-point numbers. The argument jd0 should be an integer and time should
be a fraction of the day. But you may call this function with time=0 and jd0,
the desired time, if you don't care about precision.
# Arguments
- `result`: container for the result. It is not checked whether it is sufficiently large!
- `eph`: ephemeris
- `jd0::Float64`: jd0+time must be equal to the Julian Day for the time coordinate corresponding to the ephemeris (usually TDB or TCB)
- `time::Float64`: jd0+time must be equal to the Julian Day for the time coordinate corresponding to the ephemeris (usually TDB or TCB)
- `target::Integer`: The body or reference point whose coordinates are required. The numbering system depends on the parameter unit.
- `center::Integer`: The origin of the coordinate system. The numbering system depends on the parameter unit.
- `unit::Integer` : The units of the result. This integer is a sum of some unit constants (unit*) and/or the constant useNaifId. If the unit contains useNaifId, the NAIF identification numbering system is used for the target and the center. If the unit does not contain useNaifId, the old number system is used for the target and the center.
# Return:
- status integer from CALCEPH: 0 if an error occurred
"""
function unsafe_compute!(result,eph::Ephem,jd0::Float64,time::Float64,
target::Integer,center::Integer,unit::Integer)
stat = ccall((:calceph_compute_unit, libcalceph), Cint,
(Ptr{Cvoid},Cdouble,Cdouble,Cint,Cint,Cint,Ref{Cdouble}),
eph.data,jd0,time,target,center,unit,result)
return stat
end
"""
compute(eph,jd0,time,target,center,unit,order)
Compute position and derivatives up to order of target with respect to center
at epoch jd0+time.
To get the best precision for the interpolation, the time is split in two
floating-point numbers. The argument jd0 should be an integer and time should
be a fraction of the day. But you may call this function with time=0 and jd0,
the desired time, if you don't care about precision.
# Arguments
- `eph`: ephemeris
- `jd0::Float64`: jd0+time must be equal to the Julian Day for the time coordinate corresponding to the ephemeris (usually TDB or TCB)
- `time::Float64`: jd0+time must be equal to the Julian Day for the time coordinate corresponding to the ephemeris (usually TDB or TCB)
- `target::Integer`: The body or reference point whose coordinates are required. The numbering system depends on the parameter unit.
- `center::Integer`: The origin of the coordinate system. The numbering system depends on the parameter unit.
- `unit::Integer` : The units of the result. This integer is a sum of some unit constants (unit*) and/or the constant useNaifId. If the unit contains useNaifId, the NAIF identification numbering system is used for the target and the center. If the unit does not contain useNaifId, the old number system is used for the target and the center.
- `order::Integer` : The order of derivatives
* 0: only the position is computed.
* 1: only the position and velocity are computed.
* 2: only the position, velocity and acceleration are computed.
* 3: the position, velocity and acceleration and jerk are computed.
"""
function compute(eph::Ephem,jd0::Float64,time::Float64,
target::Integer,center::Integer,unit::Integer,order::Integer)
@_checkPointer eph.data "Ephemeris is not properly initialized!"
@_checkOrder order
result = Array{Float64,1}(undef,3+3order)
stat = unsafe_compute!(result,eph,jd0,time,target,center,unit,order)
@_checkStatus stat "Unable to compute ephemeris"
return result
end
"""
unsafe_compute!(result,eph,jd0,time,target,center,unit,order)
In place version of the compute function. Does not perform any checks!
Compute position and derivatives up to order of target with respect to center
at epoch jd0+time.
To get the best precision for the interpolation, the time is split in two
floating-point numbers. The argument jd0 should be an integer and time should
be a fraction of the day. But you may call this function with time=0 and jd0,
the desired time, if you don't care about precision.
# Arguments
- `result`: container for the result. It is not checked whether it is sufficiently large!
- `eph`: ephemeris
- `jd0::Float64`: jd0+time must be equal to the Julian Day for the time coordinate corresponding to the ephemeris (usually TDB or TCB)
- `time::Float64`: jd0+time must be equal to the Julian Day for the time coordinate corresponding to the ephemeris (usually TDB or TCB)
- `target::Integer`: The body or reference point whose coordinates are required. The numbering system depends on the parameter unit.
- `center::Integer`: The origin of the coordinate system. The numbering system depends on the parameter unit.
- `unit::Integer` : The units of the result. This integer is a sum of some unit constants (unit*) and/or the constant useNaifId. If the unit contains useNaifId, the NAIF identification numbering system is used for the target and the center. If the unit does not contain useNaifId, the old number system is used for the target and the center.
- `order::Integer` : The order of derivatives
* 0: only the position is computed.
* 1: only the position and velocity are computed.
* 2: only the position, velocity and acceleration are computed.
* 3: the position, velocity and acceleration and jerk are computed.
# Return:
- status integer from CALCEPH: 0 if an error occurred
"""
function unsafe_compute!(result,eph::Ephem,jd0::Float64,time::Float64,
target::Integer,center::Integer,unit::Integer,order::Integer)
stat = ccall((:calceph_compute_order, libcalceph), Cint,
(Ptr{Cvoid},Cdouble,Cdouble,Cint,Cint,Cint,Cint,Ref{Cdouble}),
eph.data,jd0,time,target,center,unit,order,result)
return stat
end
| CALCEPH | https://github.com/JuliaAstro/CALCEPH.jl.git |
|
[
"MIT"
] | 1.2.0 | 6d04b1804e66962763b920130f14ba1105313bf2 | code | 2092 |
const _maxConstName = 33
const _maxConstValue = 1024
"""
constants(eph)
Retrieve the constants stored in the ephemeris associated to handle eph as a dictionary
"""
function constants(eph::Ephem)
res = Dict{Symbol,Any}()
@_checkPointer eph.data "Ephemeris is not properly initialized!"
NC::Int = ccall((:calceph_getconstantcount , libcalceph), Cint,
(Ptr{Cvoid},),eph.data)
value = Ref{Cdouble}(0.0)
name = Vector{UInt8}(undef,_maxConstName)
for i=1:NC
numberOfValues = ccall((:calceph_getconstantindex , libcalceph), Cint,
(Ptr{Cvoid},Cint,Ptr{UInt8},Ref{Cdouble}),
eph.data, i ,name ,value)
if (numberOfValues==1)
res[Symbol(strip(unsafe_string(pointer(name))))] = value[]
elseif (numberOfValues>1)
values = Array{Float64,1}(undef,numberOfValues)
stat = ccall((:calceph_getconstantvd , libcalceph), Cint,
(Ptr{Cvoid},Ptr{UInt8},Ptr{Cdouble}, Cint),
eph.data, name ,values, numberOfValues)
if (stat>0)
res[Symbol(strip(unsafe_string(pointer(name))))] = values
end
else
numberOfValues = ccall((:calceph_getconstantvs , libcalceph), Cint,
(Ptr{Cvoid},Ptr{UInt8},Ptr{Ptr{Char}}, Cint),
eph.data, name ,C_NULL, 0)
if (numberOfValues>0)
storage = Array{UInt8}(undef,_maxConstValue,numberOfValues)
stat = ccall((:calceph_getconstantvs , libcalceph), Cint,
(Ptr{Cvoid},Ptr{UInt8},Ptr{UInt8}, Cint),
eph.data, name ,storage, numberOfValues)
if (stat>0)
values = [ strip(unsafe_string(pointer(storage,i))) for i in 1:_maxConstValue:length(storage) ]
if (numberOfValues==1)
res[Symbol(strip(unsafe_string(pointer(name))))] = values[1]
else
res[Symbol(strip(unsafe_string(pointer(name))))] = values
end
end
end
end
end
return res
end
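# Illustrative usage (sketch; assumes the loaded ephemeris provides an AU constant,
# "planets.dat" is a placeholder file name):
#   eph = Ephem("planets.dat")
#   con = constants(eph)
#   au_km = con[:AU]   # astronomical unit in kilometers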
| CALCEPH | https://github.com/JuliaAstro/CALCEPH.jl.git |
|
[
"MIT"
] | 1.2.0 | 6d04b1804e66962763b920130f14ba1105313bf2 | code | 2033 |
macro _checkStatus(stat,msg)
return quote
if ($(esc(stat)) == 0)
throw(CALCEPHException($(esc(msg))))
end
end
end
macro _checkPointer(ptr,msg)
return quote
if ($(esc(ptr)) == C_NULL)
throw(CALCEPHException($(esc(msg))))
end
end
end
macro _checkOrder(order)
return quote
local or = $(esc(order))
if (or<0) || (or>3)
throw(CALCEPHException("Order must be between 0 and 3."))
end
end
end
"""
Ephem
Ephemeris descriptor. Create with:
eph = Ephem(filename)
eph = Ephem([filename1,filename2...])
The ephemeris descriptor will be used to access the ephemeris and related
data stored in the specified files.
Because Julia's garbage collector is lazy, you may want to free the memory managed by eph
before you get rid of the reference to eph, with:
finalize(eph)
or afterwards by forcing the garbage collector to run:
GC.gc()
"""
mutable struct Ephem
data :: Ptr{Cvoid}
function Ephem(files::Vector{<:AbstractString})
ptr = ccall((:calceph_open_array, libcalceph), Ptr{Cvoid},
(Int, Ptr{Ptr{UInt8}}), length(files), files)
@_checkPointer ptr "Unable to open ephemeris file(s)!"
obj = new(ptr)
finalizer(_ephemDestructor,obj) # register object destructor
return obj
end
end
# to be called by gc when cleaning up
# not in the exposed interface but can be called with finalize(e)
function _ephemDestructor(eph::Ephem)
if (eph.data == C_NULL)
return
end
ccall((:calceph_close, libcalceph), Cvoid, (Ptr{Cvoid},), eph.data)
eph.data = C_NULL
return
end
Ephem(file::AbstractString) = Ephem([file])
"""
prefetch(eph)
This function prefetches to the main memory all files associated to the ephemeris descriptor eph.
"""
function prefetch(eph::Ephem)
@_checkPointer eph.data "Ephemeris is not properly initialized!"
stat = ccall((:calceph_prefetch, libcalceph), Int, (Ptr{Cvoid},), eph.data)
@_checkStatus stat "Unable to prefetch ephemeris!"
return
end
| CALCEPH | https://github.com/JuliaAstro/CALCEPH.jl.git |
|
[
"MIT"
] | 1.2.0 | 6d04b1804e66962763b920130f14ba1105313bf2 | code | 1363 |
"""
disableCustomHandler()
Disables the user custom error handler.
"""
function disableCustomHandler()
ccall((:calceph_seterrorhandler, libcalceph), Cvoid,
(Cint, Ptr{Cvoid}), 1, C_NULL)
end
mutable struct UserHandlerContainer
f::Function
end
const userHandlerContainerInstance = UserHandlerContainer(s::String->Nothing)
function userHandlerWrapper(msg::Cstring)::Cvoid
global userHandlerContainerInstance
s = unsafe_string(msg)
userHandlerContainerInstance.f(s)
return
end
# see https://discourse.julialang.org/t/cfunction-error-handler/20678
#userHandlerCWrapper = @cfunction(userHandlerWrapper, Cvoid, (Cstring,))
userHandlerCWrapper = Nothing
"""
setCustomHandler(f::Function)
Sets the user custom error handler.
# Arguments
- `f`: function taking a single argument of type String which will contain the CALCEPH error message. f should return Nothing.
Use setCustomHandler(s->Nothing) to disable printing of CALCEPH error messages to the console.
"""
function setCustomHandler(f::Function)
global userHandlerContainerInstance
global userHandlerCWrapper
userHandlerCWrapper = @cfunction(userHandlerWrapper, Cvoid, (Cstring,))
userHandlerContainerInstance.f = f
ccall((:calceph_seterrorhandler, libcalceph), Cvoid,
(Cint, Ptr{Cvoid}), 3, userHandlerCWrapper)
end
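# Illustrative usage (sketch): route CALCEPH error messages through println
# instead of the default console output.
#   setCustomHandler(msg -> (println("CALCEPH error: ", msg); nothing))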
| CALCEPH | https://github.com/JuliaAstro/CALCEPH.jl.git |
|
[
"MIT"
] | 1.2.0 | 6d04b1804e66962763b920130f14ba1105313bf2 | code | 1100 |
"""
fivePointStencil(f,x,n::Integer,h)
Evaluates function f and its derivatives up to order n ∈ [0,4] at x:
``f(x),f'(x),...,f^{(n)}(x)``
The result is an array of length n+1.
Derivatives are numerically computed using the 5-point stencil method
with hโ 0 being the grid spacing:
[https://en.wikipedia.org/wiki/Five-point_stencil](https://en.wikipedia.org/wiki/Five-point_stencil)
"""
function fivePointStencil(f,x,n::Integer,h)
if ((n<0) || (n>4))
error("In fivePointStencil: Invalid order $n")
end
if (h==0.0)
error("In fivePointStencil: Invalid grid spacing $h")
end
fm2 = f(x-2h)
fm1 = f(x-h)
fn0 = f(x)
fp1 = f(x+h)
fp2 = f(x+2h)
res = Vector{typeof(fn0)}(undef,n+1)
res[1] = fn0
(n==0) && return res
res[2] = (-fp2+8fp1-8fm1+fm2)/(12*h)
(n==1) && return res
h2 = h * h
res[3] = (-fp2+16fp1-30fn0+16fm1-fm2)/(12*h2)
(n==2) && return res
h3 = h2 * h
res[4] = (fp2-2fp1+2fm1-fm2)/(2*h3)
(n==3) && return res
h4 = h3 * h
res[5] = (fp2-4fp1+6fn0-4fm1+fm2)/h4
return res
end
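# Illustrative check (sketch): the derivatives of sin at x=0 are
# sin(0)=0, cos(0)=1, -sin(0)=0, -cos(0)=-1, sin(0)=0, so
#   CALCEPH.fivePointStencil(sin, 0.0, 4, 1e-3)
# should return approximately [0.0, 1.0, 0.0, -1.0, 0.0].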
| CALCEPH | https://github.com/JuliaAstro/CALCEPH.jl.git |
|
[
"MIT"
] | 1.2.0 | 6d04b1804e66962763b920130f14ba1105313bf2 | code | 2859 | """
timeScale(eph)
Retrieve the timescale associated with ephemeris handler eph.
Returns 1 for TDB and 2 for TCB.
"""
function timeScale(eph::Ephem)
@_checkPointer eph.data "Ephemeris is not properly initialized!"
return ccall((:calceph_gettimescale , libcalceph), Cint,
(Ptr{Cvoid},),eph.data)
end
"""
PositionRecord
stores position record metadata.
"""
struct PositionRecord
" Naif Id of target "
target::Int
" Naif Id of center "
center::Int
" start of epoch span "
startEpoch::Float64
"end of epoch span"
stopEpoch::Float64
"frame : 1 for ICRF"
frame::Int
end
"""
positionRecords(eph)
Retrieve position records metadata in the ephemeris associated to
handler eph.
"""
function positionRecords(eph::Ephem)
res = Array{PositionRecord,1}()
@_checkPointer eph.data "Ephemeris is not properly initialized!"
NR::Int = ccall((:calceph_getpositionrecordcount , libcalceph), Cint,
(Ptr{Cvoid},),eph.data)
(NR == 0) && throw(CALCEPHException("Could not find any position records!"))
target = Ref{Cint}(0)
center = Ref{Cint}(0)
startEpoch = Ref{Cdouble}(0.0)
stopEpoch = Ref{Cdouble}(0.0)
frame = Ref{Cint}(0)
for i=1:NR
stat = ccall((:calceph_getpositionrecordindex , libcalceph), Cint,
(Ptr{Cvoid},Cint,Ref{Cint},Ref{Cint},Ref{Cdouble},Ref{Cdouble},Ref{Cint}),
eph.data, i ,target,center,startEpoch,stopEpoch,frame)
if (stat!=0)
push!(res,PositionRecord(target[],center[],startEpoch[],stopEpoch[],frame[]))
end
end
return res
end
"""
OrientationRecord
stores orientation record metadata.
"""
struct OrientationRecord
" Naif Id of target "
target::Int
" start of epoch span "
startEpoch::Float64
"end of epoch span"
stopEpoch::Float64
"frame : 1 for ICRF"
frame::Int
end
"""
orientationRecords(eph)
Retrieve orientation records metadata in the ephemeris associated to
handler eph.
"""
function orientationRecords(eph::Ephem)
res = Array{OrientationRecord,1}()
@_checkPointer eph.data "Ephemeris is not properly initialized!"
NR::Int = ccall((:calceph_getorientrecordcount , libcalceph), Cint,
(Ptr{Cvoid},),eph.data)
(NR == 0) && throw(CALCEPHException("Could not find any orientation records!"))
target = Ref{Cint}(0)
startEpoch = Ref{Cdouble}(0.0)
stopEpoch = Ref{Cdouble}(0.0)
frame = Ref{Cint}(0)
for i=1:NR
stat = ccall((:calceph_getorientrecordindex , libcalceph), Cint,
(Ptr{Cvoid},Cint,Ref{Cint},Ref{Cdouble},Ref{Cdouble},Ref{Cint}),
eph.data, i ,target,startEpoch,stopEpoch,frame)
if (stat!=0)
push!(res,OrientationRecord(target[],startEpoch[],stopEpoch[],frame[]))
end
end
return res
end
| CALCEPH | https://github.com/JuliaAstro/CALCEPH.jl.git |
|
[
"MIT"
] | 1.2.0 | 6d04b1804e66962763b920130f14ba1105313bf2 | code | 7893 | """
orient(eph,jd0,time,target,unit)
Compute Euler angles and first derivative for the orientation of target at
epoch jd0+time.
To get the best precision for the interpolation, the time is split in two
floating-point numbers. The argument jd0 should be an integer and time should
be a fraction of the day. But you may call this function with time=0 and jd0,
the desired time, if you don't care about precision.
# Arguments
- `eph`: ephemeris
- `jd0::Float64`: jd0+time must be equal to the Julian Day for the time coordinate corresponding to the ephemeris (usually TDB or TCB)
- `time::Float64`: jd0+time must be equal to the Julian Day for the time coordinate corresponding to the ephemeris (usually TDB or TCB)
- `target::Integer`: The body whose orientation is required. The numbering system depends on the parameter unit.
- `unit::Integer` : The units of the result. This integer is a sum of some unit constants (unit*) and/or the constant useNaifId. If the unit contains useNaifId, the NAIF identification numbering system is used for the target and the center. If the unit does not contain useNaifId, the old number system is used for the target and the center (see the list in the documentation of function compute). The angles are expressed in radians if unit contains unitRad. If the unit contains outputNutationAngles, the nutation angles are computed rather than the Euler angles.
"""
function orient(eph::Ephem,jd0::Float64,time::Float64,
target::Integer,unit::Integer)
@_checkPointer eph.data "Ephemeris is not properly initialized!"
result = Array{Float64,1}(undef,6)
stat = unsafe_orient!(result,eph,jd0,time,target,unit)
@_checkStatus stat "Unable to compute ephemeris"
return result
end
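# Illustrative usage (sketch; requires an ephemeris providing orientation data,
# e.g. lunar librations from an INPOP file):
#   angles = orient(eph, 2451545.0, 0.0, 301, useNaifId + unitRad + unitDay)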
"""
unsafe_orient!(result,eph,jd0,time,target,unit)
In place version of the orient function. Does not perform any checks!
Compute Euler angles and first derivative for the orientation of target
at epoch jd0+time.
To get the best precision for the interpolation, the time is split in two
floating-point numbers. The argument jd0 should be an integer and time should
be a fraction of the day. But you may call this function with time=0 and jd0,
the desired time, if you don't care about precision.
# Arguments
- `result`: container for the result. It is not checked whether it is sufficiently large!
- `eph`: ephemeris
- `jd0::Float64`: jd0+time must be equal to the Julian Day for the time coordinate corresponding to the ephemeris (usually TDB or TCB)
- `time::Float64`: jd0+time must be equal to the Julian Day for the time coordinate corresponding to the ephemeris (usually TDB or TCB)
- `target::Integer`: The body whose orientation is required. The numbering system depends on the parameter unit.
- `unit::Integer` : The units of the result. This integer is a sum of some unit constants (unit*) and/or the constant useNaifId. If the unit contains useNaifId, the NAIF identification numbering system is used for the target and the center. If the unit does not contain useNaifId, the old number system is used for the target and the center (see the list in the documentation of function compute). If the unit contains outputNutationAngles, the nutation angles are computed rather than the Euler angles.
# Return:
- status integer from CALCEPH: 0 if an error occurred
"""
function unsafe_orient!(result,eph::Ephem,jd0::Float64,time::Float64,
target::Integer,unit::Integer)
stat = ccall((:calceph_orient_unit, libcalceph), Cint,
(Ptr{Cvoid},Cdouble,Cdouble,Cint,Cint,Ref{Cdouble}),
eph.data,jd0,time,target,unit,result)
return stat
end
"""
orient(eph,jd0,time,target,unit,order)
Compute Euler angles and derivatives up to order for the orientation of target
at epoch jd0+time.
To get the best precision for the interpolation, the time is split in two
floating-point numbers. The argument jd0 should be an integer and time should
be a fraction of the day. But you may call this function with time=0 and jd0,
the desired time, if you don't care about precision.
# Arguments
- `eph`: ephemeris
- `jd0::Float64`: jd0+time must be equal to the Julian Day for the time coordinate corresponding to the ephemeris (usually TDB or TCB)
- `time::Float64`: jd0+time must be equal to the Julian Day for the time coordinate corresponding to the ephemeris (usually TDB or TCB)
- `target::Integer`: The body whose orientation is required. The numbering system depends on the parameter unit.
- `unit::Integer` : The units of the result. This integer is a sum of some unit constants (unit*) and/or the constant useNaifId. If the unit contains useNaifId, the NAIF identification numbering system is used for the target and the center. If the unit does not contain useNaifId, the old number system is used for the target and the center (see the list in the documentation of function compute). If the unit contains outputNutationAngles, the nutation angles are computed rather than the Euler angles.
- `order::Integer` : The order of derivatives
* 0: only the angles are computed.
* 1: only the angles and 1st derivatives are computed.
* 2: only the angles, the 1st derivatives and 2nd derivatives are computed.
* 3: the angles, the 1st derivatives, 2nd derivatives and 3rd derivatives are computed.
"""
function orient(eph::Ephem,jd0::Float64,time::Float64,
target::Integer,unit::Integer,order::Integer)
@_checkPointer eph.data "Ephemeris is not properly initialized!"
@_checkOrder order
result = Array{Float64,1}(undef,3+3order)
stat = unsafe_orient!(result,eph,jd0,time,target,unit,order)
@_checkStatus stat "Unable to compute ephemeris"
return result
end
"""
unsafe_orient!(result,eph,jd0,time,target,unit,order)
In place version of the orient function. Does not perform any checks!
Compute Euler angles and derivatives up to order for the orientation of target
at epoch jd0+time.
To get the best precision for the interpolation, the time is split in two
floating-point numbers. The argument jd0 should be an integer and time should
be a fraction of the day. But you may call this function with time=0 and jd0,
the desired time, if you don't care about precision.
# Arguments
- `result`: container for the result. It is not checked whether it is sufficiently large!
- `eph`: ephemeris
- `jd0::Float64`: jd0+time must be equal to the Julian Day for the time coordinate corresponding to the ephemeris (usually TDB or TCB)
- `time::Float64`: jd0+time must be equal to the Julian Day for the time coordinate corresponding to the ephemeris (usually TDB or TCB)
- `target::Integer`: The body whose orientation is required. The numbering system depends on the parameter unit.
- `unit::Integer` : The units of the result. This integer is a sum of some unit constants (unit*) and/or the constant useNaifId. If the unit contains useNaifId, the NAIF identification numbering system is used for the target and the center. If the unit does not contain useNaifId, the old number system is used for the target and the center (see the list in the documentation of function compute). If the unit contains outputNutationAngles, the nutation angles are computed rather than the Euler angles.
- `order::Integer` : The order of derivatives
* 0: only the angles are computed.
* 1: only the angles and 1st derivatives are computed.
* 2: only the angles, the 1st derivatives and 2nd derivatives are computed.
* 3: the angles, the 1st derivatives, 2nd derivatives and 3rd derivatives are computed.
# Return:
- status integer from CALCEPH: 0 if an error occurred
"""
function unsafe_orient!(result,eph::Ephem,jd0::Float64,time::Float64,
target::Integer,unit::Integer,order::Integer)
stat = ccall((:calceph_orient_order, libcalceph), Cint,
(Ptr{Cvoid},Cdouble,Cdouble,Cint,Cint,Cint,Ref{Cdouble}),
eph.data,jd0,time,target,unit,order,result)
return stat
end
| CALCEPH | https://github.com/JuliaAstro/CALCEPH.jl.git |
|
[
"MIT"
] | 1.2.0 | 6d04b1804e66962763b920130f14ba1105313bf2 | code | 7593 | """
rotAngMom(eph,jd0,time,target,unit)
Compute angular momentum due to rotation and first derivative of target at
epoch jd0+time.
To get the best precision for the interpolation, the time is split in two
floating-point numbers. The argument jd0 should be an integer and time should
be a fraction of the day. But you may call this function with time=0 and jd0,
the desired time, if you don't care about precision.
# Arguments
- `eph`: ephemeris
- `jd0::Float64`: jd0+time must be equal to the Julian Day for the time coordinate corresponding to the ephemeris (usually TDB or TCB)
- `time::Float64`: jd0+time must be equal to the Julian Day for the time coordinate corresponding to the ephemeris (usually TDB or TCB)
- `target::Integer`: The body whose angular momentum is required. The numbering system depends on the parameter unit.
- `unit::Integer` : The units of the result. This integer is a sum of some unit constants (unit*) and/or the constant useNaifId. If the unit contains useNaifId, the NAIF identification numbering system is used for the target and the center. If the unit does not contain useNaifId, the old number system is used for the target and the center (see the list in the documentation of function compute). The angles are expressed in radians if unit contains unitRad.
"""
function rotAngMom(eph::Ephem,jd0::Float64,time::Float64,
target::Integer,unit::Integer)
@_checkPointer eph.data "Ephemeris is not properly initialized!"
result = Array{Float64,1}(undef,6)
stat = unsafe_rotAngMom!(result,eph,jd0,time,target,unit)
@_checkStatus stat "Unable to compute ephemeris"
return result
end
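# Illustrative usage (sketch; requires an ephemeris providing rotational
# angular momentum data):
#   am = rotAngMom(eph, 2451545.0, 0.0, 399, useNaifId + unitSec)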
"""
unsafe_rotAngMom!(result,eph,jd0,time,target,unit)
In place version of the rotAngMom function. Does not perform any checks!
Compute angular momentum due to rotation and first derivative of target at
epoch jd0+time.
To get the best precision for the interpolation, the time is split in two
floating-point numbers. The argument jd0 should be an integer and time should
be a fraction of the day. But you may call this function with time=0 and jd0,
the desired time, if you don't care about precision.
# Arguments
- `result`: container for the result. It is not checked whether it is sufficiently large!
- `eph`: ephemeris
- `jd0::Float64`: jd0+time must be equal to the Julian Day for the time coordinate corresponding to the ephemeris (usually TDB or TCB)
- `time::Float64`: jd0+time must be equal to the Julian Day for the time coordinate corresponding to the ephemeris (usually TDB or TCB)
- `target::Integer`: The body whose angular momentum is required. The numbering system depends on the parameter unit.
- `unit::Integer` : The units of the result. This integer is a sum of some unit constants (unit*) and/or the constant useNaifId. If the unit contains useNaifId, the NAIF identification numbering system is used for the target and the center. If the unit does not contain useNaifId, the old number system is used for the target and the center (see the list in the documentation of function compute). The angles are expressed in radians if unit contains unitRad.
# Return:
- status integer from CALCEPH: 0 if an error occurred
"""
function unsafe_rotAngMom!(result,eph::Ephem,jd0::Float64,time::Float64,
target::Integer,unit::Integer)
stat = ccall((:calceph_rotangmom_unit, libcalceph), Cint,
(Ptr{Cvoid},Cdouble,Cdouble,Cint,Cint,Ref{Cdouble}),
eph.data,jd0,time,target,unit,result)
return stat
end
"""
rotAngMom(eph,jd0,time,target,unit,order)
Compute angular momentum due to rotation and derivatives up to order of target
at epoch jd0+time.
To get the best precision for the interpolation, the time is split in two
floating-point numbers. The argument jd0 should be an integer and time should
be a fraction of the day. But you may call this function with time=0 and jd0,
the desired time, if you don't care about precision.
# Arguments
- `eph`: ephemeris
- `jd0::Float64`: jd0+time must be equal to the Julian Day for the time coordinate corresponding to the ephemeris (usually TDB or TCB)
- `time::Float64`: jd0+time must be equal to the Julian Day for the time coordinate corresponding to the ephemeris (usually TDB or TCB)
- `target::Integer`: The body whose angular momentum is required. The numbering system depends on the parameter unit.
- `unit::Integer` : The units of the result. This integer is a sum of some unit constants (unit*) and/or the constant useNaifId. If the unit contains useNaifId, the NAIF identification numbering system is used for the target and the center. If the unit does not contain useNaifId, the old number system is used for the target and the center (see the list in the documentation of function compute).
- `order::Integer` : The order of derivatives
* 0: only the angular momentum vector is computed.
* 1: only the angular momentum vector and 1st derivative are computed.
* 2: only the angular momentum vector, the 1st derivative and 2nd derivative are computed.
* 3: the angular momentum vector, the 1st derivative, 2nd derivative and 3rd derivative are computed.
"""
function rotAngMom(eph::Ephem,jd0::Float64,time::Float64,
target::Integer,unit::Integer,order::Integer)
@_checkPointer eph.data "Ephemeris is not properly initialized!"
@_checkOrder order
result = Array{Float64,1}(undef,3+3order)
stat = unsafe_rotAngMom!(result,eph,jd0,time,target,unit,order)
@_checkStatus stat "Unable to compute ephemeris"
return result
end
"""
unsafe_rotAngMom!(result,eph,jd0,time,target,unit,order)
In place version of the rotAngMom function. Does not perform any checks!
Compute angular momentum due to rotation and derivatives up to order of target
at epoch jd0+time.
To get the best precision for the interpolation, the time is split in two
floating-point numbers. The argument jd0 should be an integer and time should
be a fraction of the day. But you may call this function with time=0 and jd0,
the desired time, if you don't care about precision.
# Arguments
- `result`: container for the result. It is not checked whether it is sufficiently large!
- `eph`: ephemeris
- `jd0::Float64`: jd0+time must be equal to the Julian Day for the time coordinate corresponding to the ephemeris (usually TDB or TCB)
- `time::Float64`: jd0+time must be equal to the Julian Day for the time coordinate corresponding to the ephemeris (usually TDB or TCB)
- `target::Integer`: The body whose angular momentum is required. The numbering system depends on the parameter unit.
- `unit::Integer` : The units of the result. This integer is a sum of some unit constants (unit*) and/or the constant useNaifId. If the unit contains useNaifId, the NAIF identification numbering system is used for the target and the center. If the unit does not contain useNaifId, the old number system is used for the target and the center (see the list in the documentation of function compute).
- `order::Integer` : The order of derivatives
* 0: only the angles are computed.
* 1: only the angles and 1st derivatives are computed.
* 2: only the angles, the 1st derivatives and 2nd derivatives are computed.
* 3: the angles, the 1st derivatives, 2nd derivatives and 3rd derivatives are computed.
# Return:
- status integer from CALCEPH: 0 if an error occurred
"""
function unsafe_rotAngMom!(result,eph::Ephem,jd0::Float64,time::Float64,
target::Integer,unit::Integer,order::Integer)
stat = ccall((:calceph_rotangmom_order, libcalceph), Cint,
(Ptr{Cvoid},Cdouble,Cdouble,Cint,Cint,Cint,Ref{Cdouble}),
eph.data,jd0,time,target,unit,order,result)
return stat
end
| CALCEPH | https://github.com/JuliaAstro/CALCEPH.jl.git |
|
[
"MIT"
] | 1.2.0 | 6d04b1804e66962763b920130f14ba1105313bf2 | code | 1496 | """
timespan(eph::Ephem)
This function returns the first and last time available in the ephemeris file associated to eph.
# Arguments:
- `eph` : ephemeris
# Return:
a tuple containing:
* firsttime: Julian date of the first time
* lasttime: Julian date of the last time
* continuous: information about the availability of the quantities over the time span
The value of continuous is:
* 1 if the quantities of all bodies are available for any time between the first and last time.
* 2 if the quantities of some bodies are available on discontinuous time intervals between the first and last time.
* 3 if the quantities of each body are available on a continuous time interval between the first and last time, but not available for any time between the first and last time.
See: https://www.imcce.fr/content/medias/recherche/equipes/asd/calceph/html/c/calceph.multiple.html#menu-calceph-gettimespan
"""
function timespan(eph::Ephem)
@_checkPointer eph.data "Ephemeris is not properly initialized!"
firsttime = Ref{Cdouble}(0)
lasttime = Ref{Cdouble}(0)
continuous = Ref{Cint}(0)
stat = ccall((:calceph_gettimespan, libcalceph), Cint, (Ptr{Cvoid},Ref{Cdouble},Ref{Cdouble},Ref{Cint}),
eph.data,firsttime,lasttime,continuous)
@_checkStatus stat "Unable to compute ephemeris"
return firsttime[],lasttime[],continuous[]
end
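# Illustrative usage (sketch):
#   firsttime, lasttime, continuous = timespan(eph)
#   # firsttime and lasttime are julian dates in the timescale of the ephemeris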
| CALCEPH | https://github.com/JuliaAstro/CALCEPH.jl.git |
|
[
"MIT"
] | 1.2.0 | 6d04b1804e66962763b920130f14ba1105313bf2 | code | 733 |
"""
unitAU
Astronomical unit: distance unit
"""
const unitAU = 1
"""
unitKM
kilometer: distance unit
"""
const unitKM = 2
"""
unitDay
day: time unit
"""
const unitDay = 4
"""
unitSec
second: time unit
"""
const unitSec = 8
"""
unitRad
radian: angle unit
"""
const unitRad = 16
"""
useNaifId
has to be added to the unit argument when using NAIF integer codes for identification of center and target
"""
const useNaifId = 32
"""
outputEulerAngles
has to be added to the unit argument for orient to output Euler angles
"""
const outputEulerAngles = 64
"""
outputNutationAngles
has to be added to the unit argument for orient to output nutation angles
"""
const outputNutationAngles = 128
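# Example: combine the flags above to request kilometers, seconds and NAIF
# identifiers in a single options argument:
#   options = unitKM + unitSec + useNaifId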
| CALCEPH | https://github.com/JuliaAstro/CALCEPH.jl.git |
|
[
"MIT"
] | 1.2.0 | 6d04b1804e66962763b920130f14ba1105313bf2 | code | 5693 | using CALCEPH
using Test
CALCEPH.disableCustomHandler()
CALCEPH.setCustomHandler(s::String->Nothing)
testpath = joinpath(dirname(pathof(CALCEPH)), "..", "test")
@testset "CALCEPH" begin
@testset "Naif Id" begin
# NAIF ID tests
for (name,id) ∈ naifId.id
@test name ∈ naifId.names[id]
end
for (id,names) ∈ naifId.names
for name ∈ names
@test naifId.id[name] == id
end
end
@test naifId.id[:ssb] == naifId.id[:solar_system_barycenter] == 0
@test naifId.id[:sun] == 10
@test naifId.id[:mercury_barycenter] == 1
@test naifId.id[:mercury] == 199
@test naifId.id[:venus_barycenter] == 2
@test naifId.id[:venus] == 299
@test naifId.id[:emb] == naifId.id[:earth_barycenter] == 3
@test naifId.id[:moon] == 301
@test naifId.id[:earth] == 399
@test naifId.id[:mars_barycenter] == 4
@test naifId.id[:phobos] == 401
@test naifId.id[:deimos] == 402
@test naifId.id[:mars] == 499
@test naifId.id[:jupiter_barycenter] == 5
@test naifId.id[:io] == 501
@test naifId.id[:europa] == 502
@test naifId.id[:ganymede] == 503
@test naifId.id[:callisto] == 504
@test naifId.id[:jupiter] == 599
@test naifId.id[:saturn_barycenter] == 6
@test naifId.id[:titan] == 606
@test naifId.id[:saturn] == 699
@test naifId.id[:uranus_barycenter] == 7
@test naifId.id[:uranus] == 799
@test naifId.id[:neptune_barycenter] == 8
@test naifId.id[:triton] == 801
@test naifId.id[:neptune] == 899
@test naifId.id[:pluto_barycenter] == 9
@test naifId.id[:charon] == 901
@test naifId.id[:pluto] == 999
end
@testset "Core" begin
# test error case: changing name->id mapping
@test_throws CALCEPHException CALCEPH.add!(naifId,:jupiter,1)
# test error case: parsing wrongly formatted body id input file
bid = CALCEPH.BodyId()
@test_throws CALCEPHException CALCEPH.loadData!(bid,joinpath(testpath,"badIds.txt"))
# check memory management
eph1 = Ephem(joinpath(testpath,"example1.dat"))
eph2 = Ephem([joinpath(testpath,"example1.bsp"),
joinpath(testpath,"example1.tpc")])
@test eph1.data != C_NULL
finalize(eph1)
@test eph1.data == C_NULL
@test eph2.data != C_NULL
finalize(eph2)
@test eph2.data == C_NULL
finalize(eph2)
CALCEPH._ephemDestructor(eph2)
# Opening invalid ephemeris
@test_throws CALCEPHException Ephem(String[])
# test error case wrong order
eph1 = Ephem(joinpath(testpath,"example1.bsp"))
@test_throws CALCEPHException compute(eph1,0.0,0.0,1,0,0,4)
@test_throws CALCEPHException compute(eph1,0.0,0.0,1,0,0,-1)
# test error case:
@test_throws CALCEPHException compute(eph1,0.0,0.0,-144,0,0)
# Five-Point Stencil
f(x)=x^8
@test_throws ErrorException CALCEPH.fivePointStencil(f,1.5,5,0.001)
@test_throws ErrorException CALCEPH.fivePointStencil(f,1.5,-1,0.001)
@test_throws ErrorException CALCEPH.fivePointStencil(f,1.5,4,0.0)
val = CALCEPH.fivePointStencil(f,1.5,4,0.001)
ref = [25.62890625,136.6875,637.875,2551.5,8505.0]
@test ref[1] ≈ val[1] atol=1e-10
@test ref[2] ≈ val[2] atol=1e-8
@test ref[3] ≈ val[3] atol=1e-5
@test ref[4] ≈ val[4] atol=1e-2
@test ref[5] ≈ val[5] atol=1e-2
end
@testset "Constants" begin
# check constants
eph1 = Ephem(joinpath(testpath,"example1.dat"))
eph2 = Ephem([joinpath(testpath,"example1.bsp"),
joinpath(testpath,"example1.tpc")])
eph3 = Ephem([joinpath(testpath,"checktpc_11627.tpc")])
eph4 = Ephem([joinpath(testpath,"checktpc_str.tpc")])
con1 = constants(eph1)
con2 = constants(eph2)
con3 = constants(eph3)
con4 = constants(eph4)
@test isa(con1,Dict{Symbol,Any})
@test length(con1) == 402
@test con1[:EMRAT] ≈ 81.30056
@test isa(con2,Dict{Symbol,Any})
@test length(con2) == 313
@test con2[:AU] ≈ 1.49597870696268e8
@test isa(con3,Dict{Symbol,Any})
@test length(con3) == 3
@test con3[:BODY000_GMLIST4] == [ 199.0 ; 299.0 ; 301.0 ; 399.0 ]
@test con3[:BODY000_GMLIST2] == [ 499 ; 599 ]
@test con3[:BODY000_GMLIST1] == 699
@test isa(con4,Dict{Symbol,Any})
@test length(con4) == 4
@test con4[:MESSAGE] == "You can't always get what you want."
@test con4[:DISTANCE_UNITS] == "KILOMETERS"
@test con4[:MISSION_UNITS] == [ "KILOMETERS" ; "SECONDS" ; "KILOMETERS/SECOND" ]
@test con4[:CONTINUED_STRINGS] == ["This //", "is //", "just //", "one long //", "string.", "Here's a second //", "continued //", "string."]
# Retrieving constants from closed ephemeris
finalize(eph2)
@test_throws CALCEPHException constants(eph2)
end
# test compute*
# test data and thresholds from CALCEPH C library tests
inpop_files = [joinpath(testpath,"example1.dat")]
spk_files = [joinpath(testpath,"example1.bsp"),
joinpath(testpath,"example1.tpc"),
joinpath(testpath,"example1.tf"),
joinpath(testpath,"example1.bpc"),
joinpath(testpath,"example1spk_time.bsp")]
testfile = joinpath(testpath,"example1_tests.dat")
testfile2 = joinpath(testpath,"example1_tests_naifid.dat")
test_data = [
(inpop_files,false),
(spk_files,false),
(inpop_files,true),
(spk_files,true)
]
include("testfunction1.jl")
include("testfunction2.jl")
@testset "Compute" begin
for (ephFiles,prefetch) in test_data
testFunction1(testfile,ephFiles,prefetch)
end
for (ephFiles,prefetch) in test_data
testFunction2(testfile,testfile2,ephFiles,prefetch)
end
end
@testset "Introspection" begin
# introspection
eph = Ephem(inpop_files)
@test timeScale(eph) == 1
records = positionRecords(eph)
@test length(records) == 12
records = orientationRecords(eph)
@test length(records) == 1
@test timespan(eph) == (2.442457e6, 2.451545e6, 1)
end
@testset "Angular Momentum" begin
# rotangmom
eph = Ephem(joinpath(testpath,"example2_rotangmom.dat"))
a = rotAngMom(eph,2.4515e6,0.0,399,useNaifId+unitSec)
b = rotAngMom(eph,2.4515e6,0.0,399,useNaifId+unitSec,1)
@test a == b
@test length(a) == 6
end
end
| CALCEPH | https://github.com/JuliaAstro/CALCEPH.jl.git |
|
[
"MIT"
] | 1.2.0 | 6d04b1804e66962763b920130f14ba1105313bf2 | code | 3824 |
# This test is derived from test cmcompute.c in Calceph version 2.3.2
# the test data files are copied from calceph-2.3.2.tar.gz
function testFunction1(testFile,ephFiles,pflag)
eph = Ephem(ephFiles)
if pflag
prefetch(eph)
end
con = constants(eph)
AU = con[:AU]
f = open(testFile);
for ln in eachline(f)
elts=split(ln)
jd0=parse(Float64,elts[1])
target=parse(Int,elts[2])
center=parse(Int,elts[3])
dt = jd0 - trunc(Int,jd0);
jd0 = trunc(Int,jd0) + 2.4515450000000000000E+06
ref = [parse(Float64, x) for x in elts[4:end]]
val = compute(eph,jd0,dt,target,center)
ϵ = 1.0e-8
val0 = val[:]
if (target==15)
ϵ = 1.0e-7
while val[3]>2π
val[3]-=2π
end
while val[3]<=0
val[3]+=2π
end
end
for i in 1:6
@test abs(ref[i]-val[i]) < ϵ
end
ref = val0
ϵ = 3.0e-15
if target ∉ [15,16,17]
val = compute(eph,jd0,dt,target,center,unitAU+unitDay)
for i in 1:6
@test abs(ref[i]-val[i]) < ϵ
end
val = compute(eph,jd0,dt,target,center,unitAU+unitSec)
for i in 1:6
if i>3
val[i]*=86400
end
@test abs(ref[i]-val[i]) < ϵ
end
ϵ = 3.0e-14
val = compute(eph,jd0,dt,target,center,unitKM+unitDay)
for i in 1:6
@test abs(ref[i]-val[i]/AU) < ϵ
end
val = compute(eph,jd0,dt,target,center,unitKM+unitSec)
for i in 1:6
if i>3
val[i]*=86400
end
@test abs(ref[i]-val[i]/AU) < ϵ
end
ϵ = 3.0e-15
val = compute(eph,jd0,dt,target,center,unitDay+unitAU,3)
@test length(val)==12
for i in 1:6
@test abs(ref[i]-val[i]) < ϵ
end
ref = val
val = compute(eph,jd0,dt,target,center,unitDay+unitAU,2)
@test length(val)==9
for i in 1:9
@test abs(ref[i]-val[i]) < ϵ
end
val = compute(eph,jd0,dt,target,center,unitDay+unitAU,1)
@test length(val)==6
for i in 1:6
@test abs(ref[i]-val[i]) < ϵ
end
val = compute(eph,jd0,dt,target,center,unitDay+unitAU,0)
@test length(val)==3
for i in 1:3
@test abs(ref[i]-val[i]) < ϵ
end
ϵ = 3.0e-14
val = compute(eph,jd0,dt,target,center,unitSec+unitKM,3)
@test length(val)==12
for i in 1:12
if i>3
val[i]*=86400
end
if i>6
val[i]*=86400
end
if i>9
val[i]*=86400
end
@test abs(ref[i]-val[i]/AU) < ϵ
end
elseif target == 15
val = compute(eph,jd0,dt,target,center,unitRad+unitDay)
for i in 1:6
@test abs(ref[i]-val[i]) < ϵ
end
val = compute(eph,jd0,dt,target,center,unitRad+unitSec)
for i in 1:6
if i>3
val[i]*=86400
end
@test abs(ref[i]-val[i]) < ϵ
end
elseif target ∈ [16,17]
ϵ = 1e-18
val = compute(eph,jd0,dt,target,center,unitSec)
@test abs(ref[1]-val[1]) < ϵ
val = compute(eph,jd0,dt,target,center,unitDay)
@test abs(ref[1]-val[1]*86400) < ϵ*86400
end
end
close(f)
end
| CALCEPH | https://github.com/JuliaAstro/CALCEPH.jl.git |
|
[
"MIT"
] | 1.2.0 | 6d04b1804e66962763b920130f14ba1105313bf2 | code | 2435 |
# This test is derived from test cmcompute_naifid.c in Calceph version 2.3.2
# the test data files are copied from calceph-2.3.2.tar.gz
function testFunction2(testFile,testFile2,ephFiles,pflag)
eph = Ephem(ephFiles)
if pflag
prefetch(eph)
end
con = constants(eph)
AU = con[:AU]
f = open(testFile);
f2 = open(testFile2);
for (ln,ln2) in zip(eachline(f),eachline(f2))
elts=split(ln)
elts2=split(ln2)
jd0=parse(Float64,elts[1])
@test parse(Float64,elts2[1]) == jd0
target=parse(Int,elts2[2])
center=parse(Int,elts2[3])
targetold=parse(Int,elts[2])
centerold=parse(Int,elts[3])
dt = jd0 - trunc(Int,jd0)
jd0 = trunc(Int,jd0) + 2.4515450000000000000E+06
ref = [parse(Float64, x) for x in elts[4:end]]
@test [parse(Float64, x) for x in elts2[4:end]] == ref
if (target != naifId.id[:ttmtdb] && target != 15)
for (unitold,ϵ) in [ (unitAU+unitDay,0.0),
(unitKM+unitDay,0.0),
(unitAU+unitSec,0.0),
(unitKM+unitSec,0.0)]
unit = unitold + useNaifId
val = compute(eph, jd0, dt, target, center, unit)
ref = compute(eph, jd0, dt, targetold, centerold, unitold)
[(@test ref[i] ≈ val[i] atol=ϵ) for i in 1:6]
end
elseif (target == 15)
targetN = 301
for (unitold,ϵ) in [ (unitRad+unitDay,0.0),
(unitRad+unitSec,0.0)]
unit = unitold + useNaifId
val = orient(eph, jd0, dt, targetN, unit)
ref = compute(eph, jd0, dt, targetold, centerold, unitold)
[(@test ref[i] ≈ val[i] atol=ϵ) for i in 1:6]
val2 = orient(eph, jd0, dt, targetN, unit,3)
@test length(val2) == 12
[(@test val2[i] ≈ val[i] atol=ϵ) for i in 1:6]
end
elseif (target == naifId.id[:ttmtdb])
for (unitold,ϵ) in [ (unitDay,0.0),
(unitSec,0.0)]
unit = unitold + useNaifId
val = compute(eph, jd0, dt, target, center, unit)
ref = compute(eph, jd0, dt, targetold, centerold, unitold)
@test ref[1] ≈ val[1] atol=ϵ
end
end
end
close(f)
close(f2)
end
| CALCEPH | https://github.com/JuliaAstro/CALCEPH.jl.git |
|
[
"MIT"
] | 1.2.0 | 6d04b1804e66962763b920130f14ba1105313bf2 | docs | 2941 | [](https://github.com/JuliaAstro/CALCEPH.jl/actions/workflows/CI.yml)
[](https://JuliaAstro.github.io/CALCEPH.jl/stable/)
[](https://JuliaAstro.github.io/CALCEPH.jl/dev/)
This is a julia wrapper for [CALCEPH](https://www.imcce.fr/inpop/calceph/) a C library for reading planetary ephemeris files, such as [INPOPxx](https://www.imcce.fr/inpop), JPL DExxx and SPICE ephemeris files.
[CALCEPH](https://www.imcce.fr/inpop/calceph/) C library is developed by [IMCCE](https://www.imcce.fr/).
# Quick start
In the Julia interpreter, run:
```julia
using Pkg
Pkg.add("CALCEPH")
using CALCEPH
# ephemeris kernels can be downloaded from many different sources
download("ftp://ftp.imcce.fr/pub/ephem/planets/inpop13c/inpop13c_TDB_m100_p100_tt.dat","planets.dat")
# create an ephemeris context
eph = Ephem("planets.dat")
# prefetch ephemeris files data to main memory for faster access
prefetch(eph)
# retrieve constants from ephemeris as a dictionary
con = constants(eph)
# list the constants
keys(con)
# get the sun J2
J2sun = con[:J2SUN]
# retrieve the position, velocity and acceleration of Earth (geocenter) relative
# to the Earth-Moon system barycenter in kilometers, kilometers per second and
# kilometers per second squared at JD= 2451624.5 TDB timescale
# for best accuracy the first time argument should be the integer part and the
# delta the fractional part
# when using NAIF identification numbers, useNaifId has to be added to
# the units argument.
pva=compute(eph,2451624.0,0.5,naifId.id[:earth],naifId.id[:emb],
useNaifId+unitKM+unitSec,2)
position=pva[1:3]
velocity=pva[4:6]
acceleration=pva[7:end]
# what is the NAIF identification number for Deimos
id_deimos = naifId.id[:deimos]
# what does NAIF ID 0 correspond to?
names_0 = naifId.names[0]
```
# Why use CALCEPH?
CALCEPH functionality is also provided by [NAIF SPICE Toolkit](https://naif.jpl.nasa.gov/naif/toolkit.html). However CALCEPH has several advantages over the SPICE toolkit, mainly:
- It can handle multiple ephemeris contexts.
- It is thread safe (if using one context per thread).
- It can compute approximations of higher order derivatives (acceleration and jerk) by differentiating the interpolation polynomials.
- Its ephemeris computation interface expects the time separated into two double precision floating point numbers. This can be used to achieve higher precision in the time tag (which can have a significant impact when modeling Doppler observations from a deep space probe).
But CALCEPH does not support all functions of the SPICE toolkit. If you need more functionality, [SPICE.jl](https://github.com/JuliaAstrodynamics/SPICE.jl) is a Julia wrapper for the SPICE toolkit.
| CALCEPH | https://github.com/JuliaAstro/CALCEPH.jl.git |
|
[
"MIT"
] | 1.2.0 | 6d04b1804e66962763b920130f14ba1105313bf2 | docs | 117 | # API
```@meta
DocTestSetup = quote
using CALCEPH
end
```
```@autodocs
Modules = [CALCEPH]
Private = false
```
| CALCEPH | https://github.com/JuliaAstro/CALCEPH.jl.git |
|
[
"MIT"
] | 1.2.0 | 6d04b1804e66962763b920130f14ba1105313bf2 | docs | 2534 | # CALCEPH
This is a julia wrapper for [CALCEPH](https://www.imcce.fr/inpop/calceph/) a C library for reading planetary ephemeris files, such as [INPOPxx](https://www.imcce.fr/inpop), JPL DExxx and SPICE ephemeris files.
[CALCEPH](https://www.imcce.fr/inpop/calceph/) C library is developed by [IMCCE](https://www.imcce.fr/).
## Quick start
In the Julia interpreter, run:
```julia
using Pkg
Pkg.add("CALCEPH")
using CALCEPH
# ephemeris kernels can be downloaded from many different sources
download("ftp://ftp.imcce.fr/pub/ephem/planets/inpop13c/inpop13c_TDB_m100_p100_tt.dat","planets.dat")
# create an ephemeris context
eph = Ephem("planets.dat")
# prefetch ephemeris files data to main memory for faster access
prefetch(eph)
# retrieve constants from ephemeris as a dictionary
con = constants(eph)
# list the constants
keys(con)
# get the sun J2
J2sun = con[:J2SUN]
# retrieve the position, velocity and acceleration of Earth (geocenter) relative
# to the Earth-Moon system barycenter in kilometers, kilometers per second and
# kilometers per second squared at JD= 2451624.5 TDB timescale
# for best accuracy the first time argument should be the integer part and the
# delta the fractional part
# when using NAIF identification numbers, useNaifId has to be added to
# the units argument.
pva=compute(eph,2451624.0,0.5,naifId.id[:earth],naifId.id[:emb],
useNaifId+unitKM+unitSec,2)
position=pva[1:3]
velocity=pva[4:6]
acceleration=pva[7:end]
# what is the NAIF identification number for Deimos
id_deimos = naifId.id[:deimos]
# what does NAIF ID 0 correspond to?
names_0 = naifId.names[0]
```
## Why use CALCEPH?
CALCEPH functionality is also provided by [NAIF SPICE Toolkit](https://naif.jpl.nasa.gov/naif/toolkit.html). However CALCEPH has several advantages over the SPICE toolkit, mainly:
- It can handle multiple ephemeris contexts.
- It is thread safe (if using one context per thread).
- It can compute approximations of higher order derivatives (acceleration and jerk) by differentiating the interpolation polynomials.
- Its ephemeris computation interface expects the time separated into two double precision floating point numbers. This can be used to achieve higher precision in the time tag (which can have a significant impact when modeling Doppler observations from a deep space probe).
But CALCEPH does not support all functions of the SPICE toolkit. If you need more functionality, [SPICE.jl](https://github.com/JuliaAstrodynamics/SPICE.jl) is a Julia wrapper for the SPICE toolkit.
| CALCEPH | https://github.com/JuliaAstro/CALCEPH.jl.git |
|
[
"MIT"
] | 1.2.0 | 6d04b1804e66962763b920130f14ba1105313bf2 | docs | 15340 | # Tutorial
This tutorial will walk you through the features and functionality of [CALCEPH.jl](https://github.com/JuliaAstro/CALCEPH.jl)
## Ephemerides sources
The supported sources of ephemerides are:
- JPL DExxx binary ephemerides files: [https://ssd.jpl.nasa.gov/?planet_eph_export](https://ssd.jpl.nasa.gov/?planet_eph_export)
- IMCCE INPOP ephemerides files: [https://www.imcce.fr/inpop/](https://www.imcce.fr/inpop/)
- some NAIF SPICE kernels: [https://naif.jpl.nasa.gov/naif/data.html](https://naif.jpl.nasa.gov/naif/data.html)
Example:
```julia
download("https://naif.jpl.nasa.gov/pub/naif/generic_kernels/spk/planets/de435.bsp","planets.dat")
# WARNING this next file is huge (Jupiter Moons ephemerides)
download("https://naif.jpl.nasa.gov/pub/naif/generic_kernels/spk/satellites/jup310.bsp","jupiter_system.bsp")
```
## Ephemerides context
The user first needs to load the ephemerides files into an ephemerides context object that will be used later to retrieve positions and velocities of celestial objects.
A context can be made from one or several files:
```julia
using CALCEPH
# load a single file in context eph1
eph1 = Ephem("planets.dat")
# load multiple files in context eph2
eph2 = Ephem(["planets.dat","jupiter_system.bsp"])
```
You must specify the relative or absolute path(s) of the file(s) to load.
You can prefetch the ephemerides data into main memory for faster access:
```julia
prefetch(eph2)
```
## Epoch arguments
CALCEPH functions take the epoch as the sum of two double precision floating point arguments, jd1 and jd2.
The sum jd1 + jd2 is interpreted as the julian date in the timescale of the ephemerides context (usually TDB, sometimes TCB).
For maximum accuracy, it is recommended to set jd2 to the fractional part of the julian date and jd1 to the remaining integer part: the magnitude of jd2 should be less than one while jd1 should have an integer value.
If high accuracy in the time tag is not needed, jd1 can be set to the full julian date and jd2 to zero.
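For example (a sketch; note that the precision benefit comes from carrying the two parts separately from the start, not from splitting a single Float64 after the fact):
```julia
jd = 2456293.75  # julian date in the timescale of the context
jd1 = floor(jd)  # 2456293.0, integer part
jd2 = jd - jd1   # 0.75, fractional part
# if timetag accuracy is not critical, jd1 = jd and jd2 = 0.0 also works
```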
## Options
Many CALCEPH functions take an integer argument holding the options. The value of this argument is the sum of the options to enable (each option actually corresponds to a single bit of that integer). Each option to enable can appear only once in the sum!
The following options are available:
- unitAU = 1: set distance units to astronomical units.
- unitKM = 2: set distance units to kilometers.
- unitDay = 4: set time units to days.
- unitSec = 8: set time units to seconds.
- unitRad = 16: set angle units to radians.
- useNaifId = 32: set the body identification scheme to NAIF body identification scheme.
- outputEulerAngles = 64: when using body orientation ephemerides, this selects Euler angle output.
- outputNutationAngles = 128: when using body orientation ephemerides, this selects nutation angle output (if available).
The useNaifId option controls the identification scheme for the input arguments: target and center.
The unit options control the units of the outputs. Setting the output units is mandatory whenever the routine has the input argument options.
For example to compute the position and velocity in kilometers and kilometers per second of body target (given as its NAIF identification number) with respect to center (given as its NAIF identification number), the options argument should be set as such:
```julia
options = unitKM + unitSec + useNaifId
```
## Body identification scheme
CALCEPH has the following identification scheme for bodies:
- 1 : Mercury Barycenter
- 2 : Venus Barycenter
- 3 : Earth
- 4 : Mars Barycenter
- 5 : Jupiter Barycenter
- 6 : Saturn Barycenter
- 7 : Uranus Barycenter
- 8 : Neptune Barycenter
- 9 : Pluto Barycenter
- 10 : Moon
- 11 : Sun
- 12 : Solar System barycenter
- 13 : Earth-moon barycenter
- 14 : Nutation angles
- 15 : Librations
- 16 : difference TT-TDB
- 17 : difference TCG-TCB
- asteroid number + 2000000 : asteroid
If target is 14, 15, 16 or 17 (nutation, libration, TT-TDB or TCG-TCB), center must be 0.
The more complete NAIF identification scheme can be used if the value useNaifId is added to the options argument.
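For example, using this default scheme directly (a sketch; `eph1` is the ephemerides context created above):
```julia
# position and velocity of the Jupiter barycenter (5) relative to the
# Solar System barycenter (12), in kilometers and kilometers per second
pva = compute(eph1, 2451624.0, 0.5, 5, 12, unitKM + unitSec)
```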
## NAIF body identification scheme
See [https://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/req/naif_ids.html](https://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/req/naif_ids.html)
CALCEPH uses this identification scheme only when the value useNaifId is added to the options argument.
The CALCEPH julia wrapper comes with the naifId object which contains the mapping between NAIF identification numbers and names:
```julia
julia> naifId.id[:sun]
10
julia> naifId.id[:mars]
499
julia> naifId.names[0]
Set(Symbol[:ssb, :solar_system_barycenter])
```
naifId also stores the following identifiers:
- :timecenter (1000000000): the center argument when requesting a value from a time ephemeris.
- :ttmtdb (1000000001): the target argument when requesting a value from the difference TT-TDB time ephemeris.
- :tcgmtcb (1000000002): the target argument when requesting a value from the difference TCG-TCB time ephemeris.
naifId is actually an instance of the mutable struct BodyId. Users can also create their own identification scheme for their SPICE kernels:
```julia
const MyUniverseIds = CALCEPH.BodyId()
CALCEPH.add!(MyUniverseIds,:tatooine,1000001)
CALCEPH.add!(MyUniverseIds,:dagobah,1000002)
CALCEPH.add!(MyUniverseIds,:endor,1000003)
CALCEPH.add!(MyUniverseIds,:deathstar,1000004)
CALCEPH.add!(MyUniverseIds,:endor_deathstar_system_barycenter,1000005)
CALCEPH.add!(MyUniverseIds,:edsb,1000005)
```
You can also load identification data from an external file:
```julia
CALCEPH.loadData!(MyUniverseIds, "MyUniverseIds.txt")
```
See example: [https://github.com/JuliaAstro/CALCEPH.jl/blob/master/data/NaifIds.txt](https://github.com/JuliaAstro/CALCEPH.jl/blob/master/data/NaifIds.txt)
Names from the file are converted to lower case and have spaces replaced by underscores before being converted to symbols/interned strings.
## Computing positions and velocities:
The following methods are available to compute position and velocity with CALCEPH:
```julia
compute(eph,jd1,jd2,target,center)
compute(eph,jd1,jd2,target,center,options)
compute(eph,jd1,jd2,target,center,options,order)
```
Those methods compute the position of target with respect to center, and its time derivatives.
- The first argument eph is the ephemerides context.
- The second and third arguments jd1 and jd2 are the epoch.
- The fourth argument target is the body for which the position is to be computed with respect to the origin.
- The fifth argument center is the origin.
- The options argument shall specify the units. It can also be used to switch target and center numbering scheme to the NAIF identification scheme.
- The order argument can be set to:
- 0: compute position only
- 1: compute position and velocity
- 2: compute position, velocity and acceleration
- 3: compute position, velocity, acceleration and jerk.
When order is not specified, position and velocity are computed.
#### Example:
Computing the position only of the Jupiter system barycenter with respect to the center of the Moon, in kilometers, at JD=2456293.5 (Ephemeris Time).
```julia
options = useNaifId + unitKM + unitSec
jd1 = 2456293.0
jd2 = 0.5
center = naifId.id[:moon]
target = naifId.id[:jupiter_barycenter]
pos = compute(eph2, jd1, jd2, target, center, options,0)
```
## Computing orientation:
The following methods are available to compute orientation angles with CALCEPH:
```julia
orient(eph,jd1,jd2,target,options)
orient(eph,jd1,jd2,target,options,order)
```
Those methods compute the Euler angles of target and their time derivatives.
- The first argument eph is the ephemerides context.
- The second and third arguments jd1 and jd2 are the epoch.
- The fourth argument target is the body for which the Euler angles are to be computed.
- The options argument shall specify the units. It can also be used to switch target and center numbering scheme to the NAIF identification scheme and to switch between Euler angles and nutation angles.
- The order argument can be set to:
- 0: only the angles are computed.
- 1: only the angles and first derivatives are computed.
- 2: only the angles, the first and second derivatives are computed.
- 3: the angles, the first, second and third derivatives are computed.
#### Example:
JPL DE405 binary ephemerides contain Chebyshev polynomials for the IAU 1980 nutation theory. Interpolating those polynomials is much faster than evaluating the IAU 1980 nutation series.
Computing Earth nutation angles in radians at JD=2456293.5 (Ephemeris Time).
```julia
download("ftp://ssd.jpl.nasa.gov/pub/eph/planets/Linux/de405/lnxp1600p2200.405","DE405")
eph1 = Ephem("DE405")
options = useNaifId + unitRad + unitSec + outputNutationAngles
jd1 = 2456293.0
jd2 = 0.5
target = naifId.id[:earth]
angles = orient(eph1, jd1, jd2, target, options,0)
```
Note that the returned value is a vector of length 3 even though there are only two nutation angles. The last value is zero and meaningless.
## Computing angular momentum:
The following methods are available to compute body angular momentum with CALCEPH:
```julia
rotAngMom(eph,jd1,jd2,target,options)
rotAngMom(eph,jd1,jd2,target,options,order)
```
Those methods compute the angular momentum of target and its time derivatives.
- The first argument eph is the ephemerides context.
- The second and third arguments jd1 and jd2 are the epoch.
- The fourth argument target is the body for which the angular momentum is to be computed.
- The options argument shall specify the units. It can also be used to switch target numbering scheme to the NAIF identification scheme.
- The order argument can be set to:
- 0: only the angular momentum vector is computed.
- 1: only the angular momentum vector and first derivative are computed.
- 2: only the angular momentum vector, the first and second derivatives are computed.
- 3: the angular momentum, the first, second and third derivatives are computed.
## Time ephemeris
The time ephemeris TT-TDB or TCG-TCB at the geocenter can be evaluated with a suitable source.
INPOP and some JPL DE ephemerides include a numerically integrated time ephemeris for the geocenter, which is usually more accurate than the analytical series. Moreover, it is much faster to interpolate those ephemerides than to evaluate the analytical series. This only applies to the geocenter, but a simple correction can also be added for the location of the observer (and its velocity in case the observer is on a highly elliptical orbit).
Files that can be used to obtain the difference between TT and TDB are, e.g.:
- [ftp://ftp.imcce.fr/pub/ephem/planets/inpop17a/inpop17a_TDB_m100_p100_tt.dat](ftp://ftp.imcce.fr/pub/ephem/planets/inpop17a/inpop17a_TDB_m100_p100_tt.dat)
- [ftp://ssd.jpl.nasa.gov/pub/eph/planets/bsp/de432t.bsp](ftp://ssd.jpl.nasa.gov/pub/eph/planets/bsp/de432t.bsp)
#### Example:
Computing TT-TDB at geocenter in seconds at JD=2456293.5 (Ephemeris Time).
```julia
download("ftp://ftp.imcce.fr/pub/ephem/planets/inpop17a/inpop17a_TDB_m100_p100_tt.dat","INPOP17a")
eph1 = Ephem("INPOP17a")
options = useNaifId + unitSec
jd1 = 2456293.0
jd2 = 0.5
target = naifId.id[:ttmtdb]
center = naifId.id[:timecenter]
ttmtdb = compute(eph1, jd1, jd2, target, center, options,0)
```
Note that the returned value is a vector of length 3 even though there is only one meaningful value. The last two values are zero and meaningless.
## In place methods
In place versions of the methods described above are also available. Those are:
```julia
unsafe_compute!(result,eph,jd1,jd2,target,center)
unsafe_compute!(result,eph,jd1,jd2,target,center,options)
unsafe_compute!(result,eph,jd1,jd2,target,center,options,order)
unsafe_orient!(result,eph,jd1,jd2,target,options)
unsafe_orient!(result,eph,jd1,jd2,target,options,order)
unsafe_rotAngMom!(result,eph,jd1,jd2,target,options)
unsafe_rotAngMom!(result,eph,jd1,jd2,target,options,order)
```
Those methods do not perform any checks on their inputs. In particular, result must be a contiguous vector of double precision floating point numbers of dimension at least 6 when order is not specified, or at least 3*(order+1) otherwise.
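For example (a sketch reusing the context and epoch from the examples above):
```julia
options = useNaifId + unitKM + unitSec
order = 1  # position and velocity
result = Vector{Float64}(undef, 3 * (order + 1))
unsafe_compute!(result, eph1, 2456293.0, 0.5, naifId.id[:moon], naifId.id[:earth], options, order)
```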
## Constants
Ephemerides files may contain related constants. Those can be obtained by the **constants** method which returns a dictionary:
```julia
download("ftp://ftp.imcce.fr/pub/ephem/planets/inpop17a/inpop17a_TDB_m100_p100_tt.dat","INPOP17a")
eph1 = Ephem("INPOP17a")
# retrieve constants from ephemeris as a dictionary
con = constants(eph1)
# list the constants
keys(con)
# get the sun J2
J2sun = con[:J2SUN]
```
## Introspection
#### Time scale
```julia
timeScale(eph)
```
returns the Ephemeris Time identifier:
- 1 for TDB
- 2 for TCB
#### Time span
```julia
timespan(eph)
```
returns the triplet:
- julian date of first entry in ephemerides context.
- julian date of last entry in ephemerides context.
- information about the availability of the quantities over the time span:
- 1 if the quantities of all bodies are available for any time between the first and last time.
- 2 if the quantities of some bodies are available on discontinuous time intervals between the first and last time.
- 3 if the quantities of each body are available on a continuous time interval between the first and last time, but not available for any time between the first and last time.
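For example:
```julia
firstJD, lastJD, continuity = timespan(eph1)
println("ephemerides cover JD $firstJD to JD $lastJD (continuity flag: $continuity)")
```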
#### Position records
```julia
positionRecords(eph)
```
retrieve metadata for the position records in the ephemeris associated with handler eph (see the example after the field list below).
This is a vector of metadata about the ephemerides records ordered by priority. The compute methods use the highest priority ephemerides records when there are multiple records that could satisfy the target and epoch.
Each record metadata contains the following information:
- target: NAIF identifier of target.
- center: NAIF identifier of center.
- startEpoch: julian date of record start.
- stopEpoch: julian date of record end.
- frame : 1 for ICRF.
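For example, to print a summary of the available position records (a sketch using the field names listed above):
```julia
for rec in positionRecords(eph1)
    println("target $(rec.target) w.r.t. center $(rec.center): JD $(rec.startEpoch) to $(rec.stopEpoch)")
end
```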
#### Orientation records
```julia
orientationRecords(eph)
```
retrieve metadata for the orientation records in the ephemeris associated with handler eph.
This is a vector of metadata about the ephemerides records ordered by priority. The orient methods use the highest priority ephemerides records when there are multiple records that could satisfy the target and epoch.
Each record metadata contains the following information:
- target: NAIF identifier of target.
- startEpoch: julian date of record start.
- stopEpoch: julian date of record end.
- frame : 1 for ICRF.
## Cleaning up
Because Julia's garbage collector is lazy, you may want to free the memory managed by the context before you get rid of the reference to it, e.g. with:
```julia
finalize(eph1)
eph1 = nothing
```
or after with
```julia
eph1 = nothing
GC.gc()
```
## Error handling
By default, the CALCEPH C library prints error messages directly to the standard output but this can be modified.
The Julia wrapper provides the following interface for this purpose:
```julia
CALCEPH.setCustomHandler(f)
```
where f should be a user function taking a single argument of type String, which will contain the CALCEPH error message. f should return nothing.
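For example, to route CALCEPH error messages through Julia's logging system (a sketch):
```julia
CALCEPH.setCustomHandler(msg -> begin
    @warn "CALCEPH error" msg
    nothing
end)
```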
To disable CALCEPH error message printout to the console:
```julia
CALCEPH.setCustomHandler(s->nothing)
```
To get back the default behavior:
```julia
CALCEPH.disableCustomHandler()
```
| CALCEPH | https://github.com/JuliaAstro/CALCEPH.jl.git |
|
[
"MIT"
] | 0.8.2 | 84a23be59399f9addb44021a7a4c7af2eb966589 | code | 53268 | module RustyObjectStore
export init_object_store, get_object!, put_object, delete_object
export StaticConfig, ClientOptions, Config, AzureConfig, AWSConfig
export status_code, is_connection, is_timeout, is_early_eof, is_unknown, is_parse_url
export get_object_stream, ReadStream, finish!
export put_object_stream, WriteStream, cancel!, shutdown!
export current_metrics
export max_entries_per_chunk, ListEntry, list_objects, list_objects_stream, next_chunk!
using Base.Libc.Libdl: dlext
using Base: @kwdef, @lock
using DocStringExtensions
using object_store_ffi_jll
using JSON3
const Option{T} = Union{T, Nothing}
const rust_lib = if haskey(ENV, "OBJECT_STORE_LIB")
# For development, e.g. run `cargo build --release` and point to `target/release/` dir.
# Note this is set a precompilation time, as `ccall` needs this to be a `const`,
# so you need to restart Julia / recompile the package if you change it.
lib_path = realpath(joinpath(ENV["OBJECT_STORE_LIB"], "libobject_store_ffi.$(dlext)"))
@warn """
Using unreleased object_store_ffi library:
$(repr(contractuser(lib_path)))
This is only intended for local development and should not be used in production.
"""
lib_path
else
object_store_ffi_jll.libobject_store_ffi
end
"""
$TYPEDEF
Global configuration for the object store requests.
# Keywords
$TYPEDFIELDS
"""
@kwdef struct StaticConfig
"""
The number of worker threads for the native pool,
a value of zero makes it equal to the number of logical cores on the machine.
"""
n_threads::Culonglong
"The maximum capacity for the client cache"
cache_capacity::Culonglong
"The time-to-live in seconds for entries in the client cache"
cache_ttl_secs::Culonglong
"The time-to-idle in seconds for entries in the client cache"
cache_tti_secs::Culonglong
"Put requests with a size in bytes greater than this will use multipart operations"
multipart_put_threshold::Culonglong
"Get requests with a size in bytes greater than this will use multipart operations"
multipart_get_threshold::Culonglong
"The size in bytes for each part of multipart get operations"
multipart_get_part_size::Culonglong
"The max number of allowed Rust request tasks"
concurrency_limit::Cuint
end
function Base.show(io::IO, config::StaticConfig)
print(io, "StaticConfig("),
print(io, "n_threads=", Int(config.n_threads), ",")
print(io, "cache_capacity=", Int(config.cache_capacity), ",")
print(io, "cache_ttl_secs=", Int(config.cache_ttl_secs), ",")
print(io, "cache_tti_secs=", Int(config.cache_tti_secs), ",")
print(io, "multipart_put_threshold=", Int(config.multipart_put_threshold), ",")
print(io, "multipart_get_threshold=", Int(config.multipart_get_threshold), ",")
print(io, "multipart_get_part_size=", Int(config.multipart_get_part_size), ")")
end
const DEFAULT_CONFIG = StaticConfig(
n_threads=0,
cache_capacity=100,
cache_ttl_secs=30 * 60,
cache_tti_secs=5 * 60,
multipart_put_threshold=10 * 1024 * 1024,
multipart_get_threshold=8 * 1024 * 1024,
multipart_get_part_size=8 * 1024 * 1024,
concurrency_limit=512
)
function default_panic_hook()
println("Rust thread panicked, exiting the process")
exit(1)
end
const _OBJECT_STORE_STARTED = Ref(false)
const _INIT_LOCK::ReentrantLock = ReentrantLock()
_PANIC_HOOK::Function = default_panic_hook
struct InitException <: Exception
msg::String
return_code::Cint
end
Base.@ccallable function panic_hook_wrapper()::Cint
global _PANIC_HOOK
_PANIC_HOOK()
return 0
end
# This is the callback that Rust calls to notify a Julia task of a completed operation.
# The argument is transparent to Rust and is simply what gets passed from Julia in the handle
# argument of the @ccall. Currently we pass a pointer to a Base.Event that must be notified to
# wakeup the appropriate task.
Base.@ccallable function notify_result(event_ptr::Ptr{Nothing})::Cint
event = unsafe_pointer_to_objref(event_ptr)::Base.Event
notify(event)
return 0
end
# A dict of all tasks that are waiting some result from Rust
# and should thus not be garbage collected.
# This copies the behavior of Base.preserve_handle.
const tasks_in_flight = IdDict{Task, Int64}()
const preserve_task_lock = Threads.SpinLock()
function preserve_task(x::Task)
@lock preserve_task_lock begin
v = get(tasks_in_flight, x, 0)::Int
tasks_in_flight[x] = v + 1
end
nothing
end
function unpreserve_task(x::Task)
@lock preserve_task_lock begin
v = get(tasks_in_flight, x, 0)::Int
if v == 0
error("unbalanced call to unpreserve_task for $(typeof(x))")
elseif v == 1
pop!(tasks_in_flight, x)
else
tasks_in_flight[x] = v - 1
end
end
nothing
end
"""
init_object_store()
init_object_store(config::StaticConfig)
init_object_store(config::StaticConfig; on_rust_panic::Function)
Initialise object store.
This starts a `tokio` runtime for handling `object_store` requests.
It must be called before sending a request e.g. with `get_object!` or `put_object`.
The runtime is only started once and cannot be re-initialised with a different config,
subsequent `init_object_store` calls have no effect.
An optional panic hook may be provided to react to panics on Rust's native threads.
The default behavior is to log and exit the process.
# Throws
- `InitException`: if the runtime fails to start.
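# Example
A minimal sketch:
```julia
init_object_store()  # start the runtime with the default configuration
# or provide a custom panic hook:
init_object_store(; on_rust_panic = () -> @error("object_store_ffi thread panicked"))
```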
"""
function init_object_store(
config::StaticConfig=DEFAULT_CONFIG;
on_rust_panic::Function=default_panic_hook
)
global _PANIC_HOOK
@lock _INIT_LOCK begin
if _OBJECT_STORE_STARTED[]
return nothing
end
_PANIC_HOOK = on_rust_panic
panic_fn_ptr = @cfunction(panic_hook_wrapper, Cint, ())
fn_ptr = @cfunction(notify_result, Cint, (Ptr{Nothing},))
res = @ccall rust_lib.start(config::StaticConfig, panic_fn_ptr::Ptr{Nothing}, fn_ptr::Ptr{Nothing})::Cint
if res != 0
throw(InitException("Failed to initialise object store runtime.", res))
end
_OBJECT_STORE_STARTED[] = true
end
return nothing
end
macro option_print(obj, name, hide = false)
return esc(:( !isnothing($obj.$name)
&& print(io, ", ", $(string(name)), "=", $hide ? "*****" : repr($obj.$name)) ))
end
function response_error_to_string(response, operation)
err = string("failed to process ", operation, " with error: ", unsafe_string(response.error_message))
@ccall rust_lib.destroy_cstring(response.error_message::Ptr{Cchar})::Cint
return err
end
macro throw_on_error(response, operation, exception)
throw_on_error(response, operation, exception)
end
function throw_on_error(response, operation, exception)
return :( $(esc(:($response.result == 1))) ? throw($exception($response_error_to_string($(esc(response)), $operation))) : $(nothing) )
end
function ensure_wait(event::Base.Event)
for i in 1:20
try
return wait(event)
catch e
@error "cannot skip this wait point to prevent UB, ignoring exception: $(e)"
end
end
@error "ignored too many wait exceptions, giving up"
exit(1)
end
function wait_or_cancel(event::Base.Event, response)
try
return wait(event)
catch e
@ccall rust_lib.cancel_context(response.context::Ptr{Cvoid})::Cint
ensure_wait(event)
@ccall rust_lib.destroy_cstring(response.error_message::Ptr{Cchar})::Cint
rethrow()
finally
@ccall rust_lib.destroy_context(response.context::Ptr{Cvoid})::Cint
end
end
"""
$TYPEDEF
# Keyword Arguments
- `request_timeout_secs::Option{Int}`: (Optional) Client request timeout in seconds.
- `connect_timeout_secs::Option{Int}`: (Optional) Client connection timeout in seconds.
- `max_retries::Option{Int}`: (Optional) Maximum number of retry attempts.
- `retry_timeout_secs::Option{Int}`: (Optional) Maximum amount of time from the initial request after which no further retries will be attempted (in seconds).
- `initial_backoff_ms::Option{Int}`: (Optional) Initial delay for exponential backoff (in milliseconds).
- `max_backoff_ms::Option{Int}`: (Optional) Maximum delay for exponential backoff (in milliseconds).
- `backoff_exp_base::Option{Float64}`: (Optional) The base of the exponential for backoff delay calculations.
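# Example
A sketch with a few of the options:
```julia
opts = ClientOptions(request_timeout_secs = 30, max_retries = 5, initial_backoff_ms = 100)
```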
"""
struct ClientOptions
request_timeout_secs::Option{Int}
connect_timeout_secs::Option{Int}
max_retries::Option{Int}
retry_timeout_secs::Option{Int}
initial_backoff_ms::Option{Int}
max_backoff_ms::Option{Int}
backoff_exp_base::Option{Float64}
params::Dict{String, String}
function ClientOptions(;
request_timeout_secs::Option{Int} = nothing,
connect_timeout_secs::Option{Int} = nothing,
max_retries::Option{Int} = nothing,
retry_timeout_secs::Option{Int} = nothing,
initial_backoff_ms::Option{Int} = nothing,
max_backoff_ms::Option{Int} = nothing,
backoff_exp_base::Option{Float64} = nothing,
)
params = Dict()
if !isnothing(request_timeout_secs)
# Include `s` so parsing on Rust understands this as seconds
params["timeout"] = string(request_timeout_secs, "s")
end
if !isnothing(connect_timeout_secs)
# Include `s` so parsing on Rust understands this as seconds
params["connect_timeout"] = string(connect_timeout_secs, "s")
end
if !isnothing(max_retries)
params["max_retries"] = string(max_retries)
end
if !isnothing(retry_timeout_secs)
# `s` suffix is not required as this field is already expected to be the number of seconds
params["retry_timeout_secs"] = string(retry_timeout_secs)
end
if !isnothing(initial_backoff_ms)
# `ms` suffix is not required as this field is already expected to be the number of milliseconds
params["initial_backoff_ms"] = string(initial_backoff_ms)
end
if !isnothing(max_backoff_ms)
# `ms` suffix is not required as this field is already expected to be the number of milliseconds
params["max_backoff_ms"] = string(max_backoff_ms)
end
if !isnothing(backoff_exp_base)
params["backoff_exp_base"] = string(backoff_exp_base)
end
return new(
request_timeout_secs,
connect_timeout_secs,
max_retries,
retry_timeout_secs,
initial_backoff_ms,
max_backoff_ms,
backoff_exp_base,
params
)
end
end
function Base.show(io::IO, opts::ClientOptions)
print(io, "ClientOptions("),
@option_print(opts, request_timeout_secs)
@option_print(opts, connect_timeout_secs)
@option_print(opts, max_retries)
@option_print(opts, retry_timeout_secs)
@option_print(opts, initial_backoff_ms)
@option_print(opts, max_backoff_ms)
@option_print(opts, backoff_exp_base)
print(io, ")")
end
abstract type AbstractConfig end
"""
$TYPEDEF
Opaque configuration type for dynamic configuration use cases.
This allows passing the url and configuration key-value pairs directly to the underlying library
for validation and dispatching.
It is recommended to reuse an instance for many operations.
# Arguments
- `url::String`: Url of the object store container root path.
It must include the cloud specific url scheme (s3://, azure://, az://).
- `params::Dict{String, String}`: A set of key-value pairs to configure access to the object store.
Refer to the object_store crate documentation for the list of all supported parameters.
"""
struct Config <: AbstractConfig
# The serialized string is stored here instead of the constructor arguments
# in order to avoid any serialization overhead when performing get/put operations.
# For this to be effective the recommended usage pattern is to reuse this object often
# instead of constructing for each use.
config_string::String
function Config(url::String, params::Dict{String, String})
return new(url_params_to_config_string(url, params))
end
end
function url_params_to_config_string(url::String, params::Dict{String, String})
dict = merge(Dict("url" => url), params)
return JSON3.write(dict)
end
into_config(conf::Config) = conf
function Base.show(io::IO, config::Config)
dict = JSON3.read(config.config_string, Dict{String, String})
print(io, "Config(")
print(io, repr(dict["url"]), ", ")
for key in keys(dict)
if occursin(r"secret|token|key", string(key))
dict[key] = "*****"
end
end
print(io, repr(dict))
print(io, ")")
end
const _ConfigFFI = Cstring
function Base.cconvert(::Type{Ref{Config}}, config::Config)
config_ffi = Base.unsafe_convert(Cstring, Base.cconvert(Cstring, config.config_string))::_ConfigFFI
# cconvert ensures its outputs are preserved during a ccall, so we can create a pointer
# safely in the unsafe_convert call.
return config_ffi, Ref(config_ffi)
end
function Base.unsafe_convert(::Type{Ref{Config}}, x::Tuple{T,Ref{T}}) where {T<:_ConfigFFI}
return Base.unsafe_convert(Ptr{_ConfigFFI}, x[2])
end
"""
$TYPEDEF
Configuration for the Azure Blob object store backend.
Only one of `storage_account_key` or `storage_sas_token` is allowed for a given instance.
It is recommended to reuse an instance for many operations.
# Keyword Arguments
- `storage_account_name::String`: Azure storage account name.
- `container_name::String`: Azure container name.
- `storage_account_key::Option{String}`: (Optional) Azure storage account key (conflicts with storage_sas_token).
- `storage_sas_token::Option{String}`: (Optional) Azure storage SAS token (conflicts with storage_account_key).
- `host::Option{String}`: (Optional) Alternative Azure host. For example, if using Azurite.
- `opts::ClientOptions`: (Optional) Client configuration options.
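# Example
A sketch with placeholder values:
```julia
conf = AzureConfig(
    storage_account_name = "myaccount",
    container_name = "mycontainer",
    storage_sas_token = "placeholder-sas-token"
)
```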
"""
struct AzureConfig <: AbstractConfig
storage_account_name::String
container_name::String
storage_account_key::Option{String}
storage_sas_token::Option{String}
host::Option{String}
opts::ClientOptions
cached_config::Config
function AzureConfig(;
storage_account_name::String,
container_name::String,
storage_account_key::Option{String} = nothing,
storage_sas_token::Option{String} = nothing,
host::Option{String} = nothing,
opts::ClientOptions = ClientOptions()
)
if !isnothing(storage_account_key) && !isnothing(storage_sas_token)
error("Should provide either a storage_account_key or a storage_sas_token")
end
params = copy(opts.params)
params["azure_storage_account_name"] = storage_account_name
params["azure_container_name"] = container_name
if !isnothing(storage_account_key)
params["azure_storage_account_key"] = storage_account_key
elseif !isnothing(storage_sas_token)
params["azure_storage_sas_token"] = storage_sas_token
end
if !isnothing(host)
params["azurite_host"] = host
end
if isnothing(storage_account_key) && isnothing(storage_sas_token)
params["azure_skip_signature"] = "true"
end
map!(v -> strip(v), values(params))
cached_config = Config("az://$(strip(container_name))/", params)
return new(
storage_account_name,
container_name,
storage_account_key,
storage_sas_token,
host,
opts,
cached_config
)
end
end
into_config(conf::AzureConfig) = conf.cached_config
function Base.show(io::IO, conf::AzureConfig)
print(io, "AzureConfig("),
print(io, "storage_account_name=", repr(conf.storage_account_name), ", ")
print(io, "container_name=", repr(conf.container_name))
@option_print(conf, storage_account_key, true)
@option_print(conf, storage_sas_token, true)
@option_print(conf, host)
print(io, ", ", "opts=", repr(conf.opts), ")")
end
"""
$TYPEDEF
Configuration for the AWS S3 object store backend.
It is recommended to reuse an instance for many operations.
# Keyword Arguments
- `region::String`: AWS S3 region.
- `bucket_name::String`: AWS S3 bucket name.
- `access_key_id::Option{String}`: (Optional) AWS S3 access key id.
- `secret_access_key::Option{String}`: (Optional) AWS S3 secret access key.
- `session_token::Option{String}`: (Optional) AWS S3 session_token.
- `host::Option{String}`: (Optional) Alternative S3 host. For example, if using Minio.
- `opts::ClientOptions`: (Optional) Client configuration options.
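# Example
A sketch with placeholder values:
```julia
conf = AWSConfig(
    region = "us-east-1",
    bucket_name = "my-bucket",
    access_key_id = "placeholder-key-id",
    secret_access_key = "placeholder-secret"
)
```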
"""
struct AWSConfig <: AbstractConfig
region::String
bucket_name::String
access_key_id::Option{String}
secret_access_key::Option{String}
session_token::Option{String}
use_instance_metadata::Bool
host::Option{String}
opts::ClientOptions
cached_config::Config
function AWSConfig(;
region::String,
bucket_name::String,
access_key_id::Option{String} = nothing,
secret_access_key::Option{String} = nothing,
session_token::Option{String} = nothing,
use_instance_metadata::Bool = false,
host::Option{String} = nothing,
opts::ClientOptions = ClientOptions()
)
params = copy(opts.params)
params["region"] = region
params["bucket_name"] = bucket_name
if !isnothing(access_key_id)
params["aws_access_key_id"] = access_key_id
end
if !isnothing(secret_access_key)
params["aws_secret_access_key"] = secret_access_key
end
if !isnothing(session_token)
params["aws_session_token"] = session_token
end
if !isnothing(host)
params["minio_host"] = host
else
params["aws_virtual_hosted_style_request"] = "true"
end
if !use_instance_metadata && isnothing(access_key_id)
params["aws_skip_signature"] = "true"
end
if use_instance_metadata && (!isnothing(access_key_id) || !isnothing(secret_access_key))
error("Credentials should not be provided when using instance metadata")
end
map!(v -> strip(v), values(params))
cached_config = Config("s3://$(strip(bucket_name))/", params)
return new(
region,
bucket_name,
access_key_id,
secret_access_key,
session_token,
use_instance_metadata,
host,
opts,
cached_config
)
end
end
into_config(conf::AWSConfig) = conf.cached_config
function Base.show(io::IO, conf::AWSConfig)
print(io, "AWSConfig("),
print(io, "region=", repr(conf.region), ", ")
print(io, "bucket_name=", repr(conf.bucket_name))
@option_print(conf, access_key_id, true)
@option_print(conf, secret_access_key, true)
@option_print(conf, session_token, true)
conf.use_instance_metadata && print(io, "use_instance_metadata=", repr(conf.use_instance_metadata))
@option_print(conf, host)
print(io, ", ", "opts=", repr(conf.opts), ")")
end
mutable struct Response
result::Cint
length::Culonglong
error_message::Ptr{Cchar}
context::Ptr{Cvoid}
Response() = new(-1, 0, C_NULL, C_NULL)
end
abstract type ErrorReason end
struct ConnectionError <: ErrorReason end
struct StatusError <: ErrorReason
code::Int
end
struct EarlyEOF <: ErrorReason end
struct TimeoutError <: ErrorReason end
struct ParseURLError <: ErrorReason end
struct UnknownError <: ErrorReason end
reason_description(::ConnectionError) = "Connection"
reason_description(r::StatusError) = "StatusCode($(r.code))"
reason_description(::EarlyEOF) = "EarlyEOF"
reason_description(::TimeoutError) = "Timeout"
reason_description(::ParseURLError) = "ParseURL"
reason_description(::UnknownError) = "Unknown"
abstract type RequestException <: Exception end
struct GetException <: RequestException
msg::String
reason::ErrorReason
GetException(msg) = new(msg, rust_message_to_reason(msg))
end
struct PutException <: RequestException
msg::String
reason::ErrorReason
PutException(msg) = new(msg, rust_message_to_reason(msg))
end
struct DeleteException <: RequestException
msg::String
reason::ErrorReason
DeleteException(msg) = new(msg, rust_message_to_reason(msg))
end
struct ListException <: RequestException
msg::String
reason::ErrorReason
ListException(msg) = new(msg, rust_message_to_reason(msg))
end
message(e::GetException) = e.msg::String
message(e::PutException) = e.msg::String
message(e::ListException) = e.msg::String
message(e::DeleteException) = e.msg::String
function message(e::Exception)
iobuf = IOBuffer()
Base.showerror(iobuf, e)
return String(take!(iobuf))
end
reason(e::GetException) = e.reason::ErrorReason
reason(e::PutException) = e.reason::ErrorReason
reason(e::ListException) = e.reason::ErrorReason
reason(e::DeleteException) = e.reason::ErrorReason
reason(e::Exception) = UnknownError()
function status_code(e::Exception)
return reason(e) isa StatusError ? reason(e).code : nothing
end
function is_connection(e::Exception)
return reason(e) isa ConnectionError
end
function is_timeout(e::Exception)
return reason(e) isa TimeoutError
end
function is_early_eof(e::Exception)
return reason(e) isa EarlyEOF
end
function is_parse_url(e::Exception)
return reason(e) isa ParseURLError
end
function is_unknown(e::Exception)
return reason(e) isa UnknownError
end
function safe_message(e::Exception)
if e isa RequestException
msg = message(e)
r = reason(e)
if contains(msg, "<Error>") || contains(msg, "http")
# Contains unredacted payload from the backend or URLs, try extracting safe information
code, backend_msg, report = extract_safe_parts(message(e))
reason_str = reason_description(r)
code = isnothing(code) ? "Unknown" : code
backend_msg = isnothing(backend_msg) ? "Error without safe message" : backend_msg
retry_report = isnothing(report) ? "" : "\n\n$(report)"
return "$(backend_msg) (code: $(code), reason: $(reason_str))$(retry_report)"
else
# Assume it is safe since it does not come from the backend and contains no URLs; return the message directly
return msg
end
else
return nothing
end
end
function rust_message_to_reason(msg::AbstractString)
if (
contains(msg, "connection error")
|| contains(msg, "tcp connect error")
|| contains(msg, "error trying to connect")
|| contains(msg, "client error (Connect)")
) && (
contains(msg, "deadline has elapsed")
|| contains(msg, "Connection refused")
|| contains(msg, "Connection reset by peer")
|| contains(msg, "dns error")
)
return ConnectionError()
elseif contains(msg, "Client error with status")
m = match(r"Client error with status (\d+) ", msg)
if !isnothing(m)
code = tryparse(Int, m.captures[1])
if !isnothing(code)
return StatusError(code)
else
return UnknownError()
end
else
return UnknownError()
end
elseif contains(msg, "HTTP status server error")
m = match(r"HTTP status server error \((\d+) ", msg)
if !isnothing(m)
code = tryparse(Int, m.captures[1])
if !isnothing(code)
return StatusError(code)
else
return UnknownError()
end
else
return UnknownError()
end
elseif contains(msg, "connection closed before message completed") ||
contains(msg, "end of file before message length reached") ||
contains(msg, "Connection reset by peer")
return EarlyEOF()
elseif contains(msg, "timed out")
return TimeoutError()
elseif contains(msg, "Unable to convert URL") ||
contains(msg, "Unable to recognise URL")
return ParseURLError()
else
return UnknownError()
end
end
function extract_safe_parts(msg::AbstractString)
code = nothing
backend_message = nothing
retry_report = nothing
codemsg = match(r"<Error>[\s\S]*?<Code>([\s\S]*?)</Code>[\s\S]*?<Message>([\s\S]*?)(?:</Message>|\n)", msg)
if !isnothing(codemsg)
code = codemsg.captures[1]
backend_message = codemsg.captures[2]
end
retry_match = match(r"Recent attempts \([\s\S]*", msg)
if !isnothing(retry_match)
retry_report = retry_match.match
end
return (code, backend_message, retry_report)
end
"""
get_object!(buffer, path, conf) -> Int
Send a get request to the object store.
Fetches the data bytes at `path` and writes them to the given `buffer`.
# Arguments
- `buffer::AbstractVector{UInt8}`: The buffer to write the object data to.
The contents of the buffer will be mutated.
The buffer must be at least as large as the data.
The buffer will not be resized.
- `path::String`: The location of the data to fetch.
- `conf::AbstractConfig`: The configuration to use for the request.
It includes credentials and other client options.
# Returns
- `nbytes::Int`: The number of bytes read from the object store and written to the buffer.
That is, `buffer[1:nbytes]` will contain the object data.
# Throws
- `GetException`: If the request fails for any reason, including if the `buffer` is too small.
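# Example
A minimal sketch (assumes `conf` is a previously constructed configuration and the object exists and fits in the buffer):
```julia
buffer = Vector{UInt8}(undef, 1024 * 1024)
nbytes = get_object!(buffer, "path/to/object", conf)
data = buffer[1:nbytes]
```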
"""
function get_object!(buffer::AbstractVector{UInt8}, path::String, conf::AbstractConfig)
response = Response()
size = length(buffer)
ct = current_task()
event = Base.Event()
handle = pointer_from_objref(event)
config = into_config(conf)
while true
preserve_task(ct)
result = GC.@preserve buffer config response event try
result = @ccall rust_lib.get(
path::Cstring,
buffer::Ref{Cuchar},
size::Culonglong,
config::Ref{Config},
response::Ref{Response},
handle::Ptr{Cvoid}
)::Cint
wait_or_cancel(event, response)
result
finally
unpreserve_task(ct)
end
if result == 2
# backoff
sleep(0.01)
continue
end
@throw_on_error(response, "get", GetException)
return Int(response.length)
end
end
"""
put_object(buffer, path, conf) -> Int
Send a put request to the object store.
Atomically writes the data bytes in `buffer` to `path`.
# Arguments
- `buffer::AbstractVector{UInt8}`: The data to write to the object store.
This buffer will not be mutated.
- `path::String`: The location to write data to.
- `conf::AbstractConfig`: The configuration to use for the request.
It includes credentials and other client options.
# Returns
- `nbytes::Int`: The number of bytes written to the object store.
Is always equal to `length(buffer)`.
# Throws
- `PutException`: If the request fails for any reason.
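# Example
A minimal sketch (assumes `conf` is a previously constructed configuration):
```julia
data = Vector{UInt8}("hello world!")
nbytes = put_object(data, "path/to/object", conf)
@assert nbytes == length(data)
```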
"""
function put_object(buffer::AbstractVector{UInt8}, path::String, conf::AbstractConfig)
response = Response()
size = length(buffer)
ct = current_task()
event = Base.Event()
handle = pointer_from_objref(event)
config = into_config(conf)
while true
preserve_task(ct)
result = GC.@preserve buffer config response event try
result = @ccall rust_lib.put(
path::Cstring,
buffer::Ref{Cuchar},
size::Culonglong,
config::Ref{Config},
response::Ref{Response},
handle::Ptr{Cvoid}
)::Cint
wait_or_cancel(event, response)
result
finally
unpreserve_task(ct)
end
if result == 2
# backoff
sleep(0.01)
continue
end
@throw_on_error(response, "put", PutException)
return Int(response.length)
end
end
"""
delete_object(path, conf)
Send a delete request to the object store.
# Arguments
- `path::String`: The location of the object to delete.
- `conf::AbstractConfig`: The configuration to use for the request.
It includes credentials and other client options.
# Throws
- `DeleteException`: If the request fails for any reason. Note that S3 will treat a delete request
to a non-existing object as a success, while Azure Blob will treat it as a 404 error.
"""
function delete_object(path::String, conf::AbstractConfig)
response = Response()
ct = current_task()
event = Base.Event()
handle = pointer_from_objref(event)
config = into_config(conf)
while true
preserve_task(ct)
result = GC.@preserve config response event try
result = @ccall rust_lib.delete(
path::Cstring,
config::Ref{Config},
response::Ref{Response},
handle::Ptr{Cvoid}
)::Cint
wait_or_cancel(event, response)
result
finally
unpreserve_task(ct)
end
if result == 2
# backoff
sleep(0.01)
continue
end
@throw_on_error(response, "delete_object", DeleteException)
return nothing
end
end
mutable struct ReadResponseFFI
result::Cint
length::Culonglong
eof::Cuchar
error_message::Ptr{Cchar}
context::Ptr{Cvoid}
ReadResponseFFI() = new(-1, 0, 0, C_NULL, C_NULL)
end
mutable struct ReadStreamResponseFFI
result::Cint
stream::Ptr{Nothing}
object_size::Culonglong
error_message::Ptr{Cchar}
context::Ptr{Cvoid}
ReadStreamResponseFFI() = new(-1, C_NULL, 0, C_NULL, C_NULL)
end
"""
ReadStream
Opaque IO stream of object data.
It is necessary to `Base.close` the stream if it is not run to completion.
"""
mutable struct ReadStream <: IO
ptr::Ptr{Nothing}
object_size::Int
bytes_read::Int
ended::Bool
error::Option{String}
end
function Base.eof(io::ReadStream)
if io.ended
return true
elseif !isnothing(io.error)
throw("stream stopped by prevoius error: $(io.error)")
elseif bytesavailable(io) > 0
return false
else
response = ReadResponseFFI()
ct = current_task()
event = Base.Event()
handle = pointer_from_objref(event)
preserve_task(ct)
GC.@preserve io response event try
result = @ccall rust_lib.is_end_of_stream(
io.ptr::Ptr{Cvoid},
response::Ref{ReadResponseFFI},
handle::Ptr{Cvoid}
)::Cint
@assert result == 0
wait_or_cancel(event, response)
finally
unpreserve_task(ct)
end
try
@throw_on_error(response, "is_end_of_stream", GetException)
catch e
stream_error!(io, e.msg)
rethrow()
end
eof = response.eof > 0
if eof
stream_end!(io)
end
return eof
end
end
function Base.bytesavailable(io::ReadStream)
if !Base.isopen(io)
return 0
else
result = @ccall rust_lib.bytes_available(io.ptr::Ptr{Cvoid})::Clonglong
@assert result >= 0
return Int(result)
end
end
function Base.close(io::ReadStream)
finish!(io)
return nothing
end
Base.isopen(io::ReadStream) = !io.ended && isnothing(io.error)
Base.iswritable(io::ReadStream) = false
Base.filesize(io::ReadStream) = io.object_size
function stream_end!(io::ReadStream)
@assert Base.isopen(io)
io.ended = true
@ccall rust_lib.destroy_read_stream(io.ptr::Ptr{Nothing})::Cint
end
function stream_error!(io::ReadStream, err::String)
@assert Base.isopen(io)
io.error = err
@ccall rust_lib.destroy_read_stream(io.ptr::Ptr{Nothing})::Cint
end
function Base.readbytes!(io::ReadStream, dest::AbstractVector{UInt8}, n)
eof(io) && return 0
if n == typemax(Int)
bytes_read = 0
while !eof(io)
bytes_to_read = 128 * 1024
bytes_read + bytes_to_read > length(dest) && resize!(dest, bytes_read + bytes_to_read)
bytes_read += GC.@preserve dest _unsafe_read(io, pointer(dest, bytes_read+1), bytes_to_read)
end
resize!(dest, bytes_read)
return bytes_read
else
bytes_to_read = Int(n) # n != typemax(Int) in this branch
bytes_to_read > length(dest) && resize!(dest, bytes_to_read)
bytes_read = GC.@preserve dest _unsafe_read(io, pointer(dest), bytes_to_read)
return bytes_read
end
end
function Base.unsafe_read(io::ReadStream, p::Ptr{UInt8}, nb::UInt)
if eof(io)
nb > 0 && throw(EOFError())
return nothing
end
bytes_read = _unsafe_read(io, p, Int(nb))
eof(io) && nb > bytes_read && throw(EOFError())
return nothing
end
# TranscodingStreams.jl calls this method when Base.bytesavailable is zero
# to trigger a buffer refill
function Base.read(io::ReadStream, ::Type{UInt8})
eof(io) && throw(EOFError())
buf = zeros(UInt8, 1)
n = _unsafe_read(io, pointer(buf), 1)
n < 1 && throw(EOFError())
@inbounds b = buf[1]
return b
end
function _forward(to::IO, from::IO)
buf = Vector{UInt8}(undef, 64 * 1024)
n = 0
while !eof(from)
bytes_read = readbytes!(from, buf, 64 * 1024)
bytes_written = 0
while bytes_written < bytes_read
bytes_written += write(to, buf[bytes_written+1:bytes_read])
end
n += bytes_written
end
return n
end
function Base.write(to::IO, from::ReadStream)
return _forward(to, from)
end
"""
get_object_stream(path, conf; size_hint, decompress) -> ReadStream
Send a get request to the object store returning a stream of object data.
# Arguments
- `path::String`: The location of the data to fetch.
- `conf::AbstractConfig`: The configuration to use for the request.
It includes credentials and other client options.
# Keyword
- `size_hint::Int`: (Optional) Expected size of the object (optimization for small objects).
- `decompress::Option{String}`: (Optional) Compression algorithm to decode the response stream (supports gzip, deflate, zlib or zstd)
# Returns
- `stream::ReadStream`: The stream of object data chunks.
# Throws
- `GetException`: If the request fails for any reason.
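# Example
A minimal sketch (assumes `conf` and an existing object); `ReadStream` is an `IO`, so the generic read API applies:
```julia
stream = get_object_stream("path/to/object", conf)
data = read(stream)
close(stream)
```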
"""
function get_object_stream(path::String, conf::AbstractConfig; size_hint::Int=0, decompress::String="")
response = ReadStreamResponseFFI()
ct = current_task()
event = Base.Event()
handle = pointer_from_objref(event)
config = into_config(conf)
hint = convert(UInt64, size_hint)
while true
preserve_task(ct)
result = GC.@preserve config response event try
result = @ccall rust_lib.get_stream(
path::Cstring,
hint::Culonglong,
decompress::Cstring,
config::Ref{Config},
response::Ref{ReadStreamResponseFFI},
handle::Ptr{Cvoid}
)::Cint
wait_or_cancel(event, response)
result
finally
unpreserve_task(ct)
end
if result == 2
# backoff
sleep(0.01)
continue
end
# No need to destroy_read_stream in case of errors here
@throw_on_error(response, "get_stream", GetException)
return ReadStream(
response.stream,
convert(Int, response.object_size),
0,
false,
nothing
)
end
end
function _unsafe_read(stream::ReadStream, dest::Ptr{UInt8}, bytes_to_read::Int)
if stream.ended
return nothing
end
if !isnothing(stream.error)
throw("stream stopped by prevoius error: $(stream.error)")
end
response = ReadResponseFFI()
ct = current_task()
event = Base.Event()
handle = pointer_from_objref(event)
preserve_task(ct)
GC.@preserve stream dest response event try
result = @ccall rust_lib.read_from_stream(
stream.ptr::Ptr{Cvoid},
dest::Ptr{UInt8},
bytes_to_read::Culonglong,
bytes_to_read::Culonglong,
response::Ref{ReadResponseFFI},
handle::Ptr{Cvoid}
)::Cint
wait_or_cancel(event, response)
finally
unpreserve_task(ct)
end
try
@throw_on_error(response, "read_from_stream", GetException)
catch e
stream_error!(stream, e.msg)
rethrow()
end
if response.length > 0
stream.bytes_read += response.length
if response.eof == 0
return convert(Int, response.length)
else
stream_end!(stream)
return convert(Int, response.length)
end
else
stream_end!(stream)
return nothing
end
end
"""
finish!(stream::ReadStream) -> Bool
Finishes the stream reclaiming resources.
This function is not thread-safe.
# Arguments
- `stream::ReadStream`: The stream of object data.
# Returns
- `was_running::Bool`: Indicates if the stream was running when `finish!` was called.
"""
function finish!(stream::ReadStream)
if !Base.isopen(stream)
return false
end
stream_end!(stream)
return true
end
mutable struct WriteResponseFFI
result::Cint
length::Culonglong
error_message::Ptr{Cchar}
context::Ptr{Cvoid}
WriteResponseFFI() = new(-1, 0, C_NULL, C_NULL)
end
mutable struct WriteStreamResponseFFI
result::Cint
stream::Ptr{Nothing}
error_message::Ptr{Cchar}
context::Ptr{Cvoid}
WriteStreamResponseFFI() = new(-1, C_NULL, C_NULL, C_NULL)
end
"""
WriteStream
Opaque IO sink of object data.
It is necessary to call `shutdown!` to ensure data is persisted, or `cancel!` if the stream is to be discarded.
"""
mutable struct WriteStream <: IO
ptr::Ptr{Nothing}
bytes_written::Int
destroyed::Bool
error::Option{String}
end
"""
put_object_stream(path, conf; compress) -> WriteStream
Send a put request to the object store returning a stream to write data into.
# Arguments
- `path::String`: The location where to write the object.
- `conf::AbstractConfig`: The configuration to use for the request.
It includes credentials and other client options.
# Keyword
- `compress::Option{String}`: (Optional) Compression algorithm to encode the stream (supports gzip, deflate, zlib or zstd)
# Returns
- `stream::WriteStream`: The stream where to write object data.
# Throws
- `PutException`: If the request fails for any reason.
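# Example
A minimal sketch (assumes `conf`); `shutdown!` must be called to persist the data:
```julia
stream = put_object_stream("path/to/object", conf)
write(stream, Vector{UInt8}("chunk of data"))
shutdown!(stream)
```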
"""
function put_object_stream(path::String, conf::AbstractConfig; compress::String="")
response = WriteStreamResponseFFI()
ct = current_task()
event = Base.Event()
handle = pointer_from_objref(event)
config = into_config(conf)
while true
preserve_task(ct)
result = GC.@preserve config response event try
result = @ccall rust_lib.put_stream(
path::Cstring,
compress::Cstring,
config::Ref{Config},
response::Ref{WriteStreamResponseFFI},
handle::Ptr{Cvoid}
)::Cint
wait_or_cancel(event, response)
result
finally
unpreserve_task(ct)
end
if result == 2
# backoff
sleep(0.01)
continue
end
# No need to destroy_write_stream in case of errors here
@throw_on_error(response, "put_stream", PutException)
return WriteStream(
response.stream,
0,
false,
nothing
)
end
end
"""
cancel!(stream::WriteStream) -> Bool
Cancels the stream reclaiming resources.
No partial writes will be observed.
This function is not thread-safe.
# Arguments
- `stream::WriteStream`: The writeable stream to be canceled.
# Returns
- `was_writeable::Bool`: Indicates if the stream was writeable when `cancel!` was called.
"""
function cancel!(stream::WriteStream)
if !Base.isopen(stream)
return false
end
stream_destroy(stream)
return true
end
"""
shutdown!(stream::WriteStream)
Shuts down the stream ensuring the data is persisted.
On failure partial writes will NOT be observed.
This function is not thread-safe.
# Arguments
- `stream::WriteStream`: The writeable stream to be shutdown.
"""
function shutdown!(stream::WriteStream)
if !isnothing(stream.error)
throw(PutException("Tried to shutdown a stream in error state, previous error: $(stream.error)"))
end
if stream.destroyed
throw(PutException("Tried to shutdown a destroyed stream (from a previous `cancel!` or `shutdown!`)"))
end
response = WriteResponseFFI()
ct = current_task()
event = Base.Event()
handle = pointer_from_objref(event)
GC.@preserve stream response event try
preserve_task(ct)
result = @ccall rust_lib.shutdown_write_stream(
stream.ptr::Ptr{Cvoid},
response::Ref{WriteResponseFFI},
handle::Ptr{Cvoid}
)::Cint
@assert result == 0
wait_or_cancel(event, response)
finally
unpreserve_task(ct)
end
try
@throw_on_error(response, "shutdown_write_stream", PutException)
catch e
stream_error!(stream, e.msg)
rethrow()
end
if response.result == 0
stream_destroy(stream)
return nothing
else
@assert false "unreachable"
end
end
Base.isopen(io::WriteStream) = !io.destroyed && isnothing(io.error)
Base.iswritable(io::WriteStream) = true
function Base.close(io::WriteStream)
shutdown!(io)
return nothing
end
function Base.flush(stream::WriteStream)
_unsafe_write(stream, convert(Ptr{UInt8}, C_NULL), 0; flush=true)
return nothing
end
function Base.unsafe_write(stream::WriteStream, input::Ptr{UInt8}, nbytes::Int)
# return the number of bytes written, as expected by the Base IO write contract
return _unsafe_write(stream, input, nbytes)
end
function Base.write(io::WriteStream, bytes::Vector{UInt8})
return _unsafe_write(io, pointer(bytes), length(bytes))
end
function Base.write(to::WriteStream, from::IO)
return _forward(to, from)
end
function Base.write(to::WriteStream, from::ReadStream)
return _forward(to, from)
end
function stream_destroy(io::WriteStream)
@assert Base.isopen(io)
io.destroyed = true
@ccall rust_lib.destroy_write_stream(io.ptr::Ptr{Nothing})::Cint
end
function stream_error!(io::WriteStream, err::String)
@assert Base.isopen(io)
io.error = err
@ccall rust_lib.destroy_write_stream(io.ptr::Ptr{Nothing})::Cint
end
function _unsafe_write(stream::WriteStream, input::Ptr{UInt8}, nbytes::Int; flush=false)
if !isnothing(stream.error)
throw(PutException("Tried to write to a stream in error state, previous error: $(stream.error)"))
end
if stream.destroyed
throw(PutException("Tried to write to a destroyed stream (from a previous `cancel!` or `shutdown!`)"))
end
response = WriteResponseFFI()
ct = current_task()
event = Base.Event()
handle = pointer_from_objref(event)
GC.@preserve stream response event try
preserve_task(ct)
result = @ccall rust_lib.write_to_stream(
stream.ptr::Ptr{Cvoid},
input::Ptr{UInt8},
nbytes::Culonglong,
flush::Cuchar,
response::Ref{WriteResponseFFI},
handle::Ptr{Cvoid}
)::Cint
@assert result == 0
wait_or_cancel(event, response)
finally
unpreserve_task(ct)
end
try
@throw_on_error(response, "write_to_stream", PutException)
catch e
stream_error!(stream, e.msg)
rethrow()
end
@assert response.result == 0
stream.bytes_written += response.length
return Int(response.length)
end
# List operations
"""
function max_entries_per_chunk()::Int
Return the maximum number of entries a listing stream chunk can hold.
This is kept in sync manually with the Rust crate for now; it should later be re-exported.
"""
max_entries_per_chunk() = 1000
struct ListEntryFFI
location::Cstring
last_modified::Culonglong
size::Culonglong
e_tag::Cstring
version::Cstring
end
struct ListEntry
location::String
last_modified::Int
size::Int
e_tag::Option{String}
version::Option{String}
end
function convert_list_entry(entry::ListEntryFFI)
return ListEntry(
unsafe_string(entry.location),
convert(Int, entry.last_modified),
convert(Int, entry.size),
entry.e_tag != C_NULL ? unsafe_string(entry.e_tag) : nothing,
entry.version != C_NULL ? unsafe_string(entry.version) : nothing
)
end
mutable struct ListResponseFFI
result::Cint
entries::Ptr{ListEntryFFI}
entry_count::Culonglong
error_message::Ptr{Cchar}
context::Ptr{Cvoid}
ListResponseFFI() = new(-1, C_NULL, 0, C_NULL, C_NULL)
end
"""
list_objects(prefix, conf; offset) -> Vector{ListEntry}
Send a list request to the object store.
This buffers all entries in memory. For large (or unknown) object counts use `list_objects_stream`.
# Arguments
- `prefix::String`: Only objects with this prefix will be returned.
- `conf::AbstractConfig`: The configuration to use for the request.
It includes credentials and other client options.
# Keyword Arguments
- `offset::Option{String}`: (Optional) Start listing after this offset
# Returns
- `entries::Vector{ListEntry}`: The array with metadata for each object in the prefix.
Returns an empty array if no objects match.
# Throws
- `ListException`: If the request fails for any reason.
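# Example
A minimal sketch (assumes `conf`):
```julia
entries = list_objects("some/prefix/", conf)
for entry in entries
    println(entry.location, " => ", entry.size, " bytes")
end
```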
"""
function list_objects(prefix::String, conf::AbstractConfig; offset::Option{String} = nothing)
response = ListResponseFFI()
ct = current_task()
event = Base.Event()
handle = pointer_from_objref(event)
config = into_config(conf)
c_offset = if isnothing(offset)
C_NULL
else
offset
end
while true
preserve_task(ct)
result = GC.@preserve config response event try
result = @ccall rust_lib.list(
prefix::Cstring,
c_offset::Cstring,
config::Ref{Config},
response::Ref{ListResponseFFI},
handle::Ptr{Cvoid}
)::Cint
wait_or_cancel(event, response)
result
finally
unpreserve_task(ct)
end
if result == 2
# backoff
sleep(0.01)
continue
end
# No need to destroy_list_response in case of errors here
@throw_on_error(response, "list", ListException)
entries = if response.entry_count > 0
raw_entries = unsafe_wrap(Array, response.entries, response.entry_count)
vector = map(convert_list_entry, raw_entries)
@ccall rust_lib.destroy_list_entries(
response.entries::Ptr{ListEntryFFI},
response.entry_count::Culonglong
)::Cint
vector
else
ListEntry[]
end
return entries
end
end
mutable struct ListStreamResponseFFI
result::Cint
stream::Ptr{Nothing}
error_message::Ptr{Cchar}
context::Ptr{Cvoid}
ListStreamResponseFFI() = new(-1, C_NULL, C_NULL, C_NULL)
end
"""
ListStream
Opaque stream of metadata list chunks (Vector{ListEntry}).
Use `next_chunk!` repeatedly to fetch data; a `nothing` return value indicates end of stream.
The stream stops if an error occurs; any following calls to `next_chunk!` will replay the same error.
It is necessary to `finish!` the stream if it is not run to completion.
"""
mutable struct ListStream
ptr::Ptr{Nothing}
ended::Bool
error::Option{String}
end
function stream_end!(stream::ListStream)
@assert (!stream.ended && isnothing(stream.error))
stream.ended = true
@ccall rust_lib.destroy_list_stream(stream.ptr::Ptr{Nothing})::Cint
end
function stream_error!(stream::ListStream, err::String)
@assert (!stream.ended && isnothing(stream.error))
stream.error = err
@ccall rust_lib.destroy_list_stream(stream.ptr::Ptr{Nothing})::Cint
end
"""
list_objects_stream(prefix, conf) -> ListStream
Send a list request to the object store returning a stream of entry chunks.
# Arguments
- `prefix::String`: Only objects with this prefix will be returned.
- `conf::AbstractConfig`: The configuration to use for the request.
It includes credentials and other client options.
# Keyword Arguments
- `offset::Option{String}`: (Optional) Start listing after this offset
# Returns
- `stream::ListStream`: The stream of object metadata chunks.
# Throws
- `ListException`: If the request fails for any reason.
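# Examples
A minimal sketch (see the `ListStream` docstring for the full consumption pattern);
`config` is any previously constructed `AbstractConfig`:
```julia
stream = list_objects_stream("data/", config)
```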
"""
function list_objects_stream(prefix::String, conf::AbstractConfig; offset::Option{String} = nothing)
response = ListStreamResponseFFI()
ct = current_task()
event = Base.Event()
handle = pointer_from_objref(event)
config = into_config(conf)
c_offset = if isnothing(offset)
C_NULL
else
offset
end
while true
preserve_task(ct)
result = GC.@preserve config response event try
result = @ccall rust_lib.list_stream(
prefix::Cstring,
c_offset::Cstring,
config::Ref{Config},
response::Ref{ListStreamResponseFFI},
handle::Ptr{Cvoid}
)::Cint
wait_or_cancel(event, response)
result
finally
unpreserve_task(ct)
end
if result == 2
# backoff
sleep(0.01)
continue
end
# No need to destroy_list_stream in case of errors here
@throw_on_error(response, "list_stream", ListException)
return ListStream(response.stream, false, nothing)
end
end
"""
next_chunk!(stream) -> Option{Vector{ListEntry}}
Fetch the next chunk of entries from a `ListStream`.
If the returned entries are known to be the last in the stream, `stream.ended` is set to `true`.
Returns `nothing` once the stream is exhausted.
After an error, any following call replays the same error.
# Arguments
- `stream::ListStream`: The stream of object metadata list chunks.
# Returns
- `entries::Option{Vector{ListEntry}}`: The metadata for the next batch of objects in the prefix,
or `nothing` if no objects match or the stream is over.
# Throws
- `ListException`: If the request fails for any reason.
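# Examples
A minimal draining sketch; `stream` comes from `list_objects_stream` and
`process` is a placeholder for user code:
```julia
while (chunk = next_chunk!(stream)) !== nothing
    process(chunk)
end
```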
"""
function next_chunk!(stream::ListStream)
if !isnothing(stream.error)
throw(ListException("Tried to fetch next chunk from a stream in error state, previous error: $(stream.error)"))
end
if stream.ended
return nothing
end
response = ListResponseFFI()
ct = current_task()
event = Base.Event()
handle = pointer_from_objref(event)
GC.@preserve stream response event try
preserve_task(ct)
result = @ccall rust_lib.next_list_stream_chunk(
stream.ptr::Ptr{Cvoid},
response::Ref{ListResponseFFI},
handle::Ptr{Cvoid}
)::Cint
@assert result == 0
wait_or_cancel(event, response)
finally
unpreserve_task(ct)
end
try
@throw_on_error(response, "next_list_stream_chunk", ListException)
catch e
stream_error!(stream, e.msg)
rethrow()
end
@assert response.result == 0
# To avoid calling `next_chunk!` again on a practically ended stream, we mark
# the stream as ended if the response has fewer entries than the chunk maximum.
# This is safe to do because the Rust backend always fills the chunk to the maximum
# unless the underlying stream is drained.
if response.entry_count < max_entries_per_chunk()
stream_end!(stream)
end
if response.entry_count > 0
raw_entries = unsafe_wrap(Array, response.entries, response.entry_count)
vector = map(convert_list_entry, raw_entries)
@ccall rust_lib.destroy_list_entries(
response.entries::Ptr{ListEntryFFI},
response.entry_count::Culonglong
)::Cint
return vector
else
return nothing
end
end
"""
finish!(stream) -> Bool
Finishes the stream, reclaiming its native resources.
This function is not thread-safe.
# Arguments
- `stream::ListStream`: The stream of object metadata list chunks.
# Returns
- `was_running::Bool`: Indicates if the stream was running when `finish!` was called.
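# Examples
A minimal early-exit sketch; `stream` comes from `list_objects_stream`:
```julia
chunk = next_chunk!(stream)  # consume only the first chunk
finish!(stream)              # release native resources without draining the stream
```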
"""
function finish!(stream::ListStream)
if stream.ended || !isnothing(stream.error)
return false
end
stream_end!(stream)
return true
end
struct Metrics
live_bytes::Int64
end
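"""
current_metrics() -> Metrics

Return allocation metrics reported by the native (Rust) library; `live_bytes`
is the number of currently live bytes it tracks. (Description inferred from the
field name; the Rust implementation is authoritative.)
"""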
function current_metrics()
return @ccall rust_lib.current_metrics()::Metrics
end
end # module
| RustyObjectStore | https://github.com/RelationalAI/RustyObjectStore.jl.git |
|
[
"MIT"
] | 0.8.2 | 84a23be59399f9addb44021a7a4c7af2eb966589 | code | 981 | @testitem "AWSConfig" begin
# access key is obscured when printing
@test repr(AWSConfig(;
region="a",
bucket_name="b",
access_key_id="c",
secret_access_key="d"
)) == "AWSConfig(region=\"a\", bucket_name=\"b\", access_key_id=*****, secret_access_key=*****, opts=ClientOptions())"
# session token is obscured when printing
@test repr(AWSConfig(;
region="a",
bucket_name="b",
access_key_id="c",
secret_access_key="d",
session_token="d"
)) == "AWSConfig(region=\"a\", bucket_name=\"b\", access_key_id=*****, secret_access_key=*****, session_token=*****, opts=ClientOptions())"
# host is supported
@test repr(AWSConfig(;
region="a",
bucket_name="b",
access_key_id="c",
secret_access_key="d",
host="d"
)) == "AWSConfig(region=\"a\", bucket_name=\"b\", access_key_id=*****, secret_access_key=*****, host=\"d\", opts=ClientOptions())"
end
| RustyObjectStore | https://github.com/RelationalAI/RustyObjectStore.jl.git |
|
[
"MIT"
] | 0.8.2 | 84a23be59399f9addb44021a7a4c7af2eb966589 | code | 26411 | @testitem "Basic S3 exceptions" setup=[InitializeObjectStore] begin
using CloudBase.CloudTest: Minio
import CloudBase
using RustyObjectStore: RustyObjectStore, get_object!, put_object, ClientOptions, AWSConfig
# For interactive testing, use Minio.run() instead of Minio.with()
# conf, p = Minio.run(; debug=true, public=false); atexit(() -> kill(p))
Minio.with(; debug=true, public=false) do conf
_credentials, _container = conf
base_url = _container.baseurl
default_region = "us-east-1"
config = AWSConfig(;
region=default_region,
bucket_name=_container.name,
access_key_id=_credentials.access_key_id,
secret_access_key=_credentials.secret_access_key,
host=base_url
)
global _stale_config = config
global _stale_base_url = base_url
@testset "Insufficient output buffer size" begin
input = "1,2,3,4,5,6,7,8,9,1\n" ^ 5
buffer = Vector{UInt8}(undef, 10)
@assert sizeof(input) == 100
@assert sizeof(buffer) < sizeof(input)
nbytes_written = put_object(codeunits(input), "test100B.csv", config)
@test nbytes_written == 100
try
nbytes_read = get_object!(buffer, "test100B.csv", config)
@test false # Should have thrown an error
catch err
@test err isa RustyObjectStore.GetException
@test occursin("Supplied buffer was too small", err.msg)
end
end
@testset "Malformed credentials" begin
input = "1,2,3,4,5,6,7,8,9,1\n" ^ 5
buffer = Vector{UInt8}(undef, 100)
bad_config = AWSConfig(;
region=default_region,
bucket_name=_container.name,
access_key_id=_credentials.access_key_id,
secret_access_key="",
host=base_url
)
try
put_object(codeunits(input), "invalid_credentials.csv", bad_config)
@test false # Should have thrown an error
catch e
@test e isa RustyObjectStore.PutException
@test occursin("403 Forbidden", e.msg)
@test occursin("Check your key and signing method", e.msg)
end
nbytes_written = put_object(codeunits(input), "invalid_credentials.csv", config)
@assert nbytes_written == 100
try
get_object!(buffer, "invalid_credentials.csv", bad_config)
@test false # Should have thrown an error
catch e
@test e isa RustyObjectStore.GetException
@test occursin("403 Forbidden", e.msg)
@test occursin("Check your key and signing method", e.msg)
end
end
@testset "Non-existing file" begin
buffer = Vector{UInt8}(undef, 100)
try
get_object!(buffer, "doesnt_exist.csv", config)
@test false # Should have thrown an error
catch e
@test e isa RustyObjectStore.GetException
@test occursin("404 Not Found", e.msg)
@test occursin("The specified key does not exist", e.msg)
end
end
@testset "Delete non-existing file" begin
# S3 returns success when deleting a non-existing file, so we expect this
# call to succeed
delete_object("doesnt_exist.csv", config)
end
@testset "Non-existing container" begin
non_existent_container_name = string(_container.name, "doesntexist")
non_existent_base_url = replace(base_url, _container.name => non_existent_container_name)
bad_config = AWSConfig(;
region=default_region,
bucket_name=non_existent_container_name,
access_key_id=_credentials.access_key_id,
secret_access_key=_credentials.secret_access_key,
host=non_existent_base_url
)
buffer = Vector{UInt8}(undef, 100)
try
put_object(codeunits("a,b,c"), "invalid_credentials2.csv", bad_config)
@test false # Should have thrown an error
catch e
@test e isa RustyObjectStore.PutException
@test occursin("404 Not Found", e.msg)
@test occursin("The specified bucket does not exist", e.msg)
end
nbytes_written = put_object(codeunits("a,b,c"), "invalid_credentials2.csv", config)
@assert nbytes_written == 5
try
get_object!(buffer, "invalid_credentials2.csv", bad_config)
@test false # Should have thrown an error
catch e
@test e isa RustyObjectStore.GetException
@test occursin("404 Not Found", e.msg)
@test occursin("The specified bucket does not exist", e.msg)
end
end
end # Minio.with
# Minio is not running at this point
@testset "Connection error" begin
buffer = Vector{UInt8}(undef, 100)
# These tests retry the connection error
try
put_object(codeunits("a,b,c"), "still_doesnt_exist.csv", _stale_config)
@test false # Should have thrown an error
catch e
@test e isa RustyObjectStore.PutException
@test occursin("Connection refused", e.msg)
end
try
get_object!(buffer, "still_doesnt_exist.csv", _stale_config)
@test false # Should have thrown an error
catch e
@test e isa RustyObjectStore.GetException
@test occursin("Connection refused", e.msg)
end
end
@testset "multiple start" begin
res = @ccall RustyObjectStore.rust_lib.start()::Cint
@test res == 1 # Rust CResult::Error
end
end # @testitem
### See AWS S3 docs:
### - "Error Responses - Amazon S3":
### https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html
### - "GetObject"
### https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html
### - "PutObject"
### https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html
@testitem "AWS S3 retries" setup=[InitializeObjectStore] begin
using CloudBase.CloudTest: Minio
import CloudBase
using RustyObjectStore: get_object!, put_object, AWSConfig, ClientOptions, is_timeout, is_early_eof, is_connection, status_code
import HTTP
import Sockets
max_retries = 2
retry_timeout_secs = 10
request_timeout_secs = 1
region = "us-east-1"
container = "mybucket"
dummy_access_key_id = "qUwJPLlmEtlCDXJ1OUzF"
dummy_secret_access_key = "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="
function test_tcp_error(method)
@assert method === :GET || method === :PUT
nrequests = Ref(0)
(port, tcp_server) = Sockets.listenany(8082)
@async begin
while true
sock = Sockets.accept(tcp_server)
_ = read(sock, 4)
close(sock)
nrequests[] += 1
end
end
baseurl = "http://127.0.0.1:$port"
conf = AWSConfig(;
region=region,
bucket_name=container,
access_key_id=dummy_access_key_id,
secret_access_key=dummy_secret_access_key,
host=baseurl,
opts=ClientOptions(;
max_retries=max_retries,
retry_timeout_secs=retry_timeout_secs
)
)
try
method === :GET && get_object!(zeros(UInt8, 5), "blob", conf)
method === :PUT && put_object(codeunits("a,b,c"), "blob", conf)
@test false # Should have thrown an error
catch e
method === :GET && @test e isa RustyObjectStore.GetException
method === :PUT && @test e isa RustyObjectStore.PutException
@test occursin("connection closed", e.msg)
@test is_early_eof(e)
finally
close(tcp_server)
end
return nrequests[]
end
function test_get_stream_error()
nrequests = Ref(0)
(port, tcp_server) = Sockets.listenany(8083)
http_server = HTTP.listen!(tcp_server) do http::HTTP.Stream
nrequests[] += 1
HTTP.setstatus(http, 200)
HTTP.setheader(http, "Content-Length" => "20")
HTTP.startwrite(http)
write(http, "not enough")
close(http.stream)
end
baseurl = "http://127.0.0.1:$port"
conf = AWSConfig(;
region=region,
bucket_name=container,
access_key_id=dummy_access_key_id,
secret_access_key=dummy_secret_access_key,
host=baseurl,
opts=ClientOptions(;
max_retries=max_retries,
retry_timeout_secs=retry_timeout_secs
)
)
try
get_object!(zeros(UInt8, 20), "blob", conf)
@test false # Should have thrown an error
catch e
@test e isa RustyObjectStore.GetException
@test occursin("end of file before message length reached", e.msg)
@test is_early_eof(e)
finally
close(http_server)
end
wait(http_server)
return nrequests[]
end
function dummy_cb(handle::Ptr{Cvoid})
return nothing
end
function test_tcp_reset(method)
@assert method === :GET || method === :PUT
nrequests = Ref(0)
(port, tcp_server) = Sockets.listenany(8082)
@async begin
while true
sock = Sockets.accept(tcp_server)
_ = read(sock, 4)
nrequests[] += 1
ccall(
:uv_tcp_close_reset,
Cint,
(Ptr{Cvoid}, Ptr{Cvoid}),
sock.handle, @cfunction(dummy_cb, Cvoid, (Ptr{Cvoid},))
)
end
end
baseurl = "http://127.0.0.1:$port"
conf = AWSConfig(;
region=region,
bucket_name=container,
access_key_id=dummy_access_key_id,
secret_access_key=dummy_secret_access_key,
host=baseurl,
opts=ClientOptions(;
max_retries=max_retries,
retry_timeout_secs=retry_timeout_secs
)
)
try
method === :GET && get_object!(zeros(UInt8, 5), "blob", conf)
method === :PUT && put_object(codeunits("a,b,c"), "blob", conf)
@test false # Should have thrown an error
catch e
method === :GET && @test e isa RustyObjectStore.GetException
method === :PUT && @test e isa RustyObjectStore.PutException
@test occursin("reset by peer", e.msg)
@test is_connection(e)
finally
close(tcp_server)
end
return nrequests[]
end
function test_get_stream_reset()
nrequests = Ref(0)
(port, tcp_server) = Sockets.listenany(8083)
http_server = HTTP.listen!(tcp_server) do http::HTTP.Stream
nrequests[] += 1
HTTP.setstatus(http, 200)
HTTP.setheader(http, "Content-Length" => "20")
HTTP.startwrite(http)
write(http, "not enough")
socket = HTTP.IOExtras.tcpsocket(HTTP.Connections.getrawstream(http))
ccall(
:uv_tcp_close_reset,
Cint,
(Ptr{Cvoid}, Ptr{Cvoid}),
socket.handle, @cfunction(dummy_cb, Cvoid, (Ptr{Cvoid},))
)
close(http.stream)
end
baseurl = "http://127.0.0.1:$port"
conf = AWSConfig(;
region=region,
bucket_name=container,
access_key_id=dummy_access_key_id,
secret_access_key=dummy_secret_access_key,
host=baseurl,
opts=ClientOptions(;
max_retries=max_retries,
retry_timeout_secs=retry_timeout_secs
)
)
try
get_object!(zeros(UInt8, 20), "blob", conf)
@test false # Should have thrown an error
catch e
@test e isa RustyObjectStore.GetException
@test occursin("Connection reset by peer", e.msg)
@test is_early_eof(e)
finally
Threads.@spawn HTTP.forceclose(http_server)
end
# wait(http_server)
return nrequests[]
end
function test_get_stream_timeout()
nrequests = Ref(0)
(port, tcp_server) = Sockets.listenany(8083)
http_server = HTTP.listen!(tcp_server) do http::HTTP.Stream
nrequests[] += 1
HTTP.setstatus(http, 200)
HTTP.setheader(http, "Content-Length" => "20")
HTTP.setheader(http, "Last-Modified" => "Tue, 15 Oct 2019 12:45:26 GMT")
HTTP.setheader(http, "ETag" => "123")
HTTP.startwrite(http)
write(http, "not enough")
sleep(10)
close(http.stream)
end
baseurl = "http://127.0.0.1:$port"
conf = AWSConfig(;
region=region,
bucket_name=container,
access_key_id=dummy_access_key_id,
secret_access_key=dummy_secret_access_key,
host=baseurl,
opts=ClientOptions(;
max_retries=max_retries,
retry_timeout_secs=retry_timeout_secs,
request_timeout_secs
)
)
try
get_object!(zeros(UInt8, 20), "blob", conf)
@test false # Should have thrown an error
catch e
@test e isa RustyObjectStore.GetException
@test occursin("operation timed out", e.msg)
@test is_timeout(e)
finally
Threads.@spawn HTTP.forceclose(http_server)
end
# wait(http_server)
return nrequests[]
end
function test_status(method, response_status, headers=nothing)
@assert method === :GET || method === :PUT
nrequests = Ref(0)
response_body = "response body from the dummy server"
(port, tcp_server) = Sockets.listenany(8081)
http_server = HTTP.serve!(tcp_server) do request::HTTP.Request
if request.method == "GET" && request.target == "/$container/_this_file_does_not_exist"
# This is the exploratory ping from connect_and_test in lib.rs
return HTTP.Response(404, "Yup, still doesn't exist")
end
nrequests[] += 1
response = isnothing(headers) ? HTTP.Response(response_status, response_body) : HTTP.Response(response_status, headers, response_body)
return response
end
baseurl = "http://127.0.0.1:$port"
conf = AWSConfig(;
region=region,
bucket_name=container,
access_key_id=dummy_access_key_id,
secret_access_key=dummy_secret_access_key,
host=baseurl,
opts=ClientOptions(;
max_retries=max_retries,
retry_timeout_secs=retry_timeout_secs
)
)
try
method === :GET && get_object!(zeros(UInt8, 5), "blob", conf)
method === :PUT && put_object(codeunits("a,b,c"), "blob", conf)
@test false # Should have thrown an error
catch e
method === :GET && @test e isa RustyObjectStore.GetException
method === :PUT && @test e isa RustyObjectStore.PutException
@test occursin(string(response_status), e.msg)
@test status_code(e) == response_status
response_status < 500 && (@test occursin("response body from the dummy server", e.msg))
finally
close(http_server)
end
wait(http_server)
return nrequests[]
end
function test_timeout(method, message, wait_secs::Int = 60)
@assert method === :GET || method === :PUT
nrequests = Ref(0)
response_body = "response body from the dummy server"
(port, tcp_server) = Sockets.listenany(8081)
http_server = HTTP.serve!(tcp_server) do request::HTTP.Request
if request.method == "GET" && request.target == "/$container/_this_file_does_not_exist"
# This is the exploratory ping from connect_and_test in lib.rs
return HTTP.Response(404, "Yup, still doesn't exist")
end
nrequests[] += 1
if wait_secs > 0
sleep(wait_secs)
end
return HTTP.Response(200, response_body)
end
baseurl = "http://127.0.0.1:$port"
conf = AWSConfig(;
region=region,
bucket_name=container,
access_key_id=dummy_access_key_id,
secret_access_key=dummy_secret_access_key,
host=baseurl,
opts=ClientOptions(;
max_retries=max_retries,
retry_timeout_secs=retry_timeout_secs,
request_timeout_secs
)
)
try
method === :GET && get_object!(zeros(UInt8, 5), "blob", conf)
method === :PUT && put_object(codeunits("a,b,c"), "blob", conf)
@test false # Should have thrown an error
catch e
method === :GET && @test e isa RustyObjectStore.GetException
method === :PUT && @test e isa RustyObjectStore.PutException
@test is_timeout(e)
@test occursin(string(message), e.msg)
finally
close(http_server)
end
wait(http_server)
return nrequests[]
end
function test_cancellation()
nrequests = Ref(0)
response_body = "response body from the dummy server"
(port, tcp_server) = Sockets.listenany(8081)
http_server = HTTP.serve!(tcp_server) do request::HTTP.Request
if request.method == "GET" && request.target == "/$container/_this_file_does_not_exist"
# This is the exploratory ping from connect_and_test in lib.rs
return HTTP.Response(404, "Yup, still doesn't exist")
end
nrequests[] += 1
sleep(5)
return HTTP.Response(200, response_body)
end
baseurl = "http://127.0.0.1:$port"
conf = AWSConfig(;
region=region,
bucket_name=container,
access_key_id=dummy_access_key_id,
secret_access_key=dummy_secret_access_key,
host=baseurl,
opts=ClientOptions(;
max_retries=max_retries,
retry_timeout_secs=10,
request_timeout_secs=10
)
)
try
size = 7_000_000
ptr = Base.Libc.malloc(size)
buf = unsafe_wrap(Array, convert(Ptr{UInt8}, ptr), size)
t = errormonitor(Threads.@spawn begin
try
RustyObjectStore.put_object(buf, "cancelled.bin", conf)
@test false
catch e
@test e == "cancel"
finally
Base.Libc.free(ptr)
end
true
end)
sleep(1)
schedule(t, "cancel"; error=true)
@test fetch(t::Task)
finally
HTTP.forceclose(http_server)
end
wait(http_server)
return nrequests[]
end
@testset "400: Bad Request" begin
# Returned when there's an error in the request URI, headers, or body. The response body
# contains an error message explaining what the specific problem is.
# See https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html
# AWS S3 can also respond with this code for other unrecoverable cases such as when
# an upload exceeds the maximum allowed object size
# See https://www.rfc-editor.org/rfc/rfc9110#status.400
nrequests = test_status(:GET, 400)
@test nrequests == 1
nrequests = test_status(:PUT, 400)
@test nrequests == 1
end
@testset "403: Forbidden" begin
# Returned when you pass an invalid api-key.
# See https://www.rfc-editor.org/rfc/rfc9110#status.403
nrequests = test_status(:GET, 403)
@test nrequests == 1
nrequests = test_status(:PUT, 403)
@test nrequests == 1
end
@testset "404: Not Found" begin
# Returned when container not found or blob not found
# See https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html
# See https://www.rfc-editor.org/rfc/rfc9110#status.404
nrequests = test_status(:GET, 404)
@test nrequests == 1
end
@testset "405: Method Not Supported" begin
# See https://www.rfc-editor.org/rfc/rfc9110#status.405
nrequests = test_status(:GET, 405, ["Allow" => "PUT"])
@test nrequests == 1
nrequests = test_status(:PUT, 405, ["Allow" => "GET"])
@test nrequests == 1
end
@testset "409: Conflict" begin
# Returned when write operations conflict.
# See https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html
# See https://www.rfc-editor.org/rfc/rfc9110#status.409
nrequests = test_status(:GET, 409)
@test nrequests == 1
nrequests = test_status(:PUT, 409)
@test nrequests == 1
end
@testset "412: Precondition Failed" begin
# Returned when an If-Match or If-None-Match header's condition evaluates to false
# See https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html
# See https://www.rfc-editor.org/rfc/rfc9110#status.412
nrequests = test_status(:GET, 412)
@test nrequests == 1
nrequests = test_status(:PUT, 412)
@test nrequests == 1
end
@testset "413: Content Too Large" begin
# See https://www.rfc-editor.org/rfc/rfc9110#status.413
nrequests = test_status(:PUT, 413)
@test nrequests == 1
end
@testset "429: Too Many Requests" begin
# See https://www.rfc-editor.org/rfc/rfc6585#section-4
nrequests = test_status(:GET, 429)
@test nrequests == 1
nrequests = test_status(:PUT, 429)
@test nrequests == 1
# See https://www.rfc-editor.org/rfc/rfc9110#field.retry-after
# TODO: We probably should respect the Retry-After header, but we currently don't
# (and we don't know if AWS actually sets it)
# This can happen when AWS is throttling us, so it might be a good idea to retry with some
# larger initial backoff (very eager retries probably only make the situation worse).
nrequests = test_status(:GET, 429, ["Retry-After" => 10])
@test nrequests == 1 + max_retries broken=true
nrequests = test_status(:PUT, 429, ["Retry-After" => 10])
@test nrequests == 1 + max_retries broken=true
end
@testset "502: Bad Gateway" begin
# https://www.rfc-editor.org/rfc/rfc9110#status.502
# The 502 (Bad Gateway) status code indicates that the server, while acting as a
# gateway or proxy, received an invalid response from an inbound server it accessed
# while attempting to fulfill the request.
# This error can occur when you enter HTTP instead of HTTPS in the connection.
nrequests = test_status(:GET, 502)
@test nrequests == 1 + max_retries
nrequests = test_status(:PUT, 502)
@test nrequests == 1 + max_retries
end
@testset "503: Service Unavailable" begin
# See https://www.rfc-editor.org/rfc/rfc9110#status.503
# The 503 (Service Unavailable) status code indicates that the server is currently
# unable to handle the request due to a temporary overload or scheduled maintenance,
# which will likely be alleviated after some delay. The server MAY send a Retry-After
# header field (Section 10.2.3) to suggest an appropriate amount of time for the
# client to wait before retrying the request.
# See https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html
nrequests = test_status(:GET, 503)
@test nrequests == 1 + max_retries
nrequests = test_status(:PUT, 503)
@test nrequests == 1 + max_retries
end
@testset "504: Gateway Timeout" begin
# See https://www.rfc-editor.org/rfc/rfc9110#status.504
# The 504 (Gateway Timeout) status code indicates that the server, while acting as
# a gateway or proxy, did not receive a timely response from an upstream server it
# needed to access in order to complete the request
nrequests = test_status(:GET, 504)
@test nrequests == 1 + max_retries
nrequests = test_status(:PUT, 504)
@test nrequests == 1 + max_retries
end
@testset "Timeout" begin
nrequests = test_timeout(:GET, "timed out", 2)
@test nrequests == 1 + max_retries
nrequests = test_timeout(:PUT, "timed out", 2)
@test nrequests == 1 + max_retries
end
@testset "TCP Closed" begin
nrequests = test_tcp_error(:GET)
@test nrequests == 1 + max_retries
nrequests = test_tcp_error(:PUT)
@test nrequests == 1 + max_retries
end
@testset "TCP reset" begin
nrequests = test_tcp_reset(:GET)
@test nrequests == 1 + max_retries
nrequests = test_tcp_reset(:PUT)
@test nrequests == 1 + max_retries
end
@testset "Incomplete GET body" begin
nrequests = test_get_stream_error()
@test nrequests == 1 + max_retries
end
@testset "Incomplete GET body reset" begin
nrequests = test_get_stream_reset()
@test nrequests == 1 + max_retries
end
@testset "Incomplete GET body timeout" begin
nrequests = test_get_stream_timeout()
@test nrequests == 1 + max_retries
end
@testset "Cancellation" begin
nrequests = test_cancellation()
@test nrequests == 1
end
end
| RustyObjectStore | https://github.com/RelationalAI/RustyObjectStore.jl.git |
|
[
"MIT"
] | 0.8.2 | 84a23be59399f9addb44021a7a4c7af2eb966589 | code | 1267 | @testitem "AzureConfig" begin
# access key is obscured when printing
@test repr(AzureConfig(;
storage_account_name="a",
container_name="b",
storage_account_key="c"
)) == "AzureConfig(storage_account_name=\"a\", container_name=\"b\", storage_account_key=*****, opts=ClientOptions())"
# sas token is obscured when printing
@test repr(AzureConfig(;
storage_account_name="a",
container_name="b",
storage_sas_token="c"
)) == "AzureConfig(storage_account_name=\"a\", container_name=\"b\", storage_sas_token=*****, opts=ClientOptions())"
@test repr(AzureConfig(;
storage_account_name="a",
container_name="b",
storage_account_key="c",
host="d"
)) == "AzureConfig(storage_account_name=\"a\", container_name=\"b\", storage_account_key=*****, host=\"d\", opts=ClientOptions())"
# can only supply either access key or sas token
try
AzureConfig(;
storage_account_name="a",
container_name="b",
storage_account_key="c",
storage_sas_token="d"
)
@test false # Should have thrown an error
catch e
@test e isa ErrorException
@test e.msg == "Should provide either a storage_account_key or a storage_sas_token"
end
end
| RustyObjectStore | https://github.com/RelationalAI/RustyObjectStore.jl.git |
|
[
"MIT"
] | 0.8.2 | 84a23be59399f9addb44021a7a4c7af2eb966589 | code | 30272 | @testitem "Basic BlobStorage exceptions" setup=[InitializeObjectStore] begin
using CloudBase.CloudTest: Azurite
import CloudBase
using RustyObjectStore: RustyObjectStore, get_object!, put_object, ClientOptions, AzureConfig, AWSConfig
# For interactive testing, use Azurite.run() instead of Azurite.with()
# conf, p = Azurite.run(; debug=true, public=false); atexit(() -> kill(p))
Azurite.with(; debug=true, public=false) do conf
_credentials, _container = conf
base_url = _container.baseurl
config = AzureConfig(;
storage_account_name=_credentials.auth.account,
container_name=_container.name,
storage_account_key=_credentials.auth.key,
host=base_url
)
global _stale_config = config
global _stale_base_url = base_url
@testset "Insufficient output buffer size" begin
input = "1,2,3,4,5,6,7,8,9,1\n" ^ 5
buffer = Vector{UInt8}(undef, 10)
@assert sizeof(input) == 100
@assert sizeof(buffer) < sizeof(input)
nbytes_written = put_object(codeunits(input), "test100B.csv", config)
@test nbytes_written == 100
try
nbytes_read = get_object!(buffer, "test100B.csv", config)
@test false # Should have thrown an error
catch err
@test err isa RustyObjectStore.GetException
@test occursin("Supplied buffer was too small", err.msg)
end
end
@testset "Insufficient output buffer size multipart" begin
input = "1,2,3,4,5,6,7,8,9,1\n" ^ 1_000_000
buffer = Vector{UInt8}(undef, 20_000_000)
@assert sizeof(input) == 20_000_000
@assert sizeof(buffer) == sizeof(input)
nbytes_written = put_object(codeunits(input), "test100B.csv", config)
@test nbytes_written == 20_000_000
try
# Buffer is over multipart threshold but too small for object
buffer = Vector{UInt8}(undef, 10_000_000)
nbytes_read = get_object!(buffer, "test100B.csv", config)
@test false # Should have thrown an error
catch err
@test err isa RustyObjectStore.GetException
@test occursin("Supplied buffer was too small", err.msg)
end
end
@testset "Malformed credentials" begin
input = "1,2,3,4,5,6,7,8,9,1\n" ^ 5
buffer = Vector{UInt8}(undef, 100)
bad_config = AzureConfig(;
storage_account_name=_credentials.auth.account,
container_name=_container.name,
storage_account_key="",
host=base_url
)
try
put_object(codeunits(input), "invalid_credentials.csv", bad_config)
@test false # Should have thrown an error
catch e
@test e isa RustyObjectStore.PutException
@test occursin("400 Bad Request", e.msg) # Should this be 403 Forbidden? We've seen that with invalid SAS tokens
@test occursin("Authentication information is not given in the correct format", e.msg)
end
nbytes_written = put_object(codeunits(input), "invalid_credentials.csv", config)
@assert nbytes_written == 100
try
get_object!(buffer, "invalid_credentials.csv", bad_config)
@test false # Should have thrown an error
catch e
@test e isa RustyObjectStore.GetException
@test occursin("400 Bad Request", e.msg)
@test occursin("Authentication information is not given in the correct format", e.msg)
end
end
@testset "Non-existing file" begin
buffer = Vector{UInt8}(undef, 100)
try
get_object!(buffer, "doesnt_exist.csv", config)
@test false # Should have thrown an error
catch e
@test e isa RustyObjectStore.GetException
@test occursin("404 Not Found", e.msg)
@test occursin("The specified blob does not exist", e.msg)
end
end
@testset "Delete non-existing file" begin
try
delete_object("doesnt_exist.csv", config)
@test false # Should have thrown an error
catch e
@test e isa RustyObjectStore.DeleteException
@test occursin("404 Not Found", e.msg)
@test occursin("The specified blob does not exist", e.msg)
end
end
@testset "Non-existing container" begin
non_existent_container_name = string(_container.name, "doesntexist")
non_existent_base_url = replace(base_url, _container.name => non_existent_container_name)
bad_config = AzureConfig(;
storage_account_name=_credentials.auth.account,
container_name=non_existent_container_name,
storage_account_key=_credentials.auth.key,
host=non_existent_base_url
)
buffer = Vector{UInt8}(undef, 100)
try
put_object(codeunits("a,b,c"), "invalid_credentials2.csv", bad_config)
@test false # Should have thrown an error
catch e
@test e isa RustyObjectStore.PutException
@test occursin("404 Not Found", e.msg)
@test occursin("The specified container does not exist", e.msg)
end
nbytes_written = put_object(codeunits("a,b,c"), "invalid_credentials2.csv", config)
@assert nbytes_written == 5
try
get_object!(buffer, "invalid_credentials2.csv", bad_config)
@test false # Should have thrown an error
catch e
@test e isa RustyObjectStore.GetException
@test occursin("404 Not Found", e.msg)
@test occursin("The specified container does not exist", e.msg)
end
end
@testset "Non-existing resource" begin
bad_config = AzureConfig(;
storage_account_name="non_existing_account",
container_name=_container.name,
storage_account_key=_credentials.auth.key,
host=base_url
)
buffer = Vector{UInt8}(undef, 100)
try
put_object(codeunits("a,b,c"), "invalid_credentials3.csv", bad_config)
@test false # Should have thrown an error
catch e
@test e isa RustyObjectStore.PutException
@test occursin("404 Not Found", e.msg)
@test occursin("The specified resource does not exist.", e.msg)
end
nbytes_written = put_object(codeunits("a,b,c"), "invalid_credentials3.csv", config)
@assert nbytes_written == 5
try
get_object!(buffer, "invalid_credentials3.csv", bad_config)
@test false # Should have thrown an error
catch e
@test e isa RustyObjectStore.GetException
@test occursin("404 Not Found", e.msg)
@test occursin("The specified resource does not exist.", e.msg)
end
end
end # Azurite.with
# Azurite is not running at this point
@testset "Connection error" begin
buffer = Vector{UInt8}(undef, 100)
# These tests retry the connection error
try
put_object(codeunits("a,b,c"), "still_doesnt_exist.csv", _stale_config)
@test false # Should have thrown an error
catch e
@test e isa RustyObjectStore.PutException
@test occursin("Connection refused", e.msg)
end
try
get_object!(buffer, "still_doesnt_exist.csv", _stale_config)
@test false # Should have thrown an error
catch e
@test e isa RustyObjectStore.GetException
@test occursin("Connection refused", e.msg)
end
end
@testset "multiple start" begin
res = @ccall RustyObjectStore.rust_lib.start()::Cint
@test res == 1 # Rust CResult::Error
end
end # @testitem
### See Azure Blob Storage docs: https://learn.microsoft.com/en-us/rest/api/storageservices
### - "Common REST API error codes":
### https://learn.microsoft.com/en-us/rest/api/storageservices/common-rest-api-error-codes
### - "Azure Blob Storage error codes":
### https://learn.microsoft.com/en-us/rest/api/storageservices/blob-service-error-codes
### - "Get Blob"
### https://learn.microsoft.com/en-us/rest/api/storageservices/get-blob
### - "Put Blob"
### https://learn.microsoft.com/en-us/rest/api/storageservices/put-blob
@testitem "BlobStorage retries" setup=[InitializeObjectStore] begin
using CloudBase.CloudTest: Azurite
import CloudBase
using RustyObjectStore: get_object!, put_object, AWSConfig, AzureConfig, ClientOptions, is_timeout, is_early_eof, is_connection, status_code
import HTTP
import Sockets
max_retries = 2
retry_timeout_secs = 10
request_timeout_secs = 1
account = "myaccount"
container = "mycontainer"
shared_key_from_azurite = "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="
function test_status(method, response_status, headers=nothing)
@assert method === :GET || method === :PUT
nrequests = Ref(0)
response_body = "response body from the dummy server"
(port, tcp_server) = Sockets.listenany(8081)
http_server = HTTP.serve!(tcp_server) do request::HTTP.Request
if request.method == "GET" && request.target == "/$account/$container/_this_file_does_not_exist"
# This is the exploratory ping from connect_and_test in lib.rs
return HTTP.Response(404, "Yup, still doesn't exist")
end
nrequests[] += 1
response = isnothing(headers) ? HTTP.Response(response_status, response_body) : HTTP.Response(response_status, headers, response_body)
return response
end
baseurl = "http://127.0.0.1:$port/$account/$container/"
conf = AzureConfig(;
storage_account_name=account,
container_name=container,
storage_account_key=shared_key_from_azurite,
host=baseurl,
opts=ClientOptions(;
max_retries=max_retries,
retry_timeout_secs=retry_timeout_secs
)
)
try
method === :GET && get_object!(zeros(UInt8, 5), "blob", conf)
method === :PUT && put_object(codeunits("a,b,c"), "blob", conf)
@test false # Should have thrown an error
catch e
method === :GET && @test e isa RustyObjectStore.GetException
method === :PUT && @test e isa RustyObjectStore.PutException
@test occursin(string(response_status), e.msg)
@test status_code(e) == response_status
response_status < 500 && (@test occursin("response body from the dummy server", e.msg))
finally
close(http_server)
end
wait(http_server)
return nrequests[]
end
function test_tcp_error(method)
@assert method === :GET || method === :PUT
nrequests = Ref(0)
(port, tcp_server) = Sockets.listenany(8082)
@async begin
while true
sock = Sockets.accept(tcp_server)
_ = read(sock, 4)
close(sock)
nrequests[] += 1
end
end
baseurl = "http://127.0.0.1:$port/$account/$container/"
conf = AzureConfig(;
storage_account_name=account,
container_name=container,
storage_account_key=shared_key_from_azurite,
host=baseurl,
opts=ClientOptions(;
max_retries=max_retries,
retry_timeout_secs=retry_timeout_secs
)
)
try
method === :GET && get_object!(zeros(UInt8, 5), "blob", conf)
method === :PUT && put_object(codeunits("a,b,c"), "blob", conf)
@test false # Should have thrown an error
catch e
method === :GET && @test e isa RustyObjectStore.GetException
method === :PUT && @test e isa RustyObjectStore.PutException
@test occursin("connection closed", e.msg)
@test is_early_eof(e)
finally
close(tcp_server)
end
return nrequests[]
end
function test_get_stream_error()
nrequests = Ref(0)
(port, tcp_server) = Sockets.listenany(8083)
http_server = HTTP.listen!(tcp_server) do http::HTTP.Stream
nrequests[] += 1
HTTP.setstatus(http, 200)
HTTP.setheader(http, "Content-Length" => "20")
HTTP.setheader(http, "Last-Modified" => "Tue, 15 Oct 2019 12:45:26 GMT")
HTTP.setheader(http, "ETag" => "123")
HTTP.startwrite(http)
write(http, "not enough")
close(http.stream)
end
baseurl = "http://127.0.0.1:$port/$account/$container/"
conf = AzureConfig(;
storage_account_name=account,
container_name=container,
storage_account_key=shared_key_from_azurite,
host=baseurl,
opts=ClientOptions(;
max_retries=max_retries,
retry_timeout_secs=retry_timeout_secs
)
)
try
get_object!(zeros(UInt8, 20), "blob", conf)
@test false # Should have thrown an error
catch e
@test e isa RustyObjectStore.GetException
@test occursin("end of file before message length reached", e.msg)
@test is_early_eof(e)
finally
close(http_server)
end
wait(http_server)
return nrequests[]
end
function dummy_cb(handle::Ptr{Cvoid})
return nothing
end
function test_tcp_reset(method)
@assert method === :GET || method === :PUT
nrequests = Ref(0)
(port, tcp_server) = Sockets.listenany(8082)
@async begin
while true
sock = Sockets.accept(tcp_server)
_ = read(sock, 4)
nrequests[] += 1
ccall(
:uv_tcp_close_reset,
Cint,
(Ptr{Cvoid}, Ptr{Cvoid}),
sock.handle, @cfunction(dummy_cb, Cvoid, (Ptr{Cvoid},))
)
end
end
baseurl = "http://127.0.0.1:$port/$account/$container/"
conf = AzureConfig(;
storage_account_name=account,
container_name=container,
storage_account_key=shared_key_from_azurite,
host=baseurl,
opts=ClientOptions(;
max_retries=max_retries,
retry_timeout_secs=retry_timeout_secs,
request_timeout_secs
)
)
try
method === :GET && get_object!(zeros(UInt8, 5), "blob", conf)
method === :PUT && put_object(codeunits("a,b,c"), "blob", conf)
@test false # Should have thrown an error
catch e
method === :GET && @test e isa RustyObjectStore.GetException
method === :PUT && @test e isa RustyObjectStore.PutException
@test occursin("reset by peer", e.msg)
@test is_connection(e)
finally
close(tcp_server)
end
return nrequests[]
end
function test_get_stream_reset()
nrequests = Ref(0)
(port, tcp_server) = Sockets.listenany(8083)
http_server = HTTP.listen!(tcp_server) do http::HTTP.Stream
nrequests[] += 1
HTTP.setstatus(http, 200)
HTTP.setheader(http, "Content-Length" => "20")
HTTP.setheader(http, "Last-Modified" => "Tue, 15 Oct 2019 12:45:26 GMT")
HTTP.setheader(http, "ETag" => "123")
HTTP.startwrite(http)
write(http, "not enough")
socket = HTTP.IOExtras.tcpsocket(HTTP.Connections.getrawstream(http))
ccall(
:uv_tcp_close_reset,
Cint,
(Ptr{Cvoid}, Ptr{Cvoid}),
socket.handle, @cfunction(dummy_cb, Cvoid, (Ptr{Cvoid},))
)
close(http.stream)
end
baseurl = "http://127.0.0.1:$port/$account/$container/"
conf = AzureConfig(;
storage_account_name=account,
container_name=container,
storage_account_key=shared_key_from_azurite,
host=baseurl,
opts=ClientOptions(;
max_retries=max_retries,
retry_timeout_secs=retry_timeout_secs
)
)
try
get_object!(zeros(UInt8, 20), "blob", conf)
@test false # Should have thrown an error
catch e
@test e isa RustyObjectStore.GetException
@test occursin("Connection reset by peer", e.msg)
@test is_early_eof(e)
finally
Threads.@spawn HTTP.forceclose(http_server)
end
# wait(http_server)
return nrequests[]
end
function test_get_stream_timeout()
nrequests = Ref(0)
(port, tcp_server) = Sockets.listenany(8083)
http_server = HTTP.listen!(tcp_server) do http::HTTP.Stream
nrequests[] += 1
HTTP.setstatus(http, 200)
HTTP.setheader(http, "Content-Length" => "20")
HTTP.setheader(http, "Last-Modified" => "Tue, 15 Oct 2019 12:45:26 GMT")
HTTP.setheader(http, "ETag" => "123")
HTTP.startwrite(http)
write(http, "not enough")
sleep(10)
close(http.stream)
end
baseurl = "http://127.0.0.1:$port/$account/$container/"
conf = AzureConfig(;
storage_account_name=account,
container_name=container,
storage_account_key=shared_key_from_azurite,
host=baseurl,
opts=ClientOptions(;
max_retries=max_retries,
retry_timeout_secs=retry_timeout_secs,
request_timeout_secs
)
)
try
get_object!(zeros(UInt8, 20), "blob", conf)
@test false # Should have thrown an error
catch e
@test e isa RustyObjectStore.GetException
@test occursin("operation timed out", e.msg)
@test is_timeout(e)
finally
Threads.@spawn HTTP.forceclose(http_server)
end
# wait(http_server)
return nrequests[]
end
function test_timeout(method, message, wait_secs::Int = 60)
@assert method === :GET || method === :PUT
nrequests = Ref(0)
response_body = "response body from the dummy server"
(port, tcp_server) = Sockets.listenany(8081)
http_server = HTTP.serve!(tcp_server) do request::HTTP.Request
if request.method == "GET" && request.target == "/$container/_this_file_does_not_exist"
# This is the exploratory ping from connect_and_test in lib.rs
return HTTP.Response(404, "Yup, still doesn't exist")
end
nrequests[] += 1
if wait_secs > 0
sleep(wait_secs)
end
return HTTP.Response(200, response_body)
end
baseurl = "http://127.0.0.1:$port/$account/$container/"
conf = AzureConfig(;
storage_account_name=account,
container_name=container,
storage_account_key=shared_key_from_azurite,
host=baseurl,
opts=ClientOptions(;
max_retries=max_retries,
retry_timeout_secs=retry_timeout_secs,
request_timeout_secs
)
)
try
method === :GET && get_object!(zeros(UInt8, 5), "blob", conf)
method === :PUT && put_object(codeunits("a,b,c"), "blob", conf)
@test false # Should have thrown an error
catch e
method === :GET && @test e isa RustyObjectStore.GetException
method === :PUT && @test e isa RustyObjectStore.PutException
@test is_timeout(e)
@test occursin(string(message), e.msg)
finally
close(http_server)
end
wait(http_server)
return nrequests[]
end
function test_cancellation()
nrequests = Ref(0)
response_body = "response body from the dummy server"
(port, tcp_server) = Sockets.listenany(8081)
http_server = HTTP.serve!(tcp_server) do request::HTTP.Request
if request.method == "GET" && request.target == "/$container/_this_file_does_not_exist"
# This is the exploratory ping from connect_and_test in lib.rs
return HTTP.Response(404, "Yup, still doesn't exist")
end
nrequests[] += 1
sleep(5)
return HTTP.Response(200, response_body)
end
baseurl = "http://127.0.0.1:$port/$account/$container/"
conf = AzureConfig(;
storage_account_name=account,
container_name=container,
storage_account_key=shared_key_from_azurite,
host=baseurl,
opts=ClientOptions(;
max_retries=max_retries,
retry_timeout_secs=retry_timeout_secs,
request_timeout_secs
)
)
try
size = 7_000_000
ptr = Base.Libc.malloc(size)
buf = unsafe_wrap(Array, convert(Ptr{UInt8}, ptr), size)
t = errormonitor(Threads.@spawn begin
try
RustyObjectStore.put_object(buf, "cancelled.bin", conf)
@test false
catch e
@test e == "cancel"
finally
Base.Libc.free(ptr)
end
true
end)
sleep(1)
schedule(t, "cancel"; error=true)
@test fetch(t::Task)
finally
HTTP.forceclose(http_server)
end
wait(http_server)
return nrequests[]
end
@testset "400: Bad Request" begin
# Returned when there's an error in the request URI, headers, or body. The response body
# contains an error message explaining what the specific problem is.
# See https://learn.microsoft.com/en-us/rest/api/storageservices/blob-service-error-codes
# See https://www.rfc-editor.org/rfc/rfc9110#status.400
nrequests = test_status(:GET, 400)
@test nrequests == 1
nrequests = test_status(:PUT, 400)
@test nrequests == 1
end
@testset "403: Forbidden" begin
# Returned when you pass an invalid api-key.
# See https://www.rfc-editor.org/rfc/rfc9110#status.403
nrequests = test_status(:GET, 403)
@test nrequests == 1
nrequests = test_status(:PUT, 403)
@test nrequests == 1
end
@testset "404: Not Found" begin
# Returned when container not found or blob not found
# See https://learn.microsoft.com/en-us/rest/api/storageservices/blob-service-error-codes
# See https://www.rfc-editor.org/rfc/rfc9110#status.404
nrequests = test_status(:GET, 404)
@test nrequests == 1
end
@testset "405: Method Not Supported" begin
# See https://www.rfc-editor.org/rfc/rfc9110#status.405
nrequests = test_status(:GET, 405, ["Allow" => "PUT"])
@test nrequests == 1
nrequests = test_status(:PUT, 405, ["Allow" => "GET"])
@test nrequests == 1
end
@testset "409: Conflict" begin
# Returned when write operations conflict.
# See https://learn.microsoft.com/en-us/rest/api/storageservices/blob-service-error-codes
# See https://www.rfc-editor.org/rfc/rfc9110#status.409
nrequests = test_status(:GET, 409)
@test nrequests == 1
nrequests = test_status(:PUT, 409)
@test nrequests == 1
end
@testset "412: Precondition Failed" begin
# Returned when an If-Match or If-None-Match header's condition evaluates to false
# See https://learn.microsoft.com/en-us/rest/api/storageservices/put-blob#blob-custom-properties
# See https://www.rfc-editor.org/rfc/rfc9110#status.412
nrequests = test_status(:GET, 412)
@test nrequests == 1
nrequests = test_status(:PUT, 412)
@test nrequests == 1
end
@testset "413: Content Too Large" begin
# See https://learn.microsoft.com/en-us/rest/api/storageservices/put-blob#remarks
# If you attempt to upload either a block blob that's larger than the maximum
# permitted size for that service version or a page blob that's larger than 8 TiB,
# the service returns status code 413 (Request Entity Too Large). Blob Storage also
# returns additional information about the error in the response, including the
# maximum permitted blob size, in bytes.
# See https://www.rfc-editor.org/rfc/rfc9110#status.413
nrequests = test_status(:PUT, 413)
@test nrequests == 1
end
@testset "429: Too Many Requests" begin
# See https://www.rfc-editor.org/rfc/rfc6585#section-4
nrequests = test_status(:GET, 429)
@test nrequests == 1
nrequests = test_status(:PUT, 429)
@test nrequests == 1
# See https://www.rfc-editor.org/rfc/rfc9110#field.retry-after
# TODO: We probably should respect the Retry-After header, but we currently don't
# (and we don't know if Azure actually sets it)
# This can happen when Azure is throttling us, so it might be a good idea to retry with some
# larger initial backoff (very eager retries probably only make the situation worse).
nrequests = test_status(:GET, 429, ["Retry-After" => 10])
@test nrequests == 1 + max_retries broken=true
nrequests = test_status(:PUT, 429, ["Retry-After" => 10])
@test nrequests == 1 + max_retries broken=true
end
@testset "502: Bad Gateway" begin
# https://www.rfc-editor.org/rfc/rfc9110#status.502
# The 502 (Bad Gateway) status code indicates that the server, while acting as a
# gateway or proxy, received an invalid response from an inbound server it accessed
# while attempting to fulfill the request.
# This error can occur when you enter HTTP instead of HTTPS in the connection.
nrequests = test_status(:GET, 502)
@test nrequests == 1 + max_retries
nrequests = test_status(:PUT, 502)
@test nrequests == 1 + max_retries
end
@testset "503: Service Unavailable" begin
# See https://www.rfc-editor.org/rfc/rfc9110#status.503
# The 503 (Service Unavailable) status code indicates that the server is currently
# unable to handle the request due to a temporary overload or scheduled maintenance,
# which will likely be alleviated after some delay. The server MAY send a Retry-After
# header field (Section 10.2.3) to suggest an appropriate amount of time for the
# client to wait before retrying the request.
# See https://learn.microsoft.com/en-us/rest/api/storageservices/common-rest-api-error-codes
# An operation on any of the Azure Storage services can return the following error codes:
# | Error code | HTTP status code          | User message                                                                    |
# | ServerBusy | Service Unavailable (503) | The server is currently unable to receive requests. Please retry your request. |
# | ServerBusy | Service Unavailable (503) | Ingress is over the account limit.                                              |
# | ServerBusy | Service Unavailable (503) | Egress is over the account limit.                                               |
# | ServerBusy | Service Unavailable (503) | Operations per second is over the account limit.                                |
nrequests = test_status(:GET, 503)
@test nrequests == 1 + max_retries
nrequests = test_status(:PUT, 503)
@test nrequests == 1 + max_retries
end
@testset "504: Gateway Timeout" begin
# See https://www.rfc-editor.org/rfc/rfc9110#status.504
# The 504 (Gateway Timeout) status code indicates that the server, while acting as
# a gateway or proxy, did not receive a timely response from an upstream server it
# needed to access in order to complete the request
nrequests = test_status(:GET, 504)
@test nrequests == 1 + max_retries
nrequests = test_status(:PUT, 504)
@test nrequests == 1 + max_retries
end
@testset "Timeout" begin
nrequests = test_timeout(:GET, "timed out", 2)
@test nrequests == 1 + max_retries
nrequests = test_timeout(:PUT, "timed out", 2)
@test nrequests == 1 + max_retries
end
@testset "TCP Closed" begin
nrequests = test_tcp_error(:GET)
@test nrequests == 1 + max_retries
nrequests = test_tcp_error(:PUT)
@test nrequests == 1 + max_retries
end
@testset "TCP reset" begin
nrequests = test_tcp_reset(:GET)
@test nrequests == 1 + max_retries
nrequests = test_tcp_reset(:PUT)
@test nrequests == 1 + max_retries
end
@testset "Incomplete GET body" begin
nrequests = test_get_stream_error()
@test nrequests == 1 + max_retries
end
@testset "Incomplete GET body reset" begin
nrequests = test_get_stream_reset()
@test nrequests == 1 + max_retries
end
@testset "Incomplete GET body timeout" begin
nrequests = test_get_stream_timeout()
@test nrequests == 1 + max_retries
end
@testset "Cancellation" begin
nrequests = test_cancellation()
@test nrequests == 1
end
end
| RustyObjectStore | https://github.com/RelationalAI/RustyObjectStore.jl.git |
|
[
"MIT"
] | 0.8.2 | 84a23be59399f9addb44021a7a4c7af2eb966589 | code | 26128 | @testsetup module ReadWriteCases
using RustyObjectStore: get_object!, put_object, get_object_stream, put_object_stream,
AbstractConfig, delete_object, list_objects, list_objects_stream, next_chunk!, finish!
using CodecZlib
using RustyObjectStore
using Test: @testset, @test, @test_throws
export run_read_write_test_cases, run_stream_test_cases, run_sanity_test_cases, run_list_test_cases
function run_stream_test_cases(config::AbstractConfig)
# ReadStream
@testset "ReadStream small readbytes!" begin
multicsv = "1,2,3,4,5,6,7,8,9,1\n"^10; # 200 B
nbytes_written = put_object(codeunits(multicsv), "test.csv", config)
@test nbytes_written == 200
buffer = Vector{UInt8}(undef, 200)
nbytes_read = get_object!(buffer, "test.csv", config)
@test nbytes_read == 200
N = 19
buf = Vector{UInt8}(undef, N)
copyto!(buf, 1, buffer, 1, N)
@test buf == view(codeunits(multicsv), 1:N)
ioobj = get_object_stream("test.csv", config)
i = 1
while i < sizeof(multicsv)
nb = i + N > length(multicsv) ? length(multicsv) - i + 1 : N
readbytes!(ioobj, buf, N)
@test view(buf, 1:nb) == view(codeunits(multicsv), i:i+nb-1)
i += N
end
close(ioobj)
end
@testset "ReadStream large readbytes!" begin
multicsv = "1,2,3,4,5,6,7,8,9,1\n"^1000000; # 20 MB
nbytes_written = put_object(codeunits(multicsv), "test.csv", config)
@test nbytes_written == 20 * 1000 * 1000
buffer = Vector{UInt8}(undef, 20 * 1000 * 1000)
nbytes_read = get_object!(buffer, "test.csv", config)
@test nbytes_read == 20 * 1000 * 1000
N = 1024*1024
buf = Vector{UInt8}(undef, N)
copyto!(buf, 1, buffer, 1, N)
@test buf == view(codeunits(multicsv), 1:N)
ioobj = get_object_stream("test.csv", config)
i = 1
while i < sizeof(multicsv)
nb = i + N > length(multicsv) ? length(multicsv) - i + 1 : N
readbytes!(ioobj, buf, N)
@test view(buf, 1:nb) == view(codeunits(multicsv), i:i+nb-1)
i += N
end
close(ioobj)
end
@testset "ReadStream small unsafe_read" begin
multicsv = "1,2,3,4,5,6,7,8,9,1\n"^10; # 200 B
nbytes_written = put_object(codeunits(multicsv), "test.csv", config)
@test nbytes_written == 200
buffer = Vector{UInt8}(undef, 200)
nbytes_read = get_object!(buffer, "test.csv", config)
@test nbytes_read == 200
N = 19
buf = Vector{UInt8}(undef, N)
copyto!(buf, 1, buffer, 1, N)
@test buf == view(codeunits(multicsv), 1:N)
ioobj = get_object_stream("test.csv", config)
i = 1
while i < sizeof(multicsv)
nb = i + N > length(multicsv) ? length(multicsv) - i + 1 : N
unsafe_read(ioobj, pointer(buf), nb)
@test view(buf, 1:nb) == view(codeunits(multicsv), i:i+nb-1)
i += N
end
close(ioobj)
end
@testset "ReadStream large unsafe_read" begin
multicsv = "1,2,3,4,5,6,7,8,9,1\n"^1000000; # 20 MB
nbytes_written = put_object(codeunits(multicsv), "test.csv", config)
@test nbytes_written == 20 * 1000 * 1000
buffer = Vector{UInt8}(undef, 20 * 1000 * 1000)
nbytes_read = get_object!(buffer, "test.csv", config)
@test nbytes_read == 20 * 1000 * 1000
N = 1024*1024
buf = Vector{UInt8}(undef, N)
copyto!(buf, 1, buffer, 1, N)
@test buf == view(codeunits(multicsv), 1:N)
ioobj = get_object_stream("test.csv", config)
i = 1
while i < sizeof(multicsv)
nb = i + N > length(multicsv) ? length(multicsv) - i + 1 : N
unsafe_read(ioobj, pointer(buf), nb)
@test view(buf, 1:nb) == view(codeunits(multicsv), i:i+nb-1)
i += N
end
close(ioobj)
end
@testset "ReadStream small readbytes! decompress" begin
multicsv = "1,2,3,4,5,6,7,8,9,1\n"^100; # 2000 B
codec = ZlibCompressor()
CodecZlib.initialize(codec)
compressed = transcode(codec, codeunits(multicsv))
nbytes_written = put_object(compressed, "test.csv.gz", config)
@test nbytes_written == length(compressed)
CodecZlib.finalize(codec)
buffer = Vector{UInt8}(undef, length(compressed))
nbytes_read = get_object!(buffer, "test.csv.gz", config)
@test nbytes_read == length(compressed)
N = 19
buf = Vector{UInt8}(undef, N)
ioobj = get_object_stream("test.csv.gz", config; decompress="zlib")
i = 1
while i < sizeof(multicsv)
nb = i + N > length(multicsv) ? length(multicsv) - i + 1 : N
readbytes!(ioobj, buf, N)
@test view(buf, 1:nb) == view(codeunits(multicsv), i:i+nb-1)
i += N
end
close(ioobj)
end
@testset "ReadStream large readbytes! decompress" begin
multicsv = "1,2,3,4,5,6,7,8,9,1\n"^1000000; # 20 MB
codec = ZlibCompressor()
CodecZlib.initialize(codec)
compressed = transcode(codec, codeunits(multicsv))
nbytes_written = put_object(compressed, "test.csv.gz", config)
@test nbytes_written == length(compressed)
CodecZlib.finalize(codec)
buffer = Vector{UInt8}(undef, length(compressed))
nbytes_read = get_object!(buffer, "test.csv.gz", config)
@test nbytes_read == length(compressed)
N = 1024*1024
buf = Vector{UInt8}(undef, N)
ioobj = get_object_stream("test.csv.gz", config; decompress="zlib")
i = 1
while i < sizeof(multicsv)
nb = i + N > length(multicsv) ? length(multicsv) - i + 1 : N
readbytes!(ioobj, buf, N)
@test view(buf, 1:nb) == view(codeunits(multicsv), i:i+nb-1)
i += N
end
close(ioobj)
end
@testset "ReadStream empty file readbytes! decompress" begin
multicsv = "" # 0 MB
codec = ZlibCompressor()
CodecZlib.initialize(codec)
compressed = transcode(codec, codeunits(multicsv))
nbytes_written = put_object(compressed, "test.csv.gz", config)
@test nbytes_written == length(compressed)
CodecZlib.finalize(codec)
buffer = Vector{UInt8}(undef, length(compressed))
nbytes_read = get_object!(buffer, "test.csv.gz", config)
@test nbytes_read == length(compressed)
N = 1024*1024
buf = ones(UInt8, N)
ioobj = get_object_stream("test.csv.gz", config; decompress="zlib")
readbytes!(ioobj, buf, N)
@test eof(ioobj)
@test all(buf .== 1)
close(ioobj)
end
@testset "ReadStream empty file readbytes!" begin
multicsv = "" # 0 MB
data = codeunits(multicsv)
nbytes_written = put_object(data, "test.csv", config)
@test nbytes_written == length(data)
buffer = Vector{UInt8}(undef, length(data))
nbytes_read = get_object!(buffer, "test.csv", config)
@test nbytes_read == length(data)
N = 1024*1024
buf = ones(UInt8, N)
ioobj = get_object_stream("test.csv", config)
readbytes!(ioobj, buf, N)
@test eof(ioobj)
@test all(buf .== 1)
close(ioobj)
end
@testset "ReadStream empty file unsafe_read" begin
multicsv = "" # 0 MB
data = codeunits(multicsv)
nbytes_written = put_object(data, "test.csv", config)
@test nbytes_written == length(data)
buffer = Vector{UInt8}(undef, length(data))
nbytes_read = get_object!(buffer, "test.csv", config)
@test nbytes_read == length(data)
N = 1024*1024
buf = ones(UInt8, N)
ioobj = get_object_stream("test.csv", config)
@test_throws EOFError unsafe_read(ioobj, pointer(buf), N)
@test eof(ioobj)
@test all(buf .== 1)
close(ioobj)
end
@testset "ReadStream read last byte" begin
multicsv = "1,2,3,4,5,6,7,8,9,1\n"^1000000; # 20 MB
nbytes_written = put_object(codeunits(multicsv), "test.csv", config)
@test nbytes_written == 20 * 1000 * 1000
buffer = Vector{UInt8}(undef, 20 * 1000 * 1000)
nbytes_read = get_object!(buffer, "test.csv", config)
@test nbytes_read == 20 * 1000 * 1000
N = length(multicsv) - 1
buf = Vector{UInt8}(undef, N)
copyto!(buf, 1, buffer, 1, N)
@test buf == view(codeunits(multicsv), 1:N)
ioobj = get_object_stream("test.csv", config)
readbytes!(ioobj, buf, N)
@test buf == view(codeunits(multicsv), 1:N)
@test read(ioobj, UInt8) == UInt8(last(multicsv))
close(ioobj)
end
@testset "ReadStream read bytes into file" begin
multicsv = "1,2,3,4,5,6,7,8,9,1\n"^1000000; # 20 MB
nbytes_written = put_object(codeunits(multicsv), "test.csv", config)
@test nbytes_written == 20 * 1000 * 1000
buffer = Vector{UInt8}(undef, 20 * 1000 * 1000)
nbytes_read = get_object!(buffer, "test.csv", config)
@test nbytes_read == 20 * 1000 * 1000
(path, io) = mktemp()
rs = get_object_stream("test.csv", config)
write(io, rs)
close(io)
io = open(path, "r")
filedata = read(io)
@test length(filedata) == length(codeunits(multicsv))
close(io)
@test buffer == codeunits(multicsv)
close(rs)
end
# WriteStream
@testset "WriteStream write small bytes" begin
multicsv = "1,2,3,4,5,6,7,8,9,1\n"^100; # 2000 B
N = 2000
ws = put_object_stream("test.csv", config)
i = 1
while i < sizeof(multicsv)
nb = i + N > length(multicsv) ? length(multicsv)-i+1 : N
buf = Vector{UInt8}(undef, nb)
copyto!(buf, 1, codeunits(multicsv), i, nb)
write(ws, buf)
i += N
end
close(ws)
rs = get_object_stream("test.csv", config)
objdata = read(rs)
@test objdata == codeunits(multicsv)
end
@testset "WriteStream write large bytes" begin
multicsv = "1,2,3,4,5,6,7,8,9,1\n"^1000000; # 20MB
N = 2000000
ws = put_object_stream("test.csv", config)
i = 1
while i < sizeof(multicsv)
nb = i + N > length(multicsv) ? length(multicsv)-i+1 : N
buf = Vector{UInt8}(undef, nb)
copyto!(buf, 1, codeunits(multicsv), i, nb)
write(ws, buf)
i += N
end
close(ws)
rs = get_object_stream("test.csv", config)
objdata = read(rs)
@test objdata == codeunits(multicsv)
end
@testset "WriteStream write empty" begin
multicsv = ""; # 0 B
ws = put_object_stream("test.csv", config)
write(ws, codeunits(multicsv))
close(ws)
rs = get_object_stream("test.csv", config)
objdata = read(rs)
@test objdata == codeunits(multicsv)
end
@testset "WriteStream write small bytes and compress" begin
multicsv = "1,2,3,4,5,6,7,8,9,1\n"^100; # 2000 B
N = 2000
ws = put_object_stream("test.csv.gz", config; compress="gzip")
i = 1
while i < sizeof(multicsv)
nb = i + N > length(multicsv) ? length(multicsv)-i+1 : N
buf = Vector{UInt8}(undef, nb)
copyto!(buf, 1, codeunits(multicsv), i, nb)
write(ws, buf)
i += N
end
close(ws)
rs = get_object_stream("test.csv.gz", config; decompress="gzip")
objdata = read(rs)
@test objdata == codeunits(multicsv)
end
@testset "WriteStream write large bytes and compress" begin
multicsv = "1,2,3,4,5,6,7,8,9,1\n"^1000000; # 20MB
N = 2000000
ws = put_object_stream("test.csv", config; compress="gzip")
i = 1
while i < sizeof(multicsv)
nb = i + N > length(multicsv) ? length(multicsv)-i+1 : N
buf = Vector{UInt8}(undef, nb)
copyto!(buf, 1, codeunits(multicsv), i, nb)
write(ws, buf)
i += N
end
close(ws)
rs = get_object_stream("test.csv", config; decompress="gzip")
objdata = read(rs)
@test objdata == codeunits(multicsv)
end
@testset "WriteStream write bytes from file" begin
multicsv = "1,2,3,4,5,6,7,8,9,1\n"^1000000; # 20MB
N = 2000000
(path, io) = mktemp()
written = write(io, codeunits(multicsv))
@test written == length(codeunits(multicsv))
close(io)
ws = put_object_stream("test.csv", config)
io = open(path, "r")
write(ws, io)
close(ws)
rs = get_object_stream("test.csv", config)
objdata = read(rs)
@test objdata == codeunits(multicsv)
end
end
function run_read_write_test_cases(read_config::AbstractConfig, write_config::AbstractConfig = read_config)
@testset "0B file, 0B buffer" begin
buffer = Vector{UInt8}(undef, 0)
nbytes_written = put_object(codeunits(""), "empty.csv", write_config)
@test nbytes_written == 0
nbytes_read = get_object!(buffer, "empty.csv", read_config)
@test nbytes_read == 0
end
@testset "0B file, 1KB buffer" begin
buffer = Vector{UInt8}(undef, 1000)
nbytes_written = put_object(codeunits(""), "empty.csv", write_config)
@test nbytes_written == 0
nbytes_read = get_object!(buffer, "empty.csv", read_config)
@test nbytes_read == 0
end
@testset "100B file, 100B buffer" begin
input = "1,2,3,4,5,6,7,8,9,1\n" ^ 5
buffer = Vector{UInt8}(undef, 100)
@assert sizeof(input) == 100
@assert sizeof(buffer) == sizeof(input)
nbytes_written = put_object(codeunits(input), "test100B.csv", write_config)
@test nbytes_written == 100
nbytes_read = get_object!(buffer, "test100B.csv", read_config)
@test nbytes_read == 100
@test String(buffer[1:nbytes_read]) == input
end
@testset "100B file, 1KB buffer" begin
input = "1,2,3,4,5,6,7,8,9,1\n" ^ 5
buffer = Vector{UInt8}(undef, 1000)
@assert sizeof(input) == 100
@assert sizeof(buffer) > sizeof(input)
nbytes_written = put_object(codeunits(input), "test100B.csv", write_config)
@test nbytes_written == 100
nbytes_read = get_object!(buffer, "test100B.csv", read_config)
@test nbytes_read == 100
@test String(buffer[1:nbytes_read]) == input
end
@testset "1MB file, 1MB buffer" begin
input = "1,2,3,4,5,6,7,8,9,1\n" ^ 50_000
buffer = Vector{UInt8}(undef, 1_000_000)
@assert sizeof(input) == 1_000_000 == sizeof(buffer)
nbytes_written = put_object(codeunits(input), "test100B.csv", write_config)
@test nbytes_written == 1_000_000
nbytes_read = get_object!(buffer, "test100B.csv", read_config)
@test nbytes_read == 1_000_000
@test String(buffer[1:nbytes_read]) == input
end
@testset "delete_object" begin
input = "1,2,3,4,5,6,7,8,9,1\n" ^ 5
buffer = Vector{UInt8}(undef, 100)
@assert sizeof(input) == 100
@assert sizeof(buffer) == sizeof(input)
nbytes_written = put_object(codeunits(input), "test100B.csv", write_config)
@test nbytes_written == 100
delete_object("test100B.csv", write_config)
try
nbytes_read = get_object!(buffer, "test100B.csv", read_config)
@test false # should throw
catch e
@test e isa RustyObjectStore.GetException
@test occursin("not found", e.msg)
end
end
# Large files should use multipart upload / download requests
@testset "20MB file, 20MB buffer" begin
input = "1,2,3,4,5,6,7,8,9,1\n" ^ 1_000_000
buffer = Vector{UInt8}(undef, 20_000_000)
@assert sizeof(input) == 20_000_000 == sizeof(buffer)
nbytes_written = put_object(codeunits(input), "test100B.csv", write_config)
@test nbytes_written == 20_000_000
nbytes_read = get_object!(buffer, "test100B.csv", read_config)
@test nbytes_read == 20_000_000
@test String(buffer[1:nbytes_read]) == input
end
@testset "20MB file, 21MB buffer" begin
input = "1,2,3,4,5,6,7,8,9,1\n" ^ 1_000_000
buffer = Vector{UInt8}(undef, 21_000_000)
@assert sizeof(input) < sizeof(buffer)
nbytes_written = put_object(codeunits(input), "test100B.csv", write_config)
@test nbytes_written == 20_000_000
nbytes_read = get_object!(buffer, "test100B.csv", read_config)
@test nbytes_read == 20_000_000
@test String(buffer[1:nbytes_read]) == input
end
@testset "1MB file, 20MB buffer" begin
input = "1,2,3,4,5,6,7,8,9,1\n" ^ 50_000
nbytes_written = put_object(codeunits(input), "test100B.csv", write_config)
@test nbytes_written == 1_000_000
# Edge case for multipart download, file is less than threshold but buffer is greater
buffer = Vector{UInt8}(undef, 20_000_000)
nbytes_read = get_object!(buffer, "test100B.csv", read_config)
@test nbytes_read == 1_000_000
@test String(buffer[1:nbytes_read]) == input
end
end
function run_sanity_test_cases(read_config::AbstractConfig, write_config::AbstractConfig = read_config)
@testset "Round trip" begin
input = "1,2,3,4,5,6,7,8,9,1\n"
buffer = Vector{UInt8}(undef, length(input))
nbytes_written = put_object(codeunits(input), "roundtrip.csv", write_config)
@test nbytes_written == length(input)
nbytes_read = get_object!(buffer, "roundtrip.csv", read_config)
@test nbytes_read == length(input)
@test String(buffer[1:nbytes_read]) == input
end
end
function run_list_test_cases(config::AbstractConfig)
@testset "basic listing" begin
for i in range(10; step=10, length=5)
nbytes_written = put_object(codeunits(repeat('=', i)), "list/$(i).csv", config)
@test nbytes_written == i
end
entries = list_objects("list/", config)
@test length(entries) == 5
@test map(x -> x.size, entries) == range(10; step=10, length=5)
@test map(x -> x.location, entries) == ["list/10.csv", "list/20.csv", "list/30.csv", "list/40.csv", "list/50.csv"]
end
@testset "basic prefix" begin
for i in range(10; step=10, length=5)
nbytes_written = put_object(codeunits(repeat('=', i)), "other/$(i).csv", config)
@test nbytes_written == i
end
for i in range(110; step=10, length=5)
nbytes_written = put_object(codeunits(repeat('=', i)), "other/prefix/$(i).csv", config)
@test nbytes_written == i
end
entries = list_objects("other/", config)
@test length(entries) == 10
entries = list_objects("other/prefix/", config)
@test length(entries) == 5
@test map(x -> x.size, entries) == range(110; step=10, length=5)
@test map(x -> x.location, entries) ==
["other/prefix/110.csv", "other/prefix/120.csv", "other/prefix/130.csv", "other/prefix/140.csv", "other/prefix/150.csv"]
entries = list_objects("other/nonexistent/", config)
@test length(entries) == 0
entries = list_objects("other/p/", config)
@test length(entries) == 0
end
@testset "list empty entries" begin
for i in range(10; step=10, length=3)
nbytes_written = put_object(codeunits(""), "list_empty/$(i).csv", config)
@test nbytes_written == 0
end
entries = list_objects("list_empty/", config)
@test length(entries) == 3
@test map(x -> x.size, entries) == [0, 0, 0]
@test map(x -> x.location, entries) == ["list_empty/10.csv", "list_empty/20.csv", "list_empty/30.csv"]
end
@testset "list stream" begin
data = range(10; step=10, length=1001)
for i in data
nbytes_written = put_object(codeunits(repeat('=', i)), "list/$(i).csv", config)
@test nbytes_written == i
end
stream = list_objects_stream("list/", config)
entries = next_chunk!(stream)
@test length(entries) == max_entries_per_chunk()
one_entry = next_chunk!(stream)
@test length(one_entry) == 1
@test isnothing(next_chunk!(stream))
append!(entries, one_entry)
@test sort(map(x -> x.size, entries)) == data
@test sort(map(x -> x.location, entries)) == sort(map(x -> "list/$(x).csv", data))
end
@testset "list stream finish" begin
data = range(10; step=10, length=1001)
for i in data
nbytes_written = put_object(codeunits(repeat('=', i)), "list/$(i).csv", config)
@test nbytes_written == i
end
stream = list_objects_stream("list/", config)
entries = next_chunk!(stream)
@test length(entries) == max_entries_per_chunk()
@test finish!(stream)
@test isnothing(next_chunk!(stream))
@test !finish!(stream)
end
@testset "list stream offset" begin
key(x) = "offset/$(lpad(x, 10, "0")).csv"
data = range(10; step=10, length=101)
for i in data
nbytes_written = put_object(codeunits(repeat('=', i)), key(i), config)
@test nbytes_written == i
end
stream = list_objects_stream("offset/", config; offset=key(data[50]))
entries = next_chunk!(stream)
@test length(entries) == 51
@test isnothing(next_chunk!(stream))
@test sort(map(x -> x.size, entries)) == data[51:end]
@test sort(map(x -> x.location, entries)) == sort(map(x -> key(x), data[51:end]))
end
end
end # @testsetup
@testitem "Basic BlobStorage usage" setup=[InitializeObjectStore, ReadWriteCases] begin
using CloudBase.CloudTest: Azurite
using RustyObjectStore: AzureConfig, ClientOptions
# For interactive testing, use Azurite.run() instead of Azurite.with()
# conf, p = Azurite.run(; debug=true, public=false); atexit(() -> kill(p))
Azurite.with(; debug=true, public=false) do conf
_credentials, _container = conf
base_url = _container.baseurl
config = AzureConfig(;
storage_account_name=_credentials.auth.account,
container_name=_container.name,
storage_account_key=_credentials.auth.key,
host=base_url
)
run_read_write_test_cases(config)
run_stream_test_cases(config)
run_list_test_cases(config)
config_padded = AzureConfig(;
storage_account_name=_credentials.auth.account * " \n",
container_name=_container.name * " \n",
storage_account_key=_credentials.auth.key * " \n",
host=base_url * " \n"
)
run_sanity_test_cases(config_padded)
end # Azurite.with
end # @testitem
# NOTE: PUT on azure always requires credentials, while GET on public containers doesn't
@testitem "Basic BlobStorage usage (anonymous read enabled)" setup=[InitializeObjectStore, ReadWriteCases] begin
using CloudBase.CloudTest: Azurite
using RustyObjectStore: AzureConfig, ClientOptions
# For interactive testing, use Azurite.run() instead of Azurite.with()
# conf, p = Azurite.run(; debug=true, public=true); atexit(() -> kill(p))
Azurite.with(; debug=true, public=true) do conf
_credentials, _container = conf
base_url = _container.baseurl
config = AzureConfig(;
storage_account_name=_credentials.auth.account,
container_name=_container.name,
storage_account_key=_credentials.auth.key,
host=base_url
)
config_no_creds = AzureConfig(;
storage_account_name=_credentials.auth.account,
container_name=_container.name,
host=base_url
)
run_read_write_test_cases(config_no_creds, config)
end # Azurite.with
end # @testitem
@testitem "Basic AWS S3 usage" setup=[InitializeObjectStore, ReadWriteCases] begin
using CloudBase.CloudTest: Minio
using RustyObjectStore: AWSConfig, ClientOptions
# For interactive testing, use Minio.run() instead of Minio.with()
# conf, p = Minio.run(; debug=true, public=false); atexit(() -> kill(p))
Minio.with(; debug=true, public=false) do conf
_credentials, _container = conf
base_url = _container.baseurl
default_region = "us-east-1"
config = AWSConfig(;
region=default_region,
bucket_name=_container.name,
access_key_id=_credentials.access_key_id,
secret_access_key=_credentials.secret_access_key,
host=base_url
)
run_read_write_test_cases(config)
run_stream_test_cases(config)
run_list_test_cases(config)
config_padded = AWSConfig(;
region=default_region * " \n",
bucket_name=_container.name * " \n",
access_key_id=_credentials.access_key_id * " \n",
secret_access_key=_credentials.secret_access_key * " \n",
host=base_url * " \n"
)
run_sanity_test_cases(config_padded)
end # Minio.with
end # @testitem
@testitem "Basic AWS S3 usage (anonymous read enabled)" setup=[InitializeObjectStore, ReadWriteCases] begin
using CloudBase.CloudTest: Minio
using RustyObjectStore: AWSConfig, ClientOptions
# For interactive testing, use Minio.run() instead of Minio.with()
# conf, p = Minio.run(; debug=true, public=true); atexit(() -> kill(p))
Minio.with(; debug=true, public=true) do conf
_credentials, _container = conf
base_url = _container.baseurl
default_region = "us-east-1"
config = AWSConfig(;
region=default_region,
bucket_name=_container.name,
access_key_id=_credentials.access_key_id,
secret_access_key=_credentials.secret_access_key,
host=base_url
)
config_no_creds = AWSConfig(;
region=default_region,
bucket_name=_container.name,
host=base_url
)
run_read_write_test_cases(config_no_creds, config)
end # Minio.with
end # @testitem
| RustyObjectStore | https://github.com/RelationalAI/RustyObjectStore.jl.git |
|
[
"MIT"
] | 0.8.2 | 84a23be59399f9addb44021a7a4c7af2eb966589 | code | 432 | @testsetup module InitializeObjectStore
using RustyObjectStore
test_config = StaticConfig(
n_threads=0,
cache_capacity=20,
cache_ttl_secs=30 * 60,
cache_tti_secs=5 * 60,
multipart_put_threshold=8 * 1024 * 1024,
multipart_get_threshold=8 * 1024 * 1024,
multipart_get_part_size=8 * 1024 * 1024,
concurrency_limit=512
)
init_object_store(test_config)
end
| RustyObjectStore | https://github.com/RelationalAI/RustyObjectStore.jl.git |
|
[
"MIT"
] | 0.8.2 | 84a23be59399f9addb44021a7a4c7af2eb966589 | code | 289 | @testitem "destroy_* functions do not panic" setup=[InitializeObjectStore] begin
result = @ccall RustyObjectStore.rust_lib._destroy_from_julia_thread()::Cint
@test result == 0
result = @ccall RustyObjectStore.rust_lib._destroy_in_tokio_thread()::Cint
@test result == 0
end
| RustyObjectStore | https://github.com/RelationalAI/RustyObjectStore.jl.git |
|
[
"MIT"
] | 0.8.2 | 84a23be59399f9addb44021a7a4c7af2eb966589 | code | 344 | # We should always be using the JLL package.
# We allow setting the environment variable OBJECT_STORE_LIB to override this
# for development reasons, but it should never be set in CI.
@testitem "Using object_store_ffi_jll" begin
using object_store_ffi_jll
@test RustyObjectStore.rust_lib == object_store_ffi_jll.libobject_store_ffi
end
| RustyObjectStore | https://github.com/RelationalAI/RustyObjectStore.jl.git |
|
[
"MIT"
] | 0.8.2 | 84a23be59399f9addb44021a7a4c7af2eb966589 | code | 143 | using ReTestItems
using RustyObjectStore
withenv("RUST_BACKTRACE"=>1) do
runtests(RustyObjectStore; testitem_timeout=180, nworkers=1)
end
| RustyObjectStore | https://github.com/RelationalAI/RustyObjectStore.jl.git |
|
[
"MIT"
] | 0.8.2 | 84a23be59399f9addb44021a7a4c7af2eb966589 | code | 798 | @testitem "Handle object_store_ffi panic" begin
# This needs to be run on a spawned process to ensure proper initialization
julia_cmd_ffi_panic = Base.julia_cmd()
code = """
using Test
using RustyObjectStore
triggered = false
function on_panic()
global triggered
triggered = true
end
init_object_store(;on_rust_panic=on_panic)
@test !triggered
@ccall RustyObjectStore.rust_lib._trigger_panic()::Cint
@test timedwait(() -> triggered, 0.5) == :ok
triggered = false
@ccall RustyObjectStore.rust_lib._trigger_panic()::Cint
@test timedwait(() -> triggered, 0.5) == :ok
"""
cmd = `$(julia_cmd_ffi_panic) --startup-file=no --project=. -e $code`
@test success(pipeline(cmd; stdout=stdout, stderr=stderr))
end
| RustyObjectStore | https://github.com/RelationalAI/RustyObjectStore.jl.git |
|
[
"MIT"
] | 0.8.2 | 84a23be59399f9addb44021a7a4c7af2eb966589 | docs | 10070 | # RustyObjectStore.jl
[](https://github.com/RelationalAI/RustyObjectStore.jl/actions/workflows/CI.yml)
RustyObjectStore.jl is a Julia package for getting and putting data in cloud object stores, such as Azure Blob Storage and AWS S3.
It is built on top of the Rust [object_store crate](https://docs.rs/object_store/).
It provides a minimal API and focuses on high throughput.
_The package is under active development._
## Usage
The object_store runtime must be started before any requests are sent.
```julia
using RustyObjectStore
init_object_store()
```
Requests are sent via calling `put_object` or `get_object!`, providing the location of the object to put/get, either the data to send or a buffer that will receive data, and credentials.
For `put_object` the data must be a vector of bytes (`UInt8`).
For `get_object!` the buffer must be a vector into which bytes (`UInt8`) can be written.
```julia
using RustyObjectStore: get_object!, put_object, AzureConfig
config = AzureConfig(
storage_account_name="my_account",
container_name="my_container",
storage_account_key="my_key"
)
input = "1,2,3,4,5,6,7,8,9,0\n" ^ 5 # 100 B
nbytes_written = put_object(codeunits(input), "path/to/example.csv", config)
@assert nbytes_written == 100
buffer = Vector{UInt8}(undef, 1000) # 1000 B
@assert sizeof(buffer) > sizeof(input)
nbytes_read = get_object!(buffer, "path/to/example.csv", config)
@assert nbytes_read == 100
@assert String(buffer[1:nbytes_read]) == input
```
One-time global configuration can be set using a StaticConfig object passed to init\_object\_store():
```julia
test_config = StaticConfig(
n_threads=0,
cache_capacity=20,
cache_ttl_secs=30 * 60,
cache_tti_secs=5 * 60,
multipart_put_threshold=8 * 1024 * 1024,
multipart_get_threshold=8 * 1024 * 1024,
multipart_get_part_size=8 * 1024 * 1024,
concurrency_limit=512
)
init_object_store(test_config)
```
n\_threads is the number of Rust executor threads to use. The default of 0 means to use as many
threads as there are cores.
cache\_capacity is the size of the LRU cache rust uses to cache connection objects. Here a connection
means a unique combination of destination URL, credentials, and per-connection configuration such as
timeouts; it does not mean an HTTP connection.
cache\_ttl\_secs is the time-to-live in seconds for the rust connection cache. Using 0 will disable
ttl eviction.
cache\_tti\_secs is the time in seconds that a connection can be idle before it is removed from the
rust cache. Using 0 will disable tti eviction.
multipart\_put\_threshold is the size in bytes for which any put request over this size will use a
multipart upload. The put part size is determined by the rust object\_store implementation, which
uses 10MB.
multipart\_get\_threshold and multipart\_get\_part\_size configure automatic multipart gets. The part
size can be greater than the threshold without breaking anything, but it may not make sense to do so.
The default 8MB for these values was borrowed from CloudStore.jl.
concurrency\_limit is the max number of concurrent Rust tasks that will be allowed for requests.
## Design
#### Packaging
The Rust [object_store](https://github.com/apache/arrow-rs/tree/master/object_store) crate does not provide a C API, so we have defined a C API in [object_store_ffi](https://github.com/relationalAI/object_store_ffi).
RustyObjectStore.jl depends on [object_store_ffi_jll.jl](https://github.com/JuliaBinaryWrappers/object_store_ffi_jll.jl), which provides a pre-built object_store_ffi library, and calls into the native library via `@ccall`.
#### Rust/Julia Interaction
Julia calls into the native library providing a libuv condition variable and then waits on that variable.
In the native code, the request from Julia is passed into a queue that is processed by a Rust spawned task.
Once the request to cloud storage is complete, Rust signals the condition variable.
In this way, the requests are asynchronous all the way up to Julia and the network processing is handled in the context of the native thread pool.
For a GET request, Julia provides a buffer for the native library to write into.
This requires Julia to know a suitable size before-hand and requires the native library to do an extra memory copy, but the upside is that Julia controls the lifetime of the memory.
The library provides a way for Julia code to be notified about a panic on a Rust thread through the `on_rust_panic` argument of `init_object_store`.
The default behavior is to log the stack trace (if enabled through RUST_BACKTRACE) and exit the process.
The general recommendation is to treat Rust panics as fatal because Julia tasks may hang due to not being notified.
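For example, here is a minimal sketch of installing a custom handler (the handler name and log message are illustrative; the zero-argument callback signature matches this package's test suite):
```julia
using RustyObjectStore

function my_panic_handler()
    # Treat a Rust panic as fatal: Julia tasks waiting on in-flight
    # requests may never be woken up.
    @error "object_store_ffi Rust thread panicked"
end

init_object_store(; on_rust_panic=my_panic_handler)
```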
#### Threading Model
Rust object_store uses the [tokio](https://docs.rs/tokio) async runtime. By default tokio sets up a worker thread pool with a number of threads equal to the number of cores.
This is configurable using the StaticConfig n\_threads option described above.
The unit of scheduling for tokio is a task, and tasks are created by spawn calls. Tasks must be non-blocking and use async/await for I/O operations,
which also serve as yield points for the cooperative concurrency between tasks. There is work stealing of tasks among the worker threads.
In object_store_ffi we use buffer_unordered to create a task for each request from Julia (up to a configurable concurrency limit) and allow them to be processed in any order.
The concurrency limit is configurable using the StaticConfig concurrency\_limit option described above.
Julia will call into object_store_ffi providing a libuv condition variable and then wait on that variable.
In the Rust code, the request from Julia is passed into a queue that is processed by a Rust spawned task. Once the request to cloud storage is complete,
Rust signals the condition variable. In this way, the requests are asynchronous all the way up to Julia and the network processing is handled in the context of the Rust thread pool.
## Development
When working on RustyObjectStore.jl you can either use [object_store_ffi_jll.jl](https://github.com/JuliaBinaryWrappers/object_store_ffi_jll.jl) or use a local build of [object_store_ffi](https://github.com/relationalAI/object_store_ffi).
Using object_store_ffi_jll.jl is just like using any other Julia package.
For example, you can change object_store_ffi_jll.jl version by updating the Project.toml `compat` entry and running `Pkg.update` to get the latest compatible release,
or `Pkg.develop` to use an unreleased version.
Alternatively, you can use a local build of object_store_ffi library by setting the `OBJECT_STORE_LIB` environment variable to the location of the build.
For example, if you have the object_store_ffi repository at `~/repos/object_store_ffi` and build the library by running `cargo build --release` from the base of that repository,
then you could use that local build by setting `OBJECT_STORE_LIB="~/repos/object_store_ffi/target/release"`.
The `OBJECT_STORE_LIB` environment variable is intended to be used only for local development.
The library path is set at package precompile time, so if the environment variable is changed RustyObjectStore.jl must recompile for the change to take effect.
You can check the location of the library in use by inspecting `RustyObjectStore.rust_lib`.
Since RustyObjectStore.jl is the primary user of object_store_ffi, the packages should usually be developed alongside one another.
For example, updating object_store_ffi and then testing out the changes in RustyObjectStore.jl.
A new release of object_store_ffi should usually be followed by a new release of object_store_ffi_jll.jl, and then a new release RustyObjectStore.jl.
#### Testing
Tests use the [ReTestItems.jl](https://github.com/JuliaTesting/ReTestItems.jl) test framework.
Run tests using the package manager Pkg.jl like:
```sh
$ julia --project -e 'using Pkg; Pkg.test()'
```
or after starting in a Julia session started with `julia --project`:
```julia
julia> # press ] to enter the Pkg REPL mode
(RustyObjectStore) pkg> test
```
Alternatively, tests can be run using ReTestItems.jl directly, which supports running individual tests.
For example:
```julia
julia> using ReTestItems
julia> runtests("test/azure_api_tests.jl"; name="AzureCredentials")
```
If `OBJECT_STORE_LIB` is set, then running tests locally will use the specified local build of the object_store_ffi library, rather than the version installed by object_store_ffi_jll.jl.
This is useful for testing out changes to object_store_ffi.
Adding new tests is done by writing test code in a `@testitem` in a file suffixed `*_tests.jl`.
See the existing [tests](./test) or the [ReTestItems documentation](https://github.com/JuliaTesting/ReTestItems.jl/#writing-tests) for examples.
#### Release Process
New releases of RustyObjectStore.jl can be made by incrementing the version number in the Project.toml file following [Semantic Versioning](https://semver.org),
and then commenting on the commit that should be released with `@JuliaRegistrator register`
(see [example](https://github.com/RelationalAI/RustyObjectStore.jl/commit/1b1ba5a198e76afe37f75a1d07e701deb818869c#comments)).
The [JuliaRegistrator](https://github.com/JuliaRegistries/Registrator.jl) bot will reply to the comment and automatically open a PR to the [General](https://github.com/JuliaRegistries/General/) package registry, that should then automatically be merged within a few minutes.
Once that PR to General is merged the new version of RustyObjectStore.jl is available, and the TagBot Github Action will add a Git tag and a GitHub release for the new version.
RustyObjectStore.jl uses the object_store_ffi library via depending on object_store_ffi_jll.jl which installs pre-built binaries.
So when a new release of object_store_ffi is made, we need there to be a new release of object_store_ffi_jll.jl before we can make a release of RustyObjectStore.jl that uses the latest object_store_ffi.
| RustyObjectStore | https://github.com/RelationalAI/RustyObjectStore.jl.git |
|
[
"MIT"
] | 0.1.1 | f4dc2c1d994c7e2e602692a7dadd2ac79212c3a9 | code | 11737 | module DifferentiableFlatten
using SparseArrays, ChainRulesCore, NamedTupleTools, Requires, OrderedCollections
# Adapted from ParameterHandling.jl with the following license.
#=
Copyright (c) 2020 Invenia Technical Computing Corporation
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
=#
"""
flatten(x)
Returns a "flattened" representation of `x` as a vector of real numbers, and a function
`unflatten` that takes a vector of reals of the same length and returns an object of the
same type as `x`.
`unflatten` is the inverse of `flatten`, so
```julia
julia> x = (randn(5), 5.0, (a=5.0, b=randn(2, 3)));
julia> v, unflatten = flatten(x);
julia> x == unflatten(v)
true
```
"""
function flatten end
maybeflatten(x::Real) = x
maybeflatten(x) = flatten(x)
function flatten(x::Real)
v = [x]
unflatten_to_Real(v) = only(v)
return v, unflatten_to_Real
end
flatten(x::Vector{<:Real}) = (identity.(x), identity)
function flatten(x::AbstractVector)
x_vecs_and_backs = map(val -> flatten(val), identity.(x))
x_vecs, backs = first.(x_vecs_and_backs), last.(x_vecs_and_backs)
function Vector_from_vec(x_vec)
sz = _cumsum(map(_length, x_vecs))
x_Vec = [backs[n](x_vec[sz[n] - _length(x_vecs[n]) + 1:sz[n]]) for n in eachindex(x)]
return x_Vec
end
return reduce(vcat, x_vecs), Vector_from_vec
end
function flatten(x::AbstractArray)
x_vec, from_vec = flatten(vec(identity.(x)))
Array_from_vec(x_vec) = reshape(from_vec(x_vec), size(x))
return identity.(x_vec), Array_from_vec
end
function flatten(x::Tuple)
x_vecs_and_backs = map(val -> flatten(val), x)
x_vecs, x_backs = first.(x_vecs_and_backs), last.(x_vecs_and_backs)
lengths = map(_length, x_vecs)
sz = _cumsum(lengths)
function unflatten_to_Tuple(v)
map(x_backs, lengths, sz) do x_back, l, s
return x_back(v[s - l + 1:s])
end
end
return reduce(vcat, x_vecs), unflatten_to_Tuple
end
function flatten(x::NamedTuple)
x_vec, unflatten = flatten(values(x))
function unflatten_to_NamedTuple(v)
v_vec_vec = unflatten(v)
return NamedTuple{keys(x)}(v_vec_vec)
end
return identity.(x_vec), unflatten_to_NamedTuple
end
function flatten(d::AbstractDict, ks = collect(keys(d)))
_d = OrderedDict(k => d[k] for k in ks)
d_vec, unflatten = flatten(identity.(collect(values(_d))))
function unflatten_to_Dict(v)
v_vec_vec = unflatten(v)
return _build_ordered_dict(v_vec_vec, keys(_d))
end
return identity.(d_vec), unflatten_to_Dict
end
function _build_ordered_dict(vals, keys)
OrderedDict(key => vals[n] for (n, key) in enumerate(keys))
end
function ChainRulesCore.rrule(::typeof(_build_ordered_dict), vals, keys)
    _build_ordered_dict(vals, keys), Δ -> begin
        NoTangent(), values(Δ), NoTangent()
    end
end
function flatten(x)
    if Base.issingletontype(typeof(x))
        # singleton types (e.g. functions such as `exp`) carry no data, so the
        # flattened vector is empty and unflattening just returns `x` itself
        return Union{}[], _ -> x
    else
        v, un = flatten(ntfromstruct(x))
        return identity.(v), Unflatten(x, y -> structfromnt(typeof(x), un(y)))
    end
end
function zygote_flatten(::Real, x::Real)
v = [x]
unflatten_to_Real(v) = only(v)
return v, unflatten_to_Real
end
zygote_flatten(::Vector{<:Real}, x::Vector{<:Real}) = (identity.(x), identity)
function zygote_flatten(x1::AbstractVector, x2::AbstractVector)
x_vecs_and_backs = map(tuple.(identity.(x1), identity.(x2))) do val
zygote_flatten(val[1], val[2])
end
x_vecs, backs = first.(x_vecs_and_backs), last.(x_vecs_and_backs)
function Vector_from_vec(x_vec)
sz = _cumsum(map(_length, x_vecs))
x_Vec = [backs[n](x_vec[sz[n] - _length(x_vecs[n]) + 1:sz[n]]) for n in eachindex(x2)]
return x_Vec
end
return reduce(vcat, x_vecs), Vector_from_vec
end
function zygote_flatten(x1::AbstractArray, x2::AbstractArray)
x_vec, from_vec = zygote_flatten(vec(identity.(x1)), vec(identity.(x2)))
Array_from_vec(x_vec) = reshape(from_vec(x_vec), size(x2))
return identity.(x_vec), Array_from_vec
end
function zygote_flatten(x1::Tuple, x2::Tuple)
x_vecs_and_backs = map(tuple.(x1, x2)) do val
zygote_flatten(val[1], val[2])
end
x_vecs, x_backs = first.(x_vecs_and_backs), last.(x_vecs_and_backs)
lengths = map(_length, x_vecs)
sz = _cumsum(lengths)
function unflatten_to_Tuple(v)
map(x_backs, lengths, sz) do x_back, l, s
return x_back(v[s - l + 1:s])
end
end
return reduce(vcat, x_vecs), unflatten_to_Tuple
end
function zygote_flatten(x1, x2::Tangent)
zygote_flatten(x1, ntfromstruct(x2).backing)
end
function zygote_flatten(x1, x2::NamedTuple)
zygote_flatten(ntfromstruct(x1), x2)
end
function zygote_flatten(x1::NamedTuple, x2::NamedTuple)
x_vec, unflatten = zygote_flatten(values(x1), values(x2))
function unflatten_to_NamedTuple(v)
v_vec_vec = unflatten(v)
return NamedTuple{keys(x1)}(v_vec_vec)
end
return identity.(x_vec), unflatten_to_NamedTuple
end
function zygote_flatten(d1::AbstractDict, d2::AbstractDict, ks = collect(keys(d2)))
_d1 = OrderedDict(k => d1[k] for k in ks)
_d2 = OrderedDict(k => d2[k] for k in ks)
d_vec, unflatten = zygote_flatten(identity.(collect(values(_d1))), identity.(collect(values(_d2))))
function unflatten_to_Dict(v)
v_vec_vec = unflatten(v)
return OrderedDict(key => v_vec_vec[n] for (n, key) in enumerate(ks))
end
return identity.(d_vec), Unflatten(d1, unflatten_to_Dict)
end
function zygote_flatten(x1, x2)
v, un = zygote_flatten(ntfromstruct(x1), ntfromstruct(x2))
return identity.(v), Unflatten(x1, y -> structfromnt(typeof(x2), un(y)))
end
_length(x) = length(x)
_length(::Nothing) = 0
function ChainRulesCore.rrule(::typeof(flatten), x)
    d_vec, un = flatten(x)
    return (d_vec, un), Δ -> begin
        (NoTangent(), un(Δ[1]), NoTangent())
    end
end
function ChainRulesCore.rrule(::typeof(flatten), d::AbstractDict, ks)
    _d = OrderedDict(k => d[k] for k in ks)
    d_vec, un = flatten(_d, ks)
    return (d_vec, un), Δ -> begin
        (NoTangent(), un(Δ[1]), NoTangent())
    end
end
struct Unflatten{X, F} <: Function
x::X
unflatten::F
end
(f::Unflatten)(x) = f.unflatten(x)
_zero(x::Real) = zero(x)
_zero(x::AbstractArray) = _zero.(x)
_zero(x::AbstractDict) = Dict(keys(x) .=> map(_zero, values(x)))
_zero(x::NamedTuple) = map(_zero, x)
_zero(x::Tuple) = map(_zero, x)
_zero(x) = structfromnt(typeof(x), _zero(ntfromstruct(x)))
function _merge(d1::AbstractDict{K, V}, d2::AbstractDict) where {K, V}
_d = OrderedDict{K, V}(k => _zero(v) for (k, v) in d1)
return sort!(merge(_d, OrderedDict{K, V}(d2)))
end
function _merge(d1::Tuple, d2::Tangent)
return _merge.(d1, d2.backing)
end
_merge(::Any, d2) = d2
function ChainRulesCore.rrule(un::Unflatten, v)
    x = un(v)
    return x, Δ -> begin
        _Δ = _merge(x, Δ)
        return (NoTangent(), zygote_flatten(un.x, _Δ)[1])
    end
end
function flatten(::Nothing)
return Float64[], _ -> nothing
end
function flatten(::NoTangent)
return Float64[], _ -> NoTangent()
end
function flatten(::ZeroTangent)
return Float64[], _ -> ZeroTangent()
end
function flatten(::Tuple{})
return Float64[], _ -> ()
end
# a `nothing` or zero co-tangent flattens to a zero vector of the right length;
# the unflatten closure from `flatten(x)` is returned unchanged
function zygote_flatten(x, ::Nothing)
    v, un = flatten(x)
    return zero(v), un
end
function zygote_flatten(x, ::NoTangent)
    v, un = flatten(x)
    return zero(v), un
end
function zygote_flatten(x, ::ZeroTangent)
    v, un = flatten(x)
    return zero(v), un
end
function zygote_flatten(::Any, ::Tuple{})
return Float64[], _ -> ()
end
macro constructor(T)
return flatten_expr(T, T)
end
macro constructor(T, C)
return flatten_expr(T, C)
end
flatten_expr(T, C) = quote
function DifferentiableFlatten.flatten(x::$(esc(T)))
v, un = flatten(ntfromstruct(x))
return identity.(v), Unflatten(x, y -> structfromnt($(esc(C)), un(y)))
end
function DifferentiableFlatten.zygote_flatten(x1::$(esc(T)), x2::$(esc(T)))
v, un = zygote_flatten(ntfromstruct(x1), ntfromstruct(x2))
return identity.(v), Unflatten(x2, y -> structfromnt($(esc(C)), un(y)))
end
DifferentiableFlatten._zero(x::$(esc(T))) = structfromnt($(esc(C)), _zero(ntfromstruct(x)))
end
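# Usage sketch, mirroring this package's test suite: point `@constructor` at a
# struct type and a compatible constructor to make it flattenable.
#
#     struct MyStruct{T, T1, T2}
#         a::T1
#         b::T2
#     end
#     MyStruct(a, b) = MyStruct{typeof(a), typeof(a), typeof(b)}(a, b)
#     @constructor MyStruct MyStruct
#
#     v, unflatten = flatten(MyStruct(1.0, 2.0)) # v == [1.0, 2.0]
#     unflatten([3.0, 4.0]) isa MyStruct # true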
_cumsum(x) = cumsum(x)
# Zygote can return a sparse vector co-tangent
# even if the input is a vector. This is causing
# issues in the rrule definition of Unflatten
flatten(x::SparseVector) = flatten(Array(x))
function flatten(x::SparseMatrixCSC)
x_vec, from_vec = flatten(x.nzval)
Array_from_vec(x_vec) = SparseMatrixCSC(x.m, x.n, x.colptr, x.rowval, from_vec(x_vec))
return identity.(x_vec), Array_from_vec
end
# Zygote can return a sparse vector co-tangent
# even if the input is a vector. This is causing
# issues in the rrule definition of Unflatten
zygote_flatten(x1::SparseVector, x2::SparseVector) = zygote_flatten(Array(x1), Array(x2))
function zygote_flatten(x1::SparseMatrixCSC, x2::SparseMatrixCSC)
x_vec, from_vec = zygote_flatten(x1.nzval, x2.nzval)
Array_from_vec(x_vec) = SparseMatrixCSC(x1.m, x1.n, x1.colptr, x1.rowval, from_vec(x_vec))
return identity.(x_vec), Unflatten(x1, Array_from_vec)
end
@init @require JuMP="4076af6c-e467-56ae-b986-b466b2749572" begin
import .JuMP
@eval begin
function flatten(x::JuMP.Containers.DenseAxisArray)
x_vec, from_vec = flatten(vec(identity.(x.data)))
Array_from_vec(x_vec) = JuMP.Containers.DenseAxisArray(reshape(from_vec(x_vec), size(x)), axes(x)...)
return identity.(x_vec), Array_from_vec
end
function zygote_flatten(x1::JuMP.Containers.DenseAxisArray, x2::NamedTuple)
x_vec, from_vec = zygote_flatten(vec(identity.(x1.data)), vec(identity.(x2.data)))
Array_from_vec(x_vec) = JuMP.Containers.DenseAxisArray(reshape(from_vec(x_vec), size(x2)), axes(x2)...)
return identity.(x_vec), Array_from_vec
end
function zygote_flatten(x1::JuMP.Containers.DenseAxisArray, x2::JuMP.Containers.DenseAxisArray)
x_vec, from_vec = zygote_flatten(vec(identity.(x1.data)), vec(identity.(x2.data)))
Array_from_vec(x_vec) = JuMP.Containers.DenseAxisArray(reshape(from_vec(x_vec), size(x2)), axes(x2)...)
return identity.(x_vec), Array_from_vec
end
end
end
end
| DifferentiableFlatten | https://github.com/JuliaNonconvex/DifferentiableFlatten.jl.git |
|
[
"MIT"
] | 0.1.1 | f4dc2c1d994c7e2e602692a7dadd2ac79212c3a9 | code | 4019 | using DifferentiableFlatten: flatten, zygote_flatten, maybeflatten
using DifferentiableFlatten: DifferentiableFlatten, @constructor
using OrderedCollections, JuMP, Zygote, SparseArrays, LinearAlgebra, Test
using ChainRulesCore
struct SS
a
b
end
struct MyStruct{T, T1, T2}
a::T1
b::T2
end
MyStruct(a, b) = MyStruct{typeof(a), typeof(a), typeof(b)}(a, b)
@constructor MyStruct MyStruct
@testset "DifferentiableFlatten.jl" begin
xs = [
1.0,
[1.0],
[1.0, 2.0],
[1.0, Float64[1.0, 2.0]],
[1.0, (1.0, 2.0)],
[1.0, OrderedDict(1 => Float64[1.0, 2.0])],
[[1.0], OrderedDict(1 => Float64[1.0, 2.0])],
[(1.0,), [1.0,], OrderedDict(1 => Float64[1.0, 2.0])],
[1.0 1.0; 1.0 1.0],
rand(2, 2, 2),
[Float64[1.0, 2.0], Float64[3.0, 4.0]],
OrderedDict(1 => 1.0),
OrderedDict(1 => Float64[1.0]),
OrderedDict(1 => 1.0, 2 => Float64[2.0]),
OrderedDict(1 => 1.0, 2 => Float64[2.0], 3 => [Float64[1.0, 2.0], Float64[3.0, 4.0]]),
JuMP.Containers.DenseAxisArray(reshape(Float64[1.0, 1.0], (2,)), 1),
(1.0,),
(1.0, 2.0),
(1.0, (1.0, 2.0)),
(1.0, Float64[1.0, 2.0]),
(1.0, OrderedDict(1 => Float64[1.0, 2.0])),
([1.0], OrderedDict(1 => Float64[1.0, 2.0])),
((1.0,), [1.0,], OrderedDict(1 => Float64[1.0, 2.0])),
(a = 1.0,),
(a = 1.0, b = 2.0),
(a = 1.0, b = (1.0, 2.0)),
(a = 1.0, b = Float64[1.0, 2.0]),
(a = 1.0, b = OrderedDict(1 => Float64[1.0, 2.0])),
(a = [1.0], b = OrderedDict(1 => Float64[1.0, 2.0])),
(a = (1.0,), b = [1.0,], c = OrderedDict(1 => Float64[1.0, 2.0])),
sparsevec(Float64[1.0, 2.0], [1, 3], 10),
sparse([1, 2, 2, 3], [2, 3, 1, 4], Float64[1.0, 2.0, 3.0, 4.0], 10, 10),
SS(1.0, 2.0),
[SS(1.0, 2.0), 1.0],
MyStruct(1.0, 1.0),
]
for x in xs
@show x
xvec, unflatten = flatten(x)
@test x == unflatten(xvec)
J = Zygote.jacobian(xvec) do x
unflatten(x)
flatten(x)[1]
end[1]
@test logabsdet(J) == (0.0, 1.0)
xvec, unflatten = zygote_flatten(x, x)
@test x == unflatten(xvec)
J = Zygote.jacobian(xvec) do x
unflatten(x)
zygote_flatten(x, x)[1]
end[1]
@test logabsdet(J) == (0.0, 1.0)
if x isa Real
@test maybeflatten(x) == x
else
xvec, unflatten = maybeflatten(x)
@test x == unflatten(xvec)
end
@show DifferentiableFlatten._zero(x)
@test all(==(0), flatten(DifferentiableFlatten._zero(x))[1])
end
xvec, unflatten = zygote_flatten(SS(1.0, 2.0), Tangent{SS}(a = 1.0, b = 2.0))
@test unflatten(xvec) isa NamedTuple
xvec, unflatten = zygote_flatten(SS(1.0, 2.0), (a = 1.0, b = 2.0))
@test unflatten(xvec) isa NamedTuple
@test DifferentiableFlatten._length(nothing) == 0
@test DifferentiableFlatten._merge(
OrderedDict(:a => 1.0),
OrderedDict(:b => 2.0),
) == OrderedDict(:a => 0.0, :b => 2.0)
@test DifferentiableFlatten._merge(1, SS(1.0, 2.0)) == SS(1.0, 2.0)
x = OrderedDict(:a => 1.0)
@test DifferentiableFlatten._merge(
(1.0,),
Tangent{NamedTuple{(:b,), Tuple{Float64}}}(b = 1.0),
) == (ZeroTangent(),)
@test flatten(nothing)[1] == Float64[]
@test flatten(NoTangent())[1] == Float64[]
@test flatten(ZeroTangent())[1] == Float64[]
@test flatten(())[1] == Float64[]
@test zygote_flatten(1.0, nothing)[1] == [0.0]
@test zygote_flatten(1.0, NoTangent())[1] == [0.0]
@test zygote_flatten(1.0, ZeroTangent())[1] == [0.0]
@test zygote_flatten(1.0, ())[1] == Float64[]
x = JuMP.Containers.DenseAxisArray(reshape(Float64[1.0, 1.0], (2,)), 1)
@test zygote_flatten(x, (data = [1.0, 1.0],))[1] == [1.0, 1.0]
@test flatten(exp)[1] == Union{}[]
@test flatten(exp)[2]([]) === exp
end
| DifferentiableFlatten | https://github.com/JuliaNonconvex/DifferentiableFlatten.jl.git |
|
[
"MIT"
] | 0.1.1 | f4dc2c1d994c7e2e602692a7dadd2ac79212c3a9 | docs | 652 | # DifferentiableFlatten
[](https://github.com/JuliaNonconvex/DifferentiableFlatten.jl/actions)
[](https://codecov.io/gh/JuliaNonconvex/DifferentiableFlatten.jl)
This package provides a `flatten` function which flattens data structures into a vector of real numbers, together with an `unflatten` function that maps such a vector back to the original data structure. It was originally part of [NonconvexCore.jl](https://github.com/JuliaNonconvex/NonconvexCore.jl).
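A minimal example, taken from the `flatten` docstring:

```julia
using DifferentiableFlatten

x = (randn(5), 5.0, (a=5.0, b=randn(2, 3)))
v, unflatten = flatten(x)
x == unflatten(v) # true
```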
| DifferentiableFlatten | https://github.com/JuliaNonconvex/DifferentiableFlatten.jl.git |
|
[
"MIT"
] | 0.3.6 | 160d239f831cdc4dfbebc208fa21d961ba1b505a | code | 1727 | using DataConvenience
using DataFrames
df = DataFrame(col = rand(1_000_000), col1 = rand(1_000_000), col2 = rand(1_000_000))
fsort(df, :col) # sort by `:col`
fsort(df, [:col1, :col2]) # sort by `:col1` and `:col2`
fsort!(df, :col) # sort by `:col` # sort in-place by `:col`
fsort!(df, [:col1, :col2]) # sort in-place by `:col1` and `:col2`
df = DataFrame(col = rand(1_000_000), col1 = rand(1_000_000), col2 = rand(1_000_000))
using BenchmarkTools
fsort_1col = @belapsed fsort($df, :col) # sort by `:col`
fsort_2col = @belapsed fsort($df, [:col1, :col2]) # sort by `:col1` and `:col2`
sort_1col = @belapsed sort($df, :col) # sort by `:col`
sort_2col = @belapsed sort($df, [:col1, :col2]) # sort by `:col1` and `:col2`
using Plots
bar(["DataFrames.sort 1 col","DataFrames.sort 2 col2", "DataCon.sort 1 col","DataCon.sort 2 col2"],
[sort_1col, sort_2col, fsort_1col, fsort_2col],
title="DataFrames sort performance comparison",
label = "seconds")
using DataFrames
using CSV
df = DataFrame(a = rand(1_000_000), b = rand(Int8, 1_000_000), c = rand(Int8, 1_000_000))
filepath = tempname()*".csv"
CSV.write(filepath, df)
for chunk in CsvChunkIterator(filepath)
print(describe(chunk))
end
# read all columns as String
for chunk in CsvChunkIterator(filepath, type=String)
print(describe(chunk))
end
# read a three-column CSV where the column types are String, Int and Float32
for chunk in CsvChunkIterator(filepath, types=[String, Int, Float32])
print(describe(chunk))
end
@replicate 10 8
x = Vector{Union{Missing, Int}}(undef, 10_000_000)
cmx = count_missing(x) # this is faster
cmx2 = countmissing(x) # alias of count_missing
cimx = count(ismissing, x) # the generic way available in Base
cmx == cimx # true
| DataConvenience | https://github.com/xiaodaigh/DataConvenience.jl.git |
|
[
"MIT"
] | 0.3.6 | 160d239f831cdc4dfbebc208fa21d961ba1b505a | code | 188 | # Weave readme
using Pkg
Pkg.activate("readme-env")
#upcheck()
# Pkg.update()
using Weave
weave("README.jmd", out_path = :pwd, doctype = "github")
if false
tangle("README.jmd")
end
| DataConvenience | https://github.com/xiaodaigh/DataConvenience.jl.git |
|
[
"MIT"
] | 0.3.6 | 160d239f831cdc4dfbebc208fa21d961ba1b505a | code | 372 | using LinearAlgebra
export canonicalcor
"""
    canonicalcor(x::AbstractMatrix, y::AbstractMatrix)

Compute the first canonical correlation between the columns of `x` and the columns of `y`.
"""
function canonicalcor(x::AbstractMatrix, y::AbstractMatrix)
    ma = inv(cov(x))*cov(x, y)*inv(cov(y))*cov(y,x)
    mb = inv(cov(y))*cov(y, x)*inv(cov(x))*cov(x,y)
    evx = eigvecs(ma)
    evy = eigvecs(mb)
    # the eigenvectors for the largest eigenvalues give the first canonical pair
    abs(cor(x*evx[:, end], y*evy[:, end]))
end
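# Usage sketch (the random data below is illustrative, not a test):
#
#     x = randn(100, 3); y = randn(100, 4)
#     canonicalcor(x, y) # first canonical correlation, a value in [0, 1]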
| DataConvenience | https://github.com/xiaodaigh/DataConvenience.jl.git |
|
[
"MIT"
] | 0.3.6 | 160d239f831cdc4dfbebc208fa21d961ba1b505a | code | 934 | module DataConvenience
import WeakRefStrings:StringVector
using DataFrames: AbstractDataFrame, DataFrame, rename, dropmissing
using CategoricalArrays
using Statistics
using Missings: nonmissingtype
import Statistics:cor
export cor, dfcor, @replicate, StringVector
include("cate-arrays.jl")
include("CCA.jl")
include("janitor.jl")
include("dfcor.jl")
include("onehot.jl")
include("create-missing.jl")
include("read-csv-in-chunks.jl")
include("fsort-dataframes.jl")
include("fast-missing-count.jl")
include("sample.jl")
include("nest.jl")
include("shortstringify.jl")
# head(df::AbstractDataFrame) = first(df, 10)
#
# tail(df::AbstractDataFrame) = last(df, 10)
"""
@replicate n expr
Replicate the expression `n` times
## Example
```julia
using DataConvenience, Random
@replicate 10 randstring(8) # returns 10 random length 8 strings
```
"""
macro replicate(n, expr)
:([$(esc(expr)) for i=1:$(esc(n))])
end
end # module
| DataConvenience | https://github.com/xiaodaigh/DataConvenience.jl.git |
|
[
"MIT"
] | 0.3.6 | 160d239f831cdc4dfbebc208fa21d961ba1b505a | code | 643 | function parseDateTimeN(str)
date, mmn = split(str, '.')
date1, time1 = split(date,'T')
time2 = parse.(Int64, split(time1, ':'))
    mmn1 = mmn * "0"^(9 - length(mmn)) # right-pad the fractional part to 9 digits (nanoseconds)
rd = reverse(digits(parse(Int, mmn1), pad = 9))
t = reduce(vcat, [
time2,
parse(Int, reduce(*, string.(rd[1:3]))),
parse(Int, reduce(*, string.(rd[4:6]))),
parse(Int, reduce(*, string.(rd[7:9])))]
)
DateTimeN(Date(date1), Time(t...))
end
import Base:show
show(io::IO, dd::DateTimeN) = begin
print(io, dd.d)
print(io, dd.t)
end
DateTimeN(str::String) = parseDateTimeN(str)
| DataConvenience | https://github.com/xiaodaigh/DataConvenience.jl.git |
|
[
"MIT"
] | 0.3.6 | 160d239f831cdc4dfbebc208fa21d961ba1b505a | code | 1089 | ################################################################################
# convenient function for CategoricalArrays
################################################################################
import SortingLab:sorttwo!
using SortingLab
import StatsBase: rle
using CategoricalArrays
SortingLab.sorttwo!(x::CategoricalVector, y) = begin
SortingLab.sorttwo!(x.refs, y)
x, y
end
pooltype(::CategoricalPool{T,S}) where {T, S} = T,S
rle(x::CategoricalVector) = begin
refrle = rle(x.refs)
T,S = pooltype(x.pool)
(CategoricalArray{T, 1}(S.(refrle[1]), x.pool), refrle[2])
end
"""
StringVector(v::CategoricalVector{String})
Convert `v::CategoricalVector` efficiently to WeakRefStrings.StringVector
## Example
```julia
using DataFrames
a = categorical(["a","c", "a"])
a.refs
a.pool.index
# efficiently convert
sa = StringVector(a)
sa.buffer
sa.lengths
sa.offsets
```
"""
StringVector(v::CategoricalVector{S}) where S<:AbstractString = begin
sa = StringVector(v.pool.index)
StringVector{S}(sa.buffer, sa.offsets[v.refs], sa.lengths[v.refs])
end
| DataConvenience | https://github.com/xiaodaigh/DataConvenience.jl.git |
|
[
"MIT"
] | 0.3.6 | 160d239f831cdc4dfbebc208fa21d961ba1b505a | code | 502 | export create_missing!
using Missings: disallowmissing
"""
create_missing!(df, col::Symbol)
Create a new Boolean column recording where `col` is missing, then fill the missing values in `col`.
"""
create_missing!(df, col::Symbol; prefix="", suffix = "_missing") = begin
df[!, prefix*string(col)*suffix] = ismissing.(df[!, col])
if eltype(df[!, col]) <: Union{String, Missing}
df[!, col] .= disallowmissing.(coalesce.(df[!, col], "JULIA.MISSING"))
else
df[!, col] .= disallowmissing.(coalesce.(df[!, col], zero(eltype(df[!, col]))))
end
df
end
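# Usage sketch (column name is illustrative; see the body above for the fill rules):
#
#     df = DataFrame(a = [1, missing, 3])
#     create_missing!(df, :a)
#     # adds a Bool column `a_missing` marking where `a` was missing, and fills
#     # the missings in `a` with zero (or "JULIA.MISSING" for string columns)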
| DataConvenience | https://github.com/xiaodaigh/DataConvenience.jl.git |
|
[
"MIT"
] | 0.3.6 | 160d239f831cdc4dfbebc208fa21d961ba1b505a | code | 1707 | """
    cor(x::AbstractVector{Bool}, y::AbstractVector)

Compute the correlation between a `Bool` vector and another vector by converting the `Bool`s to `Int`s.
"""
Statistics.cor(x::AbstractVector{Bool}, y::AbstractVector) = cor(y, Int.(x))
Statistics.cor(x::AbstractVector{Union{Bool, Missing}}, y::AbstractVector) = cor(y, passmissing(Int).(x))
"""
    dfcor(df::AbstractDataFrame, cols1 = names(df), cols2 = names(df); verbose = false)

Compute correlations in a DataFrame between one set of columns `cols1` and another set `cols2`.
Correlations over the Cartesian product of `cols1` and `cols2` are computed; string columns are
skipped. Returns a tuple `(names1, names2, correlations)`.
"""
function dfcor(df::AbstractDataFrame, cols1::Vector{T} = names(df), cols2::Vector{T} = names(df); verbose=false) where {T}
    k = 1
    l1 = length(cols1)
    l2 = length(cols2)
    res = Vector{Float32}(undef, l1*l2)
    names1 = Vector{T}(undef, l1*l2)
    names2 = Vector{T}(undef, l1*l2)
    for i in 1:l1
        icol = df[!, cols1[i]]
        # skip string columns; correlation is only defined for numeric data
        if !(eltype(icol) >: String)
            # NOTE: this inner loop is deliberately serial; the shared counter `k`
            # (and the writes into `res`/`names1`/`names2` it indexes) would race
            # under `Threads.@threads`
            for j in 1:l2
                if !(eltype(df[!, cols2[j]]) >: String)
                    if verbose
                        println(k, " ", cols1[i], " ", cols2[j])
                    end
                    df2 = df[:, unique([cols1[i], cols2[j]])] |> dropmissing
                    if size(df2, 1) > 0
                        res[k] = cor(df2[!, cols1[i]], df2[!, cols2[j]])
                        names1[k] = cols1[i]
                        names2[k] = cols2[j]
                        k += 1
                    end
                end
            end
        end
    end
    (names1[1:k-1], names2[1:k-1], res[1:k-1])
end | DataConvenience | https://github.com/xiaodaigh/DataConvenience.jl.git |
|
[
"MIT"
] | 0.3.6 | 160d239f831cdc4dfbebc208fa21d961ba1b505a | code | 756 | export count_not_missing, count_missing, countmissing, countnotmissing
countmissing(args...) = count_missing(args...)
countnotmissing(args...) = count_not_missing(args...)
count_not_missing(x) = length(x) - count(ismissing, x)
count_missing(x) = count(ismissing, x)
function count_not_missing(::Type{S}, x::Vector{Union{T, Missing}}) where {S, T}
@assert isbitstype(T)
res = zero(S)
@inbounds for i in 1:length(x)
res += !ismissing(x[i])
end
res
end
count_not_missing(x::Vector{Union{T, Missing}}) where T = count_not_missing(Int, x)
count_missing(::Type{S}, x::Vector{Union{T, Missing}}) where {S, T} =
length(x) - count_not_missing(S, x)
count_missing(x::Vector{Union{T, Missing}}) where T = count_missing(Int, x)
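# Usage sketch:
#
#     x = [1, missing, 3, missing]
#     count_missing(x) # == 2, same result as count(ismissing, x) but faster
#     countnotmissing(x) # == 2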
| DataConvenience | https://github.com/xiaodaigh/DataConvenience.jl.git |
|
[
"MIT"
] | 0.3.6 | 160d239f831cdc4dfbebc208fa21d961ba1b505a | code | 1369 | export fsort, fsort!
using SortingLab: fsortperm
using Tables
if VERSION >= v"1.3.0"
    import Base.Threads: @spawn
else
    # stub so this file still loads on < Julia 1.3; `fsort!` falls back to the
    # single-threaded path there, so the stub is never actually invoked
    macro spawn(_)
        println("DataConvenience: multithreading does not work in < Julia 1.3")
    end
end
fsort(tbl, col::Symbol) = fsort(tbl, [col])
fsort!(tbl, col::Symbol) = fsort!(tbl, [col])
fsort(tbl, cols; threaded=true) = fsort!(copy(tbl), cols; threaded=threaded)
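# Usage sketch (mirrors the README; `fsort!` below dispatches to the threaded
# path on Julia >= 1.3):
#
#     df = DataFrame(col = rand(100), col1 = rand(100), col2 = rand(100))
#     fsort(df, :col) # returns a sorted copy
#     fsort!(df, [:col1, :col2]) # sorts in place by two columns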
function fsort!(tbl, cols; threaded=true)
@assert Tables.columnaccess(tbl)
if threaded && (VERSION >= v"1.3")
return _fsort_parallel!(tbl, cols)
else
return _fsort_single!(tbl, cols)
end
end
function _fsort_parallel!(tbl, cols)
@assert VERSION >= v"1.3"
for col in reverse(cols)
ordering = fsortperm(tbl[!, col])
channel_lock = Channel{Bool}(length(names(tbl)))
for c in names(tbl)
@spawn begin
v = tbl[!, c]
@inbounds v .= v[ordering]
put!(channel_lock, true)
end
end
for _ in names(tbl)
take!(channel_lock)
end
end
tbl
end
function _fsort_single!(tbl, cols)
for col in reverse(cols)
ordering = fsortperm(tbl[!, col])
for c in names(tbl)
v = tbl[!, c]
@inbounds v .= v[ordering]
end
end
tbl
end | DataConvenience | https://github.com/xiaodaigh/DataConvenience.jl.git |
|
[
"MIT"
] | 0.3.6 | 160d239f831cdc4dfbebc208fa21d961ba1b505a | code | 966 | import DataFrames: AbstractDataFrame
using DataFrames: rename!
export cleannames!, cleanname, renamedups!
"""
cleannames!(df::DataFrame)
Clean the column names of `df`, mimicking R's `janitor::clean_names`.
"""
# characters allowed in cleaned names: ASCII letters, underscore and digits
const ALLOWED_CHARS = vcat('A':'Z', 'a':'z', '_', '0':'9')
renamedups!(n::AbstractVector{Symbol}) = begin
# suffix duplicated names with `_1` to make them unique
d = Dict{Symbol, Bool}()
for (i, n1) in enumerate(n)
if haskey(d, n1)
n[i] = Symbol(string(n[i])*"_1")
d[n[i]] = true
else
d[n1] = true
end
end
n
end
cleanname(s) = begin
ss = string(s)
res = join([c in ALLOWED_CHARS ? c : '_' for c in ss])
    if res[1] in '0':'9' # names must not start with a digit
res = "x" * res
end
Symbol(res)
end
function cleannames!(df::AbstractDataFrame)
n = names(df)
cn = cleanname.(n)
cn = renamedups!(cn)
for p in Pair.(n, cn)
rename!(df, p)
end
df
end
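# Usage sketch (mirrors test/janitor.jl):
#
#     df = DataFrame(Symbol("ok-2") => 2:3, :ok2 => 2:3)
#     cleannames!(df) # column names become ok_2 and ok2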
| DataConvenience | https://github.com/xiaodaigh/DataConvenience.jl.git |
|
[
"MIT"
] | 0.3.6 | 160d239f831cdc4dfbebc208fa21d961ba1b505a | code | 391 | export nest, unnest
using DataFrames
function nest(df::AbstractDataFrame, by, out)
function _subdf_as_vec(sdf)
[sdf[!, Not(by)]]
end
res = combine(groupby(df, by), _subdf_as_vec)
rename!(res, names(res)[end]=>out)
res
end
function unnest(df, val)
tmp = [crossjoin(df[i:i, Not(val)], sdf) for (i, sdf) in enumerate(df[!, val])]
reduce(vcat, tmp)
end
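# Usage sketch (column names are illustrative):
#
#     df = DataFrame(g = [1, 1, 2], x = [10, 20, 30])
#     nested = nest(df, :g, :data) # one row per group, sub-DataFrames in :data
#     unnest(nested, :data) # back to one row per original record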
| DataConvenience | https://github.com/xiaodaigh/DataConvenience.jl.git |
|
[
"MIT"
] | 0.3.6 | 160d239f831cdc4dfbebc208fa21d961ba1b505a | code | 808 | export onehot, onehot!
using DataFrames: AbstractDataFrame
"""
onehot(df, col, cate = sort(unique(df[!, col])); outnames = Symbol.(:ohe_, cate))
onehot!(df, col, cate = sort(unique(df[!, col])); outnames = Symbol.(:ohe_, cate))
One-hot encode a column, adding one indicator column per category. Existing output columns are overwritten WITHOUT warning.
Arguments:
df - The DataFrame
col - The column to onehot encode
cate - The categories
"""
function onehot!(df::AbstractDataFrame, col, cate = sort(unique(df[!, col])); outnames = Symbol.(:ohe_, cate))
transform!(df, @. col => ByRow(isequal(cate)) .=> outnames)
end
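# Usage sketch:
#
#     df = DataFrame(species = ["a", "b", "a"])
#     onehot(df, :species) # adds Bool columns :ohe_a and :ohe_b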
function onehot(df::AbstractDataFrame, col, cate = sort(unique(df[!, col])); outnames = Symbol.(:ohe_, cate))
transform(df, @. col => ByRow(isequal(cate)) .=> outnames)
end | DataConvenience | https://github.com/xiaodaigh/DataConvenience.jl.git |
|
[
"MIT"
] | 0.3.6 | 160d239f831cdc4dfbebc208fa21d961ba1b505a | code | 348 | export filter
import DataFrames: filter
function filter(df::AbstractDataFrame, arg; kwargs...)
filter(arg, df; kwargs...)
end
if false
    using Pkg
    Pkg.activate("c:/git/DataConvenience")
    using DataFrames, DataConvenience
    using Lazy: @> # assumption: `@>` here is Lazy.jl's threading macro; it is not defined in this package
    df = DataFrame(a=1:3)
    filter(df, :a => ==(1))
    @> df begin
        filter(:a => ==(1))
    end
end | DataConvenience | https://github.com/xiaodaigh/DataConvenience.jl.git |
|
[
"MIT"
] | 0.3.6 | 160d239f831cdc4dfbebc208fa21d961ba1b505a | code | 2764 | export CsvChunkIterator
using CSV
using DataFrames: DataFrame, names
import Base: iterate, length, IteratorSize
using Base.Iterators
"""
CsvChunkIterator("path/to/file.csv")
Define a Chunking iterator on CSV file
"""
mutable struct CsvChunkIterator
file::IOStream
step::Int
column_headers::Union{Vector{String}, Vector{Symbol}}
csv_rows_params
function CsvChunkIterator(path::String, chunk_byte_size = 2^30; csv_rows_params...)
new(open(path, "r"), chunk_byte_size, Symbol[], csv_rows_params)
end
end
function Base.iterate(chunk_iterator::CsvChunkIterator)
first_read = position(chunk_iterator.file) == 0
bytes_read = read(chunk_iterator.file, chunk_iterator.step)
# try to find the newline character
# TODO you may not actually find the new line
last_newline_pos = findlast(x->x==UInt8('\n'), bytes_read)
# no more to be read
# if it's nothing then
if isnothing(last_newline_pos) & (length(bytes_read) == 0)
close(chunk_iterator.file)
return nothing
# have not found a new line, so continue
elseif isnothing(last_newline_pos)
if eof(chunk_iterator.file)
close(chunk_iterator.file)
# do nothing and go to CSV.read
last_newline_pos = length(bytes_read)
else
            # increase the step size by doubling it
chunk_iterator.step = 2chunk_iterator.step
# go back to the beginning
seek(chunk_iterator.file, position(chunk_iterator.file) - length(bytes_read))
return iterate(chunk_iterator)
end
end
if first_read
df =
CSV.read(
# It no longer requires wrapping by an IOBuffer
@view bytes_read[1:last_newline_pos]
, DataFrame;
chunk_iterator.csv_rows_params...
)
chunk_iterator.column_headers = names(df)
        # drop the `header` option from the saved params; subsequent chunks
        # pass `header=chunk_iterator.column_headers` explicitly
        c = chunk_iterator.csv_rows_params
        d = Dict(c)
        delete!(d, :header)
        chunk_iterator.csv_rows_params = (;d...)
else
df =
CSV.read(
@view bytes_read[1:last_newline_pos]
, DataFrame;
header=chunk_iterator.column_headers,
chunk_iterator.csv_rows_params...
)
end
new_pos = position(chunk_iterator.file) - (length(bytes_read) - last_newline_pos)
seek(chunk_iterator.file, new_pos)
return df, nothing
end
Base.iterate(chunk_iterator::CsvChunkIterator, _) = Base.iterate(chunk_iterator)
# this is needed for `[a for a in chunk_iterator]` to work properly
Base.IteratorSize(_::CsvChunkIterator) = Base.SizeUnknown()
| DataConvenience | https://github.com/xiaodaigh/DataConvenience.jl.git |
|
[
"MIT"
] | 0.3.6 | 160d239f831cdc4dfbebc208fa21d961ba1b505a | code | 941 | export sample
using DataFrames: AbstractDataFrame, nrow
import StatsBase: sample
using StatsBase
function StatsBase.sample(df::AbstractDataFrame, args...; kwargs...)
rows_sampled = sample(axes(df, 1), args...; kwargs...)
df[rows_sampled, :]
end
function StatsBase.sample(rng, df::AbstractDataFrame, args...; kwargs...)
rows_sampled = sample(rng, axes(df, 1), args...; kwargs...)
df[rows_sampled, :]
end
function StatsBase.sample(df::AbstractDataFrame, frac::T; kwargs...) where T <: Union{AbstractFloat, Rational}
@assert 0 <= frac <= 1
n = round(Int, nrow(df)*frac)
    rows_sampled = sample(axes(df, 1), n; kwargs...)
df[rows_sampled, :]
end
function StatsBase.sample(rng, df::AbstractDataFrame, frac::T; kwargs...) where T <: Union{AbstractFloat, Rational}
@assert 0 <= frac <= 1
n = round(Int, nrow(df)*frac)
rows_sampled = sample(rng, axes(df, 1), n; kwargs...)
df[rows_sampled, :]
end | DataConvenience | https://github.com/xiaodaigh/DataConvenience.jl.git |
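
# A quick usage sketch mirroring the test suite, kept out of execution like
# the demo block in filter.jl:
if false
    using DataFrames
    df = DataFrame(a = 1:100)
    sample(df, 10)   # 10 random rows
    sample(df, 0.1)  # 10% of rows
    sample(df, 1//7) # 1/7 of rows
end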
|
[
"MIT"
] | 0.3.6 | 160d239f831cdc4dfbebc208fa21d961ba1b505a | code | 559 | export short_stringify!
# Original code courtesy of Nils Gudat
using DataFrames: DataFrame, names
using ShortStrings: ShortString
using Missings: passmissing
using PooledArrays
# Functions to turn String columns into ShortStrings
function short_stringify(x::AbstractVector)
    # build a prototype value to pick the smallest ShortString type that fits the longest entry
    y = ShortString("a"^maximum(length.(skipmissing(x))))
    return passmissing(typeof(y)).(x)
end
function short_stringify!(df::DataFrame)
cols = unique([names(df, String); names(df, Union{Missing, String})])
    for c in cols
df[!, c] = PooledArray(short_stringify(df[!, c]))
end
return df
end
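
# A quick usage sketch (an illustration, kept out of execution):
if false
    df = DataFrame(s = ["ab", "abcd", "a"])
    short_stringify!(df) # :s becomes a PooledArray of ShortStrings
end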
| DataConvenience | https://github.com/xiaodaigh/DataConvenience.jl.git |
|
[
"MIT"
] | 0.3.6 | 160d239f831cdc4dfbebc208fa21d961ba1b505a | code | 357 | using Test
# using RCall
# @testset "DataConvenience.jl" begin
# for i in 1:100
# # Write your own tests here.
# x = rand(100, 5)
# y = rand(100, 5)
#
# @rput x
# @rput y
# R"""
# res = cancor(x,y)$cor[1]
# """
# @rget res
# @test res โ canonicalcor(x,y)
# end
# end
| DataConvenience | https://github.com/xiaodaigh/DataConvenience.jl.git |
|
[
"MIT"
] | 0.3.6 | 160d239f831cdc4dfbebc208fa21d961ba1b505a | code | 217 | # support for nanoseconds in dates
using Dates
struct DateTimeN
d::Date
t::Time
end
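# A minimal sketch of a parser so this script runs standalone; this is an
# assumption, NOT the package's actual implementation of parseDateTimeN.
function parseDateTimeN(str::AbstractString)
    datepart, timepart = split(str, 'T')
    hms, frac = occursin('.', timepart) ? split(timepart, '.') : (timepart, "")
    ns = isempty(frac) ? 0 : parse(Int, rpad(frac, 9, '0')) # pad fraction to nanoseconds
    DateTimeN(Date(datepart), Time(hms) + Nanosecond(ns))
end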
str = "2019-10-23T12:01:15.123456789"
parseDateTimeN(str)
parseDateTimeN( "2019-10-23T12:01:15.230")
parseDateTimeN(str)
| DataConvenience | https://github.com/xiaodaigh/DataConvenience.jl.git |
|
[
"MIT"
] | 0.3.6 | 160d239f831cdc4dfbebc208fa21d961ba1b505a | code | 88 | using DataConvenience
using DataFrames
df = DataFrame(A = 1:4, B = 2:2:8)
dfcor(df)
| DataConvenience | https://github.com/xiaodaigh/DataConvenience.jl.git |
|
[
"MIT"
] | 0.3.6 | 160d239f831cdc4dfbebc208fa21d961ba1b505a | code | 286 | using DataFrames
using Test
@testset "clean names " begin
df = DataFrame(ok = 2:3, ok2 = 2:3, ok3=2:3)
rename!(df, :ok => Symbol("ok-2"))
@test Symbol.(names(cleannames!(df))) == [:ok_2, :ok2, :ok3]
@test renamedups!([:ok, :ok_1, :ok_1]) == [:ok, :ok_1, :ok_1_1]
end
| DataConvenience | https://github.com/xiaodaigh/DataConvenience.jl.git |
|
[
"MIT"
] | 0.3.6 | 160d239f831cdc4dfbebc208fa21d961ba1b505a | code | 173 | using Test
using DataConvenience
x = Vector{Union{Missing, Int64}}(undef, 1_000_000)
@testset "count_missing" begin
@test count(ismissing, x) == count_missing(x)
end
| DataConvenience | https://github.com/xiaodaigh/DataConvenience.jl.git |
|
[
"MIT"
] | 0.3.6 | 160d239f831cdc4dfbebc208fa21d961ba1b505a | code | 202 | using DataFrames
using DataConvenience
df = DataFrame(
    a = rand(1:8, 1000),
    b = rand(1:8, 1000),
    c = rand(1:8, 1000),
)
nest(df, :a, :meh)
unnest(nest(df, :a, :meh), :meh)
| DataConvenience | https://github.com/xiaodaigh/DataConvenience.jl.git |
|
[
"MIT"
] | 0.3.6 | 160d239f831cdc4dfbebc208fa21d961ba1b505a | code | 1226 | using DataConvenience
using DataFrames
using CSV
using Test
using Random: randstring
@testset "read csv in chunks" begin
filepath = joinpath(tempdir(), "tmp-data-convenience-csv-chunking-test.csv")
M = 1000
str_base = [randstring(8) for i in 1:1_000]
@time df = DataFrame(int = rand(Int32, M), float=rand(M), str = rand(str_base, M))
@time CSV.write(filepath, df)
# read the file 100 bytes at a time
chunks = CsvChunkIterator(filepath, 100)
dfs = [DataFrame(chunk) for chunk in chunks]
made = reduce(vcat, dfs)
actual = CSV.read(filepath, DataFrame)
@test nrow(made) == nrow(actual)
@test ncol(made) == ncol(actual)
# read the file 500 bytes
chunks = CsvChunkIterator(filepath, 500)
dfs = [DataFrame(chunk) for chunk in chunks]
made = reduce(vcat, dfs)
@test nrow(made) == nrow(actual)
@test ncol(made) == ncol(actual)
    # read the file with the default chunk size (the whole file in one chunk here)
chunks = CsvChunkIterator(filepath)
dfs = [DataFrame(chunk) for chunk in chunks]
made = reduce(vcat, dfs)
@test nrow(made) == nrow(actual)
@test ncol(made) == ncol(actual)
collect(CsvChunkIterator(filepath; header=0))
end
| DataConvenience | https://github.com/xiaodaigh/DataConvenience.jl.git |
|
[
"MIT"
] | 0.3.6 | 160d239f831cdc4dfbebc208fa21d961ba1b505a | code | 269 | using DataConvenience
using Test
include("canonicalcor.jl")
include("janitor.jl")
include("read-csv-in-chunks.jl")
include("test-fsort-dataframes.jl")
include("missing.jl")
include("sample.jl")
@testset "DataConvenience.jl" begin
# Write your own tests here.
end
| DataConvenience | https://github.com/xiaodaigh/DataConvenience.jl.git |
|
[
"MIT"
] | 0.3.6 | 160d239f831cdc4dfbebc208fa21d961ba1b505a | code | 137 | using DataFrames
using DataConvenience
sample(DataFrame(a=1:100), 10)
sample(DataFrame(a=1:100), 0.1)
sample(DataFrame(a=1:100), 1//7) | DataConvenience | https://github.com/xiaodaigh/DataConvenience.jl.git |
|
[
"MIT"
] | 0.3.6 | 160d239f831cdc4dfbebc208fa21d961ba1b505a | code | 509 | using DataFrames
using Test
using Random: randstring
@testset "sort it" begin
M = 100_000
str_base = [randstring(8) for i in 1:1_000]
df = DataFrame(int = rand(Int32, M), float=rand(M), str = rand(str_base, M))
@time df1 = sort(df, :int);
@time df2 = fsort(df, :int);
@test df1 == df2
@test df != df2
@time df1 = sort(df, :str);
@time df2 = fsort(df, :str);
@test df1 == df2
@test df != df2
@time df1 = sort(df, [:str, :float]);
@time df2 = fsort(df, [:str, :float]);
@test df1 == df2
@test df != df2
end
| DataConvenience | https://github.com/xiaodaigh/DataConvenience.jl.git |
|
[
"MIT"
] | 0.3.6 | 160d239f831cdc4dfbebc208fa21d961ba1b505a | docs | 4319 | # DataConvenience
An eclectic collection of convenience functions for your data manipulation needs.
## Data
### Sampling with `sample`
You can conveniently sample a dataframe with the `sample` method
```
df = DataFrame(a=1:10)
# sample 10 rows
sample(df, 10)
```
```
# sample 10% of rows
sample(df, 0.1)
```
```
# sample 1/10 of rows
sample(df, 1//10)
```
### Faster sorting for DataFrames
You can sort `DataFrame`s (in ascending order only) faster than the `sort` function by using the `fsort` function. E.g.
```julia
using DataConvenience
using DataFrames
df = DataFrame(col = rand(1_000_000), col1 = rand(1_000_000), col2 = rand(1_000_000))
fsort(df, :col) # sort by `:col`
fsort(df, [:col1, :col2]) # sort by `:col1` and `:col2`
fsort!(df, :col) # sort in-place by `:col`
fsort!(df, [:col1, :col2]) # sort in-place by `:col1` and `:col2`
```
```julia
df = DataFrame(col = rand(1_000_000), col1 = rand(1_000_000), col2 = rand(1_000_000))
using BenchmarkTools
fsort_1col = @belapsed fsort($df, :col) # sort by `:col`
fsort_2col = @belapsed fsort($df, [:col1, :col2]) # sort by `:col1` and `:col2`
sort_1col = @belapsed sort($df, :col) # sort by `:col`
sort_2col = @belapsed sort($df, [:col1, :col2]) # sort by `:col1` and `:col2`
using Plots
bar(["DataFrames.sort 1 col","DataFrames.sort 2 col2", "DataCon.sort 1 col","DataCon.sort 2 col2"],
[sort_1col, sort_2col, fsort_1col, fsort_2col],
title="DataFrames sort performance comparison",
label = "seconds")
```
### Clean column names with `cleannames!`
Somewhat similar to R's `janitor::clean_names`: `cleannames!(df)` cleans the column names of a `DataFrame`.
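
For example (behaviour mirrored from the package's tests):

```
df = DataFrame(Symbol("ok-2") => 1:3)
cleannames!(df) # the column name becomes :ok_2
```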
### Nesting of `DataFrame`s
Sometimes, nesting is more convenient than using `GroupedDataFrame`s
```
using DataFrames
df = DataFrame(
    a = rand(1:8, 1000),
    b = rand(1:8, 1000),
    c = rand(1:8, 1000),
)
nested_df = nest(df, :a, :nested_df)
```
To unnest use `unnest(nested_df, :nested_df)`.
### One hot encoding
```
a = DataFrame(
player1 = ["a", "b", "c"],
player2 = ["d", "c", "a"]
)
# does not modify a
onehot(a, :player1)
# modifies a in place
onehot!(a, :player1)
```
### CSV Chunk Reader
You can read a CSV in chunks and apply logic to each chunk. The type of each column is inferred by `CSV.read`.
```julia
using DataFrames
using CSV
df = DataFrame(a = rand(1_000_000), b = rand(Int8, 1_000_000), c = rand(Int8, 1_000_000))
filepath = tempname()*".csv"
CSV.write(filepath, df)
for (i, chunk) in enumerate(CsvChunkIterator(filepath))
println(i)
print(describe(chunk))
end
```
The chunk iterator accepts `CSV.read` keyword parameters. The user can pass in `type` and `types` to dictate the types of each column, e.g.
```julia
# read all column as String
for (i, chunk) in enumerate(CsvChunkIterator(filepath, types=String))
println(i)
print(describe(chunk))
end
```
```julia
# read a three-column CSV where the column types are String, Int, Float32
for chunk in CsvChunkIterator(filepath, types=[String, Int, Float32])
print(describe(chunk))
end
```
**Note** The chunks MAY have different column types.
## Statistics & Correlations
### Canonical Correlation
The first component of Canonical Correlation.
```
x = rand(100, 5)
y = rand(100, 5)
canonicalcor(x, y)
```
### Correlation for `Bool`
`cor(x::Bool, y)` - allows you to treat `Bool` as 0/1 when computing correlation
### Correlation for `DataFrames`
`dfcor(df::AbstractDataFrame, cols1=names(df), cols2=names(df), verbose=false)`
Computes correlations in a DataFrame between a set of columns `cols1` and
another set `cols2`. The correlation is computed for every pair in the
Cartesian product of `cols1` and `cols2`.
## Miscellaneous
### `@replicate`
`@replicate times code` will run `code` `times` times, e.g.
```julia
@replicate 10 8
```
### StringVector
`StringVector(v::CategoricalVector{String})` - Converts `v::CategoricalVector` efficiently to `WeakRefStrings.StringVector`
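
A minimal sketch of usage (assuming `CategoricalArrays` is loaded):

```julia
using CategoricalArrays
cv = categorical(["a", "b", "a"])
sv = StringVector(cv) # a WeakRefStrings.StringVector
```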
### Faster count missing
There is a `count_missing` function
```julia
x = Vector{Union{Missing, Int}}(undef, 10_000_000)
cmx = count_missing(x) # this is faster
cmx2 = countmissing(x) # synonym of count_missing
cimx = count(ismissing, x) # the way available in Base
cmx == cimx # true
```
There is also the `count_non_missing` function and `countnonmissing` is its synonym.
| DataConvenience | https://github.com/xiaodaigh/DataConvenience.jl.git |
|
[
"MIT"
] | 0.3.6 | 160d239f831cdc4dfbebc208fa21d961ba1b505a | docs | 7111 | # DataConvenience
An eclectic collection of convenience functions for your data manipulation needs.
## Data
### Sampling with `sample`
You can conveniently sample a dataframe with the `sample` method
```
df = DataFrame(a=1:10)
# sample 10 rows
sample(df, 10)
```
```
# sample 10% of rows
sample(df, 0.1)
```
```
# sample 1/10 of rows
sample(df, 1//10)
```
### Faster sorting for DataFrames
You can sort `DataFrame`s (in ascending order only) faster than the `sort` function by using the `fsort` function. E.g.
```julia
using DataConvenience
using DataFrames
df = DataFrame(col = rand(1_000_000), col1 = rand(1_000_000), col2 = rand(1_000_000))
fsort(df, :col) # sort by `:col`
fsort(df, [:col1, :col2]) # sort by `:col1` and `:col2`
fsort!(df, :col) # sort in-place by `:col`
fsort!(df, [:col1, :col2]) # sort in-place by `:col1` and `:col2`
```
```
1000000×3 DataFrame
     Row │ col        col1        col2
         │ Float64    Float64     Float64
─────────┼───────────────────────────────────
       1 │ 0.46685    2.53832e-7  0.0374635
       2 │ 0.404717   4.47445e-7  0.267923
       3 │ 0.724972   1.04096e-6  0.665079
       4 │ 0.57888    1.70257e-6  0.404758
       5 │ 0.385235   2.39225e-6  0.0781073
       6 │ 0.800285   6.07543e-6  0.00295096
       7 │ 0.940843   6.69252e-6  0.704978
       8 │ 0.817557   8.0119e-6   0.574785
    ⋮    │     ⋮          ⋮           ⋮
  999994 │ 0.179524   0.999994    0.64448
  999995 │ 0.0100945  0.999994    0.953052
  999996 │ 0.214368   0.999995    0.224151
  999997 │ 0.3488     0.999996    0.91864
  999998 │ 0.930586   0.999997    0.894878
  999999 │ 0.0312132  0.999999    0.830381
 1000000 │ 0.752231   1.0         0.471916
                         999985 rows omitted
```
```julia
df = DataFrame(col = rand(1_000_000), col1 = rand(1_000_000), col2 = rand(1_000_000))
using BenchmarkTools
fsort_1col = @belapsed fsort($df, :col) # sort by `:col`
fsort_2col = @belapsed fsort($df, [:col1, :col2]) # sort by `:col1` and `:col2`
sort_1col = @belapsed sort($df, :col) # sort by `:col`
sort_2col = @belapsed sort($df, [:col1, :col2]) # sort by `:col1` and `:col2`
using Plots
bar(["DataFrames.sort 1 col","DataFrames.sort 2 col2", "DataCon.sort 1 col","DataCon.sort 2 col2"],
[sort_1col, sort_2col, fsort_1col, fsort_2col],
title="DataFrames sort performance comparison",
label = "seconds")
```

### Clean column names with `cleannames!`
Somewhat similar to R's `janitor::clean_names`: `cleannames!(df)` cleans the column names of a `DataFrame`.
### Nesting of `DataFrame`s
Sometimes, nesting is more convenient than using `GroupedDataFrame`s
```
using DataFrames
df = DataFrame(
    a = rand(1:8, 1000),
    b = rand(1:8, 1000),
    c = rand(1:8, 1000),
)
nested_df = nest(df, :a, :nested_df)
```
To unnest use `unnest(nested_df, :nested_df)`.
### One hot encoding
```
a = DataFrame(
player1 = ["a", "b", "c"],
player2 = ["d", "c", "a"]
)
# does not modify a
onehot(a, :player1)
# modifies a in place
onehot!(a, :player1)
```
### CSV Chunk Reader
You can read a CSV in chunks and apply logic to each chunk. The type of each column is inferred by `CSV.read`.
```julia
using DataFrames
using CSV
df = DataFrame(a = rand(1_000_000), b = rand(Int8, 1_000_000), c = rand(Int8, 1_000_000))
filepath = tempname()*".csv"
CSV.write(filepath, df)
for (i, chunk) in enumerate(CsvChunkIterator(filepath))
println(i)
print(describe(chunk))
end
```
```
1
3×7 DataFrame
 Row │ variable  mean       min         median    max       nmissing  eltype
     │ Symbol    Float64    Real        Float64   Real      Int64     DataType
─────┼─────────────────────────────────────────────────────────────────────────
   1 │ a          0.499738  4.36023e-8  0.499524  0.999999         0  Float64
   2 │ b         -0.469557  -128        0.0       127              0  Int64
   3 │ c         -0.547335  -128        -1.0      127              0  Int64
```
The chunk iterator accepts `CSV.read` keyword parameters. The user can pass in `type` and `types` to dictate the types of each column, e.g.
```julia
# read all column as String
for (i, chunk) in enumerate(CsvChunkIterator(filepath, types=String))
println(i)
print(describe(chunk))
end
```
```
1
3×7 DataFrame
 Row │ variable  mean     min                    median   max                   nmissing  eltype
     │ Symbol    Nothing  String                 Nothing  String                Int64     DataType
─────┼───────────────────────────────────────────────────────────────────────────────────────────
   1 │ a                  0.0001001901435260244           9.997666658245752e-5         0  String
   2 │ b                  -1                              99                           0  String
   3 │ c                  -1                              99                           0  String
```
```julia
# read a three-column CSV where the column types are String, Int, Float32
for chunk in CsvChunkIterator(filepath, types=[String, Int, Float32])
print(describe(chunk))
end
```
```
3×7 DataFrame
 Row │ variable  mean       min                    median  max                   nmissing  eltype
     │ Symbol    Union…     Any                    Union…  Any                   Int64     DataType
─────┼──────────────────────────────────────────────────────────────────────────────────────────────
   1 │ a                    0.0001001901435260244          9.997666658245752e-5         0  String
   2 │ b         -0.469557  -128                   0.0     127                          0  Int64
   3 │ c         -0.547335  -128.0                 -1.0    127.0                        0  Float32
```
**Note** The chunks MAY have different column types.
## Statistics & Correlations
### Canonical Correlation
The first component of Canonical Correlation.
```
x = rand(100, 5)
y = rand(100, 5)
canonicalcor(x, y)
```
### Correlation for `Bool`
`cor(x::Bool, y)` - allows you to treat `Bool` as 0/1 when computing correlation
### Correlation for `DataFrames`
`dfcor(df::AbstractDataFrame, cols1=names(df), cols2=names(df), verbose=false)`
Computes correlations in a DataFrame between a set of columns `cols1` and
another set `cols2`. The correlation is computed for every pair in the
Cartesian product of `cols1` and `cols2`.
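
A quick sketch of usage (hypothetical data):

```julia
using DataFrames, DataConvenience
df = DataFrame(a = rand(100), b = rand(100), c = rand(100))
dfcor(df, ["a", "b"], ["c"]) # correlations of a vs c and b vs c
```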
## Miscellaneous
### `@replicate`
`@replicate code times` will run `code` multiple times e.g.
```julia
@replicate 10 8
```
```
10-element Vector{Int64}:
8
8
8
8
8
8
8
8
8
8
```
### StringVector
`StringVector(v::CategoricalVector{String})` - Converts `v::CategoricalVector` efficiently to `WeakRefStrings.StringVector`
### Faster count missing
There is a `count_missing` function
```julia
x = Vector{Union{Missing, Int}}(undef, 10_000_000)
cmx = count_missing(x) # this is faster
cmx2 = countmissing(x) # synonym of count_missing
cimx = count(ismissing, x) # the way available in Base
cmx == cimx # true
```
```
true
```
There is also the `count_non_missing` function and `countnonmissing` is its synonym.
| DataConvenience | https://github.com/xiaodaigh/DataConvenience.jl.git |
|
[
"MIT"
] | 0.1.3 | cf74064e00ff51978e436f2ccab83779503e7adf | code | 193 | module Fluxperimental
using Flux
include("split_join.jl")
export Split, Join
include("train.jl")
export shinkansen!
include("chain.jl")
include("compact.jl")
end # module Fluxperimental
| Fluxperimental | https://github.com/FluxML/Fluxperimental.jl.git |
|
[
"MIT"
] | 0.1.3 | cf74064e00ff51978e436f2ccab83779503e7adf | code | 2207 |
import Flux: ChainRulesCore
# Some experiments with chain to start removing the need for recur to be mutable.
# As per the conversation in the recurrent network rework issue.
# The main difference from Flux's `_applychain` function is that we return a new
# chain, with any internal state updated, along with the output of applying x to the chain.
function apply(chain::Flux.Chain, x)
layers, out = _apply(chain.layers, x)
Flux.Chain(layers), out
end
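
# A minimal usage sketch (hypothetical layer sizes):
#   c = Flux.Chain(Flux.Dense(3 => 4), Flux.Dense(4 => 1))
#   new_c, y = apply(c, rand(Float32, 3, 1))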
function _apply(layers::NamedTuple{NMS, TPS}, x) where {NMS, TPS}
layers, out = _apply(Tuple(layers), x)
NamedTuple{NMS}(layers), out
end
# sequentially apply x through a vector of layers, collecting the updated layers
function _scan(layers::AbstractVector, x)
new_layers = typeof(layers)(undef, length(layers))
for (idx, f) in enumerate(layers)
new_layers[idx], x = _apply(f, x)
end
new_layers, x
end
# Reverse rule for _scan
# example pulled from https://github.com/mcabbott/Flux.jl/blob/chain_rrule/src/cuda/cuda.jl
function ChainRulesCore.rrule(cfg::ChainRulesCore.RuleConfig, ::typeof(_scan), layers, x)
duo = accumulate(layers; init=((nothing, x), nothing)) do ((pl, input), _), cur_layer
out, back = ChainRulesCore.rrule_via_ad(cfg, _apply, cur_layer, input)
end
outs = map(first, duo)
backs = map(last, duo)
function _scan_pullback(dy)
multi = accumulate(reverse(backs); init=(nothing, dy)) do (_, delta), back
dapply, dlayer, din = back(delta)
return dapply, (dlayer, din)
end
layergrads = reverse(map(first, multi))
xgrad = last(multi[end])
return (ChainRulesCore.NoTangent(), layergrads, xgrad)
end
return (map(first, outs), last(outs[end])), _scan_pullback
end
function _apply(layers::AbstractVector, x) # type-unstable path, helps compile times
_scan(layers, x)
end
# Generated function returns a tuple of args and the last output of the network.
@generated function _apply(layers::Tuple{Vararg{Any,N}}, x) where {N}
x_symbols = vcat(:x, [gensym() for _ in 1:N])
l_symbols = [gensym() for _ in 1:N]
calls = [:(($(l_symbols[i]), $(x_symbols[i+1])) = _apply(layers[$i], $(x_symbols[i]))) for i in 1:N]
push!(calls, :(return tuple($(l_symbols...)), $(x_symbols[end])))
Expr(:block, calls...)
end
_apply(layer, x) = layer, layer(x)
| Fluxperimental | https://github.com/FluxML/Fluxperimental.jl.git |
|
[
"MIT"
] | 0.1.3 | cf74064e00ff51978e436f2ccab83779503e7adf | code | 6144 | import Flux: _big_show
"""
@compact(forward::Function; name=nothing, parameters...)
Creates a layer by specifying some `parameters`, in the form of keywords,
and (usually as a `do` block) a function for the forward pass.
You may think of `@compact` as a specialized `let` block creating local variables
that are trainable in Flux.
Declared variable names may be used within the body of the `forward` function.
Here is a linear model:
```
r = @compact(w = rand(3)) do x
w .* x
end
r([1, 1, 1]) # x is set to [1, 1, 1].
```
Here is a linear model with bias and activation:
```
d = @compact(in=5, out=7, W=randn(out, in), b=zeros(out), act=relu) do x
y = W * x
act.(y .+ b)
end
d(ones(5, 10)) # 7×10 Matrix as output.
```
Finally, here is a simple MLP:
```
using Flux
n_in = 1
n_out = 1
nlayers = 3
model = @compact(
w1=Dense(n_in, 128),
w2=[Dense(128, 128) for i=1:nlayers],
w3=Dense(128, n_out),
act=relu
) do x
embed = act(w1(x))
for w in w2
embed = act(w(embed))
end
out = w3(embed)
return out
end
model(randn(n_in, 32)) # 1×32 Matrix as output.
```
We can train this model just like any `Chain`:
```
data = [([x], 2x-x^3) for x in -2:0.1f0:2]
optim = Flux.setup(Adam(), model)
for epoch in 1:1000
Flux.train!((m,x,y) -> (m(x) - y)^2, model, data, optim)
end
```
You may also specify a `name` for the model, which will
be used instead of the default printout, which gives a verbatim
representation of the code used to construct the model:
```
model = @compact(w=rand(3), name="Linear(3 => 1)") do x
sum(w .* x)
end
println(model) # "Linear(3 => 1)"
```
This can be useful when using `@compact` to hierarchically construct
complex models to be used inside a `Chain`.
"""
macro compact(fex, kwexs...)
# check input
Meta.isexpr(fex, :(->)) || error("expects a do block")
isempty(kwexs) && error("expects keyword arguments")
  all(ex -> Meta.isexpr(ex, (:kw,:(=))), kwexs) || error("expects only keyword arguments")
# check if user has named layer:
name = findfirst(ex -> ex.args[1] == :name, kwexs)
if name !== nothing && kwexs[name].args[2] !== nothing
length(kwexs) == 1 && error("expects keyword arguments")
name_str = kwexs[name].args[2]
# remove name from kwexs (a tuple)
kwexs = (kwexs[1:name-1]..., kwexs[name+1:end]...)
name = name_str
end
# make strings
layer = "@compact"
setup = NamedTuple(map(ex -> Symbol(string(ex.args[1])) => string(ex.args[2]), kwexs))
input = join(fex.args[1].args, ", ")
block = string(Base.remove_linenums!(fex).args[2])
# edit expressions
vars = map(ex -> ex.args[1], kwexs)
assigns = map(ex -> Expr(:(=), ex.args...), kwexs)
@gensym self
pushfirst!(fex.args[1].args, self)
addprefix!(fex, self, vars)
# assemble
return esc(quote
let
$(assigns...)
$CompactLayer($fex, $name, ($layer, $input, $block), $setup; $(vars...))
end
end)
end
function addprefix!(ex::Expr, self, vars)
for i = 1:length(ex.args)
if ex.args[i] in vars
ex.args[i] = :($self.$(ex.args[i]))
else
addprefix!(ex.args[i], self, vars)
end
end
end
addprefix!(not_ex, self, vars) = nothing
struct CompactLayer{F,NT1<:NamedTuple,NT2<:NamedTuple}
fun::F
name::Union{String,Nothing}
strings::NTuple{3,String}
setup_strings::NT1
variables::NT2
end
CompactLayer(f::Function, name::Union{String,Nothing}, str::Tuple, setup_str::NamedTuple; kw...) = CompactLayer(f, name, str, setup_str, NamedTuple(kw))
(m::CompactLayer)(x...) = m.fun(m.variables, x...)
CompactLayer(args...) = error("CompactLayer is meant to be constructed by the macro")
Flux.@functor CompactLayer
Flux._show_children(m::CompactLayer) = m.variables
function Base.show(io::IO, ::MIME"text/plain", m::CompactLayer)
if get(io, :typeinfo, nothing) === nothing # e.g., top level of REPL
Flux._big_show(io, m)
elseif !get(io, :compact, false) # e.g., printed inside a Vector, but not a matrix
Flux._layer_show(io, m)
else
show(io, m)
end
end
function Flux._big_show(io::IO, obj::CompactLayer, indent::Int=0, name=nothing)
setup_strings = obj.setup_strings
local_name = obj.name
has_explicit_name = local_name !== nothing
if has_explicit_name
if indent != 0 || length(Flux.params(obj)) <= 2
_just_show_params(io, local_name, obj, indent)
else # indent == 0
print(io, local_name)
Flux._big_finale(io, obj)
end
else # no name, so print normally
layer, input, block = obj.strings
pre, post = ("(", ")")
println(io, " "^indent, isnothing(name) ? "" : "$name = ", layer, pre)
for k in keys(obj.variables)
v = obj.variables[k]
if Flux._show_leaflike(v)
# If the value is a leaf, just print verbatim what the user wrote:
str = String(k) * " = " * setup_strings[k]
_just_show_params(io, str, v, indent+2)
else
Flux._big_show(io, v, indent+2, String(k))
end
end
if indent == 0 # i.e. this is the outermost container
print(io, rpad(post, 1))
else
print(io, " "^indent, post)
end
input != "" && print(io, " do ", input)
if block != ""
block_to_print = block[6:end]
# Increase indentation of block according to `indent`:
block_to_print = replace(block_to_print, r"\n" => "\n" * " "^(indent))
print(io, " ", block_to_print)
end
if indent == 0
Flux._big_finale(io, obj)
else
println(io, ",")
end
end
end
# Modified from src/layers/show.jl
function _just_show_params(io::IO, str::String, layer, indent::Int=0)
print(io, " "^indent, str, indent==0 ? "" : ",")
if !isempty(Flux.params(layer))
print(io, " "^max(2, (indent==0 ? 20 : 39) - indent - length(str)))
printstyled(io, "# ", Flux.underscorise(sum(length, Flux.params(layer))), " parameters"; color=:light_black)
nonparam = Flux._childarray_sum(length, layer) - sum(length, Flux.params(layer))
if nonparam > 0
printstyled(io, ", plus ", Flux.underscorise(nonparam), indent==0 ? " non-trainable" : ""; color=:light_black)
end
Flux._nan_show(io, Flux.params(layer))
end
indent==0 || println(io)
end
| Fluxperimental | https://github.com/FluxML/Fluxperimental.jl.git |
|
[
"MIT"
] | 0.1.3 | cf74064e00ff51978e436f2ccab83779503e7adf | code | 521 | #=
These layers are from
https://fluxml.ai/Flux.jl/stable/models/advanced/
=#
# custom split layer
struct Split{T}
paths::T
end
Split(paths...) = Split(paths)
Flux.@functor Split
(m::Split)(x::AbstractArray) = map(f -> f(x), m.paths)
# custom join layer
struct Join{T, F}
combine::F
paths::T
end
# allow Join(op, m1, m2, ...) as a constructor
Join(combine, paths...) = Join(combine, paths)
Flux.@functor Join
(m::Join)(xs::Tuple) = m.combine(map((f, x) -> f(x), m.paths, xs)...)
(m::Join)(xs...) = m(xs)
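
# A minimal usage sketch (hypothetical layer sizes):
#   model = Flux.Chain(Flux.Dense(4 => 8), Split(Flux.Dense(8 => 1), Flux.Dense(8 => 2)))
#   y1, y2 = model(rand(Float32, 4))                # Split returns one output per path
#   joiner = Join(vcat, Flux.Dense(1 => 3), Flux.Dense(2 => 3))
#   z = joiner(rand(Float32, 1), rand(Float32, 2))  # Join combines the path outputs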
| Fluxperimental | https://github.com/FluxML/Fluxperimental.jl.git |
|
[
"MIT"
] | 0.1.3 | cf74064e00ff51978e436f2ccab83779503e7adf | code | 2463 | using Flux: withgradient, DataLoader
using Optimisers: Optimisers
using ProgressMeter: ProgressMeter, Progress, next!
#=
This grew out of explicit-mode upgrade here:
https://github.com/FluxML/Flux.jl/pull/2082
=#
"""
shinkansen!(loss, model, data...; state, epochs=1, [batchsize, keywords...])
This is a re-design of `train!`:
* The loss function must accept the remaining arguments: `loss(model, data...)`
* The optimiser state from `setup` must be passed to the keyword `state`.
By default it calls `gradient(loss, model, data...)` just like that.
Same order as the arguments. If you specify `epochs = 100`, then it will do this 100 times.
But if you specify `batchsize = 32`, then it first makes `DataLoader(data...; batchsize)`,
and uses that to generate smaller arrays to feed to `gradient`.
All other keywords are passed to `DataLoader`, e.g. to shuffle batches.
Returns the loss from every call.
# Example
```
X = repeat(hcat(digits.(0:3, base=2, pad=2)...), 1, 32)
Y = Flux.onehotbatch(xor.(eachrow(X)...), 0:1)
model = Chain(Dense(2 => 3, sigmoid), BatchNorm(3), Dense(3 => 2))
state = Flux.setup(Adam(0.1, (0.7, 0.95)), model)
# state = Optimisers.setup(Optimisers.Adam(0.1, (0.7, 0.95)), model) # for now
shinkansen!(model, X, Y; state, epochs=100, batchsize=16, shuffle=true) do m, x, y
Flux.logitcrossentropy(m(x), y)
end
all((softmax(model(X)) .> 0.5) .== Y)
```
"""
function shinkansen!(loss::Function, model, data...; state, epochs=1, batchsize=nothing, kw...)
  if batchsize !== nothing
loader = DataLoader(data; batchsize, kw...)
losses = Vector{Float32}[]
prog = Progress(length(loader) * epochs)
for e in 1:epochs
eplosses = Float32[]
for (i,d) in enumerate(loader)
l, (g, _...) = withgradient(loss, model, d...)
isfinite(l) || error("loss is $l, on batch $i, epoch $epoch")
Optimisers.update!(state, model, g)
push!(eplosses, l)
next!(prog; showvalues=[(:epoch, e), (:loss, l)])
end
push!(losses, eplosses)
end
return allequal(size.(losses)) ? reduce(hcat, losses) : losses
else
losses = Float32[]
prog = Progress(epochs)
for e in 1:epochs
l, (g, _...) = withgradient(loss, model, data...)
isfinite(l) || error("loss is $l, on epoch $epoch")
Optimisers.update!(state, model, g)
push!(losses, l)
next!(prog; showvalues=[(:epoch, epoch), (:loss, l)])
end
return losses
end
end
| Fluxperimental | https://github.com/FluxML/Fluxperimental.jl.git |
|
[
"MIT"
] | 0.1.3 | cf74064e00ff51978e436f2ccab83779503e7adf | code | 2528 | # Checking if the two grad structures are equal. Simplifies tests below.
function _grads_equal(grads1, grads2)
if length(keys(grads1)) != length(keys(grads2))
return false
end
ret = true
for weights in keys(grads1)
if grads1[weights] isa AbstractArray
ret = ret && all(grads1[weights] .== grads2[weights])
elseif isnothing(grads1[weights])
ret = ret && isnothing(grads2[weights])
else
throw("Grad returned type $(typeof(grads1[weights]))")
end
end
return ret
end
@testset "Applying the Chain!" begin
@testset "Forward pass" begin
x = rand(Float32, 3, 1)
l1 = Flux.Dense(3, 4)
l2 = Flux.Dense(4, 1)
truth = l2(l1(x))
t_c = Flux.Chain(l1, l2) # tuple Chain
new_t_c, out = Fluxperimental.apply(t_c, x)
@test new_t_c[1] === l1 && new_t_c[2] === l2
@test all(out .== truth)
nt_c = Flux.Chain(l1=l1, l2=l2) # namedtuple Chain
new_nt_c, out = Fluxperimental.apply(nt_c, x)
@test new_nt_c[:l1] === l1 && new_nt_c[:l2] === l2
@test all(out .== truth)
v_c = Flux.Chain([l1, l2]) # vector Chain
new_v_c, out = Fluxperimental.apply(v_c, x)
@test new_v_c.layers[1] === l1 && new_v_c.layers[2] === l2
@test all(out .== truth)
end # @testset "Forward Pass"
@testset "Backward pass" begin
x = rand(Float32, 3, 1)
l1 = Flux.Dense(3, 4)
l2 = Flux.Dense(4, 1)
@test begin # Test Tuple Chain Gradients
t_c = Flux.Chain(l1, l2) # tuple Chain
grads_truth = Flux.gradient(Flux.params(t_c)) do
sum(t_c(x))
end
grads_tuple = Flux.gradient(Flux.params(t_c)) do
sum(Fluxperimental.apply(t_c, x)[end])
end
_grads_equal(grads_tuple, grads_truth)
end
@test begin # Test Named Tuple's Gradients
nt_c = Flux.Chain(l1=l1, l2=l2) # named tuple Chain
grads_truth = Flux.gradient(Flux.params(nt_c)) do
sum(nt_c(x))
end
grads_tuple = Flux.gradient(Flux.params(nt_c)) do
sum(Fluxperimental.apply(nt_c, x)[end])
end
_grads_equal(grads_tuple, grads_truth)
end
@test begin # Test Vector Gradient
      c = Flux.Chain([l1, l2]) # vector Chain
grads_truth = Flux.gradient(Flux.params(c)) do
sum(c(x))
end
grads_tuple = Flux.gradient(Flux.params(c)) do
sum(Fluxperimental.apply(c, x)[end])
end
_grads_equal(grads_tuple, grads_truth)
end
end # @testset "Backward Pass"
end # @testset "Applying the Chain!"
| Fluxperimental | https://github.com/FluxML/Fluxperimental.jl.git |
|
[
"MIT"
] | 0.1.3 | cf74064e00ff51978e436f2ccab83779503e7adf | code | 4742 | import Fluxperimental: @compact
# Strip both strings of spaces, and then test:
function similar_strings(s1, s2)
s1 = replace(s1, r"\s" => "")
s2 = replace(s2, r"\s" => "")
# We also remove any instances of, e.g.,
# 17.057 KiB (or any other number)
# because this depends on indentation in this file.
s1 = replace(s1, r"\d+\.\d+KiB" => "")
s2 = replace(s2, r"\d+\.\d+KiB" => "")
# Display any differences:
if s1 != s2
println(stderr, "s1: ", s1)
println(stderr, "s2: ", s2)
end
return s1 == s2
end
function get_model_string(model)
io = IOBuffer()
show(io, MIME"text/plain"(), model)
String(take!(io))
end
@testset "@compact" begin
r = @compact(w = [1, 5, 10]) do x
sum(w .* x)
end
@test Flux.params(r) == Flux.Params([[1, 5, 10]])
@test r([1, 1, 1]) == 1 + 5 + 10
@test r([1, 2, 3]) == 1 + 2 * 5 + 3 * 10
@test r(ones(3, 3)) == 3 * (1 + 5 + 10)
# Test gradients:
@test gradient(r, [1, 1, 1])[1] == [1, 5, 10]
d = @compact(in = 5, out = 7, W = randn(out, in), b = zeros(out), act = relu) do x
y = W * x
act.(y .+ b)
end
@test size.(Flux.params(d)) == [(7, 5), (7,)]
@test size(d(ones(5, 10))) == (7, 10)
@test all(d(randn(5, 10)) .>= 0)
# Test gradients:
y, โ = Flux.withgradient(Flux.params(d)) do
input = randn(5, 32)
desired_output = randn(7, 32)
prediction = d(input)
sum((prediction - desired_output) .^ 2)
end
@test typeof(y) == Float64
grads = โ.grads
@test typeof(grads) <: IdDict
@test length(grads) == 3
@test Set(size.(values(grads))) == Set([(7, 5), (), (7,)])
# MLP:
n_in = 1
n_out = 1
nlayers = 3
model = @compact(
w1 = Dense(n_in, 128),
w2 = [Dense(128, 128) for i = 1:nlayers],
w3 = Dense(128, n_out),
act = relu
) do x
embed = act(w1(x))
for w in w2
embed = act(w(embed))
end
out = w3(embed)
return out
end
@test size.(Flux.params(model)) == [
(128, 1),
(128,),
(128, 128),
(128,),
(128, 128),
(128,),
(128, 128),
(128,),
(1, 128),
(1,),
]
@test size(model(randn(n_in, 32))) == (1, 32)
# Test string representations:
model = @compact(w=Dense(32 => 32)) do x, y
tmp = sum(w(x))
return tmp + y
end
expected_string = """@compact(
w = Dense(32=>32), #1_056 parameters
) do x, y
tmp = sum(w(x))
return tmp + y
end"""
@test similar_strings(get_model_string(model), expected_string)
# Custom naming:
model = @compact(w=Dense(32, 32), name="Linear(...)") do x, y
tmp = sum(w(x))
return tmp + y
end
expected_string = "Linear(...) # 1_056 parameters"
@test similar_strings(get_model_string(model), expected_string)
# Hierarchical models should work too:
model1 = @compact(w1=Dense(32=>32, relu), w2=Dense(32=>32, relu)) do x
w2(w1(x))
end
model2 = @compact(w1=model1, w2=Dense(32=>32, relu)) do x
w2(w1(x))
end
expected_string = """@compact(
w1 = @compact(
w1 = Dense(32 => 32, relu), # 1_056 parameters
w2 = Dense(32 => 32, relu), # 1_056 parameters
) do x
w2(w1(x))
end,
w2 = Dense(32 => 32, relu), # 1_056 parameters
) do x
w2(w1(x))
end # Total: 6 arrays, 3_168 parameters, 13.271 KiB."""
@test similar_strings(get_model_string(model2), expected_string)
# With array params:
model = @compact(x=randn(32), w=Dense(32=>32)) do s
w(x .* s)
end
expected_string = """@compact(
x = randn(32), # 32 parameters
w = Dense(32 => 32), # 1_056 parameters
) do s
w(x .* s)
end # Total: 3 arrays, 1_088 parameters, 4.734 KiB."""
@test similar_strings(get_model_string(model), expected_string)
# Hierarchy with inner model named:
model = @compact(
w1=@compact(w1=randn(32, 32), name="Model(32)") do x
w1 * x
end,
w2=randn(32, 32),
w3=randn(32),
) do x
w2 * w1(x)
end
expected_string = """@compact(
Model(32), # 1_024 parameters
w2 = randn(32, 32), # 1_024 parameters
w3 = randn(32), # 32 parameters
) do x
w2 * w1(x)
end # Total: 3 arrays, 2_080 parameters, 17.089 KiB."""
@test similar_strings(get_model_string(model), expected_string)
# Hierarchy with outer model named:
model = @compact(
w1=@compact(w1=randn(32, 32)) do x
w1 * x
end,
w2=randn(32, 32),
w3=randn(32),
name="Model(32)"
) do x
w2 * w1(x)
end
expected_string = """Model(32) # Total: 3 arrays, 2_080 parameters, 17.057KiB."""
@test similar_strings(get_model_string(model), expected_string)
end
| Fluxperimental | https://github.com/FluxML/Fluxperimental.jl.git |
|
[
"MIT"
] | 0.1.3 | cf74064e00ff51978e436f2ccab83779503e7adf | code | 154 | using Test
using Flux, Fluxperimental
@testset "Fluxperimental.jl" begin
include("split_join.jl")
include("chain.jl")
include("compact.jl")
end
| Fluxperimental | https://github.com/FluxML/Fluxperimental.jl.git |