A Brief Analysis of the GPUImageView Source Code
Having some spare time recently, I decided to read through the GPUImageView source code. This analysis follows the static flow of setting a Bitmap on a GPUImageView and applying a filter to it, and uses that flow to explain how the library works.
Basic Usage
Using GPUImageView is simple:
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <jp.co.cyberagent.android.gpuimage.GPUImageView
        android:id="@+id/gpuImageView"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:layout_centerInParent="true"
        app:gpuimage_surface_type="texture_view" />
</RelativeLayout>
Simply declare a GPUImageView in the layout XML. The attribute app:gpuimage_surface_type="texture_view" tells it to use a TextureView as the surface that draws the image. Then, in code:
val bitmap = xxx // obtain a Bitmap somehow
val ratio = bitmap.width.toFloat() / bitmap.height.toFloat() // compute the aspect ratio
gpuImageView.setRatio(ratio) // fix the view's aspect ratio to the image's
gpuImageView.setImage(bitmap) // set the image source
gpuImageView.filter = GPUImageFilter() // set a filter
gpuImageView.requestRender() // call whenever the display needs to be redrawn
That's the whole flow: we set a bitmap and applied the default filter (which renders the original image unchanged).
GPUImageView
First, let's look at the definition of GPUImageView:
public class GPUImageView extends FrameLayout {
private int surfaceType = SURFACE_TYPE_SURFACE_VIEW;
private View surfaceView;
private View coverView;
private GPUImage gpuImage;
private boolean isShowLoading = true;
private GPUImageFilter filter;
public Size forceSize = null;
private float ratio = 0.0f;
public final static int RENDERMODE_WHEN_DIRTY = 0;
public final static int RENDERMODE_CONTINUOUSLY = 1;
public GPUImageView(Context context) {
super(context);
init(context, null);
}
public GPUImageView(Context context, AttributeSet attrs) {
super(context, attrs);
init(context, attrs);
}
private void init(Context context, AttributeSet attrs) {
GLUtil.init(context); // stores a reference to the current Context... wait, wouldn't this leak memory?
if (attrs != null) { // parse the XML attributes; there are only two
TypedArray a = context.getTheme().obtainStyledAttributes(attrs, R.styleable.GPUImageView, 0, 0);
try {
surfaceType = a.getInt(R.styleable.GPUImageView_gpuimage_surface_type, surfaceType); // surface type: SurfaceView or TextureView
isShowLoading = a.getBoolean(R.styleable.GPUImageView_gpuimage_show_loading, isShowLoading); // whether to show a loading indicator; doesn't seem to do much
} finally {
a.recycle();
}
}
gpuImage = new GPUImage(context); // create a GPUImage object and keep a reference to it
if (surfaceType == SURFACE_TYPE_TEXTURE_VIEW) {
surfaceView = new GPUImageGLTextureView(context, attrs); // create a GPUImageGLTextureView and keep a reference
gpuImage.setGLTextureView((GLTextureView) surfaceView); // let GPUImage hold a reference to it as well
} else {
surfaceView = new GPUImageGLSurfaceView(context, attrs); // create a GPUImageGLSurfaceView and keep a reference
gpuImage.setGLSurfaceView((GLSurfaceView) surfaceView); // let GPUImage hold a reference to it as well
}
addView(surfaceView); // add the created surfaceView to this FrameLayout
}
@Override
protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
if (ratio != 0.0f) { // adjust width and height according to ratio
int width = MeasureSpec.getSize(widthMeasureSpec);
int height = MeasureSpec.getSize(heightMeasureSpec);
int newHeight;
int newWidth;
if (width / ratio < height) {
newWidth = width;
newHeight = Math.round(width / ratio);
} else {
newHeight = height;
newWidth = Math.round(height * ratio);
}
int newWidthSpec = MeasureSpec.makeMeasureSpec(newWidth, MeasureSpec.EXACTLY);
int newHeightSpec = MeasureSpec.makeMeasureSpec(newHeight, MeasureSpec.EXACTLY);
super.onMeasure(newWidthSpec, newHeightSpec);
} else {
super.onMeasure(widthMeasureSpec, heightMeasureSpec);
}
}
}
GPUImageView itself is a FrameLayout that hosts a surface view internally and holds a reference to a GPUImage object. If a ratio is set, the measured width and height are adjusted to match it. During initialization it initializes the OpenGL utility class, which really just stores a Context reference, and chooses the surfaceView implementation based on the XML attribute configuration.
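A quick worked example of the ratio logic in onMeasure(): if the parent offers 1080 x 1920 and ratio is 1.5, then width / ratio = 720, which is less than 1920, so the view measures itself 1080 x 720. The aspect ratio wins over filling the parent.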
Then let's look at GPUImage:
public class GPUImage {
private final Context context;
private final GPUImageRenderer renderer;
/**
 * Instantiates a new GPUImage object.
 *
 * @param context the context
 */
public GPUImage(final Context context) {
if (!supportsOpenGLES2(context)) {
throw new IllegalStateException("OpenGL ES 2.0 is not supported on this phone.");
}
this.context = context;
filter = new GPUImageFilter(); // the default filter is GPUImageFilter, i.e. the unmodified image
renderer = new GPUImageRenderer(filter); // create a GPUImageRenderer object and keep a reference to it
}
}
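The supportsOpenGLES2() check is elided above; the standard way to perform it, and as far as I can tell what the library does, is to query ActivityManager. A minimal sketch:

import android.app.ActivityManager;
import android.content.Context;
import android.content.pm.ConfigurationInfo;

// Returns true when the device supports OpenGL ES 2.0 or newer.
private boolean supportsOpenGLES2(final Context context) {
    final ActivityManager activityManager =
            (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
    final ConfigurationInfo configurationInfo = activityManager.getDeviceConfigurationInfo();
    // reqGlEsVersion packs the version as 0xMMMMmmmm; 0x20000 means GLES 2.0
    return configurationInfo.reqGlEsVersion >= 0x20000;
}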
When GPUImage is initialized it defaults to the identity filter, GPUImageFilter, which shows the original image, and uses that filter to create a GPUImageRenderer object. We'll look at how filters are implemented later; for now, the Renderer:
public class GPUImageRenderer implements GLSurfaceView.Renderer, GLTextureView.Renderer, PreviewCallback {
public static final float CUBE[] = {
-1.0f, -1.0f,
1.0f, -1.0f,
-1.0f, 1.0f,
1.0f, 1.0f,
};
private GPUImageFilter filter;
private final FloatBuffer glCubeBuffer;
private final FloatBuffer glTextureBuffer;
private final Queue<Runnable> runOnDraw;
private final Queue<Runnable> runOnDrawEnd;
public GPUImageRenderer(final GPUImageFilter filter) {
this.filter = filter;
// two Runnable queues, i.e. task queues
runOnDraw = new LinkedList<>();
runOnDrawEnd = new LinkedList<>();
// vertex array and texture coordinate array
glCubeBuffer = ByteBuffer.allocateDirect(CUBE.length * 4)
.order(ByteOrder.nativeOrder())
.asFloatBuffer();
glCubeBuffer.put(CUBE).position(0);
glTextureBuffer = ByteBuffer.allocateDirect(TEXTURE_NO_ROTATION.length * 4)
.order(ByteOrder.nativeOrder())
.asFloatBuffer();
// coordinate rotation
setRotation(Rotation.NORMAL, false, false);
}
//...
}
The Renderer keeps a reference to the filter and owns two task queues, which, as the names suggest, hold work to be run during drawing. It also uses FloatBuffer (Java NIO) to create two coordinate arrays. The coordinates deserve a closer look, since several coordinate systems are involved. In Android, the screen has its origin at the top-left corner, with x pointing right and y pointing down, and every View uses the same convention. OpenGL has many coordinate concepts; the main one used here is the normalized device coordinate system, whose origin is at the center of the viewport, with x pointing right and y pointing up, just like the plane coordinate system from math class, and with values normalized to the range -1 to 1. There is also the texture coordinate system, whose origin is the bottom-left corner of the texture, with x right and y up and values in the range 0 to 1. For example, if the screen is 1080 pixels wide, a point at Android x-coordinate 540 sits at 540 / 1080 = 0.5 in texture coordinates (and at 0, the center, in normalized device coordinates). Displaying an image as an OpenGL texture therefore requires converting, and in particular vertically flipping, the coordinates: Android bitmaps store their rows from the top down, while texture coordinates start at the bottom-left.
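A tiny sketch of the two normalizations described above (the helper names are mine, purely for illustration):

// Hypothetical helpers: map an Android pixel coordinate (origin top-left,
// y pointing down) into the two OpenGL spaces discussed above.
public final class CoordMapping {

    // Texture coordinates: origin bottom-left, range 0..1, y flipped.
    static float[] toTexCoord(float px, float py, float w, float h) {
        return new float[]{px / w, 1f - py / h};
    }

    // Normalized device coordinates: origin center, range -1..1, y flipped.
    static float[] toNdc(float px, float py, float w, float h) {
        return new float[]{px / w * 2f - 1f, 1f - py / h * 2f};
    }
    // e.g. on a 1080-wide screen, px = 540 gives texture x = 0.5 and NDC x = 0.0
}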
Back to GPUImageView: surfaceView has two implementations, GPUImageGLTextureView and GPUImageGLSurfaceView. After the surfaceView object is created, GPUImage's setGLTextureView or setGLSurfaceView is called so that it too keeps a reference to the surfaceView:
/**
 * Sets the GLSurfaceView which will display the preview.
 *
 * @param view the GLSurfaceView
 */
public void setGLSurfaceView(final GLSurfaceView view) {
surfaceType = SURFACE_TYPE_SURFACE_VIEW;
glSurfaceView = view;
// some initial configuration
glSurfaceView.setEGLContextClientVersion(2);
glSurfaceView.setEGLConfigChooser(8, 8, 8, 8, 16, 0);
glSurfaceView.getHolder().setFormat(PixelFormat.RGBA_8888);
glSurfaceView.setRenderer(renderer); // let the surface hold a reference to the renderer
glSurfaceView.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);
glSurfaceView.requestRender();
}
/**
 * Sets the GLTextureView which will display the preview.
 *
 * @param view the GLTextureView
 */
public void setGLTextureView(final GLTextureView view) {
surfaceType = SURFACE_TYPE_TEXTURE_VIEW;
glTextureView = view;
// again, some initial configuration
glTextureView.setEGLContextClientVersion(2);
glTextureView.setEGLConfigChooser(8, 8, 8, 8, 16, 0);
glTextureView.setOpaque(false);
glTextureView.setRenderer(renderer); // likewise, hold a reference to the renderer
glTextureView.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);
glTextureView.requestRender();
}
The two methods do almost the same thing: store a reference to the view, apply some OpenGL configuration (client version 2, the color and depth buffer sizes, and so on), then call setRenderer() so the view also keeps a reference to the renderer object held by this GPUImage. Finally, the render mode is set to RENDERMODE_WHEN_DIRTY, meaning the surface is only redrawn when a frame is explicitly requested (the other mode, RENDERMODE_CONTINUOUSLY, redraws continuously and automatically), and requestRender() is called once to trigger an initial render.
At this point you may notice that the two surfaceView variants are handled almost identically; could they essentially be the same view? Let's look at their respective implementations:
private class GPUImageGLSurfaceView extends GLSurfaceView {
public GPUImageGLSurfaceView(Context context) {
super(context);
}
public GPUImageGLSurfaceView(Context context, AttributeSet attrs) {
super(context, attrs);
}
@Override
protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
if (forceSize != null) {
super.onMeasure(MeasureSpec.makeMeasureSpec(forceSize.width, MeasureSpec.EXACTLY),
MeasureSpec.makeMeasureSpec(forceSize.height, MeasureSpec.EXACTLY));
} else {
super.onMeasure(widthMeasureSpec, heightMeasureSpec);
}
}
}
private class GPUImageGLTextureView extends GLTextureView {
public GPUImageGLTextureView(Context context) {
super(context);
}
public GPUImageGLTextureView(Context context, AttributeSet attrs) {
super(context, attrs);
}
@Override
protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
if (forceSize != null) {
super.onMeasure(MeasureSpec.makeMeasureSpec(forceSize.width, MeasureSpec.EXACTLY),
MeasureSpec.makeMeasureSpec(forceSize.height, MeasureSpec.EXACTLY));
} else {
super.onMeasure(widthMeasureSpec, heightMeasureSpec);
}
}
}
GPUImageGLSurfaceView extends Android's native GLSurfaceView directly, while GPUImageGLTextureView extends GPUImage's own GLTextureView class, which in turn extends Android's native TextureView. GLTextureView's implementation is interesting in its own right: it is modeled almost entirely on GLSurfaceView. Going back to where the renderer is attached to GPUImage's view, taking setGLTextureView as the example, we can see it calls the texture view's setRenderer() method:
public void setRenderer(Renderer renderer) {
checkRenderThreadState();
if (eglConfigChooser == null) {
eglConfigChooser = new SimpleEGLConfigChooser(true);
}
if (eglContextFactory == null) {
eglContextFactory = new DefaultContextFactory();
}
if (eglWindowSurfaceFactory == null) {
eglWindowSurfaceFactory = new DefaultWindowSurfaceFactory();
}
this.renderer = renderer;
// create a GLThread (a thread) and call start() to run it
glThread = new GLThread(mThisWeakRef);
glThread.start();
}
With the configuration verified, it creates a GLThread object and calls its start() method. Presumably this is a long-running thread dedicated to drawing? Let's check the implementation:
static class GLThread extends Thread {
GLThread(WeakReference<GLTextureView> glTextureViewWeakRef) {
super();
width = 0;
height = 0;
requestRender = true;
renderMode = RENDERMODE_CONTINUOUSLY;
this.glTextureViewWeakRef = glTextureViewWeakRef;
}
@Override
public void run() {
setName("GLThread " + getId());
if (LOG_THREADS) {
Log.i("GLThread", "starting tid=" + getId());
}
try {
guardedRun(); // a run() wrapped in exception handling, hence "guarded"
} catch (InterruptedException e) {
// fall thru and exit normally
} finally {
glThreadManager.threadExiting(this);
}
}
}
mThisWeakRef is a weak reference to the view itself, and the thread's run() method uses try/catch/finally to call guardedRun() relatively safely:
private void guardedRun() throws InterruptedException {
eglHelper = new EglHelper(glTextureViewWeakRef);
haveEglContext = false;
haveEglSurface = false;
try {
GL10 gl = null;
boolean createEglContext = false;
boolean createEglSurface = false;
boolean createGlInterface = false;
boolean lostEglContext = false;
boolean sizeChanged = false;
boolean wantRenderNotification = false;
boolean doRenderNotification = false;
boolean askedToReleaseEglContext = false;
int w = 0;
int h = 0;
Runnable event = null;
while (true) {
synchronized (glThreadManager) {
while (true) {
if (shouldExit) {
return;
}
if (!eventQueue.isEmpty()) {
event = eventQueue.remove(0);
break;
}
// Update the pause state.
boolean pausing = false;
if (paused != requestPaused) {
pausing = requestPaused;
paused = requestPaused;
glThreadManager.notifyAll();
if (LOG_PAUSE_RESUME) {
Log.i("GLThread", "paused is now " + paused + " tid=" + getId());
}
}
// Do we need to give up the EGL context?
if (shouldReleaseEglContext) {
if (LOG_SURFACE) {
Log.i("GLThread", "releasing EGL context because asked to tid=" + getId());
}
stopEglSurfaceLocked();
stopEglContextLocked();
shouldReleaseEglContext = false;
askedToReleaseEglContext = true;
}
// Have we lost the EGL context?
if (lostEglContext) {
stopEglSurfaceLocked();
stopEglContextLocked();
lostEglContext = false;
}
// When pausing, release the EGL surface:
if (pausing && haveEglSurface) {
if (LOG_SURFACE) {
Log.i("GLThread", "releasing EGL surface because paused tid=" + getId());
}
stopEglSurfaceLocked();
}
// When pausing, optionally release the EGL Context:
if (pausing && haveEglContext) {
GLTextureView view = glTextureViewWeakRef.get();
boolean preserveEglContextOnPause =
view == null ? false : view.preserveEGLContextOnPause;
if (!preserveEglContextOnPause
|| glThreadManager.shouldReleaseEGLContextWhenPausing()) {
stopEglContextLocked();
if (LOG_SURFACE) {
Log.i("GLThread", "releasing EGL context because paused tid=" + getId());
}
}
}
// When pausing, optionally terminate EGL:
if (pausing) {
if (glThreadManager.shouldTerminateEGLWhenPausing()) {
eglHelper.finish();
if (LOG_SURFACE) {
Log.i("GLThread", "terminating EGL because paused tid=" + getId());
}
}
}
// Have we lost the TextureView surface?
if ((!hasSurface) && (!waitingForSurface)) {
if (LOG_SURFACE) {
Log.i("GLThread", "noticed textureView surface lost tid=" + getId());
}
if (haveEglSurface) {
stopEglSurfaceLocked();
}
waitingForSurface = true;
surfaceIsBad = false;
glThreadManager.notifyAll();
}
// Have we acquired the surface view surface?
if (hasSurface && waitingForSurface) {
if (LOG_SURFACE) {
Log.i("GLThread", "noticed textureView surface acquired tid=" + getId());
}
waitingForSurface = false;
glThreadManager.notifyAll();
}
if (doRenderNotification) {
if (LOG_SURFACE) {
Log.i("GLThread", "sending render notification tid=" + getId());
}
wantRenderNotification = false;
doRenderNotification = false;
renderComplete = true;
glThreadManager.notifyAll();
}
// Ready to draw?
if (readyToDraw()) {
// If we don't have an EGL context, try to acquire one.
if (!haveEglContext) {
if (askedToReleaseEglContext) {
askedToReleaseEglContext = false;
} else if (glThreadManager.tryAcquireEglContextLocked(this)) {
try {
eglHelper.start();
} catch (RuntimeException t) {
glThreadManager.releaseEglContextLocked(this);
throw t;
}
haveEglContext = true;
createEglContext = true;
glThreadManager.notifyAll();
}
}
if (haveEglContext && !haveEglSurface) {
haveEglSurface = true;
createEglSurface = true;
createGlInterface = true;
sizeChanged = true;
}
if (haveEglSurface) {
if (this.sizeChanged) {
sizeChanged = true;
w = width;
h = height;
wantRenderNotification = true;
if (LOG_SURFACE) {
Log.i("GLThread", "noticing that we want render notification tid=" + getId());
}
// Destroy and recreate the EGL surface.
createEglSurface = true;
this.sizeChanged = false;
}
requestRender = false;
glThreadManager.notifyAll();
break;
}
}
// By design, this is the only place in a GLThread thread where we wait().
if (LOG_THREADS) {
Log.i("GLThread", "waiting tid=" + getId() + " haveEglContext: " + haveEglContext
+ " haveEglSurface: " + haveEglSurface + " paused: " + paused + " hasSurface: "
+ hasSurface + " surfaceIsBad: " + surfaceIsBad + " waitingForSurface: "
+ waitingForSurface + " width: " + width + " height: " + height
+ " requestRender: " + requestRender + " renderMode: " + renderMode);
}
glThreadManager.wait();
}
} // end of synchronized(glThreadManager)
if (event != null) {
event.run();
event = null;
continue;
}
if (createEglSurface) {
if (LOG_SURFACE) {
Log.w("GLThread", "egl createSurface");
}
if (!eglHelper.createSurface()) {
synchronized (glThreadManager) {
surfaceIsBad = true;
glThreadManager.notifyAll();
}
continue;
}
createEglSurface = false;
}
if (createGlInterface) {
gl = (GL10) eglHelper.createGL();
glThreadManager.checkGLDriver(gl);
createGlInterface = false;
}
if (createEglContext) {
if (LOG_RENDERER) {
Log.w("GLThread", "onSurfaceCreated");
}
GLTextureView view = glTextureViewWeakRef.get();
if (view != null) {
view.renderer.onSurfaceCreated(gl, eglHelper.eglConfig);
}
createEglContext = false;
}
if (sizeChanged) {
if (LOG_RENDERER) {
Log.w("GLThread", "onSurfaceChanged(" + w + ", " + h + ")");
}
GLTextureView view = glTextureViewWeakRef.get();
if (view != null) {
view.renderer.onSurfaceChanged(gl, w, h);
}
sizeChanged = false;
}
if (LOG_RENDERER_DRAW_FRAME) {
Log.w("GLThread", "onDrawFrame tid=" + getId());
}
{
GLTextureView view = glTextureViewWeakRef.get();
if (view != null) {
view.renderer.onDrawFrame(gl);
}
}
int swapError = eglHelper.swap();
switch (swapError) {
case EGL10.EGL_SUCCESS:
break;
case EGL11.EGL_CONTEXT_LOST:
if (LOG_SURFACE) {
Log.i("GLThread", "egl context lost tid=" + getId());
}
lostEglContext = true;
break;
default:
// Other errors typically mean that the current surface is bad,
// probably because the TextureView surface has been destroyed,
// but we haven't been notified yet.
// Log the error to help developers understand why rendering stopped.
EglHelper.logEglErrorAsWarning("GLThread", "eglSwapBuffers", swapError);
synchronized (glThreadManager) {
surfaceIsBad = true;
glThreadManager.notifyAll();
}
break;
}
if (wantRenderNotification) {
doRenderNotification = true;
}
}
} finally {
/* * clean-up everything... */
synchronized (glThreadManager) {
stopEglSurfaceLocked();
stopEglContextLocked();
}
}
}
The code is long, so let's focus on a few points. The body is an infinite while (true) loop, and the key part is that it retrieves the current GLTextureView via GLTextureView view = glTextureViewWeakRef.get(); and calls its renderer's onSurfaceCreated, onSurfaceChanged, and onDrawFrame methods. In GPUImageRenderer, those three methods are implemented like this:
@Override
public void onSurfaceCreated(final GL10 unused, final EGLConfig config) {
GLES20.glClearColor(backgroundRed, backgroundGreen, backgroundBlue, 1);
GLES20.glDisable(GLES20.GL_DEPTH_TEST);
filter.ifNeedInit(); // call the filter's initialization method
}
@Override
public void onSurfaceChanged(final GL10 gl, final int width, final int height) {
outputWidth = width;
outputHeight = height;
GLES20.glViewport(0, 0, width, height); // set the drawing viewport of the surface
GLES20.glUseProgram(filter.getProgram());
filter.onOutputSizeChanged(width, height); // pass the current width and height to the filter
adjustImageScaling(); // fit the drawn texture (the image) to the output size; in practice this adjusts the texture coordinates
synchronized (surfaceChangedWaiter) {
surfaceChangedWaiter.notifyAll();
}
}
@Override
public void onDrawFrame(final GL10 gl) {
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
runAll(runOnDraw); // take the tasks out of the first queue and run them in turn
generateOesTexture(); // generate the texture; mainly obtains the current glTextureId, glCubeBuffer, glTextureBuffer
filter.onDraw(glTextureId, glCubeBuffer, glTextureBuffer); // apply the filter
runAll(runOnDrawEnd); // finally drain the second task queue
}
private void runAll(Queue<Runnable> queue) {
synchronized (queue) {
while (!queue.isEmpty()) {
queue.poll().run();
}
}
}
Earlier we noted that the Renderer has two drawing task queues; it is here, in onDrawFrame(), that the queued tasks are taken out and executed.
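This queue is how work from other threads is marshalled onto the GL thread: GL calls are only valid on the thread that owns the GL context, so methods called elsewhere (setImageBitmap() and setFilter(), as we'll see below) wrap their GL work in Runnables and let the next frame execute them. A minimal sketch of the pattern in isolation (the class name is mine):

import java.util.LinkedList;
import java.util.Queue;

// The marshalling pattern used by GPUImageRenderer: any thread enqueues,
// and only the GL thread, inside onDrawFrame(), drains and runs the work.
class GlTaskQueue {
    private final Queue<Runnable> tasks = new LinkedList<>();

    void post(Runnable r) { // safe to call from any thread
        synchronized (tasks) {
            tasks.add(r);
        }
    }

    void drain() { // called on the GL thread once per frame
        synchronized (tasks) {
            while (!tasks.isEmpty()) {
                tasks.poll().run();
            }
        }
    }
}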
setImage()
Next, let's look at the concrete flow of setting an image source and rendering it.
/**
 * Sets the image on which the filter should be applied.
 *
 * @param bitmap the new image
 */
public void setImage(final Bitmap bitmap) {
gpuImage.setImage(bitmap);
}

/**
 * Sets the image on which the filter should be applied from a Uri.
 *
 * @param uri the uri of the new image
 */
public void setImage(final Uri uri) {
gpuImage.setImage(uri);
}

/**
 * Sets the image on which the filter should be applied from a File.
 *
 * @param file the file of the new image
 */
public void setImage(final File file) {
gpuImage.setImage(file);
}
GPUImageView's setImage() calls GPUImage's setImage(), which has several overloads accepting different input types: a Bitmap, a Uri, or a File. The latter two build their own image-loading flow, and in the end still obtain a Bitmap:
/**
 * Sets the image on which the filter should be applied from a Uri.
 *
 * @param uri the uri of the new image
 */
public void setImage(final Uri uri) {
new LoadImageUriTask(this, uri).execute();
}

/**
 * Sets the image on which the filter should be applied from a File.
 *
 * @param file the file of the new image
 */
public void setImage(final File file) {
new LoadImageFileTask(this, file).execute();
}
Let's just follow the Bitmap overload:
/**
 * Sets the image on which the filter should be applied.
 *
 * @param bitmap the new image
 */
public void setImage(final Bitmap bitmap) {
currentBitmap = bitmap;
renderer.setImageBitmap(bitmap, false);
requestRender();
}
It keeps a reference to the bitmap, hands the bitmap to the renderer, and then explicitly requests one render. First, what does it do to the renderer?
public void setImageBitmap(final Bitmap bitmap, final boolean recycle) {
if (bitmap == null) {
return;
}
runOnDraw(new Runnable() {
@Override
public void run() {
Bitmap resizedBitmap = null;
if (bitmap.getWidth() % 2 == 1) {
resizedBitmap = Bitmap.createBitmap(bitmap.getWidth() + 1, bitmap.getHeight(),
Bitmap.Config.ARGB_8888);
resizedBitmap.setDensity(bitmap.getDensity());
Canvas can = new Canvas(resizedBitmap);
can.drawARGB(0x00, 0x00, 0x00, 0x00);
can.drawBitmap(bitmap, 0, 0, null);
addedPadding = 1;
} else {
addedPadding = 0;
}
glTextureId = OpenGlUtils.loadTexture(
resizedBitmap != null ? resizedBitmap : bitmap, glTextureId, recycle);
if (resizedBitmap != null) {
resizedBitmap.recycle();
}
imageWidth = bitmap.getWidth();
imageHeight = bitmap.getHeight();
adjustImageScaling();
}
});
}
protected void runOnDraw(final Runnable runnable) {
synchronized (runOnDraw) {
runOnDraw.add(runnable);
}
}
runOnDraw() wraps the pending work in a Runnable and appends it to the end of the drawing task queue. As for the work itself: it first checks the image width, and if it is odd, it creates a new bitmap one pixel wider than the original (same height) and draws the original into it with a Canvas, effectively padding the image to an even width (presumably to avoid texture row-alignment issues during upload). It then calls adjustImageScaling(), which we saw referenced earlier; it adapts the image to the output size, and applies rotation, by transforming the texture coordinates:
private void adjustImageScaling() {
float outputWidth = this.outputWidth;
float outputHeight = this.outputHeight;
if (rotation == Rotation.ROTATION_270 || rotation == Rotation.ROTATION_90) {
outputWidth = this.outputHeight;
outputHeight = this.outputWidth;
}
float ratio1 = outputWidth / imageWidth;
float ratio2 = outputHeight / imageHeight;
float ratioMax = Math.max(ratio1, ratio2);
int imageWidthNew = Math.round(imageWidth * ratioMax);
int imageHeightNew = Math.round(imageHeight * ratioMax);
float ratioWidth = imageWidthNew / outputWidth;
float ratioHeight = imageHeightNew / outputHeight;
float[] cube = CUBE;
float[] textureCords = TextureRotationUtil.getRotation(rotation, flipHorizontal, flipVertical);
if (scaleType == GPUImage.ScaleType.CENTER_CROP) {
float distHorizontal = (1 - 1 / ratioWidth) / 2;
float distVertical = (1 - 1 / ratioHeight) / 2;
textureCords = new float[]{
addDistance(textureCords[0], distHorizontal), addDistance(textureCords[1], distVertical),
addDistance(textureCords[2], distHorizontal), addDistance(textureCords[3], distVertical),
addDistance(textureCords[4], distHorizontal), addDistance(textureCords[5], distVertical),
addDistance(textureCords[6], distHorizontal), addDistance(textureCords[7], distVertical),
};
} else {
cube = new float[]{
CUBE[0] / ratioHeight, CUBE[1] / ratioWidth,
CUBE[2] / ratioHeight, CUBE[3] / ratioWidth,
CUBE[4] / ratioHeight, CUBE[5] / ratioWidth,
CUBE[6] / ratioHeight, CUBE[7] / ratioWidth,
};
}
glCubeBuffer.clear();
glCubeBuffer.put(cube).position(0);
glTextureBuffer.clear();
glTextureBuffer.put(textureCords).position(0);
}
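A worked example of the CENTER_CROP branch: with a 1080 x 1920 output and a 1000 x 1000 image, ratioMax = max(1080/1000, 1920/1000) = 1.92, so the scaled image is 1920 x 1920 and ratioWidth = 1920 / 1080 ≈ 1.778. Then distHorizontal = (1 - 1/1.778) / 2 ≈ 0.219 and distVertical = 0, so the texture coordinates are inset to roughly 0.219..0.781 horizontally: about 22% of the image is cropped off each side while the full height is shown.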
setFilter()
Next, applying a filter:
/**
 * Set the filter to be applied on the image.
 *
 * @param filter Filter that should be applied on the image.
 */
public void setFilter(GPUImageFilter filter) {
this.filter = filter;
gpuImage.setFilter(filter);
requestRender();
}
It first keeps a reference to the filter, then calls GPUImage's setFilter() to pass the filter along, and finally requests a refresh. GPUImage's setFilter() implementation:
/**
 * Sets the filter which should be applied to the image which was (or will
 * be) set by setImage(...).
 *
 * @param filter the new filter
 */
public void setFilter(final GPUImageFilter filter) {
this.filter = filter;
renderer.setFilter(this.filter);
requestRender();
}
In the same pattern, it ultimately ends up in the renderer:
public void setFilter(final GPUImageFilter filter) {
runOnDraw(new Runnable() {
@Override
public void run() {
final GPUImageFilter oldFilter = GPUImageRenderer.this.filter;
GPUImageRenderer.this.filter = filter;
if (oldFilter != null) {
oldFilter.destroy();
}
GPUImageRenderer.this.filter.ifNeedInit();
GLES20.glUseProgram(GPUImageRenderer.this.filter.getProgram());
GPUImageRenderer.this.filter.onOutputSizeChanged(outputWidth, outputHeight);
}
});
}
This enqueues a new drawing task. The task first grabs the filter currently held by the renderer, which is the one used for the previous draw. After swapping in the new filter, it calls destroy() on the old one to release the old filter's resources, then calls the new filter's initialization method and passes it the current output width and height.
GPUImageFilter
GPUImageFilter works by plain OpenGL ES rendering; the GPUImage framework essentially wraps it in the abstraction of a filter. All filters share a common base class, the basic filter GPUImageFilter. Its constructor looks like this:
public GPUImageFilter(final String vertexShader, final String fragmentShader) {
runOnDraw = new LinkedList<>();
this.vertexShader = vertexShader;
this.fragmentShader = fragmentShader;
}
It first creates a list of Runnable objects that is, as the name makes obvious, a task queue, and then accepts two strings: the filter's vertex shader and fragment shader. These are OpenGL ES concepts: in short, the vertex shader determines where each vertex is positioned, and the fragment shader determines how each pixel (fragment) at those positions is rendered. The shaders that ship with the default basic filter are:
public static final String NO_FILTER_VERTEX_SHADER = "" +
"attribute vec4 position;\n" +
"attribute vec4 inputTextureCoordinate;\n" +
" \n" +
"varying vec2 textureCoordinate;\n" +
" \n" +
"void main()\n" +
"{\n" +
" gl_Position = position;\n" +
" textureCoordinate = inputTextureCoordinate.xy;\n" +
"}";
public static final String NO_FILTER_FRAGMENT_SHADER = "" +
"varying highp vec2 textureCoordinate;\n" +
" \n" +
"uniform sampler2D inputImageTexture;\n" +
" \n" +
"void main()\n" +
"{\n" +
" gl_FragColor = texture2D(inputImageTexture, textureCoordinate);\n" +
"}";
The default vertex shader simply maps positions through one-to-one, and the fragment shader renders the texture's texels to the screen without any processing. From the earlier analysis, filter initialization happens when the renderer calls GPUImageRenderer.this.filter.ifNeedInit(); here is the concrete implementation:
private final void init() {
onInit();
onInitialized();
}
public void onInit() {
glProgId = OpenGlUtils.loadProgram(vertexShader, fragmentShader);
glAttribPosition = GLES20.glGetAttribLocation(glProgId, "position");
glUniformTexture = GLES20.glGetUniformLocation(glProgId, "inputImageTexture");
glAttribTextureCoordinate = GLES20.glGetAttribLocation(glProgId, "inputTextureCoordinate");
isInitialized = true;
}
public void onInitialized() {
}
public void ifNeedInit() {
if (!isInitialized) init();
}
The concrete initialization happens in onInit(), after which an empty onInitialized() method is called; this hook exists so that we can extend it when subclassing to implement our own filters. The initialization itself is standard OpenGL ES work: create a shader program from the given vertex and fragment shaders and save its id, and look up the uniform and attribute locations.
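To make the hooks concrete, here is a sketch of a custom filter, modeled loosely on the library's own brightness filter; treat the shader and the setFloat() helper as illustrative of the GPUImageFilter conventions rather than exact library code:

import android.opengl.GLES20;

// A hypothetical brightness filter: onInit() caches the uniform location,
// and onInitialized() pushes the initial value once the program exists.
public class MyBrightnessFilter extends GPUImageFilter {
    public static final String BRIGHTNESS_FRAGMENT_SHADER = "" +
            "varying highp vec2 textureCoordinate;\n" +
            "uniform sampler2D inputImageTexture;\n" +
            "uniform lowp float brightness;\n" +
            "void main()\n" +
            "{\n" +
            "    lowp vec4 color = texture2D(inputImageTexture, textureCoordinate);\n" +
            "    gl_FragColor = vec4(color.rgb + vec3(brightness), color.a);\n" +
            "}";

    private int brightnessLocation;
    private float brightness;

    public MyBrightnessFilter(final float brightness) {
        super(NO_FILTER_VERTEX_SHADER, BRIGHTNESS_FRAGMENT_SHADER);
        this.brightness = brightness;
    }

    @Override
    public void onInit() {
        super.onInit();
        brightnessLocation = GLES20.glGetUniformLocation(getProgram(), "brightness");
    }

    @Override
    public void onInitialized() {
        super.onInitialized();
        setBrightness(brightness); // safe now: the shader program exists
    }

    public void setBrightness(final float brightness) {
        this.brightness = brightness;
        // setFloat() enqueues the glUniform1f call onto the filter's own
        // runOnDraw queue, so it runs on the GL thread during the next draw
        setFloat(brightnessLocation, brightness);
    }
}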
The actual filter rendering happens in the onDraw() method:
public void onDraw(final int textureId, final FloatBuffer cubeBuffer, final FloatBuffer textureBuffer) {
// this program id is the id of the shader program created earlier from the given shader source
GLES20.glUseProgram(glProgId);
// drain this filter's own task queue
runPendingOnDrawTasks();
if (!isInitialized) {
return;
}
// set the vertex and texture coordinate data
cubeBuffer.position(0);
GLES20.glVertexAttribPointer(glAttribPosition, 2, GLES20.GL_FLOAT, false, 0, cubeBuffer);
GLES20.glEnableVertexAttribArray(glAttribPosition);
textureBuffer.position(0);
GLES20.glVertexAttribPointer(glAttribTextureCoordinate, 2, GLES20.GL_FLOAT, false, 0, textureBuffer);
GLES20.glEnableVertexAttribArray(glAttribTextureCoordinate);
// if (textureId != OpenGlUtils.NO_TEXTURE) {
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
GLES20.glUniform1i(glUniformTexture, 0);
// }
// an empty method, provided for subclasses to extend
onDrawArraysPre();
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
GLES20.glDisableVertexAttribArray(glAttribPosition);
GLES20.glDisableVertexAttribArray(glAttribTextureCoordinate);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 0);
}
protected void onDrawArraysPre() {
}
protected void runPendingOnDrawTasks() {
synchronized (runOnDraw) {
while (!runOnDraw.isEmpty()) {
runOnDraw.removeFirst().run();
}
}
}
During rendering, the filter first executes any pending tasks in its queue, then sets the vertex and texture coordinate data and binds the input texture. Right before the final glDrawArrays() call it invokes an empty method, onDrawArraysPre(), provided so custom filters can extend the behavior. The draw itself is the usual OpenGL ES pipeline: the bound texture is sampled by the fragment shader and the result is written to the current render target.
GPUImageFilterGroup
GPUImageFilterGroup is to GPUImageFilter roughly what ViewGroup is to View in Android. GPUImageFilter is the base class of all filters, and each concrete filter applies a single effect, such as a contrast, brightness, or saturation adjustment. GPUImageFilterGroup extends GPUImageFilter, yet acts like a container: it can hold a group of GPUImageFilters and apply all of their effects together. So it additionally provides a set of container-style methods for managing filters:
public void addFilter(GPUImageFilter aFilter) {
if (aFilter == null) {
return;
}
filters.add(aFilter);
updateMergedFilters();
}
public void updateMergedFilters() {
if (filters == null) {
return;
}
if (mergedFilters == null) {
mergedFilters = new ArrayList<>();
} else {
mergedFilters.clear();
}
List<GPUImageFilter> filters;
for (GPUImageFilter filter : this.filters) {
if (filter instanceof GPUImageFilterGroup) {
((GPUImageFilterGroup) filter).updateMergedFilters();
filters = ((GPUImageFilterGroup) filter).getMergedFilters();
if (filters == null || filters.isEmpty())
continue;
mergedFilters.addAll(filters);
continue;
}
mergedFilters.add(filter);
}
}
addFilter adds a filter to the group; the added filter initially just goes into the filters list. updateMergedFilters() then takes every filter out of filters one by one and, in a flatMap-like operation, extracts each filter, and the children of any filter that is itself a group, into another list, mergedFilters. Here the View/ViewGroup flavor shows again.
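As a usage example, composing filters with a group might look like this (the filter classes are from the library; the parameter values are arbitrary):

// Chain two single-effect filters; the group renders them in order.
GPUImageFilterGroup group = new GPUImageFilterGroup();
group.addFilter(new GPUImageContrastFilter(1.5f));   // boost contrast first
group.addFilter(new GPUImageBrightnessFilter(0.1f)); // then brighten slightly
gpuImageView.setFilter(group); // setFilter() already triggers a render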
Rendering a GPUImageFilterGroup relies on framebuffer objects (FBOs). In short, a single filter renders its result straight onto the screen (the Surface), but a group obviously needs several filters to render in sequence. So the group takes each individual filter in turn, renders one pass with it, and keeps the result in a framebuffer; only after the last filter has rendered is the final result handed to the Surface.
@Override
public void onOutputSizeChanged(final int width, final int height) {
super.onOutputSizeChanged(width, height);
if (frameBuffers != null) {
destroyFramebuffers();
}
int size = filters.size();
for (int i = 0; i < size; i++) {
filters.get(i).onOutputSizeChanged(width, height);
}
if (mergedFilters != null && mergedFilters.size() > 0) {
size = mergedFilters.size();
// create the framebuffer array and a texture id for each framebuffer
frameBuffers = new int[size - 1];
frameBufferTextures = new int[size - 1];
for (int i = 0; i < size - 1; i++) {
// generate a framebuffer
GLES20.glGenFramebuffers(1, frameBuffers, i);
// generate the matching texture
GLES20.glGenTextures(1, frameBufferTextures, i);
// bind the texture to GL_TEXTURE_2D for the following calls
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, frameBufferTextures[i]);
// allocate empty texture storage at the output size
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height, 0,
GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
// the usual filtering and wrapping parameters
GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D,
GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D,
GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D,
GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D,
GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
// bind the FBO
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, frameBuffers[i]);
// attach the texture to this FBO as its color attachment
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
GLES20.GL_TEXTURE_2D, frameBufferTextures[i], 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 0);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
}
}
}
First, in onOutputSizeChanged(), frameBuffers and frameBufferTextures are created: these are our framebuffers and their corresponding texture arrays. Notice their size is exactly the total filter count minus one, because the last filter needs no buffer and renders directly. For example, with three filters A, B, and C: A renders first and its result is stored in buffer 1; buffer 1 then feeds filter B, whose result is stored in buffer 2; buffer 2 feeds filter C, whose result goes straight to the screen (the Surface).
public void onDraw(final int textureId, final FloatBuffer cubeBuffer, final FloatBuffer textureBuffer) {
runPendingOnDrawTasks();
if (!isInitialized() || frameBuffers == null || frameBufferTextures == null) {
return;
}
if (mergedFilters != null) {
int size = mergedFilters.size();
int previousTexture = textureId;
for (int i = 0; i < size; i++) {
if (i >= mergedFilters.size()) {
break;
}
GPUImageFilter filter = mergedFilters.get(i);
boolean isNotLast = i < size - 1;
if (isNotLast) {
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, frameBuffers[i]);
GLES20.glClearColor(0, 0, 0, 0);
}
if (i == 0) {
filter.onDraw(previousTexture, cubeBuffer, textureBuffer);
} else {
filter.onDraw(previousTexture, glCubeBuffer, glTextureFlipBuffer);
}
if (isNotLast) {
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
previousTexture = frameBufferTextures[i];
}
}
}
}
In onDraw(), each individual filter is taken in turn: its framebuffer is bound first (unless it is the last filter), and the texture id of the previous framebuffer, i.e. the result of the previous pass, is used as the input for this pass, whose result lands in the current framebuffer.
That concludes this brief analysis of the GPUImageView source code. Thanks for reading.