For a business requirement, we needed to send over 100 MB of data from a Java backend to the frontend for rendering, and the server-to-frontend transfer was not allowed to add more than 0.5 seconds.
The solutions I found online did not help much, so I am sharing my own optimization experience here.
Contents
1. Tomcat's compression mechanism
Tomcat ships with a built-in compression mechanism that can shrink the payload to roughly 20% of its original size, which helps bandwidth a great deal. It does little for speed, though: with a large payload (~100 MB), transfer time showed no clear improvement and sometimes got worse, because compressing and decompressing the data takes time of its own. So this option was rejected.
Reference: https://www.cnblogs.com/DDgougou/p/8675504.html
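For completeness, enabling this mechanism is just a matter of attributes on the HTTP connector in conf/server.xml. The values below are illustrative only; note that the MIME-type attribute is spelled compressableMimeType in older Tomcat versions and compressibleMimeType in newer ones:

```xml
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           compression="on"
           compressionMinSize="2048"
           compressibleMimeType="application/json,text/html,text/plain"
           redirectPort="8443" />
```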
2. Java filter compression
A Java servlet filter can apply essentially the same compression as Tomcat's built-in mechanism, also using gzip. In testing, the results were about the same as Tomcat's.
import javax.servlet.ServletOutputStream;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpServletResponseWrapper;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
public class GZIPResponseWrapper extends HttpServletResponseWrapper {
protected HttpServletResponse origResponse = null;
protected ServletOutputStream stream = null;
protected PrintWriter writer = null;
public GZIPResponseWrapper(HttpServletResponse response) {
super(response);
origResponse = response;
}
public ServletOutputStream createOutputStream() throws IOException {
return (new GZIPResponseStream(origResponse));
}
public void finishResponse() {
try {
if (writer != null) {
writer.close();
} else {
if (stream != null) {
stream.close();
}
}
} catch (IOException e) {}
}
@Override
public void flushBuffer() throws IOException {
// guard against flushing before any output stream has been created
if (stream != null) {
stream.flush();
}
}
@Override
public ServletOutputStream getOutputStream() throws IOException {
if (writer != null) {
throw new IllegalStateException("getWriter() has already been called!");
}
if (stream == null)
stream = createOutputStream();
return (stream);
}
@Override
public PrintWriter getWriter() throws IOException {
if (writer != null) {
return (writer);
}
if (stream != null) {
throw new IllegalStateException("getOutputStream() has already been called!");
}
stream = createOutputStream();
writer = new PrintWriter(new OutputStreamWriter(stream, "UTF-8"));
return (writer);
}
@Override
public void setContentLength(int length) {}
}
import javax.servlet.ServletOutputStream;
import javax.servlet.WriteListener;
import javax.servlet.http.HttpServletResponse;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;
public class GZIPResponseStream extends ServletOutputStream {
protected ByteArrayOutputStream baos = null;
protected GZIPOutputStream gzipstream = null;
protected boolean closed = false;
protected HttpServletResponse response = null;
protected ServletOutputStream output = null;
public GZIPResponseStream(HttpServletResponse response) throws IOException {
super();
closed = false;
this.response = response;
this.output = response.getOutputStream();
baos = new ByteArrayOutputStream();
gzipstream = new GZIPOutputStream(baos);
}
@Override
public void close() throws IOException {
if (closed) {
throw new IOException("This output stream has already been closed");
}
gzipstream.finish();
byte[] bytes = baos.toByteArray();
response.addHeader("Content-Length",
Integer.toString(bytes.length));
response.addHeader("Content-Encoding", "gzip");
output.write(bytes);
output.flush();
output.close();
closed = true;
}
@Override
public void flush() throws IOException {
if (closed) {
throw new IOException("Cannot flush a closed output stream");
}
gzipstream.flush();
}
@Override
public void write(int b) throws IOException {
if (closed) {
throw new IOException("Cannot write to a closed output stream");
}
gzipstream.write((byte)b);
}
public void write(byte b[]) throws IOException {
write(b, 0, b.length);
}
public void write(byte b[], int off, int len) throws IOException {
if (closed) {
throw new IOException("Cannot write to a closed output stream");
}
gzipstream.write(b, off, len);
}
public boolean closed() {
return (this.closed);
}
public void reset() {
//noop
}
@Override
public boolean isReady() {
// delegate readiness checks to the underlying container stream
return output.isReady();
}
@Override
public void setWriteListener(WriteListener writeListener) {
}
}
package com.spdb.web.base;
import java.io.*;
import java.util.zip.GZIPOutputStream;
import javax.servlet.*;
import javax.servlet.http.*;
/**
 * Description: gzip-compresses Spring MVC response data
 * @Author junwei
 * @Date 17:49 2020/3/26
 **/
public class GZIPFilter implements Filter {
@Override
public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
throws IOException, ServletException {
if (req instanceof HttpServletRequest) {
HttpServletRequest request = (HttpServletRequest) req;
HttpServletResponse response = (HttpServletResponse) res;
String ae = request.getHeader("accept-encoding");
if (ae != null && ae.indexOf("gzip") != -1) {
GZIPResponseWrapper wrappedResponse = new GZIPResponseWrapper(response);
chain.doFilter(req, wrappedResponse);
wrappedResponse.finishResponse();
return;
}
}
// non-HTTP requests and clients without gzip support pass through unchanged
chain.doFilter(req, res);
}
@Override
public void init(FilterConfig filterConfig) {
// noop
}
@Override
public void destroy() {
// noop
}
}
Finally, register the filter in web.xml (com.spdb.web.base is the package path within your own project):
<filter>
<filter-name>GZIPFilter</filter-name>
<filter-class>com.spdb.web.base.GZIPFilter</filter-class>
</filter>
<filter-mapping>
<filter-name>GZIPFilter</filter-name>
<url-pattern>/*</url-pattern>
</filter-mapping>
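With the filter registered, it is easy to sanity-check how much gzip saves on a JSON-like payload using only the JDK. This standalone sketch (class name hypothetical) mirrors what GZIPResponseStream does internally with java.util.zip.GZIPOutputStream:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class GzipSizeCheck {
    // Compress a byte array with gzip and return the compressed bytes.
    static byte[] gzip(byte[] input) throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(baos)) {
            gz.write(input);
        } // closing the stream finishes the gzip trailer
        return baos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // Repetitive JSON compresses very well; realistic payloads vary.
        StringBuilder sb = new StringBuilder("[");
        for (int i = 0; i < 1000; i++) {
            sb.append("{\"productid\":\"FI-SW-01\",\"productname\":\"Koi\",\"unitcost\":10.00},");
        }
        sb.setCharAt(sb.length() - 1, ']');
        byte[] raw = sb.toString().getBytes(StandardCharsets.UTF_8);
        byte[] packed = gzip(raw);
        System.out.println("raw=" + raw.length + " bytes, gzip=" + packed.length + " bytes");
    }
}
```

The ratio you see here explains the bandwidth savings; the time spent inside gzip() is the overhead that erased the gains on a 100 MB payload.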
3. msgpack compression
MessagePack is a good serialization library with its own compact encoding and implementations for most programming languages. After a series of tests, however, the Java and JS sides did not appear to use the same encoding in my setup; data packed on one side could not be unpacked on the other, so I gave it up.
Reference: https://msgpack.org/
4. Ajax polling
Ajax polling was something I tried as an experiment: split one request into several, and keep requesting while data remains.
The upside is that the work is spread over multiple requests and each chunk can be rendered as soon as it arrives; the downside is that the total load time gets longer. Both the backend and the frontend Ajax logic also need to be fairly robust, since a series of async/sync issues must be handled. After some testing it proved impractical, so I dropped it.
Reference: https://www.cnblogs.com/YangJieCheng/p/8367586.html
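For reference, the core of the polling idea can be sketched as a server-side pager (all names hypothetical): the frontend would repeatedly call the endpoint backing nextPage(), render each page on arrival, and stop when an empty page comes back.

```java
import java.util.ArrayList;
import java.util.List;

public class RowPager {
    private final List<String> rows;
    private final int pageSize;
    private int cursor = 0;

    public RowPager(List<String> rows, int pageSize) {
        this.rows = rows;
        this.pageSize = pageSize;
    }

    // Return the next slice of rows; an empty list signals completion.
    public List<String> nextPage() {
        int end = Math.min(cursor + pageSize, rows.size());
        List<String> page = new ArrayList<>(rows.subList(cursor, end));
        cursor = end;
        return page;
    }
}
```

This is what makes incremental rendering possible, but every extra round trip adds its own HTTP overhead, which is why the total load time grows.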
5. Response format: collections of objects
At this point it was clear that compression is really just reducing the amount of data, so the question became how to cut the data volume at the source. Shrinking the total number of rows is not an option, but the keys of the objects inside the collection can be optimized.
A normal paged JSON collection returned to the frontend looks like the row below. Measured by string length, one row costs close to 150 characters:
{"total":28,"rows":[
{"productid":"FI-SW-01","productname":"Koi","unitcost":10.00,"status":"P","listprice":36.50,"attr1":"Large","itemid":"EST-1"},
By shortening the keys, the same row drops to under 100 characters; the payload shrinks by roughly 30%, and transfer speed improves accordingly because there is less data to send:
{"total":28,"rows":[ {"A":"FI-SW-01","B":"Koi","C":10.00,"D":"P","E":36.50,"F":"Large","G":"EST-1"},
In practice, this key-shortening approach typically gave a 30-50% speedup. The drawback is that the fields are no longer self-describing: without an agreed mapping, you cannot tell what each value represents.
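One way to implement the shortening on the backend, sketched here as plain string replacement over the already-serialized JSON (class name hypothetical; a real implementation would rename keys during serialization instead):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class KeyShortener {
    // The mapping table agreed between backend and frontend; both sides
    // must share exactly this table for the data to be interpretable.
    static final Map<String, String> SHORT_KEYS = new LinkedHashMap<>();
    static {
        SHORT_KEYS.put("productid", "A");
        SHORT_KEYS.put("productname", "B");
        SHORT_KEYS.put("unitcost", "C");
        SHORT_KEYS.put("status", "D");
        SHORT_KEYS.put("listprice", "E");
        SHORT_KEYS.put("attr1", "F");
        SHORT_KEYS.put("itemid", "G");
    }

    // Replace each quoted key (including the trailing colon) with its
    // short form; matching "key": avoids touching string values.
    static String shorten(String json) {
        for (Map.Entry<String, String> e : SHORT_KEYS.entrySet()) {
            json = json.replace("\"" + e.getKey() + "\":", "\"" + e.getValue() + "\":");
        }
        return json;
    }
}
```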
6. Response format: arrays
Building on the fifth approach: if the keys can be shortened, switching the rows to arrays removes them entirely.
A row now takes only about 50 characters, a third of the original 150, and transfer speed improved by roughly 60-70%. The drawback grows accordingly: array rows are even harder to read than shortened keys, so frontend and backend must agree on the rendering logic in advance.
[[28],["FI-SW-01","Koi",10.00,"P",36.50,"Large","EST-1"]]
In the end I adopted this sixth approach for rendering the data. Judging by the results, the transfer time dropped considerably, though the implementation did become somewhat more involved.
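The array-style serialization can be sketched as follows (class and method names hypothetical): each row becomes a JSON array, so the agreed field order replaces the keys entirely.

```java
import java.util.List;

public class ArrayRows {
    // Serialize one row as a JSON array instead of an object; the field
    // order is the contract that both sides must agree on.
    static String rowToJson(List<?> fields) {
        StringBuilder sb = new StringBuilder("[");
        for (int i = 0; i < fields.size(); i++) {
            if (i > 0) sb.append(',');
            Object f = fields.get(i);
            if (f instanceof Number) {
                sb.append(f); // numbers are emitted bare
            } else {
                sb.append('"').append(f).append('"'); // everything else quoted
            }
        }
        return sb.append(']').toString();
    }
}
```

The frontend then maps positions back to fields when rendering, which is exactly the agreed-upon logic the drawback above refers to.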