Learning Flume (4): Using Flume Channel Selectors

Questions to guide your reading:
1. How do you route different projects' logs to different channels?
2. How should you read a topology where one sink writes to HDFS and the other is a logger?
3. How do you extend the Log4jExtAppender.java class with an extra parameter?





The previous articles dealt with logs from a single project; now let's consider collecting logs from multiple projects. I copied the flumedemo project, renamed the copy flumedemo2, and added a WriteLog2.java class. It makes a small change to the JSON output, replacing "reporter-api" in the old requestUrl with "image-api" so its output can be told apart from that of the WriteLog class:
package com.besttone.flume;

import java.util.Date;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class WriteLog2 {
    protected static final Log logger = LogFactory.getLog(WriteLog2.class);

    /**
     * @param args
     * @throws InterruptedException
     */
    public static void main(String[] args) throws InterruptedException {
        while (true) {
            logger.info(new Date().getTime());
            logger.info("{\"requestTime\":"
                    + System.currentTimeMillis()
                    + ",\"requestParams\":{\"timestamp\":1405499314238,\"phone\":\"02038824941\",\"cardName\":\"測試商家名稱\",\"provinceCode\":\"440000\",\"cityCode\":\"440106\"},\"requestUrl\":\"/image-api/reporter/reporter12/init.do\"}");
            Thread.sleep(2000);
        }
    }
}


Now consider the following requirement: the log4j logs from the flumedemo project must go to HDFS, while the log4j logs from the flumedemo2 project must go to the agent's own log output.

We still use a log4j appender to hand log4j output to the Flume source, but the requirement now clearly calls for two sinks: one HDFS sink and one logger sink. So the topology should look like this:
(Topology diagram: both projects' log4j appenders send to a single Avro source, whose channel selector routes events through channel1 to the HDFS sink and through channel2 to the logger sink.)
To implement this topology we need channel selectors, so that different projects' logs flow through different channels to different sinks.

According to the official documentation, there are two types of channel selector:

Replicating Channel Selector (default)

Multiplexing Channel Selector

The difference between the two: a replicating selector sends events arriving from the source to all channels, while a multiplexing selector can choose which channels to send to. For our example, replicating would send both demo's and demo2's logs to channel1 and channel2 at the same time, which clearly does not match the requirement: demo's logs should go only to channel1, and demo2's logs only to channel2.
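For reference, a replicating setup would be just a minimal sketch like this (reusing the agent, source, and channel names from the full configuration at the end of this article); with it, every event is copied into both channels:

tier1.sources.source1.channels=channel1 channel2
tier1.sources.source1.selector.type=replicating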

So we choose the Multiplexing Channel Selector. Here we hit a tricky problem: a multiplexing selector decides which channel to dispatch to by inspecting the value of a given key in the event headers, and demo and demo2 are running on the same server. If they ran on different servers, we could add a host interceptor to source1 (introduced in the previous article) and route each event by the host value in its headers. But on the same server, host cannot tell the two log sources apart, so we must find a way to add a key to the headers that identifies where each log came from.
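For that multi-server scenario, a rough sketch of host-based routing might look like the following (the server names in the mappings are placeholders; useIP and hostHeader are standard options of Flume's host interceptor):

tier1.sources.source1.interceptors=i1
tier1.sources.source1.interceptors.i1.type=host
tier1.sources.source1.interceptors.i1.useIP=false
tier1.sources.source1.interceptors.i1.hostHeader=host
tier1.sources.source1.selector.type=multiplexing
tier1.sources.source1.selector.header=host
tier1.sources.source1.selector.mapping.server1=channel1
tier1.sources.source1.selector.mapping.server2=channel2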

Imagine the headers contained a key flume.client.log4j.logger.source. If we set its value to app1 for demo and app2 for demo2, then we could use the following configuration:
tier1.sources.source1.channels=channel1 channel2
tier1.sources.source1.selector.type=multiplexing
tier1.sources.source1.selector.header=flume.client.log4j.logger.source
tier1.sources.source1.selector.mapping.app1=channel1
tier1.sources.source1.selector.mapping.app2=channel2

to route each project's logs to its own channel.
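As a side note taken from the Flume documentation (not used in this article's final config), the multiplexing selector also supports a default channel for events whose header value matches none of the mappings; this would pair naturally with the "unknown" fallback value that the extended appender below writes when no source is configured:

tier1.sources.source1.selector.default=channel1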


Following this line of thought, we ran into a snag: the stock log4j appender has no such parameter for you to set. What to do? I looked through the Log4jAppender source code and found it easy to add extra parameters, so I copied the appender code into a new class called Log4jExtAppender.java and extended it with a parameter named source. The code is as follows:

package com.besttone.flume;

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.Charset;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.DatumWriter;
import org.apache.avro.io.EncoderFactory;
import org.apache.avro.reflect.ReflectData;
import org.apache.avro.reflect.ReflectDatumWriter;
import org.apache.avro.specific.SpecificRecord;
import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.FlumeException;
import org.apache.flume.api.RpcClient;
import org.apache.flume.api.RpcClientConfigurationConstants;
import org.apache.flume.api.RpcClientFactory;
import org.apache.flume.clients.log4jappender.Log4jAvroHeaders;
import org.apache.flume.event.EventBuilder;
import org.apache.log4j.AppenderSkeleton;
import org.apache.log4j.helpers.LogLog;
import org.apache.log4j.spi.LoggingEvent;

/**
 * Appends Log4j Events to an external Flume client which is described by the
 * Log4j configuration file. The appender takes two required parameters:
 * <p>
 * <strong>Hostname</strong> : This is the hostname of the first hop at which
 * Flume (through an AvroSource) is listening for events.
 * </p>
 * <p>
 * <strong>Port</strong> : This the port on the above host where the Flume
 * Source is listening for events.
 * </p>
 * A sample log4j properties file which appends to a source would look like:
 *
 * <pre>
 * <p>
 * log4j.appender.out2 = org.apache.flume.clients.log4jappender.Log4jAppender
 * log4j.appender.out2.Port = 25430
 * log4j.appender.out2.Hostname = foobarflumesource.com
 * log4j.logger.org.apache.flume.clients.log4jappender = DEBUG,out2</p>
 * </pre>
 * <p>
 * <i>Note: Change the last line to the package of the class(es), that will do
 * the appending. For example if classes from the package com.bar.foo are
 * appending, the last line would be:</i>
 * </p>
 *
 * <pre>
 * <p>log4j.logger.com.bar.foo = DEBUG,out2</p>
 * </pre>
 */
public class Log4jExtAppender extends AppenderSkeleton {

    private String hostname;
    private int port;
    private String source;

    public String getSource() {
        return source;
    }

    public void setSource(String source) {
        this.source = source;
    }

    private boolean unsafeMode = false;
    private long timeout = RpcClientConfigurationConstants.DEFAULT_REQUEST_TIMEOUT_MILLIS;
    private boolean avroReflectionEnabled;
    private String avroSchemaUrl;

    RpcClient rpcClient = null;

    /**
     * If this constructor is used programmatically rather than from a log4j
     * conf you must set the <tt>port</tt> and <tt>hostname</tt> and then call
     * <tt>activateOptions()</tt> before calling <tt>append()</tt>.
     */
    public Log4jExtAppender() {
    }

    /**
     * Sets the hostname and port. Even if these are passed the
     * <tt>activateOptions()</tt> function must be called before calling
     * <tt>append()</tt>, else <tt>append()</tt> will throw an Exception.
     *
     * @param hostname
     *            The first hop where the client should connect to.
     * @param port
     *            The port to connect on the host.
     */
    public Log4jExtAppender(String hostname, int port, String source) {
        this.hostname = hostname;
        this.port = port;
        this.source = source;
    }

    /**
     * Append the LoggingEvent, to send to the first Flume hop.
     *
     * @param event
     *            The LoggingEvent to be appended to the flume.
     * @throws FlumeException
     *             if the appender was closed, or the hostname and port were not
     *             setup, there was a timeout, or there was a connection error.
     */
    @Override
    public synchronized void append(LoggingEvent event) throws FlumeException {
        // If rpcClient is null, it means either this appender object was never
        // setup by setting hostname and port and then calling activateOptions
        // or this appender object was closed by calling close(), so we throw an
        // exception to show the appender is no longer accessible.
        if (rpcClient == null) {
            String errorMsg = "Cannot Append to Appender! Appender either closed or"
                    + " not setup correctly!";
            LogLog.error(errorMsg);
            if (unsafeMode) {
                return;
            }
            throw new FlumeException(errorMsg);
        }

        if (!rpcClient.isActive()) {
            reconnect();
        }

        // Client created first time append is called.
        Map<String, String> hdrs = new HashMap<String, String>();
        hdrs.put(Log4jAvroHeaders.LOGGER_NAME.toString(), event.getLoggerName());
        hdrs.put(Log4jAvroHeaders.TIMESTAMP.toString(),
                String.valueOf(event.timeStamp));

        // Add the log source to the event headers.
        if (this.source == null || this.source.equals("")) {
            this.source = "unknown";
        }
        hdrs.put("flume.client.log4j.logger.source", this.source);
        // To get the level back simply use
        // LoggerEvent.toLevel(hdrs.get(Integer.parseInt(
        // Log4jAvroHeaders.LOG_LEVEL.toString()))
        hdrs.put(Log4jAvroHeaders.LOG_LEVEL.toString(),
                String.valueOf(event.getLevel().toInt()));

        Event flumeEvent;
        Object message = event.getMessage();
        if (message instanceof GenericRecord) {
            GenericRecord record = (GenericRecord) message;
            populateAvroHeaders(hdrs, record.getSchema(), message);
            flumeEvent = EventBuilder.withBody(
                    serialize(record, record.getSchema()), hdrs);
        } else if (message instanceof SpecificRecord || avroReflectionEnabled) {
            Schema schema = ReflectData.get().getSchema(message.getClass());
            populateAvroHeaders(hdrs, schema, message);
            flumeEvent = EventBuilder
                    .withBody(serialize(message, schema), hdrs);
        } else {
            hdrs.put(Log4jAvroHeaders.MESSAGE_ENCODING.toString(), "UTF8");
            String msg = layout != null ? layout.format(event) : message
                    .toString();
            flumeEvent = EventBuilder.withBody(msg, Charset.forName("UTF8"),
                    hdrs);
        }

        try {
            rpcClient.append(flumeEvent);
        } catch (EventDeliveryException e) {
            String msg = "Flume append() failed.";
            LogLog.error(msg);
            if (unsafeMode) {
                return;
            }
            throw new FlumeException(msg + " Exception follows.", e);
        }
    }

    private Schema schema;
    private ByteArrayOutputStream out;
    private DatumWriter<Object> writer;
    private BinaryEncoder encoder;

    protected void populateAvroHeaders(Map<String, String> hdrs, Schema schema,
            Object message) {
        if (avroSchemaUrl != null) {
            hdrs.put(Log4jAvroHeaders.AVRO_SCHEMA_URL.toString(), avroSchemaUrl);
            return;
        }
        LogLog.warn("Cannot find ID for schema. Adding header for schema, "
                + "which may be inefficient. Consider setting up an Avro Schema Cache.");
        hdrs.put(Log4jAvroHeaders.AVRO_SCHEMA_LITERAL.toString(),
                schema.toString());
    }

    private byte[] serialize(Object datum, Schema datumSchema)
            throws FlumeException {
        if (schema == null || !datumSchema.equals(schema)) {
            schema = datumSchema;
            out = new ByteArrayOutputStream();
            writer = new ReflectDatumWriter<Object>(schema);
            encoder = EncoderFactory.get().binaryEncoder(out, null);
        }
        out.reset();
        try {
            writer.write(datum, encoder);
            encoder.flush();
            return out.toByteArray();
        } catch (IOException e) {
            throw new FlumeException(e);
        }
    }

    // This function should be synchronized to make sure one thread
    // does not close an appender another thread is using, and hence risking
    // a null pointer exception.
    /**
     * Closes underlying client. If <tt>append()</tt> is called after this
     * function is called, it will throw an exception.
     *
     * @throws FlumeException
     *             if errors occur during close
     */
    @Override
    public synchronized void close() throws FlumeException {
        // Any append calls after this will result in an Exception.
        if (rpcClient != null) {
            try {
                rpcClient.close();
            } catch (FlumeException ex) {
                LogLog.error("Error while trying to close RpcClient.", ex);
                if (unsafeMode) {
                    return;
                }
                throw ex;
            } finally {
                rpcClient = null;
            }
        } else {
            String errorMsg = "Flume log4jappender already closed!";
            LogLog.error(errorMsg);
            if (unsafeMode) {
                return;
            }
            throw new FlumeException(errorMsg);
        }
    }

    @Override
    public boolean requiresLayout() {
        // This method is named quite incorrectly in the interface. It should
        // probably be called canUseLayout or something. According to the docs,
        // even if the appender can work without a layout, if it can work with
        // one, this method must return true.
        return true;
    }

    /**
     * Set the first flume hop hostname.
     *
     * @param hostname
     *            The first hop where the client should connect to.
     */
    public void setHostname(String hostname) {
        this.hostname = hostname;
    }

    /**
     * Set the port on the hostname to connect to.
     *
     * @param port
     *            The port to connect on the host.
     */
    public void setPort(int port) {
        this.port = port;
    }

    public void setUnsafeMode(boolean unsafeMode) {
        this.unsafeMode = unsafeMode;
    }

    public boolean getUnsafeMode() {
        return unsafeMode;
    }

    public void setTimeout(long timeout) {
        this.timeout = timeout;
    }

    public long getTimeout() {
        return this.timeout;
    }

    public void setAvroReflectionEnabled(boolean avroReflectionEnabled) {
        this.avroReflectionEnabled = avroReflectionEnabled;
    }

    public void setAvroSchemaUrl(String avroSchemaUrl) {
        this.avroSchemaUrl = avroSchemaUrl;
    }

    /**
     * Activate the options set using <tt>setPort()</tt> and
     * <tt>setHostname()</tt>
     *
     * @throws FlumeException
     *             if the <tt>hostname</tt> and <tt>port</tt> combination is
     *             invalid.
     */
    @Override
    public void activateOptions() throws FlumeException {
        Properties props = new Properties();
        props.setProperty(RpcClientConfigurationConstants.CONFIG_HOSTS, "h1");
        props.setProperty(RpcClientConfigurationConstants.CONFIG_HOSTS_PREFIX
                + "h1", hostname + ":" + port);
        props.setProperty(
                RpcClientConfigurationConstants.CONFIG_CONNECT_TIMEOUT,
                String.valueOf(timeout));
        props.setProperty(
                RpcClientConfigurationConstants.CONFIG_REQUEST_TIMEOUT,
                String.valueOf(timeout));
        try {
            rpcClient = RpcClientFactory.getInstance(props);
            if (layout != null) {
                layout.activateOptions();
            }
        } catch (FlumeException e) {
            String errormsg = "RPC client creation failed! " + e.getMessage();
            LogLog.error(errormsg);
            if (unsafeMode) {
                return;
            }
            throw e;
        }
    }

    /**
     * Make it easy to reconnect on failure
     *
     * @throws FlumeException
     */
    private void reconnect() throws FlumeException {
        close();
        activateOptions();
    }
}
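Purely to illustrate the constructor-plus-activateOptions() contract described in the javadoc above, here is a minimal sketch of using the appender programmatically rather than through log4j.properties (the hostname, port, and source values are placeholders):

package com.besttone.flume;

import org.apache.log4j.Logger;

public class AppenderDemo {
    public static void main(String[] args) {
        // Set host/port/source via the constructor, then activate
        // before any append() happens.
        Log4jExtAppender appender = new Log4jExtAppender("localhost", 44444, "app1");
        appender.activateOptions();

        Logger logger = Logger.getLogger(AppenderDemo.class);
        logger.addAppender(appender);
        logger.info("{\"requestUrl\":\"/reporter-api/test/init.do\"}");
    }
}

In this article, though, everything is wired up through log4j.properties, as shown below.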



I then packaged this class into a jar, Log4jExtAppender.jar, and dropped it into the lib directories of both flumedemo and flumedemo2.
flumedemo's log4j.properties now looks like this:
log4j.rootLogger=INFO

log4j.category.com.besttone=INFO,flume,console,LogFile

#log4j.appender.flume = org.apache.flume.clients.log4jappender.Log4jAppender
log4j.appender.flume = com.besttone.flume.Log4jExtAppender
log4j.appender.flume.Hostname = localhost
log4j.appender.flume.Port = 44444
log4j.appender.flume.UnsafeMode = false
log4j.appender.flume.Source = app1

log4j.appender.console= org.apache.log4j.ConsoleAppender
log4j.appender.console.Target= System.out
log4j.appender.console.layout= org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern= %d{yyyy-MM-dd HH:mm:ss} %5p %c{1}: %L - %m%n

log4j.appender.LogFile= org.apache.log4j.DailyRollingFileAppender
log4j.appender.LogFile.File= logs/app.log
log4j.appender.LogFile.MaxFileSize=10KB
log4j.appender.LogFile.Append= true
log4j.appender.LogFile.Threshold= DEBUG
log4j.appender.LogFile.layout= org.apache.log4j.PatternLayout
log4j.appender.LogFile.layout.ConversionPattern= %-d{yyyy-MM-dd HH:mm:ss} [%t:%r] - [%5p] %m%n



flumedemo2's is as follows:
log4j.rootLogger=INFO

log4j.category.com.besttone=INFO,flume,console,LogFile

#log4j.appender.flume = org.apache.flume.clients.log4jappender.Log4jAppender
log4j.appender.flume = com.besttone.flume.Log4jExtAppender
log4j.appender.flume.Hostname = localhost
log4j.appender.flume.Port = 44444
log4j.appender.flume.UnsafeMode = false
log4j.appender.flume.Source = app2

log4j.appender.console= org.apache.log4j.ConsoleAppender
log4j.appender.console.Target= System.out
log4j.appender.console.layout= org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern= %d{yyyy-MM-dd HH:mm:ss} %5p %c{1}: %L - %m%n

log4j.appender.LogFile= org.apache.log4j.DailyRollingFileAppender
log4j.appender.LogFile.File= logs/app.log
log4j.appender.LogFile.MaxFileSize=10KB
log4j.appender.LogFile.Append= true
log4j.appender.LogFile.Threshold= DEBUG
log4j.appender.LogFile.layout= org.apache.log4j.PatternLayout
log4j.appender.LogFile.layout.ConversionPattern= %-d{yyyy-MM-dd HH:mm:ss} [%t:%r] - [%5p] %m%n



The original log4j.appender.flume was changed from the stock org.apache.flume.clients.log4jappender.Log4jAppender to my reimplemented com.besttone.flume.Log4jExtAppender, which adds the source parameter.

Then flumedemo sets log4j.appender.flume.Source = app1, and flumedemo2 sets log4j.appender.flume.Source = app2.
Run flumedemo's WriteLog class and flumedemo2's WriteLog2 class, then check the contents on HDFS and in the agent's log file: HDFS holds only app1's logs and the log file holds only app2's logs. The feature works.

The complete flume.conf is as follows:

tier1.sources=source1
tier1.channels=channel1 channel2
tier1.sinks=sink1 sink2
tier1.sources.source1.type=avro
tier1.sources.source1.bind=0.0.0.0
tier1.sources.source1.port=44444
tier1.sources.source1.channels=channel1 channel2
tier1.sources.source1.selector.type=multiplexing
tier1.sources.source1.selector.header=flume.client.log4j.logger.source
tier1.sources.source1.selector.mapping.app1=channel1
tier1.sources.source1.selector.mapping.app2=channel2
tier1.sources.source1.interceptors=i1 i2
tier1.sources.source1.interceptors.i1.type=regex_filter
tier1.sources.source1.interceptors.i1.regex=\\{.*\\}
tier1.sources.source1.interceptors.i2.type=timestamp
tier1.channels.channel1.type=memory
tier1.channels.channel1.capacity=10000
tier1.channels.channel1.transactionCapacity=1000
tier1.channels.channel1.keep-alive=30
tier1.channels.channel2.type=memory
tier1.channels.channel2.capacity=10000
tier1.channels.channel2.transactionCapacity=1000
tier1.channels.channel2.keep-alive=30
tier1.sinks.sink1.type=hdfs
tier1.sinks.sink1.channel=channel1
tier1.sinks.sink1.hdfs.path=hdfs://master68:8020/flume/events/%y-%m-%d
tier1.sinks.sink1.hdfs.round=true
tier1.sinks.sink1.hdfs.roundValue=10
tier1.sinks.sink1.hdfs.roundUnit=minute
tier1.sinks.sink1.hdfs.fileType=DataStream
tier1.sinks.sink1.hdfs.writeFormat=Text
tier1.sinks.sink1.hdfs.rollInterval=0
tier1.sinks.sink1.hdfs.rollSize=10240
tier1.sinks.sink1.hdfs.rollCount=0
tier1.sinks.sink1.hdfs.idleTimeout=60
tier1.sinks.sink2.type=logger
tier1.sinks.sink2.channel=channel2
