Squid Framework (Part 1)

1. Introduction

The Squid source code has evolved more from empirical observation and tinkering than from a solid design process. It carries a legacy of being ``touched'' by numerous individuals, each with somewhat different techniques and terminology.
Squid is a single-process proxy server. Every request is handled by the main process, with the exception of FTP. However, Squid does not use a ``threads package'' such as Pthreads. While this might be easier to code, it suffers from portability and performance problems. Instead Squid maintains data structures and state information for each active request.
The code is often difficult to follow because there are no explicit state variables for the active requests. Instead, thread execution progresses as a sequence of ``callback functions'' which get executed when I/O is ready to occur, or some other event has happened. As a callback function completes, it is responsible for registering the next callback function for subsequent I/O.
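This callback-driven control flow can be sketched in miniature. The code below is a stand-in, not actual Squid code: each step does its work and registers the next callback, and a small driver loop (playing the role Squid's select loop plays) invokes whatever is pending.

```c
#include <stddef.h>

/* Minimal stand-in for Squid's callback-driven control flow: there is no
 * explicit per-request state variable; each callback performs one step and
 * registers the next callback to run when its "I/O" would be ready. */
typedef void (*CB)(void *data);

static CB pending_cb = NULL;     /* the next callback, as the event loop holds it */
static void *pending_data = NULL;

static void register_cb(CB cb, void *data) {
    pending_cb = cb;
    pending_data = data;
}

/* Three steps of a hypothetical request: parse -> fetch -> reply. */
static void do_reply(void *data)  { *(int *)data |= 4; register_cb(NULL, NULL); }
static void do_fetch(void *data)  { *(int *)data |= 2; register_cb(do_reply, data); }
static void do_parse(void *data)  { *(int *)data |= 1; register_cb(do_fetch, data); }

/* Drive the "event loop" until no callback is registered; returns a
 * bitmask recording which steps ran. */
int run_request(void) {
    int steps = 0;
    register_cb(do_parse, &steps);
    while (pending_cb) {
        CB cb = pending_cb;
        pending_cb = NULL;          /* handler is cleared before it is called */
        cb(pending_data);
    }
    return steps;
}
```

Each callback's last act is to register its successor; when none is registered, the request is finished.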
Note there is only a pseudo-consistent naming scheme. In most cases functions are named like moduleFooBar(). However, there are also some functions named like module_foo_bar().
Note that the Squid source changes rapidly, and some parts of this document may become out-of-date. If you find any inconsistencies, please feel free to notify the Squid Developers.
1.1 Conventions

Function names and file names will be written in a courier font, such as store.c and storeRegister(). Data structures and their members will be written in an italicized font, such as StoreEntry.
2. Coding Conventions

2.1 Infrastructure

Most custom types and tools are documented in the code or the relevant portions of this manual. Some key points apply globally however.
Fixed width types

If you need to use specific width types, such as a 16 bit unsigned integer, use one of the following. To access them, simply include "config.h".
int16_t - 16 bit signed.
u_int16_t - 16 bit unsigned.
int32_t - 32 bit signed.
u_int32_t - 32 bit unsigned.
int64_t - 64 bit signed.
u_int64_t - 64 bit unsigned.
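A quick sanity check of the width guarantees, using the standard <stdint.h> names as a stand-in for what "config.h" provides (on modern POSIX systems the unsigned variants are spelled uint16_t and so on):

```c
#include <stdint.h>   /* stand-in here; in Squid code you would include "config.h" */

/* Verify the width guarantees listed above, in bytes. */
int widths_ok(void) {
    return sizeof(int16_t) == 2
        && sizeof(int32_t) == 4
        && sizeof(int64_t) == 8;
}
```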
3. Overview of Squid Components

Squid consists of the following major components:
3.1 Client Side Socket

Here new client connections are accepted, parsed, and reply data sent. Per-connection state information is held in a data structure called ConnStateData. Per-request state information is stored in the clientSocketContext structure. With HTTP/1.1 we may have multiple requests from a single TCP connection.
3.2 Client Side Request

This is where requests are processed. We determine if the request is to be redirected, if it passes access lists, and set up the initial client stream for internal requests. Temporary state for this processing is held in a clientRequestContext struct.
3.3 Client Side Reply

This is where we determine if the request is a cache HIT, REFRESH, MISS, etc. This involves querying the store (possibly multiple times) to work through Vary lists and the like. Per-request state information is stored in the clientReplyContext structure.
3.4 Client Streams

These routines implement a unidirectional, non-blocking, pull pipeline. They allow code to be inserted into the reply logic on an as-needed basis. For instance, transfer-encoding logic is only needed when sending an HTTP/1.1 reply.
3.5 Server Side

These routines are responsible for forwarding cache misses to other servers, depending on the protocol. Cache misses may be forwarded to either origin servers or other proxy caches. Note that all requests (FTP, Gopher) to other proxies are sent as HTTP requests. gopher.c is somewhat complex and gross because it must convert from the Gopher protocol to HTTP. WAIS and Gopher don't receive much attention because they comprise a relatively insignificant portion of Internet traffic.
3.6 Storage Manager

The Storage Manager is the glue between client and server sides. Every object saved in the cache is allocated a StoreEntry structure. While the object is being accessed, it also has a MemObject structure.
Squid can quickly locate cached objects because it keeps (in memory) a hash table of all StoreEntry's. The keys for the hash table are MD5 checksums of the object's URI. In addition, there is also a storage policy such as LRU that keeps track of the objects and determines the removal order when space needs to be reclaimed. For the LRU policy this is implemented as a doubly linked list.
For each object the StoreEntry maps to a cache_dir and location via sdirno and sfileno. For the "ufs" store this file number (sfileno) is converted to a disk pathname by a simple modulo of L2 and L1, but other storage drivers may map sfileno in other ways. A cache swap file consists of two parts: the cache metadata and the object data. Note the object data includes the full HTTP reply---headers and body. The HTTP reply headers are not the same as the cache metadata.
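As an illustration, a ufs-style mapping from the swap file number to a disk path might look like the sketch below. The exact field layout (hex first-level directory / hex second-level directory / file number) is an assumption patterned on Squid's storeUfsDirFullPath(); other storage drivers map the number differently.

```c
#include <stdio.h>

/* Sketch of a "ufs"-style mapping from a swap file number to a disk path,
 * modeled on the L1/L2 modulo scheme described above.  Illustrative only;
 * real drivers may lay the tree out differently. */
void ufs_swap_path(const char *cachedir, int l1, int l2, int filn,
                   char *buf, size_t buflen) {
    snprintf(buf, buflen, "%s/%02X/%02X/%08X",
             cachedir,
             ((filn / l2) / l2) % l1,   /* first-level directory */
             (filn / l2) % l2,          /* second-level directory */
             filn);                     /* the swap file itself */
}
```

With the default squid.conf layout of 16 first-level and 256 second-level directories, file number 0x12345 would land in /cache/01/23/00012345.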
Client-side requests register themselves with a StoreEntry to be notified when new data arrives. Multiple clients may receive data via a single StoreEntry. For POST and PUT requests, this process works in reverse. Server-side functions are notified when additional data is read from the client.
3.7 Request Forwarding

3.8 Peer Selection

These functions are responsible for selecting one (or none) of the neighbor caches as the appropriate forwarding location.
3.9 Access Control

These functions are responsible for allowing or denying a request, based on a number of different parameters. These parameters include the client's IP address, the hostname of the requested resource, the request method, etc. Some of the necessary information may not be immediately available, for example the origin server's IP address. In these cases, the ACL routines initiate lookups for the necessary information and continue the access control checks when the information is available.
3.10 Authentication Framework

These functions are responsible for handling HTTP authentication. They follow a modular framework, allowing different authentication schemes to be added at will. For information on working with the authentication schemes, see the chapter Authentication Framework.
3.11 Network Communication

These are the routines for communicating over TCP and UDP network sockets. Here is where sockets are opened, closed, read, and written. In addition, note that the heart of Squid (comm_select() or comm_poll()) exists here, even though it handles all file descriptors, not just network sockets. These routines do not support queuing multiple blocks of data for writing. Consequently, a callback occurs for every write request.
3.12 File/Disk I/O

Routines for reading and writing disk files (and FIFOs). Reasons for separating network and disk I/O functions are partly historical, and partly because of different behaviors. For example, we don't worry about getting a ``No space left on device'' error for network sockets. The disk I/O routines support queuing of multiple blocks for writing. In some cases, it is possible to merge multiple blocks into a single write request. The write callback does not necessarily occur for every write request.
3.13 Neighbors

Maintains the list of neighbor caches. Sends and receives ICP messages to neighbors. Decides which neighbors to query for a given request. File: neighbors.c.
3.14 IP/FQDN Cache

A cache of name-to-address and address-to-name lookups. These are hash tables keyed on the names and addresses. ipcache_nbgethostbyname() and fqdncache_nbgethostbyaddr() implement the non-blocking lookups. Files: ipcache.c, fqdncache.c.
3.15 Cache Manager

This provides access to certain information needed by the cache administrator. A companion program, cachemgr.cgi, can be used to make this information available via a Web browser. Cache manager requests to Squid are made with a special URL of the form
        cache_object://hostname/operation
The cache manager provides essentially ``read-only'' access to information. It does not provide a method for configuring Squid while it is running.
3.16 Network Measurement Database

In a number of situations, Squid finds it useful to know the estimated network round-trip time (RTT) between itself and origin servers. A particularly useful example is the peer selection algorithm. By making RTT measurements, a Squid cache will know if it, or one of its neighbors, is closest to a given origin server. The actual measurements are made with the pinger program, described below. The measured values are stored in a database indexed under two keys. The primary index field is the /24 prefix of the origin server's IP address. Secondly, a hash table of fully-qualified host names has data structures with links to the appropriate network entry. This allows Squid to quickly look up measurements when given either an IP address or a host name. The /24 prefix aggregation is used to reduce the overall database size. File: net_db.c.
3.17 Redirectors

Squid has the ability to rewrite requests from clients. After checking the access controls, but before checking for cache hits, requested URLs may optionally be written to an external redirector process. This program, which can be highly customized, may return a new URL to replace the original request. Common applications for this feature are extended access controls and local mirroring. File: redirect.c.
3.18 Autonomous System Numbers

Squid supports Autonomous System (AS) numbers as another access control element. The routines in asn.c query databases which map AS numbers into lists of CIDR prefixes. These results are stored in a radix tree which allows fast searching of the AS number for a given IP address.
3.19 Configuration File Parsing

The primary configuration file specification is in the file cf.data.pre. A simple utility program, cf_gen, reads the cf.data.pre file and generates cf_parser.c and squid.conf. cf_parser.c is included directly into cache_cf.c at compile time.
3.20 Callback Data Allocator

Squid's extensive use of callback functions makes it very susceptible to memory access errors. Care must be taken so that the callback_data memory is still valid when the callback function is executed. The routines in cbdata.c provide a uniform method for managing callback data memory, canceling callbacks, and preventing erroneous memory accesses.
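The idea can be illustrated with a minimal lock-count-plus-valid-flag allocator. This is a sketch of the concept only, not Squid's cbdata.c; the real API (cbdataAlloc(), cbdataLock(), cbdataValid(), cbdataUnlock(), cbdataFree()) follows the same pattern.

```c
#include <stdlib.h>

/* Minimal illustration of the cbdata idea: callback data carries a lock
 * count and a valid flag, so a pending callback can detect that its data
 * was "freed" while it was queued, instead of touching freed memory. */
typedef struct {
    int locks;   /* outstanding callback references */
    int valid;   /* cleared by cbFree(); memory released once locks reach 0 */
} CbHeader;

void *cbAlloc(size_t size) {
    CbHeader *h = calloc(1, sizeof(CbHeader) + size);
    h->valid = 1;
    return h + 1;                 /* hand out the memory after the header */
}
void cbLock(void *p)  { ((CbHeader *)p - 1)->locks++; }
int  cbValid(void *p) { return ((CbHeader *)p - 1)->valid; }
void cbFree(void *p) {
    CbHeader *h = (CbHeader *)p - 1;
    h->valid = 0;                 /* mark invalid... */
    if (h->locks == 0)
        free(h);                  /* ...and free only if no callback holds it */
}
void cbUnlock(void *p) {
    CbHeader *h = (CbHeader *)p - 1;
    if (--h->locks == 0 && !h->valid)
        free(h);                  /* last reference gone: release for real */
}
```

A callback first checks validity and bails out if the owner already freed the data, which is exactly the erroneous-access case cbdata.c guards against.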
3.21 Refcount Data Allocator (C++ Only)

Manual reference counting such as cbdata uses is error prone and time consuming for the programmer. C++'s operator overloading allows us to create automatic reference counting pointers that will free objects when they are no longer needed. With some care these objects can be passed to functions needing Callback Data pointers.
3.22 Debugging

Squid includes extensive debugging statements to assist in tracking down bugs and strange behavior. Every debug statement is assigned a section and level. Usually, every debug statement in the same source file has the same section. Levels are chosen depending on how much output will be generated, or how useful the provided information will be. The debug_options line in the configuration file determines which debug statements will be shown and which will not. The debug_options line assigns a maximum level for every section. If a given debug statement has a level less than or equal to the configured level for that section, it will be shown. This description probably sounds more complicated than it really is. File: debug.c. Note that debug() itself is a macro.
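The selection rule reduces to a single comparison. A sketch (the real debug() macro in debug.c wraps this kind of test around the output call; the array name here is illustrative):

```c
/* Sketch of the debug_options filtering rule described above: each section
 * has a configured maximum level, and a statement is shown when its level
 * is less than or equal to that maximum. */
#define MAX_DEBUG_SECTIONS 100

static int debug_levels[MAX_DEBUG_SECTIONS];  /* filled from debug_options */

int debug_enabled(int section, int level) {
    return level <= debug_levels[section];
}
```

For example, a configuration like ``debug_options ALL,1 33,2'' would set every section's maximum to 1 and then raise section 33's to 2.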
3.23 Error Generation

The routines in errorpage.c generate error messages from a template file and specific request parameters. This allows for customized error messages and multilingual support.
3.24 Event Queue

The routines in event.c maintain a linked-list event queue for functions to be executed at a future time. The event queue is used for periodic functions such as performing cache replacement, cleaning swap directories, as well as one-time functions such as ICP query timeouts.
3.25 Filedescriptor Management

Here we track the number of filedescriptors in use, and the number of bytes which have been read from or written to each file descriptor.
3.26 Hashtable Support

These routines implement generic hash tables. A hash table is created with a function for hashing the key values, and a function for comparing the key values.
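A minimal version of such a generic table, parameterized by a hash function and a key-comparison function, might look like this. It is a sketch of the idea only; Squid's own implementation lives in hash.c with a hash_create()-style API.

```c
#include <stdlib.h>
#include <string.h>

/* Generic chained hash table created with a hash function and a
 * key-comparison function, as described above. */
typedef unsigned (HashFn)(const void *key, unsigned size);
typedef int (CmpFn)(const void *a, const void *b);

typedef struct item {
    const void *key;
    void *value;
    struct item *next;
} Item;

typedef struct {
    unsigned size;
    HashFn *hash;
    CmpFn *cmp;
    Item **buckets;
} Hash;

Hash *hash_new(unsigned size, HashFn *hash, CmpFn *cmp) {
    Hash *h = malloc(sizeof(Hash));
    h->size = size; h->hash = hash; h->cmp = cmp;
    h->buckets = calloc(size, sizeof(Item *));
    return h;
}

void hash_insert(Hash *h, const void *key, void *value) {
    unsigned i = h->hash(key, h->size);
    Item *it = malloc(sizeof(Item));
    it->key = key; it->value = value;
    it->next = h->buckets[i];       /* push onto the bucket's chain */
    h->buckets[i] = it;
}

void *hash_find(Hash *h, const void *key) {
    Item *it;
    for (it = h->buckets[h->hash(key, h->size)]; it; it = it->next)
        if (h->cmp(key, it->key) == 0)
            return it->value;
    return NULL;
}

/* Example key functions for C-string keys. */
unsigned str_hash(const void *key, unsigned size) {
    unsigned n = 0;
    const char *s;
    for (s = key; *s; s++)
        n = n * 31 + (unsigned char)*s;
    return n % size;
}
int str_cmp(const void *a, const void *b) { return strcmp(a, b); }
```

The same table then works for any key type, e.g. the binary MD5 store keys, simply by supplying different hash and compare functions.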
3.27 HTTP Anonymization

These routines support anonymizing of HTTP requests leaving the cache. Either specific request headers will be removed (the ``standard'' mode), or only specific request headers will be allowed (the ``paranoid'' mode).
3.28 Delay Pools

Delay pools provide bandwidth regulation by restricting the rate at which Squid reads from a server before sending to a client. They do not prevent cache hits from being sent at maximal capacity. Delay pools can aggregate the bandwidth from multiple machines and users to provide more or less general restrictions.
3.29 Internet Cache Protocol

Here we implement the Internet Cache Protocol. This protocol is documented in RFC 2186 and RFC 2187. The bulk of the code is in the icp_v2.c file. The other, icp_v3.c, is a single function for handling ICP queries from Netcache/Netapp caches; they use a different version number and a slightly different message format.
3.30 Ident Lookups

These routines support RFC 931 ``Ident'' lookups. An ident server running on a host will report the user name associated with a connected TCP socket. Some sites use this facility for access control and logging purposes.
3.31 Memory Management

These routines allocate and manage pools of memory for frequently-used data structures. When the memory_pools configuration option is enabled, unused memory is not actually freed. Instead it is kept for future use. This may result in more efficient use of memory at the expense of a larger process size.
3.32 Multicast Support

Currently, multicast is only used for ICP queries. The routines in this file implement joining a UDP socket to a multicast group (or groups), and setting the multicast TTL value on outgoing packets.
3.33 Persistent Server Connections

These routines manage idle, persistent HTTP connections to origin servers and neighbor caches. Idle sockets are indexed in a hash table by their socket address (IP address and port number). Up to 10 idle sockets will be kept for each socket address, but only for 15 seconds. After 15 seconds, idle socket connections are closed.
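The per-address policy can be sketched as a small bucket with a size cap and an idle timeout. Names and structure here are illustrative, not pconn.c; the real code keeps one such list per hash table entry.

```c
#include <time.h>

/* Sketch of the idle-connection bookkeeping described above: at most
 * PCONN_MAX idle sockets per (IP, port) key, each discarded after
 * PCONN_TIMEOUT seconds of idleness. */
#define PCONN_MAX     10
#define PCONN_TIMEOUT 15

typedef struct {
    int fd[PCONN_MAX];
    time_t idle_since[PCONN_MAX];
    int count;
} PconnBucket;

/* Store an idle fd; returns 0 (caller should just close the socket)
 * when the bucket is already full. */
int pconn_push(PconnBucket *b, int fd, time_t now) {
    if (b->count >= PCONN_MAX)
        return 0;
    b->fd[b->count] = fd;
    b->idle_since[b->count] = now;
    b->count++;
    return 1;
}

/* Fetch a reusable fd, skipping (and dropping) ones idle too long;
 * returns -1 when none is available. */
int pconn_pop(PconnBucket *b, time_t now) {
    while (b->count > 0) {
        int i = --b->count;
        if (now - b->idle_since[i] <= PCONN_TIMEOUT)
            return b->fd[i];
        /* else: timed out; a real implementation would close(fd[i]) here */
    }
    return -1;
}
```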
3.34 Refresh Rules

These routines decide whether a cached object is stale or fresh, based on the refresh_pattern configuration options. If an object is fresh, it can be returned as a cache hit. If it is stale, then it must be revalidated with an If-Modified-Since request.
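The core freshness decision can be sketched as a comparison of the object's age against a refresh_pattern-style rule (min, percent, max). The exact ordering of the checks is an assumption patterned on refresh.c.

```c
/* Sketch of the staleness decision described above, driven by one
 * refresh_pattern-style rule. */
typedef struct {
    int min;   /* seconds: objects younger than this are fresh */
    int pct;   /* percent: fresh while age < pct% of the object's LM-age */
    int max;   /* seconds: objects older than this are stale */
} RefreshRule;

/* Returns 1 if the object may be served as a hit, 0 if it must be
 * revalidated (e.g. with an If-Modified-Since request). */
int object_is_fresh(const RefreshRule *r,
                    long age,        /* now - time the object was stored */
                    long lm_age) {   /* stored - Last-Modified, or -1 if unknown */
    if (age > r->max)
        return 0;                    /* older than max: always stale */
    if (age <= r->min)
        return 1;                    /* younger than min: always fresh */
    if (lm_age > 0 && age * 100 < lm_age * (long)r->pct)
        return 1;                    /* within the last-modified factor */
    return 0;
}
```

So a rule like ``refresh_pattern . 60 20% 86400'' keeps anything under a minute fresh, expires anything over a day, and in between serves hits while the object's age stays under 20% of its age at the time it was last modified.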
3.35 SNMP Support

These routines implement SNMP for Squid. At the present time, we have made almost all of the cachemgr information available via SNMP.
3.36 URN Support

We are experimenting with URN support in Squid version 1.2. Note, we're not talking full-blown generic URNs here. This is primarily targeted toward using URNs as a smart way of handling lists of mirror sites. For more details, please see URN support in Squid.
3.37 ESI

ESI is an implementation of Edge Side Includes (http://www.esi.org). ESI is implemented as a client side stream and a small modification to client_side_reply.c to check whether ESI should be inserted into the reply stream or not.
4. External Programs

4.1 dnsserver

Because the standard gethostbyname(3) library call blocks, Squid must use external processes to actually make these calls. Typically there will be ten dnsserver processes spawned from Squid. Communication occurs via TCP sockets bound to the loopback interface. The functions in dns.c are primarily concerned with starting and stopping the dnsservers. Reading and writing to and from the dnsservers occurs in the IP and FQDN cache modules.
4.2 pinger

Although it would be possible for Squid to send and receive ICMP messages directly, we use an external process for two important reasons:
Because squid handles many filedescriptors simultaneously, we get much more accurate RTT measurements when ICMP is handled by a separate process.
Superuser privileges are required to send and receive ICMP. Rather than require Squid to be started as root, we prefer to have the smaller and simpler pinger program installed with setuid permissions.
4.3 unlinkd

The unlink(2) system call can cause a process to block for a significant amount of time. Therefore we do not want to make unlink() calls from Squid. Instead we pass them to this external process.
4.4 redirector

A redirector process reads URLs on stdin and writes (possibly changed) URLs on stdout. It is implemented as an external process to maximize flexibility.
5. Flow of a Typical Request

A client connection is accepted by the client-side socket support and parsed, or is directly created via clientBeginRequest.
The access controls are checked. The client-side-request builds an ACL state data structure and registers a callback function for notification when access control checking is completed.
After the access controls have been verified, the request may be redirected.
The client-side-request is forwarded up the client stream to GetMoreData, which looks for the requested object in the cache, and/or Vary: versions of the same. If it is a cache hit, then the client-side registers its interest in the StoreEntry. Otherwise, Squid needs to forward the request, perhaps with an If-Modified-Since header.
The request-forwarding process begins with protoDispatch. This function begins the peer selection procedure, which may involve sending ICP queries and receiving ICP replies. The peer selection procedure also involves checking configuration options such as never_direct and always_direct.
When the ICP replies (if any) have been processed, we end up at protoStart. This function calls an appropriate protocol-specific function for forwarding the request. Here we will assume it is an HTTP request.
The HTTP module first opens a connection to the origin server or cache peer. If there is no idle persistent socket available, a new connection request is given to the Network Communication module with a callback function. The comm.c routines may try establishing a connection multiple times before giving up.
When a TCP connection has been established, HTTP builds a request buffer and submits it for writing on the socket. It then registers a read handler to receive and process the HTTP reply.
As the reply is initially received, the HTTP reply headers are parsed and placed into a reply data structure. As reply data is read, it is appended to the StoreEntry. Every time data is appended to the StoreEntry, the client-side is notified of the new data via a callback function. The rate at which reading occurs is regulated by the delay pools routines, via the deferred read mechanism.
As the client-side is notified of new data, it copies the data from the StoreEntry and submits it for writing on the client socket.
As data is appended to the StoreEntry, and the client(s) read it, the data may be submitted for writing to disk.
When the HTTP module finishes reading the reply from the upstream server, it marks the StoreEntry as ``complete.'' The server socket is either closed or given to the persistent connection pool for future use.
When the client-side has written all of the object data, it unregisters itself from the StoreEntry. At the same time it either waits for another request from the client, or closes the client connection.
6. Callback Functions

7. The Main Loop: comm_select()

At the core of Squid is the select(2) system call. Squid uses select() or poll(2) to process I/O on all open file descriptors. Hereafter we'll only use ``select'' to refer generically to either system call.
The select() and poll() system calls work by waiting for I/O events on a set of file descriptors. Squid only checks for read and write events. Squid knows that it should check for reading or writing when there is a read or write handler registered for a given file descriptor. Handler functions are registered with the commSetSelect function. For example:
        commSetSelect(fd, COMM_SELECT_READ, clientReadRequest, conn, 0);
In this example, fd is a TCP socket to a client connection. When there is data to be read from the socket, then the select loop will execute
        clientReadRequest(fd, conn);
The I/O handlers are reset every time they are called. In other words, a handler function must re-register itself with commSetSelect if it wants to continue reading or writing on a file descriptor. The I/O handler may be canceled before being called by providing NULL arguments, e.g.:
        commSetSelect(fd, COMM_SELECT_READ, NULL, NULL, 0);
These I/O handlers (and others) and their associated callback data pointers are saved in the fde data structure:
        struct _fde {
                ...
                PF *read_handler;
                void *read_data;
                PF *write_handler;
                void *write_data;
                close_handler *close_handler;
                DEFER *defer_check;
                void *defer_data;
        };
read_handler and write_handler are called when the file descriptor is ready for reading or writing, respectively. The close_handler is called when the filedescriptor is closed. The close_handler is actually a linked list of callback functions to be called.
In some situations we want to defer reading from a filedescriptor, even though it has data for us to read. This may be the case when data arrives from the server-side faster than it can be written to the client-side. Before adding a filedescriptor to the ``read set'' for select, we call defer_check (if it is non-NULL). If defer_check returns 1, then we skip the filedescriptor for that time through the select loop.
These handlers are stored in the FD_ENTRY structure as defined in comm.h. fd_table[] is the global array of FD_ENTRY structures. The handler functions are of type PF, which is a typedef:
    typedef void (*PF) (int, void *);
The close handler is really a linked list of handler functions. Each handler also has an associated pointer (void *data) to some kind of data structure.
comm_select() is the function which issues the select() system call. It scans the entire fd_table[] array looking for handler functions. Each file descriptor with a read handler will be set in the fd_set read bitmask. Similarly, write handlers are scanned and bits set for the write bitmask. select() is then called, and the returned read and write bitmasks are scanned for descriptors with pending I/O. For each ready descriptor, the handler is called. Note that the handler is cleared from the FD_ENTRY before it is called.
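A miniature, self-contained version of this scan-select-dispatch cycle (stand-in types, not comm.c) shows the key property: the handler is cleared before it runs, so it must re-register itself to keep reading.

```c
#include <sys/select.h>
#include <unistd.h>
#include <stddef.h>

/* Miniature comm_select(): scan a tiny fd table for read handlers, build
 * the fd_set, select(), then dispatch each ready handler -- clearing it
 * from the table before the call, as the text describes. */
typedef void (*PF)(int fd, void *data);

#define FD_MAX 32
static struct { PF read_handler; void *read_data; } fd_table[FD_MAX];

void set_read_handler(int fd, PF handler, void *data) {
    fd_table[fd].read_handler = handler;
    fd_table[fd].read_data = data;
}

/* One pass of the loop; returns the number of handlers dispatched. */
int mini_comm_select(void) {
    fd_set readfds;
    struct timeval tv = { 0, 0 };   /* poll without blocking, for this demo */
    int fd, maxfd = -1, dispatched = 0;
    FD_ZERO(&readfds);
    for (fd = 0; fd < FD_MAX; fd++)
        if (fd_table[fd].read_handler) {
            FD_SET(fd, &readfds);
            if (fd > maxfd) maxfd = fd;
        }
    if (maxfd < 0 || select(maxfd + 1, &readfds, NULL, NULL, &tv) <= 0)
        return 0;
    for (fd = 0; fd <= maxfd; fd++)
        if (FD_ISSET(fd, &readfds) && fd_table[fd].read_handler) {
            PF handler = fd_table[fd].read_handler;
            void *data = fd_table[fd].read_data;
            fd_table[fd].read_handler = NULL;   /* cleared before the call */
            handler(fd, data);
            dispatched++;
        }
    return dispatched;
}

/* Demo handler: consume one byte and record it; does NOT re-register,
 * so a second pass finds nothing to do. */
void demo_reader(int fd, void *data) {
    char c = 0;
    if (read(fd, &c, 1) == 1)
        *(int *)data = c;
}
```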
After each handler is called, comm_select_incoming() is called to process new HTTP and ICP requests.
Typical read handlers are httpReadReply(), diskHandleRead(), icpHandleUdp(), and ipcache_dnsHandleRead(). Typical write handlers are commHandleWrite(), diskHandleWrite(), and icpUdpReply(). The handler function is set with commSetSelect(), with the exception of the close handlers, which are set with comm_add_close_handler().
The close handlers are normally called from comm_close(). The job of the close handlers is to deallocate data structures associated with the file descriptor. For this reason comm_close() must normally be the last function in a sequence to prevent accessing just-freed memory.
The timeout and lifetime handlers are called for file descriptors which have been idle for too long. They are further discussed in a following chapter.
8. Client Streams

8.1 Introduction

A clientStream is a uni-directional loosely coupled pipe. Each node consists of four methods - read, callback, detach, and status, along with the stream housekeeping variables (a dlink node and pointer to the head of the list), context data for the node, and read request parameters - readbuf, readlen and readoff (in the body).
clientStream is the basic unit for scheduling, and the clientStreamRead and clientStreamCallback calls allow for deferred scheduled activity if desired.
Theory on stream operation:
Something creates a pipeline. At a minimum it needs a head with a status method and a read method, and a tail with a callback method and a valid initial read request.
Other nodes may be added into the pipeline.
The tail-1th node's read method is called.
For each node going up the pipeline, the node either:
satisfies the read request, or
inserts a new node above it and calls clientStreamRead, or
calls clientStreamRead
There is no requirement for the Read parameters from different nodes to have any correspondence, as long as the callbacks provided are correct.
The first node that satisfies the read request MUST generate an httpReply to be passed down the pipeline. Body data MAY be provided.
On the first callback a node MAY insert further downstream nodes in the pipeline, but MAY NOT do so thereafter.
The callbacks progress down the pipeline until a node makes further reads instead of satisfying the callback (go to step 4), or the end of the pipeline is reached, where a new read sequence may be scheduled.
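A toy two-node pipeline (stand-in types, not the real clientStream API) makes the pull sequence concrete: the tail asks the node above it for data, and the head satisfies the read by invoking the callback back down the pipe.

```c
#include <string.h>

/* Toy pull pipeline in the spirit described above.  The real machinery is
 * clientStreamRead()/clientStreamCallback() over clientStreamNode lists;
 * these names and fields are illustrative only. */
typedef struct Node Node;
struct Node {
    Node *up, *down;
    void (*read)(Node *self);                       /* ask the node above for data */
    void (*callback)(Node *self, const char *buf);  /* data coming back down */
    char result[64];                                /* tail's private context */
};

/* Head: satisfies any read immediately with a canned "reply". */
void head_read(Node *self) {
    self->down->callback(self->down, "HTTP/1.0 200 OK");
}

/* Tail: records what arrived. */
void tail_callback(Node *self, const char *buf) {
    strncpy(self->result, buf, sizeof(self->result) - 1);
}

/* Wire up head<->tail and run one read cycle; returns the tail's data. */
const char *run_pipeline(Node *head, Node *tail) {
    head->read = head_read;
    tail->callback = tail_callback;
    head->down = tail;
    tail->up = head;
    tail->result[0] = '\0';
    tail->up->read(tail->up);   /* "the tail-1th node's read method is called" */
    return tail->result;
}
```

Inserting a transfer-encoding node would mean splicing a third Node between the two, with its read delegating upward and its callback transforming data before passing it down.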
8.2 Implementation notes

ClientStreams have been implemented for the client side reply logic, starting with either a client socket (tail of the list is clientSocketRecipient) or a custom handler for in-squid requests, and with the pipeline HEAD being clientGetMoreData, which uses clientSendMoreData to send data down the pipeline.
Client POST bodies do not currently use a pipeline; they use the previous code to send the data. This is a TODO when time permits.
8.3 What's in a node

Each node must have:
read method - to allow loose coupling in the pipeline. (The reader may therefore change if the pipeline is altered, even mid-flow).
callback method - likewise.
status method - likewise.
detach method - used to ensure all resources are cleaned up properly.
dlink head pointer - to allow list inserts and deletes from within a node.
context data - to allow the called back nodes to maintain their private information.
read request parameters - For two reasons:
To allow a node to determine the requested data offset, length and target buffer dynamically. Again, this is to promote loose coupling.
Because of the callback nature of squid, every node would have to keep these parameters in their context anyway, so this reduces programmer overhead.
8.4 Method details

The first parameter is always the 'this' reference for the client stream - a clientStreamNode *.
Read
Parameters:
clientHttpRequest * - superset of request data, being winnowed down over time. MUST NOT be NULL.
offset, length, buffer - what, how much and where.
Side effects:
Triggers a read of data that satisfies the httpClientRequest metainformation and (if appropriate) the offset, length and buffer parameters.
Callback
Parameters:
clientHttpRequest * - superset of request data, being winnowed down over time. MUST NOT be NULL.
httpReply * - not NULL on the first call back only. Ownership is passed down the pipeline. Each node may alter the reply if appropriate.
buffer, length - where and how much.
Side effects:
Return data to the next node in the stream. The data may be returned immediately, or may be delayed for a later scheduling cycle.
Detach
Parameters:
clientHttpRequest * - MUST NOT be NULL.
Side effects:
Removes this node from a clientStream. The stream infrastructure handles the removal. This node MUST have cleaned up all context data, UNLESS scheduled callbacks will take care of that.
Informs the prev node in the list of this node's detachment.
Status
Parameters:
clientHttpRequest * - MUST NOT be NULL.
Side effects:
Allows nodes to query the upstream nodes for:
stream ABORTS - request cancelled for some reason. upstream will not accept further reads().
stream COMPLETION - upstream has completed and will not accept further reads().
stream UNPLANNED COMPLETION - upstream has completed, but not at a pre-planned location (used for keepalive checking), and will not accept further reads().
stream NONE - no special status, further reads permitted.
Abort
Parameters:
clientHttpRequest * - MUST NOT be NULL.
Side effects:
Detaches the tail of the stream. CURRENTLY DOES NOT clean up the tail node data - this must be done separately. Thus Abort may ONLY be called by the tail node.
9. Processing Client Requests

To be written...
10. Delay Pools

10.1 Introduction

A DelayPool is a Composite used to manage bandwidth for any request assigned to the pool by an access expression. DelayIds are used to manage the bandwidth on a given request, whereas a DelayPool manages the bandwidth availability and assigned DelayIds.
10.2 Extending Delay Pools

A CompositePoolNode is the base type for all members of a DelayPool. Any child must implement the RefCounting primitives, as well as five delay pool functions:
stats() - provide cachemanager statistics for itself.
dump() - generate squid.conf syntax for the current configuration of the item.
update() - allocate more bandwidth to all buckets in the item.
parse() - accept squid.conf syntax for the item, and configure for use appropriately.
id() - return a DelayId entry for the current item.
A DelayIdComposite is the base type for all DelayIds. Concrete DelayIds must implement the refcounting primitives, as well as two delay id functions:
bytesWanted() - return the largest amount of bytes that this delay id allows by policy.
bytesIn() - record the use of bandwidth by the request(s) that this delayId is monitoring.
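A single delay bucket behind these two calls can be sketched as follows; the field names and the once-per-update restore amount are illustrative only.

```c
/* Sketch of one delay "bucket" behind bytesWanted()/bytesIn(): update()
 * periodically adds bandwidth up to a maximum; reads are clipped to what
 * the bucket currently holds. */
typedef struct {
    long level;      /* bytes currently available */
    long max_level;  /* bucket capacity */
    long restore;    /* bytes added per update() call (per second, say) */
} DelayBucket;

void bucket_update(DelayBucket *b) {
    b->level += b->restore;
    if (b->level > b->max_level)
        b->level = b->max_level;     /* a full bucket stays full */
}

/* bytesWanted(): the most this request may read right now. */
long bucket_bytes_wanted(const DelayBucket *b, long request) {
    return request < b->level ? request : b->level;
}

/* bytesIn(): record bandwidth actually used by the request(s). */
void bucket_bytes_in(DelayBucket *b, long bytes) {
    b->level -= bytes;
}
```

A composite pool is then a tree of such buckets (aggregate, per-network, per-host), where a request's effective allowance is the minimum over the buckets its DelayId touches.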
Composite creation is currently under design review, so see the DelayPool class and follow the parse() code path for details.
10.3 Neat things that could be done

With the composite structure, some neat things have become possible. For instance:
Dynamically defined pool arrangements - for instance an aggregate (class 1) combined with the per-class-C-net tracking of a class 3 pool, without the individual host tracking. This differs from a class 3 pool with -1/-1 in the host bucket, because no memory or CPU would be used on hosts, whereas with a class 3 pool, they are allocated and used.
Per request bandwidth limits - a delayId that contains its own bucket could limit each request independently to a given policy, with no aggregate restrictions.
11. Storage Manager

11.1 Introduction

The Storage Manager is the glue between client and server sides. Every object saved in the cache is allocated a StoreEntry structure. While the object is being accessed, it also has a MemObject structure.
Squid can quickly locate cached objects because it keeps (in memory) a hash table of all StoreEntry's. The keys for the hash table are MD5 checksums of the object's URI. In addition, there is also a storage policy such as LRU that keeps track of the objects and determines the removal order when space needs to be reclaimed. For the LRU policy this is implemented as a doubly linked list.
For each object the StoreEntry maps to a cache_dir and location via sdirno and sfileno. For the "ufs" store this file number (sfileno) is converted to a disk pathname by a simple modulo of L2 and L1, but other storage drivers may map sfileno in other ways. A cache swap file consists of two parts: the cache metadata and the object data. Note the object data includes the full HTTP reply---headers and body. The HTTP reply headers are not the same as the cache metadata.
Client-side requests register themselves with a StoreEntry to be notified when new data arrives. Multiple clients may receive data via a single StoreEntry. For POST and PUT requests, this process works in reverse. Server-side functions are notified when additional data is read from the client.
11.2 Object storage

To be written...
11.3 Object retrieval

To be written...
from: http://old.squid-cache.org/