
A quick tutorial: how to set up a VLAN network on a VPS (vlan vps)

AB资源网 2023-09-19 10:13 3421 views 0 comments


As the internet has continued to grow, the VPS (Virtual Private Server) has become an indispensable part of cloud computing. A VPS gives users control and configuration capabilities similar to a dedicated server, with good performance and flexibility at a relatively low cost. In day-to-day VPS use, however, network configuration is a critical step. This article explains how to set up a VLAN network on a VPS, to help readers get more out of their VPS.

I. What is a VLAN network?

First, it helps to understand what a VLAN is. VLAN stands for Virtual Local Area Network; it is a technique for logically segmenting a network at the switch. When a switch is configured to use VLANs, it assigns the frames on the LAN to different VLANs and switches traffic independently within each VLAN, so that resources can be shared while network traffic stays under control.

Creating a VLAN on a VPS, however, is not entirely straightforward; it requires a series of steps and settings, which are described below.

II. Prerequisites for creating a VLAN network

Before creating a VLAN, the following conditions must be met:

1. The VPS must support VLAN networking.

2. You must be able to log in to the VPS as root so that network settings can be changed.

3. The network interface to be added to the VLAN must support the 802.1Q VLAN standard (a quick way to check for 802.1Q support is shown below).
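
As a quick check (assuming a typical Linux kernel where 802.1Q tagging is provided by the 8021q module, and that modprobe and lsmod are available), the following commands load the module and confirm it is present:

modprobe 8021q

lsmod | grep 8021q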

III. Installing the required components

Before configuring the VLAN, the necessary tool must be installed. On CentOS/RHEL it can be installed with the following command:

yum install vconfig
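
The tool is packaged differently on other distributions; on Debian or Ubuntu, vconfig usually comes from the vlan package, so the equivalent (hedged) install command would be:

apt-get install vlan

On newer systems vconfig is deprecated in favour of the iproute2 ip command; equivalent ip commands are shown alongside the vconfig and ifconfig steps below.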

IV. Creating the VLAN network

1. Create a new VLAN ID. The VLAN ID is the key parameter that identifies a VLAN; a tagged sub-interface for it can be created with the following command:

vconfig add eth0 10

Here eth0 is the name of the physical network interface and 10 is the VLAN ID being created; the command creates a tagged sub-interface named eth0.10.
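
If vconfig is not available, the same sub-interface can be created with iproute2 (a hedged equivalent, assuming the ip command is installed):

ip link add link eth0 name eth0.10 type vlan id 10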

2. Assign an IP address to the VLAN interface. This is done with the following command:

ifconfig eth0.10 192.168.1.1 netmask 255.255.255.0 broadcast 192.168.1.255 up

Here eth0.10 is the name of the VLAN interface, 192.168.1.1 is the IP address being assigned, and netmask and broadcast set the subnet mask and broadcast address.
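
The iproute2 equivalent would be the following (again only a sketch; the /24 prefix length corresponds to netmask 255.255.255.0):

ip addr add 192.168.1.1/24 brd 192.168.1.255 dev eth0.10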

3. Bring the VLAN interface up. This is done with the following command:

ifconfig eth0.10 up
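
(The iproute2 equivalent is: ip link set eth0.10 up.) Interfaces configured this way with vconfig/ifconfig do not survive a reboot. On a CentOS/RHEL VPS the VLAN interface can be made persistent with an ifcfg file; the following is a minimal sketch using the device name and address from this example:

/etc/sysconfig/network-scripts/ifcfg-eth0.10:

DEVICE=eth0.10
VLAN=yes
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.1.1
NETMASK=255.255.255.0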

V. Testing the VLAN configuration

Once the VLAN network is configured, it can be tested with the following command:

ping 192.168.1.1

If the ping succeeds, the VLAN interface is configured correctly. Note that pinging the address you just assigned only exercises the local interface; for a more meaningful test, ping another host on the same VLAN.

VI. Summary

With the steps above, you can create a VLAN network on a VPS and achieve more efficient network resource sharing and traffic control. Of course, VLAN configuration involves many further settings that need to be adjusted and tuned for your specific network environment and requirements. Hopefully this introduction has been of some help.

Related questions and further reading:

  • Looking for English articles on ATM (Asynchronous Transfer Mode)
  • Why do the routes at the Hong Kong New World data center change so often?

Looking for English articles on ATM (Asynchronous Transfer Mode)


Hello! I'm glad to help with the problem you've run into; I've come across it before myself. Below is my own take on it. I hope it helps, and please forgive any mistakes!

Asynchronous Transfer Mode (ATM) is a cell relay, packet switching network and data link layer protocol which encodes data traffic into small fixed-sized cells (53 bytes; 48 bytes of data and 5 bytes of header information). ATM provides data link layer services that run over Layer 1 links. This differs from other technologies based on packet-switched networks (such as the Internet Protocol or Ethernet), in which variable sized packets (known as frames when referencing Layer 2) are used. ATM is a connection-oriented technology, in which a logical connection is established between the two endpoints before the actual data exchange begins.

The standards for ATM were first developed in the mid 1980s. The goal was to design a single networking strategy that could transport real-time video and audio as well as image files, text and email. Two groups, the International Telecommunications Union and the ATM Forum were involved in the creation of the standards. ATM has been used primarily with telephone and IP networks.

ATM Addressing

A Virtual Channel (VC) denotes the transport of ATM cells which have the same unique identifier, called the Virtual Channel Identifier (VCI). This identifier is encoded in the cell header. A virtual channel represents the basic means of communication between two end-points, and is analogous to an X.25 virtual circuit.

A Virtual Path (VP) denotes the transport of ATM cells belonging to virtual channels which share a common identifier, called the Virtual Path Identifier (VPI), which is also encoded in the cell header. A virtual path, in other words, is a grouping of virtual channels which connect the same end-points, and which share a traffic allocation. This two layer approach can be used to separate the management of routes and bandwidth from the setup of individual connections.

Successes and failures of ATM technology

ATM has proved very successful in the WAN scenario and numerous telecommunication providers have implemented ATM in their wide-area network cores. Many ADSL implementations also use ATM. However, ATM has failed to gain wide use as a LAN technology, and its complexity has held back its full deployment as the single integrating network technology in the way that its inventors originally intended. Since there will always be both brand-new and obsolescent link-layer technologies, particularly in the LAN area, not all of them will fit neatly into the synchronous optical networking model for which ATM was designed. Therefore, a protocol is needed to provide a unifying layer over both ATM and non-ATM link layers, as ATM itself cannot fill that role. IP already does that; therefore, there is often no point in implementing ATM at the network layer.

In addition, the need for cells to reduce jitter has declined as transport speeds increased (see below), and improvements in Voice over IP (VoIP) have made the integration of speech and data possible at the IP layer, again removing the incentive for ubiquitous deployment of ATM. Most Telcos are now planning to integrate their voice network activities into their IP networks, rather than their IP networks into the voice infrastructure.

Many technically sound ideas from ATM were adopted by MPLS, a generic Layer 2 packet switching protocol. ATM remains widely deployed, and is used as a multiplexing service in DSL networks, where its compromises fit DSL’s low-data-rate needs well. In turn, DSL networks support IP (and IP services such as VoIP) via PPP over ATM and Ethernet over ATM (RFC 2684).

ATM will remain deployed for some time in higher-speed interconnects where carriers have already committed themselves to existing ATM deployments; ATM is used here as a way of unifying PDH/SDH traffic and packet-switched traffic under a single infrastructure. However, ATM is increasingly challenged by the speed and traffic shaping requirements of converged networks. In particular, the complexity of segmentation and reassembly (SAR) imposes a performance bottleneck, as the fastest SARs known run at 10 Gbit/s and have limited traffic shaping capabilities. Currently it seems likely that gigabit Ethernet implementations (10Gbit-Ethernet, Metro Ethernet) will replace ATM as a technology of choice in new WAN implementations.

Recent developments

Interest in using native ATM for carrying live video and audio has increased recently. In these environments, low latency and very high quality of service are required to handle linear audio and video streams. Towards this goal standards are being developed such as AES47 (IEC 62365), which provides a standard for professional uncompressed audio transport over ATM. This is worth comparing with professional video over IP.

ATM concepts

(Image: IBM Turboways ATM 155 PCI network interface card)

Why cells?

The motivation for the use of small data cells was the reduction of jitter (delay variance, in this case) in the multiplexing of data streams; reduction of this (and also end-to-end round-trip delays) is particularly important when carrying voice traffic.

This is because the conversion of digitized voice back into an analog audio signal is an inherently real-time process, and to do a good job, the codec that does this needs an evenly spaced (in time) stream of data items. If the next data item is not available when it is needed, the codec has no choice but to produce silence or guess – and if the data is late, it is useless, because the time period when it should have been converted to a signal has already passed.

Now consider a speech signal reduced to packets, and forced to share a link with bursty data traffic (i.e. some of the data packets will be large). No matter how small the speech packets could be made, they would always encounter full-size data packets, and under normal queuing conditions, might experience maximum queuing delays.

At the time ATM was designed, 155 Mbit/s SDH (135 Mbit/s payload) was considered a fast optical network link, and many PDH links in the digital network were considerably slower, ranging from 1.544 to 45 Mbit/s in the USA (2 to 34 Mbit/s in Europe).

At this rate, a typical full-length 1500-byte (12000-bit) data packet would take 77.42 µs to transmit. On a lower-speed link, such as a 1.544 Mbit/s T1 link, a 1500-byte packet would take up to 7.8 milliseconds.
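
As a quick sanity check of those figures: 1500 bytes is 12,000 bits, so at 155 Mbit/s the serialization time is 12,000 / 155,000,000 ≈ 77 µs, and at 1.544 Mbit/s it is 12,000 / 1,544,000 ≈ 7.8 ms, matching the numbers above.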

A queueing delay induced by several such data packets might be several times the figure of 7.8 ms, in addition to any packet generation delay in the shorter speech packet. This was clearly unacceptable for speech traffic, which needs to have low jitter in the data stream being fed into the codec if it is to produce good-quality sound. A packet voice system can produce this in a number of ways:

Have a playback buffer between the network and the codec, one large enough to tide the codec over almost all the jitter in the data. This allows smoothing out the jitter, but the delay introduced by passage through the buffer would be such that echo cancellers would be required even in local networks; this was considered too expensive at the time. Also, it would have increased the delay across the channel, and conversation is difficult over high-delay channels.

Build a system which can inherently provide low jitter (and minimal overall delay) to traffic which needs it.

Operate on a 1:1 user basis (i.e., a dedicated pipe).

ATM was designed to implement a low-jitter network interface. However, to be able to provide short queueing delays, but also be able to carry large datagrams, it had to have cells. ATM broke up all packets, data, and voice streams into 48-byte chunks, adding a 5-byte routing header to each one so that they could be reassembled later. The choice of 48 bytes was, as is all too often the case, political instead of technical. When the CCITT was standardizing ATM, parties from the United States wanted a 64-byte payload because having the size be a power of 2 made working with the data easier and this size was felt to be a good compromise between larger payloads optimized for data transmission and shorter payloads optimized for real-time applications like voice; parties from Europe wanted 32-byte payloads because the small size (and therefore short transmission times) simplify voice applications with respect to echo cancellation. Most of the interested European parties eventually came around to the arguments made by the Americans, but France and a few allies held out until the bitter end. With 32 bytes, France would have been able to implement an ATM-based voice network with calls from one end of France to the other requiring no echo cancellation. 48 bytes (plus 5 header bytes = 53) was chosen as a compromise between the two sides, but it was ideal for neither and everybody has had to live with it ever since. 5-byte headers were chosen because it was thought that 10% of the payload was the maximum price to pay for routing information. ATM multiplexed these 53-byte cells instead of packets. Doing so reduced the worst-case queuing jitter by a factor of almost 30, removing the need for echo cancellers.
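
The factor of almost 30 can be checked against the 155 Mbit/s figures above: a 53-byte cell is 424 bits and takes roughly 424 / 155,000,000 ≈ 2.7 µs to transmit, versus about 77 µs for a full 1500-byte packet, a ratio of roughly 28.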

Cells in practice

Different types of services are supported by ATM via ATM Adaptation Layers (AAL). Standardized AALs include AAL1, AAL2, and AAL5, and the rarely used AAL3 and AAL4. AAL1 is used for constant bit rate (CBR) services and circuit emulation. AAL2 through AAL4 are used for variable bit rate (VBR) services, and AAL5 for data. Which AAL is in use for a given cell is not encoded in the cell. Instead, it is negotiated by or configured at the endpoints on a per-virtual-connection basis.

Since the time ATM was designed, networks have become much faster. A 1500-byte (12000-bit) full-size Ethernet packet takes only 1.2 µs to transmit on a 10 Gbit/s optical network, removing the need for small cells to reduce jitter. Some consider that this removes the need for ATM in the network backbone. Additionally, the hardware for implementing the service adaptation for IP packets is expensive at very high speeds. Specifically, the cost of segmentation and reassembly (SAR) hardware at OC-3 and above speeds makes ATM less competitive for IP than Packet Over SONET (POS). SAR performance limits mean that the fastest IP router ATM interfaces are OC12 – OC48 (STM4 – STM16), while (at the time of writing) POS can operate at OC-192 (STM64) with higher speeds expected in the future.

On slow links (2 Mbit/s and below), ATM still makes sense, and this is why so many ADSL systems use ATM as an intermediate layer between the physical link layer and a Layer 2 protocol like PPP or Ethernet.

At these lower speeds, ATM’s ability to carry multiple logical circuits on a single physical or virtual medium is useful, although other techniques exist, such as PPP and Ethernet VLANs, which are optional in VDSL implementations. DSL can be used as an access method for an ATM network, allowing a DSL termination point in a telephone central office to connect to many internet service providers across a wide-area ATM network. In the United States, at least, this has allowed DSL providers to provide DSL access to the customers of many internet service providers. Since one DSL termination point can support multiple ISPs, the economic feasibility of DSL is substantially improved.

Why virtual circuits?

ATM is a channel-based transport layer, using Virtual circuits (VCs). This is encompassed in the concept of the Virtual Paths (VP) and Virtual Channels. Every ATM cell has an 8- or 12-bit Virtual Path Identifier (VPI) and 16-bit Virtual Channel Identifier (VCI) pair defined in its header. Together, these identify the virtual circuit used by the connection. The length of the VPI varies according to whether the cell is sent on the user-network interface (on the edge of the network), or if it is sent on the network-network interface (inside the network).

As these cells traverse an ATM network, switching is achieved by changing the VPI/VCI values. Although the VPI/VCI values are not necessarily consistent from one end of the connection to the other, the concept of a circuit is consistent (unlike IP, where any given packet could get to its destination by a different route than the others).

Another advantage of the use of virtual circuits is the ability to use them as a multiplexing layer, allowing different services (such as voice, Frame Relay, n*64 channels, IP) to be carried over the same physical link.

Using cells and virtual circuits for traffic engineering

Another key ATM concept is that of the traffic contract. When an ATM circuit is set up each switch is informed of the traffic class of the connection.

ATM traffic contracts are part of the mechanism by which “Quality of Service” (QoS) is ensured. There are four basic types (and several variants) which each have a set of parameters describing the connection.

CBR – Constant bit rate: you specify a Peak Cell Rate (PCR), which is constant.

VBR – Variable bit rate: you specify an average cell rate, which can peak at a certain level for a maximum interval before being problematic.

ABR – Available bit rate: you specify a minimum guaranteed rate.

UBR – Unspecified bit rate: your traffic is allocated all remaining transmission capacity.

VBR has real-time and non-real-time variants, and is used for “bursty” traffic. Non-real-time is usually abbreviated to vbr-nrt.

Most traffic classes also introduce the concept of Cell Delay Variation Tolerance (CDVT) which defines the “clumping” of cells in time.

Traffic contracts are usually maintained by the use of “Shaping”, a combination of queuing and marking of cells, and enforced by “Policing”.

Traffic shaping

Traffic shaping is usually done at the entry point to an ATM network and attempts to ensure that the cell flow will meet its traffic contract.

Traffic policing

To maintain network performance it is possible to police virtual circuits against their traffic contracts. If a circuit is exceeding its traffic contract, the network can either drop the cells or mark the Cell Loss Priority (CLP) bit (to identify a cell as discardable farther down the line). Basic policing works on a cell by cell basis, but this is sub-optimal for encapsulated packet traffic (as discarding a single cell will invalidate the whole packet). As a result, schemes such as Partial Packet Discard (PPD) and Early Packet Discard (EPD) have been created that will discard a whole series of cells until the next frame starts. This reduces the number of redundant cells in the network, saving bandwidth for full frames. EPD and PPD work with AAL5 connections as they use the frame end bit to detect the end of packets.

Types of virtual circuits and paths

Virtual circuits and virtual paths can be built statically or dynamically. Static circuits (permanent virtual circuits or PVCs) or paths (permanent virtual paths or PVPs) require that the provisioner must build the circuit as a series of segments, one for each pair of interfaces through which it passes.

PVPs and PVCs are conceptually simple, but require significant effort in large networks. They also do not support the re-routing of service in the event of a failure. Dynamically built PVPs (soft PVPs or SPVPs) and PVCs (soft PVCs or SPVCs), in contrast, are built by specifying the characteristics of the circuit (the service “contract”) and the two endpoints.

Finally, switched virtual circuits (SVCs) are built and torn down on demand when requested by an end piece of equipment. One application for SVCs is to carry individual telephone calls when a network of telephone switches are inter-connected by ATM. SVCs were also used in attempts to replace local area networks with ATM.

Virtual circuit routing

Most ATM networks supporting SPVPs, SPVCs, and SVCs use the Private Network Node Interface or Private Network-to-Network Interface (PNNI) protocol. PNNI uses the same shortest path first algorithm used by OSPF and IS-IS to route IP packets to share topology information between switches and select a route through a network. PNNI also includes a very powerful summarization mechanism to allow construction of very large networks, as well as a call admission control (CAC) algorithm that determines whether sufficient bandwidth is available on a proposed route through a network to satisfy the service requirements of a VC or VP.

Call admission and connection establishment

A connection has to be established for two parties to be able to send cells to each other. In ATM this is called a VC (“Virtual Connection”). It can be a PVC (“Permanent Virtual Connection”), which is created administratively, or an SVC (“Switched Virtual Connection”), which is created as needed by the communicating parties. SVC creation is done by “signaling”, in which the requesting party indicates the address of the receiving party, the type of service requested, and traffic parameters if applicable to the selected service. “Call admission” is then done by the network to confirm that the requested resources are available, and that a route exists for the connection.

Structure of an ATM cell

An ATM cell consists of a 5 byte header and a 48 byte payload. The payload size of 48 bytes was chosen as described above (“Why Cells?”).

ATM defines two different cell formats: NNI (Network-Network Interface) and UNI (User-Network Interface). Most ATM links use UNI cell format.

Diagram of the UNI ATM cell (header fields in order, followed by the payload):

GFC (4 bits) | VPI (8 bits) | VCI (16 bits) | PT (3 bits) | CLP (1 bit) | HEC (8 bits) | Payload (48 bytes)

Diagram of the NNI ATM cell (header fields in order, followed by the payload):

VPI (12 bits) | VCI (16 bits) | PT (3 bits) | CLP (1 bit) | HEC (8 bits) | Payload (48 bytes)

GFC = Generic Flow Control (4 bits) (default: 4-zero bits)

VPI = Virtual Path Identifier (8 bits UNI) or (12 bits NNI)

VCI = Virtual channel identifier (16 bits)

PT = Payload Type (3 bits)

CLP = Cell Loss Priority (1-bit)

HEC = Header Error Correction (8-bit CRC, polynomial = x^8 + x^2 + x + 1)

The PT field is used to designate various special kinds of cells for operations, administration, and maintenance (OAM) purposes, and to delineate packet boundaries in some AALs.

Several of ATM’s link protocols use the HEC field to drive a CRC-Based Framing algorithm, which allows the position of the ATM cells to be found with no overhead required beyond what is otherwise needed for header protection. The 8-bit CRC is used to correct single-bit header errors and detect multi-bit header errors. When multi-bit header errors are detected, the current and subsequent cells are dropped until a cell with no header errors is found.

In a UNI cell the GFC field is reserved for a local flow control/submultiplexing system between users. This was intended to allow several terminals to share a single network connection, in the same way that two ISDN phones can share a single basic rate ISDN connection. All four GFC bits must be zero by default.

Why do the routes at the Hong Kong New World data center change so often?



Because New World uses US-based traffic scrubbing, traffic is diverted through various locations in Asia and then routed back, which is why the routes keep changing.

That concludes this introduction to vlan vps. We hope you found the information you were looking for; if you would like to learn more on this topic, remember to bookmark and follow this site.


