/*
* Copyright 2015 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License, version 2.0 (the
* "License"); you may not use this file except in compliance with the License. You may obtain a
* copy of the License at:
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software distributed under the License
* is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
* or implied. See the License for the specific language governing permissions and limitations under
* the License.
*/
package org.jboss.netty.handler.execution;
import org.jboss.netty.channel.ChannelEvent;
import org.jboss.netty.channel.ChannelState;
import org.jboss.netty.channel.ChannelStateEvent;
import org.jboss.netty.util.ObjectSizeEstimator;
import org.jboss.netty.util.internal.ConcurrentIdentityWeakKeyHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;
/**
 * This is a fair alternative to {@link OrderedMemoryAwareThreadPoolExecutor}.
*
*
 * <h3>Unfairness of {@link OrderedMemoryAwareThreadPoolExecutor}</h3>
 *
 * Tasks executed in {@link OrderedMemoryAwareThreadPoolExecutor} can be scheduled unfairly in some
 * situations. For example, say there is only one executor thread handling the events from two
 * channels, and the events are submitted in this sequence:
*
* Channel A (Event A1) , Channel B (Event B), Channel A (Event A2) , ... , Channel A (Event An)
*
 * Then the events may be executed in this unfair order:
*
* ----------------------------------------> Timeline -------------------------------->
* Channel A (Event A1) , Channel A (Event A2) , ... , Channel A (Event An), Channel B (Event B)
*
 * As shown above, Channel B (Event B) may be executed unfairly late.
 * Worse, if events keep arriving on Channel A in close succession, Channel B (Event B) will be
 * kept waiting for a long while and become "starved".
*
*
 * <h3>Fairness of FairOrderedMemoryAwareThreadPoolExecutor</h3>
 *
 * In the same case as above (one executor thread and two channels), this implementation guarantees
 * the execution order:
*
* ----------------------------------------> Timeline -------------------------------->
 * Channel A (Event A1) , Channel B (Event B), Channel A (Event A2) , ... , Channel A (Event An)
*
*
 * NOTE: For simplicity, the case above uses a single executor thread, but the fairness mechanism
 * also applies when there are multiple executor threads.
*/
public class FairOrderedMemoryAwareThreadPoolExecutor extends MemoryAwareThreadPoolExecutor {
    // sentinel marking the end of an event chain
    private final EventTask end = new EventTask(null);
    private final AtomicReferenceFieldUpdater<EventTask, EventTask> fieldUpdater =
            AtomicReferenceFieldUpdater.newUpdater(EventTask.class, EventTask.class, "next");
    protected final ConcurrentMap<Object, EventTask> map =
            new ConcurrentIdentityWeakKeyHashMap<Object, EventTask>();
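
The fair ordering described in the Javadoc above can be illustrated with a standalone sketch. This is a minimal sketch, not this class's actual implementation: the names `FairOrderSketch` and `Node` and the key-to-chain map are illustrative assumptions. The idea is the same, though: tasks for one key (channel) are linked into a chain, and the thread that starts a chain drains it in FIFO order, so events for one channel never reorder while different channels proceed independently on the shared pool.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicReference;

public class FairOrderSketch {
    // One node per submitted task; nodes for the same key form a linked chain.
    static final class Node {
        final Runnable task;
        final AtomicReference<Node> next = new AtomicReference<Node>();
        Node(Runnable task) { this.task = task; }
    }

    // Per-key tail of the chain; null means the chain is currently empty.
    private final Map<Object, AtomicReference<Node>> tails =
            new ConcurrentHashMap<Object, AtomicReference<Node>>();
    private final ExecutorService pool;

    FairOrderSketch(ExecutorService pool) { this.pool = pool; }

    void execute(Object key, Runnable task) {
        Node node = new Node(task);
        AtomicReference<Node> tail =
                tails.computeIfAbsent(key, k -> new AtomicReference<Node>());
        Node prev = tail.getAndSet(node);
        if (prev == null) {
            // The chain was empty: this node starts a new chain, drained on the pool.
            pool.execute(() -> drain(node, tail));
        } else {
            // Link behind the previous tail; the draining thread will pick it up.
            prev.next.set(node);
        }
    }

    private static void drain(Node head, AtomicReference<Node> tail) {
        Node n = head;
        for (;;) {
            n.task.run();
            Node next = n.next.get();
            if (next == null) {
                // Possibly the last task: try to mark the chain empty.
                if (tail.compareAndSet(n, null)) {
                    return;
                }
                // A producer won the race; wait for it to link its node.
                while ((next = n.next.get()) == null) {
                    Thread.yield();
                }
            }
            n = next;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        FairOrderSketch exec = new FairOrderSketch(pool);
        List<Integer> orderA = Collections.synchronizedList(new ArrayList<Integer>());
        List<Integer> orderB = Collections.synchronizedList(new ArrayList<Integer>());
        CountDownLatch done = new CountDownLatch(200);
        for (int i = 0; i < 100; i++) {
            final int seq = i;
            exec.execute("A", () -> { orderA.add(seq); done.countDown(); });
            exec.execute("B", () -> { orderB.add(seq); done.countDown(); });
        }
        done.await();
        pool.shutdown();
        // Per-key FIFO order holds even though two pool threads share the work.
        for (int i = 0; i < 100; i++) {
            if (orderA.get(i) != i || orderB.get(i) != i) {
                throw new AssertionError("order violated at index " + i);
            }
        }
        System.out.println("per-key FIFO preserved for 200 tasks");
    }
}
```

The real class differs in that it extends `MemoryAwareThreadPoolExecutor` (so it also throttles on estimated memory usage) and links `EventTask` nodes through an `AtomicReferenceFieldUpdater` on a `next` field rather than a per-node `AtomicReference`, which avoids one allocation per task.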