Global event triggering involves calling all the event handlers bound to a certain event, on all available elements. It is performed by calling jQuery.trigger() without passing any DOM element as context. It is nearly the same as calling trigger() on all the elements that have one or more bindings to the corresponding event, something like this:
jQuery('#a1,#a2,div.b5').trigger('someEvent');
Triggering globally is obviously simpler because you don’t need to know all the elements that need to be triggered. It’s quite useful for certain situations but can also be a slow process at times. Although it’s been optimized since jQuery 1.3, it still requires going through all the elements registered to jQuery’s event system. This can cause short (or not so short) hangs every time an event is triggered like this.
One possible solution is to have one or more global objects act as event listeners. These listeners can be DOM elements, but they don't have to be. All global events are then bound to and triggered on one of these objects.
Instead of doing something like this:
jQuery('#text1').bind('change-page', function(e, title){
  jQuery(this).text( 'Page is ' + title );
});
jQuery('#text2').bind('change-page', function(e, title){
  jQuery(this).text( 'At ' + title + ' Page' );
});
jQuery.trigger('change-page', 'Inbox');
you’d do something like this:
jQuery.page = jQuery({}); // Just an empty object
jQuery.page.bind('change', function(e, title){
  jQuery('#text1').text( 'Page is ' + title );
});
jQuery.page.bind('change', function(e, title){
  jQuery('#text2').text( 'At ' + title + ' Page' );
});
jQuery.page.trigger('change', 'Inbox');
The syntax seems pretty much the same, but each call to trigger won't iterate over jQuery's data registry (aka jQuery.cache). Even if you decide to use a DOM element, the principle is the same. DOM elements can be more appropriate at times. If, for example, you're creating a table-related plugin, then it'd make sense to use each table element as an event listener.
The problem with DOM elements in many browsers is that they're the main source of memory leaks. A memory leak occurs when some amount of memory cannot be freed by the JavaScript engine when the user leaves a page.
You should be much more careful about how you save data into the objects when you
use DOM elements. That’s why jQuery provides the data() method.
Still, I’d personally use regular JavaScript objects in most situations. You can add attributes and functions to them, and the likelihood (and magnitude) of memory leaks will be smaller.
This approach is faster: you will always be triggering events on single objects, instead of on the n entries in jQuery.cache. The downside of this approach is that everyone needs to know the event listener object (jQuery.page in the example) in order to bind or trigger one of its known events. This can be negative if you're aiming to keep your code encapsulated.
Encapsulation is strongly emphasized in object-oriented programming, where it is one of the things you should be most careful about.
This is generally not such a great concern with jQuery programming, because it is not
object oriented and most users don’t get too worried about code encapsulation. Still,
it’s worth mentioning.
The listener objects mentioned don’t have to be simple dummy objects with nothing
but bind(), unbind(), and trigger() (as far as we’re concerned).
These objects could actually have methods and attributes that would make them much more useful.
The only problem, though, is that if we do something like this:
jQuery.page = jQuery({ number:1 });
to access the number attribute, we would be forced to do this:
jQuery.page.number; // undefined
jQuery.page[0].number; // 1
This is how jQuery operates, whether on HTML nodes or anything else.
But don’t give up on me yet! It’s easy to work around this. Let’s make a small plugin:
(function( $ ){
  // These methods will be copied from jQuery.fn to our prototype
  var copiedMethods = 'bind unbind one trigger triggerHandler'.split(' ');
  // Empty constructor
  function Listener(){}
  $.each(copiedMethods, function(i, name){
    Listener.prototype[name] = $.fn[name];
  });
  // Our "jQuery.fn.each" needs to be replaced
  Listener.prototype.each = function(fn) {
    fn.call(this);
    return this;
  };
  $.listener = function( data ){
    return $.extend(new Listener(), data);
  };
})( jQuery );
Now we can create objects that will have all the jQuery methods we need that are related to events, but the scope of the functions we pass to bind(), unbind(), etc., will be the object itself (jQuery.page in our example).
Note that our listener objects won’t have all jQuery methods but just the ones we
copied. While you could add some more methods, most of them won’t work. That
would require a more complex implementation; we’ll stick to this one, which satisfies
our needs for events.
Now that we have this mini plugin, we can do this:
jQuery.page = jQuery.listener({
  title: 'Start',
  changeTo: function( title ){
    this.title = title;
    this.trigger('change');
  }
});
jQuery.page.changeTo('Inbox');
Because you can now access the object from within the handlers using this, you don't need to pass values like the title as arguments to the handler. Instead, you can simply use this.title to access the value:
jQuery.page.bind('change', function(e){
jQuery('#text1').text( 'Page is ' + this.title );
});
When swallowing small doses of code, JavaScript interpreters tend to process data speedily. But if you throw a ton of complex and deeply nested code at a browser, you may notice some latency, even after all the data has been downloaded in the browser.
Here are a handful of useful tips to help you unclog potential processing bottlenecks in your code:
• Avoid using the eval( ) function.
• Avoid the with construction.
• Minimize repetitive expression evaluation.
• Use simulated hash tables for lookups in large arrays of objects.
• Avoid excessive string concatenation.
• Investigate download performance.
• Avoid multiple document.write( ) method calls.
Look for these culprits especially inside loops, where delays become magnified.
One of the most inefficient functions in the JavaScript language is eval( ). This function converts a string representation of an object to a genuine object reference. It becomes a common crutch when you find yourself with a string of an object’s name or ID, and you need to build a reference to the actual object. For example, if you
have a sequence of mouse rollover images comprising a menu, and their names are menuImg1, menuImg2, and so on, you might be tempted to create a function that restores all images to their normal image with the following construction:
for (var i = 1; i <= 6; i++)
{
   var imgObj = eval("document.menuImg" + i);
   imgObj.src = "images/menuImg" + i + "_normal.jpg";
}
The temptation is there because you are also using string concatenation to assemble the URL of the associated image file. Unfortunately, the eval( ) function in this loop is very wasteful. When it comes to referencing element objects, there is almost always a way to get from a string reference to the actual object reference without using the eval( ) function. In the case of images, the document.images collection (array) provides the avenue. Here is the revised, more streamlined loop:
for (var i = 1; i <= 6; i++)
{
   var imgObj = document.images["menuImg" + i];
   imgObj.src = "images/menuImg" + i + "_normal.jpg";
}
If an element object has a name or ID, you can reach it through some collection that contains that element. The W3C DOM syntax for document.getElementById( ) is a natural choice when working in browsers that support the syntax and you have the element's ID as a string. But even for older code that supports names of things like images and form controls, there are collections to use, such as document.images and the elements collection of a form object (document.myForm.elements["elementName"]). For custom objects, see the later discussion about simulated hash tables. Hunt down every eval( ) function in your code and find a suitable, speedier replacement.

Another performance grabber is the with construction. The purpose of this control statement is to help narrow the scope of statements within a block. For example, if you have a series of statements that work primarily with a single object's properties and/or methods, you can limit the scope of the block so that the statements assume properties and methods belong to that object. In the following script fragment, the statements inside the block invoke the sort( ) method of an array and read the array's length property:
with (myArray)
{
   sort( );
   var howMany = length;
}
Yes, it may look efficient, but the interpreter goes to extra lengths to fill in the object references before evaluating the nested expressions. Don't use this construction.

It takes processing cycles to evaluate any expression or reference. The more "dots" in a reference, the longer it takes to evaluate the reference. Therefore, you want to avoid repeating a lengthy object reference or expression if it isn't necessary, especially inside a loop. Here is a fragment that may look familiar to you from your own coding experience:
function myFunction(elemID)
{
   for (var i = 0; i < document.getElementById(elemID).childNodes.length; i++)
   {
      if (document.getElementById(elemID).childNodes[i].nodeType == 1)
      {
         // process element nodes here
      }
   }
}
In the course of this function’s execution, the expression document.getElementById( ) evaluates twice as many times as there are child nodes in the element whose ID is passed to the function. At each start of the for loop’s execution, the limit expression evaluates the method; then the nested if condition evaluates the same expression each time through the loop. More than likely, additional statements in the loop evaluate that expression to access a child node of the outer element object. This is very wasteful of processing time. Instead, at the cost of one local variable, you can eliminate all of this repetitive expression evaluation. Evaluate the unchanging part just once, and then use the variable reference as a substitute thereafter:
function myFunction(elemID)
{
   var elem = document.getElementById(elemID);
   for (var i = 0; i < elem.childNodes.length; i++)
   {
      if (elem.childNodes[i].nodeType == 1)
      {
         // process element nodes here
      }
   }
}
If all of the processing inside the loop is with only child nodes of the outer loop, you can further compact the expression evaluations:
function myFunction(elemID)
{
   var elemNodes = document.getElementById(elemID).childNodes;
   for (var i = 0; i < elemNodes.length; i++)
   {
      if (elemNodes[i].nodeType == 1)
      {
         // process element nodes here
      }
   }
}
As an added bonus, you have also reduced the source code size. If you find instances of repetitive expressions whose values don't change during the course of the affected script segment, consider them candidates for pre-assignment to a local variable.

Next, eliminate time-consuming iterations through arrays, especially multidimensional arrays or arrays of objects. If you have a large array (say, more than about 100 entries), even the average lookup time may be noticeable. Instead, perform a one-time generation of a simulated hash table of the array. Assemble the hash table while the page loads so that any delay caused by creating the table is blended into the overall page-loading time. Thereafter, lookups into the array will be nearly instantaneous, even if the item found is the last item in the many-hundred-member array.

String concatenation can be a resource drain. Using arrays as temporary storage of string blocks, then joining them once at the end, can streamline execution.

Getting a ton of JavaScript code from server to browser can be a bottleneck on its own. Bear in mind that each external .js file loaded into a page incurs the overhead of an HTTP request (with at most two simultaneous connections possible). Various techniques for condensing .js source files are available, such as utilities that remove whitespace and shorten identifiers (often at the cost of ease of source code management and debugging). Most modern browsers can also accept external JavaScript files compressed with gzip (although IE 6 exhibits problems).

As you can see, no single solution is guaranteed to work in every situation. One other impact on loading time is where in the page you place your scripts.
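To make the simulated hash table and string tips concrete, here is a minimal sketch (the album data is invented for illustration): build an object keyed by ID once, up front, so every later lookup becomes a single property access rather than a scan of the array, and collect string pieces in an array to join once at the end.

```javascript
// A large array of objects (sample data, invented for illustration)
var albums = [];
for (var i = 0; i < 500; i++) {
    albums.push({ id: "album" + i, title: "Title " + i });
}

// One-time pass: build the simulated hash table up front
// (do this while the page loads so the cost blends into load time)
var albumsById = {};
for (var j = 0; j < albums.length; j++) {
    albumsById[albums[j].id] = albums[j];
}

// From now on, a lookup is a single property access -- no iteration,
// even when the item sits at the far end of the array
var hit = albumsById["album499"];

// The string-concatenation tip in action: collect the pieces in an
// array and join once, instead of growing one string repeatedly
var pieces = [];
for (var k = 0; k < albums.length; k++) {
    pieces.push(albums[k].title);
}
var listing = pieces.join(", ");
```

The same pattern works for any array of records with a unique key; the one-time cost of building the table pays for itself after a handful of lookups.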
For creating a REST web service, see my previous post.
For consuming a web service, use file_get_contents( ):
<?php
$base = 'http://music.example.org/search.php';
$params = array('composer' => 'beethoven',
'instrument' => 'cello');
$url = $base . '?' . http_build_query($params);
$response = file_get_contents($url);
?>
Or use cURL:
<?php
$base = 'http://music.example.org/search.php';
$params = array('composer' => 'beethoven',
'instrument' => 'cello');
$url = $base . '?' . http_build_query($params);
$c = curl_init($url);
curl_setopt($c, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($c);
curl_close($c);
?>
REST is a style of web services in which you make requests using HTTP methods such as get and post, and the method type tells the server what action it should take. For example, get tells the server you want to retrieve existing data, whereas post means you want to submit a change, such as adding a new record. The server then replies with the results in an XML document that you can process.

The brilliance of REST is in its simplicity and use of existing standards. PHP's been letting you make HTTP requests and process XML documents for years, so everything you need to make and process REST requests is old hat.
There are many ways to execute HTTP requests in PHP, including
file_get_contents( ), the cURL extension, and PEAR packages.
Once you’ve retrieved the XML document, use any of PHP’s XML extensions to process it. Given the nature of REST documents, and that you’re usually familiar with the schema of the response, the SimpleXML extension is often the best choice.
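For instance, here is a minimal sketch of a client that queries the music search service used in the examples in this section and walks the reply with SimpleXML (the URL is the hypothetical example one):

```php
<?php
// Fetch the search results from the (hypothetical) example service
$url = 'http://api.example.org/music?' . http_build_query(array('artist' => 'The Beatles'));
$response = file_get_contents($url);

// Parse the XML reply and walk each matching album
$xml = simplexml_load_string($response);
foreach ($xml->album as $album) {
    print $album->name . ' (id ' . $album['id'] . ")\n";
}
?>
```

Element access (`$xml->album`, `$album['id']`) mirrors the structure of the XML response, which is what makes SimpleXML such a comfortable fit for REST replies.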
The most basic REST server is a page that accepts query arguments and returns XML:
<?php
// data
$music_database = <<<_MUSIC_
<?xml version="1.0" encoding="utf-8" ?>
<music>
<album id="1">
<name>Revolver</name>
<artist>The Beatles</artist>
</album>
<!-- 941 more albums here -->
<album id="943">
<name>Miles And Coltrane</name>
<artist>Miles Davis</artist>
<artist>John Coltrane</artist>
</album>
</music>
_MUSIC_;
// load data
$s = simplexml_load_string($music_database);
// query data
$artist = addslashes($_GET['artist']);
$query = "/music/album[artist = '$artist']";
$albums = $s->xpath($query);
// display query results as XML
print "<?xml version=\"1.0\" encoding=\"utf-8\" ?>\n";
print "<music>\n\t";
foreach ($albums as $a) {
print $a->asXML();
}
print "\n</music>";
?>
When this page is stored at http://api.example.org/music, an HTTP GET request to
http://api.example.org/music?artist=The+Beatles returns:
<?xml version="1.0" encoding="utf-8" ?>
<music>
<album id="1">
<name>Revolver</name>
<artist>The Beatles</artist>
</album>
</music>
At its most basic level, serving a REST request is no different than processing an HTML
form. The key difference is that you’re replying with XML instead of HTML. Input parameters come in as query parameters, so PHP parses them into $_GET. You then process the values in $_GET to determine the correct query for your data, which you use to retrieve the proper records to return.
For instance, the example above queries an XML document using XPath for all the albums released by the artist passed in via the artist query-string variable.
For simplicity, this example uses XML as the data source and XPath as the query language, which eliminates the need to convert the results to XML. It's more likely that you will query a database using SQL. That's okay! For the purposes of REST, the particular backend system is irrelevant.
The important part is outputting your results as XML. In this case, since the data started as XML, you can wrap it inside a root element and echo it without any conversion:
<?php
// display query results as XML
print "<?xml version=\"1.0\" encoding=\"utf-8\" ?>\n";
print "<music>\n\t";
foreach ($albums as $a) {
print $a->asXML();
}
print "\n</music>";
?>
This gives you:
<?xml version="1.0" encoding="utf-8" ?>
<music>
<album id="1">
<name>Revolver</name>
<artist>The Beatles</artist>
</album>
</music>
Now your work is done, and it's up to the REST client to process the XML you returned, using the XML-processing tool of its choice. In PHP 5, this is frequently SimpleXML.

It's useful to publish a data schema for your REST responses. This lets people know what to expect from your replies and lets them validate the data to ensure it's properly formatted. XML Schema and RelaxNG are two good choices for your schema.

REST isn't restricted to read-only operations, such as search. REST supports reading and writing data, including adding, updating, and deleting records.
There are two popular ways to expose this complete set of features:
- Accepting an additional parameter on the query string.
- Using HTTP verbs, such as post and put.
Both options are relatively straightforward to implement. The first is marginally easier, on both you and REST clients, but it limits the size of the data you can accept and has potentially negative side effects.

When you use get for everything, it's very easy for people to construct requests, because they can use standard URLs with a query string. This is a familiar operation, and people can even test their code by replicating their requests through the location bars of their web browsers. However, many web servers place a limit on the size of the URLs they can process. People often need to pass large amounts of data when they add a new record, and there's no such limitation on the size of post data. Therefore, get is not a good choice for adding or updating records.

Additionally, according to the HTTP specification, get requests are not supposed to alter backend data. You should design your site so that when a person makes two identical get requests, she gets two identical replies. When you allow people to add, update, or delete records via get, you're violating this principle of HTTP. While this is normally not a problem, it can bite you when you're not looking. For instance, automated scripts, such as the Google spider, try to index your pages. If you expose destructive operations as URLs in the href attribute inside of HTML anchor tags, the spider may follow them, deleting information from your database in the process.
Still, adding another get parameter is straightforward and requires minimal edits, as shown in this Example.
Example - Implementing a REST server with multiple operations
<?php
// Add more action specific logic inside switch()
switch ($_GET['action']) {
case 'search':
$action = 'search';
break;
case 'add':
$action = 'add';
break;
case 'update':
$action = 'update';
break;
case 'delete':
$action = 'delete';
break;
default:
// invalid action
exit();
}
// Music database XML document moved to a file
$s = simplexml_load_file('music_database.xml');
if ($action == 'search') {
$artist = $_GET['artist'];
$query = "/music/album[artist = '$artist']";
$albums = $s->xpath($query);
// Display results here
} elseif ($action == 'add') {
$artist = $_GET['artist'];
$album = $_GET['album'];
// Insert new node from input data
}
// ... other actions here
?>
At the top of the page, check $_GET['action'] for a valid set of actions, and set the
$action variable when you find one.
Then, load in the data source (which is where the XML flat file is less of a good choice, since you don’t get locking out of the box like you do with databases).
Now you can perform your operation. For a search, query your data and print it out, just like in First Example.
For an addition, you should update the data store, and then reply with a brief message saying everything succeeded.
For example:
<?xml version="1.0" encoding="UTF-8"?>
<response code="200">Album added</response>
Alternatively, if there’s a failure, send an error message:
<?xml version="1.0" encoding="UTF-8"?>
<response code="400">Invalid request</response>
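A sketch of the server-side code that emits these replies; $succeeded here is a placeholder for whatever outcome your add logic reports:

```php
<?php
// $succeeded is a placeholder for the result of your add/update logic
header('Content-Type: text/xml');
print '<?xml version="1.0" encoding="UTF-8"?>' . "\n";
if ($succeeded) {
    print '<response code="200">Album added</response>';
} else {
    print '<response code="400">Invalid request</response>';
}
?>
```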
While most people use this method of checking an action query parameter to decide what action to take, your other option is to use HTTP verbs, such as get, post, put, and delete. This is a more "true" REST style; it not only lets you comfortably process larger requests, but is also safer, because it's far less likely that your data will be accidentally deleted.
This table shows the general mapping between SQL commands and HTTP verbs.
Table - SQL commands and HTTP verbs
SQL      HTTP verb
CREATE   POST
SELECT   GET
UPDATE   PUT
DELETE   DELETE
To use HTTP verbs, check the value of $_SERVER['REQUEST_METHOD'] instead of
$_GET['action'], as shown in this Example.
Example - Implementing a REST server that uses HTTP verbs
<?php
// Add more action specific logic inside switch()
// Convert to UPPER CASE
$request_method = strtoupper($_SERVER['REQUEST_METHOD']);
switch ($request_method) {
case 'GET':
$action = 'search';
break;
case 'POST':
$action = 'add';
break;
case 'PUT':
$action = 'update';
break;
case 'DELETE':
$action = 'delete';
break;
default:
// invalid action
exit();
}
// ... other actions here
?>
Beyond switching on REQUEST_METHOD at the top, as in the last example, you must also update your code to handle the HTTP verbs get, post, put, and delete. And you must now use $_POST instead of $_GET when the verb isn't get.

Remember that $_SERVER['REQUEST_METHOD'] is just as secure as $_GET['action'], which is to say not secure at all. Both of these values are easy to set, so if you're exposing sensitive data or allowing operations that can destroy data, make sure that the person making the request has permission to do so.
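One simple sketch of such a permission check; the API-key header name and the key list here are invented for illustration, not part of the service described above:

```php
<?php
// Keys and the X-Api-Key header name are assumptions for this sketch
$valid_keys = array('my-secret-key');
$key = isset($_SERVER['HTTP_X_API_KEY']) ? $_SERVER['HTTP_X_API_KEY'] : '';

// Gate every destructive verb behind the key check
$destructive = array('POST', 'PUT', 'DELETE');
if (in_array(strtoupper($_SERVER['REQUEST_METHOD']), $destructive) &&
    !in_array($key, $valid_keys, true)) {
    header('HTTP/1.1 403 Forbidden');
    print '<?xml version="1.0" encoding="UTF-8"?>' . "\n";
    print '<response code="403">Forbidden</response>';
    exit();
}
?>
```

In a real service you would check keys against a user database and tie each key to specific permissions, but the shape of the check stays the same: reject the request before any destructive code runs.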