{"id":353,"date":"2013-12-06T12:07:05","date_gmt":"2013-12-06T17:07:05","guid":{"rendered":"http:\/\/homepages.uc.edu\/~yaozo\/wordpress\/?p=353"},"modified":"2013-12-06T12:07:05","modified_gmt":"2013-12-06T17:07:05","slug":"image-classification-practical-2011-andrea-vedaldi-and-andrew-zisserman","status":"publish","type":"post","link":"https:\/\/zhuoyao.net\/index.php\/2013\/12\/06\/image-classification-practical-2011-andrea-vedaldi-and-andrew-zisserman\/","title":{"rendered":"Image Classification Practical, 2011 Andrea Vedaldi and Andrew Zisserman"},"content":{"rendered":"<div><span style=\"color: #0c343d; font-size: medium;\"><span style=\"font-size: x-large;\">Goal\u00a0<\/span><br \/>\n<\/span><span style=\"font-size: medium;\">In image classification, an image is classified according to its visual content. For example, does it contain an airplane or not. An important application is\u00a0<i>image retrieval<\/i>\u00a0&#8211; searching through an image dataset to obtain (or retrieve) those images with particular visual content.<\/p>\n<p>The goal of this session is to get basic practical experience with image classification. 
It includes: (i) training a visual classifier for five different image classes (<span style=\"font-size: medium;\"><i>aeroplanes, motorbikes, people, horses\u00a0<\/i>and <i>cars<\/i><\/span>); (ii) assessing the performance of the classifier by computing a precision-recall curve; (iii) varying the visual representation used for the feature vector, and the feature map used for the classifier; and (iv) obtaining training data for new classifiers using Bing image search.<br \/>\n<\/span><\/div>\n<p><span style=\"font-size: medium;\"><\/p>\n<p><\/span><\/p>\n<div><span style=\"color: #0c343d; font-size: medium;\"><span style=\"font-size: x-large;\">Getting started\u00a0<\/span><br \/>\n<\/span><\/p>\n<ul>\n<li><span style=\"color: #0c343d; font-size: medium;\">Download the code and data (<b><a id=\"ljow\" title=\"code only\" href=\"http:\/\/www.robots.ox.ac.uk\/~vgg\/share\/practical-image-classification-code-only.tar.gz\">code only<\/a>,\u00a0<a id=\"on.g\" title=\"data only\" href=\"http:\/\/www.robots.ox.ac.uk\/~vgg\/share\/practical-image-classification-data-only.tar.gz\">data only<\/a><\/b>\u00a0<span style=\"color: #0c343d; font-size: medium;\">~450MB<\/span>). The data includes images and pre-computed features.<br \/>\n<\/span><\/li>\n<li><span style=\"color: #0c343d; font-size: medium;\">Unpack the code archive. 
This will make a directory called\u00a0<span style=\"color: #0c343d; font-size: medium;\"><b>practical-image-classification<\/b>.\u00a0<\/span><\/span><\/li>\n<li><span style=\"color: #0c343d; font-size: medium;\">Unpack the data archive in the directory\u00a0<b>practical-image-classification<\/b>.<\/span><\/li>\n<li><span style=\"color: #0c343d; font-size: medium;\">Finally,\u00a0<span style=\"color: #0c343d; font-size: medium;\">start MATLAB in the directory\u00a0<b>practical-image-classification<\/b>.\u00a0<\/span><\/span><\/li>\n<li><span style=\"color: #0c343d; font-size: medium;\">Try running the\u00a0<b><span style=\"color: #274e13;\">setup.m<\/span><\/b>\u00a0command (type\u00a0<\/span><span style=\"font-size: medium;\"><span style=\"color: #274e13;\"><b>setup<\/b><\/span><\/span><span style=\"color: #0c343d; font-size: medium;\">\u00a0without the .m suffix).\u00a0<\/span><span style=\"font-size: medium;\">If all goes well, you should obtain a greeting message.<\/span><\/li>\n<\/ul>\n<p><span style=\"font-size: medium;\">\u00a0<\/span><\/p>\n<p><span style=\"font-size: medium;\">As you progress in the exercises you can use the MATLAB\u00a0<span style=\"color: #274e13;\"><b>help<\/b><\/span>\u00a0command to display the help of the MATLAB functions that you need to use. For example, try typing\u00a0<span style=\"color: #274e13;\"><b>help setup<\/b><\/span>.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"font-size: x-large;\"><span style=\"color: #0c343d;\">Exercise description<\/span><\/span><\/p>\n<\/div>\n<div><span style=\"font-size: medium;\">Open and edit the script\u00a0<b><span style=\"color: #274e13;\">exercise1.m<\/span><\/b>\u00a0in the MATLAB editor. The script contains commented code and a description for all steps of this exercise. 
You can cut and paste this code into the MATLAB window<br \/>\nto run it, and you will need to modify it as you go through the session.<br \/>\n<span style=\"font-size: x-large;\"><br \/>\nPart 1: Training and testing an Image Classifier<br \/>\n<span style=\"font-size: medium;\"><br \/>\n<span style=\"font-size: medium;\"><b>Stage A: Data Preparation<\/b><\/span><\/p>\n<p>The data provided in the directory\u00a0<b><span style=\"color: #274e13;\">data<\/span>\u00a0<\/b>consists of images and pre-computed feature vectors for each image. The JPEG images are contained in\u00a0<b><span style=\"color: #274e13;\">data\/images<\/span><\/b>. The data consists of three image classes<br \/>\n(containing\u00a0<i>aeroplanes<\/i>,\u00a0<i>motorbikes<\/i>\u00a0or\u00a0<i>persons<\/i>) and `background&#8217; images (i.e. images that do not contain these three classes). In the data preparation stage, this data is divided as follows<\/span><\/span><\/span>:<\/p>\n<table id=\"u7qe\" width=\"100%\" border=\"1\" cellspacing=\"0\" cellpadding=\"3\">\n<tbody>\n<tr>\n<td width=\"20%\"><span style=\"font-size: medium;\">\u00a0<\/span><\/td>\n<td align=\"center\" width=\"20%\"><span style=\"font-size: medium;\">aeroplane<br \/>\n<\/span><\/td>\n<td align=\"center\" width=\"20%\"><span style=\"font-size: medium;\">motorbike<br \/>\n<\/span><\/td>\n<td align=\"center\" width=\"20%\"><span style=\"font-size: medium;\">person<br \/>\n<\/span><\/td>\n<td align=\"center\" width=\"20%\"><span style=\"font-size: medium;\">background<br \/>\n<\/span><\/td>\n<\/tr>\n<tr>\n<td width=\"20%\"><span style=\"font-size: medium;\">Training<br \/>\n<\/span><\/td>\n<td align=\"center\" width=\"20%\"><span style=\"font-size: medium;\">112<\/span><\/td>\n<td align=\"center\" width=\"20%\"><span style=\"font-size: medium;\">120<\/span><\/td>\n<td align=\"center\" width=\"20%\"><span style=\"font-size: medium;\">1025<\/span><\/td>\n<td align=\"center\" width=\"20%\"><span style=\"font-size: 
medium;\">1019<\/span><\/td>\n<\/tr>\n<tr>\n<td width=\"20%\"><span style=\"font-size: medium;\">Test<br \/>\n<\/span><\/td>\n<td align=\"center\" width=\"20%\"><span style=\"font-size: medium;\">126<\/span><\/td>\n<td align=\"center\" width=\"20%\"><span style=\"font-size: medium;\">125<\/span><\/td>\n<td align=\"center\" width=\"20%\"><span style=\"font-size: medium;\">983<\/span><\/td>\n<td align=\"center\" width=\"20%\"><span style=\"font-size: medium;\">1077<\/span><\/td>\n<\/tr>\n<tr>\n<td width=\"20%\"><span style=\"font-size: medium;\">Total<br \/>\n<\/span><\/td>\n<td align=\"center\" width=\"20%\"><span style=\"font-size: medium;\">238<\/span><\/td>\n<td align=\"center\" width=\"20%\"><span style=\"font-size: medium;\">245<\/span><\/td>\n<td align=\"center\" width=\"20%\"><span style=\"font-size: medium;\">2008<\/span><\/td>\n<td align=\"center\" width=\"20%\"><span style=\"font-size: medium;\">2096<\/span><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><span style=\"font-size: medium;\"><br \/>\n<\/span><\/div>\n<p><span style=\"font-size: medium;\"><br \/>\nThe feature vector consists of SIFT features computed on a regular grid across the image (`dense SIFT&#8217;) and vector quantized into visual words. The frequency of each visual word is then recorded in a histogram for each tile of a spatial tiling as shown. The final feature vector for the image is a concatenation of these histograms. 
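1019">
The feature construction just described can be sketched in a few lines. The following is an illustrative Python sketch, not the practical's actual MATLAB code: the function name, the 2x2 tiling, and the L1 normalization at the end are assumptions made for the example.

```python
import numpy as np

def spatial_histogram(words, xs, ys, width, height, vocab_size, tiles=(2, 2)):
    """Build a spatial-tiling feature vector from quantized visual words.

    words: visual-word index of each dense-SIFT descriptor
    xs, ys: pixel position of each descriptor in the image
    One visual-word histogram is accumulated per tile of a tiles[0] x
    tiles[1] grid, and the per-tile histograms are concatenated.
    """
    tx, ty = tiles
    feature = np.zeros(tx * ty * vocab_size)
    for w, x, y in zip(words, xs, ys):
        col = min(int(x * tx / width), tx - 1)   # tile column of this descriptor
        row = min(int(y * ty / height), ty - 1)  # tile row of this descriptor
        tile = row * tx + col
        feature[tile * vocab_size + w] += 1
    return feature / max(feature.sum(), 1)       # L1-normalize the whole vector
```

Because each tile keeps its own histogram, the concatenated vector records not only which visual words occur but roughly where in the image they occur.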
This process is summarized in the figure below:<\/p>\n<p><\/span><\/p>\n<div id=\"wth9\"><img decoding=\"async\" alt=\"\" src=\"http:\/\/www.robots.ox.ac.uk\/~vgg\/share\/practical-image-classification_files\/File.png\" \/><\/div>\n<p><span style=\"font-size: medium;\">\u00a0<\/span><\/p>\n<ul>\n<li><span style=\"font-size: medium;\">Why is spatial tiling used in the image representation?<\/span><\/li>\n<\/ul>\n<p><span style=\"font-size: medium;\"><span style=\"font-size: medium;\"><br \/>\n<span style=\"font-size: medium;\">We will start by training a classifier for images that contain aeroplanes. The files\u00a0<b><span style=\"color: #274e13;\">data\/aeroplane_train.txt<\/span><\/b>\u00a0and\u00a0<b><span style=\"color: #274e13;\">data\/aeroplane_val.txt\u00a0<\/span><\/b>list images that contain aeroplanes. Look through example images of the aeroplane class and the background images by browsing the image files in the data directory.<\/span><\/p>\n<p><\/span><b>Stage B: Train a classifier for images containing aeroplanes<\/b><\/p>\n<p><\/span><span style=\"font-size: medium;\">The aeroplane training images will be used as the positives, and the background images as the negatives. The classifier is a linear Support Vector Machine (SVM). Train the classifier by following the steps in\u00a0<\/span><span style=\"font-size: medium;\"><b><span style=\"color: #274e13;\">exercise1.m<\/span><\/b>.<\/p>\n<p><\/span><span style=\"font-size: medium;\">We will first assess qualitatively how well the classifier works by using it to rank all the\u00a0<b>training<\/b>\u00a0images. What do you expect to happen? 
View the ranked list using the provided function\u00a0<b>displayRankedImageList<\/b>\u00a0as shown in\u00a0<b><span style=\"color: #274e13;\">exercise1.m<\/span><\/b><\/span><span style=\"font-size: medium;\">.<br \/>\n<\/span><span style=\"font-size: medium;\"><br \/>\n<span style=\"font-size: medium;\">You can use the function\u00a0<b><span style=\"color: #274e13;\">displayRelevantVisualWords<\/span><\/b>\u00a0to display the image patches that correspond to the visual words which the classifier thinks are most related to the class (see the example embedded in\u00a0<b><span style=\"color: #274e13;\">exercise1.m<\/span><\/b>).<br \/>\n<\/span><br \/>\n<\/span><span style=\"font-size: medium;\"><b>Stage C: Classify the test images and assess the performance<\/b><\/p>\n<p>Now apply the learnt classifier to the test images. Again, you can look at the qualitative performance by using the classifier score to rank all the test images. Note that the bias term is not needed for this ranking, only the classification vector\u00a0<b>w.\u00a0<\/b><span style=\"color: #ff0000;\">Why?<\/span><\/p>\n<p>Now we will measure the retrieval performance quantitatively by computing a Precision-Recall curve. 
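Both the ranking and the Average Precision computation reduce to a few lines. Here is a hedged Python sketch (the practical itself is in MATLAB; `rank_images` and `average_precision` are hypothetical helpers, not functions from the provided code). A shared bias term would shift every score by the same constant, which is why it cannot change the ordering.

```python
import numpy as np

def rank_images(w, X):
    """Rank images by the linear SVM score w . x; adding a bias b adds
    the same constant to every score, leaving the order, and hence the
    precision-recall curve, unchanged."""
    scores = X @ w                 # one score per image (rows of X)
    return np.argsort(-scores)     # image indices, best-scoring first

def average_precision(ranked_labels):
    """AP from a ranked list of 0/1 relevance labels: the mean of the
    precision values at each rank where a positive image is retrieved."""
    labels = np.asarray(ranked_labels, dtype=float)
    hits = np.cumsum(labels)                           # positives seen so far
    precisions = hits / np.arange(1, len(labels) + 1)  # precision at each rank
    return float((precisions * labels).sum() / max(labels.sum(), 1))
```

This AP definition (averaging precision over the ranks of the positives) is the usual discrete approximation to the area under the precision-recall curve.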
Recall the definitions of Precision and Recall:<br \/>\n<\/span><\/p>\n<div id=\"k95c\"><img loading=\"lazy\" decoding=\"async\" alt=\"\" src=\"http:\/\/www.robots.ox.ac.uk\/~vgg\/share\/practical-image-classification_files\/File.jpeg\" width=\"711\" height=\"258\" \/><\/div>\n<p><span style=\"font-size: medium;\">The\u00a0<span style=\"font-size: medium;\">Precision-Recall curve<\/span>\u00a0is computed by varying the threshold on the classifier (from high to low) and plotting the values of precision against recall for each threshold value.<\/span><span style=\"font-size: medium;\">\u00a0In order to assess the retrieval performance by a single number (rather than a curve), the Average Precision (AP, the area under the curve) is often computed.<\/p>\n<p><span style=\"font-size: medium;\"><b>Stage D: Learn a classifier for the other classes and assess their performance<\/p>\n<p><\/b>Now repeat stages (<b>B<\/b>) and (<b>C<\/b>) for each of the other two classes: motorbikes and persons. To do this you can simply rerun\u00a0<b><span style=\"color: #274e13;\">exercise1.m<\/span><\/b>\u00a0after changing the dataset loaded at the beginning in stage (<b>A<\/b>). Remember to change both the training and test data. In each case record the AP performance measure.<br \/>\n<b><br \/>\n<\/b><\/span><\/span><\/p>\n<ul>\n<li><span style=\"font-size: medium;\"><span style=\"color: #ff0000;\">Does the AP performance match your expectations based on the variation of the class images?<\/span><\/span><\/li>\n<\/ul>\n<p><span style=\"font-size: medium;\"><br \/>\n<b>Stage E: Vary the image representation<\/p>\n<p><\/b>Up to this point, the image feature vector has used spatial tiling. Now, we are going to `turn this off&#8217; and see how the performance changes. In this part, the image will simply be represented by a single histogram recording the frequency of visual words (but not taking any account of their image position). 
This is a\u00a0<b>bag-of-visual-words<\/b>\u00a0representation.<\/p>\n<p><span style=\"font-size: medium;\">A spatial histogram can be converted back to a simple histogram by merging the tiles. Edit\u00a0<b><span style=\"color: #274e13;\">exercise1.m\u00a0<\/span><\/b>to turn on the part of the code that does so. Then evaluate the classifier performance on the test images.\u00a0<\/span><br \/>\n<b><br \/>\n<\/b><\/span><\/p>\n<ul>\n<li><span style=\"font-size: medium;\"><span style=\"color: #ff0000;\">Make sure you understand the reason for the change in performance.<\/span><\/span><\/li>\n<\/ul>\n<p><span style=\"font-size: medium;\"><br \/>\n<b>Stage F: Vary the classifier<\/p>\n<p><\/b>Up to this point we have used a linear SVM, treating the histograms representing each image as vectors normalized to a unit Euclidean norm. Now we will use a Hellinger kernel classifier, but instead of computing kernel values we will explicitly compute the feature map, so that the classifier remains linear (in the new feature space). The definition of the Hellinger kernel (also known as the Bhattacharyya coefficient) is<\/p>\n<p><\/span><\/p>\n<div id=\"ao1m\"><img decoding=\"async\" alt=\"\" src=\"http:\/\/www.robots.ox.ac.uk\/~vgg\/share\/practical-image-classification_files\/File_003.jpeg\" \/><\/div>\n<p><span style=\"font-size: medium;\"><br \/>\nwhere h and h&#8217; are normalized histograms.<\/p>\n<p>So, in fact, all that is involved in computing the feature map is taking the square root of the histogram values and normalizing the resulting vector to unit Euclidean norm.\u00a0<span style=\"font-size: medium;\"><\/p>\n<p><\/span><\/span><\/p>\n<ul>\n<li><span style=\"font-size: medium;\">Edit\u00a0<b>exercise1.m\u00a0<\/b>so that the square roots of the histograms are used for the feature vectors. 
Note: this involves writing a line of MATLAB code for the training and test histograms.<\/span><\/li>\n<li><span style=\"font-size: medium;\"><span style=\"color: #ff0000;\">Retrain the classifier for the\u00a0<\/span><span style=\"color: #ff0000; font-size: medium;\">aeroplane<\/span><span style=\"color: #ff0000;\">\u00a0class, and measure its performance on the test data.<\/span><\/span><\/li>\n<\/ul>\n<p><span style=\"font-size: medium;\"><b>\u00a0<\/b><\/span><\/p>\n<ul>\n<li><span style=\"font-size: medium;\"><span style=\"color: #ff0000;\">Make sure you understand why this procedure is equivalent to using the Hellinger kernel.<\/span><\/span><\/li>\n<li><span style=\"font-size: medium;\"><span style=\"color: #ff0000;\">Why is it an advantage to keep the classifier linear, rather than using a non-linear kernel?<br \/>\n<\/span><\/span><\/li>\n<li><span style=\"font-size: medium;\"><span style=\"color: #ff0000;\">Try removing the L2 normalization step. Does this affect the performance? Why?\u00a0<\/span><\/span><span style=\"font-size: medium;\"><span style=\"color: #ff0000;\">(Hint: the histograms are L1-normalized by construction)<\/span><\/span><\/li>\n<li><span style=\"font-size: medium;\"><span style=\"color: #ff0000;\">Go back to the linear kernel and remove the L2 normalization step. What do you observe?<\/span><\/span><\/li>\n<\/ul>\n<p><span style=\"font-size: medium;\"><br \/>\n<b>Note<\/b>: when learning the SVM,\u00a0<span style=\"font-size: medium;\">to save training time\u00a0<\/span>we are not changing the C parameter. This parameter influences the generalization error and should be learnt on a validation set when the kernel is changed.<\/p>\n<p><span style=\"font-size: medium;\"><b>Stage G: Vary the number of training images<\/b><\/p>\n<p>Up to this point we have used all the available training images. 
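The Stage F feature map above can be written out explicitly; the following is a Python sketch (the helper name is an assumption, and the corresponding MATLAB edit is a one-liner). For an L1-normalized histogram h, the squared L2 norm of sqrt(h) is sum(h) = 1, so in that case the final normalization has no effect, as the hint above suggests.

```python
import numpy as np

def hellinger_map(hists):
    """Explicit feature map for the Hellinger kernel: square-root each
    histogram (one per row), then L2-normalize, so that a linear kernel
    on the mapped vectors equals the Hellinger kernel on the originals."""
    mapped = np.sqrt(hists)
    norms = np.linalg.norm(mapped, axis=1, keepdims=True)
    return mapped / np.maximum(norms, 1e-12)   # guard against all-zero rows
```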
Now edit the\u00a0<b><span style=\"color: #274e13;\">fraction<\/span><\/b>\u00a0variable in\u00a0<span style=\"color: #274e13;\"><b>exercise1.m<\/b><\/span>\u00a0to use 10% and 50% of the training data.<\/p>\n<p><\/span><\/span><\/p>\n<ul>\n<li><span style=\"font-size: medium;\"><span style=\"color: #ff0000;\">What performance do you get with the linear kernel? And with the Hellinger kernel?<\/span><\/span><\/li>\n<li><span style=\"font-size: medium;\"><span style=\"color: #ff0000;\">Do you think the performance has `saturated&#8217; if all the training images are used, or would adding more training images give an improvement?<br \/>\n<\/span><\/span><\/li>\n<\/ul>\n<p><span style=\"font-size: medium;\"><br \/>\n<\/span><span style=\"font-size: medium;\"><span style=\"font-size: x-large;\">P<\/span><span style=\"font-size: x-large;\">art 2: Training an Image Classifier for Retrieval using Bing images<\/span><\/p>\n<p>In Part 1 of this practical the training data was provided and all the feature vectors pre-computed. The goal of this second part is to choose the training data yourself in order to optimize the classifier performance. The task is the following: you are given a large corpus of images and asked to retrieve images of a certain class, e.g. containing a bicycle. You then need to obtain training images, e.g. using Bing Image Search, in order to train a classifier for images containing bicycles and optimize its retrieval performance.<\/p>\n<p>The MATLAB code\u00a0<span style=\"font-size: medium;\"><b><span style=\"color: #274e13;\">exercise2.m\u00a0<\/span><\/b>provides the following functionality: it uses the images in the directory\u00a0<b><span style=\"color: #274e13;\">data\/myImages<\/span><\/b>\u00a0and the default negative list\u00a0<b><span style=\"color: #274e13;\">data\/background_train.txt<\/span><\/b>\u00a0to train a classifier and rank the test images. 
To get started, we will train a classifier for horses:<\/p>\n<p><\/span><\/span><\/p>\n<ul>\n<li><span style=\"font-size: medium;\"><span style=\"color: #ff0000;\">Use Bing image search with `horses&#8217; as the text query (you can also set the photo option on)<\/span><\/span><\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<ul>\n<li><span style=\"color: #ff0000;\">Pick 5 images and drag and drop (save) them into the directory\u00a0<\/span><b><span style=\"color: #274e13;\">data\/<\/span><span style=\"color: #274e13;\">myImages<\/span><\/b><span style=\"color: #ff0000;\">. These will provide the positive training examples.<\/span><\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<ul>\n<li><span style=\"color: #ff0000;\">Run the code<\/span>\u00a0<span style=\"font-size: medium;\"><b><span style=\"color: #274e13;\">exercise2.m<\/span><\/b><\/span><span style=\"color: #ff0000;\">\u00a0and view the ranked list of images. Note, since feature vectors must be computed for all the training images, this may take a few moments.<\/span><\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<ul>\n<li><span style=\"color: #ff0000;\">Now, add in 5 more images and retrain the classifier.<\/span><\/li>\n<\/ul>\n<p>The test data set contains 148 images with horses. Your goal is to train a classifier that can retrieve as many of these as possible in a high ranked position. You can measure your success by how many appear in the first 36 images (this performance measure is `precision at rank-36&#8242;). 
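Precision at rank-36 is simple to compute from the ranked relevance labels; the following Python sketch (with an assumed function name, since the practical reports this measure for you) makes the definition concrete:

```python
def precision_at_k(ranked_labels, k=36):
    """Fraction of the top-k ranked images that are relevant (label 1)."""
    top = ranked_labels[:k]
    return sum(top) / len(top)
```

Unlike AP, this measure only looks at the head of the ranked list, which matches the retrieval goal here: getting as many horses as possible into the first page of results.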
Here are some ways to improve the classifier:<\/p>\n<ul>\n<li><span style=\"color: #ff0000;\">Add more positive training images.<\/span><\/li>\n<li><span style=\"color: #ff0000;\">Add more positive training images, but choose these to be more varied than those you already have.<\/span><\/li>\n<\/ul>\n<p><b>Note<\/b>: all images are automatically normalized to a standard size, and descriptors are saved for<br \/>\neach new image added in the\u00a0<b>data\/cache<\/b>\u00a0directory.<\/p>\n<p>The test data also contains the category\u00a0<b>car<\/b>. Train a classifier for it and compare the difficulty of this class with that of the horse class.<span style=\"font-size: medium;\"><\/p>\n<p><span style=\"font-size: x-large;\">L<\/span><span style=\"font-size: x-large;\">inks and further work:<\/p>\n<p><\/span><\/span><\/p>\n<ul>\n<li><span style=\"font-size: medium;\">The code for this practical is written using the software package\u00a0<b>VLFeat.\u00a0<\/b>This is a software library written in MATLAB and C, and is freely available as source code and binary, see http:\/\/www.vlfeat.org\/.<\/span><\/li>\n<li><span style=\"font-size: medium;\">The images for this practical are taken from the\u00a0<b>PASCAL VOC 2007<\/b>\u00a0benchmark, see http:\/\/pascallin.ecs.soton.ac.uk\/challenges\/VOC\/voc2007\/<\/span><\/li>\n<li><span style=\"font-size: medium;\">If there is a significant difference between the training and test performance, then that indicates overfitting. 
The difference can often be reduced, and the test performance (generalization) improved by changing the SVM C parameter.\u00a0<span style=\"font-size: medium;\">In Part I, vary the C parameter in the range 0.1 to 1000 (the default is C=100), and plot the AP on the training and test data as C varies for the linear and Hellinger kernels.<br \/>\n<\/span><\/span><\/li>\n<\/ul>\n<hr \/>\n<div><i>Acknowledgements:<br \/>\n<\/i><\/p>\n<ul>\n<li><span style=\"font-size: medium;\"><i>Guidance from Ivan Laptev and Josef Sivic<\/i><\/span><\/li>\n<li><span style=\"font-size: medium;\"><i>Comments from Relja Arandjelovic, Yusuf Aytar and Varun Gulshan<br \/>\n<\/i><\/span><\/li>\n<li><span style=\"font-size: medium;\"><i>Funding from ERC grant\u00a0<a href=\"http:\/\/www.robots.ox.ac.uk\/~vgg\/projects\/visrec\/\">VisRec\u00a0<\/a>Grant No. 228180<\/i><\/span><span style=\"font-size: medium;\"><br \/>\n<\/span><\/li>\n<\/ul>\n<p><i><br \/>\nHistory:<br \/>\n<\/i><\/p>\n<ul>\n<li><span style=\"font-size: medium;\"><i>Used at\u00a0<a href=\"http:\/\/www.di.ens.fr\/willow\/events\/cvml2011\/\">ENS\/INRIA Visual Recognition and Machine Learning Summer School, 2011<\/a><\/i><\/span><\/li>\n<li><span style=\"font-size: medium;\"><i>Used at\u00a0<a href=\"http:\/\/summerschool2011.graphicon.ru\/en\/courses\">Microsoft Computer Vision School, Moscow, 2011<\/a><\/i><\/span><\/li>\n<li><span style=\"font-size: medium;\"><i>Bug correction due to Oleg Tishutin<br \/>\n<\/i><\/span><\/li>\n<li><\/li>\n<\/ul>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Goal\u00a0 In image classification, an image is classified according to its visual content. For example, does it contain an airplane or not. 
An important application&hellip; <\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[9],"tags":[],"class_list":["post-353","post","type-post","status-publish","format-standard","hentry","category-image-processing"],"_links":{"self":[{"href":"https:\/\/zhuoyao.net\/index.php\/wp-json\/wp\/v2\/posts\/353","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/zhuoyao.net\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/zhuoyao.net\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/zhuoyao.net\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/zhuoyao.net\/index.php\/wp-json\/wp\/v2\/comments?post=353"}],"version-history":[{"count":0,"href":"https:\/\/zhuoyao.net\/index.php\/wp-json\/wp\/v2\/posts\/353\/revisions"}],"wp:attachment":[{"href":"https:\/\/zhuoyao.net\/index.php\/wp-json\/wp\/v2\/media?parent=353"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/zhuoyao.net\/index.php\/wp-json\/wp\/v2\/categories?post=353"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/zhuoyao.net\/index.php\/wp-json\/wp\/v2\/tags?post=353"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}