<?xml version="1.0" encoding="utf-8"?>
<!--
                                                                                     
 h       t     t                ::       /     /                     t             / 
 h       t     t                ::      //    //                     t            // 
 h     ttttt ttttt ppppp sssss         //    //  y   y       sssss ttttt         //  
 hhhh    t     t   p   p s            //    //   y   y       s       t          //   
 h  hh   t     t   ppppp sssss       //    //    yyyyy       sssss   t         //    
 h   h   t     t   p         s  ::   /     /         y  ..       s   t    ..   /     
 h   h   t     t   p     sssss  ::   /     /     yyyyy  ..   sssss   t    ..   /     
                                                                                     
	<https://y.st./>
	Copyright © 2015 Alex Yst <mailto:copyright@y.st>

	This program is free software: you can redistribute it and/or modify
	it under the terms of the GNU General Public License as published by
	the Free Software Foundation, either version 3 of the License, or
	(at your option) any later version.

	This program is distributed in the hope that it will be useful,
	but WITHOUT ANY WARRANTY; without even the implied warranty of
	MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
	GNU General Public License for more details.

	You should have received a copy of the GNU General Public License
	along with this program. If not, see <https://www.gnu.org./licenses/>.
-->
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
	<head>
		<base href="https://y.st./en/weblog/2015/12-December/26.xhtml"/>
		<title>Work on my onion search engine begins &lt;https://y.st./en/weblog/2015/12-December/26.xhtml&gt;</title>
		<link rel="icon" type="image/png" href="/link/CC_BY-SA_4.0/y.st./icon.png"/>
		<link rel="stylesheet" type="text/css" href="/link/main.css"/>
		<script type="text/javascript" src="/script/javascript.js"/>
		<meta name="viewport" content="width=device-width"/>
	</head>
	<body>
<nav>
	<p>
		<a href="/en/coursework/">Coursework</a> |
		<a href="/en/take-down/">Take-down requests</a> |
		<a href="/en/">Home</a> |
		<a href="/en/a/about.xhtml">About</a> |
		<a href="/en/a/contact.xhtml">Contact</a> |
		<a href="/a/canary.txt">Canary</a> |
		<a href="/en/URI_research/"><abbr title="Uniform Resource Identifier">URI</abbr> research</a> |
		<a href="/en/opinion/">Opinions</a> |
		<a href="/en/law/">Law</a> |
		<a href="/en/recipe/">Recipes</a> |
		<a href="/en/a/links.xhtml">Links</a> |
		<a href="/en/weblog/2015/12-December/26.xhtml.asc">{this page}.asc</a>
	</p>
	<hr/>
	<p>
		Weblog index:
		<a href="/en/weblog/memories">Memories</a> |
		<a href="/en/weblog/"><abbr title="American Standard Code for Information Interchange">ASCII</abbr> calendars</a> |
		<a href="/en/weblog/index_ol_ascending.xhtml">Ascending list</a> |
		<a href="/en/weblog/index_ol_descending.xhtml">Descending list</a>
	</p>
	<hr/>
	<p>
		Jump to entry:
		<a href="/en/weblog/2015/03-March/07.xhtml">&lt;&lt;First</a>
		<a rel="prev" href="/en/weblog/2015/12-December/25.xhtml">&lt;Previous</a>
		<a rel="next" href="/en/weblog/2015/12-December/27.xhtml">Next&gt;</a>
		<a href="/en/weblog/latest.xhtml">Latest&gt;&gt;</a>
	</p>
	<hr/>
</nav>
		<header>
			<h1>Work on my onion search engine begins</h1>
			<p>Day 00294: <time>Saturday, 2015 December 26</time></p>
		</header>
<p>
	<a href="/en/weblog/2015/12-December/23.xhtml">Three days ago</a>, I added a new navigation bar to my weblog pages.
	At that time, my compile script rebuilt all of my weblog pages.
	The next day, my compile script rebuilt all the pages again.
	Confused, I figured that I had messed something up, and that the initial rebuild had not happened for some reason.
	But last night, it happened again, and I now know what happened.
	Because I added a link to the latest weblog entry to every weblog page, every weblog page will now be recompiled every day.
	This is not what I wanted, but it is what I asked the computer to do.
	And now, because my website is composed entirely of static files, I do not know how to remedy this.
	If I had dynamic files, I could have the &quot;latest page&quot; link always point to a single page that I update daily, one that always redirects to the latest weblog entry.
	I do not want to break my policy of only using static files here though.
	If this is the future of this website, it means two things.
	First, the approaching deadline of the first of next month no longer has meaningful context, and is therefore invalid, as signatures on weblog pages will be updated daily.
	In other words, I now have no deadline.
	Second, the number of disk writes performed each day is going to keep increasing.
	It will grow to ridiculous levels.
	I cannot allow this to be.
	I need to find a way to fix this, and I need to do it by the first of the month so as not to miss the deadline that fixing this preserves.
</p>
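<p>
	The recompilation trap described above comes down to a timestamp comparison.
	Below is a minimal sketch, assuming a make-style rule in which a page is rebuilt whenever any of its inputs is newer than its output; the function name and structure here are my own inventions for illustration and may not match how my compile script actually decides what to rebuild.
</p>

```php
<?php
// Hypothetical make-style dependency check.  A page is rebuilt when
// any of its inputs, including the shared navigation data naming the
// latest entry, is newer than the page's output file.
function needs_rebuild(string $output, array $inputs): bool
{
	if (!file_exists($output)) {
		return true;
	}
	$built = filemtime($output);
	foreach ($inputs as $input) {
		if (filemtime($input) > $built) {
			return true;
		}
	}
	return false;
}
// Because the "latest entry" hyperlink is an input to every weblog
// page, each day's new entry makes every page's input set newer than
// its output, so every page is rebuilt.
```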
<p>
	So far, I have come up with three ways to accomplish my goal of not rebuilding the entire weblog each day.
	The first is to set up that redirect page that always points to the latest weblog entry.
	This is suboptimal because it involves adding a scripted page to a website otherwise composed of only static files.
	The second is to create a new page that always displays a second copy of the latest weblog entry.
	This is suboptimal because it involves adding a redundant page.
	The third idea is to use the <code>&lt;object/&gt;</code> tag in the navigation to display the ever-changing &quot;latest entry&quot; hyperlink.
	This is suboptimal because it not only stupidly puts a single hyperlink in a window in the otherwise-clean navigation, but also will not display in text-based Web browsers.
	Of course, there is always the option of generating the hyperlink via JavaScript, but JavaScript should never be used for something as important as navigation.
	JavaScript use must <strong>*always*</strong> be optional for the user, while navigation should <strong>*always*</strong> be usable by any client that can understand basic <code>&lt;a/&gt;</code> tags.
</p>
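<p>
	For completeness, the redirect page of the first option could be as small as the sketch below.
	It assumes the compile script writes the path of the newest entry to a one-line text file, here called <code>latest-entry.txt</code> (a name I am inventing for illustration); only that tiny file would change daily, so no weblog page would need rebuilding.
</p>

```php
<?php
// Hypothetical scripted redirect page for the first option.  The
// compile script would rewrite the one-line listing file each day;
// this page merely reads it and redirects.
function latest_entry_target(string $listing): string
{
	return trim(file_get_contents($listing));
}

if (is_file('latest-entry.txt')) {
	// 302, not 301: the target changes daily, so it must not be
	// cached by clients as permanent.
	header('Location: ' . latest_entry_target('latest-entry.txt'), true, 302);
}
```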
<p>
	While trying to work through what to do about my weblog pages and their incessant recompilation, I started work on my onion search engine.
	I could not get <abbr title="PHP: Hypertext Preprocessor">PHP</abbr> to use the assigned proxy for its <abbr title="Domain Name System">DNS</abbr> queries when using functions such as <a href="https://secure.php.net/manual/en/function.file-get-contents.php"><code>file_get_contents()</code></a>.
	Nate from <a href="ircs://irc.oftc.net:6697/%23php">#php</a> recommended instead using <abbr title="PHP: Hypertext Preprocessor">PHP</abbr>&apos;s <a href="https://php.net/manual/en/ref.curl.php"><abbr title="Client for URLs/Client URL Request Library/Curl URL Request Library">cURL</abbr> functions</a>.
	Accessing these functions required installing Debian&apos;s <a href="apt:php5-curl"><code>php5-curl</code></a> package.
	While these functions worked the first time without any struggle whatsoever, they involve passing around a resource handle, which I have always thought of as a messy way to do things.
	Additionally, retrieving multiple Web pages requires modifying the resource handle between each Web request, which seems even more messy.
	The obvious answer, it seemed to me, was to build a wrapper class for the whole thing.
	Most of the way through, I realized that having my wrapper class behave exactly as I needed it to for today&apos;s use case was short-sighted.
	I had thrown in a method that behaved oddly if you were not expecting it, as it modified the resource handle in the same function call that returned data from a Web request.
	I had also created a strange constructor method that made for less work for me, but did not behave as the regular <abbr title="Client for URLs/Client URL Request Library/Curl URL Request Library">cURL</abbr> resource handle initiation function did.
	I decided that the best option was to split my class into two different classes.
	The first class does all the <abbr title="Client for URLs/Client URL Request Library/Curl URL Request Library">cURL</abbr> resource handle wrapping.
	It seems though that the <abbr title="Client for URLs/Client URL Request Library/Curl URL Request Library">cURL</abbr> <abbr title="PHP: Hypertext Preprocessor">PHP</abbr> module adds three types of resource handles, each with their own functions, so one class should not handle them all.
	This class handles only the main type, and if other resource handles are needed later, I can build more classes.
	The second class I built extends the first class, modifies the constructor to do what I want it to, and adds the function to modify the resource handle and retrieve a Web page.
</p>
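<p>
	The two-class split might look something like the sketch below.
	The class and method names are my own inventions for illustration, not necessarily the names I used, and I am assuming a Tor SOCKS proxy listening on the conventional <code>127.0.0.1:9050</code>.
	The key detail is <code>CURLPROXY_SOCKS5_HOSTNAME</code>, which hands hostname resolution to the proxy itself; that is exactly what I could not get <code>file_get_contents()</code> to do.
</p>

```php
<?php
// Hypothetical sketch of the two-class split: a thin wrapper around
// the main cURL handle, and a subclass that configures the proxy and
// adds a one-call page fetch.
class CurlHandleWrapper
{
	protected $handle;

	public function __construct()
	{
		$this->handle = curl_init();
	}

	public function setOption(int $option, $value): bool
	{
		return curl_setopt($this->handle, $option, $value);
	}

	public function execute()
	{
		return curl_exec($this->handle);
	}
}

class ProxiedFetcher extends CurlHandleWrapper
{
	public function __construct(string $proxy)
	{
		parent::__construct();
		// SOCKS5_HOSTNAME makes the proxy, not the local
		// resolver, perform DNS lookups, keeping queries for
		// onion addresses off the local network.
		$this->setOption(CURLOPT_PROXY, $proxy);
		$this->setOption(CURLOPT_PROXYTYPE, CURLPROXY_SOCKS5_HOSTNAME);
		$this->setOption(CURLOPT_RETURNTRANSFER, true);
	}

	// Sets the URI and performs the request in one call, so the
	// caller never touches the underlying handle between fetches.
	public function fetch(string $uri)
	{
		$this->setOption(CURLOPT_URL, $uri);
		return $this->execute();
	}
}
```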
<p>
	My <a href="/a/canary.txt">canary</a> still sings the tune of freedom and transparency.
</p>
		<hr/>
		<p>
			Copyright © 2015 Alex Yst;
			You may modify and/or redistribute this document under the terms of the <a rel="license" href="/license/gpl-3.0-standalone.xhtml"><abbr title="GNU&apos;s Not Unix">GNU</abbr> <abbr title="General Public License version Three or later">GPLv3+</abbr></a>.
			If for some reason you would prefer to modify and/or distribute this document under other free copyleft terms, please ask me via email.
			My address is in the source comments near the top of this document.
			This license also applies to embedded content such as images.
			For more information on that, see <a href="/en/a/licensing.xhtml">licensing</a>.
		</p>
		<p>
			<abbr title="World Wide Web Consortium">W3C</abbr> standards are important.
			This document conforms to the <a href="https://validator.w3.org./nu/?doc=https%3A%2F%2Fy.st.%2Fen%2Fweblog%2F2015%2F12-December%2F26.xhtml"><abbr title="Extensible Hypertext Markup Language">XHTML</abbr> 5.2</a> specification and uses style sheets that conform to the <a href="http://jigsaw.w3.org./css-validator/validator?uri=https%3A%2F%2Fy.st.%2Fen%2Fweblog%2F2015%2F12-December%2F26.xhtml"><abbr title="Cascading Style Sheets">CSS</abbr>3</a> specification.
		</p>
	</body>
</html>

