
Department of Computer Science
Telephone: none
Fax: none
Email: p.fenwick@auckland.ac.nz

Life Member IEEE 

After about 20 years in the Department of Computer Science (and 12 years before that in the Computer Centre) at the University of Auckland, I have taken early retirement from January 2005. My retiring rank of Associate Professor in the Department of Computer Science continues with a position as Honorary Associate Professor until the end of 2012, allowing me to maintain a relationship with the University. My University email and other addresses (but not telephone) should continue to be available, but will be handled less often.

Important notice: I am NOT available for supervising student projects or similar activities. This most definitely includes internships for foreign students, so don't even bother asking.
28 Nov 2014
This book has just been published (2014) by Bentham Publications. It provides an introduction, from first principles, to the internal (bitwise) representation of integers, real numbers, characters and text, checksums, and the universal coding of integers. It has about 270 pages and is available both as an eBook (eISBN 9781608058822, 2014) and as a printed copy ("Print-on-Demand" ISBN 9781608058839). While not a book for beginners, it is intended for reference by advanced students and professionals.
You may download a 1-page flier. The primary link includes ordering information, the ability to download individual chapters, and a free "sample" consisting of the first 3–4 pages of each chapter or other section. The first 3 sections (which are all free) give a good overview of the book.
My primary interests are in computer architecture (including arithmetic), communications and networks, and text compression.
Recently I have been concentrating on the new "block sorting" text compressor (or block reduction, or Burrows–Wheeler Transform), first described by Burrows and Wheeler in April 1994. Early results on this are reported in Tech Report 111 and Tech Report 120, with a final report in Technical Report 130. Following from that is work in "symbol ranking" text compression.
The complete source code of a fast constant-order symbol ranking compressor is available as srank.c.
Other recent related work has involved fast text searching, and universal or variable-length integer codes based on Fibonacci numbers and prime numbers.
And even more recently I have returned to the work that started me in computing 40 years ago (!) on the solution of the potential distribution in a Hall effect plate for a magnetometer.
A recent survey identified me as a "Top Scholar in Systems and Software Engineering", based on a publications count in important international journals. See "An assessment of systems and software engineering scholars and institutions (1998–2002)", R.L. Glass, T.Y. Chen, The Journal of Systems and Software, Vol 68 (2003), pp 77–84.
I have written four chapters in the "Lossless Data Compression Handbook" (ed Khalid Sayood), Academic Press, 2003. ISBN 0126208611
Universal Codes
This chapter describes all that I have been able to find about Universal Codes (or Variable Length Codes for the Integers). There is a surprising variety of codes (and some varieties of surprising codes).
Burrows Wheeler Compression
A comprehensive review of the work to date on Burrows Wheeler compression.
Symbol Ranking and ACB Compression
Essentially a revision of earlier work described in Software – Practice and Experience and J.UCS.
Compression of Unicode Files
A brief discussion of the problems of compressing Unicode files, based on the work for the 1998 Data Compression Conference.
Some of my recent ideas on Burrows Wheeler compression were presented at a workshop at Rutgers University in August 2004, celebrating the 10th anniversary of the Burrows–Wheeler Transform. Here are a preliminary report and the presentation slides, which include later work and go somewhat beyond the report.
The essence of this material has now been published in "Burrows–Wheeler compression: Principles and reflections", Theoretical Computer Science, Vol 387 (2007), pp 200–219 (see my publications list below for an abstract). You can read a preprint of this paper; the "preliminary report" above contains material that was omitted to save space when two earlier papers were merged into the final version.
The ideas fall into three groups, in order of decreasing work done.
Use a BW transform to derive all contexts of some fixed order, say 4, and use these to drive an escape-free PPM compressor/decompressor. This is the main substance of the report and presentation; it worked, but not nearly as well as hoped.
"Break open" the reverse BW transform to generate partial contexts during both compression and decompression, so aiding the operation of the coder and decoder.
The report starts by developing analogies between the Fourier Transform and its associated spectra, autocorrelations, etc, on the one hand, and the Burrows Wheeler transform and PPM on the other. Invoking ideas of symmetry leads to a significant new understanding of the Burrows Wheeler operation. (I have long thought that we did not really understand the Burrows–Wheeler Transform. But the forward transform is essentially simple, at least in concept, and showed little scope for improvement. With these ideas the reverse transform looks the more likely candidate for further study and deeper understanding. At least I hope so.)
Is there some relation between data compression and error correction codes? Compression can be thought of as a very noisy channel (encoding a single symbol into a range of possible symbols), followed by generation of the codes to select the correct symbol from the "noise" codes; the encoder and decoder track each other and the "corrections" are what the compressor emits. (The ideas follow most readily from my earlier work on "Symbol Ranking" compression, which uses an explicit symbol predictor/corrector mechanism.)
This third idea may provide a useful starting point for somebody with more imagination than me. It leads to two other ideas relating to the possible relationship between Error Correction and compression.
Can we omit some symbols (giving an erasure code) and then use a Viterbi-like trellis to recover feasible decodings? (Whew!)
The relatively new error-correcting "turbo codes" use two interacting but complementary error-correcting codes to give superlative performance. Can something similar be done for compression? (It also ties in with the previous point.)
But do we have suitably different compressors? The two compressors must work with the same symbol order, probably precluding a BW and PPM combination. But of those which retain the original "text" order, PPM and LZ77 were shown to be equivalent by Bell, Cleary & Witten (in their book "Text Compression"). So until somebody invents, say, a symbol-by-symbol compressor which does not depend on contexts, there may be no suitable complementary candidates!
If you have ideas, please try them!
Success!
The vague ideas above eventually (eventually) led to a PPM compressor without escapes. The unsuccessful ideas included 2-pass compressors which found all contexts and their contents on the first pass, and then transmitted this information as needed on a second, compression, pass. The cure was worse than the disease.
The version that finally succeeded used binary contexts and a bitwise compressor. The contexts in normal PPM (more correctly, the contents of the contexts) have three possible conditions: empty or unused, partly full, and fully populated. It is the last two that require escapes, to add hitherto-absent symbols; I had hoped to eliminate the need for escapes in fully-populated contexts. On the other hand, a binary context has only two states, absent and populated, with no need for escapes from within a context.
The final compressor uses byte contexts (as usual for PPM), but emits each byte as a sequence of bits, with bitwise contexts for each symbol within each PPM order. If a particular context is new, the coder and decoder can both detect the absent context and silently (without escapes) descend to a known bitwise context at a lower order. After processing the bit at this order, coder and decoder ascend to higher orders, creating new binary contexts as needed.
The final compressor gives quite acceptable compression, between the amounts given by GZIP and BZIP2. (Details given below.)
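As a toy illustration of the escape-free idea (a hypothetical sketch of my own, not the published compressor), the model below keeps bit counts keyed by (order, byte context, bit prefix). When a key is absent, the coder and decoder both see the same absence and silently drop to a lower order, so no escape ever needs to be transmitted:

```python
class EscapeFreePPM:
    """Toy bitwise-context model: no escape symbols are needed because
    an absent context is equally visible to coder and decoder."""

    def __init__(self, max_order=2):
        self.max_order = max_order
        # counts[(order, context_bytes, bit_prefix)] = [zeros, ones]
        self.counts = {}

    def predict(self, history, bit_prefix):
        """P(next bit = 1), descending to lower orders for unseen contexts."""
        for order in range(min(self.max_order, len(history)), -1, -1):
            key = (order, history[len(history) - order:], bit_prefix)
            if key in self.counts:
                c0, c1 = self.counts[key]
                return (c1 + 0.5) / (c0 + c1 + 1.0)
        return 0.5  # nothing seen at any order: uniform guess

    def update(self, history, bit_prefix, bit):
        """After coding a bit, create/update the contexts at every order."""
        for order in range(min(self.max_order, len(history)) + 1):
            key = (order, history[len(history) - order:], bit_prefix)
            self.counts.setdefault(key, [0, 0])[bit] += 1
```

Here `history` is the preceding bytes and `bit_prefix` the bits of the current byte already coded; a real compressor would feed these probabilities to an arithmetic coder.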
(The links for the electronic journal J.UCS, the "Journal for Universal Computer Science", are to the J.UCS page for the appropriate month. The journal contains compressed PostScript files which may be copied and printed.)
Most of my papers and reports are available by anonymous ftp from ftp.cs.auckland.ac.nz/pub/staff/peterf
"PPM Compression without Escapes", Software – Practice and Experience, online: 25 Apr 2011, DOI: 10.1002/spe.1070; printed Vol 42, No 2, pp 255–260, February 2012.
Abstract
A significant cost in PPM data compression (and often the major cost) is the provision and efficient coding of escapes while building contexts. This paper presents some recent work on eliminating escapes in PPM compression, using bitwise compression with binary contexts. It shows that PPM without escapes can achieve averages of 2.5 bits per character on the Calgary Corpus and 2.2 bpc on the Canterbury Corpus, both values comparing well with accepted good compressors.
"Burrows–Wheeler compression: Principles and reflections", Theoretical Computer Science, Vol 387 (2007), pp 200–219.
Abstract
After a general description of the Burrows Wheeler Transform and a brief survey of recent work on processing its output, the paper examines the coding of the zero-runs from the MTF recoding stage, an aspect with little prior treatment. It is concluded that the original scheme proposed by Wheeler is extremely efficient and unlikely to be much improved. The paper then proposes some new interpretations and uses of the Burrows Wheeler transform, with new insights and approaches to lossless compression, perhaps including techniques from error correction.
A preprint of this paper is available from here.
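Wheeler's zero-run scheme (familiar from bzip2's RUNA/RUNB symbols) writes the run length in bijective base 2, with digits of value 1 and 2, so a run of length n costs only about log2(n) symbols. A small sketch, with hypothetical symbol names:

```python
def encode_zero_run(run_length):
    """Encode a zero-run length (>= 1) as RUNA/RUNB digits,
    least significant digit first (bijective base 2: RUNA=1, RUNB=2)."""
    digits = []
    n = run_length
    while n > 0:
        if n % 2 == 1:
            digits.append("RUNA")
            n = (n - 1) // 2
        else:
            digits.append("RUNB")
            n = (n - 2) // 2
    return digits

def decode_zero_run(digits):
    """Inverse: sum each digit's value times 2**position."""
    return sum((1 if d == "RUNA" else 2) << i for i, d in enumerate(digits))
```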
"Burrows Wheeler – Alternatives to Move to Front", authors: Dr Peter Fenwick, Dr Mark Titchener, Michelle Lorenz. Presented at the Data Compression Conference DCC2003.
Abstract
The Burrows Wheeler Transform, first published in 1994, is a relatively new approach to text compression and has already proven to produce excellent results. However, there has been much research directed towards improving the efficiency of the Move to Front algorithm, with varying degrees of complexity. This paper examines a relatively simple technique using a cached modeling system proposed by Fenwick, and achieves very promising results.
Information loss during the Burrows Wheeler Transform is also examined, using Deterministic Information Theory measuring techniques developed by Titchener. This provides an interesting insight into what is actually occurring during the complete Burrows Wheeler Transform, and can confirm whether manipulating the MTF algorithm is the correct approach to refining an already successful compression scheme.
Full papers in PDF and in PostScript. (Please tell me if you have problems with these files. On my computer the Acrobat Reader has some trouble with fonts, but the hardcopy print is fine.)
"Burrows Wheeler Compression with Variable Length Integer Codes", Software – Practice and Experience, Vol 32, No 13, Nov 2002, pp 1307–1316.
Abstract
The final coder in Burrows–Wheeler compression is usually either an adaptive Huffman coder (for speed) or a complex of arithmetic coders for better compression. This article describes the use of conventional pre-defined variable length codes, or universal codes, and shows that they too can give excellent compression. The paper also describes a "sticky Move-to-Front" modification which gives a useful improvement in compression for most files.
"Variable-Length Integer Codes Based on the Goldbach Conjecture, and Other Additive Codes", IEEE Trans. Inform. Theory, Vol 48, No 8, pp 2412–2417, Aug 2002.
Abstract
This paper introduces a new family of variable length codes for the integers, initially based on the Goldbach conjecture that every even integer is the sum of two primes. For an even integer we decompose the value into its two constituent primes and encode the ordinal numbers of those primes with an Elias gamma code. The method is then elaborated to handle odd integers. The paper then develops a more general method of encoding any integer as the sum of two integers, and of developing suitable basis sets of integers. Although the codes generated by these methods are characterised by widely-varying and unpredictable lengths, they are over some ranges shorter than most other variable length codes.
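To make the even-integer case concrete, here is an illustrative sketch (my own reading of the description above, not the paper's code): decompose an even n into two primes, then emit the Elias gamma codes of their ordinal numbers in the prime sequence 2, 3, 5, 7, ...:

```python
def elias_gamma(n):
    """Elias gamma code for n >= 1: (length - 1) zeros, then n in binary."""
    b = bin(n)[2:]
    return "0" * (len(b) - 1) + b

def primes_up_to(limit):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [i for i, is_p in enumerate(sieve) if is_p]

def goldbach_code(n):
    """Code for an even n >= 4: gamma codes of the ordinal numbers of
    the first Goldbach pair p + q = n (p <= q)."""
    primes = primes_up_to(n)
    prime_set = set(primes)
    for i, p in enumerate(primes, start=1):
        if n - p in prime_set:
            j = primes.index(n - p) + 1
            return elias_gamma(i) + elias_gamma(j)
    raise ValueError("no Goldbach decomposition found")
```

For example, 10 = 3 + 7, the 2nd and 4th primes, giving gamma(2) followed by gamma(4). The published codes handle odd integers and other refinements not shown here.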
"Zeckendorf Integer Arithmetic", Fibonacci Quarterly, Nov 2003, pp 405–413.
For something quite different.... Fibonacci numbers are the basis of the "Zeckendorf" number representation, which looks like a binary number but has no consecutive 1 bits. (By sending the Zeckendorf representation least-significant bit first and following its most significant 1-bit with another 1-bit, the illegal pair 11 acts as a terminator. The result is a very good variable-length code for integers.) However, can we perform arithmetic on these Zeckendorf representations? This paper shows that we can, giving procedures for addition, subtraction (and complementing), multiplication and division. (Preprint in PDF format.)
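The terminated code sketched above is usually called the Fibonacci code; a minimal version (greedy Zeckendorf decomposition, then reversal and a terminating 1-bit):

```python
def zeckendorf(n):
    """Greedy Zeckendorf decomposition of n >= 1 over the Fibonacci
    numbers 1, 2, 3, 5, 8, ...; returns bits MSB first, never '11'."""
    fibs = [1, 2]
    while fibs[-1] < n:
        fibs.append(fibs[-1] + fibs[-2])
    bits = []
    for f in reversed(fibs):
        if f <= n:
            bits.append(1)
            n -= f
        else:
            bits.append(0)
    while bits and bits[0] == 0:  # strip leading zeros
        bits.pop(0)
    return bits

def fibonacci_code(n):
    """Zeckendorf bits least-significant first, plus a trailing 1,
    so the (otherwise illegal) pair '11' terminates the code."""
    bits = zeckendorf(n)[::-1]
    return "".join(map(str, bits)) + "1"
```

For example, 4 = 3 + 1 has Zeckendorf form 101, giving the code 1011; the only occurrence of 11 is the terminator.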
"Some Perils of Performance Prediction: a case study on Pattern Matching", Software – Practice and Experience, Vol 31, pp 835–843, April 2001.
Abstract
Recent experience with string searching or pattern matching algorithms revealed wide discrepancies between predicted and observed performance. Further observations on a variety of computers revealed even greater variations between those algorithms on different computers. These observations are collected here as an example of problems in real-world comparison of algorithms.
This is a companion to the following paper.
"Fast string matching for multiple searches", Software – Practice and Experience, Vol 31, pp 815–833, April 2001.
Abstract
We present a string matching or pattern matching method which is especially useful when a single block of text must be searched repeatedly for different patterns. The method combines linking the text according to digrams, searching on the least-frequent digram, and probing selected characters as a preliminary filter before full pattern comparison. Tests on real alphabetic data show that the number of character comparisons may be decreased by two orders of magnitude compared with Knuth–Morris–Pratt and similar searching, but with an initialisation overhead comparable to 5 to 10 conventional searches.
But see also the previous paper.
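A toy sketch of the core idea (the function names are mine, and this omits the paper's character-probing filter): index every digram position once, then search each pattern via its least frequent digram and verify candidates:

```python
from collections import defaultdict

def build_digram_index(text):
    """Record every position of each digram (adjacent character pair).
    Built once; amortised over many searches of the same text."""
    index = defaultdict(list)
    for i in range(len(text) - 1):
        index[text[i:i + 2]].append(i)
    return index

def search(text, index, pattern):
    """Scan only the positions of the pattern's least frequent digram,
    then verify the full match at each candidate."""
    if len(pattern) < 2:
        return [i for i in range(len(text)) if text.startswith(pattern, i)]
    # choose the pattern digram with the fewest occurrences in the text
    k = min(range(len(pattern) - 1),
            key=lambda j: len(index[pattern[j:j + 2]]))
    return [p - k for p in index[pattern[k:k + 2]]
            if p >= k and text.startswith(pattern, p - k)]
```

Searching repeatedly is cheap because only the (usually short) position list of one rare digram is scanned per search.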
"Symbol Ranking Text Compressors: Review and Implementation", Software – Practice and Experience, Vol 28, No 5, pp 547–559, April 1998.
This paper reviews several previous text compressors with 'symbol ranking' as a unifying theme. It includes the compressor described in Tech Rep 145.
"Compression of Unicode Files", DCC98 Data Compression Conference, Snowbird, Utah, March 1998.
[A poster was presented at DCC98 on the compression of Unicode files, showing that they can behave rather differently from conventional byte-oriented data. Experiments with a compressor using 16-bit data show that it often works better than standard byte compressors for Unicode data.] A PDF version of the paper is also available.
"Symbol Ranking Text Compression with Shannon Recodings", J.UCS Vol 3, No 2, pp 70–85, Feb 1997.
[A general introduction to "symbol ranking" text compression and a description of a proof-of-concept compressor. An extended version of Tech Rep 132.]
"The Burrows–Wheeler Transform for Block Sorting Text Compression – Principles and Improvements", The Computer Journal, Vol 39, No 9, pp 731–740, 1996.
[This is the paper which was written as the definitive summary of my experience with block sorting compression.]
"Queue Prediction: an efficient scheduler for fast ATM cell transmission", ACSC'97, Sydney, Australia, Feb 1997.
[This paper describes a new technique for controlling ATM cell transmission.]
"Block Sorting Text Compression", ACSC'96, Melbourne, Australia, Jan 1996.
[This paper is the general overview of block sorting compression.]
"Differential Ziv-Lempel Text Compression", J.UCS Vol 1, No 8, pp 587–598, Aug 1995.
[An account of an attempt to combine Ziv-Lempel (lossless) compression with Vector Quantisation (lossy) compression. It includes some predictions on the limits of LZ compression.]
"Huffman code efficiencies for extensions of sources", IEEE Trans. Comm., Vol 43, No 2, pp 163–165, Feb 1995.
[Although coding the extensions of sources is a fundamental concept in Information Theory, there seems to be no work describing what actually happens in practice. This paper arose from a student assignment which showed that increasing the code extension can lead to a poorer code! Also available as Tech Rep 89.]
"High-radix Division with Approximate Quotient-digit Estimation", Journal for Universal Computer Science (J.UCS), Vol 1, No 1, pp 2–22, Jan 1995.
[A major problem in high-radix division is estimating the correct digit (and divisor multiple). While it is easy to get the right answer most of the time, it is much harder to get it right all of the time. This paper shows that allowing a bad estimate to be corrected can lead to much simpler control logic, with very little speed penalty.]
"A new data structure for cumulative frequency tables", Software – Practice and Experience, Vol 24, No 3, pp 327–336, Mar 1994.
[This paper describes a new data structure which is especially designed to handle the cumulative frequency tables of arithmetic compression. See also Tech Rep 88 & 110.]
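This structure is now widely known as the binary indexed tree, or Fenwick tree. A minimal sketch of its two core operations, each O(log n):

```python
class FenwickTree:
    """Binary indexed tree: O(log n) frequency update and O(log n)
    cumulative-frequency query over symbols 1..n."""

    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)  # 1-based; each slot covers a power-of-2 span

    def add(self, i, delta):
        """Add delta to the frequency of symbol i (1-based)."""
        while i <= self.n:
            self.tree[i] += delta
            i += i & (-i)  # move to the next covering node

    def cumulative(self, i):
        """Sum of the frequencies of symbols 1..i."""
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & (-i)  # strip the lowest set bit
        return s
```

An adaptive arithmetic coder uses exactly these operations: bump a symbol's count after coding it, and fetch the cumulative count below a symbol to form its coding interval.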
"Reflections on Burrows Wheeler compression", The University of Auckland, Department of Computer Science Report No 172, Jul. 2004.
[This report presents some speculations on the nature of the Burrows Wheeler transform, from analogies with the better-known techniques of signal processing. These analogies, plus considerations of symmetry and inverse operations, suggest some possible developments to the Burrows Wheeler transform for lossless compression.]
"A fast, constant-order, symbol ranking text compressor", The University of Auckland, Department of Computer Science Report No 145, Apr. 1997.
[A symbol ranking text compressor which is designed for speed rather than good compression. A software implementation achieves 3.56 bits/byte on the Calgary Corpus, running at 1 Mbyte/s on a 275 MHz DEC Alpha. A version more suited to hardware implementation compresses to 3.95 bits/byte and should be able to run at hundreds of megabytes per second.]
"Punctured Elias codes for variable-length coding of the integers", The University of Auckland, Department of Computer Science Report No 137, Dec. 1996.
[A general discussion of all of the variable length codings I have managed to discover, followed by a variant of the Elias gamma code which is useful for some text compression codings.]
Later note – these codes are not really that good....
"Symbol Ranking Text Compression", The University of Auckland, Department of Computer Science Report No 132, Jun. 1996.
[This describes an introductory implementation of a text compressor which follows directly from Shannon's 1951 paper on the information content of English text. For each context, the compressor prepares a list of symbols, ranked from most likely to least likely, and emits the position of the actual symbol in the ordered list.]
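A toy version of the ranked-list idea, using per-context move-to-front lists (a simplification of my own, not the report's compressor):

```python
from collections import defaultdict

def symbol_rank_transform(text, order=1):
    """For each symbol, emit its rank in a ranked list kept for the
    current context; the list is maintained move-to-front style, so
    recently seen symbols sit near rank 0."""
    lists = defaultdict(list)
    ranks = []
    for i, ch in enumerate(text):
        ctx = text[max(0, i - order):i]   # preceding 'order' characters
        lst = lists[ctx]
        if ch in lst:
            r = lst.index(ch)
            lst.pop(r)
        else:
            r = len(lst)                  # first appearance in this context
        lst.insert(0, ch)                 # move the symbol to the front
        ranks.append(r)
    return ranks
```

On ordinary text the emitted ranks are heavily skewed toward 0, which is what makes the final entropy coding effective.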
"Block Sorting Text Compression – Final Report", The University of Auckland, Department of Computer Science Report No 130, Apr. 1996.
[The final report on block sorting compression, extending somewhat the results of the ACSC paper.]
"Improvements to the Block Sorting Text Compression Algorithm", The University of Auckland, Department of Computer Science Report No 120, Aug. 1995.
[Further work on the block sorting compression, intermediate between TR 111 and the ACSC paper.]
"Experiments with a Block Sorting Text Compression Algorithm", The University of Auckland, Department of Computer Science Report No 111, May 1995.
[Initial results and miscellaneous thoughts on the block sorting algorithm. It includes many experimental results which are not found elsewhere.]
"A New Data Structure for Cumulative Probability Tables: an Improved Frequency-to-Symbol Algorithm", The University of Auckland, Department of Computer Science Report No 110, Feb. 1995.
[Correction to the S P & E paper above, and to Tech Rep 88.]
"Fast LZ77 string matching", (with P. Gutmann) The University of Auckland, Department of Computer Science Report No 102, Sep. 1994.
"Huffman code efficiencies for extensions of sources", The University of Auckland, Department of Computer Science Report No 89, Feb. 1994.
"A new data structure for cumulative frequency tables", The University of Auckland, Department of Computer Science Report No 88, May 1993. [Essentially the S P & E paper.]
For this I simply reproduce the abstract and introduction from my report. Read further if you are interested!
This report describes the solution of a problem which arose about 40 years ago and seemed eminently suitable for computer solution. But numerical limitations made it quite unsuitable for complete solution on the computers then available. It is only now that a solution is feasible on generally-available machines.
The problem itself involves the solution of potentials over a Hall-effect plate driven with a constant voltage instead of the more usual constant current. The change from current-drive to voltage-drive greatly reduces the temperature sensitivity of the Hall output voltage, but at the cost of non-linear response at high magnetic fields. The problem is to find the shape of the response.
This is the problem that introduced me to computing. For reasons that will become apparent, its solution languished and it is only now, 40 years later, that I have at last completed it.
In the summer of 1963–4 I was working as a casual laboratory technician in the University of Auckland Physics Department. Much of my work seemed to be with magnets for Advanced Laboratory experiments; I remember working with two large electromagnets, both weighing tens of kilogrammes. One produced a high field of about 1 Tesla (10,000 gauss for the old-fashioned) for studying the Zeeman effect. The other produced a lower field, about 0.3 T, but very uniform, for demonstrating Nuclear Magnetic Resonance.
Also working with things magnetic, I built a Hall-effect fluxmeter to measure magnetic fields. The Hall element was driven by a constant voltage, using AC at about 1 kHz, with a switched-gain amplifier and synchronous rectification. Its maximum full-scale sensitivity was about 100 microtesla, or 1 gauss, comparable to the Earth's field.
But I knew that the output saturated at high fields (say B > 0.1 T) and wondered just how the output varied with a high magnetic field. The University had then just got its first computer, an IBM 1620 with all of 20k decimal digits of storage and a floating-point multiply time of 10 ms. Murray Johns suggested that I solve my problem on the computer as it seemed a good one for numerical solution. So I taught myself Fortran (with Format).
But you may read the complete report as a PDF file....
For those who are really, really interested, here is my complete publications list –
Journal Publications
Fenwick, P.M., “PPM Compression without Escapes”, Software – Practice and Experience, Article first published online : 25 APR 2011, DOI: 10.1002/spe.1070
Fenwick, P.M., “Zeckendorf Integer Arithmetic”, Fibonacci Quarterly, Vol 41.5, Nov 2003, pp 405 – 413.
Fenwick, P.M., “Burrows Wheeler Compression with Variable Length Integer Codes”, Software – Practice and Experience, Vol 32, No 13, Nov 2002, pp 1307–1316.
Fenwick, P.M., “Variable-Length Integer Codes Based on the Goldbach Conjecture, and Other Additive Codes”, IEEE Trans. Inform. Theory, Vol 48, No 8, pp 2412–2417, Aug 2002.
Fenwick, P.M., “Fast string matching for multiple searches”, Software – Practice and Experience, Vol 31, pp 815–833 April 2001.
Fenwick, P.M., “Some Perils of Performance Prediction : a case study on Pattern Matching”, Software – Practice and Experience, Vol 31, pp 835–843 April 2001.
Fenwick, P.M., “Symbol ranking text compressors: review and implementation”, Software – Practice and Experience, Vol 28, No 5, pp 547–559, April 1998.
Fenwick, P.M., “The Burrows–Wheeler Transform for Block Sorting Text Compression — Principles and Improvements”, The Computer Journal, Vol 39, No 9, pp 731–740, 1996.
Fenwick, P.M., “Symbol Ranking Text Compression with Shannon Recodings”, J.UCS Vol 3, No 2 pp 70–85 Feb 1997. http://www.iicm.edu/jucs_3_2
Fenwick, P.M., “A New Data Structure for Cumulative Frequency Tables: an Improved Frequency-to-Symbol Algorithm”, Software – Practice and Experience, Vol 26, No 4, pp 399–400, April 1996
Fenwick, P.M., “Differential Ziv-Lempel Text Compression”, J.UCS Vol 1, No 8 pp 587–598 Aug 1995 http://www.iicm.edu/jucs_1_8
Fenwick, P.M., “Huffman code efficiencies for extensions of sources”, IEEE Trans. Comm., Vol 43 No 2 pp 163–165, Feb 1995
Fenwick, P.M., “High-radix Division with Approximate Quotient-digit Estimation”, Journal for Universal Computer Science (J.UCS) Vol 1, No 1, pp 2–22 Jan 1995 http://www.iicm.edu/jucs_1_1
Fenwick, P.M., “A new data structure for cumulative frequency tables”, Software – Practice and Experience, Vol 24, No 3, pp 327–336 Mar 1994.
Fenwick, P.M., “A New Technique for Self Organising List Searches”, Computer Journal, pp 450–454, Oct. 1991.
Fenwick, P.M., “Performance Measurements on a 39 km Extended Ethernet”, Computer Journal, pp 314–322, August 1990
Fenwick, P.M., “A Fast-Carry Adder with CMOS Transmission Gates”, Computer Journal, Vol 30, No 1, pp 77–79, 1987.
Fenwick, P.M., “Some Aspects of the Dynamic Behaviour of Hierarchical Memories”, IEEE Trans. Comput. Vol C34, pp 570–573, June 1985.
Fenwick, P.M., “A Binary Representation for Decimal Numbers”, Aust. Comp. Journal, Vol 4, pp 146–149, Nov 1972.
Fenwick, P.M., “Pulse Generator Using R.T.L. Integrated Circuits”, Radio and Electronic Engineer, Vol 37, pp 374–376, June 1969
Fenwick, P.M., “Binary Multiplication Using Overlapped Addition Cycles”, IEEE Trans. Comp. Vol 18, pp 71–74, Jan 1969.
Fenwick, P.M., J.B. Earnshaw, “A New Master-Slave Binary Scaler Design”, Nuclear Instruments and Methods, Vol 47, pp 264–272, 1967.
J.B. Earnshaw, Fenwick, P.M., “Design for a Parallel Binary Adder”, Electronic Engineering, pp 794–796, December 1966
Fenwick, P.M., J.B. Earnshaw, “A Transistor-Diode Feedback Type Logic Circuit”, Radio and Electronic Engineer, Vol 32, pp 191–199, September 1966
Conference Papers (most recent first. ® denotes a refereed paper)
® Titchener, M.R., Fenwick, P.M. and Lorenz, M., “Burrows-Wheeler — alternatives to Move-To-Front”, Data Compression Conference, DCC03, Snowbird, Utah Mar 2003.
® Titchener, M.R., Fenwick, P.M. and Chen, M.C. “Towards a Calibrated Corpus for Compression Testing”, Data Compression Conference, DCC99, Snowbird, Utah Mar 1999.
® Fenwick, P.M. and Brierley, S. “Compression of Unicode Files”, Data Compression Conference, DCC98, Snowbird, Utah Mar 1998.
® Fenwick, P.M., “Symbol Ranking text compressors”, Data Compression Conference, DCC97, Snowbird, Utah March 1997.
Fenwick, P.M., “Queue Prediction: an efficient scheduler for fast ATM cell transmission”, ATMworks workshop, University of Waikato, Feb 1997.
® Fenwick, P.M., “Queue Prediction: an efficient scheduler for fast ATM cell transmission”, ACSC’97, Sydney, Australia, Feb 1997.
® Fenwick, P.M., “Block Sorting Text Compression”, ACSC’96, Melbourne, Australia, Jan 1996
® Fenwick, P.M. and Woolford, S.A., “Vector quantisation for wavelet based image compression”, Data Compression Conference, Snowbird, Utah March 1995
Fenwick, P.M., “Huffman code efficiencies for extensions of sources”, Salodays in Auckland – Proceedings, Auckland, January 1994.
® Fenwick, P.M., Gutmann, P.C., “Table–Driven Arithmetic Coding”, Data Compression Conference, Snowbird, Utah, March 1993
® Fenwick, P.M., “Ziv-Lempel encoding with multi-bit flags”, Data Compression Conference, Snowbird, Utah, March 1993
® Fenwick, P.M., “Time-Domain Measurements on an Extended Ethernet”, Int. Symposium on Signal Processing & Applications (Gold Coast, Queensland, Aug 1990)
Murray Thompson, Peter Fenwick, Joe Lackey, Karl Nissen, Don Thielman and Zhang Huilian, “A 40 km Extended Ethernet”, Telecom. Users Assn of N.Z. (Auckland, August 1988) pp 43–50
Fenwick, P.M., “A Digital Data Network with Radio Frequency F.S.K. Transmission”, Australian Universities Network Workshop, Univ. of Sydney, Nov 1983.
® Fenwick, P.M., “A Data-Oriented Addressing System,” Proc. 5th Aust. Comp. Conf., pp 340–344 (Brisbane, May 1972)
Fenwick, P.M., “Data Acquisition and On-Line Analysis on a HP2116B Computer”, in “Data Acquisition and Real-Time Systems”, pp 42–52, Fenwick, P.M. and D.E. Lawrence (Eds), University of Queensland Press, 1971.
Fenwick, P.M., “A Small Fast Microprogram Computer,” NELCON III (Auckland, 1970)
Fenwick, P.M., “A Three-Channel Variable Delay Pulse Generator,” NELCON II (Auckland 1968)
Fenwick, P.M., “A Transistor-Diode Feedback Type Logic Circuit,” N.Z. National Electronics Conf. (NELCON I), 1966
Books and Sections of Books
Data Compression Handbook, Ed Khalid Sayood, Academic Press, 2003. Invited chapters on –
Burrows-Wheeler Compression
Symbol Ranking and ACB Compression
Unicode Compression
Universal Codes
Dictionary of Computer Science, Engineering and Technology, Ed Philip A. Laplante, CRC Press 2001. (Contributor on computer memory and input/output)
Comprehensive Dictionary of Electrical Engineering, Ed Philip A. Laplante, CRC Press/IEEE Press 1999. (Contributor on computer memory and input/output)
Data Acquisition and RealTime Systems, Fenwick, P.M. and D.E. Lawrence (Eds) University of Queensland Press, 1971.
Nonrefereed Contributions (most recent first)
Fenwick, P.M., “A picture is worth 1000 words (or 16000 bits)”, IEEE Computer, Vol 26, No 9, p 128, Sep 1993
Fenwick, P.M., “Figuring the angles”, IEEE Spectrum, Vol 24, No 1 p10, Jan 1987.
Fenwick, P.M., “Micros, Mainframes, and Computer Safety”, IEEE Computer., Vol 18, p 113, June 1985.
Fenwick, P.M., “On Hardware Design Effort”, IEEE Computer., Vol 18, p 112, March 1985.
Fenwick, P.M., “Reliability and Safety”, CACM, Vol 27, No. 12, pp 1175–1176, Dec 1984. (See also CACM Vol 28, No. 4, p 350, April 1985.)
Fenwick, P.M., “Addressing Operations for Automatic Data Structure Accessing”, Comp. Arch. News, Vol 12, No. 1, pp 44–57, March 1984.
Fenwick, P.M., “A Novel Message Control System for the B6700”, J. Users Burroughs Large Systems, Vol 8, pp 19–23, 1976.
Fenwick, P.M., “A Small Fast Microprogram Computer”, Radio, Electronics and Communications, Vol 25, No 12, pp 13–16, March 1971.
Technical Reports (most recent first)
Fenwick, P.M., “Reflections on Burrows Wheeler compression”, The University of Auckland, Department of Computer Science Report No 172, Jul. 2004.
Fenwick, P.M., “A fast, constantorder, symbol ranking text compressor”, The University of Auckland, Department of Computer Science Report No 145, April 1997
Fenwick, P.M., “Punctured Elias Codes for variable-length coding of the integers”, The University of Auckland, Department of Computer Science Report No 137, 15 pages, Dec. 1996.
Fenwick, P.M., “Symbol Ranking Text Compression”, The University of Auckland, Department of Computer Science Report No 132, June 1996.
Fenwick, P.M., “Block Sorting Text Compression — Final Report”, The University of Auckland, Department of Computer Science Report No 130, Apr. 1996.
Fenwick, P.M., “Improvements to the Block Sorting Text Compression Algorithm”, The University of Auckland, Department of Computer Science Report No 120, Aug. 1995.
Fenwick, P.M., “Experiments with a Block Sorting Text Compression Algorithm”, The University of Auckland, Department of Computer Science Report No 111, May 1995.
Fenwick, P.M., “A New Data Structure for Cumulative Probability Tables: an Improved Frequency-to-Symbol Algorithm”, The University of Auckland, Department of Computer Science Report No 110, Feb. 1995.
Fenwick, P.M. and P.C. Gutmann, “Fast LZ77 string matching”, The University of Auckland, Department of Computer Science Report No 102, Sep. 1994.
Fenwick, P.M., “Huffman code efficiencies for extensions of sources”, The University of Auckland, Department of Computer Science Report No 89, Feb. 1994.
Fenwick, P.M., “A new data structure for cumulative frequency tables”, The University of Auckland, Department of Computer Science Report No 88, Feb. 1994.
Fenwick, P.M., “The University of Auckland Student Streaming Programs”, The University of Auckland, Department of Computer Science Report No 58, Nov. 1991.
Doran R.W. , Fenwick, P.M., Zheng Qun, “Virtual Registers”, The University of Auckland, Department of Computer Science Report No 57, Oct. 1991.
Fenwick, P.M. “Hierarchical Memories”, Burroughs Corp., Mission Viejo, CA. Engineering & Technical Memorandum (ETM 544) Sep 1979
Fenwick, P.M., “6800 Microprocessor Hardware”, The University of Auckland, Computer Centre Tech. Report 14, December 1978.
Fenwick, P.M., “MUL – A Language for the 6800 Microprocessor”, The University of Auckland, Computer Centre, Tech. Report 13, November 1978.
Fenwick, P.M., “A System for Processing Student Jobs on Marksense Cards”, The University of Auckland, Computer Centre, Tech. Report 12, October 1978.
Fenwick, P.M., “Student Computing Facilities at the University of Auckland”, The University of Auckland, Computer Centre, Tech. Report 7, September 1977.
Fenwick, P.M., “Some B6700 Performance Tests”, The University of Auckland, Computer Centre, Tech. Report 5, May 1976.
Fenwick, P.M., “EAST – a High Level Assembler for the CAI Alpha16 Computer”, The University of Auckland, Computer Centre, Tech. Report 3, March 1976.
Fenwick, P.M., “A Data Link to Connect 16bit Computers”, A.N.U. Computer Centre, Tech. Report 41, August 1972
Fenwick, P.M., “A Program for the Preparation of Wiring Tables”, A.N.U. Computer Centre, Tech. Report 40, August 1972.