Sunday, May 17, 2009

diagroup was edited

Recently changed pages on diagroup

FrontPage

edited by John F. Felix

Only formatting differences

diagroup was edited

Recently changed pages on diagroup

New Septuagint Sneak Peek

edited by John F. Felix

The work on the Septuagint is intended to go beyond all previous editions. Here we attempt to reconstruct both the Greek and the Hebrew when one or the other is absent from a given textual location. In addition, this project incorporates a new Hebrew morphology, differing from Westminster and other systems, though based on rigorous attention to Masoretic Hebrew and the Koine Greek of the late centuries preceding the Common Era.
For purposes of this interlinear, the order of the Hebrew, or supposed Hebrew, is considered primary, and the order of the Greek conforms to that order. This new Septuagint, then, is intended as a useful "what-if" tool for scholarly research into textual history and problems, e.g., what if what is taken in the Greek as merely stylistic actually represented a now unknown Hebrew textual variant, and what would that look like in pointed Masoretic Hebrew? The same would apply to Alfred Rahlfs's LXX, used as the base Greek text here. We hope to add other editions to the interlinear as well.
In addition, the interlinear features the hard-to-find public domain e-text of the Septuagint-to-English translation (1808) of the American Charles Thomson, secretary of the Continental Congress from 1774 to 1789.
The New LXX and this page Copyright (c) 2009 by John F. Felix. All rights reserved.


[Real World Atheism] Humanist Network News

American Humanist Association
http://AmericanHumanist.org
Washington, DC
800.837.3792

~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~

May 6, 2009

Humanist Network News

~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~

Read all articles on one page: http://www.humaniststudies.org/enews/?id=396&showAll=true

diagroup was edited

Recently changed pages on diagroup

FrontPage

edited by John F. Felix (diagroup@)
Welcome to DIAGroup - Digital Interlinear Analysts Group wiki!
Just added:
Scriptures → Greek → LXX → Genesis → Sneak Peek
General → Science Fiction → Kubrick 2001 → More Ruminations on Kubrick's 2001 - Part 1
General → Science Fiction → Battlestar Galactica → Ruminations on the BSG series - Part 1

Thursday, May 14, 2009

diagroup was edited

Recently changed pages on diagroup

BSG Ruminations

edited by John F. Felix

There are cases, to be argued later, when the necessity of ending the show, coupled with limits imposed on their creativity, forced practical decisions within a time frame over which the producers had no control. The plot thread is, of course, the missing Cylon #7.
c) Kara learned she was not the missing Cylon model, apparently fingering Kavil as the terminator of that line. (There is some speculation that Daniel, the #7 model, may have actually survived as Kara's father, or as a virtual piano player, but I don't subscribe to this interpretation.)
Copyright © 2009 by John F. Felix. All rights reserved.

Sunday, May 10, 2009

diagroup was edited

Recently changed pages on diagroup

More Ruminations on Kubrick's 2001 - Part 1

added by John F. Felix
Added article page

diagroup was edited

Recently changed pages on diagroup



    TRC Mark Sneak Peek

    edited by John F. Felix

    About the Textus Receptus Criticalis (TRC)
    From here you can access the Textus Receptus Criticalis:

    Friday, May 08, 2009

    diagroup was edited

    Recently changed pages on diagroup

    BSG Ruminations

    edited by John F. Felix
    Ruminations on Battlestar Galactica - Part 1
    I. Introduction
    the finale is, boy did we overthink this series
    What started my reading and writing about the series was

    John F. Felix has sent you a link to "BSG Ruminations" on diagroup

    John F. Felix has sent you a link to BSG Ruminations.
    Update to BSG Ruminations - expanded with new material.

    Thursday, May 07, 2009

    diagroup was edited

    BSG01.htm

    BSG01.htm - this link is no longer valid

    Wednesday, May 06, 2009

    About the Textus Receptus Criticalis (TRC).

    A little bit of info on the TRC.

    The TRC is an attempt at a new collation of at least the synoptic gospels, in order to provide functionality as an electronic tool for textual reconstruction, concordance and other applications, as an aid to further research on the so-called "synoptic problem." In addition, it is laid out in a database format for importation into Jet DBMS, for the express purpose of experimentation with sophisticated Structured Query Language queries.
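The database layout described above can be illustrated with a small sketch. The snippet below uses Python's built-in sqlite3 in place of Jet; the table name, column names, and sample words are all hypothetical illustrations of the one-row-per-word-position idea, not the actual TRC schema.

```python
import sqlite3

# Toy sketch of the TRC layout: one row per word position, one column
# per collated witness. Table name, column names, and sample words are
# hypothetical illustrations, not the actual TRC schema.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE trc (
        pos        INTEGER PRIMARY KEY,  -- arbitrary word order (NA27/UBS4 based)
        na27       TEXT,
        sinaiticus TEXT,
        vaticanus  TEXT
    )
""")
conn.executemany(
    "INSERT INTO trc VALUES (?, ?, ?, ?)",
    [
        (1, "βιβλος", "βιβλος", "βιβλος"),
        (2, "γενεσεως", "γενεσεως", "γενεσεως"),
        (3, "ιησου", "ιυ", "ιησου"),  # an invented divergence at position 3
    ],
)

# The kind of query the SQL layout is meant to support:
# list every position where the collated texts disagree.
rows = conn.execute(
    "SELECT pos, na27, sinaiticus, vaticanus FROM trc"
    " WHERE na27 <> sinaiticus OR na27 <> vaticanus"
).fetchall()
print(rows)  # -> [(3, 'ιησου', 'ιυ', 'ιησου')]
```

Once the corpus is in this shape, more sophisticated queries (agreements between pairs of witnesses, frequency of divergence per pericope, and so on) are a matter of adding WHERE clauses and GROUP BY.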

    The basic notion that started it was the insight that every Greek text since Erasmus has been a collation of manuscripts. To best collate the Greek scriptures, then, it seemed logical to collate the collations, since each collator produced what he believed to be the Greek text closest to a hypothetical original; later, however, actual codices and papyri were included in our new collation.

    The native format is MS Excel, mainly for purposes of concordancing and proofreading/correction. Therefore, an arbitrary order for the text was decided upon first, based rigorously on the NA27 and UBS4 word order. All other collated texts are rearranged to fit this order of words. The texts of such codices as Sinaiticus, Alexandrinus, Vaticanus (Bezae is not shown), etc., should therefore be seen as being in an arbitrary order, and definitely not as a guide to what the manuscripts actually record.

    The reason this order was established first is Excel's "auto-filter" feature, which readers can research for themselves. In order to align all the Greek and other texts across each column, certain principles were devised. The first principle is that each text must be normalized to minimize any differences between the texts, so all diacritics that may have existed in the e-texts have been removed (and the entire corpus reduced to lowercase), and all texts are maintained in Unicode. Greek words that occur at the same location but may differ are aligned by whatever method reduces dissimilarities, e.g., if one text uses one conjunction and another text a different conjunction, then the words are aligned, and auto-filtering can show the similarities and differences for each word in each place it occurs in the corpus. There is also the concept of conceptual similarity, which I won't delve into in this blog post, but will expand upon in an article posted to a wiki page at a later date.
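The normalization step described above (strip diacritics, reduce to lowercase, keep everything in Unicode) can be sketched in a few lines of Python with the standard unicodedata module; the function name and example word below are my own illustration, not part of the TRC tooling.

```python
import unicodedata

def normalize(word: str) -> str:
    """Normalize one word as described above: lowercase it, then strip
    all diacritics (accents, breathings, iota subscripts) by removing
    combining marks after NFD decomposition, keeping plain Unicode."""
    decomposed = unicodedata.normalize("NFD", word.lower())
    stripped = "".join(ch for ch in decomposed if not unicodedata.combining(ch))
    return unicodedata.normalize("NFC", stripped)

# e.g. an accented, capitalized form reduces to its bare letters
print(normalize("Λόγος"))  # -> λογος
```

With every witness passed through a function like this, two cells match exactly whenever the underlying letters match, which is what makes the column-by-column auto-filter comparison meaningful.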

    Finally, the intent is to produce a new Greek text which belongs to DIAGroup, and can be filtered and otherwise manipulated to test whatever text critical principles we wish to explore. The final product, or products, will not reflect more than the standard reasoned eclecticism principles, but actual textual choices will be made when producing the desired collations, based on our collective level of scholarship. If anyone should be interested in participating with DIAGroup, the blogs and wiki all have several mechanisms through which the interested can contact us.

    John F. Felix has sent you a link to "TRC Mark Sneak Peek" on diagroup

    John F. Felix has sent you a link to TRC Mark Sneak Peek.

    Sneak peek updated Wed 5/6/2009 11:30 AM CDST!

    View the page now on Dispraxis.

    Sunday, October 19, 2008

    Libronix search short-coming.

    Morris Proctor offered a search tip on his Logos blog. One of the comments was that you would have to either locate the word to right-click it, or use Basic Search. I took a look at the DLS Object Model documentation (it must be purchased), and found that Basic Search is the only way to get the results the commenter wanted. Why? Because Libronix does not fully support JScript, only a subset of its functionality. For example, it should be possible to enter user code and assign it to a button, but unfortunately Libronix does not allow you to prompt the user for input. That's why Mr. Proctor did not offer a handy script. You can display a message box, but only the click events are passed back to the method, e.g., OK, Cancel, Yes or No. So, my advice would be to create a toolbar button to run Basic Search (and assign a shortcut keystroke). That's as close as anyone will get with the current object model implementation. :(

    Friday, May 11, 2007

    Creating a Parallel Corpus from the "Book of 2000 Tongues"

    My discussion of a computational linguistics approach to the automatic tagging of Bible texts with Strong's numbers, posted at www.ancientrootsbible.com, is no longer available. I may review it for re-posting on the DIAGroup wiki.

    In support of, and in an effort to refine, my idea posted at the site above, I was surfing and came across the article at the head of this post, by Philip Resnik, Mari Broman Olsen, and Mona Diab of the University of Maryland Department of Linguistics and Institute for Advanced Computer Studies. Synchronicity or serendipity?

    Friday, May 04, 2007

    If you think you know it all...

    Some WHAT’S KNOWetal™? observations.

    "School-girl" writing?

    One reason alone sinks the idea of colored text, mentioned in Mr. Cross' own paper: it discriminates against a minority. Besides, colored text cannot be controlled, since the colors displayed are ultimately user-configurable. There are even cultural biases against writing in certain colors! Ultimately one would end up needing a standards body to decide what color schemes would be acceptable for certification.

    A whiter shade of gray?

    Subtle shades of highlighting behind text would probably work better, but suffer from some of the same drawbacks. Punctuation would soon be overused and distracting. Ms. Werner mentions "see behind," so another alternative may be that text or highlighting behind the current text can be turned on or off.

    Leaving The (confidence) View?

    Mr. Cross tested a system whereby the user must consciously select the "confidence view," but I would expect: a) rampant non-use, and b) the simple fact that sites such as Wikipedia et al. can never have High Confidence, nor should they. If I were a teacher and a student submitted a report based on articles from Encyclopedia Titannica, rather than on agreed-upon authoritative sources, that student would be engaged in a titanic struggle to raise his or her grade. The purpose of, and reason for the success of, this paradigm has little to do with knowledge, and a lot to do with getting noticed.

    Old but still true?

    The author gives some general criteria for determining confidence levels, distilled down to their essence: peer review, and, since "well-documented" is contrasted with "brand new," I would consider another criterion to be age, implied also in Mr. Cross' paper, along with whether something has sufficient documentation to be considered pretty well true. The last criterion carries the most weight in my opinion.

    Cyclical time is merely human, the arrow of time is cosmic...?

    One objection is that there are such things as very weighty opinions, and even if Stephen Hawking voluntarily submitted a paper for certification, he might think that more of his "opinions" should be marked "High Confidence" out of the gate. Also, in order for the Ribbon to mean anything, it would have to be a widely used and sought-after standard. The majority of content creators are more interested in profit than reliability, and there are other social factors as well when you consider why someone would create a wiki or a blog, etc. Confidence should probably not be applied to the Web in general. Although I might want one Web site of mine certified, I would not want my blog rated at all. Perhaps no one would read it or comment on it, like they do now. I hope I am not misrepresenting or misunderstanding the intention.

    Do search engines dream of artificial sheep?

    Finally, there should be a way to quantify "confidence" mathematically so that confidence can be automatically generated. Most people now use search engines as portals, so these engines should have a mathematical algorithm to give a confidence score rather than the "page rank" popularity contest. Here we are on the border of AI, I think, so although confidence is a worthy goal, my opinion is that basic changes are needed to the very structure of communication, computing and how people, in fact, are, to make this a reality.